## About the Book

# Matrix Mathematics: Theory, Facts, and Formulas - Second Edition

## Description

When first published in 2005, *Matrix Mathematics* quickly became the essential reference book for users of matrices in all branches of engineering, science, and applied mathematics. In this fully updated and expanded edition, the author brings together the latest results on matrix theory to make this the most complete, current, and easy-to-use book on matrices.

Each chapter describes relevant background theory followed by specialized results. Hundreds of identities, inequalities, and matrix facts are stated clearly and rigorously, with cross references, citations to the literature, and illuminating remarks. Beginning with preliminaries on sets, functions, and relations, *Matrix Mathematics* covers all of the major topics in matrix theory, including matrix transformations; polynomial matrices; matrix decompositions; generalized inverses; Kronecker and Schur algebra; positive-semidefinite matrices; vector and matrix norms; the matrix exponential and stability theory; and linear systems and control theory. Also included are a detailed list of symbols, a summary of notation and conventions, an extensive bibliography and author index with page references, and an exhaustive subject index. This significantly expanded edition of *Matrix Mathematics* features a wealth of new material on graphs, scalar identities and inequalities, alternative partial orderings, matrix pencils, finite groups, zeros of multivariable transfer functions, roots of polynomials, convex functions, and matrix norms.

- Covers hundreds of important and useful results on matrix theory, many never before available in any book
- Provides a list of symbols and a summary of conventions for easy use
- Includes an extensive collection of scalar identities and inequalities
- Features a detailed bibliography and author index with page references
- Includes an exhaustive subject index with cross-referencing

- Publisher: Princeton University Press
- Published: Jul 6, 2009
- ISBN: 9781400833344
- Format: Book

## Book Preview

### Matrix Mathematics - Dennis S. Bernstein

**Matrix Mathematics**

*Theory, Facts, and Formulas*

Dennis S. Bernstein

Copyright © 2009 by Princeton University Press

Published by Princeton University Press,

41 William Street, Princeton, New Jersey 08540

In the United Kingdom: Princeton University Press,

6 Oxford Street, Woodstock, Oxfordshire, OX20 1TW

All Rights Reserved

Library of Congress Cataloging-in-Publication Data

Bernstein, Dennis S., 1954–

Matrix mathematics: theory, facts, and formulas/Dennis S. Bernstein. –2nd ed.

p. cm.

Includes bibliographical references and index.

ISBN 978-0-691-13287-7 (hardcover: alk. paper)

ISBN 978-0-691-14039-1 (pbk.: alk. paper)

1. Matrices. 2. Linear systems. I. Title.

QA188.B475 2008

512.9’434—dc22

2008036257

British Library Cataloging-in-Publication Data is available

This book has been composed in Computer Modern and Helvetica.

The publisher would like to acknowledge the author of this volume for providing the camera-ready copy from which this book was printed.

Printed on acid-free paper. ∞

press.princeton.edu

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

*To the memory of my parents*,

*Irma Shorrie (Hirshon) Bernstein and Milton Bernstein*,

*whose love and guidance are everlasting*

. . . vessels, unable to contain the great light flowing into them, shatter and break. . . . the remains of the broken vessels fall . . . into the lowest world, where they remain scattered and hidden

Thor . . . placed the horn to his lips . . . He drank with all his might and kept drinking as long as ever he was able; when he paused to look, he could see that the level had sunk a little, . . . for the other end lay out in the ocean itself.

*Contents*

**Preface to the Second Edition**

**Preface to the First Edition**

**Special Symbols**

**Conventions, Notation, and Terminology**

**1. Preliminaries**

**1.1 Logic**

**1.2 Sets**

**1.3 Integers, Real Numbers, and Complex Numbers**

**1.4 Functions**

**1.5 Relations**

**1.6 Graphs**

**1.7 Facts on Logic, Sets, Functions, and Relations**

**1.8 Facts on Graphs**

**1.9 Facts on Binomial Identities and Sums**

**1.10 Facts on Convex Functions**

**1.11 Facts on Scalar Identities and Inequalities in One Variable**

**1.12 Facts on Scalar Identities and Inequalities in Two Variables**

**1.13 Facts on Scalar Identities and Inequalities in Three Variables**

**1.14 Facts on Scalar Identities and Inequalities in Four Variables**

**1.15 Facts on Scalar Identities and Inequalities in Six Variables**

**1.16 Facts on Scalar Identities and Inequalities in Eight Variables**

**1.17 Facts on Scalar Identities and Inequalities in n Variables**

**1.18 Facts on Scalar Identities and Inequalities in 2n Variables**

**1.19 Facts on Scalar Identities and Inequalities in 3n Variables**

**1.20 Facts on Scalar Identities and Inequalities in Complex Variables**

**1.21 Facts on Trigonometric and Hyperbolic Identities**

**1.22 Notes**

**2. Basic Matrix Properties**

**2.1 Matrix Algebra**

**2.2 Transpose and Inner Product**

**2.3 Convex Sets, Cones, and Subspaces**

**2.4 Range and Null Space**

**2.5 Rank and Defect**

**2.6 Invertibility**

**2.7 The Determinant**

**2.8 Partitioned Matrices**

**2.9 Facts on Polars, Cones, Dual Cones, Convex Hulls, and Subspaces**

**2.10 Facts on Range, Null Space, Rank, and Defect**

**2.11 Facts on the Range, Rank, Null Space, and Defect of Partitioned Matrices**

**2.12 Facts on the Inner Product, Outer Product, Trace, and Matrix Powers**

**2.13 Facts on the Determinant**

**2.14 Facts on the Determinant of Partitioned Matrices**

**2.15 Facts on Left and Right Inverses**

**2.16 Facts on the Adjugate and Inverses**

**2.17 Facts on the Inverse of Partitioned Matrices**

**2.18 Facts on Commutators**

**2.19 Facts on Complex Matrices**

**2.20 Facts on Geometry**

**2.21 Facts on Majorization**

**2.22 Notes**

**3. Matrix Classes and Transformations**

**3.1 Matrix Classes**

**3.2 Matrices Related to Graphs**

**3.3 Lie Algebras and Groups**

**3.4 Matrix Transformations**

**3.5 Projectors, Idempotent Matrices, and Subspaces**

**3.6 Facts on Group-Invertible and Range-Hermitian Matrices**

**3.7 Facts on Normal, Hermitian, and Skew-Hermitian Matrices**

**3.8 Facts on Commutators**

**3.9 Facts on Linear Interpolation**

**3.10 Facts on the Cross Product**

**3.11 Facts on Unitary and Shifted-Unitary Matrices**

**3.12 Facts on Idempotent Matrices**

**3.13 Facts on Projectors**

**3.14 Facts on Reflectors**

**3.15 Facts on Involutory Matrices**

**3.16 Facts on Tripotent Matrices**

**3.17 Facts on Nilpotent Matrices**

**3.18 Facts on Hankel and Toeplitz Matrices**

**3.19 Facts on Tridiagonal Matrices**

**3.20 Facts on Hamiltonian and Symplectic Matrices**

**3.21 Facts on Matrices Related to Graphs**

**3.22 Facts on Triangular, Irreducible, Cauchy, Dissipative, Contractive, and Centrosymmetric Matrices**

**3.23 Facts on Groups**

**3.24 Facts on Quaternions**

**3.25 Notes**

**4. Polynomial Matrices and Rational Transfer Functions**

**4.1 Polynomials**

**4.2 Polynomial Matrices**

**4.3 The Smith Decomposition and Similarity Invariants**

**4.4 Eigenvalues**

**4.5 Eigenvectors**

**4.6 The Minimal Polynomial**

**4.7 Rational Transfer Functions and the Smith-McMillan Decomposition**

**4.8 Facts on Polynomials and Rational Functions**

**4.9 Facts on the Characteristic and Minimal Polynomials**

**4.10 Facts on the Spectrum**

**4.11 Facts on Graphs and Nonnegative Matrices**

**4.12 Notes**

**5. Matrix Decompositions**

**5.1 Smith Form**

**5.2 Multicompanion Form**

**5.3 Hypercompanion Form and Jordan Form**

**5.4 Schur Decomposition**

**5.5 Eigenstructure Properties**

**5.6 Singular Value Decomposition**

**5.7 Pencils and the Kronecker Canonical Form**

**5.8 Facts on the Inertia**

**5.9 Facts on Matrix Transformations for One Matrix**

**5.10 Facts on Matrix Transformations for Two or More Matrices**

**5.11 Facts on Eigenvalues and Singular Values for One Matrix**

**5.12 Facts on Eigenvalues and Singular Values for Two or More Matrices**

**5.13 Facts on Matrix Pencils**

**5.14 Facts on Matrix Eigenstructure**

**5.15 Facts on Matrix Factorizations**

**5.16 Facts on Companion, Vandermonde, Circulant, and Hadamard Matrices**

**5.17 Facts on Simultaneous Transformations**

**5.18 Facts on the Polar Decomposition**

**5.19 Facts on Additive Decompositions**

**5.20 Notes**

**6. Generalized Inverses**

**6.1 Moore-Penrose Generalized Inverse**

**6.2 Drazin Generalized Inverse**

**6.3 Facts on the Moore-Penrose Generalized Inverse for One Matrix**

**6.4 Facts on the Moore-Penrose Generalized Inverse for Two or More Matrices**

**6.5 Facts on the Moore-Penrose Generalized Inverse for Partitioned Matrices**

**6.6 Facts on the Drazin and Group Generalized Inverses**

**6.7 Notes**

**7. Kronecker and Schur Algebra**

**7.1 Kronecker Product**

**7.2 Kronecker Sum and Linear Matrix Equations**

**7.3 Schur Product**

**7.4 Facts on the Kronecker Product**

**7.5 Facts on the Kronecker Sum**

**7.6 Facts on the Schur Product**

**7.7 Notes**

**8. Positive-Semidefinite Matrices**

**8.1 Positive-Semidefinite and Positive-Definite Orderings**

**8.2 Submatrices**

**8.3 Simultaneous Diagonalization**

**8.4 Eigenvalue Inequalities**

**8.5 Exponential, Square Root, and Logarithm of Hermitian Matrices**

**8.6 Matrix Inequalities**

**8.7 Facts on Range and Rank**

**8.8 Facts on Structured Positive-Semidefinite Matrices**

**8.9 Facts on Identities and Inequalities for One Matrix**

**8.10 Facts on Identities and Inequalities for Two or More Matrices**

**8.11 Facts on Identities and Inequalities for Partitioned Matrices**

**8.12 Facts on the Trace**

**8.13 Facts on the Determinant**

**8.14 Facts on Convex Sets and Convex Functions**

**8.15 Facts on Quadratic Forms**

**8.16 Facts on the Gaussian Density**

**8.17 Facts on Simultaneous Diagonalization**

**8.18 Facts on Eigenvalues and Singular Values for One Matrix**

**8.19 Facts on Eigenvalues and Singular Values for Two or More Matrices**

**8.20 Facts on Alternative Partial Orderings**

**8.21 Facts on Generalized Inverses**

**8.22 Facts on the Kronecker and Schur Products**

**8.23 Notes**

**9. Norms**

**9.1 Vector Norms**

**9.2 Matrix Norms**

**9.3 Compatible Norms**

**9.4 Induced Norms**

**9.5 Induced Lower Bound**

**9.6 Singular Value Inequalities**

**9.7 Facts on Vector Norms**

**9.8 Facts on Matrix Norms for One Matrix**

**9.9 Facts on Matrix Norms for Two or More Matrices**

**9.10 Facts on Matrix Norms for Partitioned Matrices**

**9.11 Facts on Matrix Norms and Eigenvalues for One Matrix**

**9.12 Facts on Matrix Norms and Eigenvalues for Two or More Matrices**

**9.13 Facts on Matrix Norms and Singular Values for One Matrix**

**9.14 Facts on Matrix Norms and Singular Values for Two or More Matrices**

**9.15 Facts on Linear Equations and Least Squares**

**9.16 Notes**

**10. Functions of Matrices and Their Derivatives**

**10.1 Open Sets and Closed Sets**

**10.2 Limits**

**10.3 Continuity**

**10.4 Derivatives**

**10.5 Functions of a Matrix**

**10.6 Matrix Square Root and Matrix Sign Functions**

**10.7 Matrix Derivatives**

**10.8 Facts on One Set**

**10.9 Facts on Two or More Sets**

**10.10 Facts on Matrix Functions**

**10.11 Facts on Functions**

**10.12 Facts on Derivatives**

**10.13 Facts on Infinite Series**

**10.14 Notes**

**11. The Matrix Exponential and Stability Theory**

**11.1 Definition of the Matrix Exponential**

**11.2 Structure of the Matrix Exponential**

**11.3 Explicit Expressions**

**11.4 Matrix Logarithms**

**11.5 Principal Logarithm**

**11.6 Lie Groups**

**11.7 Lyapunov Stability Theory**

**11.8 Linear Stability Theory**

**11.9 The Lyapunov Equation**

**11.10 Discrete-Time Stability Theory**

**11.11 Facts on Matrix Exponential Formulas**

**11.12 Facts on the Matrix Sine and Cosine**

**11.13 Facts on the Matrix Exponential for One Matrix**

**11.14 Facts on the Matrix Exponential for Two or More Matrices**

**11.15 Facts on the Matrix Exponential and Eigenvalues, Singular Values, and Norms for One Matrix**

**11.16 Facts on the Matrix Exponential and Eigenvalues, Singular Values, and Norms for Two or More Matrices**

**11.17 Facts on Stable Polynomials**

**11.18 Facts on Stable Matrices**

**11.19 Facts on Almost Nonnegative Matrices**

**11.20 Facts on Discrete-Time-Stable Polynomials**

**11.21 Facts on Discrete-Time-Stable Matrices**

**11.22 Facts on Lie Groups**

**11.23 Facts on Subspace Decomposition**

**11.24 Notes**

**12. Linear Systems and Control Theory**

**12.1 State Space and Transfer Function Models**

**12.2 Laplace Transform Analysis**

**12.3 The Unobservable Subspace and Observability**

**12.4 Observable Asymptotic Stability**

**12.5 Detectability**

**12.6 The Controllable Subspace and Controllability**

**12.7 Controllable Asymptotic Stability**

**12.8 Stabilizability**

**12.9 Realization Theory**

**12.10 Zeros**

**12.11 H2 System Norm**

**12.12 Harmonic Steady-State Response**

**12.13 System Interconnections**

**12.14 Standard Control Problem**

**12.15 Linear-Quadratic Control**

**12.16 Solutions of the Riccati Equation**

**12.17 The Stabilizing Solution of the Riccati Equation**

**12.18 The Maximal Solution of the Riccati Equation**

**12.19 Positive-Semidefinite and Positive-Definite Solutions of the Riccati Equation**

**12.20 Facts on Stability, Observability, and Controllability**

**12.21 Facts on the Lyapunov Equation and Inertia**

**12.22 Facts on Realizations and the H2 System Norm**

**12.23 Facts on the Riccati Equation**

**12.24 Notes**

**Bibliography**

**Author Index**

**Index**

*Preface to the Second Edition*

This second edition of *Matrix Mathematics* represents a major expansion of the original work. While the total number of pages increased 57%, from 752 to 1181, the effective increase is even greater, since this edition is typeset in a smaller font to keep the physical size manageable.

The second edition expands on the first edition in several ways. For example, the new version includes material on graphs (developed within the framework of relations and partially ordered sets), as well as alternative partial orderings of matrices, such as rank subtractivity, star, and generalized Löwner. This edition also includes additional material on the Kronecker canonical form and matrix pencils; matrix representations of finite groups; zeros of multi-input, multi-output transfer functions; equalities and inequalities for real and complex numbers; bounds on the roots of polynomials; convex functions; and vector and matrix norms.

The additional material, together with works published after the first edition, increased the number of cited works from 820 to 1540, an increase of 87%. To increase the utility of the bibliography, this edition uses the back-reference feature of LaTeX, which indicates where each reference is cited in the text. As in the first edition, the second edition includes an author index. The expansion of the first edition resulted in an increase in the size of the index from 108 pages to 161 pages.

The first edition included 57 problems, while the current edition has 74. These problems represent extensions or generalizations of known results, sometimes motivated by gaps in the literature.

In this edition, I have attempted to correct all errors that appeared in the first edition. As with the first edition, readers are encouraged to contact me about errors or omissions in the current edition, which I will periodically update on my home page.

### Acknowledgments

I am grateful to many individuals who kindly provided advice and material for this edition. Some readers alerted me to errors, while others suggested additional material. In other cases I sought out researchers to help me understand the precise nature of interesting results. At the risk of omitting those who were helpful, I am pleased to acknowledge the following: Mark Balas, Jason Bernstein, Sanjay Bhat, Gerald Bourgeois, Adam Brzezinski, Francesco Bullo, Vijay Chellaboina, Naveena Crasta, Anthony D’Amato, Sever Dragomir, Bojana Drincic, Harry Dym, Matthew Fledderjohn, Haoyun Fu, Masatoshi Fujii, Takayuki Furuta, Steven Gillijns, Rishi Graham, Wassim Haddad, Nicholas Higham, Diederich Hinrichsen, Matthew Holzel, Qing Hui, Masatoshi Ito, Iman Izadi, Pierre Kabamba, Marthe Kassouf, Christopher King, Siddharth Kirtikar, Michael Margaliot, Roy Mathias, Peter Mercer, Alex Olshevsky, Paul Otanez, Bela Palancz, Harish Palanthandalam-Madapusi, Fotios Paliogiannis, Isaiah Pantelis, Wei Ren, Ricardo Sanfelice, Mario Santillo, Amit Sanyal, Christoph Schmoeger, Demetrios Serakos, Wasin So, Robert Sullivan, Dogan Sumer, Yongge Tian, Götz Trenkler, Panagiotis Tsiotras, Takeaki Yamazaki, Jin Yan, Masahiro Yanagida, Vera Zeidan, Chenwei Zhang, Fuzhen Zhang, and Qing-Chang Zhong.

As with the first edition, I am especially indebted to my family, who endured four more years of my consistent absence to make this revision a reality. It is clear that any attempt to fully embrace the enormous body of mathematics known as matrix theory is a never-ending task. After devoting more than two decades to this project of reassembling the scattered shards, I remain, like Thor, barely able to perceive a dent in the vast knowledge that resides in the hundreds of thousands of pages devoted to this fascinating and incredibly useful subject. Yet it is my hope that this book will prove to be valuable to everyone who uses matrices, and will inspire interest in a mathematical construction whose secrets and mysteries have no bounds.

*Preface to the First Edition*

The idea for this book began with the realization that at the heart of the solution to many problems in science, mathematics, and engineering often lies a matrix fact, that is, an identity, inequality, or property of matrices that is crucial to the solution of the problem. Although there are numerous excellent books on linear algebra and matrix theory, no one book contains all or even most of the vast number of matrix facts that appear throughout the scientific, mathematical, and engineering literature. This book is an attempt to organize many of these facts into a reference source for users of matrix theory in diverse application areas.

Viewed as an extension of scalar mathematics, matrix mathematics provides the means to manipulate and analyze multidimensional quantities. Matrix mathematics thus provides powerful tools for a broad range of problems in science and engineering. For example, the matrix-based analysis of systems of ordinary differential equations accounts for interaction among all of the state variables. The discretization of partial differential equations by means of finite differences and finite elements yields linear algebraic or differential equations whose matrix structure reflects the nature of physical solutions [1269]. Multivariate probability theory and statistical analysis use matrix methods to represent probability distributions, to compute moments, and to perform linear regression for data analysis [517, 621, 671, 720, 972, 1212]. The study of linear differential equations [709, 710, 746] depends heavily on matrix analysis, while linear systems and control theory are matrix-intensive areas of engineering [3, 68, 146, 150, 319, 321, 356, 379, 381, 456, 515, 631, 764, 877, 890, 960, 1121, 1174, 1182, 1228, 1232, 1243, 1368, 1402, 1490, 1535]. In addition, matrices are widely used in rigid body dynamics [28, 745, 753, 811, 829, 874, 995, 1053, 1095, 1096, 1216, 1231, 1253, 1384], structural mechanics [888, 1015, 1127], computational fluid dynamics [313, 492, 1460], circuit theory [32], queuing and stochastic systems [659, 944, 1061], econometrics [413, 973, 1146], geodesy [1272], game theory [229, 924, 1264], computer graphics [65, 511], computer vision [966], optimization [259, 382, 978], signal processing [720, 1193, 1395], classical and quantum information theory [361, 720, 1069, 1113], communications systems [800, 801], statistics [594, 671, 973, 1146, 1208], statistical mechanics [18, 163, 164, 1406], demography [305, 828], combinatorics, networks, and graph theory [132, 169, 183, 227, 239, 270, 272, 275, 282, 310, 311, 343, 371, 415, 438, 494, 514, 571, 616, 654, 720, 868, 945, 956, 1172, 1421], optics [563, 677, 820], dimensional analysis [658, 1283], and number theory [865].

In all applications involving matrices, computational techniques are essential for obtaining numerical solutions. The development of efficient and reliable algorithms for matrix computations is therefore an important area of research that has been extensively developed [98, 312, 404, 583, 699, 701, 740, 774, 1255, 1256, 1258, 1260, 1347, 1403, 1461, 1465, 1467, 1513]. To facilitate the solution of matrix problems, entire computer packages have been developed using the language of matrices. However, this book is concerned with the analytical properties of matrices rather than their computational aspects.

This book encompasses a broad range of fundamental questions in matrix theory which, in many cases, can be viewed as extensions of related questions in scalar mathematics. A few such questions follow.

What are the basic properties of matrices? How can matrices be characterized, classified, and quantified?

How can a matrix be decomposed into simpler matrices? A matrix decomposition may involve addition, multiplication, and partition. Decomposing a matrix into its fundamental components provides insight into its algebraic and geometric properties. For example, the polar decomposition states that every square matrix can be written as the product of a rotation and a dilation, analogous to the polar representation of a complex number.
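
As an illustration, here is a minimal numerical sketch (my example, using NumPy; not code from the book) of the polar decomposition computed from the singular value decomposition: the unitary factor plays the role of the rotation and the positive-semidefinite factor the dilation.

```python
# Polar decomposition A = U P via the SVD A = W diag(s) V*:
# U = W V* is unitary and P = V diag(s) V* is positive semidefinite.
import numpy as np

def polar_decomposition(A):
    """Return (U, P) with A = U @ P, U unitary, P positive semidefinite."""
    W, s, Vh = np.linalg.svd(A)
    U = W @ Vh
    P = Vh.conj().T @ np.diag(s) @ Vh
    return U, P

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
U, P = polar_decomposition(A)
assert np.allclose(A, U @ P)                    # A = U P
assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)  # P is positive semidefinite
```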

Given a pair of matrices having certain properties, what can be inferred about the sum, product, and concatenation of these matrices? In particular, if a matrix has a given property, to what extent does that property change or remain unchanged if the matrix is perturbed by another matrix of a certain type by means of addition, multiplication, or concatenation? For example, if a matrix is nonsingular, how large can an additive perturbation to that matrix be without the sum becoming singular?
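
For the nonsingularity question, a standard bound (not stated in this preface, so treat it as an assumption here) is that A + E remains nonsingular whenever the spectral norm of E is less than the smallest singular value of A. The sketch below, with matrices of my own choosing, checks this numerically and also exhibits a smallest singular perturbation.

```python
# If A is nonsingular and ||E||_2 < sigma_min(A), then A + E is nonsingular;
# a perturbation of norm exactly sigma_min(A) can destroy invertibility.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]  # smallest singular value

rng = np.random.default_rng(0)
for _ in range(1000):
    E = rng.standard_normal((2, 2))
    E *= 0.99 * sigma_min / np.linalg.norm(E, 2)    # force ||E||_2 < sigma_min(A)
    assert np.linalg.matrix_rank(A + E) == 2        # still nonsingular

# A rank-one perturbation of norm sigma_min(A) that makes A singular:
U, s, Vh = np.linalg.svd(A)
E_opt = -s[-1] * np.outer(U[:, -1], Vh[-1, :])
print(np.linalg.det(A + E_opt))                     # ~0: A + E_opt is singular
```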

How can properties of a matrix be determined by means of simple operations? For example, how can the location of the eigenvalues of a matrix be estimated directly in terms of the entries of the matrix?
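
One classical answer to this question is Gershgorin's circle theorem (my choice of illustration; the book treats eigenvalue-location results in detail). The sketch below verifies it numerically: every eigenvalue lies in a disk centered at a diagonal entry with radius equal to the absolute sum of the off-diagonal entries of that row.

```python
# Gershgorin disks: every eigenvalue of A lies in at least one disk
# centered at A[i, i] with radius sum over j != i of |A[i, j]|.
import numpy as np

A = np.array([[ 4.0,  1.0, 0.5],
              [ 0.2, -2.0, 0.3],
              [ 0.1,  0.4, 1.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)

for lam in np.linalg.eigvals(A):
    # each eigenvalue lies in at least one Gershgorin disk
    assert np.any(np.abs(lam - centers) <= radii + 1e-12)
```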

To what extent do matrices satisfy the formal properties of the real numbers? For example, while 0 ≤ *a* ≤ *b* implies that *a^r* ≤ *b^r* for real numbers *a*, *b* and a positive integer *r*, when does 0 ≤ *A* ≤ *B* imply *A^r* ≤ *B^r* for positive-semidefinite matrices *A* and *B* with the positive-semidefinite ordering?
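
The sketch below checks a standard counterexample (my choice of matrices, not taken from the book): 0 ≤ A ≤ B does not imply A² ≤ B², although the ordering is respected by the square root, as the Löwner-Heinz theorem guarantees for powers r between 0 and 1.

```python
# 0 <= A <= B in the positive-semidefinite ordering does not imply A^2 <= B^2.
import numpy as np

def is_psd(M):
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))

def sqrtm_psd(M):
    """Principal square root of a positive-semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

print(is_psd(A), is_psd(B - A))             # True True: 0 <= A <= B
print(is_psd(B @ B - A @ A))                # False: A^2 <= B^2 fails
print(is_psd(sqrtm_psd(B) - sqrtm_psd(A)))  # True: r = 1/2 preserves the order
```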

Questions of these types have occupied matrix theorists for at least a century, with motivation from diverse applications. The existing scope and depth of knowledge are enormous. Taken together, this body of knowledge provides a powerful framework for developing and analyzing models for scientific and engineering applications.

This book is intended to be useful to at least four groups of readers. Since linear algebra is a standard course in the mathematical sciences and engineering, graduate students in these fields can use this book to expand the scope of their linear algebra text. For instructors, many of the facts can be used as exercises to augment standard material in matrix courses. For researchers in the mathematical sciences, including statistics, physics, and engineering, this book can be used as a general reference on matrix theory. Finally, for users of matrices in the applied sciences, this book will provide access to a large body of results in matrix theory. By collecting these results in a single source, it is my hope that this book will prove to be convenient and useful for a broad range of applications. The material in this book is thus intended to complement the large number of classical and modern texts and reference works on linear algebra and matrix theory [11, 384, 516, 554, 555, 572, 600, 719, 812, 897, 964, 981, 988, 1033, 1072, 1078, 1125, 1172, 1225, 1269].

After a review of mathematical preliminaries in Chapter 1, fundamental properties of matrices are described in Chapter 2. Chapter 3 summarizes the major classes of matrices and various matrix transformations. In Chapter 4 we turn to polynomial and rational matrices, whose basic properties are essential for understanding the structure of constant matrices. Chapter 5 is concerned with various decompositions of matrices, including the Jordan, Schur, and singular value decompositions. Chapter 6 provides a brief treatment of generalized inverses, while Chapter 7 describes the Kronecker and Schur product operations. Chapter 8 is concerned with the properties of positive-semidefinite matrices. A detailed treatment of vector and matrix norms is given in Chapter 9, while formulas for matrix derivatives are given in Chapter 10. Next, Chapter 11 focuses on the matrix exponential and stability theory, which are central to the study of linear differential equations. In Chapter 12 we apply matrix theory to the analysis of linear systems, their state space realizations, and their transfer function representation. This chapter also includes a discussion of the matrix Riccati equation of control theory.

Each chapter provides a core of results with, in many cases, complete proofs. Sections at the end of each chapter provide a collection of Facts organized to correspond to the order of topics in the chapter. These Facts include corollaries and special cases of results presented in the chapter, as well as related results that go beyond the results of the chapter. In some cases the Facts include open problems, illuminating remarks, and hints regarding proofs. The Facts are intended to provide the reader with a useful reference collection of matrix results as well as a gateway to the matrix theory literature.

### Acknowledgments

The writing of this book spanned more than a decade and a half, during which time numerous individuals contributed both directly and indirectly. I am grateful for the helpful comments of many people who contributed technical material and insightful suggestions, all of which greatly improved the presentation and content of the book. In addition, numerous individuals generously agreed to read sections or chapters of the book for clarity and accuracy. I wish to thank Jasim Ahmed, Suhail Akhtar, David Bayard, Sanjay Bhat, Tony Bloch, Peter Bullen, Steve Campbell, Agostino Capponi, Ramu Chandra, Jaganath Chandrasekhar, Nalin Chaturvedi, Vijay Chellaboina, Jie Chen, David Clements, Dan Davison, Dimitris Dimogianopoulos, Jiu Ding, D. Z. Djokovic, R. Scott Erwin, R. W. Farebrother, Danny Georgiev, Joseph Grcar, Wassim Haddad, Yoram Halevi, Jesse Hoagg, Roger Horn, David Hyland, Iman Izadi, Pierre Kabamba, Vikram Kapila, Fuad Kittaneh, Seth Lacy, Thomas Laffey, Cedric Langbort, Alan Laub, Alexander Leonessa, Kai-Yew Lum, Pertti Makila, Roy Mathias, N. Harris McClamroch, Boris Mordukhovich, Sergei Nersesov, JinHyoung Oh, Concetta Pilotto, Harish Palanthandalam-Madapusi, Michael Piovoso, Leiba Rodman, Phil Roe, Carsten Scherer, Wasin So, Andy Sparks, Edward Tate, Yongge Tian, Panagiotis Tsiotras, Feng Tyan, Ravi Venugopal, Jan Willems, Hong Wong, Vera Zeidan, Xingzhi Zhan, and Fuzhen Zhang for their assistance. Nevertheless, I take full responsibility for any remaining errors, and I encourage readers to alert me to any mistakes, corrections of which will be posted on the web. Solutions to the open problems are also welcome.

Portions of the manuscript were typed by Jill Straehla and Linda Smith at Harris Corporation, and by Debbie Laird, Kathy Stolaruk, and Suzanne Smith at the University of Michigan. John Rogosich of Techsetters, Inc., provided invaluable assistance with LaTeX issues, and Jennifer Slater carefully copyedited the entire manuscript. I also thank JinHyoung Oh and Joshua Kang for writing C code to refine the index.

I especially thank Vickie Kearn of Princeton University Press for her wise guidance and constant encouragement. Vickie managed to address all of my concerns and anxieties, and helped me improve the manuscript in many ways.

Finally, I extend my greatest appreciation for the (uncountably) infinite patience of my family, who endured the days, weeks, months, and years that this project consumed. The writing of this book began with toddlers and ended with a teenager and a twenty-year-old. We can all be thankful it is finally finished.

*Special Symbols*

**General Notation**

**Chapter 1**

**Chapter 2**

**Chapter 3**

**Chapter 4**

**Chapter 5**

**Chapter 6**

**Chapter 7**

**Chapter 8**

**Chapter 9**

**Chapter 10**

**Chapter 11**

**Chapter 12**

### Conventions, Notation, and Terminology

*The reader is encouraged to review this section in order to ensure correct interpretation of the statements in this book*.

When a word is defined, it is italicized.

The symbol ≜ means *equal by definition*: *A* ≜ *B* means that the left-hand expression *A* is defined to be the right-hand expression *B*.

A mathematical object defined by a constructive procedure is *well defined *if the constructive procedure produces a uniquely defined object.

Analogous statements are written in parallel using the following style: If *n *is (even, odd), then *n *+ 1 is (odd, even).

The variables *i, j, k, l, m, n* always denote integers. Hence, *k* ≥ 0 denotes a nonnegative integer, and a sum over *k* ≥ 1 is taken over positive integers.

The letter *s *always represents a complex scalar. The letter *z *may or may not represent a complex scalar.

The inequalities *c* ≤ *a* ≤ *d* and *c* ≤ *b* ≤ *d* are written simultaneously as *c* ≤ (*a*, *b*) ≤ *d*.

The prefix "non" means "not" in the words nonconstant, nonempty, nonintegral, nonnegative, nonreal, nonsingular, nonsquare, nonunique, and nonzero. In some traditional usage, "non" may mean "not necessarily."

"Unique" means "exactly one."

"Increasing" and "decreasing" indicate strict change for a change in the argument; the word "strict" is superfluous and thus is omitted. "Nonincreasing" means "nowhere increasing," while "nondecreasing" means "nowhere decreasing."

A set can have a finite or infinite number of elements. A finite set has a finite number of elements.

Multisets can have repeated elements. Hence, {*x*}ms and {*x, x*}ms are different. The listed elements *α, β, γ *of the conventional set {*α, β, γ*} need not be distinct. For example, {*α, β, α*} = {*α, β*}.

In statements of the form "Let spec(*A*) = {*λ*1, . . . , *λr*}," the listed eigenvalues *λ*1, . . . , *λr* are assumed to be distinct.

Square brackets are used alternately with parentheses. For example, *f*[*g*(*x*)] denotes *f*(*g*(*x*)).

The order in which the elements of the set {*x*1, . . . ,*xn*} and the elements of the multiset {*x*1, . . . ,*xn*}ms are listed has no significance. The components of the *n*-tuple (*x*1, . . . ,*xn*) are ordered.

A *sequence* (*x*1, *x*2, . . . ) can be viewed as a tuple with a countably infinite number of components, where the order of the components is relevant and the components need not be distinct.

The composition of functions *f* and *g* is denoted by *f* • *g*. The traditional notation *f* ∘ *g* is reserved for the Schur product.


The terminology "graph" corresponds to what is commonly called a simple directed graph, while the terminology "symmetric graph" corresponds to a simple undirected graph.

The range of cos−1 is [0, *π*], the range of sin−1 is [−*π*/2, *π*/2], the range of tan−1 is (−*π*/2, *π*/2), and the range of cot−1 is (0, *π*).

The *angle between two vectors* is an element of [0, *π*]. Therefore, by applying cos−1 to the normalized inner product of two vectors, the angle between them can be computed.
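
A minimal sketch of this convention (my example): the angle is recovered by applying cos−1 to the normalized inner product, with a clip guarding against rounding error at ±1.

```python
# Angle between two real vectors as an element of [0, pi].
import numpy as np

def angle(x, y):
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards rounding error

print(angle(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # pi/2
print(angle(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # pi
```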

For all *α*, *α*⁰ = 1; in particular, 0⁰ = 1. For all square matrices *A*, *A*⁰ = *I*. With this convention, it is possible to write 1 + *α* + *α*² + · · · = 1/(1 − *α*) for all −1 < *α* < 1.

Neither ∞ nor −∞ is a real number. However, some operations are defined for these objects as extended real numbers, such as ∞ + ∞ = ∞, ∞ · ∞ = ∞, and, for all nonzero real numbers *α*, *α* · ∞ = sign(*α*)∞. The expressions 0 · ∞ and ∞ − ∞ are not defined. See [71, pp. 14, 15].

Let *a *and *b *be real numbers such that *a < b*. A *finite interval *is of the form (*a, b*), [*a, b*), (*a, b*], or [*a, b*], whereas an *infinite interval *is of the form (−∞*, a*), (−∞*, a*], (*a*, ∞), [*a*, ∞), or (−∞, ∞). An *interval *is either a finite interval or an infinite interval. An *extended infinite interval *includes either ∞ or −∞. For example, [−∞*, a*) and [−∞*, a*] include −∞, (*a*, ∞] and [*a*, ∞] include ∞, and [−∞, ∞] includes −∞ and ∞.

."

Imaginary numbers are numbers of the form *jω*, where *ω* is real. Hence, 0 is both a real number and an imaginary number.

The notation Re *A* and Im *A* represents the real and imaginary parts of *A*, respectively. Some books use Re *A* and Im *A* to denote ½(*A* + *A**) and [1/(2*j*)](*A* − *A**), respectively.

For the scalar ordering ≤, if *x* ≤ *y*, then *x* < *y* if and only if *x* ≠ *y*. For the entrywise vector and matrix orderings, *x* ≤ *y* and *x* ≠ *y* do not imply that *x* < *y*.

Operations denoted by superscripts are applied before operations represented by preceding operators. For example, tr (*A* + *B*)² means tr[(*A* + *B*)²]. This convention simplifies many formulas.

A vector with *n* components is a column vector, which is also a matrix with one column. In mathematics, "vector" generally refers to an abstract vector not resolved in coordinates.

Sets have elements, vectors and sequences have components, and matrices have entries. This terminology has no mathematical consequence.

The notation *x*(*i*) represents the *i*th component of the vector *x*.

The notation *A*(*i, j*) represents the scalar (*i, j*) entry of *A*. *Ai, j *or *Aij *denotes a block or submatrix of *A*.

All matrices have nonnegative integral dimensions. If a matrix has either zero rows or zero columns, then the matrix is empty.

The entries of a submatrix *Â* of a matrix *A* are the entries of *A* located in specified rows and columns. *Â* is a block of *A* if *Â* is a submatrix of *A* whose entries are entries of adjacent rows and columns of *A*. Every matrix is both a submatrix and a block of itself.

The determinant of a submatrix is a *subdeterminant*. Some books use "minor." The determinant of a matrix is also a subdeterminant of the matrix.

The dimension of the null space of a matrix is its *defect*. Some books use "nullity."

A block of a square matrix is diagonally located if the block is square and the diagonal entries of the block are also diagonal entries of the matrix; otherwise, the block is off-diagonally located. This terminology avoids confusion with a "diagonal block," which is a block that is also a square, diagonal submatrix.


The Schur product of matrices *A* and *B* is denoted by *A* ∘ *B*. Matrix multiplication is given priority over Schur multiplication; that is, *A* ∘ *BC* means *A* ∘ (*BC*).
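
A small numerical illustration of this precedence convention (my example; NumPy's `*` is the entrywise, i.e., Schur, product):

```python
# A o BC parses as A o (BC), which generally differs from (A o B)C.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

print(A * (B @ C))   # A o (BC): the intended reading
print((A * B) @ C)   # (A o B)C: a different matrix in general
```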

The adjugate of *A* is denoted by *A*ᴬ. The traditional notation is adj *A*, while the notation *A*ᴬ is used in [1259]. If *A* is a scalar, then *A*ᴬ = 1.

For matrices with real entries, *Ā* becomes *A*, *A** becomes *A*ᵀ, "Hermitian" becomes "symmetric," "unitary" becomes "orthogonal," "unitarily" becomes "orthogonally," and "congruence" becomes "T-congruence."

A square complex matrix *A *is symmetric if *A*T = *A *and orthogonal if *A*T*A *= *I*.

The diagonal entries of a square matrix *A* all of whose diagonal entries are real are ordered as dmax(*A*) = d1(*A*) ≥ d2(*A*) ≥ · · · ≥ d*n*(*A*) = dmin(*A*).

Every *n* × *n* matrix has *n* eigenvalues; hence, eigenvalues are counted in accordance with their algebraic multiplicity. The phrase "distinct eigenvalues" ignores algebraic multiplicity.

The eigenvalues of a square matrix *A* all of whose eigenvalues are real are ordered as λmax(*A*) = λ1(*A*) ≥ λ2(*A*) ≥ · · · ≥ λ*n*(*A*) = λmin(*A*).

The inertia of a matrix *A* is written as In *A*. Some books use the notation (*ν*(*A*), *δ*(*A*), *π*(*A*)).

For an eigenvalue λ of *A*, amult*A*(λ) is its algebraic multiplicity, that is, the number of copies of λ in the multispectrum of *A*; gmult*A*(λ) is the number of Jordan blocks of *A* associated with λ; and ind*A*(λ) is the order of the largest Jordan block of *A* associated with λ. The index of *A*, denoted by ind *A* = ind*A*(0), is the order of the largest Jordan block of *A* associated with the eigenvalue 0.

A square matrix *A* is semisimple if the order of every Jordan block of *A* is 1, and cyclic if *A* has exactly one Jordan block associated with each of its eigenvalues. "Defective" means not semisimple, while "derogatory" means not cyclic.

An *n *× *m *matrix has exactly min{*n, m*} singular values, exactly rank *A *of which are positive.

The min{*n, m*} singular values of an *n* × *m* matrix *A* are ordered as σmax(*A*) = σ1(*A*) ≥ σ2(*A*) ≥ · · · ≥ σmin{*n,m*}(*A*). If *n* = *m*, then σmin(*A*) = σ*n*(*A*). The notation σmin(*A*) is defined only for square matrices.
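
The following quick check (my example) confirms the convention numerically: a 2 × 3 matrix has min{2, 3} = 2 singular values, listed in nonincreasing order, and exactly rank *A* of them are positive.

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [2.0, 4.0, 4.0]])  # 2 x 3 matrix of rank 1

s = np.linalg.svd(A, compute_uv=False)
print(s)                                                 # two values, nonincreasing
print(np.linalg.matrix_rank(A), int(np.sum(s > 1e-12)))  # both equal 1
```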

Positive-semidefinite and positive-definite matrices are Hermitian, and thus have real eigenvalues.

An idempotent matrix satisfies *A*² = *A*, while a projector is a Hermitian, idempotent matrix. Some books use "projector" for idempotent and "orthogonal projector" for projector. A reflector is a Hermitian, involutory matrix. A projector is a normal matrix each of whose eigenvalues is 1 or 0, while a reflector is a normal matrix each of whose eigenvalues is 1 or −1.

An elementary matrix is a nonsingular matrix formed by adding an outer-product matrix to the identity matrix. An elementary reflector is a reflector exactly one of whose eigenvalues is −1. An elementary projector is a projector exactly one of whose eigenvalues is 0. Elementary reflectors are elementary matrices. However, elementary projectors are not elementary matrices since elementary projectors are singular.

A range-Hermitian matrix is a square matrix whose range is equal to the range of its complex conjugate transpose. These matrices are also called "EP matrices."

The polynomials 1 and *s*³ + 5*s*² − 4 are monic. The zero polynomial is not monic.

The rank of a polynomial matrix *P* is the maximum rank of *P*(*s*) over all *s* ∈ ℂ. This quantity is also called the "normal rank." We denote this quantity by rank *P*, as distinct from rank *P*(*s*), which denotes the rank of the matrix *P*(*s*).

The rank of a rational transfer function *G* is the maximum rank of *G*(*s*) over all *s* ∈ ℂ excluding poles of the entries of *G*. This quantity is also called the "normal rank." We denote this quantity by rank *G*, as distinct from rank *G*(*s*), which denotes the rank of the matrix *G*(*s*).

We use ⊕ to denote the direct sum of matrices or subspaces.

The notation |*A*| represents the matrix obtained by replacing every entry of *A* by its absolute value; some books use different notation to denote this matrix.


*Terminology Relating to Inequalities*

Let ≤ be a partial ordering, let *X* be a set, and consider the inequality *f*(*x*) ≤ *g*(*x*) for all *x* ∈ *X*. (1)

Inequality (1) is *sharp* if there exists *x*0 ∈ *X* such that *f*(*x*0) = *g*(*x*0).

An inequality of the form *f*(*x*) ≤ *f*(*y*) for all *x* ≤ *y* is a *monotonicity result*.

The inequality *f*(*x*) ≤ *p*(*x*) ≤ *g*(*x*) for all *x* ∈ *X*, where *p* is not identically equal to either *f* or *g* on *X*, is an *interpolation* or *refinement* of (1). The inequality *g*(*x*) ≤ *αf*(*x*) for all *x* ∈ *X*, where *α* > 1, is a *reversal* of (1).

Defining *h*(*x*) ≜ *g*(*x*) − *f*(*x*), it follows that (1) is equivalent to 0 ≤ *h*(*x*) for all *x* ∈ *X*. Now, suppose that *h* has a global minimizer *x*min ∈ *X*. Then, (1) holds if and only if 0 ≤ *h*(*x*min) = min*x*∈*X* *h*(*x*). Consequently, inequalities are often expressed equivalently in terms of optimization problems, and vice versa.
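
A hedged numerical illustration of this equivalence (my example, not the book's): the inequality 2x ≤ x² + 1 on the reals corresponds to the statement that h(x) = (x − 1)² has nonnegative minimum, and the minimum value 0 shows the inequality is sharp.

```python
import numpy as np

f = lambda x: 2.0 * x
g = lambda x: x * x + 1.0
h = lambda x: g(x) - f(x)  # h(x) = (x - 1)^2 >= 0

xs = np.linspace(-10.0, 10.0, 100001)
print(h(xs).min())         # ~0, attained near x = 1: the inequality is sharp
```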

Many inequalities are based on a single function that is either monotonic or convex.

*Matrix Mathematics*

*Chapter One*

*Preliminaries*

In this chapter we review some basic terminology and results concerning logic, sets, functions, and related concepts. This material is used throughout the book.

**1.1 Logic**

Every *statement* is either true or false, but not both. Let *A* and *B* be statements. The *negation* of *A* is the statement (not *A*), the *both* of *A* and *B* is the statement (*A* and *B*), and the *either* of *A* and *B* is the statement (*A* or *B*). The statement (*A* or *B*) does not contradict (*A* and *B*); that is, the word "or" is inclusive. Exclusive "or" is indicated by the phrase "but not both."

The statements "*A* and *B* or *C*" and "*A* or *B* and *C*" are ambiguous. We therefore write "*A* and either *B* or *C*" and "either *A* or both *B* and *C*."

Let *A* and *B* be statements. The *implication* statement "if *A* is satisfied, then *B* is satisfied" or, equivalently, "*A* implies *B*" is written as *A* ⟹ *B*, while *A* ⟺ *B* is equivalent to [(*A* ⟹ *B*) and (*B* ⟹ *A*)]. Of course, *A* ⟸ *B* means *B* ⟹ *A*. A *tautology* is a statement that is true regardless of whether the component statements are true or false. For example, the statement "(*A* and *B*) implies *A*" is a tautology. A *contradiction* is a statement that is false regardless of whether the component statements are true or false. For example, the statement "*A* and (not *A*)" is a contradiction.

Suppose that *A* ⟺ *B*. Then, *A* is satisfied *if and only if* *B* is satisfied. The implication *A* ⟹ *B* (the "only if" part) is *necessity*, while *B* ⟹ *A* (the "if" part) is *sufficiency*. The *converse* statement of *A* ⟹ *B* is *B* ⟹ *A*. The statement *A* ⟹ *B* is equivalent to its *contrapositive* statement (not *B*) ⟹ (not *A*).

A *theorem *is a significant statement, while a *proposition *is a theorem of less significance. The primary role of a *lemma *is to support the proof of a theorem or proposition. Furthermore, a *corollary *is a consequence of a theorem or proposition. Finally, a *fact *is either a theorem, proposition, lemma, or corollary. Theorems, propositions, lemmas, corollaries, and facts are provably true statements.

Suppose that *A*′ ⟹ *A* ⟹ *B* ⟹ *B*′. Then, *A*′ ⟹ *B*′ is a corollary of *A* ⟹ *B*.

Let *A*, *B*, and *C* be statements, and assume that *A* ⟹ *B*. Then, *A* ⟹ *B* is a *strengthening* of the statement (*A* and *C*) ⟹ *B*. If, in addition, *A* ⟹ *C*, then the statement (*A* and *C*) ⟹ *B* has a *redundant assumption*.

**1.2 Sets**

A *set *{*x, y*, . . . } is a collection of elements. A set may have a finite or infinite number of elements. A *finite set *has a finite number of elements.

Let *X* be a set. Then, *x* ∈ *X* means that *x* is an *element* of *X*. If *w* is not an element of *X*, then we write *w* ∉ *X*. The set with no elements, written ∅, is the *empty set*; a set with at least one element is *nonempty*.

A set cannot have repeated elements. For example, {*x, x*} = {*x*}. However, a *multiset* is a collection of elements that allows for repetition. The multiset consisting of two copies of *x* is written as {*x, x*}ms. However, we do not assume that the listed elements *x, y* of the conventional set {*x, y*} are distinct. The *cardinality* of a set is the number of its elements.

There are two basic types of mathematical statements for quantifiers. An *existential statement* is of the form "there exists *x* ∈ *X* such that statement *Z* is satisfied," while a *universal statement* has the structure "for all *x* ∈ *X*, statement *Z* is satisfied" or, equivalently, "statement *Z* is satisfied for all *x* ∈ *X*."

Let *X* and *Y* be sets. The *intersection* of *X* and *Y* is given by *X* ∩ *Y* ≜ {*x* : *x* ∈ *X* and *x* ∈ *Y*}, while the *union* of *X* and *Y* is *X* ∪ *Y* ≜ {*x* : *x* ∈ *X* or *x* ∈ *Y*}. The *complement* of *X* *relative* to *Y* is *Y*\*X* ≜ {*x* ∈ *Y* : *x* ∉ *X*}. If a universal set *Y* is specified, then the *complement* of *X* is *X*∼ ≜ *Y*\*X*.

If *x* ∈ *X* implies that *x* ∈ *Y*, then *X* is *contained* in *Y* (*X* is a *subset* of *Y*), which is written as *X* ⊆ *Y*. If, in addition, *X* ≠ *Y*, then *X* is a *proper subset* of *Y*. *X* and *Y* are *disjoint* if *X* ∩ *Y* = ∅. A *partition* of *X* is a set of pairwise-disjoint nonempty subsets of *X* whose union is *X*.

" extend directly to multisets. For example,

By ignoring repetitions, a multiset can be converted to a set, while a set can be viewed as a multiset with distinct elements.

The *Cartesian product* *X*1 × · · · × *Xn* of sets *X*1, . . . , *Xn* is the set consisting of *tuples* of the form (*x*1, . . . , *xn*), where *xi* ∈ *Xi* for all *i* ∈ {1, . . . , *n*}. A tuple with *n* components is an *n-tuple*. Note that the components of an *n*-tuple are ordered but need not be distinct.

By replacing "and," "or," and "not" with "∩," "∪," and "∼," respectively, statements about statements *A* and *B* can be recast as statements about sets, and vice versa. For example, the tautology "not (*A* and *B*) if and only if [(not *A*) or (not *B*)]" is equivalent to the set identity (*X* ∩ *Y*)∼ = *X*∼ ∪ *Y*∼.

**1.3 Integers, Real Numbers, and Complex Numbers**

ℝ and ℂ denote the real and complex number fields, respectively, whose elements are *scalars*.

Let *x* ∈ ℂ. Then, *x* = *y* + *jz*, where *y, z* ∈ ℝ. The *complex conjugate* of *x* is *x̄* ≜ *y* − *jz*, the real part of *x* is Re *x* ≜ *y* = ½(*x* + *x̄*), the imaginary part of *x* is Im *x* ≜ *z* = [1/(2*j*)](*x* − *x̄*), and the *absolute value* of *x* is |*x*| ≜ √(*y*² + *z*²).

The *closed left half plane* (CLHP), *open left half plane* (OLHP), *closed right half plane* (CRHP), and *open right half plane* (ORHP) are defined by CLHP ≜ {*x* ∈ ℂ : Re *x* ≤ 0}, OLHP ≜ {*x* ∈ ℂ : Re *x* < 0}, CRHP ≜ {*x* ∈ ℂ : Re *x* ≥ 0}, and ORHP ≜ {*x* ∈ ℂ : Re *x* > 0}.

The imaginary numbers are represented by *j*ℝ ≜ {*jω* : *ω* ∈ ℝ}. Note that 0 is both a real number and an imaginary number.

Next, we define the *open unit disk* (OUD) and the *closed unit disk* (CUD) by OUD ≜ {*x* ∈ ℂ : |*x*| < 1} and CUD ≜ {*x* ∈ ℂ : |*x*| ≤ 1}. The complements of the open unit disk and the closed unit disk are given, respectively, by the *closed punctured plane* (CPP) and the *open punctured plane* (OPP), which are defined by CPP ≜ {*x* ∈ ℂ : |*x*| ≥ 1} and OPP ≜ {*x* ∈ ℂ : |*x*| > 1}.


**1.4 Functions**

Let *X* and *Y* be sets. Then, a *function f*: *X* → *Y* is a rule that assigns a unique element *f*(*x*) ∈ *Y* (the *image* of *x*) to each element *x* ∈ *X*. Equivalently, a function *f* is a subset of *X* × *Y* such that, for all *x* ∈ *X*, it follows that there exists *y* ∈ *Y* such that (*x, y*) ∈ *f* and such that, if (*x, y*1), (*x, y*2) ∈ *f*, then *y*1 = *y*2. *X* is the *domain* of *f*, and *Y* is the *codomain* of *f*. The set *f*(*X*) ⊆ *Y* is the *range* of *f*. If *Z* is a set and *g*: *f*(*X*) → *Z*, then *g* • *f*: *X* → *Z* (the *composition* of *g* and *f*) is defined by (*g* • *f*)(*x*) ≜ *g*[*f*(*x*)]. If *x*1, *x*2 ∈ *X* and *f*(*x*1) = *f*(*x*2) implies that *x*1 = *x*2, then *f* is *one-to-one*. If *f*(*X*) = *Y*, then *f* is *onto*. The function *I*: *X* → *X* defined by *I*(*x*) ≜ *x* for all *x* ∈ *X* is the *identity* on *X*. Finally, *x* ∈ *X* is a *fixed point* of the function *f*: *X* → *X* if *f*(*x*) = *x*.

The following result shows that function composition is associative.

**Proposition 1.4.1.** Let *W*, *X*, *Y*, and *Z* be sets, and let *f*: *W* → *X*, *g*: *X* → *Y*, and *h*: *Y* → *Z*. Then, *h* • (*g* • *f*) = (*h* • *g*) • *f*.

Hence, we write *h • g • f* for *h* • (*g* • *f*) and (*h* • *g*) • *f*.

Furthermore, a function *f* that assigns to each element of its domain a uniquely defined representative element is a *canonical mapping*, and the image *f*(*x*) is a *canonical form* of *x*.

Let *f*: *X* → *Y*. Then, *f* is *left invertible* if there exists a function *g*: *Y* → *X* (a *left inverse* of *f*) such that *g • f* = *I* on *X*, whereas *f* is *right invertible* if there exists a function *h*: *Y* → *X* (a *right inverse* of *f*) such that *f • h* = *I* on *Y*. In addition, the function *f* is *invertible* if there exists a function *f*−1: *Y* → *X* (the *inverse* of *f*) such that *f*−1 *• f* = *I* on *X* and *f • f*−1 = *I* on *Y*. The *inverse image f*−1(*S*) of *S* ⊆ *Y* is defined by *f*−1(*S*) ≜ {*x* ∈ *X* : *f*(*x*) ∈ *S*}.

Note that the set *f*−1(*S*) can be defined whether or not *f* is invertible. In fact, *f*−1[*f*(*X*)] = *X*.

**Theorem 1.4.2.** Let *X* and *Y* be sets, and let *f*: *X* → *Y*. Then, the following statements hold:

*i*) *f *is left invertible if and only if *f *is one-to-one.

*ii*) *f *is right invertible if and only if *f *is onto.

Furthermore, the following statements are equivalent:

*iii*) *f *is invertible.

*iv*) *f *has a unique inverse.

*v*) *f *is one-to-one and onto.

*vi*) *f *is left invertible and right invertible.

*vii*) *f *has a unique left inverse.

*viii*) *f* has a unique right inverse.
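
A minimal finite-set sketch of Theorem 1.4.2 (my example, with functions represented as Python dictionaries): a one-to-one function has a left inverse, an onto function has a right inverse, and neither inverse need be unique unless the function is invertible.

```python
X = [0, 1]
Y = ['a', 'b', 'c']

f = {0: 'a', 1: 'b'}                 # f: X -> Y is one-to-one but not onto
g = {'a': 0, 'b': 1, 'c': 0}         # a left inverse of f ('c' -> 0 is arbitrary)
assert all(g[f[x]] == x for x in X)  # g . f = identity on X

p = {'a': 0, 'b': 1, 'c': 1}         # p: Y -> X is onto but not one-to-one
q = {0: 'a', 1: 'b'}                 # a right inverse of p
assert all(p[q[x]] == x for x in X)  # p . q = identity on X
```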