
Fundamentals of Matrix Computations

Second Edition

David S. Watkins

A Wiley-Interscience Publication

JOHN WILEY & SONS, INC.


New York / Chichester / Weinheim / Brisbane / Singapore / Toronto

Contents

Preface
Acknowledgments

1 Gaussian Elimination and Its Variants
   1.1 Matrix Multiplication
   1.2 Systems of Linear Equations
   1.3 Triangular Systems
   1.4 Positive Definite Systems; Cholesky Decomposition
   1.5 Banded Positive Definite Systems
   1.6 Sparse Positive Definite Systems
   1.7 Gaussian Elimination and the LU Decomposition
   1.8 Gaussian Elimination with Pivoting
   1.9 Sparse Gaussian Elimination

2 Sensitivity of Linear Systems
   2.1 Vector and Matrix Norms
   2.2 Condition Numbers
   2.3 Perturbing the Coefficient Matrix
   2.4 A Posteriori Error Analysis Using the Residual
   2.5 Roundoff Errors; Backward Stability
   2.6 Propagation of Roundoff Errors
   2.7 Backward Error Analysis of Gaussian Elimination
   2.8 Scaling
   2.9 Componentwise Sensitivity Analysis

3 The Least Squares Problem
   3.1 The Discrete Least Squares Problem
   3.2 Orthogonal Matrices, Rotators, and Reflectors
   3.3 Solution of the Least Squares Problem
   3.4 The Gram-Schmidt Process
   3.5 Geometric Approach
   3.6 Updating the QR Decomposition

4 The Singular Value Decomposition
   4.1 Introduction
   4.2 Some Basic Applications of Singular Values
   4.3 The SVD and the Least Squares Problem
   4.4 Sensitivity of the Least Squares Problem

5 Eigenvalues and Eigenvectors I
   5.1 Systems of Differential Equations
   5.2 Basic Facts
   5.3 The Power Method and Some Simple Extensions
   5.4 Similarity Transforms
   5.5 Reduction to Hessenberg and Tridiagonal Forms
   5.6 The QR Algorithm
   5.7 Implementation of the QR Algorithm
   5.8 Use of the QR Algorithm to Calculate Eigenvectors
   5.9 The SVD Revisited

6 Eigenvalues and Eigenvectors II
   6.1 Eigenspaces and Invariant Subspaces
   6.2 Subspace Iteration, Simultaneous Iteration, and the QR Algorithm
   6.3 Eigenvalues of Large, Sparse Matrices, I
   6.4 Eigenvalues of Large, Sparse Matrices, II
   6.5 Sensitivity of Eigenvalues and Eigenvectors
   6.6 Methods for the Symmetric Eigenvalue Problem
   6.7 The Generalized Eigenvalue Problem

7 Iterative Methods for Linear Systems
   7.1 A Model Problem
   7.2 The Classical Iterative Methods
   7.3 Convergence of Iterative Methods
   7.4 Descent Methods; Steepest Descent
   7.5 Preconditioners
   7.6 The Conjugate-Gradient Method
   7.7 Derivation of the CG Algorithm
   7.8 Convergence of the CG Algorithm
   7.9 Indefinite and Nonsymmetric Problems

Appendix: Some Sources of Software for Matrix Computations
References
Index
Index of MATLAB Terms

Preface

This book was written for advanced undergraduates, graduate students, and mature scientists in mathematics, computer science, engineering, and all disciplines in which numerical methods are used. At the heart of most scientific computer codes lie matrix computations, so it is important to understand how to perform such computations efficiently and accurately. This book meets that need by providing a detailed introduction to the fundamental ideas of numerical linear algebra. The prerequisites are a first course in linear algebra and some experience with computer programming. For the understanding of some of the examples, especially in the second half of the book, the student will find it helpful to have had a first course in differential equations.

There are several other excellent books on this subject, including those by Demmel [15], Golub and Van Loan [33], and Trefethen and Bau [71]. Students who are new to this material often find those books quite difficult to read. The purpose of this book is to provide a gentler, more gradual introduction to the subject that is nevertheless mathematically solid. The strong positive student response to the first edition has assured me that my first attempt was successful and encouraged me to produce this updated and extended edition.

The first edition was aimed mainly at the undergraduate level. As it turned out, the book also found a great deal of use as a graduate text. I have therefore added new material to make the book more attractive at the graduate level. These additions are detailed below. However, the text remains suitable for undergraduate use, as the elementary material has been kept largely intact, and more elementary exercises have been added. The instructor can control the level of difficulty by deciding which sections to cover and how far to push into each section.
Numerous advanced topics are developed in exercises at the ends of the sections.

The book contains many exercises, ranging from easy to moderately difficult. Some are interspersed with the textual material and others are collected at the end of each section. Those that are interspersed with the text are meant to be worked immediately by the reader. This is my way of getting students actively involved in the learning process. In order to get something out, you have to put something in. Many of the exercises at the ends of sections are lengthy and may appear intimidating at first. However, the persistent student will find that s/he can make it through them with the help of the ample hints and advice that are given. I encourage every student to work as many of the exercises as possible.

Numbering Scheme

Nearly all numbered items in this book, including theorems, lemmas, numbered equations, examples, and exercises, share a single numbering scheme. For example, the first numbered item in Section 1.3 is Theorem 1.3.1. The next two numbered items are displayed equations, which are numbered (1.3.2) and (1.3.3), respectively. These are followed by the first exercise of the section, which bears the number 1.3.4. Thus each item has a unique number: the only item in the book that has the number 1.3.4 is Exercise 1.3.4. Although this scheme is unusual, I believe that most readers will find it perfectly natural, once they have gotten used to it. Its big advantage is that it makes things easy to find: the reader who has located Exercises 1.4.15 and 1.4.25 but is looking for Example 1.4.20 knows for sure that this example lies somewhere between the two exercises.

There are a couple of exceptions to the scheme. For technical reasons related to the typesetting, tables and figures (the so-called floating bodies) are numbered separately by chapter. For example, the third figure of Chapter 1 is Figure 1.3.

New Features of the Second Edition

Use of MATLAB

By now MATLAB (a registered trademark of the MathWorks, Inc., http://www.mathworks.com) is firmly established as the most widely used vehicle for teaching matrix computations. MATLAB is an easy-to-use, very high-level language that allows the student to perform much more elaborate computational experiments than before. MATLAB is also widely used in industry. I have therefore added many examples and exercises that make use of MATLAB. This book is not, however, an introduction to MATLAB, nor is it a MATLAB manual. For those purposes there are other books available, for example, the MATLAB Guide by Higham and Higham [40].


However, MATLAB's extensive help facilities are good enough that the reader may feel no need for a supplementary text. In an effort to make it easier for the student to use MATLAB with this book, I have included an index of MATLAB terms, separate from the ordinary index.

I used to make my students write and debug their own Fortran programs. I have left the Fortran exercises from the first edition largely intact. I hope a few students will choose to work through some of these worthwhile projects.

More Applications

In order to help the student better understand the importance of the subject matter of this book, I have included more examples and exercises on applications (solved using MATLAB), mostly at the beginnings of chapters. I have chosen very simple applications: electrical circuits, mass-spring systems, simple partial differential equations. In my opinion the simplest examples are the ones from which we can learn the most.

Earlier Introduction of the Singular Value Decomposition (SVD)

The SVD is one of the most important tools in numerical linear algebra. In the first edition it was placed in the final chapter of the book, because it is impossible to discuss methods for computing the SVD until after eigenvalue problems have been discussed. I have since decided that the SVD needs to be introduced sooner, so that the student can find out earlier about its properties and uses. With the help of MATLAB, the student can experiment with the SVD without knowing anything about how it is computed. Therefore I have added a brief chapter on the SVD in the middle of the book.
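As a minimal sketch of the kind of experiment meant here (the test matrix and variable names below are illustrative, not taken from the book):

    % Exploring the SVD in MATLAB without knowing how it is computed.
    A = randn(4, 3);               % a random 4-by-3 test matrix
    [U, S, V] = svd(A);            % full SVD: A = U*S*V'
    sigma = diag(S)                % singular values, largest first
    norm(A - U*S*V') / norm(A)     % reconstruction error; near machine epsilon
    rank(A)                        % numerical rank, determined from the sigma(i)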

New Material on Iterative Methods

The biggest addition to the book is a chapter on iterative methods for solving large, sparse systems of linear equations. The main focus of the chapter is the powerful conjugate-gradient method for solving symmetric, positive definite systems. However, the classical iterations are also discussed, and so are preconditioners. Krylov subspace methods for solving indefinite and nonsymmetric problems are surveyed briefly. There are also two new sections on methods for solving large, sparse eigenvalue problems. The discussion includes the popular implicitly-restarted Arnoldi and Jacobi-Davidson methods. I hope that these additions in particular will make the book more attractive as a graduate text.
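For a concrete first picture of the method this chapter centers on, here is a minimal MATLAB sketch of the conjugate-gradient iteration for Ax = b with A symmetric positive definite. It is an illustration only, not the book's Chapter 7 development; the function name cg_sketch is my own, and in practice MATLAB's built-in pcg is the robust choice.

    % Minimal conjugate-gradient sketch; assumes A is symmetric positive definite.
    function x = cg_sketch(A, b, tol, maxit)
        x = zeros(size(b));            % start from x0 = 0, so r0 = b
        r = b;                         % residual b - A*x
        p = r;                         % first search direction
        rho = r' * r;
        for k = 1:maxit
            Ap = A * p;
            alpha = rho / (p' * Ap);   % step length minimizing along p
            x = x + alpha * p;         % update the approximate solution
            r = r - alpha * Ap;        % update the residual cheaply
            rho_new = r' * r;
            if sqrt(rho_new) <= tol * norm(b)
                break                  % relative residual small enough
            end
            beta = rho_new / rho;      % keeps the directions A-conjugate
            p = r + beta * p;
            rho = rho_new;
        end
    end

For instance, with the standard sparse model problem A = delsq(numgrid('S', 30)) and b = ones(size(A, 1), 1), the call x = cg_sketch(A, b, 1e-8, 200) converges in far fewer iterations than the dimension of A.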

Other New Features

To make the book more versatile, a number of other topics have been added, including:

- a discussion of how to update the QR decomposition when a row or column is added to or deleted from the data matrix, as happens in signal processing and data analysis applications.

- a section introducing new methods for the symmetric eigenvalue problem that have been developed since the first edition was published.

- a backward error analysis of Gaussian elimination, including a discussion of the modern componentwise error analysis.

- a discussion of reorthogonalization, a practical means of obtaining numerically orthonormal vectors.

A few topics have been deleted on the grounds that they are either obsolete or too specialized. I have also taken the opportunity to correct several vexing errors from the first edition. I can only hope that I have not introduced too many new ones.

DAVID S. WATKINS

Pullman, Washington
January, 2002

Acknowledgments

I am greatly indebted to the authors of some of the early works in this field. These include A. S. Householder [43], J. H. Wilkinson [81], G. E. Forsythe and C. B. Moler [24], G. W. Stewart [67], C. L. Lawson and R. J. Hanson [48], B. N. Parlett [54], and A. George and J. W. Liu [30], as well as the authors of the Handbook [83], the EISPACK Guide [64], and the LINPACK Users' Guide [18]. All of them influenced me profoundly. By the way, every one of these books is still worth reading today.

Special thanks go to Cleve Moler for inventing MATLAB, which has changed everything.

Most of the first edition was written while I was on leave, at the University of Bielefeld, Germany. I am pleased to thank once again my host and longtime friend, Ludwig Elsner. During that stay I received financial support from the Fulbright commission. A big chunk of the second edition was also written in Germany, at the Technical University of Chemnitz. I thank my host (and another longtime friend), Volker Mehrmann. On that visit I received financial support from Sonderforschungsbereich 393, TU Chemnitz. I am also indebted to my home institution, Washington State University, for its support of my work on both editions.

I thank once again professors Dale Olesky, Kemble Yates, and Tjalling Ypma, who class-tested a preliminary version of the first edition. Since publication of the first edition, numerous people have sent me corrections, feedback, and comments. These include A. Cline, L. Dieci, E. Jessup, D. Koya, D. D. Olesky, B. N. Parlett, A. C. Raines, A. Witt, and K. Wright. Finally, I thank the many students who have helped me learn how to teach this material over the years.

D. S. W.
