
CMDA 3606, Spring 2015

Test 2: Review Sheet

Dr. Chung

Test 2: Thursday, April 30, 2015, 2-3:15pm


Topics Covered:
Introduction to Inverse Problems: Discrete Inverse Problems, Hansen, Chapters 1-2
Be able to describe the requirements of a well-posed problem and explain how to
fix ill-posedness.
Know and be able to use the Fredholm integral equation to answer the questions:
What is the forward problem? What is the inverse problem? (See the sketch after
this topic's items.)
Be familiar with the two example problems described in class. You do not need to
memorize the formulas for these problems, but you should be able to explain the
basic idea in the context of inverse problems.
Be able to do problems on the SVD and Inverse Problems handout.
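A minimal sketch of the forward/inverse distinction, assuming NumPy; the Gaussian kernel, grid, and noise level below are my own illustrative choices, not one of the class examples. The Fredholm integral equation b(s) = ∫ K(s,t) x(t) dt is discretized by a quadrature rule into b = Ax; the forward problem (computing b from x) is stable, while the naive inverse solution amplifies even tiny noise.

    import numpy as np

    # Midpoint-rule discretization of a first-kind Fredholm equation on [0, 1]:
    # b = A x with A_ij = h * K(s_i, t_j) for a smooth (Gaussian) kernel.
    n = 64
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    A = h * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))

    x_exact = np.sin(np.pi * t)            # "true" solution
    b_exact = A @ x_exact                  # forward problem: stable, easy

    rng = np.random.default_rng(0)
    b_noisy = b_exact + 1e-4 * rng.standard_normal(n)   # small measurement noise

    x_naive = np.linalg.solve(A, b_noisy)  # inverse problem: naive solve
    print("cond(A)        =", np.linalg.cond(A))
    print("relative error =",
          np.linalg.norm(x_naive - x_exact) / np.linalg.norm(x_exact))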
SVD and Spectral Analysis: Hansen, Sections 3.2-3.3
Be very comfortable working with the SVD and using the SVD in various forms.
Know the typical properties of the SVD for ill-posed problems.
Be able to describe and explain your observations of the Picard plot.
Be able to characterize and explain the overall behavior of the Picard plot using the
SVD expansion of the inverse solution in the noisy case. Concepts like reliable
and noisy coefficients should be well understood. (See the sketch after this
topic's items.)
Know the Discrete Picard Condition and how it helps us solve inverse problems.
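A sketch of the quantities behind a Picard plot, assuming NumPy and the same toy problem as the previous sketch (my own construction, not a class example): the singular values sigma_i, the coefficients |u_i^T b|, and the coefficients |u_i^T b| / sigma_i of the SVD expansion of the inverse solution.

    import numpy as np

    # Same kind of ill-posed toy problem as in the previous sketch.
    n = 64
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    A = h * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))
    x_exact = np.sin(np.pi * t)
    rng = np.random.default_rng(0)
    b = A @ x_exact + 1e-4 * rng.standard_normal(n)

    # SVD: A = U @ diag(sigma) @ V.T
    U, sigma, Vt = np.linalg.svd(A)
    beta = U.T @ b            # Picard-plot coefficients |u_i^T b|
    coef = beta / sigma       # coefficients of the SVD expansion of the inverse solution

    for i in range(0, n, 8):
        print(f"i={i:2d}  sigma={sigma[i]:9.2e}  |u_i^T b|={abs(beta[i]):9.2e}  "
              f"|u_i^T b|/sigma={abs(coef[i]):9.2e}")
    # Discrete Picard Condition: the |u_i^T b| should decay faster than the sigma_i.
    # Once the |u_i^T b| level off at the noise level, the ratios blow up; those are
    # the noisy, unreliable terms that regularization must filter out.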
Regularization Methods: Hansen, Chapter 4
Be able to explain why naive inverse solutions are useless. In particular, be able to
prove and explain the significance of the bound on the relative error
‖x − x^exact‖ / ‖x^exact‖ from class.
Spectral Filtering Methods:
Be able to explain the TSVD solution and its alternate interpretation as a
constrained optimization problem.
Be familiar with the many formulations of Tikhonov regularization and be able
to show their equivalences.
Be able to explain the different components of the Tikhonov problem and the
significance of the regularization parameter. (See the sketch after this topic's
items.)
Be able to explain the trends and conclusions that you made from the in-class
handout on Investigating Tikhonov Solutions.
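A sketch of both spectral filtering methods on the same SVD, assuming NumPy and the same toy problem as above; the truncation level and regularization parameter are ad hoc illustrative choices, not tuned values. TSVD keeps only the leading terms of the SVD expansion, while Tikhonov damps every term by the filter factor sigma_i^2 / (sigma_i^2 + lambda^2).

    import numpy as np

    # Toy ill-posed problem (same construction as the earlier sketches).
    n = 64
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    A = h * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))
    x_exact = np.sin(np.pi * t)
    rng = np.random.default_rng(0)
    b = A @ x_exact + 1e-4 * rng.standard_normal(n)

    U, sigma, Vt = np.linalg.svd(A)
    beta = U.T @ b

    # TSVD: keep only the first k terms of the SVD expansion (ad hoc threshold).
    k = int(np.sum(sigma > 1e-3))
    x_tsvd = Vt[:k].T @ (beta[:k] / sigma[:k])

    # Tikhonov: min ||Ax - b||^2 + lam^2 ||x||^2, equivalently the same expansion
    # with filter factors phi_i = sigma_i^2 / (sigma_i^2 + lam^2).
    lam = 1e-3
    phi = sigma**2 / (sigma**2 + lam**2)
    x_tik = Vt.T @ (phi * beta / sigma)

    def rel_err(x):
        return np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact)

    print("TSVD     (k =", k, ") relative error:", rel_err(x_tsvd))
    print("Tikhonov (lam =", lam, ") relative error:", rel_err(x_tik))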
Choosing the Regularization Parameter: Hansen, Chapter 5
Be able to analyze reconstruction error in terms of regularization error and perturbation error. Formulations in terms of the SVD will be helpful in analyzing the
error terms.

Be able to explain the difference between oversmoothing and undersmoothing.


Be able to describe the basic idea of the Discrepancy Principle, L-curve, and GCV,
and be able to explain the advantages and disadvantages of each of these methods.
(The quantities used by the first two are sketched below.)
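A sketch, assuming NumPy and the same toy problem, of the quantities that the Discrepancy Principle and the L-curve work with (GCV is omitted); the noise level is known exactly here only because the noise is simulated, which is what makes the discrepancy principle easy in this example and harder in practice.

    import numpy as np

    # Same toy problem as before, but keep the noise vector so its norm is known.
    n = 64
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    A = h * np.exp(-(t[:, None] - t[None, :])**2 / (2 * 0.05**2))
    x_exact = np.sin(np.pi * t)
    rng = np.random.default_rng(0)
    noise = 1e-4 * rng.standard_normal(n)
    b = A @ x_exact + noise
    delta = np.linalg.norm(noise)     # noise level: known here, estimated in practice

    U, sigma, Vt = np.linalg.svd(A)
    beta = U.T @ b

    print("delta =", delta)
    for lam in np.logspace(-6, -1, 6):
        phi = sigma**2 / (sigma**2 + lam**2)
        x_lam = Vt.T @ (phi * beta / sigma)
        res = np.linalg.norm(A @ x_lam - b)   # residual norm (L-curve horizontal axis)
        sol = np.linalg.norm(x_lam)           # solution norm (L-curve vertical axis)
        print(f"lam={lam:7.1e}  ||Ax-b||={res:9.2e}  ||x||={sol:9.2e}")
    # Discrepancy principle: pick the lam whose residual norm first reaches delta.
    # L-curve: plot (log ||Ax-b||, log ||x||) over lam and pick the corner.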
Unconstrained Optimization: Ascher and Greif, Section 9.2
Be able to compute the gradient and Hessian for any multivariable function f(x).
Be able to determine if x is a local minimizer by verifying sufficient conditions.
Be able to show that a matrix is positive definite.
Be able to derive Newton's method for multivariable functions, write an algorithm
for Newton's method, and apply a few iterations. Explain the advantages and
disadvantages of Newton's method. (See the sketch after this topic's items.)
Be able to write an algorithm for steepest descent and explain the advantages and
disadvantages.
Know the basic idea of Levenberg-Marquardt and quasi-Newton methods and be
able to explain when they are needed. You do not need to memorize BFGS. Explain
the advantages and disadvantages.
Be able to derive the steepest descent algorithm to solve a linear system.
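A sketch of Newton's method and steepest descent on a generic smooth function, assuming NumPy; the Rosenbrock test function and the fixed iteration counts below are my own choices, not from class. Newton's method solves H(x) p = -grad f(x) for the step; steepest descent moves along -grad f(x), here with a simple backtracking (Armijo) line search.

    import numpy as np

    # Rosenbrock test function, with gradient and Hessian worked out by hand.
    def f(x):
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    def grad(x):
        return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                         200 * (x[1] - x[0]**2)])

    def hess(x):
        return np.array([[2 - 400 * (x[1] - 3 * x[0]**2), -400 * x[0]],
                         [-400 * x[0],                     200.0]])

    # Newton's method: solve H(x) p = -grad f(x) each iteration. Fast (quadratic)
    # convergence near the minimizer, but needs the Hessian and can fail far away.
    x = np.array([-1.2, 1.0])
    for k in range(20):
        p = np.linalg.solve(hess(x), -grad(x))
        x = x + p
    print("Newton:          ", x)

    # Steepest descent with a backtracking (Armijo) line search: cheap per
    # iteration, but only linear and often very slow convergence.
    x = np.array([-1.2, 1.0])
    for k in range(2000):
        g = grad(x)
        alpha = 1.0
        while f(x - alpha * g) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5
        x = x - alpha * g
    print("Steepest descent:", x)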
Iterative Methods for Solving Linear Systems
Why do we use iterative methods, and when are they preferred to direct methods?
Stationary Iterative Methods (SIM): Ascher and Greif, Section 7.2
Be able to apply a few iterations of Jacobi and Gauss-Seidel to a small problem.
Be able to determine when a SIM is guaranteed to converge.
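A sketch of Jacobi and Gauss-Seidel on a small strictly diagonally dominant system (a sufficient condition for both to converge; the particular 3 x 3 system is my own example). The only difference in the code is that Gauss-Seidel uses each newly updated component immediately within the sweep.

    import numpy as np

    # Small strictly diagonally dominant system: both methods are guaranteed to converge.
    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    b = np.array([2.0, 4.0, 10.0])

    # Jacobi: every update uses only values from the previous sweep.
    x = np.zeros(3)
    for k in range(25):
        x_new = np.empty(3)
        for i in range(3):
            s = A[i] @ x - A[i, i] * x[i]        # off-diagonal part of row i
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
    print("Jacobi:      ", x)

    # Gauss-Seidel: same update, but each new component is used immediately.
    x = np.zeros(3)
    for k in range(25):
        for i in range(3):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    print("Gauss-Seidel:", x)

    print("Exact:       ", np.linalg.solve(A, b))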
Krylov Subspace Methods: Golub and Van Loan, Section 11.3
Be able to explain the basic idea of a subspace method for optimization and
explain why it works.
Be able to explain the basic idea of the conjugate gradient method from its interpretation as a Krylov subspace method. Specific details regarding the Lanczos
tridiagonalization are not needed. More important is how the tridiagonalization
is used to solve the optimization problem.
You do not need to memorize the conjugate gradient algorithm.
Be able to compare Steepest Descent and the Conjugate Gradient Method (cost
per iteration, convergence, etc.).
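A sketch contrasting steepest descent and the conjugate gradient method on a symmetric positive definite system, assuming NumPy; the test matrix and iteration count are arbitrary choices of mine. Both methods cost one matrix-vector product per iteration, but the CG iterate is optimal over the growing Krylov subspace span{b, Ab, ..., A^(k-1) b}, so it converges far faster; the recursion used here is the standard textbook form of CG.

    import numpy as np

    # Symmetric positive definite test system with eigenvalues in [1, 100].
    rng = np.random.default_rng(1)
    n = 100
    Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
    A = Q @ np.diag(np.linspace(1, 100, n)) @ Q.T
    b = rng.standard_normal(n)
    x_star = np.linalg.solve(A, b)

    def rel_err(x):
        return np.linalg.norm(x - x_star) / np.linalg.norm(x_star)

    # Steepest descent with the exact line search alpha = r^T r / r^T A r.
    x = np.zeros(n)
    for k in range(50):
        r = b - A @ x
        alpha = (r @ r) / (r @ (A @ r))
        x = x + alpha * r
    print("Steepest descent, 50 iterations:", rel_err(x))

    # Conjugate gradient: same cost per iteration (one product with A), but the
    # iterate minimizes the error over the whole Krylov subspace built so far.
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for k in range(50):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        p = r + (rr_new / rr) * p
        rr = rr_new
    print("Conjugate gradient, 50 iterations:", rel_err(x))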
General Comments: The most important resource for preparing for the test is the class
notes. In addition, make sure you review the suggested problems, the in-class projects, the
quizzes, and the assigned homework problems.
If you have time, it may be a good idea to review some of your notes from material covered
in Test 1, since many of the methods/ideas covered in Test 2 rely on a solid background in
linear algebra.
