NEWTON'S METHOD

Newton's method presented in Section 5.12.1 can be extended for the minimization of multivariable functions. For this, consider the quadratic approximation of the function f(X) at X = X_i using the Taylor's series expansion

$$f(\mathbf{X}) = f(\mathbf{X}_i) + \nabla f_i^{T}(\mathbf{X} - \mathbf{X}_i) + \tfrac{1}{2}(\mathbf{X} - \mathbf{X}_i)^{T}[J_i](\mathbf{X} - \mathbf{X}_i) \qquad (6.95)$$
where $[J_i] = [J]\big|_{\mathbf{X}_i}$ is the matrix of second partial derivatives (Hessian matrix) of f evaluated at the point $\mathbf{X}_i$. By setting the partial derivatives of Eq. (6.95) equal to zero for the minimum of f(X), we obtain

$$\frac{\partial f(\mathbf{X})}{\partial x_j} = 0, \qquad j = 1, 2, \ldots, n \qquad (6.96)$$
Equations (6.96) and (6.95) give

$$\nabla f = \nabla f_i + [J_i](\mathbf{X} - \mathbf{X}_i) = \mathbf{0} \qquad (6.97)$$
If $[J_i]$ is nonsingular, Eqs. (6.97) can be solved to obtain an improved approximation $(\mathbf{X} = \mathbf{X}_{i+1})$ as

$$\mathbf{X}_{i+1} = \mathbf{X}_i - [J_i]^{-1}\nabla f_i \qquad (6.98)$$
Since higher-order terms have been neglected in Eq. (6.95), Eq. (6.98) is to
be used iteratively to find the optimum solution X*.
The sequence of points $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_{i+1}$ can be shown to converge to the actual solution $\mathbf{X}^*$ from any initial point $\mathbf{X}_1$ sufficiently close to the solution $\mathbf{X}^*$, provided that $[J_1]$ is nonsingular. It can be seen that Newton's method uses the second partial derivatives of the objective function (in the form of the matrix $[J_i]$) and hence is a second-order method.
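The update of Eq. (6.98) is straightforward to express in code. Below is a minimal Python sketch; the function name newton_method and the callables grad and hess are illustrative choices of ours, not notation from the text. Rather than forming $[J_i]^{-1}$ explicitly, it solves the linear system $[J_i]\mathbf{S} = -\nabla f_i$, which is numerically preferable and costs no more work.

```python
import numpy as np

def newton_method(grad, hess, x0, tol=1e-8, max_iter=50):
    """Iterate Eq. (6.98): X_{i+1} = X_i - [J_i]^{-1} (grad f)_i.

    grad : callable returning the gradient vector at a point
    hess : callable returning the Hessian matrix [J] at a point
    x0   : starting point X_1
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient ~ 0: stationary point reached
            break
        # Newton step: solve [J_i] s = -g instead of inverting [J_i]
        s = np.linalg.solve(hess(x), -g)
        x = x + s
    return x
```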
Example 6.11 Show that Newton's method finds the minimum of a quadratic function in one iteration.
SOLUTION Let the quadratic function be given by
$$f(\mathbf{X}) = \tfrac{1}{2}\mathbf{X}^{T}[A]\mathbf{X} + \mathbf{B}^{T}\mathbf{X} + C$$
The minimum of f(X) is given by
$$\nabla f = [A]\mathbf{X} + \mathbf{B} = \mathbf{0}$$
or

$$\mathbf{X}^* = -[A]^{-1}\mathbf{B}$$

The iterative step of Eq. (6.98) gives

$$\mathbf{X}_{i+1} = \mathbf{X}_i - [A]^{-1}([A]\mathbf{X}_i + \mathbf{B}) \qquad (E_1)$$

where $\mathbf{X}_i$ is the starting point for the $i$th iteration. Thus Eq. $(E_1)$ gives the exact solution

$$\mathbf{X}_{i+1} = \mathbf{X}^* = -[A]^{-1}\mathbf{B}$$
Figure 6.17 (minimization of a quadratic function in one step) illustrates this process.
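As a quick numerical illustration of Example 6.11 (the matrix [A], vector B, and starting point below are arbitrary choices of ours, not data from the text), a single Newton step from any starting point lands exactly on $\mathbf{X}^* = -[A]^{-1}\mathbf{B}$:

```python
import numpy as np

A = np.array([[4.0, 2.0],        # an arbitrary positive-definite [A]
              [2.0, 2.0]])
B = np.array([1.0, -1.0])        # an arbitrary vector B

x_star = np.linalg.solve(A, -B)  # X* = -[A]^{-1} B

x0 = np.array([10.0, -7.0])      # any starting point X_i
g = A @ x0 + B                   # gradient of (1/2) X^T [A] X + B^T X + C
x1 = x0 - np.linalg.solve(A, g)  # one step of Eq. (6.98)

print(np.allclose(x1, x_star))   # True: the minimum in one iteration
```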
Example 6.12 Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2$ by taking the starting point as $\mathbf{X}_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$.
SOLUTION To find $\mathbf{X}_2$ according to Eq. (6.98), we require $[J_1]^{-1}$, where

$$[J_1] = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\[6pt] \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} \end{bmatrix}_{\mathbf{X}_1} = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}$$
Therefore,
$$[J_1]^{-1} = \frac{1}{4}\begin{bmatrix} 2 & -2 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & 1 \end{bmatrix}$$

As

$$\nabla f_1 = \begin{Bmatrix} \partial f / \partial x_1 \\ \partial f / \partial x_2 \end{Bmatrix}_{\mathbf{X}_1} = \begin{Bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{Bmatrix}_{\mathbf{X}_1} = \begin{Bmatrix} 1 \\ -1 \end{Bmatrix}$$

Equation (6.98) gives

$$\mathbf{X}_2 = \mathbf{X}_1 - [J_1]^{-1}\nabla f_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix} - \begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & 1 \end{bmatrix}\begin{Bmatrix} 1 \\ -1 \end{Bmatrix} = \begin{Bmatrix} -1 \\ \tfrac{3}{2} \end{Bmatrix}$$
To see whether or not $\mathbf{X}_2$ is the optimum point, we evaluate

$$\nabla f_2 = \begin{Bmatrix} \partial f / \partial x_1 \\ \partial f / \partial x_2 \end{Bmatrix}_{\mathbf{X}_2} = \begin{Bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{Bmatrix}_{\mathbf{X}_2} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$

As $\nabla f_2 = \mathbf{0}$, $\mathbf{X}_2$ is the optimum point. Thus the method has converged in one iteration for this quadratic function.
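The hand computation of Example 6.12 can be checked in a few lines of Python (a sketch using the analytic gradient and the constant Hessian derived above; the names grad and J are our own):

```python
import numpy as np

def grad(x):
    # Gradient of f(x1, x2) = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2
    return np.array([1 + 4*x[0] + 2*x[1],
                     -1 + 2*x[0] + 2*x[1]])

J = np.array([[4.0, 2.0],        # Hessian [J]; constant because f is quadratic
              [2.0, 2.0]])

x1 = np.array([0.0, 0.0])                  # starting point X_1
x2 = x1 - np.linalg.solve(J, grad(x1))     # one step of Eq. (6.98)
print(x2, grad(x2))                        # [-1.   1.5] with zero gradient
```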
If f(X) is a nonquadratic function, Newton's method may sometimes diverge, and it may converge to saddle points and relative maxima. This problem can be avoided by modifying Eq. (6.98) as

$$\mathbf{X}_{i+1} = \mathbf{X}_i + \lambda_i^* \mathbf{S}_i = \mathbf{X}_i - \lambda_i^* [J_i]^{-1}\nabla f_i \qquad (6.99)$$

where $\lambda_i^*$ is the minimizing step length in the direction $\mathbf{S}_i = -[J_i]^{-1}\nabla f_i$. The modification indicated by Eq. (6.99) has a number of advantages. First, it will find the minimum in fewer steps than the original method. Second, it finds the minimum point in all cases, whereas the original method may not converge in some cases. Third, it usually avoids convergence to a saddle point or a maximum. With all these advantages, this method appears to be the most powerful minimization method. Despite these advantages, the method is not very useful in practice, due to the following features of the method (a sketch of the modified iteration appears after the concluding remarks below):

1. It requires the storing of the $n \times n$ matrix $[J_i]$.
2. It becomes very difficult, and sometimes impossible, to compute the elements of the matrix $[J_i]$.
3. It requires the inversion of the matrix $[J_i]$ at each step.
4. It requires the evaluation of the quantity $[J_i]^{-1}\nabla f_i$ at each step.
These features make the method impractical for problems involving a complicated objective function with a large number of variables.
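For completeness, here is a minimal sketch of the modified method of Eq. (6.99). The one-dimensional minimization for $\lambda_i^*$ is delegated to scipy.optimize.minimize_scalar; that choice of line-search routine, like the function names, is an assumption of ours and not prescribed by the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def modified_newton(f, grad, hess, x0, tol=1e-8, max_iter=100):
    """Eq. (6.99): X_{i+1} = X_i + lambda_i* S_i, with S_i = -[J_i]^{-1} grad f_i."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(hess(x), -g)           # Newton direction S_i
        # lambda_i*: minimize f along the line X_i + lambda * S_i
        lam = minimize_scalar(lambda t: f(x + t * s)).x
        x = x + lam * s
    return x
```

Note that even with the line search, items 1 to 4 above still apply: every iteration requires the full Hessian and a linear solve, which is what limits the method when n is large.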