Introduction
Non-linear programming techniques are used routinely, and are particularly efficient, in solving optimal control problems. In the case of a discrete control problem, i.e., when the controls are exerted at discrete points, the problem can be stated directly as a non-linear programming problem. In a continuous control problem, i.e., when the controls are functions exerted over a prescribed planning horizon, an approximate solution can be found by solving a non-linear programming problem. The general structure of an optimization problem is shown in Figure 2.1.
$$\min_{x \in X} f(x)$$
subject to
$$g(x) \le 0,$$
$$h(x) = 0,$$
where $X$ is a subset of $\mathbb{R}^{n_x}$, $x$ is a vector of $n_x$ components $x_1, x_2, \ldots, x_{n_x}$, and $f : X \to \mathbb{R}$, $g : X \to \mathbb{R}^{n_g}$ and $h : X \to \mathbb{R}^{n_h}$ are defined on $X$.
[Figure 2.1: General structure of an optimization problem — from the customer problem, through mathematical optimization (linear discrete optimization; linear or nonlinear objective function; feasible region; convex hull), to the benefits: a feasible solution, an optimal solution, a fast solution, and insight and understanding.]
For a set $S$ of real numbers, the infimum of $S$, denoted $\inf S$, provided it exists, is the greatest lower bound for $S$, i.e., a number $\underline{z}$ satisfying:
(i) $z \ge \underline{z}$ for every $z \in S$,
(ii) for any $\epsilon > 0$ there exists $z \in S$ such that $z < \underline{z} + \epsilon$.
Similarly, the supremum of $S$, denoted $\sup S$, provided it exists, is the least upper bound for $S$, i.e., a number $\bar{z}$ satisfying:
(i) $z \le \bar{z}$ for every $z \in S$,
(ii) for any $\epsilon > 0$ there exists $z \in S$ such that $z > \bar{z} - \epsilon$.
2.2.2
Consider minimizing a function $f$ over a feasible region $D$,
$$\min_{x \in D} f(x),$$
where $D \subseteq \mathbb{R}^n$. Over such a region, $f$ may possess several local minima and several local maxima; the global minimum is the least of the local minima, as illustrated in Figure 2.2.

[Figure 2.2: A function with several local maxima and several local minima on the interval $[a, b]$; the global minimum is marked.]
2.2.3
A single-variable function which, when plotted, results in a curve that always curves downwards or does not curve at all is called a concave function. Figure 2.3 shows a single-variable concave function. The shape of this function is such that, for any two points on the curve in the feasible region $S$, the line joining the points always lies below the function. There is always a unique global maximum of such a function. Formally, a function $f(x)$ is a concave function over a region $S$ if, for any two points $a$ and $b$ in $S$,
$$f[\lambda a + (1 - \lambda) b] \ge \lambda f(a) + (1 - \lambda) f(b),$$
where $0 \le \lambda \le 1$.
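The defining inequality can be checked numerically. The function $f(x) = -x^2$, the interval and the sample values of $\lambda$ below are invented purely for illustration:

```python
# Check f[lam*a + (1-lam)*b] >= lam*f(a) + (1-lam)*f(b) for a concave f.
f = lambda x: -x * x          # an invented concave function (curves downwards)
a, b = -3.0, 5.0              # two arbitrary points in the region
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    lhs = f(lam * a + (1 - lam) * b)      # function value on the curve
    rhs = lam * f(a) + (1 - lam) * f(b)   # value on the chord joining a and b
    assert lhs >= rhs                     # the chord lies below the function
```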
[Figure 2.3: A single-variable concave function $f(x)$.]
2.3 Convex Program
2.3.1
Dynamic programming
The formulation and solution of a dynamic programming problem involves the following steps:
- Identify the variables that constitute the state at each stage and the decision required at each stage.
- Identify the relationship by which the state at one stage can be expressed as a function of the state and decisions at the next stage.
- Develop a recursive relationship for the optimal return function which permits computation of the optimal policy at any stage.
- Decide whether a forward or a backward recursive approach is to be applied to solve the problem.
- Determine the optimal decision at each stage and then the overall optimal policy.
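The steps above can be sketched on a small example. The three-stage network and its arc costs below are invented for illustration: the state is the current node, the decision is which outgoing arc to take, and the optimal return function $f(n)$, computed by backward recursion, is the cheapest cost from node $n$ to the end.

```python
# A minimal sketch of backward recursion on a hypothetical three-stage
# shortest-path problem (the network and arc costs are invented).
costs = {
    "A":  {"B1": 2, "B2": 4},   # stage 1 -> stage 2
    "B1": {"C1": 7, "C2": 3},   # stage 2 -> stage 3
    "B2": {"C1": 1, "C2": 6},
    "C1": {"End": 5},           # stage 3 -> end
    "C2": {"End": 4},
}

def optimal_return(node, memo=None):
    """Backward recursion: f(n) = min over arcs (n, m) of [cost(n, m) + f(m)]."""
    if memo is None:
        memo = {}
    if node == "End":
        return 0, []
    if node not in memo:
        memo[node] = min(
            (cost + optimal_return(nxt, memo)[0],
             [nxt] + optimal_return(nxt, memo)[1])
            for nxt, cost in costs[node].items()
        )
    return memo[node]

cost, path = optimal_return("A")
print(cost, ["A"] + path)   # -> 9 ['A', 'B1', 'C2', 'End']
```

Solving stage 3 first and working backwards yields the optimal policy at every state, so the overall optimal path is read off in a single forward pass.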
2.3.2
Consider the problem: optimize
$$f(X), \quad \text{for } X = (x_1, x_2, \ldots, x_n) \qquad (2.1)$$
subject to
$$g_i(X) = b_i, \quad \text{for } i = 1, 2, \ldots, m \qquad (2.2)$$
$$X \ge 0. \qquad (2.3)$$
The constraints are written as
$$h_i(X) = g_i(X) - b_i = 0, \quad \text{for } i = 1, 2, \ldots, m \qquad (2.4)$$
and the Lagrangian function is constructed as
$$L(X, \lambda) = f(X) - \sum_{i=1}^{m} \lambda_i h_i(X). \qquad (2.5)$$
Assuming that all the functions $L$, $f$ and $h_i$ are partially differentiable with respect to $x_1, x_2, \ldots, x_n$ and $\lambda_1, \lambda_2, \ldots, \lambda_m$, the necessary conditions for the objective function to be a maximum or a minimum are
$$\frac{\partial L}{\partial x_j} = \frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} \lambda_i \frac{\partial h_i}{\partial x_j} = 0, \quad \text{for } j = 1, 2, \ldots, n \qquad (2.6)$$
and
$$\frac{\partial L}{\partial \lambda_i} = 0 \;\Rightarrow\; h_i = 0, \quad \text{for } i = 1, 2, \ldots, m. \qquad (2.7)$$
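As an illustration of conditions (2.6) and (2.7) — the problem instance is invented, not from the text — consider maximizing $f(x_1, x_2) = x_1 x_2$ subject to $x_1 + x_2 = 10$. The stationarity and constraint equations happen to be linear here, so the Lagrangian system can be solved directly with a small Gaussian-elimination helper:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Invented example: L = x1*x2 - lam*(x1 + x2 - 10).  Conditions (2.6)-(2.7):
#   dL/dx1 = x2 - lam = 0
#   dL/dx2 = x1 - lam = 0
#   h      = x1 + x2 - 10 = 0
# Unknowns ordered (x1, x2, lam):
x1, x2, lam = solve3(
    [[0.0, 1.0, -1.0],
     [1.0, 0.0, -1.0],
     [1.0, 1.0,  0.0]],
    [0.0, 0.0, 10.0],
)
print(x1, x2, lam)  # stationary point x1 = x2 = 5 with multiplier lam = 5
```

In general the stationarity conditions are non-linear and must be solved iteratively, which is precisely what the methods later in this chapter address.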
The method extends to problems with inequality constraints: optimize
$$f(X), \quad \text{for } X = (x_1, x_2, \ldots, x_n) \qquad (2.8)$$
subject to
$$g_i(X) = 0, \quad \text{for } i = 1, 2, \ldots, m \qquad (2.9)$$
$$h_i(X) \ge 0, \quad \text{for } i = 1, 2, \ldots, r. \qquad (2.10)$$
Introducing slack variables $S_i$, the inequality constraints are converted to equalities: optimize $f(X)$ subject to
$$g_i(X) = 0, \quad \text{for } i = 1, 2, \ldots, m$$
$$h_i(X) - S_i = 0, \quad \text{for } i = 1, 2, \ldots, r \qquad (2.11)$$
$$S_i \ge 0, \quad \text{for } i = 1, 2, \ldots, r. \qquad (2.12)$$
The bounds $S_i \ge 0$ are handled by a logarithmic barrier term, giving the Lagrangian function
$$L(X, \lambda, \nu, S) = f(X) - \mu \sum_{i=1}^{r} \log S_i - \sum_{i=1}^{m} \lambda_i g_i(X) - \sum_{i=1}^{r} \nu_i \left( h_i(X) - S_i \right), \qquad (2.13)$$
where the barrier parameter $\mu > 0$, and $\lambda_i$ and $\nu_i$ are the multipliers of the equality and the converted inequality constraints, respectively. The solutions of the Lagrangian function are defined by the Karush-Kuhn-Tucker first-order necessary conditions.
$$\frac{\partial L}{\partial x_i} = 0, \quad \text{for } i = 1, 2, \ldots, n \qquad (2.14)$$
$$\frac{\partial L}{\partial \lambda_i} = 0, \quad \text{for } i = 1, 2, \ldots, m \qquad (2.15)$$
$$\frac{\partial L}{\partial \nu_i} = 0, \quad \text{for } i = 1, 2, \ldots, r \qquad (2.16)$$
$$\frac{\partial L}{\partial S_i} = 0, \quad \text{for } i = 1, 2, \ldots, r \qquad (2.17)$$
The algorithm consists in applying Newton's method to the non-linear system of Karush-Kuhn-Tucker conditions. Expanding the first-order necessary conditions by Taylor series, the resulting equations can be solved to obtain the incremental values $\Delta x_i$, $\Delta \lambda_i$, $\Delta \nu_i$ and $\Delta S_i$, and the variables are updated with a step factor $\sigma$, i.e.,
$$x_i = x_i + \sigma \Delta x_i, \quad \text{for } i = 1, 2, \ldots, n$$
$$\lambda_i = \lambda_i + \sigma \Delta \lambda_i, \quad \text{for } i = 1, 2, \ldots, m$$
$$\nu_i = \nu_i + \sigma \Delta \nu_i, \quad \text{for } i = 1, 2, \ldots, r$$
$$S_i = S_i + \sigma \Delta S_i, \quad \text{for } i = 1, 2, \ldots, r. \qquad (2.18)$$
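The full primal-dual update (2.18) is beyond a short example, but the effect of the barrier term in (2.13) can be sketched in one dimension. The problem below (minimize $x^2$ subject to $x - 1 \ge 0$) and all names are invented for illustration: eliminating the slack variable, each barrier subproblem $\min_x \, x^2 - \mu \log(x - 1)$ is solved by Newton's method, and the minimizers approach the constrained optimum $x^* = 1$ as $\mu$ decreases.

```python
def barrier_minimize(mu, x=2.0, eps=1e-12, n_max=100):
    """Newton's method on phi'(x) = 2x - mu/(x - 1) = 0, for x > 1."""
    for _ in range(n_max):
        grad = 2.0 * x - mu / (x - 1.0)          # phi'(x)
        hess = 2.0 + mu / (x - 1.0) ** 2         # phi''(x) > 0
        step = grad / hess
        # Damp the step so the iterate stays strictly inside the barrier
        # domain x > 1 (the log term is undefined otherwise).
        while x - step <= 1.0:
            step *= 0.5
        x -= step
        if abs(step) < eps:
            break
    return x

for mu in (1.0, 0.1, 0.01, 0.001):
    print(mu, barrier_minimize(mu))   # minimizers approach x = 1 as mu -> 0
```

Each subproblem has the closed-form minimizer $x(\mu) = \tfrac{1}{2}\left(1 + \sqrt{1 + 2\mu}\right)$, which the iteration reproduces; driving $\mu \to 0$ recovers the constrained solution, mirroring the role of $\mu$ in (2.13).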
2.4
The advent of the digital computer has given considerable impetus to the study of numerical methods for determining the maximum or minimum of a given function, and many algorithms have been proposed for solving the problem. This section is intended as an introduction to the field of non-linear optimization and to the ideas on which the algorithms are based. A very desirable property of an optimization algorithm is global convergence.
An algorithm is said to be globally convergent if, for any initial point $x_0$, it generates a sequence of points that converges to a point $x^*$ in the solution set. It is said to be locally convergent if there exists a $\delta > 0$ such that, for any initial point $x_0$ with $\lVert x^* - x_0 \rVert < \delta$, it generates a sequence of points converging to $x^*$ in the solution set.
It should also be noted that, for most algorithms, initial values must be set for certain parameters, such as the starting point and the initial step size, as well as for the parameters used to terminate the algorithm.
2.4.1
Newton's Method
Consider a system of two non-linear equations in two unknowns,
$$f(x_1, x_2) = 0, \qquad g(x_1, x_2) = 0. \qquad (2.19)$$
Take $x_1^0$ and $x_2^0$ as an initial approximate solution of the above equations (Veerarajan and Ramachandran, 2010). The actual solution is given by $(x_1^0 + h)$ and $(x_2^0 + k)$, where $h$ and $k$ are the incremental values of $x_1$ and $x_2$. Therefore
$$f(x_1^0 + h, x_2^0 + k) = 0 \qquad (2.20)$$
$$g(x_1^0 + h, x_2^0 + k) = 0. \qquad (2.21)$$
Expanding by Taylor series about $(x_1^0, x_2^0)$ and retaining only the first-order terms gives
$$f(x_1^0, x_2^0) + h \frac{\partial}{\partial x_1}\left[ f(x_1^0, x_2^0) \right] + k \frac{\partial}{\partial x_2}\left[ f(x_1^0, x_2^0) \right] = 0 \qquad (2.22)$$
$$g(x_1^0, x_2^0) + h \frac{\partial}{\partial x_1}\left[ g(x_1^0, x_2^0) \right] + k \frac{\partial}{\partial x_2}\left[ g(x_1^0, x_2^0) \right] = 0, \qquad (2.23)$$
or, in the compact notation $f_0 = f(x_1^0, x_2^0)$, $(f_{x_1})_0 = \partial f / \partial x_1$ evaluated at $(x_1^0, x_2^0)$, and so on,
$$f_0 + h (f_{x_1})_0 + k (f_{x_2})_0 = 0 \qquad (2.24)$$
$$g_0 + h (g_{x_1})_0 + k (g_{x_2})_0 = 0. \qquad (2.25)$$
Solving equations (2.24) and (2.25) for $h$ and $k$ using determinants, i.e.,
$$h = \frac{D_{x_1}}{D} \quad \text{and} \quad k = \frac{D_{x_2}}{D},$$
where
$$D = \begin{vmatrix} (f_{x_1})_0 & (f_{x_2})_0 \\ (g_{x_1})_0 & (g_{x_2})_0 \end{vmatrix}, \qquad
D_{x_1} = \begin{vmatrix} -f_0 & (f_{x_2})_0 \\ -g_0 & (g_{x_2})_0 \end{vmatrix}, \qquad
D_{x_2} = \begin{vmatrix} (f_{x_1})_0 & -f_0 \\ (g_{x_1})_0 & -g_0 \end{vmatrix},$$
the minus signs arising because $f_0$ and $g_0$ move to the right-hand side of equations (2.24) and (2.25).
Using the incremental values, the new values of $x_1$ and $x_2$ are obtained, and then new incremental values of $h$ and $k$ are calculated. The iteration is continued until the required accuracy is obtained.
Algorithm
The algorithm of Newton's method for solving non-linear equations is given in the following Steps:
Step 1: Input: function, precision $\epsilon$ and initial approximate solution $x_1^0$, $x_2^0$.
Step 2: Calculate the determinant values $D$, $D_{x_1}$ and $D_{x_2}$.
Step 3: Find the incremental values $h$ and $k$ using the determinants.
Step 4: Determine the better approximation to the solution as $x_1 = x_1^0 + h$, $x_2 = x_2^0 + k$.
Step 5: Repeat Steps 2-4 with the new values as the approximate solution until $|h|$ and $|k|$ are smaller than the required precision.
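The steps above can be sketched in Python. The test system (a circle and a line) is an invented example, not from the text:

```python
import math

def newton_system(f, g, fx1, fx2, gx1, gx2, x1, x2, eps=1e-12, n_max=50):
    """Newton's method for f(x1, x2) = 0, g(x1, x2) = 0 via determinants."""
    for _ in range(n_max):
        # Cramer's rule for the linearized system (2.24)-(2.25); the minus
        # signs come from moving f0, g0 to the right-hand side.
        D = fx1(x1, x2) * gx2(x1, x2) - fx2(x1, x2) * gx1(x1, x2)
        Dx1 = -f(x1, x2) * gx2(x1, x2) + fx2(x1, x2) * g(x1, x2)
        Dx2 = -fx1(x1, x2) * g(x1, x2) + f(x1, x2) * gx1(x1, x2)
        h, k = Dx1 / D, Dx2 / D
        x1, x2 = x1 + h, x2 + k
        if max(abs(h), abs(k)) < eps:
            break
    return x1, x2

# Invented test system: x1^2 + x2^2 - 4 = 0 (circle) and x1 - x2 = 0 (line),
# whose positive solution is x1 = x2 = sqrt(2).
r1, r2 = newton_system(
    lambda a, b: a * a + b * b - 4.0, lambda a, b: a - b,
    lambda a, b: 2.0 * a, lambda a, b: 2.0 * b,
    lambda a, b: 1.0, lambda a, b: -1.0,
    1.0, 1.0,
)
print(r1, r2)   # both approach sqrt(2) = 1.41421356...
```

The partial derivatives are supplied analytically here; in practice they may also be approximated by finite differences at the cost of some accuracy.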
The midpoint of the bracketing interval is taken as $x_0 = \frac{x_1 + x_2}{2}$. Therefore the three points are $(x_1, f(x_1))$, $(x_0, f(x_0))$ and $(x_2, f(x_2))$. For the first iteration, a second-degree polynomial is passed through these three points. The interpolating polynomial can be constructed as
$$P(x) = A(x - x_0)^2 + B(x - x_0) + C. \qquad (2.26)$$
Requiring $P(x)$ to pass through the three points gives
$$f(x_1) = A(x_1 - x_0)^2 + B(x_1 - x_0) + C \qquad (2.27)$$
$$f(x_2) = A(x_2 - x_0)^2 + B(x_2 - x_0) + C \qquad (2.28)$$
$$f(x_0) = C. \qquad (2.29)$$
The values of $A$ and $B$ can be obtained by solving equations (2.27), (2.28) and (2.29). Subtracting equation (2.29) from equation (2.27) gives
$$\frac{f(x_1) - f(x_0)}{x_1 - x_0} = A(x_1 - x_0) + B. \qquad (2.30)$$
Similarly, subtracting equation (2.29) from equation (2.28) gives
$$\frac{f(x_2) - f(x_0)}{x_2 - x_0} = A(x_2 - x_0) + B. \qquad (2.31)$$
Subtracting equation (2.31) from equation (2.30),
$$\frac{f(x_1) - f(x_0)}{x_1 - x_0} - \frac{f(x_2) - f(x_0)}{x_2 - x_0} = A(x_1 - x_0 - x_2 + x_0) \qquad (2.32)$$
therefore,
$$A = \frac{f(x_1) - f(x_0)}{(x_1 - x_0)(x_1 - x_2)} + \frac{f(x_0) - f(x_2)}{(x_2 - x_0)(x_1 - x_2)}. \qquad (2.33)$$
Substituting $A$ into equation (2.30),
$$B = \frac{f(x_1) - f(x_0)}{x_1 - x_0} - A(x_1 - x_0) \qquad (2.34)$$
therefore,
$$B = \frac{[f(x_0) - f(x_1)](x_2 - x_0)}{(x_1 - x_0)(x_1 - x_2)} - \frac{[f(x_0) - f(x_2)](x_1 - x_0)}{(x_2 - x_0)(x_1 - x_2)}. \qquad (2.35)$$
Setting $P(x) = 0$ and solving the quadratic gives the roots
$$x - x_0 = \frac{-B \pm \sqrt{B^2 - 4AC}}{2A}. \qquad (2.36)$$
Multiplying the numerator and the denominator by $\left( -B \mp \sqrt{B^2 - 4AC} \right)$, we get
$$x_P - x_0 = \frac{-2C}{B \pm \sqrt{B^2 - 4AC}},$$
therefore,
$$x_P = x_0 - \frac{2C}{B \pm \sqrt{B^2 - 4AC}}. \qquad (2.37)$$
In equation (2.37) both roots are calculated, and the root which is closer to $x_0$ is selected.
If $f(x_1) f(x_0) < 0$, then the root lies between $x_1$ and $x_0$.
In subsequent iterations the new approximation is
$$x_{i+1} = x_i - \frac{2 f(x_i)}{B_i + \operatorname{sign}(B_i) \sqrt{B_i^2 - 4 A_i f(x_i)}}, \qquad (2.38)$$
where $A$, $B$ and $C$ from equations (2.33), (2.34) and (2.29) are indexed by $i$ in each iteration on the interval $[x_a, x_b]$ as follows:
$$A_i = \frac{f(x_1) - f(x_i)}{(x_1 - x_i)(x_1 - x_2)} + \frac{f(x_i) - f(x_2)}{(x_2 - x_i)(x_1 - x_2)}$$
$$B_i = \frac{[f(x_i) - f(x_1)](x_2 - x_i)}{(x_1 - x_i)(x_1 - x_2)} - \frac{[f(x_i) - f(x_2)](x_1 - x_i)}{(x_2 - x_i)(x_1 - x_2)} \qquad (2.39)$$
$$C_i = f(x_i).$$
In each iteration the interval points $x_1$, $x_2$ and $x_0$ are selected according to the following procedure:
(i) if $f(x_1) f(x_i) < 0$, then $x_2 = x_i$;
(ii) if $f(x_1) f(x_i) \ge 0$, then $x_1 = x_i$;
and finally substitute $x_0 = x_{i+1}$.
If the root corresponding to the minimal value of $|x_{i+1} - x_i|$ lies outside of the interval $[x_1, x_2]$, then equation (2.36) can be written in iterative form:
$$x_{i+1} = x_i - \frac{B_i + \operatorname{sgn}(B_i) \sqrt{B_i^2 - 4 A_i C_i}}{2 A_i}. \qquad (2.40)$$
Algorithm
The algorithm of the superlinear bracketing method for solving non-linear equations (Suhadolnik, 2013) is given in the following Steps:
Step 1: Input: the left side $x_a$ and right side $x_b$ of the interval, the function $f(x)$, the precision $\epsilon$ and the maximum number of iterations $N_{max}$.
Step 2: Take the midpoint $x_0 = (x_a + x_b)/2$ and compute the coefficients $A$, $B$ and $C$.
Step 3: Determine the new approximation as $x_{i+1} = x_i - \frac{2C}{B + \operatorname{sgn}(B) \sqrt{B^2 - 4AC}}$; if this point lies outside the bracketing interval, use $x_{i+1} = x_i - \frac{B + \operatorname{sgn}(B) \sqrt{B^2 - 4AC}}{2A}$ instead.
Step 4: Update the interval points as described above and repeat until the required precision $\epsilon$ is reached or $N_{max}$ iterations are performed.
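Equations (2.38)-(2.40), together with the interval-update rule, can be sketched as follows. The test functions are invented examples, and the stopping rules are simplified relative to (Suhadolnik, 2013):

```python
import math

def superlinear_bracketing(f, xa, xb, eps=1e-12, n_max=100):
    """Quadratic-interpolation bracketing sketch; requires f(xa) * f(xb) < 0."""
    x1, x2 = xa, xb
    xi = 0.5 * (x1 + x2)                     # interior point x0
    for _ in range(n_max):
        fx1, fx2, fxi = f(x1), f(x2), f(xi)
        # Coefficients of the interpolating parabola, eq. (2.39)
        A = (fx1 - fxi) / ((x1 - xi) * (x1 - x2)) \
            + (fxi - fx2) / ((x2 - xi) * (x1 - x2))
        B = (fxi - fx1) * (x2 - xi) / ((x1 - xi) * (x1 - x2)) \
            - (fxi - fx2) * (x1 - xi) / ((x2 - xi) * (x1 - x2))
        C = fxi
        denom = B + math.copysign(math.sqrt(B * B - 4.0 * A * C), B)
        x_new = xi - 2.0 * C / denom         # eq. (2.38)
        if not (min(x1, x2) < x_new < max(x1, x2)):
            x_new = xi - denom / (2.0 * A)   # alternative root, eq. (2.40)
        # Keep the sign change bracketed, as in steps (i)-(ii)
        if fx1 * fxi < 0:
            x2 = xi
        else:
            x1 = xi
        if abs(x_new - xi) < eps:
            return x_new
        xi = x_new
    return xi

root = superlinear_bracketing(lambda x: x * x - 2.0, 1.0, 2.0)
root2 = superlinear_bracketing(math.cos, 1.0, 2.0)
print(root, root2)   # approximately sqrt(2) and pi/2
```

Because the sign change is preserved at every step, the discriminant in (2.38) stays non-negative, and the interpolated root converges faster than plain bisection on the same bracket.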