Outline
Introduction to Nonlinear Programming
Elimination Method
Interpolation Method
Direct Search Methods
Indirect Search (Descent) Methods
Nonlinear Programming Techniques
If any of the objective or constraint functions is nonlinear, the problem is called a nonlinear programming (NLP) problem.
This is the most general programming problem and all
other problems can be considered as special cases of the
NLP problem.
UNIMODAL FUNCTION
A unimodal function is one that has only one peak (maximum)
or valley (minimum) in a given interval.
Thus a function of one variable is said to be unimodal if,
given that two values of the variable are on the same side
of the optimum, the one nearer the optimum gives the
better functional value (i.e., the smaller value in the case of
a minimization problem).
For minimization, with two trials x1 < x2 in the interval [0, 1]:
a) if f(x1) < f(x2), the interval [x2, 1] is discarded;
b) if f(x1) > f(x2), the interval [0, x1] is discarded;
c) if f(x1) = f(x2), both [0, x1] and [x2, 1] are discarded.
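These three cases can be expressed as a small helper (a sketch; the function and names are illustrative, assuming minimization of a unimodal function):

```python
def eliminate(f, x1, x2, a=0.0, b=1.0):
    # Given two trial points a < x1 < x2 < b of a unimodal f,
    # return the subinterval that must still contain the minimum.
    f1, f2 = f(x1), f(x2)
    if f1 < f2:
        return a, x2      # right piece (x2, b] is discarded
    if f1 > f2:
        return x1, b      # left piece [a, x1) is discarded
    return x1, x2         # both end pieces are discarded

lo, hi = eliminate(lambda x: x * (x - 1.5), 0.25, 0.5)
# f(0.25) = -0.3125 > f(0.5) = -0.5, so the minimum lies in (0.25, 1.0)
```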
One-dimensional Minimization Methods
The elimination methods can be used for the minimization
of even discontinuous functions.
The quadratic and cubic interpolation methods involve
polynomial approximations to the given function.
The direct root methods are root finding methods that can
be considered to be equivalent to quadratic interpolation.
The assumption of unimodality is made in all the elimination
techniques.
If a function is known to be multimodal (i.e., having several
valleys or peaks), the range of the function can be
subdivided into several parts and the function treated as a
unimodal function in each part.
Elimination Methods
UNRESTRICTED SEARCH (Search with Fixed Step Size)
In most practical problems, the optimum solution is known to lie within restricted ranges of the design variables. In some cases this range is not known, and hence the search has to be made with no restrictions on the values of the variables.
This method is described in the following steps:
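The step list itself did not survive extraction; as a sketch (the function name is illustrative, and probing both directions to pick the descent direction is an assumption), a fixed-step search can be written as:

```python
def fixed_step_search(f, x1, s, max_iter=10_000_000):
    # Decide the search direction by probing one step on each side of x1.
    if f(x1 + s) > f(x1) and f(x1 - s) > f(x1):
        return x1 - s, x1 + s            # x1 already brackets the minimum
    step = s if f(x1 + s) < f(x1) else -s
    x_prev, x = x1, x1 + step
    for _ in range(max_iter):
        x_next = x + step
        if f(x_next) > f(x):             # f started increasing: bracketed
            return min(x_prev, x_next), max(x_prev, x_next)
        x_prev, x = x, x_next
    raise RuntimeError("minimum not bracketed within max_iter steps")

lo, hi = fixed_step_search(lambda x: x * (x - 1.5), 0.0, 0.05)
# brackets the minimum x* = 0.75 in roughly (0.7, 0.8)
```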
Search with Accelerated Step Size
Although the search with a fixed step size appears very simple, its major limitation stems from the unrestricted nature of the region in which the minimum can lie.
If, for example, the minimum point for a particular function happens to be xopt = 50,000 and, in the absence of knowledge about the location of the minimum, x1 and s are chosen as 0.0 and 0.1, respectively, we have to evaluate the function 500,001 times to find the minimum point.
This involves a large amount of computational work.
An obvious improvement can be achieved by increasing the
step size gradually until the minimum point is bracketed.
A simple method consists of doubling the step size as long
as the move results in an improvement of the objective
function.
After the optimum has been bracketed in (xi−1, xi), the step length is reduced, and by starting either from xi−1 or xi the basic procedure can be applied with a reduced step size.
This procedure can be repeated until the bracketed interval
becomes sufficiently small.
Example
Find the minimum of f = x(x−1.5) by starting from 0.0 with
an initial step size of 0.05.
The function value at x1 is f1 = 0.0.
If we try to start moving in the negative x direction, we find that x2 = −0.05 and f2 = 0.0775.
Since f2 > f1, the assumption of unimodality indicates that the minimum cannot lie to the left of x2.
Thus we start moving in the positive x direction and obtain
the following results:
Results of the search (trial points xi and function values fi):
From these results, the optimum point can be seen to be
xopt ≈ x6 = 0.8.
In this case, the points x6 and x7 do not really bracket
the minimum point but provide information about it.
If a better approximation to the minimum is desired, the
procedure can be restarted from x5 with a smaller step
size.
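The run above can be reproduced numerically. A minimal sketch, assuming the i-th trial is placed at x1 + 2^(i−2)·s (an assumed convention, but the one consistent with x6 = 0.8 reported above):

```python
def accelerated_search(f, x1, s):
    # Trials at x1, x1 + s, x1 + 2s, x1 + 4s, ... until f increases,
    # which brackets the minimum between the last three trials.
    xs = [x1]
    fs = [f(x1)]
    step = s
    while True:
        xs.append(x1 + step)
        fs.append(f(xs[-1]))
        if fs[-1] > fs[-2]:          # function increased: stop
            return xs, fs
        step *= 2.0

xs, fs = accelerated_search(lambda x: x * (x - 1.5), 0.0, 0.05)
# trial points: 0.0, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6
```

The minimum is then bracketed in (xs[-3], xs[-1]) = (0.4, 1.6), and the procedure can be restarted from x5 = 0.4 with a smaller step.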
EXHAUSTIVE SEARCH
The exhaustive search method can be used to solve
problems where the interval in which the optimum is known
to lie is finite.
Let xs and xf denote, respectively, the starting and final
points of the interval of uncertainty.
The exhaustive search method consists of evaluating the
objective function at a predetermined number of equally
spaced points in the interval (xs, xf ), and reducing the
interval of uncertainty using the assumption of unimodality.
In general, if the function is evaluated at n equally spaced points in the original interval of uncertainty of length L0 = xf − xs, and if the optimum value of the function (among the n function values) turns out to be at point xj, the final interval of uncertainty is given by

Ln = xj+1 − xj−1 = (2/(n + 1)) L0
Example
Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.00) to within 10% of the exact value.
If the middle point of the final interval of uncertainty is taken as the
approximate optimum point, the maximum deviation could be 1/(n + 1)
times the initial interval of uncertainty. Thus, to find the optimum within 10% of the exact value, we should have 1/(n + 1) ≤ 1/10, that is, n ≥ 9.
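With n = 9 interior points (so that the maximum deviation 1/(n + 1) is at most 10%), the example can be checked numerically. A minimal sketch, with illustrative names:

```python
def exhaustive_search(f, a, b, n):
    # Evaluate f at n equally spaced interior points of (a, b) and keep
    # the neighbourhood of the best one (unimodality assumption).
    h = (b - a) / (n + 1)
    pts = [a + i * h for i in range(1, n + 1)]
    vals = [f(x) for x in pts]
    j = min(range(n), key=vals.__getitem__)   # index of smallest value
    lo = pts[j - 1] if j > 0 else a
    hi = pts[j + 1] if j < n - 1 else b
    return lo, hi

lo, hi = exhaustive_search(lambda x: x * (x - 1.5), 0.0, 1.0, 9)
# final interval of length 2/(n + 1) = 0.2 containing x* = 0.75
```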
DICHOTOMOUS SEARCH
The exhaustive search method is a simultaneous search
method in which all the experiments are conducted before
any judgment is made regarding the location of the optimum
point.
The dichotomous search method, as well as the Fibonacci
and the golden section methods discussed in subsequent
sections, are sequential search methods in which the result
of any experiment influences the location of the subsequent
experiment.
In the dichotomous search, two experiments are placed as
close as possible at the center of the interval of
uncertainty.
Based on the relative values of the objective function at the two points, almost half of the interval of uncertainty is eliminated. Let the positions of the two experiments be given by

x1 = L0/2 − δ/2
x2 = L0/2 + δ/2

where δ is a small positive number chosen so that the two experiments give significantly different function values.
The next pair of experiments is, therefore, conducted at
the center of the remaining interval of uncertainty.
This results in the reduction of the interval of uncertainty
by nearly a factor of 2.
The interval of uncertainty remaining after n experiments (n even, i.e., n/2 pairs) can be shown to be

Ln = L0/2^(n/2) + δ(1 − 1/2^(n/2))
Example
Find the minimum of f = x(x − 1.5) in the interval (0.0, 1.00)
to within 10% of the exact value.
The ratio of final to initial intervals of uncertainty is given by

Ln/L0 = 1/2^(n/2) + (δ/L0)(1 − 1/2^(n/2))
Since δ = 0.001 and L0 = 1.0, this requires 0.999/2^(n/2) + 0.001 ≤ 0.1, i.e., n ≥ 6.67; since n must be even, eight experiments are needed. The first two are conducted at x1 = 0.4995 and x2 = 0.5005, with function values f1 = f(0.4995) = −0.49974975 and f2 = f(0.5005) = −0.50024975.
Since f2 < f1, the new interval of uncertainty will be (0.4995, 1.0). The second pair of experiments is conducted about its center 0.74975, at x3 = 0.74975 − 0.0005 = 0.74925 and x4 = 0.74975 + 0.0005 = 0.75025.
Since f3 > f4, we delete (0.4995, x3) and obtain the new interval of uncertainty as (x3, 1.0) = (0.74925, 1.0).
The final set of experiments will be conducted at the center of the then-remaining interval of uncertainty, the two points again being placed δ apart.
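The whole dichotomous procedure can be summarized in a short loop (a sketch; δ = 0.001 and the 10% target follow the example above, and the function name is illustrative):

```python
def dichotomous_search(f, a, b, delta=0.001, tol=0.1):
    # Repeatedly place two trials delta apart about the interval centre
    # and discard the half that cannot contain the minimum (unimodality).
    while (b - a) > tol:
        mid = 0.5 * (a + b)
        x1, x2 = mid - delta / 2, mid + delta / 2
        if f(x1) < f(x2):
            b = x2          # minimum lies in (a, x2)
        else:
            a = x1          # minimum lies in (x1, b)
    return a, b

a, b = dichotomous_search(lambda x: x * (x - 1.5), 0.0, 1.0)
# final interval of length <= 0.1 containing x* = 0.75
```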
Thank You!