Engineering Optimization (EEE614)
Lecture 2: Basic Principles
Dr. Shafayat Abrar
Signal Processing Research Group
Communications Research Group
Department of Electrical Engineering
COMSATS Institute of IT, Pakistan
Email: sabrar@comsats.edu.pk
Spring 2013
Textbook: Practical Optimization: Algorithms and Engineering Applications,
A. Antoniou and W.-S. Lu, Springer, 2007.
Problem Definition of Nonlinear Optimization

We focus our attention on the nonlinear optimization problem

    minimize  F = f(x)
    subject to:  x ∈ R                                   (1)

where f(x) is a real-valued function, R ⊆ E^n is the feasible region, and E^n
represents the n-dimensional Euclidean space.
Note that x can be expressed as a column vector with elements x1, x2, ..., xn;
the transpose of x, namely x^T, is a row vector

    x^T = [x1  x2  ...  xn]

So the variables x1, x2, ..., xn are the parameters that influence the cost
f(x). The optimization problem is to adjust the variables x1, x2, ..., xn in
such a way as to minimize the scalar-valued quantity F = f(x).
Gradient Information
In many optimization methods, gradient information pertaining to the
objective function is required for the evaluation (or computation) of
minima.
This information consists of the first and second derivatives of f(x) with
respect to the n variables.
If f(x) ∈ C¹, that is, if f(x) has continuous first-order partial derivatives,
then the gradient of f(x) is defined as

    g(x) = [ ∂f/∂x1   ∂f/∂x2   ...   ∂f/∂xn ]^T  :=  ∇f(x)              (2)

where ∇ is the gradient operator, viz.

    ∇ := [ ∂/∂x1   ∂/∂x2   ...   ∂/∂xn ]^T
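As a quick numerical illustration (a minimal sketch, not part of the original
slides; the objective, the point x0, and the step h are illustrative choices),
the gradient in (2) can be approximated by central differences:

% MATLAB sketch: central-difference approximation of the gradient g(x)
f  = @(x) x(1)*exp(-x(1)^2 - x(2)^2);   % example objective (same one used later)
x0 = [0.5; -0.3];                       % point at which g(x) is approximated
h  = 1e-6;                              % finite-difference step
n  = numel(x0); g = zeros(n,1);
for i = 1:n
    e = zeros(n,1); e(i) = h;
    g(i) = (f(x0+e) - f(x0-e))/(2*h);   % i-th partial derivative of f
end
disp(g)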
In vector calculus, the gradient of a scalar field is a vector field that
points in the direction of the greatest rate of increase of the scalar field,
and whose magnitude is that rate of increase.
In simple terms, the variation in space of any quantity can be represented
(e.g. graphically) by a slope. The gradient represents the steepness and
direction of that slope.
Note: We can numerically compute the gradient of any function f(x, y)
with respect to x and y using gradient.m and plot it using quiver.m.
Consider a function f(x, y) = x exp(−x² − y²).
[Figure: surface, contour, quiver, and quiver-contour plots of
f(x, y) = x exp(−x² − y²) over −2 ≤ x ≤ 2, −1 ≤ y ≤ 1.]
MATLAB Code:
% Evaluate f(x,y) = x*exp(-x^2-y^2) on a grid, compute its numerical
% gradient, and visualize it with surface, contour and quiver plots.
[x,y] = meshgrid(-2:0.1:2,-1:0.1:1);
z = x .* exp(-x.^2 - y.^2);
[px,py] = gradient(z,.1,.1);            % numerical partial derivatives
mi = min(min(z)); ma = max(max(z));
slices = mi:(ma-mi)/20:ma;              % contour levels
subplot(2,2,1); surf(x,y,z), axis image
title('Surface Plot','fontsize',14)
axis tight; subplot(2,2,2);
cs = contour(x,y,z,slices,'linewidth',2);
axis image; title('Contour Plot','fontsize',14)
subplot(2,2,3); qv = quiver(x,y,px,py);
set(qv,'linewidth',1); axis image
title('Quiver Plot','fontsize',14)
subplot(2,2,4); contour(x,y,z,slices,'linewidth',1);
hold on; qv = quiver(x,y,px,py);
set(qv,'linewidth',1); hold off, axis image
title('Quiver-Contour Plot','fontsize',14)
For the function f(x, y) = x exp(−x² − y²):

[Figure: quiver-contour plot of the gradient field over −2 ≤ x, y ≤ 2.]
Engineering Optimization (EEE614): Lecture 2
uol-new
Basic Principles
Note

To minimize a function in an iterative or adaptive fashion, it is necessary to
follow the direction opposite to that of the gradient.
We will discuss this issue in detail in Chapter 5.
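A minimal steepest-descent sketch (assumptions: the same example function
f(x, y) = x exp(−x² − y²), a fixed illustrative step size, and a hand-derived
gradient; Chapter 5 treats such algorithms properly):

% MATLAB sketch: steepest descent, moving opposite to the gradient
f  = @(v) v(1)*exp(-v(1)^2 - v(2)^2);
g  = @(v) exp(-v(1)^2 - v(2)^2) * [1 - 2*v(1)^2; -2*v(1)*v(2)];  % analytic gradient
v  = [0.1; 0.5];                 % initial estimate (illustrative)
mu = 0.2;                        % fixed step size (illustrative)
for k = 1:100
    v = v - mu*g(v);             % step opposite to the gradient direction
end
disp(v), disp(f(v))              % approaches the minimizer near (-1/sqrt(2), 0)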
Gradient Information
If f(x) ∈ C², that is, if f(x) has continuous second-order partial derivatives,
then the Hessian of f(x) is defined as

    H(x) = ∇g^T = ∇{∇^T f(x)}

           [ ∂²f/∂x1²       ∂²f/∂x1∂x2    ...   ∂²f/∂x1∂xn ]
         = [ ∂²f/∂x2∂x1     ∂²f/∂x2²      ...   ∂²f/∂x2∂xn ]              (3)
           [     ...             ...       ...      ...     ]
           [ ∂²f/∂xn∂x1     ∂²f/∂xn∂x2    ...   ∂²f/∂xn²    ]
For a function f(x) ∈ C² we have

    ∂²f/∂xj∂xi = ∂²f/∂xi∂xj

since the order of differentiation can be interchanged (the mixed second-order
partial derivatives are equal); hence H(x) is an n × n square symmetric matrix.
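For the two-variable example used earlier, the Hessian and its symmetry can be
checked symbolically (a sketch assuming the Symbolic Math Toolbox is available):

% MATLAB sketch: symbolic Hessian of f(x,y) = x*exp(-x^2-y^2) and its symmetry
syms x y
f = x*exp(-x^2 - y^2);
g = gradient(f, [x; y])    % gradient as in Eq. (2)
H = hessian(f, [x; y])     % Hessian as in Eq. (3)
simplify(H - H.')          % zero matrix: the mixed second-order partials are equal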
Hence, the Hessian matrix (or simply the Hessian) is a square matrix of
second-order partial derivatives of a function. Mathematically, it describes
the local curvature of a function of many variables.
The Hessian matrix was developed in the 19th century by the German
mathematician Ludwig Otto Hesse and later named after him. Hesse
himself had used the term "functional determinants".
There exist families of iterative optimization algorithms, such as the
quasi-Newton methods, which use the (inverse of the) Hessian to accelerate
their convergence.
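As a preview of how the Hessian is used (a minimal sketch on an illustrative
quadratic, not the quasi-Newton algorithms themselves), a Newton step rescales
the negative gradient by the inverse Hessian:

% MATLAB sketch: one Newton step x <- x - H\g on an illustrative quadratic
A = [3 1; 1 2]; b = [-1; 2];      % assumed problem data
f = @(x) 0.5*x.'*A*x - b.'*x;     % f(x) = (1/2) x'Ax - b'x
g = @(x) A*x - b;                 % gradient
H = A;                            % Hessian (constant for a quadratic)
x = [5; -5];                      % initial estimate
x = x - H\g(x);                   % a single Newton step solves the quadratic exactly
disp(x), disp(norm(g(x)))         % the gradient at the new point is (numerically) zero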
Taylor Series: for function of two variables
If f(x) is a function of two variables x1 and x2 such that f(x) ∈ C^p where
p → ∞, that is, f(x) := f(x1, x2) has continuous partial derivatives of all
orders, then the value of the function f(x) at point [x1 + δ1, x2 + δ2] is
given by the Taylor series as

    f(x1 + δ1, x2 + δ2) = f(x1, x2) + δ1 ∂f(x1, x2)/∂x1 + δ2 ∂f(x1, x2)/∂x2
        + (1/2) [ δ1² ∂²f(x1, x2)/∂x1² + 2 δ1 δ2 ∂²f(x1, x2)/∂x1∂x2
        + δ2² ∂²f(x1, x2)/∂x2² ] + O(‖δ‖³)                                (4)

where O(‖δ‖³) is the remainder, and ‖δ‖ is the Euclidean norm of δ = [δ1, δ2]^T
given by ‖δ‖ = √(δ^T δ) = √(δ1² + δ2²).
Oh notations
The notation φ(x) = O(x) denotes that φ(x) approaches zero at least as fast as
x, as x approaches zero; that is, there exists a constant K ≥ 0 such that

    |φ(x)/x| ≤ K  as  x → 0

The notation φ(x) = o(x) denotes that φ(x) approaches zero faster than x, as x
approaches zero; that is,

    |φ(x)/x| → 0  as  x → 0
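For instance (an illustrative example, not from the slides): φ(x) = 3x²
satisfies |φ(x)/x| = 3|x| → 0 as x → 0, so φ(x) = o(x) (and a fortiori O(x));
by contrast, φ(x) = x + x² is O(x), since |φ(x)/x| = |1 + x| ≤ 2 for |x| ≤ 1,
but not o(x), because φ(x)/x → 1 ≠ 0.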
Applying these notations to the Taylor series above, the O(‖δ‖³) remainder can
equivalently be absorbed into an o(‖δ‖²) term:

    f(x1 + δ1, x2 + δ2) = f(x1, x2) + δ1 ∂f(x1, x2)/∂x1 + δ2 ∂f(x1, x2)/∂x2
        + (1/2) [ δ1² ∂²f(x1, x2)/∂x1² + 2 δ1 δ2 ∂²f(x1, x2)/∂x1∂x2
        + δ2² ∂²f(x1, x2)/∂x2² ] + O(‖δ‖³)

        = f(x1, x2) + δ1 ∂f(x1, x2)/∂x1 + δ2 ∂f(x1, x2)/∂x2
        + (1/2) [ δ1² ∂²f(x1, x2)/∂x1² + 2 δ1 δ2 ∂²f(x1, x2)/∂x1∂x2
        + δ2² ∂²f(x1, x2)/∂x2² ] + o(‖δ‖²)
Taylor Series: for function of n variables
    f(x1 + δ1, x2 + δ2, ..., xn + δn)
        = f(x1, x2, ..., xn) + Σ_{i=1}^{n} δi ∂f/∂xi
          + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} δi δj ∂²f/∂xi∂xj + o(‖δ‖²)
Taylor Series in matrix form

    f(x + δ) = f(x) + g(x)^T δ + (1/2) δ^T H(x) δ + o(‖δ‖²)

where g(x) is the gradient, and H(x) is the Hessian at point x.
Later, we will discuss the use of Taylor's expansion in optimization.
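A small numerical check of the matrix form (a sketch with an illustrative
function, point, and direction; the gradient and Hessian handles below are
hand-derived for this illustration, not taken from the slides). The error of
the quadratic model should shrink faster than ‖δ‖², consistent with the
o(‖δ‖²) remainder:

% MATLAB sketch: f(x+d) vs. the quadratic model f(x) + g'*d + 0.5*d'*H*d
f = @(v) v(1)*exp(-v(1)^2 - v(2)^2);
g = @(v) exp(-v(1)^2 - v(2)^2) * [1 - 2*v(1)^2; -2*v(1)*v(2)];
H = @(v) exp(-v(1)^2 - v(2)^2) * ...
    [4*v(1)^3 - 6*v(1),      -2*v(2)*(1 - 2*v(1)^2);
     -2*v(2)*(1 - 2*v(1)^2),  2*v(1)*(2*v(2)^2 - 1)];
x = [0.4; 0.2];
for h = [1e-1 1e-2 1e-3]
    d = h*[1; -1];
    model = f(x) + g(x).'*d + 0.5*d.'*H(x)*d;
    fprintf('||d|| = %8.1e   |f(x+d) - model| = %10.3e\n', norm(d), abs(f(x+d) - model));
end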
Types of Extrema
The extrema of a function are its minima and maxima.
Points at which a function has minima (maxima) are said to be
minimizers (maximizers).
Several types of minimizers (maximizers) can be distinguished, namely,
local or global and weak or strong.
In the general optimization problem, we are in principle seeking the global
minimum (or maximum) of f(x). In practice, an optimization problem
may have two or more local minima.
Since optimization algorithms in general are iterative procedures which
start with an initial estimate of the solution and converge to a single
solution, one or more local minima may be missed. If the global minimum
is missed, a suboptimal solution will be achieved, which may or may not
be acceptable.
This problem can to some extent be overcome by performing the
optimization several times using a different initial estimate for the solution
in each case in the hope that several distinct local minima will be located.
If this approach is successful, the best minimizer, namely, the one yielding
the lowest value of the objective function, can be selected. Although such
a solution could be acceptable from a practical point of view, usually
there is no guarantee that the global minimum will be achieved.
Therefore, for the sake of convenience, the term "minimize f(x)" in the
general optimization problem will be interpreted as "find a local
minimum of f(x)".
Good News: In a specific class of problems where function f(x)
and set R satisfy certain convexity properties, any local minimum
of f(x) is also a global minimum of f(x). In this class of problems
an optimal solution can be assured.
These problems will be examined in Section 2.7.
Definition 2.1: Weak local minimizer

A point x* ∈ R, where R is the feasible region, is said to be a weak local
minimizer of f(x) if there exists a distance ε > 0 such that f(x) ≥ f(x*) if
x ∈ R and ‖x − x*‖ < ε.
Definition 2.2: Weak global minimizer

A point x* ∈ R is said to be a weak global minimizer of f(x) if
f(x) ≥ f(x*) for all x ∈ R.
Definition 2.3(a): Strong local minimizer

A point x* ∈ R is said to be a strong local minimizer of f(x) if there exists a
distance ε > 0 such that f(x) > f(x*) if x ∈ R and 0 < ‖x − x*‖ < ε.
Definition 2.3(b): Strong global minimizer

A point x* ∈ R is said to be a strong global minimizer of f(x) if
f(x) > f(x*) for all x ∈ R with x ≠ x*.
Necessary and Sufficient Conditions for Local Minima and Maxima

The gradient g(x) and the Hessian H(x) must satisfy certain conditions
at a local minimizer x*.
Two sets of conditions will be discussed, as follows:

1. Conditions which are satisfied at a local minimizer x*. These are the
   necessary conditions.
2. Conditions which guarantee that x* is a local minimizer. These are the
   sufficient conditions.
A concept that is used extensively in these theorems is the concept of a
feasible direction.
Definition 2.4: Feasible Direction

Let δ = αd be a change in x, where α is a positive constant and d is a direction
vector. If R is the feasible region and a constant α̂ > 0 exists such that

    x + αd ∈ R

for every α in the range 0 ≤ α ≤ α̂, then d is said to be a feasible direction at
point x.
In simple words, if a point x remains in R after it is moved a finite distance in
a direction d, then d is a feasible direction vector at x.
Example 2.2

Suppose the feasible region is given by

    R = {x : x1 ≥ 2, x2 ≥ 0}

Which of the vectors d1 = [2 2]^T, d2 = [0 2]^T, d3 = [2 0]^T are feasible
directions at x1 = [4 1]^T, x2 = [2 3]^T, x3 = [1 4]^T?
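A small sketch for checking Definition 2.4 numerically on this example
(assumptions: the inequalities defining R are x1 ≥ 2 and x2 ≥ 0 as written
above, and the check samples a finite range of α rather than proving
feasibility for all α):

% MATLAB sketch: test whether d is a feasible direction at a point for
% R = {x : x1 >= 2, x2 >= 0}
inR = @(x) (x(1) >= 2) && (x(2) >= 0);
isFeasDir = @(x, d, ahat) all(arrayfun(@(a) inR(x + a*d), linspace(0, ahat, 200)));
xa = [4; 1]; xb = [2; 3]; xc = [1; 4];   % the three points of Example 2.2
d1 = [2; 2]; d2 = [0; 2]; d3 = [2; 0];   % the three candidate directions
isFeasDir(xb, d3, 0.5)   % logical 1 if x + alpha*d stays in R for 0 <= alpha <= ahat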
Theorem 2.1: First-Order Necessary Conditions for a Minimum

(a) If f(x) ∈ C¹ and x* is a local minimizer, then

    g(x*)^T d ≥ 0

for every feasible direction d at x*.

(b) If x* is located in the interior of R, then

    g(x*) = 0

Proof
Proof is provided on white board. (Available in book)
Definition 2.5

(a) Let d be an arbitrary direction vector at point x. The quadratic form
d^T H(x) d is said to be positive definite, positive semidefinite, negative
semidefinite, or negative definite if d^T H(x) d > 0, ≥ 0, ≤ 0, or < 0,
respectively, for all d ≠ 0 at x. If d^T H(x) d can assume positive as well as
negative values, it is said to be indefinite.

(b) If d^T H(x) d is positive definite, positive semidefinite, etc., then matrix
H(x) is said to be positive definite, positive semidefinite, etc.
Theorem 2.2: Second-Order Necessary Conditions for a Minimum

(a) If f(x) ∈ C² and x* is a local minimizer, then for every feasible direction
d at x*:

    1. g(x*)^T d ≥ 0
    2. If g(x*)^T d = 0, then d^T H(x*) d ≥ 0

(b) If x* is a local minimizer in the interior of R, then:

    1. g(x*) = 0
    2. d^T H(x*) d ≥ 0 for all d ≠ 0

Proof
Proof is provided on white board. (Available in book)
Theorem 2.4: Second-Order Sufficient Conditions for a Minimum

If f(x) ∈ C² and x* is located in the interior of R, then the conditions

    1. g(x*) = 0
    2. H(x*) is positive definite

are sufficient for x* to be a strong local minimizer.

Proof
Proof is provided on white board. (Available in book)
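A brief sketch applying Theorem 2.4 to an illustrative function (the function
f(x) = x1² + x1 x2 + 2 x2² and its derivatives below are chosen for this
example, not taken from the slides): the gradient vanishes at the candidate
point and all eigenvalues of the Hessian are positive, so the point is a strong
local minimizer.

% MATLAB sketch: sufficient conditions at x* = [0; 0] for f(x) = x1^2 + x1*x2 + 2*x2^2
g  = @(x) [2*x(1) + x(2); x(1) + 4*x(2)];   % gradient
H  = [2 1; 1 4];                            % Hessian (constant for this quadratic)
xs = [0; 0];
disp(g(xs))      % condition 1: g(x*) = 0
disp(eig(H))     % condition 2: all eigenvalues positive => H(x*) is positive definite

If any eigenvalue were negative, the Hessian would be indefinite and x* could
not be a minimizer; this is where Definition 2.5 connects with the second-order
conditions.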
Example 2.3
Discussed on white board. (Available in book)
Example 2.4
Discussed on white board. (Available in book)
In the next lecture, we will discuss the remaining parts of Chapter 2, including:
Classifications of Stationary Points
Convex and Concave Functions
Optimization of Convex Functions
See you next time. Thank you for your attention.