        min  f (x)
(O)     s.t. cI (x) ≥ 0
             cE (x) = 0
where the function f : Rn → R is the objective function, cI : Rn → RmI are the
inequality constraints and cE : Rn → RmE are the equality constraints. These functions
are assumed to be smooth.
In general, the inequality constraints have the form cI (x) = (g(x), x − l, u − x),
where the vectors l and u are the lower and upper bounds on the variables x, and g(x)
contains the nonlinear inequality constraints.
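The packing of the bounds and the nonlinear inequalities into a single vector cI (x) = (g(x), x − l, u − x) can be sketched as follows. This is an illustration only; the function names and the example g are assumptions, not part of the routine's interface.

```python
import numpy as np

def c_I(x, g, l, u):
    """Stacked inequality constraints cI(x) = (g(x), x - l, u - x).

    The point x is feasible exactly when every entry is >= 0.
    """
    return np.concatenate([g(x), x - l, u - x])

# Hypothetical example: g(x) = [x0*x1 - 1] with bounds 0 <= x <= 2.
g = lambda x: np.array([x[0] * x[1] - 1.0])
l = np.zeros(2)
u = 2.0 * np.ones(2)
x = np.array([1.0, 1.5])
print(c_I(x, g, l, u))  # -> [0.5 1.  1.5 1.  0.5], all >= 0, so x is feasible
```

Note that with this packing, mI equals the number of nonlinear inequalities plus 2n, since each bound contributes one component per variable.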
Under certain conditions, if x ∈ Rn is a solution of problem (O), then there exists
a vector λ = (λI , λE ) ∈ Rm , where m = mI + mE , such that the well-known Karush-
Kuhn-Tucker (KKT) optimality conditions are satisfied:

        ∇ℓ(x, λI , λE ) = ∇f (x) − ∇cI (x)λI − ∇cE (x)λE = 0
        cE (x) = 0
(P )    cI (x) ≥ 0
        λI ≥ 0
        cIi (x)λIi = 0,   i = 1, . . . , mI
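The five conditions in (P ) can be verified numerically at a candidate pair (x, λ). The sketch below is an assumption about how one might do this, not part of the routine: the caller supplies the gradient of f, the constraint functions, and their Jacobians.

```python
import numpy as np

def kkt_residual(x, lam_I, lam_E, grad_f, cI, jac_cI, cE, jac_cE):
    """Infinity-norm residual of the KKT system (P); zero at a KKT pair."""
    # Stationarity: grad f(x) - J_cI(x)^T lam_I - J_cE(x)^T lam_E = 0
    stat = grad_f(x) - jac_cI(x).T @ lam_I - jac_cE(x).T @ lam_E
    return max(
        np.linalg.norm(stat, np.inf),      # stationarity
        np.linalg.norm(cE(x), np.inf),     # equality feasibility
        np.max(np.maximum(-cI(x), 0.0)),   # inequality feasibility
        np.max(np.maximum(-lam_I, 0.0)),   # dual feasibility (lam_I >= 0)
        np.max(np.abs(cI(x) * lam_I)),     # complementarity
    )

# Toy problem: min x0^2 + x1^2  s.t.  x0 - 1 >= 0,  x1 = 0.
f_grad = lambda x: 2.0 * x
cI = lambda x: np.array([x[0] - 1.0]); jI = lambda x: np.array([[1.0, 0.0]])
cE = lambda x: np.array([x[1]]);       jE = lambda x: np.array([[0.0, 1.0]])
x_star = np.array([1.0, 0.0])
print(kkt_residual(x_star, np.array([2.0]), np.array([0.0]),
                   f_grad, cI, jI, cE, jE))  # -> 0.0 at the solution
```

In the toy problem the multiplier λI = 2 balances ∇f (x∗) = (2, 0) against ∇cI (x∗) = (1, 0), so every residual term vanishes.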
Description:
• grad is the gradient of f . If this gradient is not available, enter grad=NULL;
in this case, finite differences will be used to estimate the gradient.
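The kind of finite-difference estimate used when grad=NULL can be sketched as a forward-difference scheme; the routine's actual scheme and step size are not specified here, so the following is only an assumed illustration.

```python
import numpy as np

def fd_grad(f, x, h=1e-7):
    """Forward-difference estimate of the gradient of f at x."""
    fx = f(x)
    g = np.empty_like(x, dtype=float)
    for i in range(x.size):
        e = np.zeros_like(x, dtype=float)
        e[i] = h                       # perturb one coordinate at a time
        g[i] = (f(x + e) - fx) / h
    return g

f = lambda x: x[0] ** 2 + 3.0 * x[1]
print(fd_grad(f, np.array([1.0, 2.0])))  # approximately [2.0, 3.0]
```

Each component costs one extra evaluation of f, so a full gradient estimate costs n + 1 evaluations; this is why supplying grad when it is available is preferable.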
The algorithm returns an int whose value depends on the exit status of the algorithm.
There are four cases:
Important Remark 1: The algorithm we implement requires that the initial point x0 ,
given as an input to the algorithm, be strictly feasible, i.e., c(x0 ) > 0.
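The strict-feasibility requirement on x0 is easy to check before calling the solver. A minimal sketch, assuming the caller packs all inequality constraints into a single vector-valued c:

```python
import numpy as np

def strictly_feasible(c, x0):
    """True iff every inequality constraint is strictly positive at x0."""
    return bool(np.all(c(x0) > 0.0))

# Hypothetical constraint vector: 1 < x < 4 written as c(x) = (x - 1, 4 - x).
c = lambda x: np.array([x[0] - 1.0, 4.0 - x[0]])
print(strictly_feasible(c, np.array([2.0])))  # True: 2 is interior
print(strictly_feasible(c, np.array([1.0])))  # False: binding at the bound
```

A point on the boundary (some cIi (x0 ) = 0) is feasible for (O) but not acceptable as a starting point here.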
Important Remark 2: The algorithm tries to find a pair (x, λ) that solves the equations
(P ), but this does not guarantee that x is a global minimum of f on the set {c(x) > 0}.