Robert M. Freund
Introduction
LP : minimize c^T x
     s.t.     a_i^T x = b_i , i = 1, . . . , m
              x ∈ R^n_+ .

Minimize the linear function c^T x, subject to the condition that x must solve the m given equations a_i^T x = b_i , i = 1, . . . , m, and that x must lie in the closed convex cone K = R^n_+ .
We will write the standard linear programming dual problem as:

LD : maximize Σ_{i=1}^m y_i b_i
     s.t.     Σ_{i=1}^m y_i a_i + s = c
              s ∈ R^n_+ .
Given a feasible solution x of LP and a feasible solution (y, s) of LD, the duality gap is simply

c^T x − Σ_{i=1}^m y_i b_i = (c − Σ_{i=1}^m y_i a_i)^T x = s^T x ≥ 0,

because x ≥ 0 and s ≥ 0. We know from LP duality theory that so long as the primal problem LP is feasible and has a bounded optimal objective value, then the primal and the dual both attain their optima with no duality gap. That is, there exist x* and (y*, s*) feasible for the primal and dual, respectively, such that c^T x* − Σ_{i=1}^m y*_i b_i = s*^T x* = 0.
Remark 1 S^n_+ = {X ∈ S^n | X ⪰ 0} is a closed convex cone in R^{n×n} of dimension n(n + 1)/2.

To see why this remark is true, suppose that X, W ∈ S^n_+ . Pick any scalars α, β ≥ 0. For any v ∈ R^n , we have:

v^T (αX + βW )v = α v^T X v + β v^T W v ≥ 0,

whereby αX + βW ∈ S^n_+ . This shows that S^n_+ is a cone (and it is also convex). It is also straightforward to show that S^n_+ is a closed set.
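As a quick numerical sanity check of the argument above, the following sketch (hypothetical 2×2 matrices, pure Python) verifies that the quadratic form of a nonnegative combination of two positive semidefinite matrices stays nonnegative:

```python
import random

def quad_form(M, v):
    """Compute v^T M v for a square matrix M and vector v."""
    n = len(v)
    return sum(v[i] * M[i][j] * v[j] for i in range(n) for j in range(n))

def combine(alpha, X, beta, W):
    """Return alpha*X + beta*W entrywise."""
    n = len(X)
    return [[alpha * X[i][j] + beta * W[i][j] for j in range(n)] for i in range(n)]

# Two positive semidefinite matrices (positive diagonal, positive determinant).
X = [[2.0, 1.0], [1.0, 1.0]]
W = [[1.0, -1.0], [-1.0, 2.0]]

random.seed(0)
alpha, beta = 0.7, 2.3          # arbitrary nonnegative scalars
Z = combine(alpha, X, beta, W)

# v^T (alpha X + beta W) v = alpha v^T X v + beta v^T W v >= 0 for every v.
all_nonneg = all(
    quad_form(Z, [random.uniform(-1, 1), random.uniform(-1, 1)]) >= 0
    for _ in range(1000)
)
```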
Semidefinite Programming

For symmetric matrices C and X, define the inner product

C • X = Σ_{i=1}^n Σ_{j=1}^n C_ij X_ij .
the matrix C is also symmetric. With this notation, we are now ready to define a semidefinite program. A semidefinite program (SDP ) is an optimization problem of the form:

SDP : minimize C • X
      s.t.     A_i • X = b_i , i = 1, . . . , m,
               X ⪰ 0,
Notice that in an SDP the variable is the matrix X, but it might be helpful to think of X as an array of n^2 numbers or simply as a vector in S^n . The objective function is the linear function C • X and there are m linear equations that X must satisfy, namely A_i • X = b_i , i = 1, . . . , m. The variable X also must lie in the (closed convex) cone of positive semidefinite symmetric matrices S^n_+ . Note that the data for SDP consists of the symmetric matrix C (which is the data for the objective function), the m symmetric matrices A_1 , . . . , A_m , and the m-vector b, which form the m linear equations.
As an example, consider the instance with m = 2 and n = 3:

A_1 = [ 1 0 1 ; 0 3 7 ; 1 7 5 ],  A_2 = [ 0 2 8 ; 2 6 0 ; 8 0 4 ],  b = (11, 19),

C = [ 1 2 3 ; 2 9 0 ; 3 0 7 ],

with variable

X = [ x11 x12 x13 ; x21 x22 x23 ; x31 x32 x33 ].

Written out, this instance of SDP is:

minimize x11 + 4x12 + 6x13 + 9x22 + 0x23 + 7x33

s.t. x11 + 0x12 + 2x13 + 3x22 + 14x23 + 5x33 = 11
     0x11 + 4x12 + 16x13 + 6x22 + 0x23 + 4x33 = 19

     X = [ x11 x12 x13 ; x21 x22 x23 ; x31 x32 x33 ] ⪰ 0.
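The expanded coefficients can be checked mechanically. A small sketch in pure Python (the particular X values are arbitrary test data, not a feasible solution):

```python
def frob_inner(A, B):
    """A • B = sum over i, j of A_ij * B_ij."""
    n = len(A)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

A1 = [[1, 0, 1], [0, 3, 7], [1, 7, 5]]
A2 = [[0, 2, 8], [2, 6, 0], [8, 0, 4]]
C  = [[1, 2, 3], [2, 9, 0], [3, 0, 7]]

# Arbitrary symmetric X (not claimed feasible) to confirm the expansion.
x11, x12, x13, x22, x23, x33 = 1.0, 2.0, -1.0, 3.0, 0.5, 4.0
X = [[x11, x12, x13], [x12, x22, x23], [x13, x23, x33]]

lhs1 = frob_inner(A1, X)   # should equal x11 + 2*x13 + 3*x22 + 14*x23 + 5*x33
lhs2 = frob_inner(A2, X)   # should equal 4*x12 + 16*x13 + 6*x22 + 4*x33
obj  = frob_inner(C, X)    # should equal x11 + 4*x12 + 6*x13 + 9*x22 + 7*x33
```

Off-diagonal entries of A_i appear twice in A_i • X, which is why, e.g., the coefficient of x23 in the first constraint is 14 = 2 · 7.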
As it turns out, LP can be viewed as a special case of SDP. Given the LP data (a_i , b_i , c), define the diagonal matrices

A_i = diag(a_i1 , a_i2 , . . . , a_in ), i = 1, . . . , m, and C = diag(c_1 , c_2 , . . . , c_n ).

Then LP can be written as the semidefinite program:

minimize C • X
s.t.     A_i • X = b_i , i = 1, . . . , m,
         X_ij = 0, i = 1, . . . , n, j = i + 1, . . . , n,
         X ⪰ 0,

whose variable is the diagonal matrix X = diag(x_1 , x_2 , . . . , x_n ).
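To illustrate the reduction, here is a minimal sketch (hypothetical LP data a1, c and point x) showing that on diagonal matrices the SDP inner products reduce to the LP's dot products:

```python
def diag(vals):
    """Diagonal matrix with the given diagonal entries."""
    n = len(vals)
    return [[vals[i] if i == j else 0 for j in range(n)] for i in range(n)]

def frob_inner(A, B):
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

# Hypothetical LP data: minimize c^T x  s.t.  a1^T x = b1, x >= 0.
a1 = [1, 2, 3]
c  = [4, 5, 6]
x  = [2, 0, 1]          # a candidate nonnegative point

A1 = diag(a1)
C  = diag(c)
X  = diag(x)

# On diagonal X, the SDP data reproduces the LP exactly.
same_constraint = (frob_inner(A1, X) == sum(a1[j] * x[j] for j in range(3)))
same_objective  = (frob_inner(C, X) == sum(c[j] * x[j] for j in range(3)))
```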
The dual problem of SDP is defined (or derived from first principles) to be:

SDD : maximize Σ_{i=1}^m y_i b_i
      s.t.     Σ_{i=1}^m y_i A_i + S = C
               S ⪰ 0.

One convenient way of thinking about this problem is as follows. Given multipliers y_1 , . . . , y_m , the objective is to maximize the linear function Σ_{i=1}^m y_i b_i . The constraints of SDD state that the matrix S defined as

S = C − Σ_{i=1}^m y_i A_i

must be positive semidefinite; that is,

C − Σ_{i=1}^m y_i A_i ⪰ 0.

For the example given earlier, the dual problem is:

SDD : maximize 11y_1 + 19y_2
      s.t. y_1 [ 1 0 1 ; 0 3 7 ; 1 7 5 ] + y_2 [ 0 2 8 ; 2 6 0 ; 8 0 4 ] + S = [ 1 2 3 ; 2 9 0 ; 3 0 7 ]
           S ⪰ 0.
The following proposition states that weak duality must hold for the primal and dual of SDP :

Proposition 5.1 Given a feasible solution X of SDP and a feasible solution (y, S) of SDD, the duality gap is C • X − Σ_{i=1}^m y_i b_i = S • X ≥ 0. If C • X − Σ_{i=1}^m y_i b_i = 0, then X and (y, S) are each optimal solutions to SDP and SDD, respectively, and furthermore, SX = 0.

In order to prove Proposition 5.1, it will be convenient to work with the trace of a matrix, defined below:

trace(M ) = Σ_{j=1}^n M_jj .

Writing the eigendecompositions X = P DP^T and S = QEQ^T , where D ⪰ 0 and E ⪰ 0 are diagonal, the key step of the proof is

S • X = trace(SX) = trace(DP^T QEQ^T P ) = Σ_{j=1}^n D_jj (P^T QEQ^T P )_jj ≥ 0,

where the last inequality follows from the fact that all D_jj ≥ 0 and the fact that the diagonal of the symmetric positive semidefinite matrix P^T QEQ^T P must be nonnegative.
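A numerical sketch of the identity S • X = trace(SX) ≥ 0 for positive semidefinite S and X (built here as Gram matrices A^T A, which are always PSD; pure Python):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(M):
    return sum(M[j][j] for j in range(len(M)))

def frob_inner(A, B):
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

def gram(A):
    """A^T A, which is symmetric positive semidefinite for any A."""
    n = len(A)
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    return matmul(At, A)

# Two PSD matrices built as Gram matrices of arbitrary 2x2 data.
S = gram([[1.0, 2.0], [0.0, 1.0]])    # = [[1, 2], [2, 5]]
X = gram([[3.0, -1.0], [1.0, 1.0]])   # = [[10, -2], [-2, 2]]

gap = frob_inner(S, X)                # S • X, equal to trace(SX), and nonnegative
```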
Unlike the case of linear programming, we cannot assert that either SDP or SDD will attain their respective optima, and/or that there will be no duality gap, unless certain regularity conditions hold. One such regularity condition which ensures that strong duality will prevail is a version of the Slater condition, summarized in the following theorem which we will not prove:

Theorem 5.1 Let z*_P and z*_D denote the optimal objective function values of SDP and SDD, respectively. Suppose that there exists a feasible solution X̂ of SDP such that X̂ ≻ 0, and that there exists a feasible solution (ŷ, Ŝ) of SDD such that Ŝ ≻ 0. Then both SDP and SDD attain their optimal values, and z*_P = z*_D .

In the absence of such regularity conditions, semidefinite programs can behave pathologically:

• Given rational data, the feasible region may have no rational solutions.
• The optimal solution may not have rational components or rational eigenvalues.
• Given rational data whose binary encoding is size L, the norms of any feasible and/or optimal solutions may exceed 2^(2^L) (or worse).
• Given rational data whose binary encoding is size L, the norms of any feasible and/or optimal solutions may be less than 2^(−2^L) (or worse).
7.1

Let G be an undirected graph with nodes N = {1, . . . , n} and edge set E. Let w_ij = w_ji be the weight on edge (i, j), for (i, j) ∈ E. We assume that w_ij ≥ 0 for all (i, j) ∈ E. The MAX CUT problem is to determine a subset S of the nodes N for which the sum of the weights of the edges that cross from S to its complement S̄ is maximized (where S̄ := N \ S).

We can formulate MAX CUT as an integer program as follows. Let x_j = 1 for j ∈ S and x_j = −1 for j ∈ S̄. Then our formulation is:

MAXCUT : maximize_x (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij (1 − x_i x_j )
         s.t. x_j ∈ {−1, 1}, j = 1, . . . , n.
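For intuition, the integer program can be solved by brute force on a tiny instance. A sketch with a hypothetical 4-node graph (weights chosen arbitrarily):

```python
from itertools import product

# A hypothetical 4-node weighted graph (symmetric weights, w_ij = w_ji).
n = 4
w = [[0, 1, 0, 2],
     [1, 0, 3, 0],
     [0, 3, 0, 1],
     [2, 0, 1, 0]]

def cut_value(x):
    """(1/4) * sum_i sum_j w_ij (1 - x_i x_j), with x_j in {-1, +1}."""
    return sum(w[i][j] * (1 - x[i] * x[j]) for i in range(n) for j in range(n)) / 4.0

# Brute force over all 2^n sign vectors.
best = max(cut_value(x) for x in product([-1, 1], repeat=n))
```

Each crossing edge (i, j) contributes w_ij twice to the double sum (once as (i, j) and once as (j, i)), with factor (1 − (−1)) = 2, which the 1/4 normalizes back to w_ij.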
Now let

Y = xx^T ,

whereby

Y_ij = x_i x_j , i = 1, . . . , n, j = 1, . . . , n.

Also let W be the matrix whose (i, j)th element is w_ij for i = 1, . . . , n and j = 1, . . . , n. Then MAX CUT can be equivalently formulated as:

MAXCUT : maximize_{Y,x} (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij − (1/4) W • Y
         s.t. x_j ∈ {−1, 1}, j = 1, . . . , n
              Y = xx^T .

Notice in this problem that the first set of constraints is equivalent to Y_jj = 1, j = 1, . . . , n. We therefore obtain:

MAXCUT : maximize_{Y,x} (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij − (1/4) W • Y
         s.t. Y_jj = 1, j = 1, . . . , n
              Y = xx^T .
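The algebra behind this reformulation, Σ_i Σ_j w_ij (1 − x_i x_j ) = Σ_i Σ_j w_ij − W • (xx^T ), can be checked directly (hypothetical weights and sign vector):

```python
def frob_inner(A, B):
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

def outer(x):
    """Rank-1 matrix x x^T."""
    return [[xi * xj for xj in x] for xi in x]

n = 4
W = [[0, 1, 0, 2],
     [1, 0, 3, 0],
     [0, 3, 0, 1],
     [2, 0, 1, 0]]
x = [1, -1, -1, 1]
Y = outer(x)                     # Y = x x^T, so Y_jj = 1 automatically

lhs = sum(W[i][j] * (1 - x[i] * x[j]) for i in range(n) for j in range(n)) / 4.0
rhs = sum(W[i][j] for i in range(n) for j in range(n)) / 4.0 - frob_inner(W, Y) / 4.0
```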
Last of all, notice that the matrix Y = xx^T is a symmetric rank-1 positive semidefinite matrix. If we relax this condition by removing the rank-1 restriction, we obtain the following relaxation of MAX CUT, which is a semidefinite program:

RELAX : maximize_Y (1/4) Σ_{i=1}^n Σ_{j=1}^n w_ij − (1/4) W • Y
        s.t. Y_jj = 1, j = 1, . . . , n
             Y ⪰ 0.

It is easy to see that RELAX provides an upper bound on MAXCUT, i.e.,

MAXCUT ≤ RELAX.

As it turns out, one can also prove without too much effort that:

0.87856 RELAX ≤ MAXCUT ≤ RELAX.

This is an impressive result, in that it states that the value of the semidefinite relaxation is guaranteed to be no more than 12% higher than the value of the NP-hard problem MAX CUT.
8.1

QCQP : minimize_x x^T Q_0 x + q_0^T x + c_0
       s.t. x^T Q_i x + q_i^T x + c_i ≤ 0 , i = 1, . . . , m,

where Q_0 ⪰ 0 and Q_i ⪰ 0, i = 1, . . . , m. This problem is the same as:

QCQP : minimize_{x,θ} θ
       s.t. x^T Q_0 x + q_0^T x + c_0 − θ ≤ 0
            x^T Q_i x + q_i^T x + c_i ≤ 0 , i = 1, . . . , m.

Factoring each Q_i = M_i^T M_i , a Schur-complement argument shows that

[ I            M_i x
  x^T M_i^T    −c_i − q_i^T x ] ⪰ 0   if and only if   x^T Q_i x + q_i^T x + c_i ≤ 0.

Therefore QCQP can be written as the semidefinite program:

QCQP : minimize_{x,θ} θ
       s.t. [ I            M_0 x
              x^T M_0^T    −c_0 − q_0^T x + θ ] ⪰ 0

            [ I            M_i x
              x^T M_i^T    −c_i − q_i^T x ] ⪰ 0 , i = 1, . . . , m.

Notice in the above formulation that the variables are θ and x and that all matrix coefficients are linear functions of θ and x.
8.2

As it turns out, there is an alternative way to formulate QCQP as a semidefinite program. We begin with the following elementary proposition.

Proposition 8.1 Given a vector x ∈ R^k and a matrix W ∈ R^{k×k} , then

[ 1   x^T
  x   W ] ⪰ 0

if and only if

W ⪰ xx^T .

Using this proposition, QCQP can be reformulated as:

QCQP : minimize_{x,W} [ c_0        (1/2)q_0^T
                        (1/2)q_0   Q_0 ] • [ 1   x^T
                                             x   W ]

       s.t. [ c_i        (1/2)q_i^T
              (1/2)q_i   Q_i ] • [ 1   x^T
                                   x   W ] ≤ 0 , i = 1, . . . , m

            [ 1   x^T
              x   W ] ⪰ 0.

Notice in this formulation that there are now roughly (n + 1)(n + 2)/2 variables, but that the constraints are all linear inequalities as opposed to semidefinite inequalities (aside from the single semidefinite constraint on the matrix [ 1 x^T ; x W ]).
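Proposition 8.1 can be sanity-checked in the smallest case k = 1, where [ 1 x ; x W ] ⪰ 0 should hold exactly when W ≥ x^2; a 2×2 PSD test via diagonal entries and determinant suffices:

```python
def is_psd_2x2(a, b, d):
    """[[a, b], [b, d]] is PSD iff a >= 0, d >= 0, and det = a*d - b^2 >= 0."""
    return a >= 0 and d >= 0 and a * d - b * b >= 0

# Integer grid scan: the block matrix [[1, x], [x, W]] is PSD exactly when W >= x^2.
agree = all(
    is_psd_2x2(1, x, W) == (W >= x * x)
    for x in range(-20, 21)
    for W in range(-50, 401)
)
```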
8.3

MCP : minimize_{R,z} − ln(det(R))
      s.t. (c_i − z)^T R(c_i − z) ≤ 1, i = 1, . . . , k
           R ≻ 0.

Now factor R = M^2 where M ≻ 0 (that is, M is a square root of R), and now MCP becomes:

MCP : minimize_{M,z} − ln(det(M^2 ))
      s.t. (c_i − z)^T M^T M (c_i − z) ≤ 1, i = 1, . . . , k,
           M ≻ 0.

Next notice the equivalence:

[ I                 M c_i − M z
  (M c_i − M z)^T   1 ] ⪰ 0   if and only if   (M c_i − M z)^T (M c_i − M z) ≤ 1.

Last of all, make the substitution y = M z to obtain:

MCP : minimize_{M,y} −2 ln(det(M ))
      s.t. [ I               M c_i − y
             (M c_i − y)^T   1 ] ⪰ 0 , i = 1, . . . , k,
           M ≻ 0.

Notice that this last program involves semidefinite constraints where all of the matrix coefficients are linear functions of the variables M and y. However, the objective function is not a linear function. It is possible to convert this problem further into a genuine instance of SDP , because there is a way to convert constraints of the form

− ln(det(X)) ≤ θ

to a semidefinite system.
8.4

Recall that a given matrix R ≻ 0 and a given point z can be used to define an ellipsoid in R^n :

E_{R,z} := {x | (x − z)^T R(x − z) ≤ 1},

and that the volume of E_{R,z} is proportional to det(R^{−1} ).

MIP : maximize_{R,z} det(R^{−1} )
      s.t. E_{R,z} ⊆ {x | (a_i )^T x ≤ b_i }, i = 1, . . . , k
           R ≻ 0,

which is equivalent to:

MIP : maximize_{R,z} ln(det(R^{−1} ))
      s.t. max_x {a_i^T x | (x − z)^T R(x − z) ≤ 1} ≤ b_i , i = 1, . . . , k
           R ≻ 0.

For a given i = 1, . . . , k, the solution to the optimization problem in the ith constraint is

x* = z + R^{−1} a_i / sqrt(a_i^T R^{−1} a_i )

with optimal objective function value

a_i^T z + sqrt(a_i^T R^{−1} a_i ).

Therefore MIP is equivalent to:
MIP : maximize_{R,z} ln(det(R^{−1} ))
      s.t. a_i^T z + sqrt(a_i^T R^{−1} a_i ) ≤ b_i , i = 1, . . . , k
           R ≻ 0.

Now factor R^{−1} = M^2 where M ≻ 0 (that is, M is a square root of R^{−1} ), and now MIP becomes:

MIP : maximize_{M,z} ln(det(M^2 ))
      s.t. a_i^T z + sqrt(a_i^T M^T M a_i ) ≤ b_i , i = 1, . . . , k
           M ≻ 0,

which we can re-write as:

MIP : maximize_{M,z} 2 ln(det(M ))
      s.t. a_i^T M^T M a_i ≤ (b_i − a_i^T z)^2 , i = 1, . . . , k
           b_i − a_i^T z ≥ 0, i = 1, . . . , k
           M ≻ 0.

Next notice the equivalence:

[ (b_i − a_i^T z)I   M a_i
  (M a_i )^T         (b_i − a_i^T z) ] ⪰ 0   if and only if   a_i^T M^T M a_i ≤ (b_i − a_i^T z)^2 and b_i − a_i^T z ≥ 0.

Therefore MIP becomes:

MIP : maximize_{M,z} 2 ln(det(M ))
      s.t. [ (b_i − a_i^T z)I   M a_i
             (M a_i )^T         (b_i − a_i^T z) ] ⪰ 0 , i = 1, . . . , k,
           M ≻ 0.

Notice that this last program involves semidefinite constraints where all of the matrix coefficients are linear functions of the variables M and z. However, the objective function is not a linear function. It is possible, but not necessary in practice, to convert this problem further into a genuine instance of SDP , because there is a way to convert constraints of the form

− ln(det(X)) ≤ θ

to a semidefinite system. Such a conversion is not necessary, either from a theoretical or a practical viewpoint, because it turns out that the function f (X) = − ln(det(X)) is extremely well-behaved and is very easy to optimize (both in theory and in practice).
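The closed-form value of the inner maximization used above, a^T z + sqrt(a^T R^{−1} a), can be verified numerically in 2-D for a diagonal R (hypothetical data; the ellipsoid boundary is sampled by angle):

```python
import math

# max { a^T x : (x - z)^T R (x - z) <= 1 }  should equal  a^T z + sqrt(a^T R^{-1} a).
a = [3.0, -1.0]
z = [1.0, 2.0]
r = [4.0, 0.25]                  # R = diag(r), so R^{-1} = diag(1/r_i)

closed_form = sum(a[i] * z[i] for i in range(2)) + \
    math.sqrt(sum(a[i] * a[i] / r[i] for i in range(2)))

# Sample the boundary: x = z + (cos t / sqrt(r_0), sin t / sqrt(r_1)).
best = max(
    a[0] * (z[0] + math.cos(t) / math.sqrt(r[0]))
    + a[1] * (z[1] + math.sin(t) / math.sqrt(r[1]))
    for t in (2 * math.pi * k / 100000 for k in range(100000))
)
```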
8.5

There are many types of eigenvalue optimization problems that can be formulated as SDP s. A typical eigenvalue optimization problem is to choose weights w_1 , . . . , w_k to create a matrix

S := B − Σ_{i=1}^k w_i A_i

whose eigenvalues are as tightly clustered as possible, that is, to solve:

EOP : minimize_{w,S} λ_max (S) − λ_min (S)
      s.t. S = B − Σ_{i=1}^k w_i A_i ,

where λ_min (S) and λ_max (S) denote the smallest and the largest eigenvalue of S, respectively. We now show how to convert this problem into an SDP . Note that λI ⪯ S ⪯ μI if and only if λI ⪯ D ⪯ μI, where S = QDQ^T is an eigendecomposition of S. Therefore EOP is equivalent to:

EOP : minimize_{w,S,λ,μ} μ − λ
      s.t. S = B − Σ_{i=1}^k w_i A_i
           λI ⪯ S ⪯ μI.

This last problem is a semidefinite program.

Using constructs such as those shown above, very many other types of eigenvalue optimization problems can be formulated as SDP s.

A variety of control and system problems can be cast and solved as instances of SDP . However, this topic is beyond the scope of these notes.
10

The interior of the cone S^n_+ is

int S^n_+ = {X ∈ S^n | λ_1 (X) > 0, . . . , λ_n (X) > 0}.

A natural barrier function to use to repel X from the boundary of S^n_+ then is

− Σ_{j=1}^n ln(λ_j (X)) = − ln( Π_{j=1}^n λ_j (X) ) = − ln(det(X)).

The barrier problem associated with SDP , with barrier parameter μ > 0, is then:

BSDP (μ) : minimize C • X − μ ln(det(X))
           s.t. A_i • X = b_i , i = 1, . . . , m,
                X ≻ 0.

Let f_μ (X) denote the objective function of BSDP (μ). Then it is not too difficult to derive:

∇f_μ (X) = C − μX^{−1} ,   (1)

and so the optimality conditions for BSDP (μ) are:

A_i • X = b_i , i = 1, . . . , m,
X ≻ 0,
C − μX^{−1} = Σ_{i=1}^m y_i A_i .   (2)

Define

S = μX^{−1} = μ L^{−T} L^{−1} ,

where X = LL^T , which implies

(1/μ) L^T SL = I.

Then the optimality conditions can be written as:

A_i • X = b_i , i = 1, . . . , m, X ≻ 0, X = LL^T
Σ_{i=1}^m y_i A_i + S = C
I − (1/μ) L^T SL = 0.   (3)

From the equations of (3) it follows that if (X, y, S) is a solution of (3), then X is feasible for SDP , (y, S) is feasible for SDD, and the resulting duality gap is

S • X = Σ_{i=1}^n Σ_{j=1}^n S_ij X_ij = Σ_{j=1}^n (SX)_jj = Σ_{j=1}^n μ = nμ.
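A quick numerical illustration of the duality gap on the central path, with a hypothetical diagonal X (so that X^{−1} is easy to form by hand):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def frob_inner(A, B):
    return sum(A[i][j] * B[i][j] for i in range(len(A)) for j in range(len(A)))

mu = 0.5
# A diagonal X ≻ 0 keeps the inverse trivial for the illustration.
xdiag = [2.0, 4.0, 5.0]
n = len(xdiag)
X = [[xdiag[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
S = [[mu / xdiag[i] if i == j else 0.0 for j in range(n)] for i in range(n)]  # S = mu * X^{-1}

gap = frob_inner(S, X)           # equals trace(SX) = n * mu
```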
The Frobenius norm of a matrix M is defined as:

‖M ‖ = sqrt( M • M ) = sqrt( Σ_{i=1}^n Σ_{j=1}^n M_ij^2 ).

For some important properties of the Frobenius norm, see the last subsection of this section. A β-approximate solution of BSDP (μ) is defined as any solution (X, y, S) of

A_i • X = b_i , i = 1, . . . , m, X ≻ 0, X = LL^T
Σ_{i=1}^m y_i A_i + S = C
‖I − (1/μ) L^T SL‖ ≤ β.   (4)
(4)
y,
is a -approximate solution of BSDP () and
Lemma 10.1 If (X,
S)
m
X
i=1
S n(1 + ).
y i bi = X
(5)
(6)
T (I R)L
1 0
S = L
S = trace(X
S) =
because kRk < 1 implies that I R 0. We also have X
T
T
L
S) = trace(L
SL
) = trace(I R) = (n trace(R)). However,
trace(L
29
10.1
The Algorithm
Based on the analysis just presented, we are motivated to develop the following algorithm:

Step 0. Initialization. Data is (X^0 , y^0 , S^0 , μ^0 ). k = 0. Assume that (X^0 , y^0 , S^0 ) is a β-approximate solution of BSDP (μ^0 ) for some known value of β that satisfies β < 1.

Step 1. Set current values. (X̄, ȳ, S̄) = (X^k , y^k , S^k ), μ̄ = μ^k .

Step 2. Shrink μ. Set μ′ = αμ̄, where

α = 1 − (γ − β)/(γ + √n)

and γ = √β.

Step 3. Compute the Newton step. Compute the Newton step D for BSDP (μ′ ) at X = X̄ by factoring X̄ = L̄L̄^T and solving the following system of equations in the variables (D, y):

C − μ′ X̄^{−1} + μ′ X̄^{−1} D X̄^{−1} = Σ_{i=1}^m y_i A_i
A_i • D = 0, i = 1, . . . , m.   (7)

Step 4. Update values:

X′ = X̄ + D
S′ = C − Σ_{i=1}^m y_i A_i

Set (X^{k+1} , y^{k+1} , S^{k+1} ) = (X′ , y, S′ ), μ^{k+1} = μ′ . k ← k + 1. Go to Step 1.
10.2

The Newton step is derived as follows. BSDP (μ′ ) is the problem:

BSDP (μ′ ) : minimize f (X) = C • X − μ′ ln(det(X))
            s.t. A_i • X = b_i , i = 1, . . . , m,
                 X ≻ 0.

The quadratic approximation of f (X) at X = X̄ gives rise to the problem:

minimize_X f (X̄) + (C − μ′ X̄^{−1} ) • (X − X̄) + (μ′ /2) X̄^{−1} (X − X̄) X̄^{−1} • (X − X̄)
s.t. A_i • X = b_i , i = 1, . . . , m.

Letting D denote X − X̄, the equality constraints can be written as

A_i • D = 0, i = 1, . . . , m.

The solution to this program will be the Newton direction. The Karush-Kuhn-Tucker conditions for this program are necessary and sufficient, and are:

C − μ′ X̄^{−1} + μ′ X̄^{−1} D′ X̄^{−1} = Σ_{i=1}^m y′_i A_i
A_i • D′ = 0, i = 1, . . . , m.   (8)

These equations are called the Normal Equations. Let D′ and y′ denote the solution to the Normal Equations. Note in particular from the first equation that the new primal iterate is

X′ = X̄ + D′ .

We can produce new values of the dual variables (y, S) by setting the new value of y to y′ and setting

S′ = μ′ ( X̄^{−1} − X̄^{−1} D′ X̄^{−1} ).   (9)

Then X′ = X̄ + D′ and S′ = μ′ (X̄^{−1} − X̄^{−1} D′ X̄^{−1} ) satisfy

Σ_{i=1}^m y′_i A_i + S′ = C.

The next theorem asserts that the Newton step converges quadratically:

Theorem 10.1 (Quadratic Convergence). Suppose that (X̄, ȳ, S̄) is a γ-approximate solution of BSDP (μ′ ) with γ < 1, and let (D′ , y′ ) solve the Normal Equations (8). Then (X′ , y′ , S′ ) is a γ^2 -approximate solution of BSDP (μ′ ).

Proof: Note first that A_i • D′ = 0, i = 1, . . . , m, and Σ_{i=1}^m y′_i A_i + S′ = C. Writing X̄ = L̄L̄^T , we also have

X′ = X̄ + D′ = L̄ (I + L̄^{−1} D′ L̄^{−T} ) L̄^T

and

S′ = μ′ (X̄^{−1} − X̄^{−1} D′ X̄^{−1} ) = μ′ L̄^{−T} (I − L̄^{−1} D′ L̄^{−T} ) L̄^{−1} .
We will first show that ‖L̄^{−1} D′ L̄^{−T} ‖ ≤ γ. It turns out that this is the crucial fact from which everything will follow nicely. To prove this, note that

Σ_{i=1}^m ȳ_i A_i + S̄ = C = Σ_{i=1}^m y′_i A_i + μ′ L̄^{−T} (I − L̄^{−1} D′ L̄^{−T} ) L̄^{−1} .

Taking the inner product of both sides with D′ and using A_i • D′ = 0 yields

S̄ • D′ = μ′ (L̄^{−T} L̄^{−1} ) • D′ − μ′ (L̄^{−T} L̄^{−1} D′ L̄^{−T} L̄^{−1} ) • D′ ,

from which we see that ‖L̄^{−1} D′ L̄^{−T} ‖ ≤ γ.

It therefore follows that

X′ = L̄ (I + L̄^{−1} D′ L̄^{−T} ) L̄^T ≻ 0

and

S′ = μ′ L̄^{−T} (I − L̄^{−1} D′ L̄^{−T} ) L̄^{−1} ≻ 0,

since ‖L̄^{−1} D′ L̄^{−T} ‖ ≤ γ < 1, which guarantees that I ± L̄^{−1} D′ L̄^{−T} ≻ 0.

Next, factorize

I + L̄^{−1} D′ L̄^{−T} = M^2 ,

where M ≻ 0 is symmetric, and define L′ = L̄M , so that X′ = L′ (L′ )^T . Then note that

I − (1/μ′ ) (L′ )^T S′ L′ = I − (1/μ′ ) M L̄^T S′ L̄ M
= I − M (I − L̄^{−1} D′ L̄^{−T} ) M
= I − M M + M (M M − I) M
= (I − M M )(I − M M )
= (L̄^{−1} D′ L̄^{−T} )(L̄^{−1} D′ L̄^{−T} ).

Therefore

‖I − (1/μ′ ) (L′ )^T S′ L′ ‖ = ‖(L̄^{−1} D′ L̄^{−T} )^2 ‖ ≤ ‖L̄^{−1} D′ L̄^{−T} ‖^2 ≤ γ^2 ,

which proves the theorem. q.e.d.
10.3

Theorem 10.2 (Relaxation Theorem). Suppose that (X̄, ȳ, S̄) is a β-approximate solution of BSDP (μ̄) and β < γ < 1. Let

α = 1 − (γ − β)/(γ + √n)

and let μ′ = αμ̄. Then (X̄, ȳ, S̄) is a γ-approximate solution of BSDP (μ′ ).

Proof: The triplet (X̄, ȳ, S̄) satisfies A_i • X̄ = b_i , i = 1, . . . , m, X̄ ≻ 0, and Σ_{i=1}^m ȳ_i A_i + S̄ = C, and so it remains to show that

‖I − (1/μ′ ) L̄^T S̄ L̄‖ ≤ γ,

where X̄ = L̄L̄^T . We have

‖I − (1/μ′ ) L̄^T S̄ L̄‖ = ‖I − (1/(αμ̄)) L̄^T S̄ L̄‖
= ‖(1/α) (I − (1/μ̄) L̄^T S̄ L̄) + (1 − 1/α) I‖
≤ (1/α) ‖I − (1/μ̄) L̄^T S̄ L̄‖ + (1/α − 1) ‖I‖
≤ β/α + (1/α − 1) √n
= (1/α) (β + (1 − α) √n)
= γ.

q.e.d.
Theorem 10.3 (Convergence Theorem). Suppose that (X^0 , y^0 , S^0 ) is a β-approximate solution of BSDP (μ^0 ) and β < 1. Then for all k = 1, 2, 3, . . ., (X^k , y^k , S^k ) is a β-approximate solution of BSDP (μ^k ).

Proof: By induction, suppose that the theorem is true for iterates 0, 1, 2, . . . , k. Then (X^k , y^k , S^k ) is a β-approximate solution of BSDP (μ^k ). From the Relaxation Theorem, (X^k , y^k , S^k ) is a γ-approximate solution of BSDP (μ^{k+1} ), where μ^{k+1} = αμ^k . From the Quadratic Convergence Theorem, (X^{k+1} , y^{k+1} , S^{k+1} ) is then a γ^2 = β-approximate solution of BSDP (μ^{k+1} ). q.e.d.
Theorem 10.4 (Complexity Theorem). Suppose that (X^0 , y^0 , S^0 ) is a β = 1/4-approximate solution of BSDP (μ^0 ), with γ = 1/2. Then the duality gap satisfies C • X^k − Σ_{i=1}^m b_i y_i^k ≤ ε after at most

k = ⌈ 6√n ln( 1.25 X^0 • S^0 / (0.75 ε) ) ⌉

iterations.

Proof: Let k be as defined above. Note that

1 − α = (γ − β)/(γ + √n) = (1/2 − 1/4)/(1/2 + √n) = (1/4)/(1/2 + √n) ≥ 1/(6√n),

so that α ≤ 1 − 1/(6√n). Therefore, using Lemma 10.1 twice (the upper bound at μ^k , and the lower bound at μ^0 , which gives nμ^0 ≤ X^0 • S^0 /0.75):

C • X^k − Σ_{i=1}^m b_i y_i^k = X^k • S^k ≤ nμ^k (1 + β) = α^k (1.25 n μ^0 ) ≤ (1 − 1/(6√n))^k (1.25 X^0 • S^0 / 0.75).

Taking logarithms, we obtain

ln( C • X^k − Σ_{i=1}^m b_i y_i^k ) ≤ k ln(1 − 1/(6√n)) + ln( 1.25 X^0 • S^0 / 0.75 )
≤ −k/(6√n) + ln( 1.25 X^0 • S^0 / 0.75 )
≤ ln(ε).

Therefore C • X^k − Σ_{i=1}^m b_i y_i^k ≤ ε. q.e.d.
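To get a feel for the bound, here is the iteration count for hypothetical values n = 100, X^0 • S^0 = 10^4, and ε = 10^−6 (the specific numbers are illustrative only):

```python
import math

n = 100
initial_gap = 1.0e4      # hypothetical X^0 • S^0
epsilon = 1.0e-6         # target duality gap

# k = ceil( 6 sqrt(n) ln( 1.25 X0•S0 / (0.75 eps) ) )
k = math.ceil(6 * math.sqrt(n) * math.log(1.25 * initial_gap / (0.75 * epsilon)))
```

Note the O(√n ln(1/ε)) growth: doubling the accuracy requirement adds only an additive term to the iteration count.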
10.4

We now show how to find a 1/4-approximate solution of BSDP (μ^0 ) to initiate the algorithm, starting from any X^0 satisfying A_i • X^0 = b_i , i = 1, . . . , m, and X^0 ≻ 0. At the current point X̄ (with X̄ = L̄L̄^T ), compute the Newton direction (D′ , y′ ) for BSDP (μ^0 ) by solving

C − μ^0 X̄^{−1} + μ^0 X̄^{−1} D X̄^{−1} = Σ_{i=1}^m y_i A_i
A_i • D = 0, i = 1, . . . , m,   (10)

and set

S′ = C − Σ_{i=1}^m y′_i A_i .

Step 3. Test the current point. If ‖L̄^{−1} D′ L̄^{−T} ‖ ≤ 1/4, stop. In this case, X̄ is a 1/4-approximate solution of BSDP (μ^0 ), along with the dual values (y′ , S′ ).

Step 4. Update. Otherwise, set

X′ = X̄ + ᾱ D′ , where ᾱ = 0.2 / ‖L̄^{−1} D′ L̄^{−T} ‖.

Alternatively, ᾱ can be computed by a line-search of f_{μ^0} (X̄ + ᾱ D′ ).

To see why the stopping rule of Step 3 is valid, note that (10) implies Σ_{i=1}^m y′_i A_i + S′ = C and

I − (1/μ^0 ) L̄^T S′ L̄ = I − (1/μ^0 ) L̄^T ( μ^0 L̄^{−T} L̄^{−1} − μ^0 L̄^{−T} L̄^{−1} D′ L̄^{−T} L̄^{−1} ) L̄ = L̄^{−1} D′ L̄^{−T} ,

whereby

‖I − (1/μ^0 ) L̄^T S′ L̄‖ = ‖L̄^{−1} D′ L̄^{−T} ‖ ≤ 1/4.

q.e.d.
Proposition 10.2 Let D̂ = (ᾱ / ‖L̄^{−1} D′ L̄^{−T} ‖) D′ , where 0 < ᾱ < 1. Then

f_{μ^0} (X̄ + D̂) ≤ f_{μ^0} (X̄) − μ^0 ᾱ ‖L̄^{−1} D′ L̄^{−T} ‖ + μ^0 ᾱ^2 / (2(1 − ᾱ)).

In particular, for ᾱ = 0.2 and ‖L̄^{−1} D′ L̄^{−T} ‖ > 1/4,

f_{μ^0} (X̄ + D̂) ≤ f_{μ^0} (X̄) − 0.025 μ^0 .   (11)

In order to prove this proposition, we will need two powerful facts about the logarithm function:

Fact 1. Suppose that |x| ≤ δ < 1. Then

ln(1 + x) ≥ x − x^2 / (2(1 − δ)).

Proof: From the power series expansion,

ln(1 + x) = x − x^2 /2 + x^3 /3 − x^4 /4 + . . .
≥ x − |x|^2 /2 − |x|^3 /2 − |x|^4 /2 − . . .
= x − (|x|^2 /2)(1 + |x| + |x|^2 + . . .)
= x − x^2 / (2(1 − |x|))
≥ x − x^2 / (2(1 − δ)).

q.e.d.
Fact 2. Suppose that R ∈ S^n and that ‖R‖ ≤ δ < 1. Then

ln(det(I + R)) ≥ I • R − ‖R‖^2 / (2(1 − δ)).

Proof: Factorize R = QDQ^T , where Q is orthonormal and D is diagonal. Each eigenvalue satisfies |D_jj | ≤ ‖R‖ ≤ δ, so applying Fact 1 to each term:

ln(det(I + R)) = Σ_{j=1}^n ln(1 + D_jj ) ≥ Σ_{j=1}^n D_jj − ( Σ_{j=1}^n D_jj^2 ) / (2(1 − δ)) = I • D − ‖R‖^2 / (2(1 − δ)) = I • R − ‖R‖^2 / (2(1 − δ)),

where we use I • D = I • (Q^T RQ) = I • R and Σ_{j=1}^n D_jj^2 = ‖R‖^2 . q.e.d.

We can now prove Proposition 10.2. Let

D̂ = (ᾱ / ‖L̄^{−1} D′ L̄^{−T} ‖) D′ ,

so that

‖L̄^{−1} D̂ L̄^{−T} ‖ = ᾱ.
Then

f_{μ^0} (X̄ + D̂) = C • (X̄ + D̂) − μ^0 ln(det(X̄ + D̂))
= C • X̄ + C • D̂ − μ^0 ln(det( L̄ (I + L̄^{−1} D̂ L̄^{−T} ) L̄^T ))
= C • X̄ − μ^0 ln(det(X̄)) + C • D̂ − μ^0 ln(det(I + L̄^{−1} D̂ L̄^{−T} ))
≤ f_{μ^0} (X̄) + C • D̂ − μ^0 I • (L̄^{−1} D̂ L̄^{−T} ) + μ^0 ᾱ^2 / (2(1 − ᾱ))
= f_{μ^0} (X̄) + C • D̂ − μ^0 (L̄^{−T} L̄^{−1} ) • D̂ + μ^0 ᾱ^2 / (2(1 − ᾱ))
= f_{μ^0} (X̄) + (C − μ^0 X̄^{−1} ) • D̂ + μ^0 ᾱ^2 / (2(1 − ᾱ)),

where the inequality is an application of Fact 2 with δ = ᾱ. Next, recall the Normal Equations for BSDP (μ^0 ):

C − μ^0 X̄^{−1} + μ^0 X̄^{−1} D′ X̄^{−1} = Σ_{i=1}^m y′_i A_i
A_i • D′ = 0, i = 1, . . . , m.   (12)

Taking the inner product of both sides of the first equation above with D′ and rearranging yields:

μ^0 X̄^{−1} D′ X̄^{−1} • D′ = −(C − μ^0 X̄^{−1} ) • D′ .

Therefore

f_{μ^0} (X̄ + D̂) ≤ f_{μ^0} (X̄) − (ᾱ / ‖L̄^{−1} D′ L̄^{−T} ‖) μ^0 X̄^{−1} D′ X̄^{−1} • D′ + μ^0 ᾱ^2 / (2(1 − ᾱ))
= f_{μ^0} (X̄) − (ᾱ / ‖L̄^{−1} D′ L̄^{−T} ‖) μ^0 (L̄^{−1} D′ L̄^{−T} ) • (L̄^{−1} D′ L̄^{−T} ) + μ^0 ᾱ^2 / (2(1 − ᾱ))
= f_{μ^0} (X̄) − (ᾱ / ‖L̄^{−1} D′ L̄^{−T} ‖) μ^0 ‖L̄^{−1} D′ L̄^{−T} ‖^2 + μ^0 ᾱ^2 / (2(1 − ᾱ))
= f_{μ^0} (X̄) − μ^0 ᾱ ‖L̄^{−1} D′ L̄^{−T} ‖ + μ^0 ᾱ^2 / (2(1 − ᾱ)).

Substituting ᾱ = 0.2 and ‖L̄^{−1} D′ L̄^{−T} ‖ > 1/4 yields

f_{μ^0} (X̄ + D̂) ≤ f_{μ^0} (X̄) − 0.05 μ^0 + 0.025 μ^0 = f_{μ^0} (X̄) − 0.025 μ^0 .

q.e.d.
Last of all, we prove a bound on the number of iterations that the algorithm will need in order to find a 1/4-approximate solution of BSDP (μ^0 ):

Proposition 10.3 Suppose that X^0 satisfies A_i • X^0 = b_i , i = 1, . . . , m, and X^0 ≻ 0. Let μ^0 be given and let f*_{μ^0} be the optimal objective function value of BSDP (μ^0 ). Then the algorithm initiated at X^0 will find a 1/4-approximate solution of BSDP (μ^0 ) in at most

k = ⌈ ( f_{μ^0} (X^0 ) − f*_{μ^0} ) / (0.025 μ^0 ) ⌉

iterations.

Proof: This follows immediately from (11). Each iteration that does not yield a 1/4-approximate solution decreases the objective function f_{μ^0} (X) of BSDP (μ^0 ) by at least 0.025 μ^0 . Therefore, there cannot be more than

⌈ ( f_{μ^0} (X^0 ) − f*_{μ^0} ) / (0.025 μ^0 ) ⌉

such iterations. q.e.d.
10.5

We conclude with some useful properties of the Frobenius norm. Suppose that M, A, B are n × n matrices, with M symmetric with eigenvalues λ_1 (M ), . . . , λ_n (M ). Then:

1. ‖M ‖ = sqrt( Σ_{j=1}^n (λ_j (M ))^2 ).
2. |λ_j (M )| ≤ ‖M ‖, j = 1, . . . , n.
3. |trace(M )| ≤ √n ‖M ‖.
4. If ‖M ‖ < 1, then I + M ≻ 0.
5. ‖AB‖ ≤ ‖A‖ ‖B‖.

Proof: We can factorize M = QDQ^T where Q is orthonormal and D is a diagonal matrix of the eigenvalues of M . Then

‖M ‖ = sqrt( trace(M^T M ) ) = sqrt( trace(DD) ) = sqrt( Σ_{j=1}^n (λ_j (M ))^2 ).

This proves the first two assertions. To prove the third assertion, note that

trace(M ) = trace(QDQ^T ) = trace(Q^T QD) = trace(D) = Σ_{j=1}^n λ_j (M ) ≤ √n sqrt( Σ_{j=1}^n (λ_j (M ))^2 ) = √n ‖M ‖,

by the Cauchy-Schwarz inequality (and similarly for −trace(M )). The fourth assertion follows from the second, since ‖M ‖ < 1 implies λ_j (M ) > −1 for all j. Finally,

‖AB‖ = sqrt( Σ_{i=1}^n Σ_{j=1}^n ( Σ_{k=1}^n A_ik B_kj )^2 )
≤ sqrt( Σ_{i=1}^n Σ_{j=1}^n ( Σ_{k=1}^n A_ik^2 ) ( Σ_{k=1}^n B_kj^2 ) )
= sqrt( ( Σ_{i=1}^n Σ_{k=1}^n A_ik^2 ) ( Σ_{j=1}^n Σ_{k=1}^n B_kj^2 ) )
= ‖A‖ ‖B‖,

where the inequality is again Cauchy-Schwarz. q.e.d.
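Two of these properties can be spot-checked numerically (arbitrary 2×2 symmetric test matrices):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def fro(M):
    """Frobenius norm: sqrt of the sum of squared entries."""
    return math.sqrt(sum(M[i][j] ** 2 for i in range(len(M)) for j in range(len(M))))

def trace(M):
    return sum(M[j][j] for j in range(len(M)))

A = [[1.0, 2.0], [2.0, -1.0]]
B = [[0.5, 1.0], [1.0, 0.0]]
n = 2

trace_bound_ok = abs(trace(A)) <= math.sqrt(n) * fro(A)   # |trace(M)| <= sqrt(n) ||M||
submult_ok = fro(matmul(A, B)) <= fro(A) * fro(B)         # ||AB|| <= ||A|| ||B||
```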
11

SDD can be written as the problem of maximizing Σ_{i=1}^m y_i b_i over the feasible region

F := { (y_1 , . . . , y_m ) ∈ R^m | C − Σ_{i=1}^m y_i A_i ⪰ 0 }.

Recall that at any iteration of the ellipsoid algorithm, the set of solutions of SDD is known to lie in the current ellipsoid, and the center of the current ellipsoid is, say, ȳ = (ȳ_1 , . . . , ȳ_m ). If ȳ ∈ F , then we perform an optimality cut of the form Σ_{i=1}^m y_i b_i ≥ Σ_{i=1}^m ȳ_i b_i , and use standard formulas to update the ellipsoid and its new center. If ȳ ∉ F , then we perform a feasibility cut by computing a vector h̄ ∈ R^m such that h̄^T ȳ > h̄^T y for all y ∈ F .

There are four issues that must be resolved in order to implement the above version of the ellipsoid algorithm to solve SDD. For the feasibility cut, note that when ȳ ∉ F one can compute a vector v̄ for which

v̄^T C v̄ < v̄^T ( Σ_{i=1}^m ȳ_i A_i ) v̄ = Σ_{i=1}^m ȳ_i v̄^T A_i v̄ = ȳ^T h̄,

where h̄_i := v̄^T A_i v̄, i = 1, . . . , m. Then for any y ∈ F we have h̄^T y ≤ v̄^T C v̄ < h̄^T ȳ, so h̄ yields the desired feasibility cut.
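A concrete sketch of the feasibility cut with hypothetical diagonal data (a single constraint matrix, m = 1), where the violating direction v̄ is simply a coordinate vector:

```python
# Hypothetical diagonal data: C = diag(1, 1), A1 = diag(1, 2), so F = {y | y <= 1/2}.
C_diag  = [1.0, 1.0]
A1_diag = [1.0, 2.0]

y_bar = 1.0                                   # infeasible: C - y_bar*A1 = diag(0, -1)
slack = [C_diag[j] - y_bar * A1_diag[j] for j in range(2)]
j = min(range(2), key=lambda t: slack[t])     # index of the negative diagonal entry

# With v = e_j:  h = v^T A1 v = (A1)_jj  and  v^T C v = (C)_jj.
h = A1_diag[j]
vCv = C_diag[j]

# Any feasible y satisfies h*y <= v^T C v < h*y_bar, so h separates y_bar from F.
y_feasible = 0.25                              # C - y*A1 = diag(0.75, 0.5) is PSD
cut_separates = (h * y_feasible <= vCv < h * y_bar)
```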
12
13
Because SDP has so many applications, and because interior point methods show so much promise, perhaps the most exciting area of research on SDP has to do with computation and implementation of interior point algorithms for solving SDP . Much research has focused on the practical efficiency of interior point methods for SDP . However, in the research to date, computational issues have arisen that are much more complex than those for linear programming, and these computational issues are only beginning to be well-understood. They probably stem from a variety of factors, including the fact that SDP is not guaranteed to have strictly complementary optimal solutions (as is the case in linear programming). Finally, because SDP is such a new field, there is no representative suite of practical problems on which to test algorithms, i.e., there is no equivalent version of the netlib suite of industrial linear programming problems.
MIT OpenCourseWare
http://ocw.mit.edu
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.