
EE103 (Fall 2011-12)

18. Linear optimization

• linear program
• examples
• geometrical interpretation
• extreme points
• simplex method

18-1

Linear program

    minimize    c1x1 + c2x2 + · · · + cnxn
    subject to  a11x1 + a12x2 + · · · + a1nxn ≤ b1
                a21x1 + a22x2 + · · · + a2nxn ≤ b2
                ...
                am1x1 + am2x2 + · · · + amnxn ≤ bm

• n optimization (decision) variables x1, . . . , xn
• m linear inequality constraints

matrix notation

    minimize    cT x
    subject to  Ax ≤ b

the inequality Ax ≤ b between vectors is interpreted elementwise

Linear optimization

18-2

Production planning

• a company makes three products, in quantities x1, x2, x3 per month
• profit per unit is 1.0 (for product 1), 1.4 (product 2), 1.6 (product 3)
• products use different amounts of resources (labor, material, . . . )
• fraction of total available resource needed per unit of each product:

                 product 1    product 2    product 3
    resource 1    1/1000       1/800        1/500
    resource 2    1/1200       1/700        1/600

optimal production plan

    maximize    x1 + 1.4x2 + 1.6x3
    subject to  (1/1000)x1 + (1/800)x2 + (1/500)x3 ≤ 1
                (1/1200)x1 + (1/700)x2 + (1/600)x3 ≤ 1
                x1 ≥ 0, x2 ≥ 0, x3 ≥ 0

solution: x1 = 462, x2 = 431, x3 = 0
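The plan above can be reproduced numerically; a minimal sketch using SciPy's `linprog` (assuming SciPy is installed), with the profit vector negated since `linprog` minimizes:

```python
from scipy.optimize import linprog

# maximize profit = minimize its negative
c = [-1.0, -1.4, -1.6]
# resource usage per unit, as fractions of each resource's availability
A = [[1/1000, 1/800, 1/500],
     [1/1200, 1/700, 1/600]]
b = [1, 1]  # at most 100% of each resource

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print([round(xi) for xi in res.x])  # [462, 431, 0]
```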
Linear optimization

18-3

Network flow optimization

[Figure: a directed network with nine arcs carrying flows x1, . . . , x9; a total
flow t goes from source to sink, and each arc has a capacity (the numbers
3, 2, 1, 2, 1, 3, 3, 1, 1 shown in the figure), giving capacity constraints
0 ≤ xk ≤ ck for k = 1, . . . , 9]

    maximize    t
    subject to  flow conservation at the nodes
                capacity constraints on the arcs

Linear optimization

18-4

linear programming formulation

    maximize    t
    subject to  t = x1 + x2,  x1 + x3 = x4,  et cetera
                0 ≤ x1 ≤ 3,  0 ≤ x2 ≤ 2,  et cetera

(t = x1 + x2 is equivalent to the two inequalities t ≤ x1 + x2, t ≥ x1 + x2, . . . )

solution

[Figure: the network with the optimal flow on each arc]
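The network in the figure is not fully recoverable here, so as an illustration here is the same recipe on a small hypothetical network (source s, intermediate nodes a and b, a sink; arc capacities chosen arbitrarily), assuming SciPy is available. Equality constraints express flow conservation and bounds express capacities:

```python
from scipy.optimize import linprog

# hypothetical arcs: s->a, s->b, a->b, a->t, b->t, plus the total flow t
# variable order:    x_sa, x_sb, x_ab, x_at, x_bt, t
c = [0, 0, 0, 0, 0, -1]           # maximize t = minimize -t

# flow conservation: inflow = outflow at a and b; t = total flow out of s
A_eq = [[1, 0, -1, -1, 0, 0],     # node a: x_sa = x_ab + x_at
        [0, 1, 1, 0, -1, 0],      # node b: x_sb + x_ab = x_bt
        [-1, -1, 0, 0, 0, 1]]     # source: t = x_sa + x_sb
b_eq = [0, 0, 0]

# capacity constraints 0 <= x_k <= c_k (t is only bounded below)
bounds = [(0, 3), (0, 2), (0, 1), (0, 2), (0, 3), (0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[-1])  # maximum flow; 5.0 for these capacities
```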

Linear optimization

18-5

Data fitting

fit a straight line v = α + βu to m points (ui, vi) by minimizing the sum of
absolute errors

                 m
    minimize    ∑ |α + βui − vi|
                i=1

[Figure: the m data points with two fitted lines]
dashed: least-squares solution; solid: minimizes sum of absolute errors
Linear optimization

18-6

linear programming formulation

                 m
    minimize    ∑ yi
                i=1
    subject to  −yi ≤ α + βui − vi ≤ yi,    i = 1, . . . , m

• variables α, β, y1, . . . , ym
• the inequalities are equivalent to yi ≥ |α + βui − vi|
• the optimal yi satisfies yi = |α + βui − vi|
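A sketch of this formulation with SciPy's `linprog` on a small made-up data set (the four points below are not the ones in the figure); the two-sided inequality becomes two rows per point:

```python
from scipy.optimize import linprog

u = [0.0, 1.0, 2.0, 3.0]
v = [0.0, 1.0, 2.0, 10.0]   # one outlier
m = len(u)

# variables: alpha, beta, y_1, ..., y_m; minimize the sum of the y_i
c = [0.0, 0.0] + [1.0] * m

A_ub, b_ub = [], []
for i in range(m):
    e = [1.0 if k == i else 0.0 for k in range(m)]
    # alpha + beta*u_i - v_i <= y_i
    A_ub.append([1.0, u[i]] + [-ek for ek in e]); b_ub.append(v[i])
    # -(alpha + beta*u_i - v_i) <= y_i
    A_ub.append([-1.0, -u[i]] + [-ek for ek in e]); b_ub.append(-v[i])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 2 + [(0, None)] * m)
print(res.fun)  # optimal sum of absolute errors; 7.0 for this data
```

Note how the outlier contributes its full error at the optimum while the other points are fit closely, the behavior the figure on the previous slide illustrates.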

Linear optimization

18-7

Terminology

    minimize    cT x
    subject to  Ax ≤ b

• problem data: n-vector c, m × n matrix A, m-vector b
• x is feasible if Ax ≤ b
• the feasible set is the set of all feasible points
• x⋆ is optimal if it is feasible and cT x⋆ ≤ cT x for all feasible x
• optimal value: p⋆ = cT x⋆
• unbounded problem: cT x is unbounded below on the feasible set
• infeasible problem: the feasible set is empty
Linear optimization

18-8

Example

    minimize    −x1 − x2
    subject to  2x1 + x2 ≤ 3
                x1 + 4x2 ≤ 5
                x1 ≥ 0, x2 ≥ 0

[Figure: the feasible set (shaded), bounded by the coordinate axes and the
lines 2x1 + x2 = 3 and x1 + 4x2 = 5]

Linear optimization

18-9

solution

    minimize    −x1 − x2
    subject to  2x1 + x2 ≤ 3
                x1 + 4x2 ≤ 5
                x1 ≥ 0, x2 ≥ 0

[Figure: the feasible set with several level lines −x1 − x2 = constant of the
objective]

optimal solution is x⋆ = (1, 1), optimal value is p⋆ = −2
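A quick numerical check of this example (a sketch assuming SciPy is available):

```python
from scipy.optimize import linprog

res = linprog([-1, -1],                       # minimize -x1 - x2
              A_ub=[[2, 1], [1, 4]], b_ub=[3, 5],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # [1. 1.] -2.0
```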


Linear optimization

18-10

Hyperplanes and halfspaces

hyperplane: the solution set of one linear equation with nonzero coefficient
vector a,

    aT x = b

halfspace: the solution set of one linear inequality with nonzero coefficient
vector a,

    aT x ≤ b

Linear optimization

18-11

Geometrical interpretation

    G = {x | aT x = b},        H = {x | aT x ≤ b}

[Figure: the hyperplane G and the halfspace H, with the vector a normal to G
and the point u = (b/‖a‖²)a on G]

• u = (b/‖a‖²)a satisfies aT u = b
• x is in G if aT (x − u) = 0, i.e., x − u is orthogonal to a
• x is in H if aT (x − u) ≤ 0, i.e., the angle ∠(x − u, a) is at least π/2
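These facts are easy to verify numerically (a sketch with NumPy, using a = (2, 1) and b = 3 as an arbitrary example):

```python
import numpy as np

a = np.array([2.0, 1.0])
b = 3.0

u = (b / np.dot(a, a)) * a        # the multiple of a that lies on G
assert abs(a @ u - b) < 1e-12     # u satisfies a^T u = b

x = np.array([0.0, 3.0])          # another point with a^T x = b, so x is in G
assert abs(a @ (x - u)) < 1e-12   # x - u is orthogonal to a

y = np.array([-1.0, 0.0])         # a^T y = -2 <= b, so y is in H
assert a @ (y - u) <= 0           # angle between y - u and a is at least pi/2
```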
Linear optimization

18-12

Example

[Figure, left: hyperplanes aT x = b for a = (2, 1) and several values of b
(the values b = 0, ±5, 10 are shown); right: the halfspace aT x ≤ 3 for the
same a]

Linear optimization

18-13

Polyhedron

definition: the solution set of a finite number of linear inequalities

    aT1 x ≤ b1,    aT2 x ≤ b2,    . . . ,    aTm x ≤ bm

matrix notation: Ax ≤ b where

    A = [ aT1            b = [ b1
          aT2                  b2
          ...                  ...
          aTm ],               bm ]

geometrical interpretation: the intersection of m halfspaces

Linear optimization

18-14

example (n = 2)

    x1 + x2 ≤ 1,    2x1 + x2 ≤ 2,    x1 ≥ 0,    x2 ≥ 0

[Figure: the polyhedron, bounded by the coordinate axes and the lines
x1 + x2 = 1 and 2x1 + x2 = 2]

Linear optimization

18-15

example (n = 3)

    0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2,    x1 + x2 + x3 ≤ 5

[Figure: the polyhedron, a cube with one corner cut off by the plane
x1 + x2 + x3 = 5; the labeled vertices are (0, 0, 2), (0, 2, 2), (1, 2, 2),
(2, 0, 2), (2, 1, 2), (2, 2, 1), (2, 0, 0), (0, 2, 0), (2, 2, 0)]
Linear optimization

18-16

Extreme points

let x⋆ ∈ P, where P is a polyhedron defined by inequalities aTk x ≤ bk

• if aTk x⋆ = bk, we say the kth inequality is active at x⋆
• if aTk x⋆ < bk, we say the kth inequality is inactive at x⋆

x⋆ is called an extreme point of P if the matrix of active constraints

    AI(x⋆) = [ aTk1
               aTk2
               ...
               aTkp ]

has a zero nullspace, where I(x⋆) = {k1, . . . , kp} are the indices of the
active constraints at x⋆

an extreme point x⋆ is nondegenerate if p = n, degenerate if p > n
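The definition translates directly into a numerical test (a sketch with NumPy; `is_extreme` is a name chosen here, not from the slides): collect the active rows of A and check that they have a zero nullspace, i.e. rank n.

```python
import numpy as np

def is_extreme(A, b, x, tol=1e-9):
    """Test whether a feasible x is an extreme point of {x | Ax <= b}."""
    active = np.abs(A @ x - b) < tol      # indices I(x) of active constraints
    AI = A[active]                        # the matrix A_I(x)
    # zero nullspace <=> the active rows have rank n
    return AI.shape[0] > 0 and np.linalg.matrix_rank(AI) == len(x)
```

For instance, for the unit box 0 ≤ x ≤ 1 in R² (written as −x ≤ 0, x ≤ 1), the corner (1, 1) is an extreme point while the edge midpoint (1, 1/2) is not.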
Linear optimization

18-17

example

    −x1 − x2 ≤ −1,    −2x1 + x2 ≤ 1,    −x1 ≤ 0,    −x2 ≤ 0

• x⋆ = (1, 0) is an extreme point:

    AI(x⋆) = [ −1  −1
                0  −1 ]

• x⋆ = (0, 1) is an extreme point:

    AI(x⋆) = [ −1  −1
               −2   1
               −1   0 ]

• x⋆ = (1/2, 1/2) is not an extreme point:

    AI(x⋆) = [ −1  −1 ]

[Figure: the polyhedron, bounded by the coordinate axes and the lines
x1 + x2 = 1 and −2x1 + x2 = 1]

Linear optimization

18-18

Simplex algorithm

    minimize    x1 + x2 − x3
    subject to  0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2
                x1 + x2 + x3 ≤ 5

[Figure: the feasible polyhedron with the path of extreme points followed by
the simplex method, from (2, 2, 0) to the optimal point (0, 0, 2)]

move from one extreme point to another extreme point with lower cost
Linear optimization

18-19

One iteration of the simplex method

suppose x⋆ is a nondegenerate extreme point

• renumber the constraints so that the active constraints are 1, . . . , n
• active constraints at x⋆:

    aT1 x⋆ = b1,    . . . ,    aTn x⋆ = bn

• inactive constraints at x⋆:

    aTn+1 x⋆ < bn+1,    . . . ,    aTm x⋆ < bm

• matrix of active constraints: define I = I(x⋆) = {1, . . . , n} and

    AI = [ aT1
           aT2
           ...
           aTn ]        (an n × n nonsingular matrix)
Linear optimization

18-20

step 1 (test for optimality) solve

    AI^T z = c

i.e., find z that satisfies

     n
    ∑ zk ak = c
    k=1

if zk ≤ 0 for all k, then x⋆ is optimal and we return x⋆

proof. consider any feasible x (sums over k = 1, . . . , n):

    cT x = ∑ zk aTk x  ≥  ∑ zk bk  =  ∑ zk aTk x⋆  =  cT x⋆

(the inequality follows from zk ≤ 0 and aTk x ≤ bk)


Linear optimization

18-21

step 2 (compute step) choose a j with zj > 0 and solve

    AI v = −ej

i.e., find v that satisfies

    aTj v = −1,        aTk v = 0  for k = 1, . . . , n and k ≠ j

for small t > 0, x⋆ + tv is feasible and has a smaller cost than x⋆

proof:

    aTj (x⋆ + tv) = bj − t
    aTk (x⋆ + tv) = bk        for k = 1, . . . , n and k ≠ j

and for sufficiently small positive t

    aTk (x⋆ + tv) ≤ bk        for k = n + 1, . . . , m

finally, cT (x⋆ + tv) = cT x⋆ − tzj < cT x⋆

Linear optimization

18-22

step 3 (compute step size) find the maximum t such that x⋆ + tv is feasible,
i.e.,

    aTk x⋆ + t aTk v ≤ bk,    k = 1, . . . , m

the maximum step size is

    tmax =      min        (bk − aTk x⋆) / (aTk v)
           k : aTk v > 0

(if aTk v ≤ 0 for all k, then the problem is unbounded below)

step 4 (update x⋆)

    x⋆ := x⋆ + tmax v
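Steps 1 through 4 can be collected into a single iteration (a sketch with NumPy, assuming the current point is a nondegenerate extreme point; `np.argmax` is one arbitrary way to pick a j with zj > 0):

```python
import numpy as np

def simplex_iteration(A, b, c, x, tol=1e-9):
    """One iteration of the method above for: minimize c^T x s.t. Ax <= b.
    x must be a nondegenerate extreme point. Returns (next x, is_optimal)."""
    active = np.where(np.abs(A @ x - b) < tol)[0]   # indices I(x)
    AI = A[active]                                  # n x n, nonsingular
    # step 1: solve AI^T z = c; if z <= 0, x is optimal
    z = np.linalg.solve(AI.T, c)
    if np.all(z <= tol):
        return x, True
    # step 2: pick a j with z_j > 0 and solve AI v = -e_j
    j = int(np.argmax(z))
    v = np.linalg.solve(AI, -np.eye(len(x))[j])
    # step 3: largest step t keeping A(x + tv) <= b
    Av = A @ v
    pos = Av > tol
    if not np.any(pos):
        raise ValueError("problem is unbounded below")
    tmax = np.min((b[pos] - A[pos] @ x) / Av[pos])
    # step 4: update
    return x + tmax * v, False
```

On the example of slide 18-19, repeated calls starting from the extreme point (2, 2, 0) reach the optimal point (0, 0, 2), though possibly through a different sequence of vertices than the one shown on the following slides, since the choice of j is not unique.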

Linear optimization

18-23

Example

    minimize    x1 + x2 − x3
    subject to  0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2
                x1 + x2 + x3 ≤ 5

at x⋆ = (2, 2, 0):

    AI = [ 1  0  0
           0  1  0
           0  0 −1 ]

1. z = AI^{-T} c = (1, 1, 1)
2. for j = 3, v = AI^{-1} (0, 0, −1) = (0, 0, 1)
3. x⋆ + tv is feasible for t ≤ 1

[Figure: the move from (2, 2, 0) to (2, 2, 1)]

Linear optimization

18-24

Example

    minimize    x1 + x2 − x3
    subject to  0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2
                x1 + x2 + x3 ≤ 5

at x⋆ = (2, 2, 1):

    AI = [ 1  0  0
           0  1  0
           1  1  1 ]

1. z = AI^{-T} c = (2, 2, −1)
2. for j = 2, v = AI^{-1} (0, −1, 0) = (0, −1, 1)
3. x⋆ + tv is feasible for t ≤ 1

[Figure: the move from (2, 2, 1) to (2, 1, 2)]

Linear optimization

18-25

Example

    minimize    x1 + x2 − x3
    subject to  0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2
                x1 + x2 + x3 ≤ 5

at x⋆ = (2, 1, 2):

    AI = [ 1  0  0
           0  0  1
           1  1  1 ]

1. z = AI^{-T} c = (0, −2, 1)
2. for j = 3, v = AI^{-1} (0, 0, −1) = (0, −1, 0)
3. x⋆ + tv is feasible for t ≤ 1

[Figure: the move from (2, 1, 2) to (2, 0, 2)]

Linear optimization

18-26

Example

    minimize    x1 + x2 − x3
    subject to  0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2
                x1 + x2 + x3 ≤ 5

at x⋆ = (2, 0, 2):

    AI = [ 1  0  0
           0 −1  0
           0  0  1 ]

1. z = AI^{-T} c = (1, −1, −1)
2. for j = 1, v = AI^{-1} (−1, 0, 0) = (−1, 0, 0)
3. x⋆ + tv is feasible for t ≤ 2

[Figure: the move from (2, 0, 2) to (0, 0, 2)]

Linear optimization

18-27

Example

    minimize    x1 + x2 − x3
    subject to  0 ≤ x1 ≤ 2,    0 ≤ x2 ≤ 2,    0 ≤ x3 ≤ 2
                x1 + x2 + x3 ≤ 5

at x⋆ = (0, 0, 2):

    AI = [ −1  0  0
            0 −1  0
            0  0  1 ]

z = AI^{-T} c = (−1, −1, −1) ≤ 0, therefore x⋆ is optimal

[Figure: the optimal extreme point (0, 0, 2)]

x1
Linear optimization

18-28

Practical aspects

• at each iteration, we solve two sets of linear equations,

    AI^T z = c,        AI v = −ej,

  using an LU factorization or sparse LU factorization of AI
• an implementation requires a phase 1 to find a first extreme point
• simple modifications handle problems with degenerate extreme points
• very large LPs (several 100,000 variables and constraints) are routinely
  solved in practice
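Both systems share the matrix AI, so one LU factorization serves both solves; a sketch with SciPy (`trans=1` in `lu_solve` solves the transposed system), using the active-constraint matrix from the example at x⋆ = (2, 2, 1):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

AI = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [1.0, 1.0, 1.0]])
c = np.array([1.0, 1.0, -1.0])

lu_piv = lu_factor(AI)                # factor once per iteration
z = lu_solve(lu_piv, c, trans=1)      # solves AI^T z = c -> (2, 2, -1)
j = 1                                 # 0-based index of an entry with z_j > 0
v = lu_solve(lu_piv, -np.eye(3)[j])   # solves AI v = -e_j -> (0, -1, 1)
```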

Linear optimization

18-29
