
• Linear Dependence: Consider a set of vectors v1, ..., vn and scalars k1, ..., kn. These vectors are linearly independent if and only if

k1 v1 + k2 v2 + ... + kn vn = 0

has a unique solution given by k1 = k2 = ... = kn = 0. The vectors are linearly dependent if and only if there exist scalars k1, ..., kn, with at least one ki ≠ 0, satisfying the equation.
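Numerically, this condition can be checked by comparing the rank of the matrix whose columns are the vectors against the number of vectors. A minimal sketch in Python with numpy, using arbitrary example vectors:

```python
import numpy as np

# Columns of V are the vectors v1, v2, v3 (arbitrary example data).
V = np.column_stack([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 3.0],   # equals v1 + v2, so the set is dependent
])

# The vectors are linearly independent iff rank(V) equals the number of
# vectors, i.e. iff k1*v1 + ... + kn*vn = 0 has only the trivial solution.
independent = np.linalg.matrix_rank(V) == V.shape[1]
print("linearly independent:", independent)   # False
```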

• Uniqueness of Solution: Let A be an n × n matrix. Then the equation Ax = b has a unique solution if and only if |A| ≠ 0,
that is, if and only if A is nonsingular.

• Solution of Linear System: Let A be an n × n nonsingular matrix. Then the equation Ax = b has a unique solution given by

x = A^(-1) b

for any b ∈ R^n.
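Both this fact and the determinant test above can be illustrated with numpy on an arbitrary 2 × 2 system; note that in practice np.linalg.solve is preferred over forming A^(-1) explicitly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# |A| != 0, so A is nonsingular and Ax = b has a unique solution.
assert abs(np.linalg.det(A)) > 1e-12

x_inv   = np.linalg.inv(A) @ b   # x = A^(-1) b, as in the formula above
x_solve = np.linalg.solve(A, b)  # numerically preferable equivalent

print(x_inv, x_solve)            # both give the same unique solution [1, 3]
```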

• Cramer’s Rule: Let A be an n × n nonsingular matrix. Then the equation Ax = b has a unique solution given by

xj = |Aj| / |A|    for j = 1, ..., n

where Aj is the matrix A with b replacing the jth column of A.
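The rule translates directly into numpy; a sketch on an arbitrary small system (fine for illustration, though determinant-based solving is inefficient for large n):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b via Cramer's rule (illustrative implementation)."""
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("A is singular; Cramer's rule does not apply")
    n = A.shape[0]
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b                        # replace the jth column of A with b
        x[j] = np.linalg.det(Aj) / det_A    # xj = |Aj| / |A|
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(cramer(A, b))                         # matches np.linalg.solve(A, b)
```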

• Implicit Functions: Given an equation of the form

F(y, x1, ..., xn) = 0

if

1. F has continuous partial derivatives Fy, F1, ..., Fn,


2. at a point (y0, x10, ..., xn0) satisfying the equation, Fy ≠ 0,

then there exists an n-dimensional neighbourhood of (y0, x10, ..., xn0) in which y is an implicitly defined function of the variables
x1, ..., xn in the form y = f(x1, ..., xn).
Its partial derivatives can then be found using the formula:

fi = ∂y/∂xi = −Fi/Fy    for i = 1, 2, ..., n
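The formula can be checked symbolically with sympy on a one-variable example; the unit circle below is an arbitrary choice:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Arbitrary example: F(y, x) = x**2 + y**2 - 1 = 0 (the unit circle).
F = x**2 + y**2 - 1

Fy = sp.diff(F, y)   # partial derivative with respect to y
Fx = sp.diff(F, x)   # partial derivative with respect to x

# Implicit-function formula: dy/dx = -Fx / Fy, valid where Fy != 0.
dy_dx = sp.simplify(-Fx / Fy)
print(dy_dx)         # -x/y
```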

• Second-Derivative Test: If the value of the first derivative of a function f at x = x0 is f'(x0) = 0, the value of the function at
x0, f(x0), will be

– A relative maximum if the second-derivative value at x0 is f''(x0) < 0.

– A relative minimum if the second-derivative value at x0 is f''(x0) > 0.
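As a symbolic illustration, sympy can locate and classify critical points of an arbitrary example function:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                             # arbitrary example function

critical = sp.solve(sp.diff(f, x), x)      # points where f'(x0) = 0
for x0 in critical:
    f2 = sp.diff(f, x, 2).subs(x, x0)      # second-derivative value f''(x0)
    kind = "relative max" if f2 < 0 else "relative min" if f2 > 0 else "test fails"
    print(x0, kind)    # x0 = -1: relative max, x0 = 1: relative min
```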

• Concave and Convex Functions: Consider a function f(x). Then

– if f''(x) < 0 for all x, f(x) must be a strictly concave function.

– if f''(x) > 0 for all x, f(x) must be a strictly convex function.
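A quick symbolic check of the second-derivative sign, with arbitrary example functions:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# exp(x): f''(x) = exp(x) > 0 for all x, so exp(x) is strictly convex.
print(sp.diff(sp.exp(x), x, 2))    # exp(x)

# -x**2: f''(x) = -2 < 0 for all x, so -x**2 is strictly concave.
print(sp.diff(-x**2, x, 2))        # -2
```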

• Maclaurin series:

f(x) = f(0)/0! + [f'(0)/1!] x + [f''(0)/2!] x^2 + [f'''(0)/3!] x^3 + [f^(4)(0)/4!] x^4 + ... + [f^(n)(0)/n!] x^n

• Taylor series:

f(x) = f(x0)/0! + [f'(x0)/1!] (x − x0) + [f''(x0)/2!] (x − x0)^2 + ... + [f^(n)(x0)/n!] (x − x0)^n
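Both expansions can be reproduced with sympy's series function; a sketch with arbitrary example functions (the Maclaurin series is simply the Taylor expansion around x0 = 0):

```python
import sympy as sp

x = sp.symbols('x')

# Maclaurin series of cos(x): Taylor expansion around x0 = 0.
print(sp.series(sp.cos(x), x, 0, 6))    # 1 - x**2/2 + x**4/24 + O(x**6)

# Taylor series of log(x) around the arbitrary point x0 = 1.
print(sp.series(sp.log(x), x, 1, 4))    # expansion in powers of (x - 1)
```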

• Lagrange-Multiplier Method: Let f(x1, ..., xn) be a continuously differentiable objective function and let g(x1, ..., xn) = c
and h(x1, ..., xn) = d be two constraints. Then, for real numbers λ and µ, the Lagrangian function is

Z(λ, µ, x1, ..., xn) = f(x1, ..., xn) + λ[c − g(x1, ..., xn)] + µ[d − h(x1, ..., xn)]


The first-order conditions are given as
∂Z/∂λ = c − g(x1, ..., xn) = 0
∂Z/∂µ = d − h(x1, ..., xn) = 0
∂Z/∂xi = fi − λgi − µhi = 0    for i = 1, ..., n
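These conditions can be solved symbolically with sympy. A sketch of the single-constraint version (µ omitted), on an arbitrary example problem:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')

# Arbitrary example: maximize f = x1*x2 subject to g = x1 + x2 = 10.
f = x1 * x2
Z = f + lam * (10 - (x1 + x2))

# First-order conditions: all partial derivatives of Z equal zero.
foc = [sp.diff(Z, v) for v in (lam, x1, x2)]
print(sp.solve(foc, [lam, x1, x2], dict=True))   # x1 = x2 = 5, lam = 5
```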

• Maximization Problem: Consider the following maximization problem with inequality constraints

Maximize    π = f(x1, x2, ..., xn)
subject to  g^i(x1, x2, ..., xn) ≤ ri    (i = 1, ..., m)
and         xj ≥ 0    (j = 1, 2, ..., n)

The Lagrangian function is


Z = f(x1, x2, ..., xn) + Σi λi [ri − g^i(x1, x2, ..., xn)]

The Kuhn-Tucker conditions for a local maximum are


∂Z/∂xj ≤ 0,    xj ≥ 0,    xj (∂Z/∂xj) = 0    (j = 1, 2, ..., n)
∂Z/∂λi ≥ 0,    λi ≥ 0,    λi (∂Z/∂λi) = 0    (i = 1, ..., m)
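Problems of this form can be handed to a constrained numerical solver. A hedged sketch using scipy's SLSQP method on an arbitrary two-variable example (scipy minimizes, so the objective is negated; 'ineq' constraints require fun(x) ≥ 0):

```python
import numpy as np
from scipy.optimize import minimize

# Arbitrary example: maximize f = x1*x2 subject to x1 + 2*x2 <= 4, x >= 0.
res = minimize(
    lambda x: -(x[0] * x[1]),                # minimize -f to maximize f
    x0=np.array([1.0, 1.0]),
    method="SLSQP",
    bounds=[(0, None), (0, None)],           # xj >= 0
    constraints=[{"type": "ineq",
                  "fun": lambda x: 4 - (x[0] + 2 * x[1])}],  # r - g(x) >= 0
)
print(res.x)   # roughly [2.0, 1.0], where the constraint binds
```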

• Minimization Problem: Consider the following minimization problem with inequality constraints

Minimize    π = f(x1, x2, ..., xn)
subject to  g^i(x1, x2, ..., xn) ≥ ri    (i = 1, ..., m)
and         xj ≥ 0    (j = 1, 2, ..., n)

The Lagrangian function is


Z = f(x1, x2, ..., xn) + Σi λi [ri − g^i(x1, x2, ..., xn)]

The Kuhn-Tucker conditions for a local minimum are


∂Z/∂xj ≥ 0,    xj ≥ 0,    xj (∂Z/∂xj) = 0    (j = 1, 2, ..., n)
∂Z/∂λi ≤ 0,    λi ≥ 0,    λi (∂Z/∂λi) = 0    (i = 1, ..., m)
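The minimization case fits the same scipy solver directly, without negating the objective; another arbitrary example:

```python
import numpy as np
from scipy.optimize import minimize

# Arbitrary example: minimize f = x1**2 + x2**2 subject to x1 + x2 >= 2, x >= 0.
res = minimize(
    lambda x: x[0]**2 + x[1]**2,
    x0=np.array([1.0, 1.0]),
    method="SLSQP",
    bounds=[(0, None), (0, None)],           # xj >= 0
    constraints=[{"type": "ineq",
                  "fun": lambda x: x[0] + x[1] - 2}],   # g(x) - r >= 0
)
print(res.x)   # roughly [1.0, 1.0]
```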

• Consider the following homogeneous equation:


dy/dt + ay = 0
The definite solution of the above differential equation is:

y(t) = y(0) e^(−at)
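This solution can be recovered with sympy's dsolve; a and y0 below are illustrative symbols standing in for the coefficient and the initial condition:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.symbols('a', positive=True)   # illustrative decay coefficient
y0 = sp.symbols('y0')                # illustrative initial value y(0)
y = sp.Function('y')

# Solve dy/dt + a*y = 0 with the initial condition y(0) = y0.
sol = sp.dsolve(sp.Eq(y(t).diff(t) + a * y(t), 0), y(t), ics={y(0): y0})
print(sol)   # Eq(y(t), y0*exp(-a*t))
```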

• Consider the following nonhomogeneous equation:


dy/dt + ay = b
The definite solution of the above differential equation is:
 
y(t) = [y(0) − b/a] e^(−at) + b/a
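The same sympy approach verifies the nonhomogeneous closed form; a, b, and y0 are again illustrative symbols:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.symbols('a', positive=True)
b, y0 = sp.symbols('b y0')
y = sp.Function('y')

# Solve dy/dt + a*y = b with y(0) = y0, then compare against the formula above.
sol = sp.dsolve(sp.Eq(y(t).diff(t) + a * y(t), b), y(t), ics={y(0): y0})
diff = sol.rhs - ((y0 - b/a) * sp.exp(-a*t) + b/a)
print(sp.simplify(diff))   # 0, confirming the closed-form solution
```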
