F. Ramponi
The following notes contain the proof of Lyapunov's theorem for stability and asymptotic stability of an equilibrium point of a nonlinear system, along with applications to the proof of asymptotic stability of an equilibrium point via linearization, plus some comments on unstable equilibrium points. The material is adapted from Fornasini & Marchesini, Appunti di teoria dei sistemi (in Italian); the interested reader can find a general and broad exposition of Lyapunov theory in Khalil, Nonlinear systems.
1 Preliminaries
We will deal with a continuous-time, autonomous, time-invariant nonlinear system

$$\dot{x}(t) = f(x(t)) \tag{1}$$

For simplicity, suppose that $x(t) \in \mathbb{R}^n$, $f : \mathbb{R}^n \to \mathbb{R}^n$, and $f$ is continuous. Suppose also that (1) has an equilibrium point at the origin (this is again for simplicity; all the results hold in general), i.e. it holds

$$f(0) = 0$$

Let $\varphi(t; 0, \bar{x})$ denote the unique solution $x(t)$ to (1) that corresponds to $x(0) = \bar{x}$.
Stability
In the following, $B(\bar{x}, \rho)$ will denote the open ball centered at $\bar{x}$ of radius $\rho$, that is the set $\{x \in \mathbb{R}^n : \|x - \bar{x}\| < \rho\}$; $\bar{B}(\bar{x}, \rho)$ will denote the closed ball, or the set $\{x \in \mathbb{R}^n : \|x - \bar{x}\| \le \rho\}$; and $S(\bar{x}, \rho)$ will denote the sphere, or the set $\{x \in \mathbb{R}^n : \|x - \bar{x}\| = \rho\}$. Note that $S(\bar{x}, \rho)$ is the boundary of both $B(\bar{x}, \rho)$ and $\bar{B}(\bar{x}, \rho)$.
The following definitions of stability are due to Aleksandr Lyapunov (1857–1918). An equilibrium point $x_e$ of a nonlinear system is said to be stable if for all $\varepsilon > 0$ there exists a $\delta > 0$ such that

$$\bar{x} \in B(x_e, \delta) \implies \varphi(t; 0, \bar{x}) \in B(x_e, \varepsilon) \ \text{for all } t \ge 0$$

(In essence, the Lyapunov stability of $x_e$ asserts a simultaneous continuity, more precisely the equicontinuity at $x_e$, of all the functions in the family $\{\varphi_t : \bar{x} \mapsto \varphi(t; 0, \bar{x})\}_{t \ge 0}$.)
The equilibrium point $x_e$ is said to be asymptotically stable if for all $\varepsilon > 0$ there exists a $\delta > 0$ such that

$$\bar{x} \in B(x_e, \delta) \implies \begin{cases} \varphi(t; 0, \bar{x}) \in B(x_e, \varepsilon) \ \text{for all } t \ge 0 \\ \lim_{t \to +\infty} \varphi(t; 0, \bar{x}) = x_e \end{cases}$$

It is clear from the definitions that asymptotic stability implies stability.
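To make the two definitions concrete, here is a small numerical sketch (the matrices are illustrative choices, not taken from these notes) contrasting a linear system that is stable but not asymptotically stable with one that is asymptotically stable:

```python
import numpy as np
from scipy.linalg import expm

# Two linear systems x' = A x illustrating the definitions above
# (illustrative matrices, not taken from the notes).
A_stable = np.array([[0.0, 1.0],
                     [-1.0, 0.0]])   # harmonic oscillator: stable, not asymptotically stable
A_asympt = np.array([[0.0, 1.0],
                     [-1.0, -1.0]])  # damped oscillator: asymptotically stable

x0 = np.array([0.5, 0.0])

def flow(A, t, x0):
    """The solution phi(t; 0, x0) of the linear system x' = A x."""
    return expm(A * t) @ x0

# The oscillator's trajectories stay on a circle of radius ||x0||:
# bounded (stability) but never converging to the origin.
norms = [np.linalg.norm(flow(A_stable, t, x0)) for t in np.linspace(0.0, 50.0, 200)]

# The damped oscillator's trajectories converge to the equilibrium.
final_norm = np.linalg.norm(flow(A_asympt, 50.0, x0))

print(max(norms) - min(norms))   # ~0: the norm is constant along the trajectory
print(final_norm)                # ~0: the trajectory tends to the origin
```

Both systems keep trajectories inside an $\varepsilon$-ball around the origin, but only the second also satisfies the limit condition.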
Lyapunov functions
Here, a continuously differentiable function $V : \mathbb{R}^n \to \mathbb{R}$ is called positive definite if $V(0) = 0$ and there exists an open ball $B = B(0, \rho)$ such that $V(x) > 0$ for all $x \in B$, $x \ne 0$. The function $V$ is called positive semi-definite if there exists such a $B$ with $V(0) = 0$ and $V(x) \ge 0$ for all $x \in B$.
Analogously, $V$ is called negative definite or negative semi-definite if $V(0) = 0$ and, respectively, $V(x) < 0$ for all $x \in B$, $x \ne 0$, or $V(x) \le 0$ for all $x \in B$.
Positive definite (semi-definite, etc.) quadratic forms, i.e. functions of the form $V(x) = x^\top P x$ where $P \in \mathbb{R}^{n \times n}$, $P = P^\top$, [...]
there exists a certain time $T$ such that $\varphi(T; 0, \bar{x}) \in B(0, \bar{\delta})$.
For the sake of contradiction, suppose that this is not the case. Then, for all $t \ge 0$ we have

$$\varphi(t; 0, \bar{x}) \in \bar{B}(0, \varepsilon) \setminus B(0, \bar{\delta})$$

Since $\bar{B}(0, \varepsilon) \setminus B(0, \bar{\delta})$ is compact, and $\dot{V}$ is continuous and negative definite, $\dot{V}$ attains a negative maximum $-m$ there. Hence,

$$\dot{V}(x) \le -m \quad \text{if } x \in \bar{B}(0, \varepsilon) \setminus B(0, \bar{\delta})$$

and finally

$$V(\varphi(t; 0, \bar{x})) = V(\bar{x}) + \int_0^t \dot{V}(\varphi(\tau; 0, \bar{x}))\, d\tau \le V(\bar{x}) - m t$$

Letting $t \to +\infty$ we obtain a contradiction, because $V(x) \ge 0$ for all $x \in \bar{B}(0, \varepsilon)$, but the right-hand side tends to $-\infty$. Therefore there must exist $T$ such that $x(T) \in B(0, \bar{\delta})$.
This proves the theorem.
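As an illustration of the mechanism used in the proof, here is a sketch on an assumed example (not from these notes): the scalar system $\dot{x} = -x^3$ with candidate $V(x) = x^2$, whose Lie derivative $\dot{V}(x) = 2x \cdot (-x^3) = -2x^4$ is negative definite, so $V$ must decrease along every trajectory:

```python
# Scalar system x' = -x**3 with candidate Lyapunov function V(x) = x**2
# (an assumed example, not one from the notes). The Lie derivative is
# V'(x) = 2x * (-x**3) = -2*x**4, which is negative definite.
f = lambda x: -x ** 3
V = lambda x: x ** 2

# Forward-Euler integration (illustrative only).
x, dt = 1.0, 1e-3
history = [V(x)]
for _ in range(20000):   # up to t = 20
    x += dt * f(x)
    history.append(V(x))

# V decreases monotonically along the trajectory, exactly as the proof exploits.
decreasing = all(b <= a for a, b in zip(history, history[1:]))
print(decreasing, history[-1])
```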
Pay attention to the fact that Lyapunov's theorem assumes the existence of a Lyapunov function with negative (semi-)definite Lie derivative, but does not provide any method to construct one. In other words, proving the stability of an equilibrium is still not a straightforward task, unless we find such a function by other means. In certain cases, such as linear systems, Lyapunov functions arise naturally, but in general their construction is an open problem.
3 Asymptotic stability of linear systems
Now we apply Lyapunov's theorem to the analysis of time-invariant linear systems. We start with an algebraic result:

Theorem. Let $A \in \mathbb{R}^{n \times n}$. The following statements are equivalent:
1. all the eigenvalues of $A$ have negative real part;
2. for all matrices $Q = Q^\top > 0$ there exists a unique solution $P = P^\top > 0$ to the following (Lyapunov) equation:

$$A^\top P + P A + Q = 0 \tag{2}$$
We will just prove the "only if" part, namely:

Proof (1 ⟹ 2). Suppose that all the eigenvalues of $A$ have negative real part. Define

$$P := \int_0^{+\infty} e^{A^\top t} Q e^{A t}\, dt$$

Since the elements of the integrand matrix are all linear combinations of functions of the form $t^k e^{\lambda t}$ where $\lambda$ has negative real part (the reader can verify this considering the Jordan form of $A$), the integral exists and is finite. Now we verify that such $P$ is a solution to (2). Indeed,
$$A^\top P + P A = \int_0^{+\infty} \left( A^\top e^{A^\top t} Q e^{A t} + e^{A^\top t} Q e^{A t} A \right) dt = \int_0^{+\infty} \frac{d}{dt}\left( e^{A^\top t} Q e^{A t} \right) dt = \left[ e^{A^\top t} Q e^{A t} \right]_0^{+\infty} = -Q$$
(since $\lim_{t \to +\infty} e^{A^\top t} Q e^{A t} = 0$). Moreover, the solution is unique; indeed, let $P_1$ and $P_2$ be any two solutions:

$$A^\top P_1 + P_1 A + Q = 0$$
$$A^\top P_2 + P_2 A + Q = 0$$

Subtracting the second equality from the first,

$$A^\top (P_1 - P_2) + (P_1 - P_2) A = 0$$
therefore

$$0 = e^{A^\top t} \left( A^\top (P_1 - P_2) + (P_1 - P_2) A \right) e^{A t} = e^{A^\top t} A^\top (P_1 - P_2) e^{A t} + e^{A^\top t} (P_1 - P_2) A e^{A t} = \frac{d}{dt}\left( e^{A^\top t} (P_1 - P_2) e^{A t} \right)$$
In other terms, $e^{A^\top t} (P_1 - P_2) e^{A t}$ is constant, hence for all $t$ it has the same value that it assumes at $t = 0$:

$$e^{A^\top t} (P_1 - P_2) e^{A t} = P_1 - P_2$$

Finally, letting $t \to +\infty$ on the left-hand side, we obtain $P_1 - P_2 = 0$, so that any two solutions coincide; uniqueness follows.
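The integral formula for $P$ used in the proof can be checked numerically. The sketch below (with an arbitrary stability matrix $A$, eigenvalues $-1$ and $-2$, and $Q = I$; these are assumed choices) approximates the integral by a Riemann sum and verifies that the result solves equation (2):

```python
import numpy as np
from scipy.linalg import expm

# Approximate P = \int_0^{+inf} e^{A^T t} Q e^{A t} dt by a Riemann sum.
# A is an arbitrary stability matrix (eigenvalues -1 and -2), Q = I.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

dt, T = 1e-2, 20.0   # the integrand decays like e^{-2t}, so T = 20 suffices
P = sum(expm(A.T * t) @ Q @ expm(A * t) * dt for t in np.arange(0.0, T, dt))

# Check that P solves A^T P + P A + Q = 0 (up to discretization error).
residual = np.abs(A.T @ P + P @ A + Q).max()
print(residual)   # small: P solves the Lyapunov equation
```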
Consider now the time-invariant linear system

$$\dot{x}(t) = A x(t) \tag{3}$$

where $A \in \mathbb{R}^{n \times n}$. We apply Lyapunov's theorem to verify a result that we already know from modal analysis:

Theorem. If all the eigenvalues of $A$ have negative real part, then the system (3) is asymptotically stable.

Proof. Since the eigenvalues of $A$ have negative real part, there exists a positive definite solution $P$ to the Lyapunov equation

$$A^\top P + P A + I = 0$$
Let us choose, as a Lyapunov function, the quadratic form $V(x) = x^\top P x$, which is of course a positive definite function. Its Lie derivative is

$$\dot{V}(x) = \dot{x}^\top P x + x^\top P \dot{x} = (Ax)^\top P x + x^\top P A x = x^\top (A^\top P + P A) x = x^\top (-I) x = -\|x\|^2$$

which is negative definite. Hence, the system is asymptotically stable.
When all the eigenvalues of A have negative real part, we will call A a stability matrix.
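In practice the Lyapunov equation is solved directly rather than via the integral formula; a sketch using SciPy (the stability matrix here is an assumed example, eigenvalues $-1$ and $-2$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# SciPy's solve_continuous_lyapunov(a, q) solves a X + X a^T = q; to match
# the convention A^T P + P A + I = 0 used above, pass A^T and -I.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2: a stability matrix
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# P is symmetric positive definite, so V(x) = x^T P x is positive definite...
pos_def = bool(np.all(np.linalg.eigvalsh(P) > 0))

# ...and its Lie derivative x^T (A^T P + P A) x equals -||x||^2 at any point:
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
lie = x @ (A.T @ P + P @ A) @ x
print(pos_def, np.isclose(lie, -x @ x))
```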
4 Stability analysis via linearization
Suppose that $x_e = 0$ is an equilibrium point for (1), and that the function $f$ in (1) is continuously differentiable. Then we can linearize the system around 0, namely consider the time-invariant linear system

$$\dot{x}(t) = A x(t) \tag{4}$$

where $A = \frac{\partial f}{\partial x}(0) \in \mathbb{R}^{n \times n}$.
It turns out that, if all the eigenvalues of $A$ have negative real part, something interesting can be said not just about (4) but also about the nonlinear system. We have the following result:
Theorem. Let $x_e = 0$ be an equilibrium point for (1), and let $f \in C^1(\mathbb{R}^n)$. If $A = \frac{\partial f}{\partial x}(0)$ is a stability matrix, then $x_e$ is an asymptotically stable equilibrium point for the system (1).
Proof. Suppose that $A$ in (4) is a stability matrix. Consider the Taylor expansion of $f$ around its equilibrium point:

$$f(x) = f(0) + A x + \gamma(x)\,\|x\| = A x + \gamma(x)\,\|x\|$$

where $\lim_{x \to 0} \gamma(x) = 0$. Since $A$ is a stability matrix, the equation

$$A^\top P + P A + I = 0$$

admits a solution $P = P^\top > 0$. Define $V(x) = x^\top P x$.
We will apply again Lyapunov's theorem using $V$ as the Lyapunov function. We have indeed

$$\dot{V}(x) = x^\top (A^\top P + P A) x + 2 x^\top P \gamma(x)\,\|x\| = -x^\top x + 2 x^\top P \gamma(x)\,\|x\| = \|x\|^2 \left( -1 + \frac{2 x^\top P \gamma(x)}{\|x\|} \right)$$
But now, from Schwarz's inequality,

$$\left| 2 x^\top P \gamma(x) \right| = \left| \langle x, 2 P \gamma(x) \rangle \right| \le \|x\| \, \|2 P \gamma(x)\| \le 2 \|x\| \, \|P\| \, \|\gamma(x)\|$$

Therefore the term

$$\frac{2 x^\top P \gamma(x)}{\|x\|}$$

also tends to 0 as $x \to 0$; hence there exists $\rho > 0$ such that $\dot{V}(x) < 0$ for all $x \in B(0, \rho) \setminus \{0\}$, and $\dot{V}$ is negative definite. Invoking Lyapunov's theorem, we can conclude that the origin is an asymptotically stable equilibrium point of (1).
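A sketch of the theorem in action on a standard textbook system (the damped pendulum, not an example from these notes): compute the Jacobian at the origin, check that it is a stability matrix, and observe that nonlinear trajectories started nearby indeed converge:

```python
import numpy as np

# Damped pendulum: x1' = x2, x2' = -sin(x1) - x2 (standard example, not from the notes).
f = lambda x: np.array([x[1], -np.sin(x[0]) - x[1]])

# Jacobian of f at the origin: the linearization matrix A of (4).
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
all_negative = bool(np.all(np.linalg.eigvals(A).real < 0))   # A is a stability matrix

# Integrate the nonlinear system from a small initial condition (forward Euler).
x, dt = np.array([0.3, 0.0]), 1e-3
for _ in range(30000):   # up to t = 30
    x = x + dt * f(x)
final_norm = np.linalg.norm(x)

print(all_negative, final_norm)   # the trajectory converges to the origin
```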
We mention without proof another result:

Theorem. Let $x_e = 0$ be an equilibrium point for (1), and let $f \in C^1(\mathbb{R}^n)$. If the matrix $A = \frac{\partial f}{\partial x}(0)$ has at least one eigenvalue with (strictly) positive real part, then the origin is an unstable equilibrium point for both the linearized system (4) and the nonlinear system (1).
Note that we cannot say anything when we just know that the eigenvalues of $A$ lie in the closed left half-plane, that is, nothing can be said if there are eigenvalues on the imaginary axis. For instance, a linear system with all its eigenvalues at 0 can be either stable or unstable. (Consider e.g. $A = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ vs. $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$.)
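The difference between the two matrices is visible in their matrix exponentials: for the zero matrix, $e^{At} = I$ for all $t$, while for the nilpotent one, $e^{At} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}$, which grows without bound. A quick numerical check:

```python
import numpy as np
from scipy.linalg import expm

# Both matrices have all eigenvalues at 0, yet e^{At} behaves differently.
A1 = np.array([[0.0, 0.0],
               [0.0, 0.0]])   # e^{A1 t} = I: every trajectory is constant (stable)
A2 = np.array([[0.0, 1.0],
               [0.0, 0.0]])   # e^{A2 t} = [[1, t], [0, 1]]: unbounded (unstable)

t = 100.0
print(np.abs(expm(A1 * t)).max())   # stays at 1
print(np.abs(expm(A2 * t)).max())   # grows linearly with t
```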
Even when the linearized system is stable, this does not imply stability of the nonlinear system. As a counterexample, consider the equilibrium point $x_e = 0$ for the nonlinear scalar systems $\dot{x}(t) = -x^2(t)$ and $\dot{x}(t) = x^2(t)$. By separation of variables, their solutions for $x_0 > 0$ are respectively $x(t) = \frac{1}{1/x_0 + t}$ and $x(t) = \frac{1}{1/x_0 - t}$. Hence, it is easily seen that the origin attracts the solutions of the first system started at $x_0 > 0$, while it is an unstable equilibrium for the second (because the solution diverges in finite time). Nevertheless, the linearization of both systems around the origin yields $\dot{x}(t) = 0$, which is just stable.
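A quick check of the closed-form solutions quoted above (for $x_0 > 0$), verifying that they satisfy the respective differential equations and that the first one decays:

```python
import numpy as np

# Closed-form solutions of x' = -x**2 and x' = x**2 from the text (x0 > 0).
x0 = 1.0
sol_minus = lambda t: 1.0 / (1.0 / x0 + t)   # decays to 0 as t -> +inf
sol_plus  = lambda t: 1.0 / (1.0 / x0 - t)   # blows up as t -> 1/x0 = 1

# Check against the ODEs: x' should equal -x**2 (resp. +x**2).
t, h = 0.5, 1e-6
deriv_minus = (sol_minus(t + h) - sol_minus(t - h)) / (2 * h)   # central difference
deriv_plus  = (sol_plus(t + h)  - sol_plus(t - h))  / (2 * h)

print(np.isclose(deriv_minus, -sol_minus(t) ** 2))
print(np.isclose(deriv_plus,   sol_plus(t) ** 2))
print(sol_minus(1e6))   # ~0: the origin attracts solutions with x0 > 0
```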