Cairo University
Table of figures
Figure 1: Sliding surface
Figure 2: Sliding mode controller
Figure 3: Transient response for regulator
Figure 4: Transient response for tracking
Figure 5: Sliding mode control for regulator
Figure 6: Example of MRAC scheme
Figure 7: Example of STAC scheme
Figure 8: Simple pendulum
Introduction
This report delves into the interconnected topics of zero dynamics, Lyapunov stability, sliding
mode control, and adaptive control, exploring their significance and their applications in control
systems. Zero dynamics play a pivotal role in the analysis and control of dynamic systems.
Often described as the internal dynamics that remain when the external dynamics are
constrained to zero, zero dynamics capture the inherent behavior of a system, influencing
stability, performance, and controllability. Understanding and characterizing zero dynamics is
crucial for devising control strategies that manage the intricacies of complex systems.
Lyapunov stability analysis stands as a cornerstone of control theory, providing a robust
framework for assessing the stability of equilibrium points in dynamic systems. By employing
Lyapunov functions or Lyapunov-like functions, this methodology allows a comprehensive
evaluation of the system's behavior. Sliding mode control is a dynamic approach that
addresses the challenge of maintaining stability and precision in the presence of disturbances
and uncertainties. It involves the design of a sliding surface, a hyperplane in the state space,
that guides the system's trajectory; investigating its principles and applications shows how it
improves the transient response and resilience of control systems. Finally, adaptive control
introduces a flexible scheme that enables systems to adjust their parameters autonomously in
response to changes in the environment or in the system dynamics, which is crucial when
uncertainties or variations challenge the effectiveness of fixed controllers.
This report aims to provide a comprehensive understanding of zero dynamics, Lyapunov
stability, sliding mode control, and adaptive control, individually and in their interconnected
applications, through theoretical insights and practical examples.
1. Sliding Mode Control
• The control action u can be split into two regions (modes):
1. Sliding Mode
The state is held on the sliding surface S = 0. For example, for the surface 2y + 5x = 10 one takes S = 2y + 5x − 10, and the sliding motion takes place on S = 0.
2. Reaching Mode
The state is driven toward the surface from either side:
Ṡ < 0 if S > 0
Ṡ > 0 if S < 0
both of which are satisfied by Ṡ = −sgn(S).
• The key to the sliding mode controller is the design of an appropriate sliding surface
• Reaching laws:
1. Constant rate: Ṡ = −K sgn(S), K > 0
2. Constant-plus-proportional rate: Ṡ = −q sgn(S) − KS, q > 0, K > 0
3. Power rate: Ṡ = −K|S|^γ sgn(S), K > 0, 0 < γ < 1
Sliding Surface:
S = aX1 + X2, a > 0
First derivative of S:
Ṡ = aẊ1 + Ẋ2 = aX2 + f(X, Ẋ, t, Δ) + u
Imposing the power-rate reaching law:
Ṡ = −K|S|^γ sgn(S), K > 0, 0 < γ < 1
aX2 + f(X, Ẋ, t, Δ) + u = −K|S|^γ sgn(S)
Control Law:
u = −aX2 − f(X, Ẋ, t, Δ) − K|S|^γ sgn(S)
3. Example
Design a sliding mode controller for the system ẍ = c1ẋ² + c2x + u, where c1 and c2 are uncertain but bounded: |c1| ≤ C1 and |c2| ≤ C2.
With X1 = x and X2 = ẋ:
Ẋ1 = X2
Ẋ2 = c1X2² + c2X1 + u
For c1 = 0.4, c2 = 0.7, C1 = 5, C2 = 8, a = 2, K = 0.5, γ = 0.66.
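A minimal simulation sketch of this regulator. Since c1 and c2 are unknown to the controller, the form assumed below uses the known bounds in a switching term −(C1·X2² + C2·|X1|)·sgn(S) that dominates the uncertainty, on top of the power-rate reaching term (Python and forward-Euler integration are illustrative choices, not from the report):

```python
import numpy as np

# Robust sliding mode regulator for x_ddot = c1*x_dot^2 + c2*x + u.
# The controller only knows the bounds C1, C2, not c1, c2.
c1, c2 = 0.4, 0.7          # true (uncertain) plant parameters
C1, C2 = 5.0, 8.0          # known bounds, |c1| <= C1, |c2| <= C2
a, K, gamma = 2.0, 0.5, 0.66

dt, T = 1e-3, 10.0
x1, x2 = 1.0, 0.0          # initial state (x1 = x, x2 = x_dot)
for _ in range(int(T / dt)):
    S = a * x1 + x2                                   # sliding surface
    u = (-a * x2
         - (C1 * x2**2 + C2 * abs(x1)) * np.sign(S)   # dominate uncertainty
         - K * abs(S)**gamma * np.sign(S))            # power-rate reaching law
    x2 += dt * (c1 * x2**2 + c2 * x1 + u)             # plant dynamics
    x1 += dt * x2

print(x1, x2)
```

Once sliding begins, S = 0 reduces the closed loop to Ẋ1 = −aX1, so the state decays exponentially to the origin.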
4. Design Control Law for Tracking
System: ẍ = f(X, Ẋ, t, Δ) + u
Ẋ1 = X2
Ẋ2 = f(X, Ẋ, t, Δ) + u
Tracking error with respect to the reference Xr:
e = Xr − X1
ė = Ẋr − X2
Sliding surface defined on the error: S = ae + ė, a > 0
Ṡ = aė + ë = aė + Ẍr − f(X, Ẋ, t, Δ) − u
Imposing Ṡ = −K|S|^γ sgn(S) gives the tracking control law:
u = aė + Ẍr − f(X, Ẋ, t, Δ) + K|S|^γ sgn(S)
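A minimal tracking sketch, assuming the error surface S = a·e + ė, a nominal plant ẍ = c1ẋ² + c2x + u with f known to the controller, and a reference Xr = sin(t); K is raised to 2 here so the reaching phase finishes early in the run (all values illustrative):

```python
import numpy as np

# Tracking sliding mode control: e = Xr - X1, S = a*e + e_dot,
# u = a*e_dot + Xr_ddot - f(X) + K*|S|^gamma*sgn(S) (nominal f assumed known).
c1, c2 = 0.4, 0.7
a, K, gamma = 2.0, 2.0, 0.66

dt, T = 1e-3, 10.0
x1, x2 = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    xr, xr_d, xr_dd = np.sin(t), np.cos(t), -np.sin(t)   # reference and derivatives
    e, e_d = xr - x1, xr_d - x2
    S = a * e + e_d
    f = c1 * x2**2 + c2 * x1
    u = a * e_d + xr_dd - f + K * abs(S)**gamma * np.sign(S)
    x2 += dt * (f + u)                                   # plant dynamics
    x1 += dt * x2

print(x1, np.sin(T))
```

On the surface S = 0 the error obeys ė = −a·e, so X1 converges to the sinusoidal reference.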
2. Zero dynamics
In mathematics, zero dynamics refers to evaluating the effect of a system's zeros on its behavior. In the control field, zero dynamics refers to the internal dynamics that remain when the input and initial conditions are chosen so that the output of the system is kept identically zero (zero over the whole time interval). Here we consider a procedure for finding the input and the initial conditions that keep the output identically zero for a system that already has a controller.
The derivation:
Let the system be represented in state-space form:
ẋ = Ax + Bu
y = Cx + Du
Since most systems have a relative degree of at least one, D = 0.
Assume an exponential solution and input:
x = x0·e^(st) and u = u0·e^(st)
Substituting into the state and output equations:
x0·s·e^(st) = A·x0·e^(st) + B·u0·e^(st)
y = C·x0·e^(st)
Writing the two equations in matrix form:
[sI − A  −B] [x0]            [0]
[  C      0] [u0] e^(st)  =  [y]
Since we are studying the system when y = 0, we set y to zero in the equation above. This gives a system of equations in the three unknowns x0, s, u0, where x0 is the vector of initial conditions.
Finding s
For a nontrivial solution, neither x0 nor u0 can be zero, and e^(st) never vanishes, so the determinant of the matrix must equal zero. This yields an equation in the single unknown s, which can then be obtained easily.
Finding x0, u0
With s known, the matrix is fully defined and the system can be solved. However, the matrix is not full rank, so there are many possible solutions for the two unknowns, all of which make the output equal to zero.
MATLAB code
clc
clear
A = [-7 -12;
      1   0];
B = [1;
     0];
C = [1 2];
D = 0;
syms s x1 x2 u0
M = [s*eye(2)-A, -B;
     C,           0];           % pencil [sI-A, -B; C, 0]
s0 = solve(det(M) == 0)         % transmission zero
M = subs(M, s, s0);
v = M*[x1; x2; u0];             % must vanish for identically zero output
sol = solve([v(1) == 0, v(2) == 0], [x1, x2]);
x1 = sol.x1
x2 = sol.x2
The final step is to choose a constant value for the input amplitude u0; this fixes the initial conditions that give identically zero output.
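The construction can also be checked numerically. The Python sketch below (an illustration, independent of the MATLAB code) uses the same example matrices: for them, det([sI−A, −B; C, 0]) expands to s + 2, so the transmission zero is s = −2. A null-space vector of the singular pencil supplies x0 and u0, and a short simulation confirms the output stays essentially zero:

```python
import numpy as np

A = np.array([[-7.0, -12.0], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 2.0]])

s0 = -2.0                                # transmission zero of this example
M = np.block([[s0 * np.eye(2) - A, -B],
              [C, np.zeros((1, 1))]])
assert abs(np.linalg.det(M)) < 1e-9      # s0 really makes the pencil singular

# Any null-space vector of M gives [x0; u0]; its scaling is free,
# which is the "many possible solutions" noted above.
v = np.linalg.svd(M)[2][-1]              # right-singular vector for sigma ~ 0
x, u0 = v[:2].copy(), v[2]

# Simulate x_dot = A x + B u with u = u0*e^(s0*t): y should stay ~ 0.
dt, ymax = 1e-4, 0.0
for k in range(30000):                   # t in [0, 3]
    u = u0 * np.exp(s0 * k * dt)
    x = x + dt * (A @ x + B[:, 0] * u)
    ymax = max(ymax, abs((C @ x).item()))

print(ymax)
```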
3. Adaptive Control
Adaptive control is used when a fixed controller cannot achieve a satisfactory compromise
between robust stability and performance. It changes the control action to reach the required
dynamics when the inputs to the system or the environment change, such as an airplane's mass
decreasing over time as fuel is burned. In simple terms, it is a controller with adjustable
parameters together with a mechanism for automatically adjusting those parameters: a way of
dealing with parametric uncertainty.
B. Self-tuning adaptive control: model-based tuning consists of two operations:
• Model building via identification
• Controller design using the identified model
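The two operations can be sketched end-to-end on a toy first-order plant: recursive least squares (RLS) performs the identification, and a certainty-equivalence deadbeat law plays the role of the controller design. All numbers (plant parameters, gains, probe noise) are illustrative assumptions, not from the report:

```python
import numpy as np

# Self-tuning loop: identify y[k+1] = a*y[k] + b*u[k] online with RLS,
# then use the current estimates in a deadbeat law u = (r - a_hat*y)/b_hat.
a_true, b_true = 0.9, 0.5          # unknown plant parameters
theta = np.array([0.0, 1.0])       # initial estimates [a_hat, b_hat]
P = 100.0 * np.eye(2)              # RLS covariance
rng = np.random.default_rng(0)
y, r = 0.0, 1.0                    # output and constant reference

for k in range(200):
    a_hat, b_hat = theta
    # Controller design step: deadbeat law from the identified model,
    # plus a small probe signal to keep the data informative.
    u = (r - a_hat * y) / b_hat + 0.01 * rng.standard_normal()
    y_next = a_true * y + b_true * u          # plant response
    # Identification step: RLS update with regressor phi = [y, u].
    phi = np.array([y, u])
    k_gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + k_gain * (y_next - phi @ theta)
    P = P - np.outer(k_gain, phi @ P)
    y = y_next

print(theta, y)
```

As the estimates converge to the true parameters, the closed loop settles on the reference: the controller tunes itself without ever being told a_true or b_true.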
4. Lyapunov stability
Energy function and energy-like function
An energy function and a Lyapunov-like function (also called an energy-like function) are both
mathematical concepts used in the analysis of dynamic systems, particularly in control theory.
While they serve different purposes, they share some similarities in their properties.
Energy function
An energy function is a mathematical function often associated with physical systems: a
quantity that represents the system's energy, typically derived from the system's dynamics. For
example, in mechanical systems the total energy involves kinetic and potential terms, as in the
simple pendulum; in electrical circuits it relates to the energy stored in capacitors and
inductors.
Energy-like function
When a system is represented in state-space form, especially a complex or high-dimensional
one, determining a physical energy model can be difficult. In that case, Lyapunov stability
analysis using a Lyapunov (energy-like) function is the preferred method.
Lyapunov stability
Lyapunov stability provides a mathematical framework for determining whether a system will
converge to an equilibrium, diverge from it, or remain bounded around it. It is a concept in
control theory that assesses the stability of an equilibrium point of a dynamical system using
Lyapunov functions: scalar functions that describe the behavior of the system over time. If a
Lyapunov function exists and satisfies the conditions below, the equilibrium is stable; the
function must decrease (or at least not increase) along the system's trajectories.
The Lyapunov function V(X) is a scalar function defined over the state space of the system,
where X is the state vector. The analysis involves finding a V(X) that is positive definite while
its derivative along the trajectories, V̇(X), is negative definite (asymptotic stability) or negative
semidefinite (stability). The definiteness conditions are:
Positive definite function: V(X) > 0 for all X ≠ 0, and V(X) = 0 when X = 0.
Negative definite function: V(X) < 0 for all X ≠ 0, and V(X) = 0 when X = 0.
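For a linear system the search for a Lyapunov function can be made concrete: pick a positive definite Q and solve the Lyapunov equation AᵀP + PA = −Q for P. If P turns out positive definite, then V(X) = XᵀPX satisfies both conditions, since V̇ = −XᵀQX < 0. A small numpy sketch, with A an illustrative stable matrix (not from the report), solving the equation by vectorization:

```python
import numpy as np

# For x_dot = A x, solve A^T P + P A = -Q. If P > 0, then
# V(x) = x^T P x is a Lyapunov function with V_dot = -x^T Q x < 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example stable system
Q = np.eye(2)

# Vectorize the equation: (A^T (x) I + I (x) A^T) vec(P) = -vec(Q).
n = A.shape[0]
L = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(L, -Q.flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                        # symmetrize roundoff

residual = np.abs(A.T @ P + P @ A + Q).max()
eigs = np.linalg.eigvalsh(P)               # all positive => P > 0
print(P, residual, eigs)
```

Because A is Hurwitz, P comes out positive definite, certifying asymptotic stability of the origin without simulating a single trajectory.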
Example of the energy-function method
Simple pendulum
The energy equation for a simple pendulum is derived from the conservation of mechanical energy,
considering the kinetic and potential energy of the system.
Kinetic energy = 0.5mv² = 0.5mL²θ̇² = KE
Potential energy = mgL(1 − cosθ) = PE
Total energy = KE + PE = E
E = 0.5mL²θ̇² + mgL(1 − cosθ), which is positive definite
∴ Ė = mL²θ̇θ̈ + mgLθ̇ sinθ
Substituting the pendulum dynamics mL²θ̈ = −mgL sinθ gives Ė = 0: the energy is conserved, so the equilibrium at the bottom is stable (marginally, since without damping Ė is not strictly negative).
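This energy argument can be checked numerically. The sketch below adds a small viscous damping term b·θ̇ (an assumption beyond the derivation above) so that Ė = −bθ̇² ≤ 0, and verifies that the simulated energy starts positive, never goes negative, and decays; all parameter values are illustrative:

```python
import numpy as np

# Damped pendulum: m*L^2*theta_ddot = -m*g*L*sin(theta) - b*theta_dot.
# Along trajectories E_dot = -b*theta_dot^2 <= 0, so E can only decrease.
m, L, g, b = 1.0, 1.0, 9.81, 0.5
theta, omega = 2.0, 0.0            # initial angle (rad) and angular rate

def energy(theta, omega):
    return 0.5 * m * L**2 * omega**2 + m * g * L * (1.0 - np.cos(theta))

dt = 1e-4
E = [energy(theta, omega)]
for _ in range(200000):            # t in [0, 20]
    alpha = -(g / L) * np.sin(theta) - (b / (m * L**2)) * omega
    omega += dt * alpha
    theta += dt * omega
    E.append(energy(theta, omega))

print(E[0], E[-1])
```

The energy acts as a Lyapunov-like function: positive definite in (θ, θ̇) near the bottom equilibrium, and non-increasing along the damped motion.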