Recap - complex numbers
Modulus |x| of the complex number x = a + jb: |x| = sqrt(a² + b²).
Argument θ_x of x: θ_x = tan⁻¹(b/a).
In polar form, x = |x| e^{jθ_x}, where e^{jy} = cos(y) + j sin(y) (Euler's formula). Thus, we have that x = |x| (cos(θ_x) + j sin(θ_x))
and that Re(x) = a = |x| cos(θ_x), Im(x) = b = |x| sin(θ_x). The different concepts are illustrated in the complex plane in Figure 1.
[Figure 1: the complex number x in the complex plane: real part a, imaginary part b, modulus |x| and argument θ_x.]
Exercises
a. Consider the complex number x = 1 + 2j. Compute |x|, its argument θ_x and its conjugate x̄.
b. Given two complex numbers x and y, prove that |xy| = |x| |y| and that θ_{xy} = θ_x + θ_y.
c. Given two complex numbers x and y, prove that |x/y| = |x|/|y| and that θ_{x/y} = θ_x − θ_y.
h. Show that x x̄ is equal to |x|².
i. Show that cos(θ) = (1/2)(e^{jθ} + e^{−jθ}) for θ ∈ R. Give a similar expression for sin(θ).
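The identities above are easy to sanity-check numerically. The snippet below is a quick check I added (it is not part of the original exercises); it verifies the polar form, the conjugate identity and the Euler-formula expressions for cos and sin on sample values.

```python
import cmath, math

x = 1 + 2j
# modulus and argument (polar form)
r, theta = abs(x), cmath.phase(x)
assert abs(r - math.sqrt(5)) < 1e-12
assert abs(theta - math.atan2(2, 1)) < 1e-12
# x = |x| e^{j theta}
assert abs(r * cmath.exp(1j * theta) - x) < 1e-12
# x * conj(x) = |x|^2
assert abs(x * x.conjugate() - r ** 2) < 1e-12

# Euler-formula expressions for cos and sin at an arbitrary angle
t = 0.7
cos_t = 0.5 * (cmath.exp(1j * t) + cmath.exp(-1j * t))
sin_t = (cmath.exp(1j * t) - cmath.exp(-1j * t)) / (2j)
assert abs(cos_t - math.cos(t)) < 1e-12
assert abs(sin_t - math.sin(t)) < 1e-12
```

The same check passes for any sample angle t, since the identities hold for all real θ.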
Recap - integral
In order to compute the integral ∫_{t=t1}^{t=t2} f(t) dt of a function f(t) of the variable t, we need to compute the
primitive P_f(t) of f(t). The primitive P_f(t) is the function such that:

  d P_f(t) / dt = f(t)

Having determined the primitive, the integral can then be computed as follows:

  ∫_{t=t1}^{t=t2} f(t) dt = [P_f(t)]_{t=t1}^{t=t2} = P_f(t = t2) − P_f(t = t1)
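This recipe can be illustrated numerically. The sketch below (my addition, not part of the original text) compares a midpoint Riemann sum of f(t) = cos(t) over [0, π/2] with the value P_f(t2) − P_f(t1) obtained from the primitive P_f(t) = sin(t).

```python
import math

def riemann(f, t1, t2, n=100000):
    """Midpoint Riemann sum approximating the integral of f over [t1, t2]."""
    h = (t2 - t1) / n
    return sum(f(t1 + (i + 0.5) * h) for i in range(n)) * h

# f(t) = cos(t) has primitive P_f(t) = sin(t)
t1, t2 = 0.0, math.pi / 2
numeric = riemann(math.cos, t1, t2)
exact = math.sin(t2) - math.sin(t1)   # P_f(t2) - P_f(t1) = 1
assert abs(numeric - exact) < 1e-6
```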
Exercises
a. Compute ∫_0^{2π} cos(t) dt.
b. Compute ∫_0^{2π} e^{jnt} dt with n an integer, n ≠ 0.
c. Compute ∫_0^{∞} a e^{−bt} dt with b > 0.
Exercise 1. Consider the signals x(t), y(t) and z(t) given in the figure below. Note that
x(t) and y(t) are zero everywhere except between t = 0 and t = 1 and that z(t) is zero
everywhere except between t = 0 and t = 2. Express z(t) as a linear combination of shifted
versions of x(t) and y(t).
[Figure: the signals x(t), y(t) and z(t).]
SOLUTIONS.
Solution of the exercises on complex numbers.
a. |x| = sqrt(1 + 4) = sqrt(5) and θ_x = tan⁻¹(2/1). The conjugate is x̄ = 1 − 2j.
b. Write x = |x| e^{jθ_x} and y = |y| e^{jθ_y}. Then xy = |x| |y| e^{j(θ_x + θ_y)}, so that |xy| = |x| |y| and θ_{xy} = θ_x + θ_y.
c. Similarly, x/y = (|x|/|y|) e^{j(θ_x − θ_y)}, so that |x/y| = |x|/|y| and θ_{x/y} = θ_x − θ_y.
h. x x̄ = (|x| e^{jθ_x})(|x| e^{−jθ_x}) = |x|² e^{j0} = |x|².
i. (1/2)(e^{jθ} + e^{−jθ}) = (1/2)(cos(θ) + j sin(θ) + cos(−θ) + j sin(−θ)) = (1/2)(2 cos(θ)) = cos(θ). In
order to find an expression for sin(θ), we use a similar reasoning. Since sin(−θ) = −sin(θ),
we will consider:

  e^{jθ} − e^{−jθ}

which is equal to 2j sin(θ). Consequently:

  sin(θ) = (1/(2j)) (e^{jθ} − e^{−jθ}) = −(j/2) (e^{jθ} − e^{−jθ})
b. For n ≠ 0, the primitive of f(t) = e^{jnt} = cos(nt) + j sin(nt) is

  P_f(t) = sin(nt)/n − j cos(nt)/n

Consequently,

  ∫_0^{2π} e^{jnt} dt = (sin(2πn)/n − j cos(2πn)/n) − (sin(0)/n − j cos(0)/n) = (0 − j/n) − (0 − j/n) = 0

where we have used the fact that n is an integer. Consider now the case n = 0. Then
f(t) = 1, whose primitive is P_f(t) = t. Consequently, when n = 0, the integral is equal to
2π − 0 = 2π.
c. Here f(t) = a e^{−bt}. The primitive is thus: P_f(t) = −(a/b) e^{−bt}. Consequently:

  ∫_0^{∞} a e^{−bt} dt = [−(a/b) e^{−bt}]_0^{∞} = 0 − (−a/b) = a/b

Indeed, since b > 0, lim_{t→∞} e^{−bt} = 0.
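Both results can be checked numerically. The sketch below is my addition; the values n = 3, a = 2 and b = 0.5 are arbitrary test choices, and the improper integral is truncated at a large upper limit.

```python
import cmath, math

def midpoint_sum(f, t1, t2, n=200000):
    # midpoint Riemann sum (works for complex-valued integrands too)
    h = (t2 - t1) / n
    return sum(f(t1 + (i + 0.5) * h) for i in range(n)) * h

# ∫_0^{2π} e^{jnt} dt = 0 for integer n ≠ 0 (here n = 3)
I1 = midpoint_sum(lambda t: cmath.exp(1j * 3 * t), 0.0, 2 * math.pi)
assert abs(I1) < 1e-6

# ∫_0^∞ a e^{-bt} dt = a/b for b > 0; truncate the upper limit at t = 60/b
a, b = 2.0, 0.5
I2 = midpoint_sum(lambda t: a * math.exp(-b * t), 0.0, 60.0 / b)
assert abs(I2 - a / b) < 1e-6
```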
Solution of the exercises on continuous-time signals.
Exercise 1. z(t) = y(t) + 2x(t − 1) − y(t − 1)
Exercise 2. Denote the triangular pulse of length τ by v_τ(t) (i.e. v_τ(t) = (1 − 2|t|/τ) p_τ(t)).
The signal x(t) in subplot (a) of Figure P1.1 is equal to p_4(t) + p_2(t). The signal x(t) in
subplot (b) is equal to (3/4) v_8(t) − (1/3) v_2(t). The signal in subplot (c) is equal to 2 p_4(t) + v_4(t).
The one in subplot (d) is p_2(t) − v_2(t). The signal x(t) in subplot (e) is a periodic signal with
period 2. The first period (i.e. from t = 0 till t = 2) is given by p_1(t − 1/2). Consequently, we
have:
  x(t) = Σ_{k=−∞}^{+∞} p_1(t − 1/2 + 2k)
The signal in sub-question (a) is:
  x(t) = 1  for −1 ≤ t < 1
  x(t) = −1  for 1 ≤ t < 3
  x(t) = 0  elsewhere
The signal in sub-question (b) is:
  x(t) = t  for 0 ≤ t < 1
  x(t) = 1  for 1 ≤ t < 2
  x(t) = 0  elsewhere
Conversion table.
THIRD edition: p. 37 exercise 1.1.i; p. 39.
SECOND edition: p. 47 exercise 1.1.a; p. 49.
A periodic signal x(t) can be decomposed as:

  x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}  with  c_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0 t} dt

with T the fundamental period of x(t). This decomposition is called the Fourier series of
x(t) and the coefficients c_k are called the Fourier coefficients of x(t). This is in fact the complex exponential form of the Fourier series. The Fourier series can also be expressed in a
trigonometrical form (see p. 101 of the book for more details).
The power P of a periodic signal x(t) is defined by:

  P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt

with T the fundamental period of x(t). The power can also be computed using the Fourier
coefficients (Parseval's theorem):

  P = Σ_{k=−∞}^{+∞} |c_k|²
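The definition of c_k and Parseval's theorem can be exercised numerically. The sketch below is my addition; it uses an arbitrary test signal (a ±1 square wave of period 2π, which has power 1) and truncates the Parseval sum at |k| = 99.

```python
import cmath, math

T = 2 * math.pi           # fundamental period of the test signal
w0 = 2 * math.pi / T      # fundamental frequency

def x(t):
    # arbitrary test signal: a ±1 square wave of period 2π
    return 1.0 if (t % T) < T / 2 else -1.0

def ck(k, n=4000):
    # c_k = (1/T) ∫_{-T/2}^{T/2} x(t) e^{-jk w0 t} dt, midpoint rule
    h = T / n
    return sum(x(-T / 2 + (i + 0.5) * h) * cmath.exp(-1j * k * w0 * (-T / 2 + (i + 0.5) * h))
               for i in range(n)) * h / T

# power from the definition ...
n = 4000
h = T / n
P_def = sum(x(-T / 2 + (i + 0.5) * h) ** 2 for i in range(n)) * h / T
# ... and from Parseval's theorem, truncating the sum at |k| = 99
P_par = sum(abs(ck(k)) ** 2 for k in range(-99, 100))
assert abs(P_def - 1.0) < 1e-9    # x²(t) = 1 everywhere
assert abs(P_par - 1.0) < 0.01    # truncated sum approaches the power
```

The small gap between the truncated Parseval sum and 1 comes from the discarded high-frequency coefficients.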
Exercise 1. Consider the periodic signal x(t) = Σ_{k=1}^{3} b_k cos(kω0 t) with ω0 = π/5 rad/s and

  b_k = 2/(kπ)  for k = 1, 3
  b_k = 0  for k = 2

Exercise 2. Consider the signal

  x(t) = a0 + Σ_{k=1}^{3} a_k sin(kω0 t)

with ω0 = 2π/10 rad/s, a0 = 1, a1 = 2, a2 = 0 and a3 = −1/2.
[Figure: the periodic signal x(t) of Exercise 2, plotted for −10 ≤ t ≤ 10 sec.]
Exercise 3. Let x(t) = sin(0.4π t), y(t) = cos(1.4π t + π/4) and z(t) = x(t) + y(t).
a. Is z(t) a periodic signal? If yes, what is its fundamental period Tz?
b. Prove using the formula cos(θ) = (1/2)(e^{jθ} + e^{−jθ}) that z(t) can be written as follows:

  z(t) = a_{−2} e^{−jω2 t} + a_{−1} e^{−jω1 t} + a_1 e^{jω1 t} + a_2 e^{jω2 t}

with a_1 = 0.5 e^{−jπ/2}, a_{−1} = 0.5 e^{jπ/2}, a_2 = 0.5 e^{jπ/4}, a_{−2} = 0.5 e^{−jπ/4}, ω1 = 0.4π and ω2 = 1.4π.
c. Prove that the expression in item [b.] is equivalent to the Fourier series of z(t).
d. Compute the power Pz of the periodic signal z(t) using Parseval's theorem. Verify
that we obtain the same value for the power Pz if we use, instead, the definition of the
power of a signal:

  Pz = (1/T) ∫_{−T/2}^{T/2} z²(t) dt
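Item [d.] can be sanity-checked numerically before doing the algebra. The sketch below is my addition; it uses the fundamental period Tz = 10 derived in the solutions and a midpoint sum for the power integral.

```python
import math

w1, w2 = 0.4 * math.pi, 1.4 * math.pi

def z(t):
    return math.sin(w1 * t) + math.cos(w2 * t + math.pi / 4)

T = 10.0                      # fundamental period of z(t) (see the solutions)
n = 100000
h = T / n
Pz = sum(z(-T / 2 + (i + 0.5) * h) ** 2 for i in range(n)) * h / T
assert abs(Pz - 1.0) < 1e-6   # each unit-amplitude sinusoid contributes a power of 0.5
```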
[Figure 2: the periodic signal x(t) of Exercise 4, plotted for −40 ≤ t ≤ 40 sec.]
a. Is the signal x(t) an even signal or an odd signal? Motivate your answer.
b. What is the fundamental frequency ω0 of the signal x(t)?
The complex exponential form of the Fourier series of the periodic signal x(t) is given by:

  x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}

where the fundamental frequency ω0 has been determined in item [b.] and where the Fourier
coefficients c_k are given by:

  c_k = 0  when k = 0
  c_k = (2/(kπ)) sin(kπ/2)  when k ≠ 0

c. Explain based on Figure 2 why c0 = 0.
d. Based on the complex exponential form of the Fourier series given above, determine
the parameters a_k (k = 0 ... +∞), b_k (k = 1 ... +∞) in the trigonometrical form of the
Fourier series of x(t):

  x(t) = a0 + Σ_{k=1}^{+∞} (a_k cos(kω0 t) + b_k sin(kω0 t))
a. Compute the Fourier series of x(t). Hint: determine separately c0 and c_k for k ≠ 0.
b. Check the value of c0 based on the figure representing x(t).
c. Compute the power of the periodic signal x(t) using Parseval's theorem. Verify that
we obtain the same value for the power if we use, instead, the definition of the power
P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt. Hint:

  Σ_{k=1}^{+∞} (1/k²) sin²(kx) = 0.5 x (π − x)
d. How does the Fourier series change if the signal x(t) is right-shifted by 0.5 T ?
Exercise 6. Consider the periodic signal x(t) given in the figure below.
Compute the Fourier series of x(t) by making use of the results obtained in Exercise 5.
Exercise 7 (examination June 2007). Consider the continuous-time signal x(t) =
e^{−t} cos(10t) u(t) with u(t) the unit step function. Does there exist a Fourier series for the
signal x(t)? Motivate your answer.
PART 2: Continuous-time Fourier transform
The Fourier transform F(x(t)) = X(ω) of a continuous-time signal x(t) describes the frequency content of the signal x(t). X(ω) is a complex function of the frequency ω. It is
defined as:

  X(ω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt

In the exercises below, p_τ(t) denotes the rectangular pulse of length τ:

  p_τ(t) = 1 if −τ/2 ≤ t ≤ τ/2
  p_τ(t) = 0 elsewhere
d. What is the Fourier transform G(ω) of g(t) = h(t) − h(−t)? Use a property of the
Fourier transform (see the hint below).
Hints for items [b.] and [c.]: consider a signal x(t) whose Fourier transform is denoted
by X(ω). Then, the Fourier transforms of x(t − c) (c ∈ R) and of x(−t) are given by:

  F(x(t − c)) = X(ω) e^{−jωc}
  F(x(−t)) = X(−ω)
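The definition of X(ω) and the shift property can be illustrated numerically. The sketch below is my addition; it approximates the Fourier transform of the rectangular pulse p_τ(t) (with τ = 1 and a test frequency ω = 3, both arbitrary choices) by a midpoint sum and compares it with the closed form (2/ω) sin(ωτ/2).

```python
import cmath, math

def ft_numeric(f, w, t_lo, t_hi, n=100000):
    # midpoint approximation of X(ω) = ∫ x(t) e^{-jωt} dt on [t_lo, t_hi]
    h = (t_hi - t_lo) / n
    return sum(f(t_lo + (i + 0.5) * h) * cmath.exp(-1j * w * (t_lo + (i + 0.5) * h))
               for i in range(n)) * h

tau = 1.0
p = lambda t: 1.0 if -tau / 2 <= t <= tau / 2 else 0.0

w = 3.0
X = ft_numeric(p, w, -1.0, 1.0)
# closed form: F(p_τ)(ω) = (2/ω) sin(ωτ/2)
assert abs(X - (2 / w) * math.sin(w * tau / 2)) < 1e-4

# shift property: F(p(t - 0.5))(ω) = X(ω) e^{-j 0.5 ω}
Xs = ft_numeric(lambda t: p(t - 0.5), w, -1.0, 2.0)
assert abs(Xs - X * cmath.exp(-1j * 0.5 * w)) < 1e-4
```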
Exercise 2. Let a, ω0, φ be given constants. Compute the Fourier transform X(ω) of:
a. x(t) = (1 − e^{−at}) u(t) with u(t) the unit step function (see page 2 of the book).
b. x(t) = sin(ω0 t + φ).
c. x(t) = δ(t − a) with δ(t) the unit impulse.
d. x(t) = e^{−at} sin(ω0 t) u(t).
Hint: recall that, in Table 3.2 of the book, we see that

  F(u(t)) = 1/(jω) + πδ(ω)
  F(sin(ω0 t)) = jπ (δ(ω + ω0) − δ(ω − ω0))
  F(δ(t)) = 1

Recall furthermore for item [d.] that, if x(t) = y(t) sin(ω0 t), then the Fourier transform
X(ω) of x(t) is given by: X(ω) = (j/2) (Y(ω + ω0) − Y(ω − ω0)) with Y(ω) the Fourier
transform of y(t). This is property (3.51) of the book.
Exercise 3.
a. Prove that the Fourier transform of x(−t) is given by X(−ω). Prove subsequently that
X(−ω) = X*(ω) for real-valued x(t). Finally, show that Im(X(ω)) = 0 for even
signals and Re(X(ω)) = 0 for odd signals.
b. Prove the property (3.50) of the book.
c. Prove the property (3.52) of the book.
Exercise 4. Consider the signal x(t) depicted below:

  x(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t}

with a0 = τV/T and, for k ≠ 0, a_k = (V/(kπ)) sin(kπτ/T).
a. Compute the Fourier transform X(ω) of x(t) from its Fourier series. Recall for this
purpose that the Fourier transform of e^{jω0 t} is given by 2πδ(ω − ω0). This is property
(3.74) in the book.
b. Compute the Fourier transform G(ω) of g(t) = x(t) p_T(t) with p_T(t) the rectangular
pulse of length T (see item [a.] of exercise 1). The signal g(t) corresponds to one of
the periods of x(t).
SOLUTIONS: PART 1
Exercise 1.
1.a. The signal x(t) is made up of the sum of two cosines (since b2 = 0). Since the frequencies of these two
cosine functions are integer multiples of ω0, we can directly conclude that x(t) is periodic
and that the fundamental frequency is ω0. The fundamental period is then 2π/ω0 = 10 sec.
We can also make the full reasoning. For this purpose, we again start from the fact that x(t)
is made up of the sum of two cosines. The fundamental period of the first cosine is T1 = 2π/ω0.
The fundamental period of the second cosine is T2 = 2π/(3ω0). The signal x(t) is then periodic if
and only if (cfr. pp. 5-6 of the book) T1/T2 can be written as a ratio q/r of two integers.
Here, x(t) is indeed periodic since

  T1/T2 = 3/1.

Since 3 and 1 are two coprime integers, the fundamental period T of x(t) is T = T1 = 3T2.
Consequently, T = 2π/ω0 = 10 s since ω0 = π/5. The result can be verified as follows:

  cos((π/5)(t + 10)) = cos((π/5)t + 2π) = cos((π/5)t)  and  cos((3π/5)(t + 10)) = cos((3π/5)t + 6π) = cos((3π/5)t)

The above equations show that x(t) is periodic of period T = 10 s and T is the fundamental
period because it is the smallest number for which the above equations hold.
1.b. x(t) is an even function. Because cos(−t) = cos(t) for all t, it follows that x(−t) = x(t).
1.c. The signal x(t) is given by:

  x(t) = b1 cos(ω0 t) + b3 cos(3ω0 t)

with b1 = 2/π and b3 = 2/(3π). It is asked to determine the Fourier series of x(t) in its
complex exponential form, i.e. to rewrite x(t) as a summation of complex exponentials at
harmonics of its fundamental frequency ω0:

  x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}

To determine the Fourier coefficients in this expansion, use can be made of the definition
c_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0 t} dt. However, here, the use of the definition is not at all necessary since x(t) is already given in the form of a summation of cosines at harmonics of the
fundamental frequency ω0 of x(t). Consequently, in order to determine the Fourier coefficients, we will just rewrite the cosines into complex exponentials using Euler's formula:
cos(θ) = (1/2)(e^{jθ} + e^{−jθ}). This delivers:

  x(t) = (b1/2) (e^{jω0 t} + e^{−jω0 t}) + (b3/2) (e^{j3ω0 t} + e^{−j3ω0 t})
       = (b3/2) e^{−j3ω0 t} + (b1/2) e^{−jω0 t} + (b1/2) e^{jω0 t} + (b3/2) e^{j3ω0 t}

The latter expression is the Fourier series of x(t) (exponential form). It is indeed in the form
x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t} with the following values for the Fourier coefficients c_k:

  c1 = c_{−1} = b1/2 = 1/π
  c3 = c_{−3} = b3/2 = 1/(3π)

and c_k = 0 for all other k. Finally, using Parseval's theorem, the power of x(t) is:

  P = Σ_{k=−3}^{3} |c_k|² = 2 (1/π² + 1/(9π²)) = 20/(9π²).
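The value 20/(9π²) can be confirmed numerically. The check below is my addition; it computes the power of this specific signal from the time-domain definition and compares it with the Parseval value.

```python
import math

w0 = math.pi / 5
b1, b3 = 2 / math.pi, 2 / (3 * math.pi)

def x(t):
    return b1 * math.cos(w0 * t) + b3 * math.cos(3 * w0 * t)

T = 10.0
n = 100000
h = T / n
P_def = sum(x(-T / 2 + (i + 0.5) * h) ** 2 for i in range(n)) * h / T
P_parseval = 2 * (1 / math.pi ** 2 + 1 / (9 * math.pi ** 2))   # = 20/(9π²)
assert abs(P_def - P_parseval) < 1e-9
```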
Exercise 2.
2.a. Due to the constant term a0, the signal is neither even nor odd since

  x(−t) = a0 + Σ_{k=1}^{3} a_k sin(kω0 (−t)) = a0 − Σ_{k=1}^{3} a_k sin(kω0 t)

which is equal neither to x(t) nor to −x(t).
2.b. No rewriting is needed, i.e. x(t) is already given in the trigonometrical form of the Fourier series.
2.c. We have:

  x(t) = a0 + Σ_{k=1}^{3} a_k sin(kω0 t) = a0 + Σ_{k=1}^{3} (a_k/(2j)) (e^{jkω0 t} − e^{−jkω0 t})

since sin(θ) = (1/(2j))(e^{jθ} − e^{−jθ}). The Fourier series of x(t) is x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}
with c_k the Fourier coefficients. Comparing the two expressions, we can see that the Fourier
coefficients c_k are given by c3 = a3/(2j) = j/4, c_{−3} = −a3/(2j) = −j/4, c2 = c_{−2} = 0, c1 = a1/(2j) = −j,
c_{−1} = −a1/(2j) = j, c0 = a0 = 1 and c_k = 0 for all other k.
2.d. Using Parseval's theorem, we have that the power P is:

  P = Σ_{k=−∞}^{+∞} |c_k|² = (1/4)² + 0 + 1 + 1 + 1 + 0 + (1/4)² = 3.125
Exercise 3.
3.a. The fundamental period Tx of x(t) is the smallest number for which x(t + Tx) = x(t).
This number is given by Tx = 2π/ωx = 2π/(0.4π) = 5. The period of y(t) is equal to Ty = 10/7.
The signal z(t) = x(t) + y(t) is then periodic if and only if (cfr. p. 9 of the book) Tx/Ty
can be written as a ratio q/r of two integers. Here, z(t) is indeed periodic since

  Tx/Ty = 35/10 = 7/2.

Since 7 and 2 are two coprime integers, the fundamental period of z(t) is Tz = 2Tx = 7Ty =
10 s. The result can be verified as follows:

  z(t + Tz) = x(t + 2Tx) + y(t + 7Ty) = x(t) + y(t) = z(t)

The above equation shows that z(t) is periodic of period Tz and Tz is the fundamental period
because it is the smallest number for which the above equation holds.
3.b. The signal z(t) is equal to cos(ω1 t − 0.5π) + cos(ω2 t + 0.25π). Consequently, z(t) can
be rewritten as:

  z(t) = 0.5 (e^{j(ω1 t − 0.5π)} + e^{−j(ω1 t − 0.5π)} + e^{j(ω2 t + 0.25π)} + e^{−j(ω2 t + 0.25π)})
       = a_{−2} e^{−jω2 t} + a_{−1} e^{−jω1 t} + a1 e^{jω1 t} + a2 e^{jω2 t}

with a1, a_{−1}, a2, a_{−2} as given in the statement of the exercise.
3.c. The Fourier series of z(t) is z(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}, where ω0 = 2π/T denotes the fundamental frequency of the signal and T its fundamental period. In item [a.], we have shown that z(t) has a fundamental period equal to T = 10.
Consequently, ω0 is here equal to 2π/T = 0.2π. Note that ω1 = 2ω0 and ω2 = 7ω0.
Using the expression of z(t) proposed in item [b.] and the relations between ω1, ω2 and
ω0, the coefficients c_k of the Fourier series of z(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t} can be read by inspection: c2 = a1, c_{−2} = a_{−1}, c7 = a2, c_{−7} = a_{−2} and c_k = 0 for any other values of k.
3.d. Parseval's theorem states that Pz = Σ_{k=−∞}^{+∞} |c_k|². Consequently, Pz = 4 × 0.25 = 1. We obtain
the same result via the definition. To show that, first note that

  z²(t) = sin²(ω1 t) + cos²(ω2 t + 0.25π) + 2 sin(ω1 t) cos(ω2 t + 0.25π)

whose average over one period is 0.5 + 0.5 + 0. Consequently,

  (1/T) ∫_{−T/2}^{T/2} z²(t) dt = (1/T) (0.5 + 0.5) T = 1
4.b. ω0 = 2π/T = 2π/40 = π/20 rad/s.
4.c. That c0 = 0 is logical since the average over time of the signal is equal to 0. The periodic signal x(t) indeed oscillates around 0 with the same area above and under the zero-line.
4.d. First we observe that c_{−k} = c_k. Thus the complex Fourier series can be rewritten as:

  x(t) = c0 + Σ_{k=1}^{+∞} c_k (e^{jkω0 t} + e^{−jkω0 t}) = Σ_{k=1}^{+∞} c_k (2 cos(kω0 t))

(recall that c0 = 0). Consequently, the coefficients b_k = 0 ∀k and the coefficients a_k = 2c_k for k ≥ 1 and a0 = 0.
4.e. Since x²(t) = 1 for all t, the power is

  P = (1/T) ∫_{−T/2}^{T/2} x²(t) dt = (1/40) ∫_{−20}^{20} 1 dt = 1
Exercise 5.
5.a. In order to determine the Fourier series of x(t), we need to compute its Fourier coefficients c_k. For this purpose, observe first that the fundamental period of x(t) is T and thus
that its fundamental frequency is ω0 = 2π/T.
First, let us compute the Fourier coefficient c0:

  c0 = (1/T) ∫_{−T/2}^{T/2} x(t) dt = (1/T) ∫_{−τ/2}^{τ/2} V dt = τV/T

For k ≠ 0, we have:

  c_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−jkω0 t} dt = (V/T) ∫_{−τ/2}^{τ/2} e^{−jkω0 t} dt
      = V (e^{−jkω0 τ/2} − e^{jkω0 τ/2}) / (−jkω0 T) = (V/(kπ)) sin(kπτ/T)

The Fourier series of x(t) is then x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}.
5.b. We have shown that c0 = τV/T. This value can also be deduced from the figure
representing x(t) since c0 is, by definition, the mean of the periodic signal (over one of its
periods). Here, this mean can be determined by dividing the area of the pulse (i.e. τV) by
the length T of one period.
5.c. Using the definition of the power, we easily obtain P = τV²/T. This power can also
be computed using Parseval's theorem:

  P = Σ_{k=−∞}^{+∞} |c_k|² = c0² + Σ_{k=1}^{+∞} (c_k² + c_{−k}²)
    = (τV/T)² + Σ_{k=1}^{+∞} (2V²/(k²π²)) sin²(kπτ/T)
    = (τV/T)² + (2V²/π²) · 0.5 (πτ/T)(π − πτ/T)
    = τ²V²/T² + τV²/T − τ²V²/T²
    = τV²/T

where the hint Σ_{k=1}^{+∞} (1/k²) sin²(kx) = 0.5 x (π − x) has been used with x = πτ/T.
5.d. Since x(t) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t},

  x(t − 0.5T) = Σ_{k=−∞}^{+∞} c_k e^{jkω0 (t − 0.5T)} = Σ_{k=−∞}^{+∞} (c_k e^{−jkπ}) e^{jkω0 t}

i.e. each Fourier coefficient c_k is multiplied by e^{−jkω0 0.5T} = e^{−jkπ} = (−1)^k.
Exercise 6. Let x5(t) denote the signal of Exercise 5 with τ = T/2, whose Fourier series is

  x5(t) = Σ_{k=−∞}^{+∞} a_k e^{jkω0 t}

with ω0 = 2π/T, a0 = V/2 and, for k ≠ 0, a_k = (V/(kπ)) sin(kπ/2). Let
us first compute the Fourier series of 2 x5(t − T/4) = Σ_{k=−∞}^{+∞} c′_k e^{jkω0 t}, for which the Fourier
coefficients are thus denoted c′_k. Using the Fourier series of x5(t) given above, we first obtain:

  2 x5(t − T/4) = 2 Σ_{k=−∞}^{+∞} a_k e^{jkω0 (t − T/4)}

so that

  c′_k = 2 a_k e^{−jkω0 T/4} = 2 a_k e^{−jkπ/2}
       = (2V/(kπ)) sin(kπ/2) e^{−jkπ/2}
       = (2V/(kπ)) (1/(2j)) (e^{jkπ/2} − e^{−jkπ/2}) e^{−jkπ/2}
       = −j (V/(kπ)) (1 − cos(kπ))
       = −j 2V/(kπ)  for odd k
       = 0  for even k

Now using that x(t) = 2 x5(t − T/4) − V, we obtain the following for the Fourier series of x(t):

  x(t) = Σ_{k=−∞}^{+∞} c′_k e^{jkω0 t} − V = Σ_{k=−∞}^{+∞} c_k e^{jkω0 t}

The Fourier coefficients c_k of x(t) are equal to c′_k for all k ≠ 0 while the coefficient c0 is equal
to c′_0 − V = 2(V/2) − V = 0. We see that x(t) has no dc-component, as would be expected by taking a look at the
figure representing x(t), which clearly shows that x(t) is a zero-mean signal. Note also that,
as a consequence of the fact that x(t) is an odd signal, all Fourier coefficients c_k of x(t) are
imaginary.
Exercise 7. No, since x(t) is not periodic. The Fourier series of a signal only exists if the
signal is periodic.
SOLUTIONS: PART II
Exercise 1.
1.a. By choosing τ = 1, we see that h(t) = p1(t − 0.5).
1.b. By posing τ = 1 in the expression of the Fourier transform of p_τ(t), we obtain that the
Fourier transform of p1(t) is (2/ω) sin(ω/2). Consequently, using property (3.41) in the book,

  H(ω) = (2/ω) sin(ω/2) e^{−jω/2} = j (e^{−jω} − 1)/ω

1.c. Left-shifting h(t) by 0.5 delivers p1(t) which has a real Fourier transform.
1.d. The Fourier transform of h(−t) is, by (3.45), equal to H(−ω). Consequently,

  G(ω) = H(ω) − H(−ω) = j (e^{−jω} − 1)/ω + j (e^{jω} − 1)/ω = j (2/ω) (cos(ω) − 1)
Exercise 2.
2.a. Let us decompose x(t) into two parts: x1(t) = u(t) and x2(t) = e^{−at} u(t) such that
x(t) = x1(t) − x2(t). The generalized Fourier transform X1(ω) of x1(t) is 1/(jω) + πδ(ω).
The Fourier transform of x2(t) can be deduced as follows:

  X2(ω) = ∫_0^{∞} e^{−at} e^{−jωt} dt = ∫_0^{∞} e^{−(a+jω)t} dt = [−e^{−(a+jω)t}/(a + jω)]_0^{∞} = 1/(a + jω)

X(ω) is then equal to X1(ω) − X2(ω).
2.b. We observe that x(t) = y(t + φ/ω0) with y(t) = sin(ω0 t). Consequently, using (3.41),

  X(ω) = Y(ω) e^{jωφ/ω0}
       = jπ (δ(ω + ω0) − δ(ω − ω0)) e^{jωφ/ω0}
       = jπ e^{−jφ} δ(ω + ω0) − jπ e^{jφ} δ(ω − ω0)

where the last equality follows from the fact that, for a function f(ω), we have that
f(ω) δ(ω − ω0) = f(ω0) δ(ω − ω0).
2.c. We observe that x(t) = y(t − a) with y(t) = δ(t). Consequently, X(ω) = Y(ω) e^{−jωa} =
e^{−jωa} since Y(ω) = 1.
2.d. Let us denote e^{−at} u(t) by y(t). Then, by (3.51), we obtain:

  X(ω) = (j/2) (Y(ω + ω0) − Y(ω − ω0))
       = (j/2) (1/(j(ω + ω0) + a) − 1/(j(ω − ω0) + a))
       = ω0 / ((a + jω)² + ω0²)
Exercise 3.
3.a. The Fourier transform of x(−t) is given by:

  F(x(−t)) = ∫_{−∞}^{+∞} x(−t) e^{−jωt} dt = ∫_{−∞}^{+∞} x(τ) e^{jωτ} dτ = X(−ω)

(substituting τ = −t). For real-valued x(t), we furthermore have X*(ω) = ∫_{−∞}^{+∞} x(t) e^{jωt} dt = X(−ω).
Consequently, for even signals (i.e. such that x(−t) = x(t)), we have X(ω) = X(−ω) =
X*(ω). This implies that Im(X(ω)) = 0. For odd signals (i.e. such that x(−t) = −x(t)),
we have X(−ω) = −X(ω), so that X*(ω) = −X(ω), which implies that Re(X(ω)) = 0.
3.b.

  F(x(t) e^{jω0 t}) = ∫_{−∞}^{+∞} x(t) e^{jω0 t} e^{−jωt} dt = ∫_{−∞}^{+∞} x(t) e^{−j(ω − ω0)t} dt = X(ω − ω0)

3.c. First note that x(t) cos(ω0 t) = (1/2) x(t) (e^{jω0 t} + e^{−jω0 t}). The result then follows from two
applications of the property proven in item [b.].
Exercise 4.
4.a. Using property (3.74), we see that the Fourier transform X(ω) of the periodic signal
x(t) is given by:

  X(ω) = 2π Σ_{k=−∞}^{+∞} a_k δ(ω − kω0)

4.b. The signal g(t) can be rewritten as g(t) = V p_τ(t). Using the fact that the Fourier
transform P(ω) of p_τ(t) is (2/ω) sin(ωτ/2), the Fourier transform G(ω) of g(t) is:

  G(ω) = V P(ω) = (2V/ω) sin(ωτ/2)
Conversion table.
Most mechanical systems can be accurately modeled by a set of differential equations
relating the output y(t) and the input u(t) of the system. In order to simulate the model,
the differential equations have to be solved. This is often complicated. The theory of the
Fourier transform allows one to gain insight into the behaviour of the modeled system without
having to solve the differential equations.
An important tool for this purpose is the frequency response H(ω) of the system. To
determine H(ω), we apply property (3.53) of the Fourier transform to the differential equation(s). This delivers an expression of the Fourier transform Y(ω) of the output y(t) as a
linear function of the Fourier transform X(ω) of the input x(t). Then, H(ω) is just:

  H(ω) = Y(ω) / X(ω)

For example, suppose that a system is described by the differential equation dy(t)/dt + k y(t) =
x(t) (k ∈ R). This equation can be rewritten using (3.53) as jω Y(ω) + k Y(ω) = X(ω). Y(ω)
is thus equal to the following function of X(ω): Y(ω) = (1/(jω + k)) X(ω). The frequency
response H(ω) of the system is thus: 1/(jω + k).
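For this first-order example, the frequency response can be evaluated directly in code. The sketch below is my addition (the value k = 2 is an arbitrary test choice): its modulus at ω = 0 is the static gain 1/k, and it rolls off at high frequencies.

```python
import cmath, math

k = 2.0                                   # arbitrary coefficient of the example system

def H(w):
    # frequency response of dy/dt + k y = x, i.e. H(ω) = 1/(jω + k)
    return 1.0 / (1j * w + k)

assert abs(abs(H(0.0)) - 1.0 / k) < 1e-12   # static gain is 1/k
assert abs(H(1000.0)) < 1e-3                # high frequencies are strongly attenuated

# steady-state response to x(t) = A cos(ω0 t + φ):
# y(t) = A |H(ω0)| cos(ω0 t + φ + ∠H(ω0))
A, w0, phi = 1.0, 3.0, 0.25
amp = A * abs(H(w0))
phase = phi + cmath.phase(H(w0))
assert abs(amp - 1.0 / math.sqrt(w0 ** 2 + k ** 2)) < 1e-12
```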
The frequency response H(ω) is a very important quantity when we are interested in the
behaviour of a system, for two main reasons.
First, the frequency response H(ω) allows one to determine the (steady-state) response y(t) of
the system when the input x(t) is given by x(t) = A cos(ω0 t + φ) (−∞ < t < +∞).
The response is then

  y(t) = A |H(ω0)| cos(ω0 t + φ + ∠H(ω0))

The amplitude of x(t) is multiplied by the modulus of H(ω) evaluated at the frequency
of x(t), i.e. ω = ω0. The phase of x(t) is shifted by the argument of H(ω) at ω = ω0.
See Section 5.1 for more details. The result also holds for sine functions.
Second, more generally, the relation Y(ω) = H(ω)X(ω) can also be used to compute the
response y(t) for any given x(t). For this purpose, determine the Fourier transform
X(ω) of x(t). With X(ω), determine Y(ω) by multiplying H(ω) and X(ω): Y(ω) =
H(ω)X(ω). The output signal y(t) can then be determined by applying the inverse
Fourier transform to Y(ω) (see (3.38)). This methodology is equivalent to solving
differential equations via the Laplace transform. Indeed, the Laplace variable s is here
just replaced by jω.
Another important quantity is the inverse Fourier transform h(t) of H(ω), i.e. h(t) =
F⁻¹(H(ω)). The signal h(t) is called the impulse response of the system. From the impulse
response, it can be determined whether the system is stable and/or causal. The system is
indeed stable if and only if h(t) tends to 0 when t tends to +∞. The system is causal if and
only if h(t) = 0 for all t < 0. These properties come from the fact that the output y(t) of a
system to an input x(t) can also be expressed as the convolution of h(t) with x(t):

  y(t) = ∫_{−∞}^{+∞} h(τ) x(t − τ) dτ
Remark. (Inverse) Fourier transforms can be computed via their respective definitions (3.30)
and (3.38). Alternatively, they can be deduced from standard Fourier transform pairs (see Table
3.2 on page 144) in combination with properties of the Fourier transform (see Table 3.1
on page 141).
The references to the book pertain to the third edition. At the end of the session, there is a
conversion table for the second edition.
[Figure 1: Bode diagram (magnitude in dB and phase in degrees versus frequency in rad/sec).]
  10 d²y(t)/dt² + 0.1 dy(t)/dt + y(t) = x(t)
[Figure: the input x(t) (top) and the output y(t) (bottom), plotted for 10 ≤ t ≤ 40.]
[Figure 3: Closed-loop system with controller C, plant G, command signal u(t), disturbance v(t) and controlled output y(t).]
Exercise 2. Consider the closed-loop system depicted in Figure 3 where y(t) is the controlled
output, u(t) the command signal (and thus not the unit step function), v(t) the disturbance,
C the controller and G the plant. The frequency responses of the controller and of the plant
are given by:

  C(ω) = 1/(jω)  and  G(ω) = 10/(0.1 jω + 1)

a. Determine the frequency response S(ω) such that Y(ω) = S(ω)V(ω) with V(ω), Y(ω)
the Fourier transforms of v(t) and y(t), respectively.
The modulus and argument of the frequency response S(ω) found in item [a.] are represented in the Bode plot of Figure 4.
[Figure 4: Bode plot (magnitude and phase) of S(ω).]
Exercise 3. Consider the simplified rolling mill depicted in Figure 6. This rolling mill is
made up of two rolls rotating at a speed ω0 of 2 rad/s (i.e. the rotation time is ≈ 3.1 s).
At the output of the mill, the thickness of the steel plate has to be equal to 1 mm. In
order to obtain this constant output thickness, a feedback loop such as in Figure 7 is used.
In this feedback loop, the controlled variable y(t) is the output thickness while the control
variable u(t) is the position of the rolls. The objective of the controller C is to keep the output
thickness constant and thus to compensate any disturbance v(t). In a rolling mill, the main
disturbance v(t) consists of the effect of the eccentricity of the rolls on the output thickness.
The eccentricity is a generic term embedding any imperfection of the rolls, e.g. the fact that
the rolls are not perfectly round.
[Figure: the disturbance v(t) (top) and the output y(t) (bottom), plotted for 0 ≤ t ≤ 200.]
[Figure 7: Closed loop with the reference for y(t) (= 1 mm), the command u(t), the disturbance v(t) and the output y(t).]
where V(ω) is the Fourier transform of v(t). S(ω) is called the sensitivity function in control
theory.
b. In Figure 8, three candidate frequency responses S(ω) are proposed. Which sensitivity
function is best able to compensate the eccentricity v(t)? Explain why.
[Figure 8: the moduli of the three candidate sensitivity functions S1(ω), S2(ω) and S3(ω).]
[Figure 9: X1(ω) and X2(ω).]
The frequency ωB is given. The signals x1(t) and x2(t) are speech signals coming from two
different radio channels. Both signals x1(t) and x2(t) are band-limited with a bandwidth
equal to ωB (the same ωB as above!). Thus, the Fourier transforms X1(ω) and X2(ω) of
x1(t) and x2(t) are such that X1(ω) = X2(ω) = 0 for all |ω| > ωB. We will furthermore
suppose that:
- both X1(ω) and X2(ω) are entirely real, i.e. their imaginary parts are equal to zero for all ω;
- X1(ω = 0) = X2(ω = 0) = 1;
- X1(ω) and X2(ω) have the shapes given in Figure 9.
a. Give an expression of the Fourier transform X(ω) of x(t) as a function of X1(ω), X2(ω)
and ωB and represent X(ω) in a graph. For this purpose, use the property of multiplication by a cosine of the Fourier Transform in Table 3.1 on page 141 of the book.
We would like to reconstruct the information signal x1(t) from the received signal x(t) using
a two-step procedure:
Step 1: we generate a signal y(t) by multiplying x(t) by cos(2ωB t):

  y(t) = x(t) cos(2ωB t)

Step 2: we generate a signal z(t) by filtering the y(t) obtained in Step 1 with a filter whose
frequency response H(ω) is given by:

  H(ω) = 2 if −ωB < ω < ωB
  H(ω) = 0 elsewhere

b. Give an expression of the Fourier transform Y(ω) of y(t) as a function of X1(ω), X2(ω)
and ωB and represent Y(ω) in a graph. For this purpose, develop y(t) using the
trigonometric formula: cos(A) cos(B) = (1/2)(cos(A − B) + cos(A + B)).
Hint: use the partial fraction expansion

  (a jω + b) / ((jω + c)(jω + d)) = α/(jω + c) + β/(jω + d)

with α = (b − ac)/(d − c) and β = (ad − b)/(d − c). Indeed,

  α/(jω + c) + β/(jω + d) = (α (jω + d) + β (jω + c)) / ((jω + c)(jω + d)) = ((α + β) jω + (αd + βc)) / ((jω + c)(jω + d))

The latter expression is equal to (a jω + b)/((jω + c)(jω + d)) when

  a jω + b = (α + β) jω + (αd + βc)  ⟺  a = α + β and b = αd + βc

whose solution is α = (b − ac)/(d − c) and β = (ad − b)/(d − c).
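The expansion is easy to verify numerically for arbitrary values of a, b, c and d. The check below is my addition; the values a = 1, b = 5, c = 2, d = 3 are just test values (d ≠ c is required by the formulas for α and β).

```python
def check_partial_fraction(a, b, c, d, w):
    """Return |lhs - rhs| of the partial fraction identity at frequency w."""
    jw = 1j * w
    alpha = (b - a * c) / (d - c)
    beta = (a * d - b) / (d - c)
    lhs = (a * jw + b) / ((jw + c) * (jw + d))
    rhs = alpha / (jw + c) + beta / (jw + d)
    return abs(lhs - rhs)

# the identity holds at every frequency
for w in (0.0, 0.5, 2.0, 10.0):
    assert check_partial_fraction(1.0, 5.0, 2.0, 3.0, w) < 1e-12
```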
a. What is the impulse response h(t) of that filter? Use property (3.53) of the Fourier
transform and the fact that du(t)/dt = δ(t) with u(t) the unit step function and δ(t)
the unit impulse.
b. What is the signal y(t) that is obtained by filtering the input x(t) = e^{−at} u(t) with
a > 0 by H(ω)?
Solutions.
Exercise 1.
1.a. We use (3.53) to rewrite the differential equation as follows:

  10 (jω)² Y(ω) + 0.1 (jω) Y(ω) + Y(ω) = X(ω)

with X(ω) and Y(ω) the Fourier transforms of x(t) and y(t), respectively. The frequency
response H(ω) of the system is thus given by:

  H(ω) = Y(ω)/X(ω) = 0.1 / ((jω)² + 0.01 (jω) + 0.1) = 0.1 / ((0.1 − ω²) + 0.01 jω)

The modulus and argument of H(ω) can thus be computed for each ω:

  |H(ω)| = 0.1 / sqrt((0.1 − ω²)² + (0.01 ω)²)
  ∠H(ω) = −tan⁻¹( 0.01 ω / (0.1 − ω²) )
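These expressions can be evaluated numerically (a check I added, not part of the original solution): near ω = sqrt(0.1) ≈ 0.3162 rad/s the term (0.1 − ω²) vanishes and the modulus peaks.

```python
import math

def H_mod(w):
    # |H(ω)| = 0.1 / sqrt((0.1 - ω²)² + (0.01 ω)²)
    return 0.1 / math.sqrt((0.1 - w ** 2) ** 2 + (0.01 * w) ** 2)

w_res = math.sqrt(0.1)              # ≈ 0.3162 rad/s
peak = H_mod(w_res)                 # = 0.1 / (0.01 · sqrt(0.1)) = 10/sqrt(0.1) ≈ 31.6
assert abs(peak - 10 / math.sqrt(0.1)) < 1e-9
assert abs(H_mod(0.0) - 1.0) < 1e-12   # static gain is 1
assert H_mod(10.0) < 0.01              # strong attenuation far above resonance
```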
1.b. The force x(t) is made up of two frequencies, one of which is the resonance frequency 0.3161 rad/s that can be seen in Figure 1. At this frequency, the value of |H(ω)| can be read from the Bode plot of Figure 1.
2.a.

  S(ω) = 1/(1 + C(ω)G(ω)) = jω (0.1 jω + 1) / (0.1 (jω)² + jω + 10) = jω (0.1 jω + 1) / ((10 − 0.1 ω²) + jω)
2.b. From the expression of |S(ω)|, we note that |S(ω = 0)| = 0. We observe the same
phenomenon in the Bode plot of S(ω): we indeed see that |S(ω)| goes to 0 with a slope
of 20 dB per decade when ω → 0. The modulus of S(ω) is equal to 0.005 (i.e. −45 dB)
at the frequency ω = 0.05. The argument of S(ω) at ω = 0.05 is π/2. At ω = 100, the
modulus of the frequency response S(ω) is 1 and its argument is 0. Consequently,
y(t) ≈ 0.1 sin(100t) + 0.005 sin(0.05t + π/2). The second sine function has an amplitude 20
times smaller than the first one and is thus invisible in the figure representing y(t).
Remark. The frequency response S(ω) is a typical sensitivity function in feedback control,
where the goal is to attenuate disturbances having a frequency content smaller than the
chosen bandwidth of the closed-loop system. The bandwidth ωB is here equal to ω = 0.8
and we have observed that the disturbance at frequency 0.05 ≪ ωB is (almost) completely
rejected while the disturbance at ω = 100 ≫ ωB remains unchanged.
Exercise 3.
3.a. The disturbance due to the imperfections of the rolls is periodic since the same disturbance comes back at each rotation of the roll. Consequently, the fundamental frequency of
v(t) should be equal to the roll speed ω0 = 2 rad/s.
3.b. The modulus of S1(ω) is equal to 1 for ω = ω0, ω = 2ω0 and ω = 3ω0. Consequently,
the first controller keeps the effect of the eccentricity unchanged. The modulus of S2(ω) is
very small at ω = ω0, but is equal to 1 for ω = 2ω0 and ω = 3ω0. Consequently, the second
controller almost completely removes the first harmonic of v(t), but keeps the
two other harmonics unchanged. The modulus of S3(ω) is very small at ω = ω0, ω = 2ω0 and ω = 3ω0.
Consequently, the third controller almost completely removes all harmonics of v(t) and thus
the whole signal v(t). The third controller is thus the best controller to remove the eccentricity.
Exercise 4. The output y(t) of the filter for sub-question (a) is y(t) = 3cos(3t) −
5sin(6t − 30°). The output y(t) corresponding to sub-question (b) is y(t) = Σ_{k=1}^{3} (1/k) cos(2kt).
Exercise 5.
5.a. Using the property (3.52) of the book, we obtain that

  X(ω) = (1/2) (X1(ω − 2ωB) + X1(ω + 2ωB) + X2(ω − 4ωB) + X2(ω + 4ωB))

This delivers the graph of X(ω) given in Figure 10 (the symbol w stands for ω).
5.b. Using the proposed trigonometric formula, we obtain successively:

  y(t) = x(t) cos(2ωB t)
       = x1(t) cos²(2ωB t) + x2(t) cos(4ωB t) cos(2ωB t)
       = (1/2) (x1(t) + x1(t) cos(4ωB t) + x2(t) cos(2ωB t) + x2(t) cos(6ωB t))

[Figure 10: X(ω).]

Consequently,

  Y(ω) = (1/2) X1(ω) + (1/4) (X1(ω − 4ωB) + X1(ω + 4ωB) + X2(ω − 2ωB) + X2(ω + 2ωB) + X2(ω − 6ωB) + X2(ω + 6ωB))

[Figure 11: Y(ω).]
5.c. Filtering y(t) by the proposed filter, we obtain a signal z(t) whose Fourier Transform
Z(ω) is given by Z(ω) = H(ω)Y(ω). Taking a look at the representation of Y(ω), we see
that Z(ω) is then precisely equal to X1(ω) and thus z(t) = x1(t).
5.d. The same procedure can be applied without problem to retrieve x1(t) from xbis(t). Indeed, ybis(t) = y(t) + cos(100t) cos(2ωB t) = y(t) + (1/2)(cos((2ωB − 100)t) + cos((2ωB + 100)t)).
The frequencies 2ωB − 100 and 2ωB + 100 being both larger than ωB, these two cosines
will be removed when filtering ybis(t) by H(ω).
Remark: amplitude modulation. This exercise is about the amplitude modulation (AM)
technique in radio transmission. Each AM radio channel is characterized by a frequency (the
Dutch Radio 1, for example, by 547 kHz). Suppose that a particular radio channel characterized by
a frequency ω1 wishes to transmit a speech signal x1(t). Note that a speech signal is band-limited with a bandwidth of ωB = 25000 rad/s. What is important to realize is that the
radio channel does not directly transmit x1(t) in the air: it transmits a signal x1(t) cos(ω1 t)
where ω1 is the frequency characterizing the radio channel. We see that, in this signal, the
cosine at frequency ω1 (i.e. the so-called carrier) has an amplitude which varies in time
and which is equal to the speech signal. This explains the term amplitude modulation. Each
radio channel does that at its own characteristic frequency. Consequently, our radio device
receives a signal which is very similar to the signal x(t) in this exercise, where ω1 = 2ωB
and ω2 = 4ωB represent the characteristic frequencies of two different radio channels. When
the signal x(t) is received, the procedure presented in the exercise is followed in order to
retrieve the speech signal of one of the radio stations. Now, why do we need to multiply the
speech signal by a carrier at a characteristic frequency in the first place? In fact, if x1(t) and
x2(t) were sent as such in the air, the received signal would be x(t) = x1(t) + x2(t) and
it would be impossible to separate those two signals by filtering since they lie in the same
frequency region. The fact that each radio channel modulates its speech signal by a carrier
at a different frequency makes it possible for the received signal to contain a non-distorted version of the frequency information of the speech signals of all radio stations. This frequency
content is just located in another frequency range. This is evidenced in Figure 10 where we
see that the frequency information of the speech signals of the two different radio channels
(the triangle and the half circle) are received without any distortion: they are just shifted
toward a higher frequency range, i.e. around the characteristic frequency of each of the
radio channels¹. Since the frequency information of both x1(t) and x2(t) is unharmed, it is
possible to retrieve one of these speech signals by following the procedure presented in this
exercise.
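The modulate, multiply and lowpass chain described above can be sketched on a single test tone. All numbers below (message frequency 5 Hz, carrier 40 Hz, sample count) are arbitrary choices of mine, not values from the exercise; the "lowpass" step is replaced by correlating the demodulated signal with the message tone, which isolates its baseband component.

```python
import math

# message: a single tone inside the base band; carrier: well above it
wm, wc = 2 * math.pi * 5, 2 * math.pi * 40
N = 8000
dt = 1.0 / N                     # sample over exactly 1 s

def avg(f):
    # time average over the 1 s window (midpoint samples)
    return sum(f((i + 0.5) * dt) for i in range(N)) / N

m = lambda t: math.cos(wm * t)                 # "speech" signal
s = lambda t: m(t) * math.cos(wc * t)          # transmitted AM signal
y = lambda t: s(t) * math.cos(wc * t)          # demodulated: 0.5 m(t) + terms at 2ωc

# correlating y with the message tone recovers 0.5 times its amplitude,
# since the components around 2ωc average out
recovered = 2 * avg(lambda t: y(t) * math.cos(wm * t))
assert abs(recovered - 0.5) < 1e-6
```

The factor 0.5 is exactly the (1/2) X1(ω) term in Y(ω) above; the filter H(ω) with gain 2 compensates it.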
Exercise 6.
6.a. The Fourier Transform H() can be written as H() = Z()ej with Z() =
1/(j + 2). The inverse Fourier Transform of Z() is z(t) = e2t u(t) with u(t) the unit
step function. Since H() = Z()ej , h(t) is then h(t) = z(t 1) (shift in time property
(3.41)). Consequently, h(t) = e2(t1) u(t 1).
6.b. Since the impulse response h(t) decays to 0 when t , the filter is stable.
6.c. The frequency response is by definition the Fourier Transform of the impulse response.
The frequency response of the filter is thus H().
6.d. The response of a linear filter to x(t) = δ(t) is by definition the impulse response h(t)
of the filter. Indeed, since F(δ(t)) = 1, the Fourier transform of the response is H(ω), whose
inverse Fourier transform is the impulse response h(t). Consequently, when x(t) = δ(t − 2),
the response is h(t − 2) = e^{−2(t−3)}u(t − 3).

¹ For this to be possible, it is thus important that the difference between the characteristic frequencies of
two channels is larger than 2ωB.
For an input x(t) = e^{−t}u(t), whose Fourier transform is X(ω) = 1/(jω + 1), we obtain:

Y(ω) = H(ω)X(ω) = e^{−jω} / ((jω + 2)(jω + 1))

with X(ω) the Fourier transform of x(t). The Fourier transform Y(ω) can now be separated
as follows:

Y(ω) = e^{−jω} [ 1/(jω + 1) − 1/(jω + 2) ] = e^{−jω} V(ω)

Using the shift-in-time property of the Fourier Transform, we see that y(t) = v(t − 1) where v(t) is
the inverse Fourier Transform of V(ω). The signal v(t) is here equal to v(t) = (e^{−t} − e^{−2t})u(t).
Thus, y(t) = (e^{−(t−1)} − e^{−2(t−1)})u(t − 1).
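The output found above can be verified numerically: assuming the input x(t) = e^{−t}u(t) (the input consistent with the denominator (jω + 2)(jω + 1)), the convolution of x(t) with h(t) = e^{−2(t−1)}u(t − 1) should reproduce y(t) = (e^{−(t−1)} − e^{−2(t−1)})u(t − 1). A crude Riemann-sum convolution suffices as a check:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
x = np.exp(-t)                                    # e^{-t} u(t), evaluated on t >= 0
h = np.where(t >= 1, np.exp(-2 * (t - 1)), 0.0)   # e^{-2(t-1)} u(t-1)

y_num = np.convolve(x, h)[: len(t)] * dt          # numerical convolution integral
y_th = np.where(t >= 1, np.exp(-(t - 1)) - np.exp(-2 * (t - 1)), 0.0)
```

The maximum difference between `y_num` and `y_th` is of the order of the step size dt.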
Exercise 7.
7.a. The impulse response of a system (or a filter) is the inverse Fourier transform of its
frequency response. To compute the inverse Fourier transform of H(), we notice that
H(ω) = (jω) · Z(ω)  with  Z(ω) = 1/(jω + b)

where Z(ω) is the Fourier transform of z(t) = e^{−bt}u(t). Using now the property (3.53) of
the book, we conclude that the impulse response h(t) (i.e. the inverse Fourier transform of
H(ω)) is the derivative of z(t) with respect to time:

h(t) = dz(t)/dt = −b e^{−bt}u(t) + e^{−bt} du(t)/dt = −b e^{−bt}u(t) + e^{−bt}δ(t) = −b e^{−bt}u(t) + δ(t)

since e^{−bt}δ(t) = δ(t).
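As a quick consistency check of this impulse response: the Fourier transform of −b e^{−bt}u(t) + δ(t) is 1 − b/(jω + b), which must equal H(ω) = jω/(jω + b). This identity can be checked at a few arbitrary frequencies (b = 3 is an arbitrary test value):

```python
b = 3.0
for w in (0.1, 1.0, 10.0, 100.0):
    s = 1j * w
    lhs = 1.0 - b / (s + b)   # Fourier transform of delta(t) - b e^{-bt} u(t)
    rhs = s / (s + b)         # H(omega) = j*omega / (j*omega + b)
    assert abs(lhs - rhs) < 1e-12
```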
For an input x(t) = e^{−at}u(t), i.e. X(ω) = 1/(a + jω), we obtain:

Y(ω) = jω / ((b + jω)(a + jω)) = −[a/(b − a)] · 1/(a + jω) + [b/(b − a)] · 1/(b + jω)

⟹ y(t) = (1/(b − a)) (b e^{−bt} − a e^{−at}) u(t)
Conversion table.
[figure: the input signal x(t) as a function of time [s]]
[figure: Bode plot (Magnitude (dB) and Phase (deg)) of the three Butterworth filters, N = 1, 2, 3]
H1(ω) = 1 / (jω/6 + 1)

H2(ω) = 1 / ((jω)^2/36 + 8.485 jω/36 + 1)

H3(ω) = 1 / ((jω)^3/216 + 12(jω)^2/216 + 72 jω/216 + 1)
The frequency responses of these three filters are represented in a Bode plot in Figure 2.
We have filtered x(t) with each of these three filters. Let us denote by yi(t) (i = 1, 2, 3) the
output obtained by filtering x(t) by Hi(ω) (i = 1, 2, 3). The three outputs are represented
together with the desired output 0.5cos(2t) in Table 1.
c. Based on the expression of H1(ω), show that the Butterworth filter is indeed implementable in practice.
d. Explain the shape of the outputs in Table 1. Use Figure 2 for this purpose.
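As an aid for item [d.], the three frequency responses given above can be evaluated directly at the frequencies present in x(t) (ω = 2, 38 and 42 rad/s). This is only a numerical sketch of the filter formulas; the 8.485 coefficient is taken as printed above:

```python
def H1(w):
    s = 1j * w
    return 1 / (s / 6 + 1)

def H2(w):
    s = 1j * w
    return 1 / (s**2 / 36 + 8.485 * s / 36 + 1)

def H3(w):
    s = 1j * w
    return 1 / (s**3 / 216 + 12 * s**2 / 216 + 72 * s / 216 + 1)

for w in (2.0, 38.0, 42.0):
    # magnitude of each filter at the frequencies contained in x(t)
    print(w, abs(H1(w)), abs(H2(w)), abs(H3(w)))
```

The printout shows that all three filters pass ω = 2 rad/s almost unchanged, while the attenuation at 38 and 42 rad/s grows quickly with the filter order.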
Table 1: Top left: desired output 0.5cos(2t); Top right: y1 (t); Bottom left: y2 (t); Bottom
right: y3 (t)
Exercise 9. Consider the infinite-time signal x(t) depicted in Figure 3 in the interval
[−40, 40]. The continuous-time signal x(t) is a periodic signal with a fundamental frequency ω0 = π/20 rad/s. We have shown in Exercise 4 of PART 1 of Session 2 that x(t) can
be written as:

x(t) = Σ_{k=1}^{+∞} ak cos(kω0 t)

with

ak = 0 when k = 0
ak = (4/(kπ)) sin(kπ/2) when k ≠ 0

a. Determine a value for A and a value for ωcut in the expression of the ideal filter H(ω):

H(ω) = A for −ωcut ≤ ω ≤ ωcut
H(ω) = 0 elsewhere
[Figure 3: the periodic signal x(t) plotted in the interval t ∈ [−40, 40] s]
The desired output ydes(t) is equal to ydes(t) = (4/π) cos(πt/20) − (4/(3π)) cos(3πt/20). This output signal is represented
in the left part of Table 2.

Since the ideal filter designed in item [a.] is not implementable in practice, we decide to
perform the filtering operation with a Butterworth filter¹ with a cut-off frequency equal to
4π/20 rad/s. The order of the filter has been chosen as N = 1 and the resulting output is
represented as a dashed line in the right part of Table 2.
b. Justify the choice of the cut-off frequency based on what has been found in item [a.].

c. Explain why the obtained output when N = 1 is more similar to the input x(t) than
to the desired output ydes(t).

In order to obtain an output closer to ydes(t), we have increased the order of the Butterworth filter to N = 10. This filter yields the output represented as a solid line in the right
part of Table 2.
d. Explain why the obtained output when N = 10 is now much more similar to the desired
output ydes (t).
Table 2: Left: desired output ydes (t); Right: ybutter (t) when N = 1 (black dashed) and when
N = 10 (blue solid)
SOLUTIONS
Exercise 8.
8.a. Since the frequencies ω = 38 rad/s and ω = 42 rad/s are larger than 6 rad/s, and since
the frequency ω = 2 rad/s is smaller than 6 rad/s, the output of the ideal filter will indeed
be (1/2)cos(2t).
8.b. First, note that the filtering operation is always done in the time domain. Now, it
is shown in the book that the impulse response h(t) of the ideal filter is nonzero for t < 0
(see page 241). The filter is therefore non-causal, which means that, to compute the value of the
output at time t = 10 s using e.g. the convolution integral h(t) ∗ x(t), we not only need
the values of the signal x(t) for t ≤ 10, but also all the values for t > 10. Since x(t) is the
measurement of a physical variable, the values of x(t) at time t > 10 are not available at
time t = 10. Consequently, the ideal filter is not implementable in practice.
8.c. The impulse response h1(t) of the filter H1 is the inverse Fourier transform of H1(ω).
Using Table 3.1, we see that h1(t) = 6e^{−6t}u(t), which is equal to 0 for t < 0. The filter is
thus causal and can thus be implemented in practice.

Remark. If, as usual, the measurement x(t) is in the form of a voltage, the filtering
operation H1(ω) can easily be done using the analog circuit represented in Figure 4, provided
that RC = 1/6. Indeed, the frequency response of this circuit is 1/(1 + jωRC).
8.d. Recall first that, for i = 1, 2, 3:

yi(t) = (1/2)|Hi(2)| cos(2t + ∠Hi(2)) + (1/4)|Hi(38)| cos(38t + ∠Hi(38)) + (1/4)|Hi(42)| cos(42t + ∠Hi(42))

We observe that the output y1(t) of the Butterworth filter with N = 1 still contains
a visible contribution of the high-frequency components at ω = 38 and ω = 42 rad/s, since these
frequencies are only mildly attenuated when N = 1.
[Figure 4: analog RC circuit with input x(t) and output y(t)]

9.a. The signal x(t) can be written as:

x(t) = (4/π) cos(ω0 t) + 0 − (4/(3π)) cos(3ω0 t) + 0 + (4/(5π)) cos(5ω0 t) + ...

Since ydes(t) = (4/π) cos(ω0 t) − (4/(3π)) cos(3ω0 t) only contains the first two nonzero
harmonics, we must choose A = 1 and a cut-off frequency ωcut satisfying 3π/20 < ωcut < 5π/20.
9.b. The cut-off frequency of the Butterworth filter can be chosen equal to the cut-off frequency of the ideal filter. Consequently, ωcut = 4π/20 = π/5 is a reasonable choice.
9.c. The obtained output when N = 1 is more similar to the input x(t) than to the desired
output ydes(t) because the Butterworth filter of order 1 does not attenuate the harmonics above 4π/20 sufficiently to significantly change the shape of x(t) (see Figure 2).

9.d. Unlike when N = 1, the Butterworth filter of order 10 rejects the harmonics above 4π/20 almost completely.
Therefore, the obtained output when N = 10 is more similar to
the desired output ydes(t). However, a Butterworth filter of order 10 introduces a (large)
phase shift of the harmonics in the bandwidth, i.e. at ω0 and 3ω0, and this phase shift is different
for the harmonic at ω0 and for the harmonic at 3ω0. This explains the difference in shape
between ydes(t) and ybutter(t).
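The effect of the filter order can be quantified with the standard Butterworth magnitude formula |H(ω)| = 1/√(1 + (ω/ωcut)^(2N)) (a textbook formula, used here as a sketch rather than the exact filters of the exercise). With ωcut = 4π/20, the first rejected harmonic of x(t) sits at ω = 5π/20:

```python
import math

def butter_mag(w, w_cut, N):
    # magnitude of an order-N Butterworth low-pass filter with cut-off w_cut
    return 1.0 / math.sqrt(1.0 + (w / w_cut) ** (2 * N))

w_cut = 4 * math.pi / 20
w5 = 5 * math.pi / 20          # frequency of the k = 5 harmonic of x(t)

print(butter_mag(w5, w_cut, 1))    # mild attenuation for N = 1
print(butter_mag(w5, w_cut, 10))   # much stronger attenuation for N = 10
```

Because 5π/20 is only slightly above the cut-off, the order-1 filter barely attenuates it, while the order-10 filter suppresses it strongly; this is exactly the behaviour invoked in items [c.] and [d.].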
[figure: the system with input x(t) and output y(t)]

Figure 2: |H(ω)|
Our objective is to determine the input signal x(t) that has to be applied to the system
to force the output y(t) to follow the periodic pattern represented in Figure 3. This
periodic pattern has a fundamental period of 60 s and is very close to a block signal of
amplitude 45 degrees. However, the transition between −45 and 45 is here not instantaneous
as in a block signal, but takes 2.25 s.

The signal yr(t) can be expressed as the following Fourier series expansion (trigonometric form):

yr(t) = Σ_{k=1}^{+∞} bk sin(kω0 t)

with ω0 = π/30 rad/s. The Fourier coefficients bk are only nonzero for odd k. The absolute
values of these Fourier coefficients are represented at the top of Table 1. In this table, the
coefficients are represented on the left side from k = 0 till k = 120 while, on the right side,
we zoom in on the coefficients from k = 40 till k = 120.
[Figure 3: the desired periodic pattern yr(t) [deg] as a function of time [s]]

[Figure 4: the output y(t) [deg] obtained in the open-loop configuration]

[Figure 5: detail of the output y(t) [deg] between t = 2.5 s and t = 7.5 s]
T(ω) = Y(ω) / Yr(ω)

with Yr(ω) and Y(ω) the Fourier transforms of yr(t) and y(t), respectively.
² This controller has been designed using the H∞ framework. Reference: Ferreres-Fromion, Proc. European Control Conference, 1997.
[Figure 6: the closed-loop configuration: a controller computes the input x(t) of the system from the reference yr(t) and the output y(t)]

Figure 7: |T(ω)|
d. Determine the expression of T(ω) as a function of H(ω) and C(ω), i.e. the frequency
responses of the flexible transmission system and of the controller, respectively.

The modulus |T(ω)| of T(ω) is represented in Figure 7.

We have run an experiment in the closed-loop configuration and we obtain an output
y(t) as depicted in Figure 8. In Figure 9, we give a detail of y(t) and we compare y(t) with
the desired yr(t).

e. Is the output y(t) obtained in the closed-loop configuration a better image of yr(t)
than was the case in the open-loop configuration (see Figure 4)?

f. Explain the observation made in item [e.]. For this purpose, use can be made of the
bottom of Table 1 where we represent |bk||T(kω0)|.
[Figure 8: the output y(t) [deg] obtained in the closed-loop configuration as a function of time [s]]
Figure 9: Detail of the output y(t) obtained in the closed-loop configuration (red solid)
compared to the same detail of yr (t) (blue dotted)
Suppose now that the measurement of the output y(t) is achieved using an electrical sensor
and that the electrical network induces a measurement error v(t) = sin(100t):

ymeasured(t) = y(t) + v(t) = y(t) + sin(100t)

This means that the signal entering the controller is not yr(t) − y(t), but yr(t) − y(t) − v(t).

h. Knowing that T(ω = 100) ≈ 0.0001, can you deduce whether this measurement error
will induce a (significant) change in the shape of the actual output y(t) of the system?
SOLUTIONS
Exercise 10.
10.a. In Figure 4, we see that the oscillation is a damped sinusoid (a sinusoid multiplied
by e^{−σt} for some σ > 0). Now, by looking at Figure 5, we observe that this sinusoid has a
period of approximately 0.8 s. This leads to a frequency of:

ωoscillation ≈ 2π/0.8 ≈ 8 rad/s
10.b. By inspection of Figure 2, we see that this oscillation frequency corresponds to the
frequency of the first resonance peak of H(ω).
10.c. Let us first determine the output y(t). For this purpose, let us note that, since
x(t) = yr(t), we have that:

x(t) = Σ_{k=1}^{+∞} bk sin(kω0 t)

³ In fact, the damping is due to the summation of different harmonics with different phases and with
frequencies around ω = 8.

Since the main harmonics of y(t) are essentially those of x(t), and since the extra harmonics of y(t) around ω = 8 are limited in amplitude because
the resonance peak is limited (the stiffness of the elastic belts is large), the shapes of
y(t) and x(t) = yr(t) are quite similar.
10.d. In Figure 6, we see that:

X(ω) = C(ω) (Yr(ω) − Y(ω))

with X(ω), Yr(ω) and Y(ω) the Fourier transforms of x(t), yr(t) and y(t), respectively. Now,
we have also that Y(ω) = H(ω)X(ω). This yields:

Y(ω) = H(ω)X(ω) = H(ω)C(ω) (Yr(ω) − Y(ω))

⟹ (1 + H(ω)C(ω)) Y(ω) = H(ω)C(ω) Yr(ω)

⟹ T(ω) = Y(ω)/Yr(ω) = H(ω)C(ω) / (1 + H(ω)C(ω))
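The closed-loop algebra above can be sanity-checked numerically at one frequency: with arbitrary complex values standing in for H(ω) and C(ω), solving Y = HC(Yr − Y) for Y must give Y/Yr = HC/(1 + HC):

```python
H = 2.0 - 1.5j     # arbitrary value of H(omega) at some frequency (illustrative)
C = 0.3 + 0.8j     # arbitrary value of C(omega) at the same frequency
Yr = 1.0 + 0.0j

Y = H * C * Yr / (1 + H * C)          # closed-form solution of Y = H C (Yr - Y)
assert abs(Y - H * C * (Yr - Y)) < 1e-12   # Y indeed satisfies the loop equation

T = Y / Yr
assert abs(T - H * C / (1 + H * C)) < 1e-12
```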
Similarly to the open-loop case, the output can be written as:

y(t) = Σ_{k=1}^{+∞} bk |T(kω0)| sin(kω0 t + ∠T(kω0))

Consequently, we see that bk|T(kω0)| represents the amplitude of the harmonic at kω0 in the
output. We can therefore explain the shape of y(t) as follows:

The fact that the oscillations have disappeared can be explained by the fact that,
unlike H(ω), T(ω) does not present any resonance peak with important amplitude.

Similarly to the open-loop case, we have also to note for this purpose that bk is
significant up to k = 19, i.e. that the main components of yr(t) are sinusoids at frequencies smaller than 19ω0 ≈ 2 rad/s (see the top of Table 1). Note also that |T(ω)| ≈ 1
for ω < 2 and that, consequently, bk|T(kω0)| ≈ bk for k ≤ 19. This is confirmed by the
top and the bottom parts of Table 1, which are similar up to k = 19. Consequently,
the significant harmonics of y(t) and yr(t) are approximately equal, which explains the
almost perfect match between y(t) and yr(t). In fact, since |T(ω)| < 1 for all ω > 2
and since |T(ω)| → 0 when ω → ∞, y(t) only misses the harmonics of yr(t) at the
highest frequencies. This yields the smoothed edges and the small delay⁴.
⁴ Note that this is unavoidable since a bounded control action can never achieve T(ω) = 1 for all ω.
10.h. The measurement error can be seen as a second input to the system. Consequently,

Y(ω) = T1(ω)Yr(ω) + T2(ω)V(ω)

where V(ω) is the Fourier transform of v(t) and T1(ω), T2(ω) are two frequency responses
that have to be determined. Following a similar reasoning as in item [d.], we can deduce
T1(ω) and T2(ω) as follows. First note that:

X(ω) = C(ω) (Yr(ω) − Y(ω) − V(ω))

The last equation combined with Y(ω) = H(ω)X(ω) yields:

Y(ω) = H(ω)X(ω) = H(ω)C(ω) (Yr(ω) − Y(ω) − V(ω))

⟹ (1 + H(ω)C(ω)) Y(ω) = H(ω)C(ω) (Yr(ω) − V(ω))

⟹ Y(ω) = [H(ω)C(ω)/(1 + H(ω)C(ω))] Yr(ω) − [H(ω)C(ω)/(1 + H(ω)C(ω))] V(ω)

Thus T1(ω) = T(ω) and T2(ω) = −T(ω). Since the perturbation induced by v(t) on the actual output is a sinusoid of amplitude |T(100)| ≈ 0.0001
degree, and is therefore negligible, we have that y(t) ≈ y1(t), with y1(t) the reference-induced part of the output.
Table 1: Top left: |bk| in the interval [0, 120]; Top right: |bk| in the interval [40, 120];
Middle left: |bk||H(kω0)| in the interval [0, 120]; Middle right: |bk||H(kω0)| in the interval
[40, 120]; Bottom left: |bk||T(kω0)| in the interval [0, 120]; Bottom right: |bk||T(kω0)| in
the interval [40, 120].
X(Ω) = Σ_{n=−∞}^{+∞} x[n] e^{−jΩn}
The DTFT is always periodic with period 2π, i.e. X(Ω + 2π) = X(Ω). The DTFT of a given
signal x[n] can be computed:

- either by applying the definition (thus by evaluating the summation above). In some
cases, an important formula for doing this is the following one:

Σ_{k=q1}^{q2} r^k = (r^{q1} − r^{q2+1}) / (1 − r)

which can be applied when r is a real or complex number¹ and when q1 < q2;

- or by using standard DTFT pairs in combination with properties of the DTFT. Table 4.1 on page 177 of the book summarizes the DTFTs of some common signals. The
properties of the DTFT are summarized in Table 4.2 on page 178.
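The finite geometric-series formula above is easy to check numerically for a complex ratio (the particular r, q1, q2 below are arbitrary test values):

```python
r = 0.9 * complex(0.6, 0.8)   # a complex ratio with |r| = 0.9 (and r != 1)
q1, q2 = 2, 37

direct = sum(r**k for k in range(q1, q2 + 1))     # term-by-term summation
closed = (r**q1 - r**(q2 + 1)) / (1 - r)          # closed-form expression
assert abs(direct - closed) < 1e-12
```

This is exactly the summation used below to evaluate DTFTs of signals like a^n u[n].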
The references to the book pertain to the third edition. At the end of the session, there is a
conversion table for the second edition.
Exercise 1. Determine the value of the discrete-time signal x[n] = u[n] − 2u[n − 1] + u[n − 4]
for each integer n (u[n] is the discrete-time unit-step function (see p. 14 of the book)).
1
Exercise 2. Let us consider the continuous-time signal x(t) = cos(t). This signal is sampled
with a sampling period Ts = 0.1. Is the obtained discrete-time signal x[n] periodic? What
happens if we change the sampling period to Ts = 0.1π/3 = 0.10472...?
Exercise 3. Consider the following discrete-time signal x[n]:

x[n] = 1 for n = 0 and n = 2
x[n] = −1 for n = 1
x[n] = 0 elsewhere

a. Compute the DTFT X(Ω) of the finite-time signal x[n].
Exercise 4. Let a, b, Ω0 be given constants (|a| < 1 and b is an integer). Compute the
(generalized if necessary) DTFT X(Ω) of the following discrete-time signals:

a. x[n] = a^n u[n] with u[n] the discrete-time unit step function (see page 14 of the book)

b. x[n] = a^{|n|} with |.| the absolute value operator. Use the fact that x[n] = a^{−n} when
n < 0.

c. x[n] = δ[n − b] with δ[n] the unit-pulse function (see page 15). Use a property of the
DTFT.

d. x[n] = a^n sin(Ω0 n) u[n]. Use a property of the DTFT.
PART 2: SAMPLING
Computers cannot deal with continuous-time signals. To analyze the properties of a signal or
for other uses involving computers, signals have to be sampled. Consequently, a discrete-time
signal x[n] can be the sampled version of a continuous-time signal x(t): x[n] = x(t = nTs),
with Ts the sampling period and ωs = 2π/Ts the sampling frequency.

When x[n] = x(t = nTs), there exists a simple relation between the continuous-time Fourier
transform X(ω) of the original continuous-time signal x(t) and the DTFT X(Ω) of the
sampled signal x[n]. This relation is illustrated in the figure below:
[figure: X(ω) (top); Ts Xs(ω), periodic with period ωs, with main interval [−ωs/2, ωs/2] (middle); Ts X(Ω), with main interval [−π, π] (bottom)]
In this figure, we see that, starting from the continuous-time X(ω), we construct Xs(ω) by
summing up shifted versions of X(ω):

Xs(ω) = (1/Ts) Σ_{k=−∞}^{+∞} X(ω − kωs)    (1)
      = (1/Ts) (... + X(ω − 2ωs) + X(ω − ωs) + X(ω) + X(ω + ωs) + X(ω + 2ωs) + ...)    (2)
Note that the factor 1/Ts in the equations above is just a scaling factor and is therefore not
really important. The DTFT X(Ω) is then directly obtained from Xs(ω) by replacing the
frequency ω by the normalized frequency Ω = ωTs:

X(Ω) = Xs(ω = Ω/Ts)    (3)

or equivalently:

Xs(ω) = X(Ω = ωTs)    (4)
Observations. The important thing to note here is that Xs(ω) and X(Ω) contain the same
information: we indeed see in (3) that X(Ω) is just Xs(ω) expressed with the normalized
frequency Ω.² The period of X(Ω) being 2π, Xs(ω) is therefore a periodic function with
period ωs.

The important question now is: have we lost information by sampling? ANSWER: no information is lost if and only if Ts Xs(ω) = X(ω) for all ω ∈ [−ωs/2, ωs/2]. In other words, no
information is lost if the main interval [−ωs/2, ωs/2] of Xs(ω) (or the main interval [−π, π] of
X(Ω)) presents a non-distorted image of X(ω).
A necessary and sufficient condition for this is that the sampling frequency ωs is chosen
larger than twice the highest frequency present in the Fourier transform X(ω) of the original
continuous-time signal x(t) (Shannon's theorem). In other words, if X(ω) = 0 for |ω| > ωB
(ωB given), then ωs should be chosen such that ωs > 2ωB. In the figure on page 3, we see that
the condition of Shannon's theorem is respected (X(ω) = 0 for all |ω| > ωs/2). This in turn
explains that X(Ω) is a non-distorted version of X(ω) in its main interval [−π, π].
When ωs has been chosen according to Shannon's theorem, no information is lost through
sampling and, if we would like so, the continuous-time signal x(t) can be reconstructed from
its sampled version x[n] using the following relation:

x(t) = Σ_{n=−∞}^{+∞} x[n] · sin((ωs/2)(t − nTs)) / ((π/Ts)(t − nTs))    (5)
Note that this relation is non-causal. Indeed, to compute the continuous-time signal at time
t, you need x[n] from n = −∞ to n = +∞. This is in fact logical since the reconstruction can
be interpreted as filtering the fictive (continuous-time) signal corresponding to Xs(ω) with
an ideal low-pass filter with cut-off frequency³ ωcut = ωs/2, and we have seen in Exercise 8 of
Session 3 that ideal filtering is a non-causal operation. Moreover, it is also to be noted that
this non-causal operation has to be performed for all t ∈ R to reconstruct the original signal.
Consequently, even though we will consider this relation in this exercise session, it is an ideal reconstruction mechanism which is rarely used in practice. Instead, simpler
methods such as the Zero Order Hold (ZOH) mechanism are used to generate continuous-time
signals from discrete-time signals. The ZOH mechanism will be discussed in Session 5.
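Relation (5) can be tried numerically. The sketch below samples x(t) = cos(10t) with ωs = 30 rad/s (so Shannon's theorem is respected) and reconstructs the value at a point between two sampling instants with a truncated version of the sum; the truncation range is an arbitrary practical choice, since the exact relation needs infinitely many samples:

```python
import math

ws = 30.0                 # sampling frequency [rad/s]
Ts = 2 * math.pi / ws     # sampling period

def term(t, n):
    # one term of (5): sin((ws/2)(t - nTs)) / ((pi/Ts)(t - nTs)), with the limit 1 at t = nTs
    u = t - n * Ts
    if abs(u) < 1e-12:
        return 1.0
    return math.sin(0.5 * ws * u) / ((math.pi / Ts) * u)

t = 0.05   # a point strictly between two sampling instants
rec = sum(math.cos(10 * n * Ts) * term(t, n) for n in range(-3000, 3001))
print(rec, math.cos(10 * t))   # the truncated sum approaches the true value cos(10 t)
```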
Let us now see what happens when Shannon's theorem is not respected. Shannon's theorem
is not respected if a signal x(t) is sampled with a sampling frequency ωs while its Fourier
transform X(ω) is nonzero for |ω| > ωs/2. In this case, the different terms in the summation on the right-hand side of (2) overlap. Consequently, X(Ω) is no
longer a perfect image of X(ω) in its main interval [−π, π]. This phenomenon is called
aliasing. When aliasing occurs, it is completely impossible to reconstruct x(t) from the
sampled signal, even if we use (5). In other words, if Shannon's theorem is not satisfied,

x(t) ≠ Σ_{n=−∞}^{+∞} x[n] · sin((ωs/2)(t − nTs)) / ((π/Ts)(t − nTs))
In exercises 5 and 6, we will study the theory above in detail to gain understanding.
Exercise 5. Consider the continuous-time signal x(t) = cos(10t). This signal is sampled with
a sampling frequency ωs = 30 rad/s (i.e. Ts = π/15 s).

[Figure 1: x(t) = cos(10t) (solid) and its samples x[n] = x(nπ/15) as a function of time [s]]

Exercise 6. The same signal x(t) = cos(10t) is now sampled with a sampling frequency
ωs = 7.5 rad/s (i.e. Ts = 4π/15 s).

[Figure 2: x(t) = cos(10t) (solid) and its samples x[n] = x(4πn/15) as a function of time [s]]
Exercise 7. Consider the continuous-time signal:

x(t) = x1(t) + x2(t)  with  x2(t) = A cos(100t)
The signal x1(t) is the signal of interest and is band-limited with a bandwidth ωB = 30 rad/s,
i.e. X1(ω) = 0 for |ω| > ωB. The signal x2(t) is an undesirable disturbance at 50 Hz which
is due to the electricity network and which has to be filtered away.

For further use in a computer, the signal has to be sampled at a sampling frequency
ωs = 80 rad/s (i.e. Ts = π/40 s). In this exercise, we will analyze what is the best method
to filter away x2(t) and to obtain a sampled signal which is a faithful image of x1(t). For
this purpose, we will need an anti-aliasing filter (see later).
a. Is the sampling frequency ωs sufficiently high to ensure that the DTFT X1(Ω) of the
sampled signal x1[n] = x1(nTs) is, in its main interval [−π, π], a non-distorted image
of the continuous-time Fourier transform X1(ω) of x1(t)?

b. Suppose that, before the sampling device, the signal x(t) is filtered by a filter whose
frequency response F(ω) is given by:

F(ω) = 1 for −40 ≤ ω ≤ 40
F(ω) = 0 elsewhere

Such a filter is generally called an anti-aliasing filter. Denote by z(t) the signal obtained
by filtering x(t) by this filter. This signal z(t) is the signal entering the sampling device,
which delivers z[n]. Show that the DTFT Z(Ω) of z[n] is equal to X1(Ω), the DTFT
of x1[n] found in item [a.].
c. Why is it important to place the filter F(ω) before the sampling device and not after?
Exercise 8 (Examination November 2006). Some periodic vibrations due to air turbulence are very hazardous for the wings of an aircraft. It is therefore important to detect
them when they are occurring in order to compensate them. Suppose these hazardous vibrations are vibrations at ω0 and 2ω0 for ω0 = 20 rad/s. An engineer at AIRBUS has
developed a digital device to detect those vibrations by spectral analysis. To explain how
this device works, we first denote the continuous-time vibration (displacement) of the wing
at a particular spot by x(t). The detection device works in three steps:

Step 1: The vibration x(t) is measured as illustrated in the figure below:
Step 1: The vibration x(t) is measured as illustrated in the figure below:
x(t)
z(t)
Filter F()
Sampling
z[n]
In this picture, the filter F () is a so-called anti-aliasing filter and is given by:
F () =
1 for 50 50
0 elsewhere
and the sampling occurs with a sampling frequency equal to s = 100 rad/s.
Step 2: The DTFT Z(Ω) of z[n] is computed and represented in the main interval
[−π, π].

Step 3: It is then verified whether the main interval of Z(Ω) exhibits delta-impulses
at the normalized frequencies Ω = ±0.4π and/or at Ω = ±0.8π. If such delta-impulses
are detected, an alarm goes off.
a. Show that the proposed device is able to detect the hazardous vibrations x(t) = cos(ω0 t)
and x(t) = cos(2ω0 t).

b. How does the detection device respond to a vibration x(t) = cos(120t) (non-hazardous
vibration)? Explain your answer.

c. How would the detection device respond to a vibration x(t) = cos(120t) (non-hazardous vibration) if the filter F(ω) were absent? Explain your answer.
Figure 3: |X(ω)| (solid) and |Ts Xs(ω)| when Ts = 0.2 (dashed) and when Ts = 0.1
(dotted)
Exercise 9. Consider the continuous-time signal x(t) = e^{−t}u(t). This signal is sampled
with a period Ts. This delivers the signal x[n] = x(t = nTs).
a. Determine the continuous-time Fourier transform of x(t).
b. Determine the DTFT X(Ω) of x[n]. Deduce also Xs(ω) from X(Ω) based on formula (4)
in the theory summary.

c. The moduli of X(ω) and of Ts Xs(ω) (for Ts = 0.2 and for Ts = 0.1) are represented
in Figure 3. Explain why, in both cases, Xs(ω) is a distorted image of X(ω).
d. Suppose now that, before the sampling of x(t), x(t) is filtered by an (anti-aliasing)
filter:

F(ω) = 1 for −10 ≤ ω ≤ 10
F(ω) = 0 elsewhere

This filtering operation delivers the signal y(t) whose Fourier transform is denoted
Y(ω). Draw the modulus of Y(ω) using the information in Figure 3.
e. The signal y(t) is subsequently sampled with sampling period Ts = 0.1 s. Draw the
modulus of Ys(ω) = Y(Ω = ωTs) using what has been found in item [d.].
f. Let us define:

yrec(t) = Σ_{n=−∞}^{+∞} y[n] · sin((ωs/2)(t − nTs)) / ((π/Ts)(t − nTs))    (6)

xrec(t) = Σ_{n=−∞}^{+∞} x[n] · sin((ωs/2)(t − nTs)) / ((π/Ts)(t − nTs))    (7)

where y[n] and x[n] have been defined above and Ts = 0.1 s, ωs = 20π rad/s. Show,
via a Fourier transform analysis, that the signal yrec(t) reconstructed with y[n] is a
better image of x(t) than the signal xrec(t) reconstructed with x[n]. In other words,
show that the presence of the anti-aliasing filter is beneficial.
Exercise 10. CD recording. Consider the continuous-time signal x(t) corresponding to
a piece of music lasting 600 s. The signal x(t) is not band-limited. However, it is important
to note that all frequencies above 120000 rad/s cannot be heard by a human being. We would
like to record this piece of music onto a CD. For this purpose, the signal has to be sampled and
each element x[n] of the sampled signal will then be engraved onto the CD.

a. Show that the number of samples that has to be engraved increases with increasing
values of the sampling frequency ωs.

We would like to keep the number of samples that has to be engraved as small as possible while guaranteeing that the engraved information contains all the hearable frequency
information of x(t).

b. Determine a device made up of a sampling block and an anti-aliasing filter which
generates a sampled signal having the property described above. Exercise 9 is a good
source of inspiration for this.
Solutions
Exercise 1. The signal x[n] is equal to u[n] + y[n] + z[n] where y[n] = −2u[n − 1] and
z[n] = u[n − 4]. The discrete-time unit-step function u[n] is given by:

u[n] = 0 for n < 0
u[n] = 1 for n ≥ 0

The signal y[n] is made up of u[n − 1], which is u[n] right-shifted by one sample. Consequently, y[n] = −2u[n − 1] is given by:

y[n] = 0 for n < 1
y[n] = −2 for n ≥ 1

Using a similar reasoning, we find that:

z[n] = 0 for n < 4
z[n] = 1 for n ≥ 4

Summing the three contributions yields:

x[n] = 1 for n = 0
x[n] = −1 for n = 1, 2 and 3
x[n] = 0 for any other n
Exercise 2. The signal x[n] = x(0.1n) = cos(0.1n) is not periodic since we cannot find any
integer rx such that 0.1rx = 2πq for some integer q (see page 16). For the second value of
Ts, we can prove using the same procedure that the discrete-time signal x[n] = cos(0.1(π/3)n)
is periodic with (fundamental) period rx = 60. This result can be checked as follows:
cos(0.1(π/3)(n + 60)) = cos(0.1(π/3)n + 2π) = cos(0.1(π/3)n).

Exercise 3. Applying the definition of the DTFT to the finite-time signal x[n], we obtain:

X(Ω) = 1 − e^{−jΩ} + e^{−2jΩ} = e^{−jΩ} (e^{jΩ} − 1 + e^{−jΩ}) = e^{−jΩ} (2cos(Ω) − 1)
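The factorization above can be verified numerically at a few arbitrary frequencies:

```python
import cmath
import math

for Om in (0.0, 0.7, math.pi / 2, 2.0, math.pi):
    lhs = 1 - cmath.exp(-1j * Om) + cmath.exp(-2j * Om)          # DTFT from the definition
    rhs = cmath.exp(-1j * Om) * (2 * math.cos(Om) - 1)           # factored form
    assert abs(lhs - rhs) < 1e-12
```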
Exercise 4.
4.a. Based on the definition of the DTFT, we obtain successively:
X(Ω) = Σ_{n=−∞}^{+∞} x[n] e^{−jΩn} = Σ_{n=0}^{+∞} a^n e^{−jΩn} = Σ_{n=0}^{+∞} (a e^{−jΩ})^n = 1 / (1 − a e^{−jΩ})

where the geometric series converges since |a e^{−jΩ}| = |a| < 1.
4.b. Splitting the summation into the negative and non-negative indices:

X(Ω) = Σ_{n=0}^{+∞} a^n e^{−jΩn} + Σ_{n=−∞}^{−1} a^{−n} e^{−jΩn}
     = Σ_{n=0}^{+∞} (a e^{−jΩ})^n + Σ_{k=1}^{+∞} (a e^{jΩ})^k
     = 1/(1 − a e^{−jΩ}) + a e^{jΩ}/(1 − a e^{jΩ})
     = (1 − a^2) / (1 + a^2 − 2a cos(Ω))
4.c. x[n] = z[n − b] with z[n] = δ[n]. Consequently, X(Ω) = e^{−jΩb} Z(Ω) (shift-in-time
property). Since Z(Ω) = 1, we obtain X(Ω) = e^{−jΩb}.
4.d. First, observe that x[n] = z[n] sin(Ω0 n) with z[n] = a^n u[n]. In item [a.], we have proven
that Z(Ω) = 1/(1 − a e^{−jΩ}) and, using the property of multiplication by a sine, we find that
the DTFT of z[n] sin(Ω0 n) is 0.5j (Z(Ω + Ω0) − Z(Ω − Ω0)). Consequently,

X(Ω) = (j/2) [ 1/(1 − a e^{−j(Ω+Ω0)}) − 1/(1 − a e^{−j(Ω−Ω0)}) ]
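The closed-form DTFTs of items [a.] and [b.] can be checked by truncating the defining sums; since |a| < 1, the neglected tails are vanishingly small (a = 0.6 and Ω = 1.3 are arbitrary test values):

```python
import cmath

a, Om = 0.6, 1.3
E = cmath.exp(-1j * Om)

# item [a.]: x[n] = a^n u[n]
dtft_a = sum(a**n * E**n for n in range(0, 200))
assert abs(dtft_a - 1 / (1 - a * E)) < 1e-10

# item [b.]: x[n] = a^{|n|}
dtft_b = sum(a ** abs(n) * cmath.exp(-1j * Om * n) for n in range(-200, 201))
closed_b = (1 - a**2) / (1 + a**2 - 2 * a * cmath.cos(Om))
assert abs(dtft_b - closed_b) < 1e-10
```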
Exercise 5.

5.a. Using Table 3.2 of the book, we see that the continuous-time Fourier transform of
x(t) is given by X(ω) = π(δ(ω − 10) + δ(ω + 10)). X(ω) is represented in the top plot of
Figure 4.
5.b. Since Ts = π/15, x[n] = x(t = nTs) = cos(10πn/15) = cos(2πn/3).
5.c. Shannon's theorem is here respected since the highest nonzero frequency in X(ω) is
10 rad/s and we have chosen ωs = 30 > 20 rad/s.
5.d. The DTFT of x[n] is given by (see Table 4.1):

X(Ω) = π Σ_{k=−∞}^{+∞} [ δ(Ω + 2π/3 − 2πk) + δ(Ω − 2π/3 − 2πk) ]

In its main interval [−π, π], X(Ω) thus presents two delta-impulses at Ω = ±2π/3, corresponding to ω = Ω/Ts = (2π/3)(15/π) = 10 rad/s.
5.e. The DTFT X(Ω) in its main interval presents two delta-impulses at frequencies corresponding to ±10 rad/s. The continuous-time Fourier transform of x(t) presents two delta-impulses at ±10 rad/s. We see thus that X(Ω) is a non-distorted image of X(ω).
5.f. We will first answer item [f.] in a graphical way. For this purpose, Xs(ω) can be deduced
as shown in Figure 4. We will here just deduce the main interval [−15, 15] of Xs(ω) since
we know that Xs(ω) is periodic. We start with X(ω) (top plot). Then, to X(ω), we add
X(ω − ωs) (ωs = 30). This delivers the middle plot. We continue by adding X(ω + ωs). This
delivers the bottom plot. We see that the addition of both X(ω − ωs) and X(ω + ωs) does not
change anything in the main interval. This will also be the case for the terms X(ω − kωs) for
|k| > 1 since X(ω − kωs) presents delta-impulses at the frequencies ω = kωs ± 10 = 30k ± 10.
These frequencies are not located in the interval [−15, 15] for |k| > 0. Consequently, we
conclude that the main interval [−15, 15] of Xs(ω) presents two delta-impulses at ω = ±10.
Now that we have determined Xs(ω) in its main interval, it is straightforward to determine
X(Ω) in its main interval by recalling that Ω = ωTs: the main interval of X(Ω) presents
delta-impulses at Ω = ±10Ts = ±10π/15 = ±2π/3, which is what we had found in item [d.]. Note
that what is represented in the interval [−15, 15] in Figure 4 is in fact Ts Xs(ω) (see (1)).
We can also answer item [f.] in a mathematical way. Starting from
X(ω) = π(δ(ω − 10) + δ(ω + 10)), we use (1) to determine Xs(ω):
[Figure 4: graphical construction of Ts Xs(ω): X(ω) (top); X(ω − ωs) + X(ω) (middle); X(ω − ωs) + X(ω) + X(ω + ωs) (bottom)]
Xs(ω) = (π/Ts) Σ_{k=−∞}^{+∞} [ δ(ω − 10 − kωs) + δ(ω + 10 − kωs) ]
      = (π/Ts) Σ_{k=−∞}^{+∞} [ δ(ω − 10 − 30k) + δ(ω + 10 − 30k) ]    (since ωs = 30 rad/s)

Replacing ω by Ω/Ts with Ts = π/15, we obtain:

X(Ω) = Xs(ω = Ω/Ts) = (π/Ts) Σ_{k=−∞}^{+∞} [ δ(15Ω/π − 10 − 30k) + δ(15Ω/π + 10 − 30k) ]

Since δ((15/π)(Ω − 2π/3 − 2πk)) = (π/15) δ(Ω − 2π/3 − 2πk), we obtain that:

X(Ω) = π Σ_{k=−∞}^{+∞} [ δ(Ω − 2π/3 − 2πk) + δ(Ω + 2π/3 − 2πk) ]
where the factor Ts in the expression of the filter is just there to compensate the factor
1/Ts in the definition of Xs(ω) (see (1)).
6.b. Since Ts = 4π/15, x[n] = x(t = nTs) = cos(40πn/15) = cos(8πn/3) = cos(2πn/3 + 2πn) = cos(2πn/3).

6.d. The DTFT of x[n] is thus the same as in Exercise 5:

X(Ω) = π Σ_{k=−∞}^{+∞} [ δ(Ω + 2π/3 − 2πk) + δ(Ω − 2π/3 − 2πk) ]

In its main interval, X(Ω) presents two delta-impulses at Ω = ±2π/3, which now correspond to
ω = Ω/Ts = (2π/3)(15/(4π)) = 5/2 rad/s.
6.e. The DTFT X(Ω) in its main interval presents two delta-impulses at frequencies corresponding to ±5/2 rad/s. The continuous-time Fourier transform of x(t) presents two delta-impulses at ±10 rad/s. We see thus that, in its main interval, X(Ω) is NOT a faithful image
of X(ω). This is logical since Shannon's theorem is here not respected.
6.f. The easiest way here is to deduce graphically the main interval of Xs(ω) and subsequently the main interval of X(Ω). This is done in Figure 5. Note that the main interval of
Xs(ω) is here [−15/4, 15/4]. We here also start with X(ω) (top plot). We see that X(ω) has
no contribution in the main interval. We now add X(ω − ωs) = X(ω − 7.5) to X(ω). This
delivers the middle plot. We continue by adding X(ω + ωs). This delivers the bottom plot.
We see that both X(ω − ωs) and X(ω + ωs) have a contribution in the main interval. These two
terms are in fact the only terms in the summation (2) which have a contribution in the main
interval. Indeed, X(ω − kωs) has delta-impulses at the frequencies ω = kωs ± 10 = 15k/2 ± 10,
which are not located in the main interval for |k| > 1. Consequently, we conclude that
the main interval [−15/4, 15/4] of Xs(ω) presents two delta-impulses at ω = ±5/2. Now that we have determined Xs(ω) in its main interval, it is straightforward to determine X(Ω) in its main
interval by recalling that Ω = ωTs: the main interval of X(Ω) presents delta-impulses at
Ω = ±(5/2)Ts = ±20π/30 = ±2π/3, which is what we had found in item [d.]. Note that what is
represented in the interval [−15/4, 15/4] in Figure 5 is in fact Ts Xs(ω) (see (1)).
[Figure 5: graphical construction of Ts Xs(ω) for Exercise 6: X(ω) (top); X(ω − ωs) + X(ω) (middle); X(ω − ωs) + X(ω) + X(ω + ωs) (bottom)]
6.g. The reconstruction amounts to an ideal low-pass filtering which retains Ts Xs(ω) for
|ω| ≤ 15/4 and sets it to 0 elsewhere. This yields Xrec(ω) = π(δ(ω − 5/2) + δ(ω + 5/2)),
which is equal to the continuous-time Fourier transform of cos(5t/2). Thus xrec(t) = cos(5t/2),
which is completely different from x(t) = cos(10t). This is logical since Shannon's theorem
was here not respected and information is lost through sampling (here the lost information
is the actual frequency of the signal x(t)).
6.h. To sum up what we have seen in Exercises 5 and 6, we can say that the signal
x[n] = x(t = nTs) is a good image of x(t) in Exercise 5 since Shannon's theorem is respected, while the aliasing in Exercise 6 turns x[n] into a cosine corresponding to a much
lower frequency⁴ (i.e. ω = 2.5 rad/s) than the frequency of x(t) (i.e. ω = 10 rad/s).
This is confirmed by what we observe in Figures 1 and 2. In Figure 2, we indeed see that
x[n] = x(t = nTs) corresponding to Exercise 6 is a periodic signal with a slower oscillation
(i.e. a smaller frequency) than x(t), while, in Figure 1, x[n] and x(t) have the same frequency.
Exercise 7.
7.a. We observe that the sampling frequency ωs = 80 rad/s fulfills Shannon's theorem for
the signal of interest x1(t). Indeed, ωB < ωs/2. Consequently, the sampled signal x1[n] will
have a DTFT X1(Ω) which, in its main interval [−π, π], will be a non-distorted image of the
continuous-time Fourier transform X1(ω) of x1(t).
7.b. The continuous-time Fourier transform X(ω) of x(t) is:

X(ω) = X1(ω) + Aπ (δ(ω + 100) + δ(ω − 100))

with X1(ω) = 0 for all |ω| > 30. X(ω) is represented below for a triangular X1(ω) (left plot: X1(ω); right plot: X(ω)).

[Figure: X1(ω) (triangular, zero for |ω| > 30) and X(ω) with additional delta-impulses at ω = ±100.]
The Fourier transform Z(ω) of z(t) is given by F(ω)X(ω) and is thus equal to X1(ω). This yields z(t) = x1(t). The sampled signal z[n] is thus equal to x1[n] and Z(Ω) = X1(Ω) for all Ω.
7.c. If we directly sample x(t), we obtain x[n] = x1[n] + x2[n]. As observed in item [a.], the signal x1[n] has a DTFT X1(Ω) which, in the main interval, is a non-distorted image of X1(ω). Consequently, in this interval [−π, π], X1(Ω) is non-zero only between −ΩB and ΩB with ΩB = Ts ωB = 3π/4.
Note now that ωs is not high enough for Shannon's theorem to hold for x2(t). Aliasing will occur. The signal x2[n] is given by:

x2[n] = A cos(100 (π/40) n) = A cos(0.5πn + 2πn) = A cos(0.5πn)

⁴ By frequency, we mean here the actual frequency ω in rad/s, NOT the normalized frequency Ω.
[Figure: X(Ω) in the main interval [−π, π]: the band of X1(Ω) for |Ω| ≤ 3π/4, plus delta-impulses at Ω = ±π/2 due to the aliased perturbation.]
Because of the aliasing, the perturbation x2(t), lying outside the bandwidth of x1(t) before sampling, is thus transformed into a perturbation at a frequency within the bandwidth of x1[n]. Since the DTFT X2(Ω) of the sampled perturbation now lies within the bandwidth of the signal of interest, it is no longer possible to remove it without harming the signal of interest.
Exercise 8.
8.a. Let us begin with the case x(t) = cos(ω0 t) = cos(20t). Since ω0 is smaller than 50, the signal z(t) obtained after filtering with F(ω) is equal to x(t). The signal z[n] is therefore given by z[n] = cos(ω0 nTs) = cos(0.4πn). Consequently, we see that the DTFT of z[n] is given by:

Z(Ω) = Σ_{k=−∞}^{+∞} π (δ(Ω − 0.4π − 2πk) + δ(Ω + 0.4π − 2πk))

which, in its main interval, only presents two delta-impulses: one at Ω = 0.4π and one at Ω = −0.4π. These delta-impulses will launch the alarm. The same can be said for x(t) = cos(2ω0 t). Indeed, 2ω0 is also smaller than 50 and the signal z(t) obtained after filtering with F(ω) is thus also equal to x(t). The signal z[n] is here given by z[n] = cos(2ω0 nTs) = cos(0.8πn). Consequently, we see that the DTFT of z[n] will present, in its main interval, delta-impulses at Ω = ±0.8π which will launch the alarm.
8.b. The signal x(t) will be filtered away by F(ω). Thus, z(t) = 0 for all t and z[n] = 0 for all n. No alarm is launched since Z(Ω) = 0 and does not present any delta-impulses.
8.c. If there is no filter, z(t) = x(t). Therefore, we have:

z[n] = cos((12π/5) n) = cos((2π/5) n + 2πn) = cos(0.4πn)
As shown in item [a.], the DTFT of such a signal z[n] will present delta-impulses at Ω = ±0.4π which will launch the alarm while the vibration is not dangerous. The anti-aliasing filter is thus absolutely necessary.
Exercise 9.
9.a. The Fourier transform X(ω) of x(t) is given by (see Table 3.2):

X(ω) = 1 / (1 + jω)

Note that x(t) is not band-limited since there exists no ωB such that X(ω) = 0 for |ω| > ωB.
9.b. Given Ts, the sampled signal has the following expression: x[n] = e^{−nTs} u[n] = b^n u[n] with b = e^{−Ts}. The DTFT X(Ω) of x[n] is given by:

X(Ω) = 1 / (1 − b e^{−jΩ}) = 1 / (1 − b e^{−jωTs})

(the second expression uses the relation Ω = ωTs).
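The closed-form DTFT above can be cross-checked against a truncated version of the defining sum. A minimal sketch (assuming numpy; the values Ts = 0.5 and Ω = 1.3 are arbitrary illustrative choices):

```python
import numpy as np

# Check X(Omega) = 1/(1 - b*exp(-j*Omega)) for x[n] = b^n u[n], b = exp(-Ts),
# against a truncated version of the defining sum; b^n decays fast, so a few
# hundred terms suffice. Ts and Omega are arbitrary illustrative choices.
Ts = 0.5
b = np.exp(-Ts)
n = np.arange(500)
Omega = 1.3

direct = np.sum(b**n * np.exp(-1j * Omega * n))  # truncated DTFT sum
closed = 1 / (1 - b * np.exp(-1j * Omega))       # closed-form DTFT

print(abs(direct - closed) < 1e-10)  # True
```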
9.e. Due to the fact that Y(ω) = 0 for |ω| > 10, sampling y(t) at a sampling frequency of 20 rad/s will not cause aliasing (i.e. Shannon's theorem is respected). Consequently, in its main interval [−10, 10], Ys(ω) will be a faithful image of Y(ω), i.e. Ts Ys(ω) = Y(ω) and thus |Ts Ys(ω)| = |Y(ω)|. Since Ys(ω) is periodic with period ωs = 20 rad/s, we can thus represent |Ts Ys(ω)| as in Figure 7.
9.f. As mentioned in the summary of theory, the reconstruction formula (6) is equivalent to filtering the signal corresponding to Ys(ω) by an ideal low-pass filter whose frequency response H(ω) is given by:

H(ω) = Ts for −10 ≤ ω ≤ 10 (0 elsewhere)
Consequently, we obtain that the Fourier transform Yrec(ω) of yrec(t) is precisely equal to Y(ω) and thus that yrec(t) = y(t).
The signal y(t) is only a filtered version of x(t). Thus, we do not have that yrec(t) = x(t). However, the application of an anti-aliasing filter is beneficial. Indeed, we have that Yrec(ω) = X(ω) for all ω ∈ [−10, 10], which is not the case when the anti-aliasing filter is absent. Indeed, if there is no anti-aliasing filter, the reconstructed signal is given by (7) and, following a similar reasoning as above, we see that the Fourier transform Xrec(ω) of xrec(t) will be equal to H(ω)Xs(ω) with the Xs(ω) represented in Figure 3. In Figure 8, we compare the moduli of Xrec(ω) and Yrec(ω). Since Yrec(ω) = Y(ω) is equal to X(ω) for the frequencies in [−10, 10], we see in this last figure that Yrec(ω) is a better image of X(ω) than Xrec(ω) and, consequently, the signal yrec(t) reconstructed with y[n] is a better image of x(t) than the signal xrec(t) reconstructed with x[n].
[Figure 6: |Y(ω)| for ω ∈ [−30, 30].]
Exercise 10.
10.a. The length of x(t) is 600 seconds. Consequently, the signal x[n] = x(t = nTs) will contain 600/Ts = 300 ωs/π samples. We see thus that the number of samples increases with ωs.
10.b. We will engrave the samples of the signal z[n] generated with the following device:
x(t) → Filter F(ω) → z(t) → Sampling → z[n]
difference. However, removing these frequencies is completely necessary in order to guarantee that the sampling of z(t) occurs without aliasing, i.e. to avoid that high-frequency components are mirrored onto the low-frequency components and modify these components, as has been the case in Figure 3.
Note that, in the actual CD technology, ωs is not chosen equal to 240000 rad/s, but to 260000 rad/s.
Conversion table.
THIRD edition          SECOND edition
p. 14                  p. 19
p. 15                  p. 20
p. 16                  p. 21
p. 244                 p. 233
p. 465                 p. 486
Table 3.2, p. 144      Table 4.2, p. 192
Table 4.1, p. 177      Table 7.1, p. 308
Table 4.2, p. 178      Table 7.2, p. 309
(4.2)                  (7.2)
(4.5)                  (7.5)
Figure 5.17, p. 244    Figure 5.24, p. 234
WN(Ω) = (1 − e^{−jNΩ}) / (1 − e^{−jΩ})

b. Show that, in its main interval, the DTFT WN(Ω) is equal to 0 for all Ω = 2πk/N with k an integer ≠ 0.

The central lobe of |WN(Ω)| has a width of 4π/N. It also presents secondary lobes. Observe in Table 1 that:
- All these lobes (and in particular the central one) become narrower for increasing values of N.¹
- The amplitudes of the secondary lobes decrease for increasing values of |Ω|. When we compare this decrease for different values of N, the decrease is faster for larger N.
Exercise 1. (inspired by J.W. Wingerden et al., Wind Energy, Wiley, 2008).
Description. The blades of wind turbines (see the left part of Table 2) are subject to strong vibrations which induce sinusoidal strains in the blade. To enhance the life expectancy of the turbine, it is very important to compensate these vibrations using a feedback controller. In order to be able to design such a controller, it is absolutely necessary to know at which frequencies these vibrations occur. To obtain this information, the blade is tested in a wind tunnel under realistic circumstances and the strain x(t) in the blade is measured. Suppose that this strain x(t) is given by:

x(t) = A1 sin(ω1 t) + A2 sin(ω2 t) + A3 sin(ω3 t)

¹ This behaviour observed in Table 1 follows from the fact that the width of the lobes is inversely proportional to N. See the first item.
Table 1: Top Left: wN[n] for N = 20; Top Right: Modulus of WN(Ω) for N = 20; Bottom Left: Modulus of WN(Ω) for N = 50; Bottom Right: Modulus of WN(Ω) for N = 200.
with A1 = A2 = 1, A3 = 1/2, ω1 = 4π, ω2 = 6π and ω3 = 30π rad/s. This expression is of course unknown in practice. The only thing we know a priori is that all important vibrations have a frequency smaller than 40π rad/s. In the sequel, we will show how we can deduce the values of the vibration frequencies from the measurement of x(t).
Measurement setup. The strain x(t) is measured as illustrated in the right part of Table 2. In this picture, the anti-aliasing filter F(ω) is given by:

F(ω) = 1 for −40π ≤ ω ≤ 40π (0 elsewhere)

and the sampling occurs with a sampling frequency equal to ωs = 80π rad/s.
a. Are the sampling frequency and the anti-aliasing filter well chosen?
[Table 2: Left: a wind turbine blade; Right: the measurement chain x(t) → Filter F(ω) → z(t) → Sampling → z[n], together with a sketch of |Z(Ω)| in [−π, π].]
Ω1 = π/10 ≈ 0.31,  Ω2 = 3π/20 ≈ 0.47,  Ω3 = 3π/4 ≈ 2.35,

and show that we can deduce the vibration frequencies present in x(t) by inspecting this DTFT.
z400[n] = z[n] for 0 ≤ n ≤ 399 (0 elsewhere), and z40[n] = z[n] for 0 ≤ n ≤ 39 (0 elsewhere).
N = 400 samples of the noisy z[n] have been collected. The measured signal is represented
in the left part of Table 5. We see that the signal is completely corrupted by the noise. It is now completely impossible to determine the vibration frequencies from the time-domain representation. However, as can be seen in the right part of Table 5, the DTFT of this noisy measurement presents a shape very similar to the one represented in Table 3 and, consequently, the vibration frequencies can still be easily determined.
g. Explain this phenomenon.
Table 5: Left: noisy measurement; Right: Modulus of the DTFT of the noisy measurement
Solutions
Preliminary.
a. The DTFT WN(Ω) of wN[n] is defined by:

WN(Ω) = Σ_{n=−∞}^{+∞} wN[n] e^{−jΩn} = Σ_{n=0}^{N−1} e^{−jΩn}

Using now the property (4.5) of the book leads to the result.
b. Let us evaluate WN(Ω) at Ω = 2πk/N for an arbitrary integer k ≠ 0:

WN(Ω = 2πk/N) = (1 − e^{−j2πk}) / (1 − e^{−j2πk/N}) = 0

since e^{−j2πk} = 1.
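The zeros of WN(Ω) at the multiples of 2π/N can be verified numerically. A minimal sketch (assuming numpy), evaluating the defining sum directly so that no 0/0 issue arises at Ω = 0:

```python
import numpy as np

# W_N(Omega) = sum_{n=0}^{N-1} exp(-j*Omega*n) vanishes at Omega = 2*pi*k/N
# for every integer k that is not a multiple of N.
N = 20
n = np.arange(N)

def W(Omega):
    # direct evaluation of the defining sum
    return np.sum(np.exp(-1j * Omega * n))

for k in range(1, N):
    assert abs(W(2 * np.pi * k / N)) < 1e-9  # zeros of the window DTFT

print(abs(W(0.0)))  # 20.0 = N, the maximum of |W_N(Omega)|
```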
Exercise 1.
1.a. We know that the important vibrations have a frequency smaller than 40π rad/s. Consequently, we have to choose ωs at least two times larger than 40π. This is the case here. The anti-aliasing filter removes from x(t) all frequencies that could be mirrored by aliasing into the frequency range of interest, i.e. [0, 40π].
1.b. We see that x(t) contains three frequencies smaller than 40π. Consequently, the signal z(t) is equal to x(t). Since Ts = 1/40 s, the signal z[n] is given by:
z[n] = sin((π/10) n) + sin((3π/20) n) + (1/2) sin((3π/4) n)

where the three normalized frequencies are Ω1 = π/10, Ω2 = 3π/20 and Ω3 = 3π/4.
The DTFT Z(Ω) of z[n] is therefore:

Z(Ω) = Σ_{k=−∞}^{+∞} jπ (δ(Ω + Ω1 − 2πk) − δ(Ω − Ω1 − 2πk)) + jπ (δ(Ω + Ω2 − 2πk) − δ(Ω − Ω2 − 2πk)) + (jπ/2) (δ(Ω + Ω3 − 2πk) − δ(Ω − Ω3 − 2πk))

Consequently, we can see that, in its main interval [−π, π], the modulus of Z(Ω) presents delta-impulses at Ω's equal to ±Ω1, ±Ω2 and ±Ω3. This is precisely what we observe in Figure 1.
By inspecting |Z(Ω)|, we can deduce that z[n] and z(t) contain three frequencies. We can therefore also conclude that x(t) contains three frequencies in the frequency range of interest [0, 40π], i.e. the frequency range where the important vibrations occur. At which frequencies do these vibrations occur? They occur at the frequencies where we observe a peak, i.e. Ω1, Ω2 and Ω3. These are normalized frequencies. Based on these normalized frequencies we can now deduce the actual frequencies of the signal x(t) by using the relation Ω = ωTs:
ω1 = Ω1/Ts = 40π/10 = 4π
ω2 = Ω2/Ts = 120π/20 = 6π
ω3 = Ω3/Ts = 120π/4 = 30π
1.c. Using the expression of z[n] given in item [b] and the property of multiplication by a sine in Table 4.2 of the book, we see that the DTFT Z400(Ω) of z[n] wN[n] is equal to:

Z400(Ω) = ... + (j/2) (WN=400(Ω + Ω1) − WN=400(Ω − Ω1)) + (j/2) (WN=400(Ω + Ω2) − WN=400(Ω − Ω2)) + (j/4) (WN=400(Ω + Ω3) − WN=400(Ω − Ω3)) + ...
In the interval [0, π], the behaviour is mainly determined by the summation of three shifted versions of WN=400(Ω):

−(j/2) WN=400(Ω − Ω1) − (j/2) WN=400(Ω − Ω2) − (j/4) WN=400(Ω − Ω3)
Since N is here very large, we can conclude, based on what has been observed in the preliminary exercise, that the secondary lobes of WN=400(Ω) are small and that their influence in the above summation will therefore be negligible. In other words, the secondary lobes of e.g. WN=400(Ω − Ω3) do not change the shape of the functions WN=400(Ω − Ω1) and WN=400(Ω − Ω2). Since Ω1 and Ω2 are quite close, the influence of the secondary lobes of WN=400(Ω − Ω1) on the function WN=400(Ω − Ω2) (and conversely) is relatively larger, but not really significant. Consequently, in the interval [0, π], |Z400(Ω)| is approximately three repetitions of the shape of |WN=400(Ω)|, but centered at three different frequencies, Ω = Ω1, Ω = Ω2 and Ω = Ω3, and scaled by a factor 1/2 or a factor 1/4 (the j disappears when taking the modulus).
This scaling factor explains the amplitude of the peaks at Ω = Ωi (i = 1, 2, 3) that we observe in Table 3. The first peak is equal to N/2 = 200 since the maximal amplitude of |WN=400(Ω − Ω1)| is N = 400. The same can be said for the second peak. The third peak has an amplitude of N/4 = 100. A (rough) expression of the peak amplitude (when N is large enough; see later) is N Ai / 2 where Ai is the amplitude of the sine at the frequency Ωi.
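The rough peak-amplitude rule N Ai / 2 can be checked numerically. A minimal sketch (assuming numpy); with Ω0 = π/10 and N = 400, Ω0 is a multiple of 2π/N, so the cross term sums exactly to zero and the peak is exactly N/2:

```python
import numpy as np

# The DTFT of a windowed sine A*sin(Omega0*n), n = 0..N-1, has a peak
# of about N*A/2 at Omega = Omega0; here the check is exact.
N, A, Omega0 = 400, 1.0, np.pi / 10
n = np.arange(N)
z = A * np.sin(Omega0 * n)

peak = abs(np.sum(z * np.exp(-1j * Omega0 * n)))  # |Z_N(Omega0)|
print(round(peak, 6))  # 200.0, i.e. N*A/2
```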
1.d. In item [c.], we have used the expression of x(t) to explain the shape of |Z400(Ω)|. Now, in practice, we have to go the other way around: we look at |Z400(Ω)| to deduce the properties of x(t), for example the frequencies present in x(t). For this purpose, we proceed as follows. By inspecting |Z400(Ω)|, we can e.g. read the frequencies where the three peaks appear:

Ω = 0.31,  Ω = 0.47,  Ω = 2.35
Based on these three normalized frequencies, we can now deduce the actual frequencies of the signal x(t) by using the relation Ω = ωTs:

ω = 0.31/Ts = 12.4 ≈ ω1
ω = 0.47/Ts = 18.8 ≈ ω2
ω = 2.35/Ts = 94 ≈ ω3
1.e. By reading the amplitudes of the three peaks, we can also deduce the amplitudes of the vibrations at the three frequencies. For this purpose, we can use the relation deduced in item [c.]. The amplitude of the first two peaks is 200, which corresponds to an amplitude of 1 for the corresponding sine function in x(t). The amplitude of the third peak is 100, which
Z40(Ω) = ... + (j/2) (WN=40(Ω + Ω1) − WN=40(Ω − Ω1)) + (j/2) (WN=40(Ω + Ω2) − WN=40(Ω − Ω2)) + (j/4) (WN=40(Ω + Ω3) − WN=40(Ω − Ω3)) + ...
Since N is here much smaller than in item [c.], we can conclude, based on what has been observed in the preliminary exercise, that the secondary lobes of WN=40(Ω) are much larger than the ones for the case N = 400. These secondary lobes therefore play a much more important role in the summation above. Due to these important secondary lobes, it is much more difficult to determine from |Z40(Ω)| the number of sine functions in x(t). Indeed, from |Z40(Ω)|, we could conclude that x(t) contains four sine functions while it actually only contains three of them.
1.g. That the modulus of the DTFT Z400,noisy(Ω) of the noisy measurement presents a shape very similar to |Z400(Ω)| can be explained by the fact that the DTFT of the noise part is of amplitude √N = 20, which is much smaller than the amplitudes of the peaks. Consequently, the disturbance induced by the noise on the DTFT is not significant.
Remark. In order to reduce the influence of the noise even more, spectral analysis is generally performed by representing the so-called periodogram instead of the modulus of the DTFT. The periodogram is defined as:

(1/N) |ZN,noisy(Ω)|²

which in this case delivers (1/400) |Z400,noisy(Ω)|². The periodogram is represented in Figure 2. In the case of the periodogram, the contribution of the noise is of amplitude 1 and the amplitude of the peaks is N Ai² / 4.
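The periodogram peak value N Ai²/4 can be checked numerically. A minimal noise-free sketch (assuming numpy; with noise the check would only hold approximately):

```python
import numpy as np

# Periodogram (1/N)*|Z_N(Omega)|^2 of a pure sine of amplitude A:
# the peak at Omega = Omega0 is about N*A**2/4. Here, noise-free and
# with Omega0 = pi/10 a multiple of 2*pi/N, it is exact: 400/4 = 100.
N, A, Omega0 = 400, 1.0, np.pi / 10
n = np.arange(N)
z = A * np.sin(Omega0 * n)

Z_peak = np.sum(z * np.exp(-1j * Omega0 * n))
periodogram_peak = abs(Z_peak) ** 2 / N
print(round(periodogram_peak, 6))  # 100.0, i.e. N*A**2/4
```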
Conversion table.
THIRD edition       SECOND edition
Table 4.1, p. 177   Table 7.1, p. 308
Table 4.2, p. 178   Table 7.2, p. 309
(4.5)               (7.5)
[Figure 2: (1/400) |Z400,noisy(Ω)|² in the interval [0, π].]
Computers nowadays play a major role in many engineering fields. This is also true for the
field of signal analysis and signal processing. From this point of view, discrete-time signals
are very important since they are the only ones that can be treated by a computer. As
seen in Exercise 1 of this session, in order to perform the spectral analysis of a physical (continuous-time) signal x(t), we must first sample it and then use the computer to achieve the analysis. Computers¹ can also be very useful to transform an initial signal x[n] (the
input) into another signal y[n] with desired properties (the output):
Such a transformation can e.g. be the removal of high-frequency components via a low-pass filter. For this purpose, computers are not compulsory: the filtering operation can also be done in continuous time. However, filtering is much easier to implement in a computer environment (i.e. via lines of code) than via an analog circuit. Moreover, it is much easier to design (almost) ideal filters in discrete time than in continuous time.
Another possible application is digital control. Due to their simple implementation, the majority of control laws are indeed nowadays programmed into computers. Such digital controllers compute the control input u[n] that has to be applied to a real-life system based on a sampled version y[n] = y(t = nTs) of the actual output y(t) of this system. Note that the discrete-time u[n] cannot be directly applied to the system. It has first to be transformed into a continuous-time signal via a digital-to-analog conversion (see the third part of this session for more details).
The transformation of a discrete-time signal x[n] into a new signal y[n] can be described by
the theory of discrete-time systems. A discrete-time system is a system which has a discrete-time input signal x[n] and a discrete-time output signal y[n]. While continuous-time systems
are described by a set of differential equations, discrete-time systems are described by a set
of difference equations relating the input signal x[n] and the output signal y[n]. An example
of difference equations is as follows:
y[n] = −0.5 y[n − 1] + x[n − 1]
(1)
The output of the discrete-time system for a given input x[n] can be computed by solving the
difference equations for each n. Even though this is effectively how the output is computed
by a computer, this method is certainly not the only one to determine y[n] from x[n] and, furthermore, the difference equation representation does not say much about the properties of the system. It is e.g. very difficult to see whether the system in question is a low-pass filter
or something else. The theory of the Discrete-Time Fourier Transform (DTFT) allows us
to gain insights in the behaviour of the discrete-time system without having to solve the
¹ We here use the notion of computer in a very broad sense. This notion includes any digital device such as an MP3 player, a CD player, a GSM, etc.
difference equation and provides alternative methods to compute the output y[n].
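Solving a difference equation sample by sample, exactly as a computer would, takes only a few lines. A minimal sketch (Python assumed), using the sign convention y[n] = −0.5 y[n−1] + x[n−1] that is consistent with the frequency response and pole computations later in the text, and zero initial conditions:

```python
# Iterate y[n] = -0.5*y[n-1] + x[n-1] sample by sample,
# with y[n] = 0 and x[n] = 0 assumed for n < 0.
def simulate(x):
    y = []
    for n in range(len(x)):
        y_prev = y[n - 1] if n >= 1 else 0.0
        x_prev = x[n - 1] if n >= 1 else 0.0
        y.append(-0.5 * y_prev + x_prev)
    return y

# Unit-pulse input: the output is the pulse response of the system.
x = [1.0] + [0.0] * 5
print(simulate(x))  # [0.0, 1.0, -0.5, 0.25, -0.125, 0.0625]
```

The geometric decay of the output already hints at the pole at z = −0.5 discussed below.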
An important tool for this purpose is the frequency response H(Ω) of the discrete-time system. To determine H(Ω), we apply the shift-in-time property of the DTFT (see Table 4.2, p. 178) to the difference equation(s). This delivers an expression of the DTFT Y(Ω) of the output y[n] as a linear function of the DTFT X(Ω) of the input x[n]. Then, H(Ω) is just:

H(Ω) = Y(Ω) / X(Ω)

For example, suppose that a discrete-time system is described by the difference equation (1). This equation can be rewritten using the shift-in-time property as Y(Ω) = −0.5 e^{−jΩ} Y(Ω) + e^{−jΩ} X(Ω). Y(Ω) is thus equal to the following function of X(Ω): Y(Ω) = (e^{−jΩ}/(1 + 0.5 e^{−jΩ})) X(Ω). The frequency response H(Ω) of the system is thus e^{−jΩ}/(1 + 0.5 e^{−jΩ}).
Like Y(Ω) and X(Ω), the frequency response H(Ω) of a discrete-time system is also periodic with period 2π. Consequently, all the information is located in its main interval [−π, π]. The frequency response H(Ω) is a very important function. There are mainly three reasons for this.
The frequency response H(Ω) allows us to determine the (steady-state) response y[n] of the system when the input x[n] is given by x[n] = A cos(Ω0 n + φ) (−∞ < n < +∞). The response is then:

y[n] = A |H(Ω0)| cos(Ω0 n + φ + ∠H(Ω0))

The amplitude of x[n] is multiplied by the modulus of H(Ω) evaluated at the frequency of x[n], i.e. Ω = Ω0. The phase of x[n] is shifted by the argument of H(Ω) at Ω = Ω0. See p. 250 for more details. The result also holds for sine functions.
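The steady-state formula can be checked against a direct simulation. A minimal sketch (Python assumed) for the example system y[n] = −0.5 y[n−1] + x[n−1]; the frequency Ω0 = 0.7 is an arbitrary choice, and the transient (decaying as 0.5^n) is negligible after a couple hundred samples:

```python
import cmath, math

# Steady-state response of y[n] = -0.5*y[n-1] + x[n-1] to x[n] = cos(W0*n):
# after the transient dies out, y[n] = |H(W0)|*cos(W0*n + angle(H(W0)))
# with H(W) = exp(-1j*W)/(1 + 0.5*exp(-1j*W)).
W0 = 0.7
H = cmath.exp(-1j * W0) / (1 + 0.5 * cmath.exp(-1j * W0))

y_prev, n_max = 0.0, 200
for n in range(n_max + 1):
    x_prev = math.cos(W0 * (n - 1)) if n >= 1 else 0.0
    y = -0.5 * y_prev + x_prev   # simulate the difference equation
    y_prev = y

predicted = abs(H) * math.cos(W0 * n_max + cmath.phase(H))
print(abs(y - predicted) < 1e-9)  # True
```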
More generally, because of the relation Y(Ω) = H(Ω)X(Ω), we get insights in the behaviour of the system by looking at the frequency response H(Ω). For example, in its main interval, a low-pass filter should have a frequency response H(Ω) equal to 1 for frequencies up to the cut-off frequency and a frequency response which is very small for all the other frequencies.
In the first item, we have seen that the frequency response gives a direct way to compute the output y[n] of the system when the input x[n] is a (co)sine function. For some other input signals, the frequency response H(Ω) provides a method to compute the output y[n] which can be easier than the use of the difference equations. This method is based on the relation Y(Ω) = H(Ω)X(Ω) and is as follows: determine the Fourier transform X(Ω) of x[n]; with X(Ω), determine Y(Ω) by multiplying H(Ω) and X(Ω): Y(Ω) = H(Ω)X(Ω); the output signal y[n] can then be determined by finding the inverse Fourier transform of Y(Ω) (see (4.27) and the remark below).
Remark. (Inverse) DTFTs can be computed via their respective definitions (4.2) and (4.27). Instead, they can be deduced via standard Fourier transform pairs (see Table 4.1 on page 177) in combination with properties of the Fourier transforms (see Table 4.2 on page 178).
Another important quantity is the inverse DTFT h[n] of H(Ω), i.e. h[n] = F⁻¹(H(Ω)). The signal h[n] is called the pulse response of the discrete-time system since h[n] is the output signal of the discrete-time system described by H(Ω) when the input signal x[n] is the unit pulse function δ[n] (see page 15). From the pulse response h[n], it can be determined whether the system is stable and/or causal. The system is indeed stable if and only if h[n] tends to 0 when n tends to +∞. The system is causal if and only if h[n] = 0 for all n < 0. These properties come from the fact that the output y[n] of a system to an input x[n] can also be expressed as the convolution of h[n] with x[n]:

y[n] = Σ_{i=−∞}^{+∞} h[i] x[n − i]
The behaviour of a discrete-time system can also be described by its transfer function H(z), the ratio of the Z-transforms of the output and the input:

H(z) = Y(z) / X(z) with Y(z) = Σ_{n=−∞}^{+∞} y[n] z^{−n} and X(z) = Σ_{n=−∞}^{+∞} x[n] z^{−n}

The transfer function corresponding to H(Ω) = e^{−jΩ}/(1 + 0.5 e^{−jΩ}) is:

H(z) = z^{−1} / (1 + 0.5 z^{−1}) = 1 / (z + 0.5)
It is possible to deduce the transfer function H(z) from a difference equation such as (1) without deducing first the frequency response H(Ω). For this purpose, we use the shift-in-time property of the Z-transform, which states that the Z-transform of y[n − c] (with c an integer) is equal to z^{−c} Y(z), with Y(z) the Z-transform of y[n]. Using this property, equation (1) leads to Y(z) = −0.5 z^{−1} Y(z) + z^{−1} X(z). Consequently, we obtain that H(z) = Y(z)/X(z) = z^{−1}/(1 + 0.5 z^{−1}).
Based on H(z), we can also verify whether the discrete-time system is stable: a system is stable if and only if all the poles of H(z) are located within the unit circle, i.e. if and only if the moduli of these poles are all strictly smaller than one. For the H(z) given above, we see that z + 0.5 = 0 for z = −0.5, whose modulus is smaller than one. Thus the system corresponding to H(z) is stable. The equivalent of H(z) for continuous-time systems is the transfer function H(s) in the Laplace domain.
The references to the book pertain to the third edition. At the end of the session, there is a conversion table for the second edition.
Problem 2 (examination January 2006). Following the diagram as sketched in Figure 1,
a continuous-time signal x(t) is sampled with a sampling period Ts = 0.04 s. This delivers a
discrete-time signal x[n]. Before further use in the computer, the signal x[n] is filtered with
a digital filter whose frequency response H() is represented in a Bode plot in Figure 2. The
output of this filter is denoted y[n].
[Figure 1: x(t) → Sampling (Ts) → x[n] → Filter → y[n].]
[Figure 2: Bode plot of the filter's frequency response H(Ω): modulus |H| (between about 0.4 and 1.0) and phase ∠H in degrees (between 0 and −600).]
Problem 3. Consider the two discrete-time systems described by the following difference equations:

y[n] = −a y[n − 1] + x[n − 1]   (2)
y[n] = b x[n − 1] + 2 x[n − 2]   (3)

with a ∈ R and b ∈ R.
a. Determine y[n] for 0 ≤ n ≤ 3 for these two systems when x[n] is given by the unit pulse δ[n] (see page 15), i.e.

x[n] = 1 for n = 0 (0 elsewhere)

Suppose y[n] = 0 for n < 0.
b. Determine the frequency response H(Ω) of these two discrete-time systems.
c. Determine the transfer function H(z) corresponding to these two discrete-time systems.
d. Give the condition(s) that a and b have to fulfill to ensure that both discrete-time
systems are stable.
e. Suppose a and b fulfill the condition(s) found in item [d.]. Determine the pulse response
of the two systems using what has been found in [b.]. Compare these pulse responses
with what has been found in item [a.].
Problem 4 (examination August 2006). Consider the discrete-time system whose pulse response h[n] is given by:

h[n] = 2 for 1 ≤ n ≤ 4 (0 elsewhere)
a. Is this system causal? Is this system stable? Motivate your answer.
b. Determine the frequency response H(Ω) of the considered system.
c. Compute, for all values of n, the output y[n] of this system when the input x[n] is equal to sin((π/2) n).
d. Same question as in item [c.] when

x[n] = 1 for n = 0 and n = 1 (0 elsewhere)
Problem 5 (examination November 2005). Assume that the signal h[n] can be written
as the sum of the signals a[n] and b[n]:
h[n] = a[n] + b[n].
The signals a[n] and b[n] are given by:
a[n] = −3 δ[n − 1] with δ[n] the unit pulse (see page 15)
b[n] = 3 c^{n−1} u[n − 1] with u[n] the discrete-time unit step function (see page 14)
... = (1 + α e^{−jΩ})(1 + β e^{−jΩ}) with α = ... and β = ...

To show this relation, use the same procedure as in the hint of Exercise 6 of Session 3.
Problem 6. Consider the discrete-time system described by the following transfer function:

H(z) = 3z / (z² − z + 0.5)
b. If we denote the input and output signals of this system by x[n] and y[n], what is then
the difference equation relating y[n] and x[n]?
Problem 7 (examination February 2008). A continuous-time signal x1(t) is given by

x1(t) = xB(t) cos(10πt)

where the signal xB(t) is band-limited with a bandwidth equal to 30π rad/s, i.e. the Fourier transform XB(ω) of xB(t) is such that XB(ω) = 0 for |ω| > 30π rad/s.

a. Show that the signal x1(t) is also band-limited and that X1(ω) = 0 for all |ω| > 40π rad/s.
b. Suppose that the signal x1(t) is sampled and that this sampling delivers the discrete-time signal x1[n] = x1(t = nTs) with Ts the sampling period. What is the maximum sampling period Ts that ensures that the DTFT X1(Ω) of the sampled signal x1[n] is, in its main interval [−π, π], a non-distorted image of the continuous-time Fourier transform X1(ω) of x1(t)?
Now suppose that the continuous-time signal x1(t) (which is the signal of interest) is subject to an additive disturbance x2(t) due to the electricity network and that we actually measure the signal x(t):

x(t) = x1(t) + cos(100πt), where x2(t) = cos(100πt).
In order to be able to filter away the disturbance, the signal x(t) is measured via the measurement device described below:

x(t) → Sampling → x[n] → Filter F(Ω) → z[n]
In this figure, we see that the signal x(t) is first sampled with a given sampling period Ts. This delivers the sampled signal x[n] = x(nTs). Subsequently, the sampled signal is filtered by an ideal discrete-time filter F(Ω). The frequency response of this filter in its main interval [−π, π] is given by

F(Ω) = 1 for −Ω0 ≤ Ω ≤ Ω0 (0 elsewhere)
for some Ω0. As already mentioned, the objective of the measurement device is to ensure that the DTFT Z(Ω) of the discrete-time signal z[n] is, in its main interval [−π, π], a non-distorted image of the continuous-time Fourier transform X1(ω) of x1(t).

c. Can we achieve this objective if Ts = 1/40 s? If yes, determine a possible value for Ω0 in the expression of F(Ω) to achieve this objective.
d. Same questions if Ts = 1/80 s.
e. Same questions if Ts = 1/120 s.
Solutions.
Problem 2.
2.a. The signal x[n] is given as follows:

x[n] = x(nTs) = 2 sin(ω0 nTs) = 2 sin(10πn(0.04)) = 2 sin((2π/5) n)

2.b. The output y[n] can be directly deduced from expression (5.65) in the book. For this purpose, we need to determine the modulus and the argument of H(Ω) evaluated at the frequency of x[n], i.e. Ω0 = 2π/5 = 0.4π rad. From the Bode plot of H(Ω) in Figure 2, we read that |H(0.4π)| ≈ 0.62 and ∠H(0.4π) ≈ −460 degrees. Thus ∠H(0.4π) ≈ −100 degrees, which is also equivalent to −1.74 rad. Consequently, we obtain:

y[n] = 2 (0.62) sin((2π/5) n − 1.74) = 1.24 sin((2π/5) n − 1.74)
Problem 3.
3.a. Using the first difference equation, we obtain:

y[n = 0] = −a y[−1] + x[−1] = 0 + 0 = 0
y[n = 1] = −a y[0] + x[0] = 0 + 1 = 1
y[n = 2] = −a y[1] + x[1] = −a + 0 = −a
y[n = 3] = −a y[2] + x[2] = a² + 0 = a²

Using the second difference equation, we obtain y[0] = 0, y[1] = b, y[2] = 2 and y[3] = 0.

3.b. Using the shift-in-time property of the DTFT on the two difference equations, we obtain for the first system:

H(Ω) = Y(Ω)/X(Ω) = e^{−jΩ}/(1 + a e^{−jΩ})

and for the second system:

H(Ω) = Y(Ω)/X(Ω) = b e^{−jΩ} + 2 e^{−j2Ω}
3.c. Using the relation between H(z) and H(Ω), we obtain that the transfer function of the first system is H(z) = z^{−1}/(1 + a z^{−1}). Noting that e^{−j2Ω} = (e^{−jΩ})², we obtain the transfer function of the second system: H(z) = b z^{−1} + 2 z^{−2}.
Equivalently, the transfer functions of the systems can directly be deduced from the difference equations. Indeed, using the shift-in-time property of the Z-transform on the first difference equation, we obtain:

Y(z) = −a z^{−1} Y(z) + z^{−1} X(z)  ⟹  H(z) = Y(z)/X(z) = z^{−1}/(1 + a z^{−1})

Similarly, for the second system, H(z) = Y(z)/X(z) = b z^{−1} + 2 z^{−2}.
3.d. The transfer function of the first system can be rewritten as:

H(z) = z^{−1}/(1 + a z^{−1}) = 1/(z + a)

The transfer function H(z) has thus one pole in z = −a. Consequently, the condition for stability is that the modulus of a is strictly smaller than 1: |a| < 1 (the modulus of a real number is equivalent to its absolute value). The transfer function H(z) of the second system can be rewritten as:

H(z) = (b z + 2)/z²

This transfer function has two poles in z = 0 and is thus stable for all values of b.
3.e. The pulse response is by definition the inverse DTFT of the frequency response H(Ω). The frequency response H(Ω) of the first system can be written as e^{−jΩ} Z(Ω) with Z(Ω) = 1/(1 + a e^{−jΩ}). Consequently, using the shift-in-time property, we have that h[n] = z[n − 1] with z[n] the inverse DTFT of Z(Ω). Using Table 4.1, we see that z[n] = (−a)^n u[n] since |a| < 1. Consequently, the pulse response of the first system is h[n] = (−a)^{n−1} u[n − 1].
Consequently, we have for 0 ≤ n ≤ 3 that:

h[0] = (−a)^{−1} u[−1] = 0,  h[1] = (−a)⁰ u[0] = 1,  h[2] = −a,  h[3] = a²
This is precisely what we found in item [a.]. This is perfectly logical since the pulse response h[n] is by definition the output response y[n] of a filter H(Ω) when x[n] = δ[n].
The pulse response of the second system can be deduced as follows. The frequency response H(Ω) being the DTFT of h[n], we can write that:

H(Ω) = Σ_{n=−∞}^{+∞} h[n] e^{−jΩn}

Now, H(Ω) = b e^{−jΩ} + 2 e^{−j2Ω}. Consequently, we see by inspection that h[n] is given by:

h[n] = b for n = 1, 2 for n = 2 (0 elsewhere)
which is exactly what we found in item [a.].
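Both pulse responses can also be obtained by simply iterating the difference equations. A minimal sketch (Python assumed) with illustrative values a = 0.5 and b = 2.0 (a and b are symbolic in the problem; these values are only examples):

```python
# Pulse responses of the two systems of Problem 3, obtained by iterating
# the difference equations with a unit-pulse input.
a, b = 0.5, 2.0           # illustrative values only
x = [1.0, 0.0, 0.0, 0.0]  # unit pulse delta[n]

# System (2): y[n] = -a*y[n-1] + x[n-1]
y1 = []
for n in range(4):
    y_prev = y1[n - 1] if n >= 1 else 0.0
    x_prev = x[n - 1] if n >= 1 else 0.0
    y1.append(-a * y_prev + x_prev)

# System (3): y[n] = b*x[n-1] + 2*x[n-2]
y2 = []
for n in range(4):
    xp1 = x[n - 1] if n >= 1 else 0.0
    xp2 = x[n - 2] if n >= 2 else 0.0
    y2.append(b * xp1 + 2 * xp2)

print(y1)  # [0.0, 1.0, -0.5, 0.25]  i.e. 0, 1, -a, a**2
print(y2)  # [0.0, 2.0, 2.0, 0.0]    i.e. 0, b, 2, 0
```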
Problem 4.
4.a. The discrete-time system is causal since h[n] = 0 for all n < 0, and it is stable since h[n] = 0 for all n larger than 4.
4.b. The frequency response of the considered system is by definition the DTFT of the pulse response h[n] of the system:

H(Ω) = 2 Σ_{n=1}^{4} e^{−jΩn} = 2 (e^{−jΩ} − e^{−j5Ω}) / (1 − e^{−jΩ})

4.c. The input x[n] = sin((π/2) n) is a sine function, so we evaluate H(Ω) at Ω = π/2:

H(Ω = π/2) = 2 (e^{−jπ/2} − e^{−j5π/2}) / (1 − e^{−jπ/2}) = 2 (−j + j)/(1 + j) = 0

Consequently, the output is y[n] = 0 for all n.
4.d. For this input, we use the convolution sum:

y[n] = Σ_{i=−∞}^{+∞} h[i] x[n − i] = Σ_{i=1}^{4} h[i] x[n − i] = 2 (x[n−1] + x[n−2] + x[n−3] + x[n−4])

We know that y[n] = 0 for n < 0 since the system is causal. Let us apply the formula for n ≥ 0:

y[n = 0] = 0
y[n = 1] = 2
y[n = 2] = 4
y[n = 3] = 4
y[n = 4] = 4
y[n = 5] = 2
y[n] = 0 for n ≥ 6
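This convolution can be checked with numpy. A minimal sketch, representing h[n] and x[n] as finite arrays starting at n = 0:

```python
import numpy as np

# Convolution for Problem 4: h[n] = 2 for 1 <= n <= 4 (0 elsewhere),
# x[n] = 1 for n = 0 and n = 1 (0 elsewhere).
h = np.array([0, 2, 2, 2, 2])  # h[0], ..., h[4]
x = np.array([1, 1])           # x[0], x[1]

y = np.convolve(h, x)          # y[n] = sum_i h[i] * x[n-i]
print(y.tolist())  # [0, 2, 4, 4, 4, 2]
```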
Problem 5.
5.a. Using the linearity property of the DTFT, we can write that H(Ω) = A(Ω) + B(Ω) with A(Ω) and B(Ω) the DTFTs of a[n] and b[n] respectively. Using Table 4.1, we have that:

A(Ω) = −3 e^{−jΩ}

If we define v[n] = c^n u[n], then we observe that b[n] = 3 v[n − 1]. Consequently, B(Ω) = 3 e^{−jΩ} V(Ω) with V(Ω) the DTFT of v[n]. Since v[n] = c^n u[n], V(Ω) is thus 1/(1 − c e^{−jΩ}). Consequently:

B(Ω) = 3 e^{−jΩ} / (1 − c e^{−jΩ})

We have thus:

H(Ω) = −3 e^{−jΩ} + 3 e^{−jΩ}/(1 − c e^{−jΩ}) = (−3 e^{−jΩ}(1 − c e^{−jΩ}) + 3 e^{−jΩ}) / (1 − c e^{−jΩ}) = 3 c e^{−2jΩ} / (1 − c e^{−jΩ})

5.b. The transfer function H(z) can be constructed by replacing each term e^{−jΩ} in H(Ω) by z^{−1}:

H(z) = 3 c z^{−2} / (1 − c z^{−1})
5.c. We can deduce the difference equation via either the frequency response H(Ω) or the transfer function H(z). Here, we do it using the frequency response. We know that Y(Ω) = H(Ω)X(Ω) with Y(Ω) and X(Ω) the DTFTs of y[n] and x[n]. Consequently, we have:

(1 − c e^{−jΩ}) Y(Ω) = 3 c e^{−2jΩ} X(Ω)  ⟹  Y(Ω) − c e^{−jΩ} Y(Ω) = 3 c e^{−2jΩ} X(Ω)

Applying the inverse DTFT to the last equation, we obtain the difference equation of the system:

y[n] − c y[n − 1] = 3 c x[n − 2].
5.d. We can answer this question using either the frequency response $H(\Omega)$ or the transfer function $H(z)$. Here, we do it using the frequency response. We know that $Y(\Omega) = H(\Omega)X(\Omega)$, with $Y(\Omega)$ and $X(\Omega)$ the DTFTs of $y[n]$ and $x[n]$, and $X(\Omega) = 1/(1 - d\,e^{-j\Omega})$. Consequently,

$$Y(\Omega) = \frac{-3c\,e^{-2j\Omega}}{1 - c\,e^{-j\Omega}}\,\frac{1}{1 - d\,e^{-j\Omega}} = -3c\,e^{-2j\Omega}\,\underbrace{\left(\frac{\frac{c}{c-d}}{1 - c\,e^{-j\Omega}} - \frac{\frac{d}{c-d}}{1 - d\,e^{-j\Omega}}\right)}_{V(\Omega)}$$

where for the last step partial fraction decomposition was used. If we denote by $v[n]$ the inverse DTFT of $V(\Omega)$ as defined above, the output signal $y[n]$ is equal to $-3c\,v[n-2]$ (see Table 4.2). Now, using Table 4.1, we have that:

$$v[n] = \frac{c}{c-d}\,c^n u[n] - \frac{d}{c-d}\,d^n u[n] = \frac{c^{\,n+1} - d^{\,n+1}}{c-d}\,u[n]$$

$$\Longrightarrow\; y[n] = -3c\,v[n-2] = \frac{-3c\left(c^{\,n-1} - d^{\,n-1}\right)}{c-d}\,u[n-2]$$
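The closed form above can be sketched numerically (the example values $c = 0.5$, $d = 0.8$ are mine; the input $x[n] = d^n u[n]$ is the one assumed in the derivation):

```python
import numpy as np

c, d = 0.5, 0.8                # example values with c != d (required by the decomposition)
N = 40
n = np.arange(N)
x = d**n                       # x[n] = d^n u[n]

# Run the difference equation y[n] = c*y[n-1] - 3c*x[n-2]
y = np.zeros(N)
for k in range(N):
    y[k] = (c*y[k-1] if k >= 1 else 0.0) - 3*c*(x[k-2] if k >= 2 else 0.0)

# Closed form: y[n] = -3c (c^(n-1) - d^(n-1))/(c-d) for n >= 2, else 0
y_cf = np.where(n >= 2, -3*c*(c**(n-1.0) - d**(n-1.0))/(c - d), 0.0)

assert np.allclose(y, y_cf)
```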
Exercise 6.
6.a. The transfer function $H(z)$ can be rewritten as:

$$H(z) = \frac{3z}{z^2 - z + 0.5} = \frac{3z}{\left(z - \frac{1+j}{2}\right)\left(z - \frac{1-j}{2}\right)} = \frac{3z^{-1}}{1 - z^{-1} + 0.5z^{-2}}$$

We know that $Y(z) = H(z)X(z)$, with $Y(z)$ and $X(z)$ the Z-transforms of $y[n]$ and $x[n]$. This leads to:

$$(1 - z^{-1} + 0.5z^{-2})\,Y(z) = 3z^{-1}\,X(z)$$
$$\Longleftrightarrow\quad Y(z) - z^{-1}Y(z) + 0.5z^{-2}Y(z) = 3z^{-1}X(z)$$

Using the shift-in-time property of the Z-transform, the inverse Z-transform of this equation is:

$$y[n] - y[n-1] + 0.5\,y[n-2] = 3\,x[n-1]$$
which is the difference equation of the system.
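A minimal check of the pole factorization used above (numpy's root finder on $z^2 - z + 0.5$); as a side observation, both poles have modulus $1/\sqrt{2} < 1$:

```python
import numpy as np

# Poles of H(z) = 3z / (z^2 - z + 0.5)
poles = np.roots([1, -1, 0.5])

# The poles are (1 +/- j)/2, consistent with the factorization in the text
assert np.allclose(sorted(poles, key=lambda p: p.imag), [0.5 - 0.5j, 0.5 + 0.5j])
# Their modulus is 1/sqrt(2) < 1
assert np.allclose(np.abs(poles), 1/np.sqrt(2))
```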
Exercise 7.
7.a. Using the property of multiplication by a cosine for the continuous-time Fourier transform, we have that $X_1(\omega) = \frac{1}{2}\left(X_B(\omega - 10) + X_B(\omega + 10)\right)$.

[Figure: $X(\Omega)$ and $X_1(\Omega)$.]
$$x_2[n] = \cos\!\left(\tfrac{5\pi}{4}\,n\right) = \cos\!\left(-\tfrac{3\pi}{4}\,n\right) = \cos\!\left(\tfrac{3\pi}{4}\,n\right)$$

Consequently, $X_2(\Omega)$ presents delta impulses at $\Omega = \pm\tfrac{3\pi}{4}$. There is aliasing, but the mirrored disturbance does not fall into the bandwidth of $X_1(\Omega)$. Indeed, $X_1(\Omega = \pm\tfrac{3\pi}{4}) = 0$.
Consequently, $X(\Omega)$ can be represented as shown in Figure 4, and we see that it will be possible to retrieve a non-distorted image of $X_1(\Omega)$ after filtering of $X(\Omega)$ by choosing $\Omega_0$ in the interval:

$$\frac{\pi}{2} \le \Omega_0 < \frac{3\pi}{4}$$
7.e. When $T_s = \frac{1}{120}$ s, the DTFT $X_1(\Omega)$ will be equal to zero for all $\Omega$ such that $\frac{\pi}{3} < |\Omega| \le \pi$. The disturbance becomes:

$$x_2[n] = \cos\!\left(\tfrac{5\pi}{6}\,n\right)$$

Consequently, $X_2(\Omega)$ presents delta impulses at $\Omega = \pm\tfrac{5\pi}{6}$. Note that there is no aliasing here. Since $X_1(\Omega = \pm\tfrac{5\pi}{6}) = 0$, we can follow the same reasoning as in item [d.] and it will be possible to retrieve a non-distorted image of $X_1(\Omega)$ after filtering by choosing $\Omega_0$ in the interval:

$$\frac{\pi}{3} \le \Omega_0 < \frac{5\pi}{6}$$
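The aliasing identity used in item [d.] holds exactly at the integer sample instants, as a one-line check shows:

```python
import numpy as np

n = np.arange(50)
# For integer n, cos(5*pi/4 * n) equals cos(3*pi/4 * n),
# since 5*pi/4 * n = 2*pi*n - 3*pi/4 * n and cos is even and 2*pi-periodic.
assert np.allclose(np.cos(5*np.pi/4 * n), np.cos(3*np.pi/4 * n))
```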
Conversion table.

    THIRD edition    SECOND edition
    p. 14            p. 19
    p. 15            p. 20
    p. 250           p. 332
    p. 538           p. 614
    Table 4.1        Table 7.1
    Table 4.2        Table 7.2
    Chapter 7        Chapter 11
    (4.2)            (7.2)
    (4.5)            (7.5)
    (4.27)           (7.27)
    (5.58)           (7.48)
    (5.60)           (7.50)
    (5.65)           (7.55)
Like the ideal conversion, the ZOH conversion can also be understood as a filtering operation on the fictive signal corresponding to $X_s(\omega)$. More precisely, the Fourier transform $X_{ZOH}(\omega)$ of the continuous-time signal $x_{ZOH}(t)$ is given by:

$$X_{ZOH}(\omega) = H_{ZOH}(\omega)\,X_s(\omega) \quad\text{with}\quad H_{ZOH}(\omega) = \frac{1 - e^{-j\omega T_s}}{j\omega}$$
We observe that $H_{ZOH}(\omega)$ of the ZOH mechanism is a function of the sampling period $T_s$. For the case $T_s = 0.5$ s considered in the example above, the modulus of $H_{ZOH}(\omega)$ is represented in the left part of Table 2, where it is compared to the filter corresponding to the ideal conversion. By inspecting this figure, we observe that $H_{ZOH}(\omega)$ is also a low-pass filter but, unlike $H_{ideal}(\omega)$, $H_{ZOH}(\omega)$ does not completely remove the high-frequency components¹ of $X_s(\omega)$. Consequently, in the example of Table 1, $x_{ZOH}(t)$ is not just $\cos(t) + \cos(2t)$, but also contains higher frequencies, as evidenced by its block shape. Note however that these high-frequency components in $x_{ZOH}(t)$ are smaller for smaller values of $T_s$.
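A small numerical sketch of $H_{ZOH}(\omega)$ for $T_s = 0.5$ s: its modulus equals $2|\sin(\omega T_s/2)|/\omega$, tends to $T_s$ for $\omega \to 0$, and vanishes at multiples of $\omega_s = 2\pi/T_s$, which is the low-pass behaviour described above:

```python
import numpy as np

Ts = 0.5
w = np.linspace(1e-6, 40, 2000)          # avoid w = 0 (the limit there is Ts)

H_zoh = (1 - np.exp(-1j*w*Ts)) / (1j*w)

# |H_ZOH| has the closed form 2|sin(w*Ts/2)|/w ...
assert np.allclose(np.abs(H_zoh), 2*np.abs(np.sin(w*Ts/2))/w)
# ... it tends to Ts for w -> 0 and vanishes at ws = 2*pi/Ts
assert abs(np.abs(H_zoh[0]) - Ts) < 1e-6
ws = 2*np.pi/Ts
assert abs((1 - np.exp(-1j*ws*Ts)) / (1j*ws)) < 1e-12
```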
Table 1: Left: sampled signal $x[n]$ ($T_s = 0.5$ s); Right: signal $x_{ZOH}(t)$ (red dashed) compared to $x(t)$ (blue solid).
Exercise 8. D/A conversion for compact disks. In exercise 10 of session 4, we analyzed how a piece of music could be stored on a CD. This is done as follows. Denote by $z(t)$ the music signal. This signal² is band-limited with a bandwidth equal to 120 krad/s (1 krad/s = 1 kilo rad/s = 1000 rad/s). Consequently, we have that:

$$Z(\omega) = 0 \;\text{ for all } |\omega| \ge 120 \text{ krad/s} \qquad (2)$$
In this exercise, we will suppose for simplicity that $Z(\omega)$ is entirely real and thus that $|Z(\omega)| = Z(\omega)$. Furthermore, we will suppose that $Z(\omega)$ has the shape given in the right

¹ These high-frequency components are due to the periodicity of $X_s(\omega)$; see session 4.
² As shown in session 4, the signal $z(t)$ is in fact the music signal from which all the frequency components that cannot be heard have been filtered.
Table 2: Left: $H_{ideal}(\omega)$ (blue dashed) and $|H_{ZOH}(\omega)|$ (red solid) for the case $T_s = 0.5$ (i.e. $\omega_s = 4\pi$); Right: Fourier transform $Z(\omega)$ of the continuous-time music signal $z(t)$ in the interval $[0,\ 1500]$ krad/s.
part of Table 2. In order to be stored on the CD, the signal $z(t)$ is sampled at a sampling frequency $\omega_s = 260$ krad/s ($T_s = \frac{2\pi}{260}$ ms) and the obtained samples $z[n]$ are stored on the CD. The frequency information in $z[n]$ can be expressed as a function of the normalized frequencies $\Omega$ (the DTFT $Z(\Omega)$) or as a function of the actual frequencies $\omega$. The latter representation is denoted $Z_s(\omega) = Z(\Omega = \omega T_s)$. As shown in session 4, $Z_s(\omega)$ is equal to $\frac{1}{T_s}\sum_{k=-\infty}^{+\infty} Z(\omega - k\omega_s)$. Consequently, $T_s\,Z_s(\omega)$ is also entirely real and can be represented as in the left part of Table 3. The quantity $T_s\,Z_s(\omega)$ will be denoted $\bar{Z}_s(\omega)$ in the sequel.
In this exercise, we will analyze how we can listen to the CD. The discrete-time signal $z[n]$ must for this purpose be transformed into a continuous-time (electrical) signal $x(t)$ that can subsequently be amplified by the loudspeaker. The signal $x(t)$ in question must of course be (almost) equal to the initial music signal $z(t)$, i.e. the one whose sampled version is stored on the CD. Consequently, it is required that $|X(\omega)| \approx |Z(\omega)|$ for all frequencies³. We could think that it would be sufficient to require that $|X(\omega)| \approx |Z(\omega)|$ for all audible frequencies, i.e. for $|\omega| < 120$ krad/s. However, even though these frequency components are not audible, it is nevertheless important that

$$X(\omega) \approx Z(\omega) = 0 \;\text{ for all } |\omega| > 120 \text{ krad/s}$$

or more precisely:

$$|X(\omega)| < 0.005 \;\text{ for all } |\omega| > 120 \text{ krad/s}$$

This extra constraint is due to the fact that the loudspeaker amplifier is a nonlinear operator which can therefore map high-frequency components into lower frequencies.

³ We here restrict attention to the modulus mainly for the sake of simplicity, but also because the phase deformation due to the reconstruction operation can be neglected.
[Table 3: Left: $\bar{Z}_s(\omega)$; Right: $|Y(\omega)|$, with annotations $|Y(\omega = 200\text{ krad/s})| = 0.07$ and $\omega = 120$ krad/s.]
We represent scaled versions of both $H_{ZOH}(\omega)$ and $Z_s(\omega)$ to be able to represent them in the same figure.
b. Denote by $x(t)$ the output of $F(\omega)$. Explain why $x(t)$ can also not be sent to the loudspeaker.
[Diagram: $z[n] \rightarrow \boxed{\text{ZOH},\ T_s} \rightarrow y(t) \rightarrow \boxed{F(\omega)} \rightarrow x(t)$]

Table 4: Left: reconstruction set-up for item [b.]; Right: $|\tilde{H}_{ZOH,2}(\omega)|$.
In fact, the simple procedure presented above to generate $x(t)$ would work perfectly if $\omega_s$ had been chosen larger. Let us prove it. Suppose thus that the music signal $z(t)$ is sampled at a sampling frequency $\omega_{s,2} = 1040$ krad/s ($T_{s,2} = \frac{2\pi}{1040}$ ms). The obtained discrete-time signal $z_2[n] = z(t = nT_{s,2})$ is then the signal which is converted into a continuous-time signal $y_2(t)$ by using a ZOH mechanism ($T_{s,2} = \frac{2\pi}{1040}$ ms). In the right part of Table 4, we represent the modulus of $\tilde{H}_{ZOH,2}(\omega) = \frac{1}{T_{s,2}}H_{ZOH,2}(\omega)$, with $H_{ZOH,2}(\omega)$ the filter corresponding to the ZOH mechanism ($T_{s,2} = \frac{2\pi}{1040}$ ms).

The subscript 2 in the notation $v_2[n]$ is to stress that $v_2[n]$ is a discrete-time signal corresponding to a sampling frequency $\omega_{s,2}$.

$$v_2[n] = \begin{cases} z[n/4] & \text{if } n \text{ is a multiple of } 4 \\ 0 & \text{elsewhere} \end{cases}$$
[Table 5: full reconstruction mechanism: $z[n] \rightarrow \boxed{\text{oversampling}} \rightarrow v_2[n] \rightarrow \boxed{G(\Omega)} \rightarrow d_2[n] = z_2[n] \rightarrow \boxed{\text{ZOH},\ T_{s,2}} \rightarrow y_2(t) \rightarrow \boxed{F(\omega)} \rightarrow x_2(t)$; the first two blocks form the digital part, the last two the analog part.]
Table 6: Left: signal $z[n]$; Right: oversampled signal $v_2[n]$ corresponding to $z[n]$.
d. Determine the DTFT $V_2(\Omega)$ of $v_2[n]$ as a function of the DTFT $Z(\Omega)$ of the signal $z[n]$ stored on the CD ($z[n] = z(t = nT_s)$) and compare these two DTFTs in a plot.

Based on the oversampled signal $v_2[n]$, we can generate the desired signal $z_2[n]$ by filtering $v_2[n]$ with a discrete-time low-pass filter with cut-off frequency $\Omega_{cut} = \frac{\pi}{4}$ and with an amplification gain of $\frac{T_s}{T_{s,2}} = 4$. The filter $G(\Omega)$ for this filtering operation has thus the following frequency response in its main interval $[-\pi,\ \pi]$:

$$G(\Omega) = \begin{cases} 4 & \text{for } -\frac{\pi}{4} \le \Omega \le \frac{\pi}{4} \\ 0 & \text{elsewhere in the main interval} \end{cases}$$

We choose an ideal filter for $G(\Omega)$. This is not a large oversimplification here since, unlike for continuous-time filters, it is very easy to design a discrete-time filter which is (almost) ideal⁶.
e. Denote by $d_2[n]$ the signal obtained by filtering $v_2[n]$ by the filter $G(\Omega)$. Show that the signal $d_2[n]$ is equal to the signal $z_2[n]$ that would be obtained by sampling the music signal $z(t)$ at the sampling frequency $\omega_{s,2}$ ($z_2[n] = z(t = nT_{s,2})$).

Now, since we can generate $d_2[n] = z_2[n]$ from the stored signal $z[n]$ via oversampling + filtering, it is straightforward to generate the signal $x_2(t)$ that can be sent to the loudspeaker: we just have to follow the procedure described above item [c.]. The full reconstruction mechanism for the CD technology is thus as represented in Table 5.
⁶ The main approximation here is that an actual discrete-time filter will introduce a small delay. This small delay is necessary to make the filter causal (see Chapter 10 of the book for more details).
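The oversampling + ideal-filtering reconstruction of item [e.] can be sketched numerically. The test tone below is an assumption of mine (any bandlimited signal would do), and the ideal filter $G(\Omega)$ is applied via the DFT, which is exact for periodic signals:

```python
import numpy as np

# Bandlimited test signal: a low-frequency cosine sampled at rates 1/Ts and 4/Ts
N = 64                                  # samples at the low rate
m = np.arange(N)
z  = np.cos(2*np.pi*3*m/N)              # z[n]  = z(t = n Ts)
n4 = np.arange(4*N)
z2 = np.cos(2*np.pi*3*n4/(4*N))         # z2[n] = z(t = n Ts/4)

# Oversampling: zero-stuff by a factor 4
v2 = np.zeros(4*N); v2[::4] = z

# Ideal low-pass G: gain 4 on |Omega| <= pi/4, 0 elsewhere (applied via the DFT)
V2 = np.fft.fft(v2)
Omega = np.fft.fftfreq(4*N) * 2*np.pi   # DFT bin frequencies in [-pi, pi)
d2 = np.fft.ifft(np.where(np.abs(Omega) <= np.pi/4, 4*V2, 0)).real

# d2[n] reproduces the samples of z(t) at the four-times-higher rate
assert np.allclose(d2, z2)
```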
Solutions
Exercise 8.
8.a. In order to be a good signal to send to the loudspeaker, the signal $y(t)$ should fulfill the two following conditions:

1. $|Y(\omega)| \approx |Z(\omega)|$ for $|\omega| < 120$ krad/s
2. $|Y(\omega)| < 0.005$ for all $|\omega| > 120$ krad/s.

For the first condition, recall that $\bar{Z}_s(\omega) = Z(\omega)$ for $|\omega| < 120$ krad/s. Since $Y(\omega) = \tilde{H}_{ZOH}(\omega)\,\bar{Z}_s(\omega)$, the first condition is met if $\tilde{H}_{ZOH}(\omega) \approx 1$ for $|\omega| < 120$ krad/s. Since $|\tilde{H}_{ZOH}(\omega = 120\text{ krad/s})| = 0.8$, we can hardly say that the first condition is met but, as we will now see, the biggest problem is the second condition. The second condition is indeed definitely NOT met, since the high-frequency components are not sufficiently reduced by $\tilde{H}_{ZOH}(\omega)$. Indeed, we do not have that $|Y(\omega)| < 0.005$ for all $|\omega| > 120$ krad/s. For example, at $\omega = 200$ krad/s, $|Y(\omega)| = 0.07$.
8.b. The second condition (i.e. $|X(\omega)| < 0.005$ for all $|\omega| > 120$ krad/s) is not met. Indeed, since $|Y(\omega = 200\text{ krad/s})| = 0.07$, the gain of the filter $F(\omega)$ at $\omega = 200$ krad/s should be smaller than $\frac{0.005}{0.07} \approx 0.071$. This is not the case, since $|F(\omega)|$ at $\omega = 200$ krad/s is 0.4.
8.c. For the case of a sampling frequency $\omega_{s,2} = 1040$ krad/s, the DTFT $Z_2(\Omega)$ of $z_2[n]$, expressed as a function of the normalized frequency $\Omega$, can also be expressed as a function of the actual frequency $\omega = \Omega/T_{s,2}$. This yields $Z_{s,2}(\omega) = Z_2(\Omega = \omega T_{s,2})$, which is periodic with period $\omega_{s,2}$. For simplicity, we introduce the notation $\bar{Z}_{s,2}(\omega) = T_{s,2}\,Z_{s,2}(\omega)$.

For the sake of completeness, the modulus of $F(\omega)$ is also reproduced in the right part of Table 7.

To be a good signal for the loudspeaker, the modulus of $X_2(\omega)$ should be such that:

1. $|X_2(\omega)| \approx |Z(\omega)|$ for $|\omega| < 120$ krad/s
2. $|X_2(\omega)| < 0.005$ for all $|\omega| > 120$ krad/s.
[Annotation in the right plot of Table 7: $|Y(\omega = 950\text{ krad/s})| = 0.015$.]
Table 7: Left: $\bar{Z}_{s,2}(\omega) = T_{s,2}\,Z_{s,2}(\omega)$ in the frequency range $[0,\ 2500]$ krad/s (blue solid) and $|\tilde{H}_{ZOH,2}(\omega)|$ (red dashed); Right: $|Y_2(\omega)|$ (blue solid) and $|F(\omega)|$ (red dashed).
8.d. The DTFT $V_2(\Omega)$ of $v_2[n]$ is defined as:

$$V_2(\Omega) = \sum_{n=-\infty}^{+\infty} v_2[n]\,e^{-j\Omega n}$$

Since $v_2[n]$ is only nonzero when $n$ is a multiple of 4, the DTFT $V_2(\Omega)$ can be rewritten as:

$$V_2(\Omega) = \dots + v_2[-8]\,e^{j8\Omega} + v_2[-4]\,e^{j4\Omega} + v_2[0] + v_2[4]\,e^{-j4\Omega} + v_2[8]\,e^{-j8\Omega} + \dots = \sum_{m=-\infty}^{+\infty} \underbrace{v_2[4m]}_{=z[m]}\,e^{-j4m\Omega}$$

$$\Longrightarrow\; V_2(\Omega) = Z(4\Omega)$$
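The identity $V_2(\Omega) = Z(4\Omega)$ can be checked directly on an arbitrary finite-length sequence (the random test signal is my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(32)           # arbitrary finite-length z[n]

# Zero-stuff by a factor 4: v2[4m] = z[m], zero elsewhere
v2 = np.zeros(4*len(z)); v2[::4] = z

def dtft(sig, Omega):
    n = np.arange(len(sig))
    return np.sum(sig * np.exp(-1j*Omega*n))

# V2(Omega) = Z(4*Omega) at a few arbitrary frequencies
for Omega in [0.1, 0.7, 1.3, 2.5]:
    assert np.isclose(dtft(v2, Omega), dtft(z, 4*Omega))
```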
Based on this formula, we see that the shape of $V_2(\Omega)$ can be deduced from the shape of $Z(\Omega)$. Consequently, let us first represent the DTFT $Z(\Omega)$ of $z[n] = z(t = nT_s)$. We know that $Z(\Omega) = Z_s(\omega = \Omega/T_s)$ and $Z_s(\omega)$ is represented in Table 3. Consequently, we also know that:

- Like $Z_s(\omega)$, $Z(\Omega)$ is entirely real.
- More importantly, since $Z_s(\omega)$ is, in its main interval $[-130,\ 130]$ krad/s, nonzero only for $|\omega| < 120$ krad/s, the DTFT $Z(\Omega)$ is, in its main interval $[-\pi,\ \pi]$, nonzero only for $|\Omega| < 120\,T_s = \frac{24\pi}{26} \approx 2.9$.
- $Z(\Omega)$ is periodic with period $2\pi$ while $Z_s(\omega)$ is periodic with period $\omega_s = 260$ krad/s.
- Like for $Z_s(\omega)$, there is a scaling factor $\frac{1}{T_s}$ between the DTFT $Z(\Omega)$ of the sampled signal $z[n]$ and the Fourier transform $Z(\omega)$ of the continuous-time signal $z(t)$. In other words, $Z(\Omega = 0) = \frac{0.45}{T_s}$ since $Z(\omega = 0) = 0.45$.

We can thus represent $Z(\Omega)$ for the normalized frequency interval $[0,\ 3\pi]$ as in the left part of Table 8 (blue dashed). From the representation of $Z(\Omega)$, we can then easily deduce the shape of $V_2(\Omega)$ via the formula $V_2(\Omega) = Z(4\Omega)$. This leads to the red solid plot in the left part of Table 8.
It is very important to note that, in the case of $V_2(\Omega)$, the normalized frequency $\Omega$ corresponds to an actual frequency $\omega = \Omega/T_{s,2}$, since $v_2[n]$ is a discrete-time signal corresponding to a sampling period $T_{s,2}$!

8.e. The filter $G(\Omega)$ is represented in the right part of Table 8 (blue dashed) and is compared to $V_2(\Omega)$. Consequently, the signal $d_2[n]$ has a DTFT $D_2(\Omega)$ equal to $G(\Omega)V_2(\Omega)$. The DTFT $D_2(\Omega)$ of $d_2[n]$ is thus as represented⁷ in Table 9. We will now show that this DTFT $D_2(\Omega)$ is equal to the DTFT $Z_2(\Omega)$ of $z_2[n]$ for all $\Omega$. Having proved this, we will have also proven that $z_2[n] = d_2[n]$.
[Annotations in Table 8: $G(\Omega = 0) = \frac{T_s}{T_{s,2}} = 4$; $Z(\Omega = 0) = V_2(\Omega = 0) = \frac{0.45}{T_s}$; markers at $\Omega = \frac{\pi}{4}$, $\Omega = 2\pi - \frac{\pi}{4}$ and $\Omega = 2\pi + \frac{\pi}{4}$.]

Table 8: Left: $Z(\Omega)$ (blue dashed) and $V_2(\Omega)$ (red solid) in the interval $[0,\ 3\pi]$; Right: $G(\Omega)$ (blue dashed) and $V_2(\Omega)$ (red solid) in the interval $[0,\ 3\pi]$.
have been able to retrieve values of $z(t)$ in between the available samples of $z[n]$. This seems magical, but it is not. Indeed, you have to recall that we have respected Shannon's theorem when sampling the signal $z(t)$. Consequently, $z[n]$ contains all the information present in $z(t)$.

[Table 9: the DTFT $D_2(\Omega) = G(\Omega)V_2(\Omega)$ of $d_2[n]$.]
The values $y[n]$ taken by a stochastic process $y$ at each time instant $n$ vary at each experiment/realization. However, each realization of $y$ presents the same global characteristics, i.e. each realization will for example oscillate around the same value and will have the same frequency content. In this session, we will restrict attention to a particular type of stochastic processes: the (wide-sense) stationary stochastic processes. As the term stationary indicates, a wide-sense stationary (WSS) stochastic process is a stochastic process for which the mean $Ey[n]$ and the quantity $E(y[n]y[n-\tau])$ (for arbitrary $\tau$) are independent of the time instant $n$ where they are evaluated.

Given two (jointly) stationary stochastic processes $y[n]$ and $x[n]$, we have the following definitions:

- The variance of $y[n]$ (constant over $n$) is defined as $E(y[n] - Ey[n])^2$
- cross-correlation function: $R_{yx}[\tau] = E(y[n]\,x[n-\tau])$
- auto-correlation function: $R_y[\tau] = E(y[n]\,y[n-\tau])$
- The power of a stochastic process is defined as $Ey^2[n] = R_y[0]$
- The power spectral density function $\Phi_y(\Omega)$ is defined as the DTFT of $R_y[\tau]$, i.e.

$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_y[\tau]\,e^{-j\Omega\tau}$$

When $y$ is obtained by filtering the WSS stochastic process $x$ with a filter whose transfer function is given by $H(z)$, we have that:

$$\Phi_y(\Omega) = |H(z = e^{j\Omega})|^2\,\Phi_x(\Omega)$$
For each $n$, $e[n]$ is a zero-mean random variable with variance $\sigma_e^2$. The properties of a white noise of variance $\sigma_e^2$ are thus:

$$Ee[n] = 0 \qquad R_e[\tau] = \begin{cases} \sigma_e^2 & \text{for } \tau = 0 \\ 0 & \text{elsewhere} \end{cases}$$
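These properties can be illustrated with a sample estimate of $R_e[\tau]$ (this seeded simulation with loose tolerances is a sketch of mine, not part of the exercises):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
N = 200_000
e = sigma * rng.standard_normal(N)     # white noise, variance sigma^2 = 4

def autocorr(x, tau):
    # sample estimate of R[tau] = E(x[n] x[n - tau]) for a zero-mean signal
    return np.mean(x[tau:] * x[:len(x)-tau]) if tau > 0 else np.mean(x*x)

assert abs(autocorr(e, 0) - sigma**2) < 0.1             # R_e[0] ~= sigma^2
assert all(abs(autocorr(e, t)) < 0.1 for t in (1, 2, 5))  # ~0 elsewhere
```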
Exercise 1. Consider the discrete-time stationary stochastic process $y[n]$ that is generated as:

$$y[n] = 10 + e[n]$$

with $e[n]$ a (zero-mean) white noise of variance $\sigma_e^2 = 4$. In Figure 1, two realizations of this stochastic process are represented for $0 \le n \le 4$.
[Figure 1: two realizations of $y[n]$.]
[Figure 2: block diagram: $e[n] \rightarrow \boxed{H_1}$ and $v[n] \rightarrow \boxed{H_2}$, summed to give $y[n]$.]
a. Determine the transfer functions $H_1(z)$ and $H_2(z)$ of the filters $H_1$ and $H_2$ in Figure 2 in such a way that this figure describes the difference equation generating $y[n]$. Hint: use the time-shift property of the Z-transform.

b. What are the conditions on $a$, $b$, $c$ and $d$ so that the above difference equation represents a stable system? Hint: the system is stable if and only if both transfer functions $H_1(z)$ and $H_2(z)$ found in item [a.] are stable.
Consider the same equation but with $a = 0.9$, $b = 1$, $c = d = 0$ and $e[n]$ a white-noise stochastic process with variance $\sigma_e^2 = 1$. This delivers the following equation generating a wide-sense stationary process $y[n]$:

$$y[n] = 0.9\,y[n-1] + e[n]$$

c. Determine the power spectral density $\Phi_y(\Omega)$ of $y[n]$.

d. The stochastic process can be rewritten as

$$y[n] = \sum_{k=0}^{+\infty} (0.9)^k\,e[n-k]$$
$$y[n] = \frac{3}{4}\,x[n-2] + \frac{1}{2}\,x[n-4] + v[n]$$

where $x$ and $v$ are two white noise processes with variance $\sigma_x^2$ and $\sigma_v^2$, respectively. Moreover, we suppose that $x$ and $v$ are independent.

a. Determine the mean of $y[n]$ for all values of $n$.
Figure 4: Upper plot: a realization of $y[n]$. Bottom plot: a realization of $e[n]$. Both with $\sigma_e^2 = 1$.
b. Determine $R_y[\tau]$ for $\tau = 0$.

c. Determine the power of $y$.

d. Determine $R_{yx}[\tau]$ for all values of $\tau$.

e. Explain why the random variable $y[n]$ (i.e. the value taken by $y$ at time instant $n$) is not correlated with the random variable $x[n]$ (i.e. the value taken by $x$ at time instant $n$) or, in other words, why $R_{yx}[0] = 0$.

f. Prove the following property of the power spectral density: for a stochastic process $y[n] = x_1[n] + x_2[n]$, we have that

$$\Phi_y(\Omega) = \Phi_{x_1}(\Omega) + \Phi_{x_2}(\Omega)$$

when $x_1[n]$ and $x_2[n]$ are two independent (wide-sense) stationary stochastic processes.

g. Determine the power spectral density function $\Phi_y(\Omega)$ of the signal $y$ given above. Use for this purpose the property given in item [f.].
Solutions.

Exercise 1.

1.a. $y[n]$ is made up of the summation of a constant 10 (which does not change across realizations) and of a white noise process (which is different at each realization). Consequently, for each value of $n$, the value taken by $y[n] = 10 + e[n]$ in different realizations is different.

1.b. For $n = 2$, we obtain successively:

$$Ey[2] = E(10 + e[2]) = E10 + Ee[2] = 10 + 0 = 10$$

since $e[2]$ is by definition a random variable with zero mean. The latter holds for each value of $n$. Consequently, for an arbitrary value of $n$:

$$Ey[n] = E(10 + e[n]) = E10 + Ee[n] = 10 + 0 = 10$$
1.c. The variance $\mathrm{var}(y)$ of a random variable $y$ is defined as $E(y - Ey)^2$. The sample $y[2]$ is a random variable and its mean is equal to 10 (see item [b.]). Consequently, the variance $\mathrm{var}(y[2])$ of $y[2]$ is given by:

$$\mathrm{var}(y[2]) = E(y[2] - 10)^2 = E\,e^2[2] = 4$$

since $e[2]$ is a random variable of variance $\sigma_e^2 = 4$. The latter holds for each value of $n$. Consequently, for an arbitrary value of $n$:

$$\mathrm{var}(y[n]) = E(y[n] - 10)^2 = \sigma_e^2 = 4$$

$$R_y[\tau] = E(y[n]\,y[n-\tau]) = E\big((10 + e[n])(10 + e[n-\tau])\big) = 100 + R_e[\tau]$$

where the last step follows from the definition of the auto-correlation function. Recalling now that $e[n]$ is a white noise, we obtain:

$$R_y[\tau] = \begin{cases} 100 + \sigma_e^2 & \text{for } \tau = 0 \\ 100 & \text{elsewhere} \end{cases}$$
1.f. The power of a stochastic process is defined as $R_y[0]$. The power is thus equal to $100 + \sigma_e^2$.
Exercise 2.

2.a. The mean of $y[n]$ can be computed as follows:

$$Ey[n] = E(e[n-1] - a\,e[n-2]) = Ee[n-1] - a\,Ee[n-2] = 0 \quad \forall n$$

since $Ee[n] = 0$ for all $n$ for a white noise $e[n]$.

2.b. The auto-correlation function of $y[n]$ can be computed as follows:

$$R_y[\tau] = Ey[n]y[n-\tau] = E\big((e[n-1] - a\,e[n-2])(e[n-1-\tau] - a\,e[n-2-\tau])\big)$$

$$R_y[\tau] = \begin{cases} (1 + a^2)\,\sigma_e^2 & \text{for } \tau = 0 \\ -a\,\sigma_e^2 & \text{for } \tau = \pm 1 \\ 0 & \text{elsewhere} \end{cases}$$
$y[n]$ and $y[n-2]$ are indeed made up of different elements ($e[n-1]$ and $e[n-2]$ for $y[n]$; $e[n-3]$ and $e[n-4]$ for $y[n-2]$). These different elements are by definition uncorrelated. This explains why $y[n]$ and $y[n-2]$ are uncorrelated and thus why $R_y[2] = 0$.
2.c. The cross-correlation function of $y[n]$ with $e[n]$ can be computed as follows:

$$R_{ye}[\tau] = Ey[n]e[n-\tau]$$

We obtain thus:

$$R_{ye}[\tau] = \begin{cases} \sigma_e^2 & \text{for } \tau = 1 \\ -a\,\sigma_e^2 & \text{for } \tau = 2 \\ 0 & \text{elsewhere} \end{cases}$$
2.d. The power of a stochastic signal is equal to its auto-correlation function evaluated at $\tau = 0$. Thus, the power of $e[n]$ is $R_e[0] = \sigma_e^2$ and the power of $y[n]$ is $R_y[0] = (1 + a^2)\,\sigma_e^2$.
2.e. The power spectral density function $\Phi_y(\Omega)$ of $y[n]$ is defined as the DTFT of $R_y[\tau]$, i.e.

$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_y[\tau]\,e^{-j\Omega\tau} = (1 + a^2)\,\sigma_e^2 - a\,\sigma_e^2\,e^{-j\Omega} - a\,\sigma_e^2\,e^{j\Omega} = (1 + a^2)\,\sigma_e^2 - 2a\,\sigma_e^2\cos(\Omega)$$

Similarly:

$$\Phi_e(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_e[\tau]\,e^{-j\Omega\tau} = \sigma_e^2$$

where we made use of the fact that $e^0 = 1$ and that

$$R_e[\tau] = \begin{cases} \sigma_e^2 & \text{for } \tau = 0 \\ 0 & \text{elsewhere} \end{cases}$$
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \Phi_y(\Omega)\,d\Omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left((1 + a^2)\,\sigma_e^2 - 2a\,\sigma_e^2\cos(\Omega)\right) d\Omega = (1 + a^2)\,\sigma_e^2 - 2a\,\sigma_e^2\,\underbrace{\frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\Omega)\,d\Omega}_{=0} = (1 + a^2)\,\sigma_e^2$$

which is indeed what we found in item [d.] for the power of $y$. Now, let us proceed with the integral of $\Phi_e(\Omega)$:

$$\frac{1}{2\pi}\int_{-\pi}^{\pi} \Phi_e(\Omega)\,d\Omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sigma_e^2\,d\Omega = \sigma_e^2$$

which is indeed what we found in item [d.] for the power of $e$.
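The autocorrelation results of this exercise can be sketched with a seeded simulation (the value $a = 0.6$ is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma2 = 0.6, 1.0
e = rng.standard_normal(500_000)       # white noise, variance 1

y = np.zeros_like(e)
y[2:] = e[1:-1] - a*e[:-2]             # y[n] = e[n-1] - a e[n-2]

def R(x, tau):
    return np.mean(x[tau:] * x[:len(x)-tau]) if tau else np.mean(x*x)

assert abs(R(y, 0) - (1 + a**2)*sigma2) < 0.02   # R_y[0] = (1+a^2) sigma^2
assert abs(R(y, 1) - (-a*sigma2)) < 0.02         # R_y[1] = -a sigma^2
assert abs(R(y, 2)) < 0.02                       # R_y[tau] = 0 for |tau| >= 2
```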
Exercise 3.

3.a. Using the shift-in-time property of the Z-transform, we obtain that

$$Y(z) = a\,z^{-1}Y(z) + b\,E(z) + c\,V(z) + d\,z^{-1}V(z)$$

with $Y(z)$, $E(z)$ and $V(z)$ the Z-transforms of $y[n]$, $e[n]$ and $v[n]$, respectively. Since we are looking for the transfer functions such that $Y(z) = H_1(z)E(z) + H_2(z)V(z)$, we obtain:

$$Y(z) = \underbrace{\frac{b}{1 - a\,z^{-1}}}_{=H_1(z)}E(z) + \underbrace{\frac{c + d\,z^{-1}}{1 - a\,z^{-1}}}_{=H_2(z)}V(z)$$
3.b. The system is stable if and only if both $H_1(z)$ and $H_2(z)$ are stable transfer functions. These transfer functions can be rewritten as follows:

$$H_1(z) = \frac{b\,z}{z - a} \qquad H_2(z) = \frac{c\,z + d}{z - a}$$

We see that the unique pole of both $H_1(z)$ and $H_2(z)$ is at $z = a$. Consequently, the only condition for stability is $|a| < 1$, i.e. $-1 < a < 1$. No extra conditions on $b$, $c$ and $d$ are needed for stability.
3.c. Using formula (3.26) in the lecture notes, we have that:

$$\Phi_y(\Omega) = |H_1(\Omega)|^2\,\Phi_e(\Omega) = |H_1(\Omega)|^2$$

since, for a white noise $e$, $\Phi_e(\Omega) = \sigma_e^2$, and $\sigma_e^2 = 1$ in this case. Now, using what has been found in item [a.], we obtain:

$$\Phi_y(\Omega) = \left|\frac{1}{1 - 0.9\,e^{-j\Omega}}\right|^2 = \left|\frac{1}{1 - 0.9\,(\cos(\Omega) - j\sin(\Omega))}\right|^2 = \left|\frac{1}{(1 - 0.9\cos(\Omega)) + j\,0.9\sin(\Omega)}\right|^2 = \frac{1}{(1 - 0.9\cos(\Omega))^2 + (0.9\sin(\Omega))^2}$$
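The last equality is an algebraic identity and can be verified numerically:

```python
import numpy as np

Omega = np.linspace(-np.pi, np.pi, 1001)

phi_direct  = np.abs(1/(1 - 0.9*np.exp(-1j*Omega)))**2
phi_formula = 1/((1 - 0.9*np.cos(Omega))**2 + (0.9*np.sin(Omega))**2)

assert np.allclose(phi_direct, phi_formula)
```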
3.d. Using the fact that $y[n]$ is the result of the convolution of $h[n]$ (i.e. the pulse response of the filter $H_1$, see item [a.]) and $e[n]$, we obtain that:

$$y[n] = \sum_{k=-\infty}^{+\infty} h[k]\,e[n-k]$$

Now, noticing that $h[n] = (0.9)^n u[n]$ (see Table 4.1 on page 177 of the book), we obtain:

$$y[n] = \sum_{k=0}^{+\infty} (0.9)^k\,e[n-k]$$

$$E\left(\left(\sum_{k=0}^{+\infty} (0.9)^k\,e[n-k]\right) e[n]\right) = (0.9)^0\,E(e[n]e[n]) + (0.9)^1\,E(e[n-1]e[n]) + (0.9)^2\,E(e[n-2]e[n]) + \dots = \sigma_e^2 = 1$$

$$E\left(\left(\sum_{k=0}^{+\infty} (0.9)^k\,e[n-k]\right) e[n+1]\right) = (0.9)^0\,E(e[n]e[n+1]) + (0.9)^1\,E(e[n-1]e[n+1]) + (0.9)^2\,E(e[n-2]e[n+1]) + \dots = 0$$

$$Ey[n] = E\sum_{k=0}^{+\infty} (0.9)^k\,e[n-k] = \sum_{k=0}^{+\infty} (0.9)^k\,Ee[n-k] = 0$$
since Ee[n] = 0 for all n when e is a white noise.
3.g. That $y[n]$ varies more smoothly (shows more memory) than $e[n]$ is logical, since the value of $y$ at time instant $n$ depends directly on the value of $y$ at time instant $n-1$: indeed, $y[n] = 0.9\,y[n-1] + e[n]$. The white noise $e[n]$ has a more erratic behaviour since the value of $e[n]$ at time instant $n$ is independent of the values of $e[n]$ at previous and future time instants. This has a direct consequence on the shape of their power spectra. Consequently, that $y[n]$ varies more smoothly than $e[n]$ can be explained based on the shape of $\Phi_y(\Omega)$ in Figure 3. Indeed, in this figure, we see that the power of $y[n]$ is strongly concentrated in the low frequencies (recall that $\Phi_y(\Omega)$ is represented on a logarithmic scale), as opposed to $e[n]$, whose power is spread over the whole frequency range ($\Phi_e(\Omega)$ is indeed nonzero for all $\Omega$), and we know that a signal whose power is located in the low-frequency range is a signal which varies more smoothly.
3.h. The power of $y$ is by definition equal to $R_y[0]$, which is given by:

$$R_y[0] = Ey[n]y[n] = E\left(0.9\,y[n-1] + e[n]\right)^2 = E\left((0.9)^2\,y[n-1]y[n-1] + 1.8\,y[n-1]e[n] + e[n]e[n]\right) = (0.9)^2 R_y[0] + 0 + \sigma_e^2$$

where $E(y[n-1]e[n]) = 0$ since $e[n]$ is uncorrelated with the past output $y[n-1]$. Consequently:

$$R_y[0] = \frac{\sigma_e^2}{1 - (0.9)^2} \approx 5.26\,\sigma_e^2$$
When $\sigma_e^2 = 1$, the power of $y$ is thus equal to 5.26 while the power of $e$ is by definition equal to $\sigma_e^2 = 1$. In Figure 4, we see that the power of $y$ is larger since the values taken by $y[n]$ for different $n$ are spread over a wider range of amplitudes.
Exercise 4.

4.a. We have for all values of $n$ that

$$Ey[n] = \frac{3}{4}\,Ex[n-2] + \frac{1}{2}\,Ex[n-4] + Ev[n] = 0 + 0 + 0 = 0$$

since $x$ and $v$ are white noises and thus zero-mean.
4.b. We have that:

$$R_y[\tau = 0] = Ey[n]y[n] = E\left(\frac{9}{16}x^2[n-2] + \frac{1}{4}x^2[n-4] + v^2[n] + \frac{3}{4}x[n-2]x[n-4] + \frac{3}{2}x[n-2]v[n] + x[n-4]v[n]\right)$$

$$= \frac{9}{16}R_x[0] + \frac{1}{4}R_x[0] + R_v[0] + 0 + 0 + 0$$

Using now the fact that $x$ and $v$ are both white noises, we obtain:

$$R_y[0] = \frac{13}{16}\,\sigma_x^2 + \sigma_v^2$$

4.c. The power of $y$ is by definition $R_y[0]$ and is thus equal to $\frac{13}{16}\,\sigma_x^2 + \sigma_v^2$.
4.e. We first observe that $y[n]$ is a linear combination of the random variables $x[n-2]$, $x[n-4]$ and $v[n]$. Second, we observe that, since $x$ is white, the random variable $x[n]$ is not correlated with the previous samples of $x$, in particular with $x[n-2]$ and $x[n-4]$. The random variable $x[n]$ is also not correlated with $v[n]$ since $v$ and $x$ are independent. Combining the previous observations implies that $y[n]$ and $x[n]$ are uncorrelated and thus that $R_{yx}[0] = Ey[n]x[n] = 0$.
4.f. We must prove the following: for a stochastic process $y[n] = x_1[n] + x_2[n]$, we have that

$$\Phi_y(\Omega) = \Phi_{x_1}(\Omega) + \Phi_{x_2}(\Omega)$$

when $x_1[n]$ and $x_2[n]$ are two independent (wide-sense) stationary processes. For this purpose, let us recall that

$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty} R_y[\tau]\,e^{-j\Omega\tau}$$

where $R_y[\tau] = E(y[n]y[n-\tau])$. Using the fact that $y[n] = x_1[n] + x_2[n]$, we can rewrite $R_y[\tau]$ as follows:

$$R_y[\tau] = E\left(x_1[n]x_1[n-\tau] + x_1[n]x_2[n-\tau] + x_2[n]x_1[n-\tau] + x_2[n]x_2[n-\tau]\right) = R_{x_1}[\tau] + R_{x_2}[\tau]$$

where the last equality follows from the fact that $x_1[n]$ and $x_2[n]$ are independent. Consequently:

$$\Phi_y(\Omega) = \sum_{\tau=-\infty}^{+\infty}\left(R_{x_1}[\tau] + R_{x_2}[\tau]\right)e^{-j\Omega\tau} = \sum_{\tau=-\infty}^{+\infty} R_{x_1}[\tau]\,e^{-j\Omega\tau} + \sum_{\tau=-\infty}^{+\infty} R_{x_2}[\tau]\,e^{-j\Omega\tau} = \Phi_{x_1}(\Omega) + \Phi_{x_2}(\Omega)$$
4.g. We can decompose the signal $y[n]$ in this exercise into the sum of two independent signals. Indeed, $y[n] = x_1[n] + x_2[n]$ with

$$x_1[n] = \frac{3}{4}\,x[n-2] + \frac{1}{2}\,x[n-4] \qquad x_2[n] = v[n]$$

These signals $x_1[n]$ and $x_2[n]$ are indeed independent since $x[n]$ and $v[n]$ are independent. Based on the above decomposition, we can conclude that:

$$\Phi_y(\Omega) = \Phi_{x_1}(\Omega) + \Phi_{x_2}(\Omega)$$

Let us now compute $\Phi_{x_1}(\Omega)$ and $\Phi_{x_2}(\Omega)$. The latter is easily determined:

$$\Phi_{x_2}(\Omega) = \Phi_v(\Omega) = \sigma_v^2$$

since $v$ is a white noise of variance $\sigma_v^2$ (see e.g. exercise 2, item [e.]).

The signal $x_1[n]$ is given by $x_1[n] = \frac{3}{4}x[n-2] + \frac{1}{2}x[n-4]$. The transfer function $H(z)$ between $x$ and $x_1$ is $H(z) = \frac{3}{4}z^{-2} + \frac{1}{2}z^{-4}$. Using now formula (3.26) of the lecture notes, we obtain:

$$\Phi_{x_1}(\Omega) = |H(\Omega)|^2\,\underbrace{\Phi_x(\Omega)}_{=\sigma_x^2}$$

where $H(\Omega) = H(z = e^{j\Omega})$ is the frequency response of $H(z)$ and $|\cdot|$ denotes the modulus. We have thus:

$$\Phi_{x_1}(\Omega) = \left(\frac{13}{16} + \frac{3}{8}\left(e^{2j\Omega} + e^{-2j\Omega}\right)\right)\sigma_x^2 = \left(\frac{13}{16} + \frac{3}{4}\cos(2\Omega)\right)\sigma_x^2$$

Combining these results:

$$\Phi_y(\Omega) = \left(\frac{13}{16} + \frac{3}{4}\cos(2\Omega)\right)\sigma_x^2 + \sigma_v^2$$
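The modulus computation above can be verified numerically:

```python
import numpy as np

Omega = np.linspace(-np.pi, np.pi, 801)

# H(Omega) = (3/4) e^{-2j Omega} + (1/2) e^{-4j Omega}
H = 0.75*np.exp(-2j*Omega) + 0.5*np.exp(-4j*Omega)

# |H|^2 = 13/16 + (3/4) cos(2 Omega)
assert np.allclose(np.abs(H)**2, 13/16 + 0.75*np.cos(2*Omega))
```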
The values $y[n]$ taken by a WSS stochastic process $y$ at each time instant $n$ vary at each experiment/realization. However, each realization of $y$ presents the same global characteristics, i.e. each realization will for example oscillate around the same value and will have the same frequency content.

The power distribution of a WSS stochastic process $y$ over the frequency (i.e. its frequency content) is represented by its power spectral density function $\Phi_y(\Omega)$. For a white noise $y$, the power spectral density function $\Phi_y(\Omega)$ is equal to the variance of the white noise. When $y$ is obtained by filtering the WSS stochastic process $x$ with a filter whose transfer function is given by $H(z)$, we have that:

$$\Phi_y(\Omega) = |H(z = e^{j\Omega})|^2\,\Phi_x(\Omega)$$

The power¹ of $y$ can be computed as:

$$P_y = Ey^2[n] = \frac{1}{2\pi}\int_{-\pi}^{\pi}\Phi_y(\Omega)\,d\Omega = \frac{1}{\pi}\int_{0}^{\pi}\Phi_y(\Omega)\,d\Omega$$

(the last equality follows from $\Phi_y(\Omega) = \Phi_y(-\Omega)$).
Exercise 1. In this exercise, we consider a continuous-time system in the following setup:

[Diagram: $x[n] \rightarrow \boxed{\text{ZOH},\ T_s} \rightarrow x(t) \rightarrow \boxed{\text{Continuous system}} \rightarrow y(t) \rightarrow \boxed{\text{Sampling},\ T_s} \rightarrow y[n]$; the overall discrete-time system is denoted $G(z)$.]
This system represents a chemical plant producing a certain type of crystals. The output $y(t)$ is the discrepancy between the required size for these crystals and the actual size of the crystals produced by the plant. Consequently, we wish $y(t)$ to be as close as possible to zero. In order to verify how close/far $y(t)$ is from zero, we measure $y(t)$; this is done via a sampling mechanism yielding the discrete-time signal $y[n] = y(t = nT_s)$ for some given $T_s$.

¹ The power can also be computed as the auto-correlation function for $\tau = 0$, i.e. $R_y[0]$.

The crystal size (and thus $y(t)$) is influenced by the amount of reactants that are fed into the plant. This amount is here given by $Q + x(t)$, where $Q$ is a given constant and $x(t)$ a variable quantity that can be considered as the input of the system (see the above schema). As also shown in this schema, the continuous-time input $x(t)$ is constructed via a ZOH based on a discrete-time signal $x[n]$. Consequently, the considered system can also be seen as a discrete-time system with input $x[n]$ and output $y[n]$. This discrete-time system has a transfer function $G(z)$, which is supposed given. It is important to note that $y[n]$ is influenced not only by $x[n]$ but by several process disturbances too. We have thus the following situation:
[Diagram: $x[n] \rightarrow \boxed{G(z)} \rightarrow {+} \leftarrow v[n]$, with output $y[n]$.]
In order to have an idea of the importance of those disturbances, we have experimented on the plant with $x[n] = 0$ and measured the corresponding output $y[n] = v[n]$. The measurement $y[n] = v[n]$ is represented in Table 1 (left part). With a spectral analysis (see session 5), we have also observed that $v[n]$ can be modeled as a zero-mean WSS stochastic process whose power spectral density function $\Phi_v(\Omega)$ has the shape given in Table 1 (right part).
[Table 1: Left: measured output $y[n] = v[n]$ over 10000 samples; Right: power spectral density $\Phi_v(\Omega)$.]

[Closed-loop setup: $0 \rightarrow {+}\,(-) \rightarrow \boxed{C(z)} \rightarrow x[n] \rightarrow \boxed{G(z)} \rightarrow {+} \leftarrow v[n] \rightarrow y[n]$, with $y[n]$ fed back to the minus input of the comparator.]
Table 2: Top Left: closed-loop setup; Top Right: $|S(e^{j\Omega})|$; Bottom Left: $|C(e^{j\Omega})S(e^{j\Omega})|$; Bottom Right: $|C(e^{j\Omega})G(e^{j\Omega})S(e^{j\Omega})|$.
Important remark. Due to the fact that the disturbance $v$ is modeled by a (zero-mean) WSS stochastic process, the output of the plant and the control input are also WSS stochastic processes, for which the power content can be determined via the power spectral density function. In this stochastic framework, an attenuation of the disturbance will be characterized by an output $y$ which has a smaller power than $v$.

The closed-loop control setup leads to the desired result since the output $y[n]$ has a much smaller power than $v[n]$, as shown in Table 3 (left part), and this happens with a control input $x[n]$ whose power is judged very reasonable (see the right part of Table 3).
a. Determine the power spectral density functions $\Phi_y(\Omega)$ and $\Phi_x(\Omega)$ of the output $y[n]$ and of the control input $x[n]$ as a function of $\Phi_v(\Omega)$.
Table 3: Left: output $y[n]$ in closed loop; Right: control input $x[n]$ in closed loop.
b. Explain, based on Table 2 and on what has been found in item [a.], why the power $P_y$ of $y[n]$ and the power $P_x$ of $x[n]$ are smaller than the power $P_v$ of $v[n]$.
Suppose now that the measurement of $y(t)$ is subject to a quantization error (measurement noise). Suppose thus that the output of the sensor does not deliver precisely $y[n] = y(t = nT_s)$, but $y[n] + e[n]$ instead. Here, $e[n]$ can be considered as a white noise of variance $\sigma_e^2 = (0.005)^2$. This white noise $e$ is independent of the process disturbance $v$. The closed-loop setup becomes as in Table 4, and a realization of the measurement noise $e[n]$ is also represented in this table. Note that $e[n]$ is a relatively important measurement noise, since $e[n]$ has a maximal amplitude of the same order as the signal $y[n]$ in Table 3.

We have simulated this closed-loop setup and obtained the output $y[n]$ and the control input $x[n]$ given in the bottom part of Table 4. The values taken at each instant by these signals are different from the ones in Table 3 (the realization of $v[n]$ being different). However, as far as the global characteristics are concerned, the results in Tables 3 and 4 are similar. Consequently, the influence of the measurement noise seems negligible.
c. Determine the power spectral density functions $\Phi_y(\Omega)$ and $\Phi_x(\Omega)$ of the output $y[n]$ and of the control input $x[n]$ as a function of $\Phi_v(\Omega)$ and $\Phi_e(\Omega) = \sigma_e^2$.

d. Explain, based on what has been found in item [c.], why the powers of $y[n]$ and $x[n]$ are similar in Tables 3 and 4.
[Table 4 diagram: $0 \rightarrow {+}\,(-) \rightarrow \boxed{C(z)} \rightarrow x[n] \rightarrow \boxed{G(z)} \rightarrow {+} \leftarrow v[n] \rightarrow y[n]$ (actual output); the measured output $y[n] + e[n]$ is fed back.]
Table 4: Top Left: closed-loop setup with measurement noise; Top Right: 10000 samples of a realization of the disturbance $e[n]$; Bottom Left: output $y[n]$ with measurement noise; Bottom Right: control input $x[n]$ with measurement noise.
e. It can be very harmful for the actuators to have to vary at a very fast rate. Consequently, besides a control input $x[n]$ with small power, it is desired that this small $x[n]$ does not have a power distribution concentrated in the high frequencies. In this example, we will say that the high frequencies are the ones in the (normalized) frequency range $[0.5\pi,\ \pi]$. Show that $x[n]$ has indeed a small power contribution in this range.
Exercise 2 (radar range determination). Radars are devices which make it possible to detect an object and to determine at which distance from the radar this object is located. For this purpose, the radar emits a white noise x with variance σ_x². If an object is in the neighborhood, the signal x is bounced back and returns to the radar (with some attenuation K < 1). Of course, the bounced-back signal is not the only signal received by the radar: the radar also receives a lot of other signals, which we will model by a WSS stochastic signal e independent of x. Let us denote by y the signal received by the radar. This signal is thus equal to:

y[n] = K x[n − c] + e[n]
Solutions.
Exercise 1.
1.a. In order to determine Φ_y(ω) and Φ_x(ω) as a function of Φ_v(ω), we will use expression (3.26) of the lecture notes. For this purpose, we first need to determine the transfer functions relating y (resp. x) to v. In order to be able to use Z-transform theory, we suppose that v is a deterministic signal for which the Z-transform V(z) exists (see page 36 of the lecture notes)². Under this assumption, we can relate the Z-transforms Y(z), X(z) and V(z) of the signals y[n], x[n] and v[n] as follows:
Y(z) = G(z)·(−C(z)Y(z)) + V(z)    [the term −C(z)Y(z) being X(z)]

⟹ Y(z) = 1/(1 + C(z)G(z)) · V(z) = S(z)V(z)

Equivalently, we deduce:

X(z) = −C(z)Y(z) = −C(z)S(z)V(z)
Now that the transfer functions have been determined, we can go back to our stochastic framework and use expression (3.26) of the lecture notes to deduce that:

Φ_y(ω) = |S(z = e^{jω})|² Φ_v(ω)
Φ_x(ω) = |C(z = e^{jω})S(z = e^{jω})|² Φ_v(ω)
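The relation "output PSD = |transfer function|² × input PSD" invoked here can be checked numerically. The filter below is an arbitrary stable stand-in for S(z), not the actual sensitivity of the exercise; the input is white noise with Φ_v(ω) = σ_v².

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
sigma_v = 1.0
v = sigma_v * rng.standard_normal(200_000)   # white noise: Phi_v(w) = sigma_v^2

# Arbitrary stable filter standing in for S(z)
b, a = [1.0, -0.9], [1.0, -0.5]
y = signal.lfilter(b, a, v)

# Welch estimate of Phi_y on [0, pi]; with fs = 2*pi the one-sided density
# equals Phi_y(w)/pi under the convention P = (1/2pi) * integral of Phi
w, D = signal.welch(y, fs=2 * np.pi, nperseg=4096)
phi_y_est = np.pi * D

# Theory: Phi_y(w) = |S(e^{jw})|^2 * Phi_v(w), evaluated on the same grid
_, S = signal.freqz(b, a, worN=w)
phi_y_theory = np.abs(S) ** 2 * sigma_v ** 2

# compare away from the endpoint bins (their one-sided scaling differs)
err = np.median(np.abs(phi_y_est[1:-1] / phi_y_theory[1:-1] - 1.0))
print("median relative error:", err)
```

The median relative error stays within a few percent, limited only by the variance of the Welch estimate.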
1.b. As can be seen in Table 1, the process v[n] concentrates its power in the frequency range [0, 0.01]; Φ_v(ω) is indeed negligible outside this frequency range. As a consequence, we have that:

P_v ≈ (1/π) ∫₀^{0.01} Φ_v(ω) dω
Now, recalling that Φ_y(ω) = |S(z = e^{jω})|² Φ_v(ω), we can thus write that:

P_y = (1/π) ∫₀^{π} |S(e^{jω})|² Φ_v(ω) dω

Since Φ_v(ω) is negligible outside the frequency range [0, 0.01], the last expression can be approximated almost perfectly by:

P_y ≈ (1/π) ∫₀^{0.01} |S(e^{jω})|² Φ_v(ω) dω
² This is just a trick so that everything is mathematically sound. This trick is, however, not important in practice: we just want to determine the transfer function between v and y.
Note that 1 ≤ |S(e^{jω})| < 2 for all ω ∈ [0.05, π], but Φ_v(ω) is so negligible at those frequencies that this amplification does not change the previous conclusion.
Now, we observe that |S(e^{jω})| ≪ 1 for all ω ∈ [0, 0.01]. Consequently, we conclude that:

P_y ≈ (1/π) ∫₀^{0.01} |S(e^{jω})|² Φ_v(ω) dω ≪ (1/π) ∫₀^{0.01} Φ_v(ω) dω ≈ P_v

This is the desired behaviour, since the output should be as small as possible.
The same reasoning can be done for x[n]; indeed, |C(z = e^{jω})S(z = e^{jω})| < 1 for all ω.
1.c. To deduce the transfer functions, like in item [a.], we assume that v and e are deterministic and have a Z-transform. Under this assumption, we can relate the Z-transforms Y(z), X(z), V(z) and E(z) of the signals y[n], x[n], v[n] and e[n] as follows:

Y(z) = G(z)·(−C(z)Y(z) − C(z)E(z)) + V(z)

⟹ Y(z) = 1/(1 + C(z)G(z)) · V(z) − C(z)G(z)/(1 + C(z)G(z)) · E(z) = Y_v(z) + Y_e(z)

where Y_v(z) = S(z)V(z) and Y_e(z) = −C(z)G(z)S(z)E(z).
Equivalently, we deduce:

X(z) = −C(z)·(G(z)X(z) + V(z) + E(z))

⟹ X(z) = −C(z)/(1 + C(z)G(z)) · V(z) − C(z)/(1 + C(z)G(z)) · E(z) = X_v(z) + X_e(z)

where X_v(z) = −C(z)S(z)V(z) and X_e(z) = −C(z)S(z)E(z).
Coming back to the stochastic framework, we see that the output y is made of two additive contributions: y_v (obtained by filtering v by S(z)) and y_e (obtained by filtering e by −C(z)G(z)S(z)). Similarly, x is made of two additive contributions x_v and x_e, due to v and e respectively.

Since y_v is independent of y_e AND x_v is independent of x_e (v and e being independent), we can conclude that (see the property given in item [f.] of exercise 4 in session 6):

Φ_y(ω) = Φ_yv(ω) + Φ_ye(ω)
Φ_x(ω) = Φ_xv(ω) + Φ_xe(ω)

and thus, using expression (3.26) of the lecture notes:

Φ_y(ω) = |S(e^{jω})|² Φ_v(ω) + |C(e^{jω})G(e^{jω})S(e^{jω})|² σ_e²
Φ_x(ω) = |C(e^{jω})S(e^{jω})|² Φ_v(ω) + |C(e^{jω})S(e^{jω})|² σ_e²
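The additivity of the two PSD contributions, which relies on the independence of v and e, can be verified numerically: filter two independent white noises through two arbitrary stable filters (hypothetical stand-ins for S(z) and −C(z)G(z)S(z)) and compare the PSD of the sum with the sum of the PSDs.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
N = 400_000
v = rng.standard_normal(N)
e = 0.5 * rng.standard_normal(N)        # independent of v

# Arbitrary stable filters standing in for S(z) and -C(z)G(z)S(z)
y_v = signal.lfilter([1.0, -0.9], [1.0, -0.5], v)
y_e = signal.lfilter([0.3], [1.0, -0.7], e)
y = y_v + y_e                           # two additive, independent contributions

f, D_y = signal.welch(y, nperseg=4096)
_, D_yv = signal.welch(y_v, nperseg=4096)
_, D_ye = signal.welch(y_e, nperseg=4096)

err = np.median(np.abs(D_y / (D_yv + D_ye) - 1.0))
print("median deviation from additivity:", err)
```

The cross-spectrum between the two contributions averages out over the Welch segments, so the deviation shrinks as the record length grows.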
1.d. Remember that P_yv is the power of the output when there is no measurement noise (i.e. as in Table 3). Using now the hint, we conclude that the power P_y of y[n] is here given by:

P_y = P_yv + 0.038 σ_e²

The power P_ye of y_e is thus roughly 26 times (1/0.038) smaller than the power σ_e² of e itself.
Remark. This is once again the desired property, since we want y to be as small as possible and thus we want the power of y to be as small as possible.

Using a similar reasoning, we can deduce that the power of x[n] is given by:

P_x = P_xv + 0.001 σ_e²

Here, the power P_xe of x_e is 1000 times smaller than the power σ_e² of e itself. Consequently, we have that P_x ≈ P_xv.
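The constants 0.038 and 0.001 are of the form (1/2π)∫|H(e^{jω})|² dω for H = CGS and H = CS respectively. For any stable H this constant also equals the impulse-response energy Σ_k h[k]² (Parseval), which gives two independent ways to compute it. The filter below is a hypothetical stand-in, so it reproduces the mechanism rather than the value 0.001.

```python
import numpy as np
from scipy import signal

# Hypothetical stand-in for C(z)S(z); the true filter gives 0.001 in the text
b, a = [0.05], [1.0, -0.8]

# Route 1: (1/2pi) int_{-pi}^{pi} |H|^2 dw = (1/pi) int_0^{pi} |H|^2 dw
w = np.linspace(0.0, np.pi, 200_001)
_, H = signal.freqz(b, a, worN=w)
c_freq = np.sum(np.abs(H) ** 2) * (w[1] - w[0]) / np.pi

# Route 2 (Parseval): the same constant is the impulse-response energy
_, h = signal.dimpulse((b, a, 1), n=500)
c_time = float(np.sum(h[0] ** 2))

print(c_freq, c_time)   # both close to 0.05**2 / (1 - 0.8**2) = 0.006944...
```

Multiplying this constant by the input variance (here σ_e²) gives the power contribution of the corresponding filtered noise, exactly as in P_y = P_yv + 0.038 σ_e² above.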
1.e. As deduced in item [c.]:

Φ_x(ω) = |C(e^{jω})S(e^{jω})|² Φ_v(ω) + |C(e^{jω})S(e^{jω})|² σ_e²

Let us analyze the contribution in [0.5, π] of both terms.

The first term |C(e^{jω})S(e^{jω})|² Φ_v(ω) has an absolutely negligible contribution in this range, since both |C(e^{jω})S(e^{jω})| and Φ_v(ω) are negligible in this frequency range.

The second term |C(e^{jω})S(e^{jω})|² σ_e² is generated by a white noise, which thus has equal power content at each frequency. However, this white noise is here filtered by C(z)S(z), a transfer function that attenuates the components at every frequency, but especially those at higher frequencies (see Table 2). Consequently, the contribution of this second term in the range [0.5, π] will be very small.
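The claim can be illustrated by filtering a white noise through a low-pass filter and integrating its estimated PSD above ω = 0.5. The Butterworth filter is a hypothetical stand-in for C(z)S(z), chosen only to have the qualitative property stated above (much stronger attenuation at high frequencies).

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
sigma_e = 0.005

# Hypothetical low-pass stand-in for C(z)S(z): cutoff at 0.05*pi rad/sample
b, a = signal.butter(2, 0.05)
x_e = signal.lfilter(b, a, sigma_e * rng.standard_normal(500_000))

# Estimated PSD on [0, pi] (w in rad/sample), then band powers by summation
w, D = signal.welch(x_e, fs=2 * np.pi, nperseg=4096)
dw = w[1] - w[0]
P_total = np.sum(D) * dw
P_high = np.sum(D[w >= 0.5]) * dw

print("fraction of power above w = 0.5:", P_high / P_total)
```

With this stand-in, only about one percent of the power of x_e sits in the high-frequency band, which is the behaviour the exercise asks to verify for the true loop.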
Exercise 2.
2.a. We have that:

R_yx[τ] = E y[n] x[n − τ] = E (K x[n − c] + e[n]) x[n − τ] = K R_x[τ − c] = K σ_x² δ[τ − c]

where we used that e is independent of x and that x is a white noise of variance σ_x². The cross-correlation R_yx[τ] is thus non-zero only at the lag τ = c; c is indeed positive!
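The practical point of this cross-correlation is that the location of its peak reveals the round-trip delay c, and hence the range. A small sketch with made-up values (c = 37, K = 0.4, σ_x = 1) recovers the delay from noisy data:

```python
import numpy as np

rng = np.random.default_rng(4)
N, c_true, K = 100_000, 37, 0.4          # made-up delay and attenuation
x = rng.standard_normal(N)               # emitted white noise, sigma_x = 1
e = 2.0 * rng.standard_normal(N)         # other received signals, independent of x

y = np.zeros(N)
y[c_true:] = K * x[:N - c_true]          # attenuated, delayed echo
y += e

# Sample estimate of Ryx[tau] = E{ y[n] x[n - tau] } for tau = 0..100
taus = np.arange(101)
Ryx = np.array([np.mean(y[t:] * x[:N - t]) for t in taus])

c_hat = int(taus[np.argmax(np.abs(Ryx))])
print(c_hat)   # -> 37: the peak of Ryx sits at the true delay
```

Even though the echo is buried well below the interference e, averaging over 100 000 samples makes the peak K σ_x² at τ = c stand far above the estimation noise.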