
Signals and Systems

MSc Course

1999

H. G. ter Morsche
G. Meinsma

University of Eindhoven, The Netherlands


University of Twente, The Netherlands

This text is a translation of the lecture notes Inleiding Signaaltheorie (2Y460) by H. G. ter Morsche from the University of Eindhoven. The translated text is used for an MSc course at the University of Twente.
With the translation some changes have been made here and there, but for the most part the translation follows ter Morsche's original text. Five appendices have been added, and some material was appended, notably to Chapters 4, 7 and 8. Some notation was changed to fit the notation of follow-up courses in the MSc curriculum.
The willingness of H. G. ter Morsche to make his text available is gratefully acknowledged.

Contents

1 Signals, energy and power
  1.1 Continuous-time signals and discrete-time signals
  1.2 Complex-valued and real-valued signals
  1.3 Periodic signals
  1.4 Non-periodic signals
  1.5 Energy and power
  1.6 Sequences and series of complex numbers
  1.7 Problems

2 Periodic signals and their line spectra
  2.1 Complex and real Fourier series
  2.2 The fundamental theorem of Fourier series
  2.3 Real Fourier series
  2.4 Convolution and Parseval's theorem
  2.5 The Gibbs phenomenon
  2.6 Problems

3 Non-periodic signals and their continuous spectra
  3.1 The Fourier integral theorem
  3.2 Fourier transform properties and examples
  3.3 Examples
  3.4 Convolution and correlation
  3.5 Shannon's sampling theorem
  3.6 Amplitude modulation
  3.7 Problems

4 Generalized functions and Fourier transforms
  4.1 The delta function
    4.1.1 Properties of the delta function
  4.2 Generalized derivatives
  4.3 Generalized Fourier transforms
  4.4 Introduction to generalized functions
    4.4.1 Generalized derivatives
    4.4.2 Generalized Fourier transforms
  4.5 Problems

5 Linear time-invariant systems
  5.1 LTI systems in time domain
  5.2 LTI systems in frequency domain
  5.3 Ideal filters
  5.4 Problems

6 The Laplace transform
  6.1 Laplace transforms of piecewise smooth signals
  6.2 Signals with delta components
  6.3 Properties of the Laplace transform
  6.4 Problems

7 Systems described by ordinary linear differential equations
  7.1 State variables and state equations
  7.2 Solution of input-state-output equations
  7.3 Solutions of state and output for positive time
    7.3.1 The output
    7.3.2 Solution of ODEs for positive time
  7.4 Initially at rest signals and systems
  7.5 BIBO-stable systems and their steady-state behavior
    7.5.1 Steady-state behavior
  7.6 Butterworth filters
  7.7 Problems

8 The z-transform and discrete-time systems
  8.1 Definition and convergence of the z-transform
  8.2 Properties of the z-transform
  8.3 The inverse z-transform of rational functions
  8.4 Convolution
  8.5 The one-sided z-transform
  8.6 Discrete-time systems
    8.6.1 Initially at rest systems described by difference equations
    8.6.2 BIBO-stable discrete-time systems
  8.7 Problems

A Partial Fraction Expansion
  A.1 When P(s)/Q(s) is strictly proper
  A.2 When P(s)/Q(s) is proper
  A.3 When P(s)/Q(s) is non-proper; long division

B Solution of ODEs and difference equations
  B.1 Solution of ODEs
    B.1.1 Particular solutions
  B.2 Solution of difference equations

C Selected tables

D Exam examples

E Solutions

1
Signals, energy and power


Figure 1.1: A continuous-time signal and a discrete-time signal.

1.1 Continuous-time signals and discrete-time signals


Quantities that change with time, such as the voltage across a resistor or the outdoor temperature, may be regarded as functions f(t) defined on R or on a subset of R. Functions that depend on time are called signals. If a signal f(t) is defined for all t ∈ R or for all t in some interval (a, b) ⊆ R, then we say that f(t) is a continuous-time signal. If f(t) is defined only for a sequence of time instances

· · · < t_{−2} < t_{−1} < t_0 < t_1 < t_2 < · · · ,   (t_n ∈ R),

then f(t) is said to be a discrete-time signal. A typical example of a discrete-time signal is a signal obtained through sampling of a continuous-time signal. The sampled signal of a continuous-time signal f(t) is the discrete-time signal f(np), n ∈ Z, defined at integer multiples of the sampling time p > 0. Figure 1.1(a) shows the plot of a damped sinusoid (continuous-time) and Figure 1.1(b) next to it shows the corresponding sampled signal (discrete-time) for a certain sampling time. In plots, discrete-time signals are represented by a series of stems on the real axis, such as in Figure 1.1(b). Often the values of the time instances t_n are irrelevant, and in such cases it is customary to denote the discrete-time signal by f[n], where f[n] := f(t_n), so with square brackets around the time index n.
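The sampling operation is easy to mimic in Matlab, the language used in the problem sections of these notes. The small sketch below is not part of the original text; the damped sinusoid and the sampling time p are arbitrary choices.

t = 0:0.01:4;                    % fine grid, mimicking continuous time
f = exp(-t).*cos(10*t);          % a damped sinusoid, cf. Figure 1.1(a)
p = 0.2;                         % sampling time (arbitrary choice)
n = 0:floor(4/p);
fn = exp(-n*p).*cos(10*n*p);     % the sampled signal f(np), cf. Figure 1.1(b)
subplot(1,2,1); plot(t,f);
subplot(1,2,2); stem(n*p,fn);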

Figure 1.2: The argument of a point z = x + j y in the complex plane.

1.2 Complex-valued and real-valued signals


Signals in the real world always take real values. Such signals are therefore called real signals or real-valued signals. The treatment of signals and their transforms is however greatly simplified if we allow the use of complex-valued signals. A complex-valued signal f(t) is a signal that can be written as

f(t) = f_1(t) + j f_2(t),

where j is the imaginary unit (j² = −1), f_1(t) is the real part of f(t), (f_1(t) = Re f(t) ∈ R), and f_2(t) is the imaginary part of f(t), (f_2(t) = Im f(t) ∈ R).
Any complex number z = x + jy can be expressed in polar coordinates as z = r cos θ + jr sin θ, in which r = |z| = √(x² + y²) is the absolute value or modulus of z and θ = arg z is the argument of z. The argument is the angle measured in radians that z makes with the positive real axis, see Figure 1.2, and it is unique up to a multiple of 2π. Using Euler's formula

e^{jθ} = cos θ + j sin θ,

it follows that z can be succinctly expressed in polar form as z = r e^{jθ}. Likewise any complex signal f(t) can be expressed as

f(t) = A(t) e^{jθ(t)},

where A(t) = |f(t)| is the modulus and θ(t) = arg f(t) the argument of f(t). The complex conjugate of z ∈ C will be denoted by z̄ instead of the more common z*. So if z = x + jy then z̄ = x − jy. Likewise the complex conjugate of a signal f(t) is denoted as f̄(t). The following rules are readily verified:

|f̄(t)| = |f(t)|,   |f(t)|² = f(t) f̄(t),   and   arg f̄(t) = − arg f(t).

Moreover, f̄(t) = f(t) is a different way of saying that f(t) is real. Also, |e^{f(t)}| = e^{Re f(t)}, which is a consequence of the fact that |e^{jθ}| = 1 for every θ ∈ R.
1.2.1. Example.
a) An important example of a complex-valued signal is the harmonic signal f(t) = e^{jωt} (see Section 1.3), where ω is a real-valued constant. The modulus |f(t)| is equal to 1 for all t, since

|e^{jωt}| = √(cos²(ωt) + sin²(ωt)) = 1,   t ∈ R.

b) Let a = u + jv ∈ C, a ≠ 0, and consider f(t) = e^{at}. Then

|f(t)| = |e^{(u+jv)t}| = |e^{ut} e^{jvt}| = |e^{ut}| |e^{jvt}| = e^{ut} = e^{(Re a)t},
arg f(t) = vt = (Im a)t,
f̄(t) = e^{ut} e^{−jvt} = e^{āt}.


To effectively work with complex signals we shall need to extend the analysis techniques of
real functions to complex functions. This is relatively straightforward. To begin with, consider
the notion of limit.
1.2.2. Definition. Let f(t) = f_1(t) + j f_2(t) be a complex function and let L = L_1 + j L_2 ∈ C. Then the complex-valued limit

lim_{t→a} f(t) = L

is defined to mean that

lim_{t→a} f_1(t) = L_1   and   lim_{t→a} f_2(t) = L_2.

As a consequence the rules of calculus for limits that we know for real-valued functions may
also be applied to complex-valued functions.
1.2.3. Example. Let f(t) = e^{jt}/(j + t). As lim_{t→0}(j + t) = j ≠ 0, there holds that

lim_{t→0} f(t) = ( lim_{t→0} e^{jt} ) / ( lim_{t→0} (j + t) ) = 1/j = −j.


For real-valued functions f(t) it is immediate that lim_{t→a} f(t) = L if and only if lim_{t→a} |f(t) − L| = 0. For complex-valued functions this holds as well, with now |f(t) − L| denoting the modulus of f(t) − L. In full:

1.2.4. Theorem. Let f(t) = f_1(t) + j f_2(t) be a complex-valued function and L = L_1 + j L_2 ∈ C. Then

lim_{t→a} f(t) = L   if and only if   lim_{t→a} |f(t) − L| = 0.


Proof. Suppose first that lim_{t→a} |f(t) − L| = 0. We make use of the following inequalities that hold for any complex z = x + jy:

|Re z| ≤ |z|,   |Im z| ≤ |z|.

These inequalities follow from the fact that

|Re z| = |x| = √(x²) ≤ √(x² + y²) = |z|,
|Im z| = |y| = √(y²) ≤ √(x² + y²) = |z|.

Therefore |f_1(t) − L_1| = |Re(f(t) − L)| ≤ |f(t) − L|, and since |f(t) − L| → 0 as t → a, it follows that |f_1(t) − L_1| → 0 as t → a and hence that lim_{t→a} f_1(t) = L_1. In the same way it follows that lim_{t→a} f_2(t) = L_2.
Now suppose that f_1(t) → L_1 and f_2(t) → L_2 as t → a. Then as t → a we get that

|f(t) − L| = √( (f_1(t) − L_1)² + (f_2(t) − L_2)² ) → √(0² + 0²) = 0.   (1.1)

1.2.5. Example. Let a ∈ C with Re a > 0. Then lim_{t→∞} e^{−at} = 0. This is a consequence of the fact that |e^{−at} − 0| = e^{Re(−at)} = e^{−Re(a)t} → 0 as t → ∞.

Continuity and differentiability are notions that are defined by means of limits. Concretely, a function is said to be continuous at t = a if lim_{t→a} f(t) = f(a), and f is said to be differentiable at t = a if lim_{t→a} (f(t) − f(a))/(t − a) exists. It may be verified that continuity and differentiability of a complex function f(t) = f_1(t) + j f_2(t) is equivalent, respectively, to continuity and differentiability of its real part f_1(t) and imaginary part f_2(t). Moreover, the derivative f′(t) at t = a then equals f′(a) = f_1′(a) + j f_2′(a). It is important to note that for complex functions f(t) the time t is still real-valued, and in particular f′(t) denotes the derivative with respect to a real-valued parameter (namely t).
The rules of calculus for differentiation of complex functions are the same as those for real functions. Complex numbers are in this respect to be regarded as constants. It may be shown for example that for any a ∈ C the derivative of f(t) = e^{at} is indeed equal to f′(t) = a e^{at}.
Also integration as we know it for real-valued functions is easily extended to complex-valued functions. The integral of a complex function f(t) = f_1(t) + j f_2(t) on an interval (a, b) is defined as

∫_a^b f(t) dt = ∫_a^b f_1(t) dt + j ∫_a^b f_2(t) dt.   (1.2)

In effect this says that for complex-valued functions the integral exists if and only if both its real and imaginary part can be integrated. In the above, a = −∞ and b = ∞ are allowed. From Equation (1.2) it follows that the complex conjugate of ∫_a^b f(t) dt equals ∫_a^b f̄(t) dt.


Like in the real case it is often possible to obtain an explicit function description of the primitive of f(t), also called the antiderivative of f(t). Also the rules of partial integration and substitution remain valid for complex-valued functions. This is illustrated in the following three examples.

1.2.6. Example. Let n be an integer and let T > 0 and ω_0 = 2π/T. Then

(1/T) ∫_{−T/2}^{T/2} e^{jnω_0 t} dt = 1   if n = 0,
                                    0   if n ≠ 0.   (1.3)

This may be seen as follows. For n = 0 we have that e^{jnω_0 t} = 1, which immediately establishes the case n = 0. If n ≠ 0 then

(1/T) ∫_{−T/2}^{T/2} e^{jnω_0 t} dt = [ e^{jnω_0 t}/(jnω_0 T) ]_{−T/2}^{T/2} = (1/(jnω_0 T)) (e^{jnπ} − e^{−jnπ}) = (1/(jnω_0 T)) ((−1)^n − (−1)^n) = 0.
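This orthogonality relation is easy to verify numerically in Matlab. The sketch below is not part of the original notes; it splits the integrand into real and imaginary parts as in (1.2), and uses quad with function handles (available in recent Matlab versions). T and the values of n are arbitrary choices.

T = 2; w0 = 2*pi/T;
for n = 0:3
  I = quad(@(t) real(exp(j*n*w0*t)), -T/2, T/2) + ...
      j*quad(@(t) imag(exp(j*n*w0*t)), -T/2, T/2);
  disp([n I/T])                  % prints 1 for n = 0 and (nearly) 0 otherwise
end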


1.2.7. Example. Let a ∈ C and suppose that Re a > 0. Then

∫_0^∞ e^{−at} dt = 1/a.   (1.4)

This is because

∫_0^∞ e^{−at} dt = lim_{M→∞} ∫_0^M e^{−at} dt = lim_{M→∞} [ −e^{−at}/a ]_0^M = lim_{M→∞} ( −e^{−aM}/a + 1/a ) = 1/a,

where lim_{M→∞} e^{−aM} = 0 by Example 1.2.5.


1.2.8. Example. Suppose T > 0, and let ω_0 = 2π/T and n ∈ Z, n ≠ 0. We shall establish that

∫_0^T t e^{jnω_0 t} dt = T²/(2πjn).   (1.5)

Partial integration yields

∫_0^T t e^{jnω_0 t} dt = (1/(jnω_0)) ∫_0^T t de^{jnω_0 t} = [ t e^{jnω_0 t}/(jnω_0) ]_0^T − (1/(jnω_0)) ∫_0^T e^{jnω_0 t} dt
= T e^{jnω_0 T}/(jnω_0) − (1/(jnω_0)²) (e^{jnω_0 T} − 1).

Since ω_0 T = 2π and e^{2πjn} = 1 we find (1.5).

The following inequalities are often used when only existence of integrals or bounds on integrals are needed, and not so much their precise value:

| ∫_a^b f(t) dt | ≤ ∫_a^b |f(t)| dt.

This is readily verified. A straightforward application is this: if |f(t)| ≤ M on the interval of integration [a, b], then

| ∫_a^b f(t) dt | ≤ ∫_a^b |f(t)| dt ≤ ∫_a^b M dt = M(b − a).

In particular it then follows that the integral exists.


In the rest of the notes all signals are assumed complex-valued unless explicitly stated otherwise. We end this section with the definition of a class of signals that we will keep coming back to.

1.2.9. Definition. A signal f(t) is piecewise smooth on a finite interval [a, b] if

1. f(t) is differentiable everywhere on (a, b) with the possible exception of a finite number of points c_i (i = 1, 2, . . . , m), the so-called points of discontinuity,
2. the derivative f′(t) is continuous on (a, b) for every t ≠ c_i, (i = 1, . . . , m),
3. at the points of discontinuity c_i the following limits exist:

   f(c_i+) := lim_{h↓0} f(c_i + h),   f′(c_i+) := lim_{h↓0} f′(c_i + h),
   f(c_i−) := lim_{h↓0} f(c_i − h),   f′(c_i−) := lim_{h↓0} f′(c_i − h),

4. f(a+), f(b−), f′(a+) and f′(b−) exist.

A signal f(t) defined for all t ∈ (−∞, ∞) is said to be piecewise smooth if it is piecewise smooth on every finite interval [a, b].

The piecewise smooth signals encompass practically all continuous-time signals one is likely to come across in practice.

1.3 Periodic signals

An important class of real-valued signals are the sinusoids or real harmonic signals.

1.3.1. Definition (Sinusoids). A real-valued signal f(t) that can be written as

f(t) = A cos(ωt + φ),   t ∈ R,

is called a sinusoid or real harmonic signal.

Here A (A > 0) is the amplitude, ω the angular frequency and φ the initial phase of the signal f(t). If the time t expresses seconds, then the angular frequency ω is in units of radians per second (rad/s) and ω/(2π) is called the frequency and is in units of hertz (Hz). One hertz is one cycle per second. Similarly for the complex case we define:

1.3.2. Definition (Harmonic signals). A signal f(t) that can be written in the form

f(t) = c e^{jωt}

with c ∈ C and ω ∈ R, is called a (complex) harmonic signal with amplitude |c|, angular frequency ω and initial phase φ = arg(c).

The connection between a real and a complex harmonic signal lies in Euler's formula e^{jωt} = cos(ωt) + j sin(ωt), or, equivalently,

cos(ωt) = Re e^{jωt} = (e^{jωt} + e^{−jωt})/2,
sin(ωt) = Im e^{jωt} = (e^{jωt} − e^{−jωt})/(2j).   (1.6)

Any linear combination of two real or complex harmonic signals with the same frequency is again a harmonic signal. For example, for any a, b ∈ R,

a cos(ωt) + b sin(ωt) = Re(a e^{jωt}) − Re(jb e^{jωt}) = Re( (a − jb) e^{jωt} )
= Re( |a − jb| e^{j arg(a−jb)} e^{jωt} )
= Re( |a − jb| e^{j(ωt+φ)} )   (for φ := arg(a − jb))
= Re( A e^{j(ωt+φ)} )   (for A := |a − jb| = √(a² + b²))
= A cos(ωt + φ).
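This identity is easily confirmed numerically; the following sketch is not part of the original text, and the values of a, b and ω are arbitrary.

a = 2; b = -1.5; w = 3;          % arbitrary test values
t = linspace(0, 5, 500);
A = abs(a - j*b);                % A = sqrt(a^2 + b^2)
phi = angle(a - j*b);            % phi = arg(a - jb)
lhs = a*cos(w*t) + b*sin(w*t);
rhs = A*cos(w*t + phi);
disp(max(abs(lhs - rhs)))        % of the order of machine precision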
Note that complex harmonic signals are allowed to have a negative frequency.
Sinusoids f(t) = A cos(ωt + φ) and complex harmonic signals c e^{jωt} are important examples of periodic signals with period T = 2π/ω. Periodic signals f(t) with period T, i.e., signals such that f(t + T) = f(t) for all t ∈ R, will be referred to as T-periodic signals. In Chapter 2 we shall see that every piecewise smooth T-periodic signal can be written as a sum (possibly an infinite sum) of harmonic signals whose angular frequencies are integer multiples ω = n · 2π/T, n ∈ Z, of ω_0 = 2π/T.
The following simple theorem will be of use later.
1.3.3. Theorem (Interval-shift). Suppose that f(t) is integrable on [−T/2, T/2] and that f(t) is periodic with period T > 0. Then for every a ∈ R there holds that

∫_a^{a+T} f(t) dt = ∫_{−T/2}^{T/2} f(t) dt.

Proof. We write

∫_a^{a+T} f(t) dt = ∫_a^{−T/2} f(t) dt + ∫_{−T/2}^{T/2} f(t) dt + ∫_{T/2}^{a+T} f(t) dt.

The result now follows because the first and third integral on the right-hand side cancel each other, which follows by the substitution t = τ + T:

∫_{T/2}^{a+T} f(t) dt = ∫_{−T/2}^{a} f(τ + T) dτ = ∫_{−T/2}^{a} f(τ) dτ = −∫_a^{−T/2} f(τ) dτ.   (1.7)

1.4 Non-periodic signals


Not every sum of sinusoids is again periodic. For example f(t) = cos t + cos(πt) is not periodic. Non-periodic signals are also called aperiodic. The following definition introduces three types of aperiodic signals that play a prominent role in signal theory and control theory.

1.4.1. Definition.
a) For a given a > 0 the rectangular pulse rect_a(t) is defined as

rect_a(t) = 1     if |t| < a/2,
            0     if |t| > a/2,
            1/2   if |t| = a/2.

b) For a given a > 0 the triangular pulse trian_a(t) is defined as

trian_a(t) = 1 − |t|/a   if |t| < a,
             0           if |t| ≥ a.

c) The unit step 1(t) is defined as

1(t) = 1     if t > 0,
       0     if t < 0,
       1/2   if t = 0.

Remark: Note that at the jump discontinuities the function value is here taken to be the mid-value f(t) = (f(t−) + f(t+))/2. This choice is somewhat arbitrary but circumvents certain technicalities when Fourier series and Fourier integrals are considered.


1.5 Energy and power


It is customary in signal theory to associate with a signal an energy content.

1.5.1. Definition (Energy content). Let f(t) be a signal. The energy content E_f of the signal f(t) is defined as

E_f = ∫_{−∞}^{∞} |f(t)|² dt.

If E_f < ∞ (finite energy content) then the signal is said to be an energy signal. The rectangular and triangular pulses are examples of energy signals. Sinusoids and harmonic signals are not. For example, the harmonic signal f(t) = c e^{jω_0 t} satisfies |f(t)| = |c| and, hence, E_f = ∞.
For a signal f(t) to have a finite energy content it is necessary that the tail integrals ∫_{|τ|≥t} |f(τ)|² dτ tend to zero as t → ∞. Consequently, signals like sinusoids, periodic signals, the unit step and many others are not energy signals. In such cases it is customary to look at the average energy per unit time, i.e., to look at its (averaged) power.

1.5.2. Definition (Power). Let f(t) be a signal. The power P_f of f(t) is defined as

P_f = lim_{M→∞} (1/M) ∫_{−M/2}^{M/2} |f(t)|² dt.

Signals that have finite power are called power signals.

The power of a bounded T-periodic signal f(t) is finite, and you may wish to verify that its power equals the average power over one period,

P_f = (1/T) ∫_{−T/2}^{T/2} |f(t)|² dt.

1.5.3. Example. The power of the sinusoid f(t) = A cos(ω_0 t + φ), with period T = 2π/ω_0, is

P_f = (ω_0/2π) ∫_{−π/ω_0}^{π/ω_0} A² cos²(ω_0 t + φ) dt = {x = ω_0 t} = (A²/2π) ∫_{−π}^{π} cos²(x + φ) dx = A²/2.
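The value A²/2 is easily checked numerically; the sketch below is not part of the original notes, and the amplitude, frequency and phase are arbitrary choices.

A = 2; w0 = 3; phi = pi/5; M = 1000;   % arbitrary test values
t = linspace(-M/2, M/2, 2e6);
f = A*cos(w0*t + phi);
Pf = trapz(t, abs(f).^2)/M;            % trapezoidal approximation of the power
disp([Pf A^2/2])                       % both are approximately 2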


1.6 Sequences and series of complex numbers


Sequences and series of complex numbers play an important role in signal theory. Hence it is necessary to extend the notion of convergence of real-valued sequences and series to complex ones. Based on Definition 1.2.2 about limits, it is not surprising that we define convergence of a complex sequence a_n = u_n + jv_n, (n ∈ N), to a = u + jv by

lim_{n→∞} a_n = a   if and only if   lim_{n→∞} u_n = u and lim_{n→∞} v_n = v.

For complex-valued sequences,

lim_{n→∞} a_n = a   if and only if   lim_{n→∞} |a_n − a| = 0.   (1.8)

The rules of calculus for limits of complex-valued sequences are the same as for real-valued sequences.

1.6.1. Example. Suppose z ∈ C and |z| < 1. Then for every p ∈ R we have that

lim_{n→∞} n^p z^n = 0.

The proof is straightforward, given that we know this result already for the real-valued case. Indeed, |n^p z^n − 0| = n^p |z|^n → 0 as n → ∞, so we may conclude from the property in (1.8) that lim_{n→∞} n^p z^n = 0.

Consider next the series

Σ_{k=0}^{∞} a_k.

A series like this is said to converge if the sequence of partial sums s_n = Σ_{k=0}^{n} a_k converges. It can be shown that for complex a_k = u_k + jv_k the series Σ_{k=0}^{∞} a_k converges with limit u + jv if and only if the two real series Σ_{k=0}^{∞} u_k and Σ_{k=0}^{∞} v_k converge with limits u and v respectively. So

Σ_{k=0}^{∞} a_k = Σ_{k=0}^{∞} u_k + j Σ_{k=0}^{∞} v_k.

In many respects series with complex terms obey the same rules as series with real-valued terms. In particular, a necessary condition for a series Σ_{k=0}^{∞} a_k to have a limit is that lim_{k→∞} a_k = 0, i.e., that lim_{k→∞} |a_k| = 0. An important property is the following:

If Σ_{k=0}^{∞} |a_k| < ∞ then Σ_{k=0}^{∞} a_k converges.   (1.9)

1.6.2. Definition (Absolute convergence and absolutely summable sequences). If Σ_{k=0}^{∞} |a_k| is finite, then the sequence a_k, k = 0, 1, 2, . . . is said to be absolutely summable and the corresponding series Σ_{k=0}^{∞} a_k is said to be absolutely convergent.

The property in (1.9) thus states that absolutely convergent series converge. The advantage is that Σ_{k=0}^{∞} |a_k| is a real-valued series with nonnegative terms only, for which various convergence criteria are known. An important such criterion is the comparison test, which states that Σ_{k=0}^{∞} a_k is absolutely convergent if we can find a dominating sequence b_k ≥ |a_k| such that Σ_{k=0}^{∞} b_k converges absolutely.


1.6.3. Example (The geometric series). A geometric series is a series of the form

Σ_{k=0}^{∞} z^k,

in which z is some complex number, called the ratio of successive terms. A necessary condition for this series to converge is that z^k → 0 as k → ∞. This requires that |z| < 1. The condition |z| < 1 is also sufficient for convergence, which can be seen as follows. Suppose |z| < 1. For the partial sums we have that

Σ_{k=0}^{N} z^k = (1 − z^{N+1})/(1 − z),   (1.10)

which is easily established by multiplying both sides of the equation with 1 − z. As |z| < 1, the contribution of z^{N+1} to the right-hand side of (1.10) goes to zero as N → ∞. Hence as N → ∞, the partial sums (1.10) converge with limit 1/(1 − z). Note that |z| < 1 also guarantees that the geometric series converges absolutely.
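The convergence of the partial sums to 1/(1 − z) is easy to observe numerically; a small sketch (not in the original notes, with an arbitrary ratio of modulus less than 1):

z = 0.8*exp(j*pi/3);             % arbitrary ratio with |z| < 1
N = 50;
sN = cumsum(z.^(0:N));           % partial sums for n = 0, 1, ..., N
disp([sN(end) 1/(1-z)])          % the two values agree to many digits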

1.6.4. Example. Consider the series

Σ_{k=1}^{∞} e^{jθk}/(k + k²),   with θ ∈ R.

For every real θ this series converges absolutely. This is because

| e^{jθk}/(k + k²) | = 1/(k + k²) ≤ 1/k²

and Σ_{k=1}^{∞} 1/k² converges absolutely.

1.6.5. Example. Consider the series

Σ_{k=1}^{∞} z^k/k,   (z ∈ C).   (1.11)

For k ≥ 1 we have that |z^k/k| = |z|^k/k ≤ |z|^k. The dominating series Σ_{k=1}^{∞} |z|^k is a geometric series which converges absolutely if |z| < 1. The comparison test then shows that (1.11) converges absolutely for every |z| < 1. If on the other hand |z| > 1 then the sequence z^k/k does not converge to zero as k → ∞, so in that case the series (1.11) diverges. Convergence for the case that |z| = 1 is somewhat more complicated. If |z| = 1 then the series certainly does not converge absolutely because Σ_{k=1}^{∞} 1/k = ∞, but it still leaves open the possibility that (1.11) converges, albeit not absolutely. For example if z = −1 the series reduces to one with general term (−1)^k/k and this can be seen to converge. In fact it may be shown that the series converges for every |z| = 1 other than z = 1. For z = 1 the series diverges.



1.6.6. Example. We end this chapter with the derivation of the finite sum

s_N(θ) = Σ_{k=−N}^{N} e^{jkθ}.

Sums of this form occur frequently in signal analysis. Let z = e^{jθ}; then

s_N(θ) = Σ_{k=−N}^{N} z^k = z^{−N} Σ_{k=0}^{2N} z^k = {finite geometric series}
       = 2N + 1                         if z = 1,
         z^{−N} (1 − z^{2N+1})/(1 − z)  if z ≠ 1.

If θ is not a multiple of 2π then z = e^{jθ} is not equal to 1 and we then obtain

s_N(θ) = (e^{−jNθ} − e^{j(N+1)θ})/(1 − e^{jθ}) = (e^{−j(N+1/2)θ} − e^{j(N+1/2)θ})/(e^{−jθ/2} − e^{jθ/2}) = sin((N + 1/2)θ)/sin(θ/2).

In conclusion,

s_N(θ) = 2N + 1                       if θ is a multiple of 2π,
         sin((N + 1/2)θ)/sin(θ/2)     if θ is not a multiple of 2π.   (1.12)

Figure 1.3: The function s_N(θ) for N = 8, plotted for θ from −2π to 2π.
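Formula (1.12) is easily checked against the directly computed sum. The sketch below is not part of the original notes; it reproduces Figure 1.3, with a grid that avoids the zeros of sin(θ/2).

N = 8;
theta = linspace(-2*pi+0.01, 2*pi-0.01, 2000);
sN = zeros(size(theta));
for k = -N:N
  sN = sN + exp(j*k*theta);      % direct sum; the result is real
end
plot(theta, real(sN), theta, sin((N+0.5)*theta)./sin(theta/2), '--')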

1.7 Problems
1.1 Show that lim_{t→0} (e^{2jt} − 1)/t = 2j.


1.2 Does lim_{t→∞} e^{jt} exist? Motivate your answer.

1.3 Show that lim_{t→∞} e^{jt²}/t = 0.

1.4 Demonstrate that

f(t) = sin(ω_0 t) + 2 cos(ω_0 t) − cos(ω_0 t + π/4)

is a sinusoid, and find its amplitude.

1.5 A good understanding of a complex-valued signal f(t) can often be had from a plot of the curve (Re f(t), Im f(t)) in the two-dimensional plane. Draw by hand the curves of
(a) f(t) = e^{jω_0 t}, and
(b) f(t) = e^{jω_0 t} + 0.1 e^{10jω_0 t}.

1.6 Calculate ∫_{−T}^{2T} e^{2jω_0 t} cos²(ω_0 t) dt, where ω_0 = 2π/T > 0.

1.7 Suppose T > 0 and let f_1(t) = sin(ω_0 t) and f_2(t) = f_1(t) 1(t). Sketch the graphs of the two signals g_i(t) (i = 1, 2) given by

g_i(t) = ∫_t^{t+T} f_i(τ) dτ.

Here we assume that ω_0 = 2π/T.

1.8 Determine the power of the harmonic signal f(t) = 2e^{2jω_0 t}.

1.9 Let f(t) be a T-periodic signal with power P_f.
(a) Show that for every t_0 ∈ R the power of f(t − t_0) equals the power of f(t).
(b) Express the power of f(2t) in terms of the power P_f of f(t).

1.10 Determine the energy content of
(a) f(t) = 2e^{2jω_0 t} rect_T(t),   (T = 2π/ω_0),
(b) f(t) = rect_2(t) − trian_2(t).

1.11 Show that for every t_0 ∈ R the energy content of f(t − t_0) is the same as that of f(t).

1.12 Show that the series Σ_{n=1}^{∞} e^{jnθ}/n² converges for every real θ. What does this imply for the series Σ_{n=1}^{∞} cos(nθ)/n²?

1.13 Show that cos(πt/3) cos(πt/4) is periodic and determine its period T.


More involved problems:

1.14 The exponential function e^z may be defined through its Taylor series as e^z = Σ_{n=0}^{∞} z^n/n!.
(a) Show that this series converges for every z ∈ C.
(b) Show that e^a e^b = e^{a+b} for every a, b ∈ C.
(c) Use the Taylor series of cos(θ) and sin(θ) to show that for every θ ∈ R there holds that e^{jθ} = cos(θ) + j sin(θ).
(d) De Moivre: Show that (cos(θ) + j sin(θ))^n = cos(nθ) + j sin(nθ).

1.15 Determine Σ_{n=1}^{N} sin(nθ).

1.16 Let n ∈ N, n > 1, n even, and define z = e^{jt}. Show that

sin(nt)/sin(t) = z^{n−1} + z^{n−3} + · · · + z^{−(n−3)} + z^{−(n−1)} = 2( cos((n−1)t) + cos((n−3)t) + · · · + cos(t) ).


Matlab problems:

1.17 In Matlab a complex function like y(t) = e^{0.3jt} may be plotted as follows:

t=0:0.1:10;
y=exp(0.3*j*t);
plot(y)

The first command defines a vector of time instances increasing from 0 to 10 with step-size 1/10. The semicolon ; tells Matlab to execute the command silently. The second command defines a complex-valued vector y of function values e^{0.3jt} for each t in the vector [0, 1/10, 2/10, . . . , 9 9/10, 10]. The vector y is given to the plot command, which plots the imaginary part of each entry of y against its real part.
Plot the curve (Re f(t), Im f(t)) of
(a) f(t) = e^{(6j+1)t},
(b) f(t) = e^{0.2jt} + e^{0.4jt}.
1.18 In Matlab an integral like

∫_0^1 f(t) dt

can be obtained numerically in the following way. First a file with a name, say, myfunction.m must be opened. Assuming the function f(t) to be integrated is f(t) = t³ + t², we type in this file the two lines

function y = myfunction(t)
y=t.^3+t.^2;

Then, after this file is saved, the integral

∫_0^1 (t³ + t²) dt

can be obtained by

quad('myfunction',0,1)

At first sight it may seem strange that we need to use .^ instead of ^. This has to do with Matlab's philosophy of array promotion, see the Matlab Mini Manual. Also, the name quad is not very descriptive. Try it, and use it to find the power of the signal

f(t) = 2e^{2jt} + e^{jt}

and verify the result by direct calculation of the power.


2
Periodic signals and their line spectra

Sinusoids and complex harmonic signals are the fundamental building blocks in signal analysis. In this chapter we use the harmonic signals to describe arbitrary periodic signals. The discussion culminates in the famous result that practically every T-periodic signal f(t) can be expressed as a sum of harmonics, also called a superposition of harmonics,

f(t) = Σ_{k∈Z} c_k e^{jk(2π/T)t},   c_k ∈ C.   (2.1)

Note that the harmonic signals in the sum all are T-periodic:

e^{jk(2π/T)(t+T)} = e^{jk(2π/T)t + jk2π} = e^{jk(2π/T)t} e^{jk2π} = e^{jk(2π/T)t},   t ∈ R.

Therefore the sum (2.1) is T-periodic as well. The incredible fact is that the converse is true as well: practically every T-periodic signal f(t) is of the form (2.1) for a suitable choice of coefficients c_k.
If we define ω_0 as

ω_0 = 2π/T,

then (2.1) is somewhat more succinctly described as

f(t) = Σ_{k∈Z} c_k e^{jkω_0 t}.   (2.2)

It shows that the frequencies kω_0 of the harmonics that build up f(t) are an integer multiple of ω_0. For this reason ω_0 = 2π/T is called the fundamental frequency of T-periodic signals.

2.1 Complex and real Fourier series

In this section we analyze superpositions of T-periodic harmonic signals. Later on in this chapter we turn to general periodic signals. For real-valued signals that are built up from a finite number of T-periodic sinusoids it is straightforward to obtain an equivalent expression as a superposition of complex harmonic signals:


2.1.1. Example. Let f(t) be a given real-valued signal of the form

f(t) = (1/2)a_0 + Σ_{k=1}^{N} ( a_k cos(kω_0 t) + b_k sin(kω_0 t) ),   (2.3)

in which a_k and b_k are real numbers. This can be rewritten as a sum of complex harmonic signals as follows:

f(t) = (1/2)a_0 + Σ_{k=1}^{N} ( a_k cos(kω_0 t) + b_k sin(kω_0 t) )
= (1/2)a_0 + Σ_{k=1}^{N} ( (a_k/2)(e^{jkω_0 t} + e^{−jkω_0 t}) − j(b_k/2)(e^{jkω_0 t} − e^{−jkω_0 t}) )
= (1/2)a_0 + Σ_{k=1}^{N} ((a_k − jb_k)/2) e^{jkω_0 t} + Σ_{k=1}^{N} ((a_k + jb_k)/2) e^{−jkω_0 t}
= Σ_{k=−N}^{N} c_k e^{jkω_0 t}.   (2.4)

Here c_k = (a_k − jb_k)/2 and c_{−k} = (a_k + jb_k)/2 for k = 0, 1, 2, . . . , N, and for consistency we put b_0 = 0. Note that c_{−k} = c̄_k.

It will be clear that in this example for N = ∞ one would end up with an infinite sum of complex harmonics

Σ_{k=−∞}^{∞} c_k e^{jkω_0 t}.   (2.5)

A series like this is called a (complex) Fourier series, and the coefficients c_k are the (complex) Fourier coefficients. The successive terms c_k e^{jkω_0 t} generally are complex-valued functions of t, even if their sum is a real-valued function. Note that whereas the index k in the real case (2.3) goes from 1 to N, in the complex case (2.4) the index k goes from −N to N. Inspired by this we define convergence of the infinite Fourier series (2.5) as follows.

2.1.2. Definition. For a given t ∈ R the Fourier series in (2.5) is said to converge with limit f(t) if

f(t) = lim_{N→∞} Σ_{k=−N}^{N} c_k e^{jkω_0 t}.   (2.6)

To avoid clutter we usually write f(t) = Σ_{k=−∞}^{∞} c_k e^{jkω_0 t} when we mean (2.6). The next example demonstrates a subtle point, which is that a Fourier series may be convergent even if its two sub-series Σ_{k=−∞}^{−1} c_k e^{jkω_0 t} and Σ_{k=0}^{∞} c_k e^{jkω_0 t} do not converge.

2.1.3. Example. The Fourier series Σ_{k≠0} (1/k) e^{jkω_0 t} (this is the series Σ_{k=−∞}^{∞} c_k e^{jkω_0 t} with c_k = 1/k for k ≠ 0 and c_0 = 0) converges for t = 0 to zero. This can be seen as follows:

Σ_{k=−∞}^{∞} c_k e^{jkω_0 · 0} = lim_{N→∞} Σ_{k=−N}^{N} c_k = lim_{N→∞} ( Σ_{k=−N}^{−1} 1/k + Σ_{k=1}^{N} 1/k ) = lim_{N→∞} ( −Σ_{k=1}^{N} 1/k + Σ_{k=1}^{N} 1/k ) = 0.

With a bit more theory (developed in the following pages) it may be shown that the Fourier series converges for every t ∈ R, that it converges to f(t) = ω_0(t − T/2)/j for 0 < t < T (see Example 2.2.4 and Figure 2.1), and that the sum is T-periodic and piecewise smooth, with discontinuities at t = 0, ±T, ±2T, . . . .
Note that for t = 0 the sub-series Σ_{k=−∞}^{−1} c_k e^{jkω_0 t} = −Σ_{k=1}^{∞} 1/k and Σ_{k=0}^{∞} c_k e^{jkω_0 t} = Σ_{k=1}^{∞} 1/k do not converge.

If in a Fourier series f(t) = Σ_{k=−∞}^{∞} c_k e^{jkω_0 t} only finitely many coefficients c_k are nonzero, then obviously the Fourier series converges for every t ∈ R. More generally, convergence for every t ∈ R is ensured if the c_k are absolutely summable, that is, if Σ_{k=−∞}^{∞} |c_k| converges. In this case the sum f(t) is in fact continuous everywhere.

2.1.4. Theorem. If Σ_{k=−∞}^{∞} |c_k| < ∞, then the Fourier series Σ_{k=−∞}^{∞} c_k e^{jkω_0 t} converges and moreover is continuous at every t ∈ R.

Proof. It follows from Σ_{k=−∞}^{∞} |c_k| < ∞ that both Σ_{k=−∞}^{−1} |c_k| and Σ_{k=0}^{∞} |c_k| converge. As |c_k e^{jkω_0 t}| = |c_k| we conclude that the two sub-series of the Fourier series converge absolutely for any t, and, hence, that the Fourier series itself converges for any t.
Let f(t) denote the Fourier series f(t) = Σ_{k=−∞}^{∞} c_k e^{jkω_0 t}. We show that f(t) is continuous at every time a ∈ R, that is, we show that for every ε > 0 there is a δ > 0 such that |t − a| < δ implies that |f(t) − f(a)| < ε.
Define the partial sums

s_N(t) = Σ_{k=−N}^{N} c_k e^{jkω_0 t}.

Then there holds that

|f(t) − s_N(t)| = | Σ_{|k|>N} c_k e^{jkω_0 t} | ≤ Σ_{|k|>N} |c_k|.   (2.7)

By assumption Σ_{k=−∞}^{∞} |c_k| < ∞, so that

Σ_{|k|>N} |c_k| = Σ_{k=−∞}^{∞} |c_k| − Σ_{k=−N}^{N} |c_k| → 0   as N → ∞.

Consequently there is a large enough positive integer N_1 such that for every N > N_1 we get Σ_{|k|>N} |c_k| < ε/3. Considering Equation (2.7) we conclude that |f(t) − s_{N_1}(t)| ≤ Σ_{|k|>N_1} |c_k| < ε/3 for every t.
The partial sum s_{N_1}(t) is a finite sum of continuous functions, hence is itself continuous. For the given ε > 0, therefore, a δ > 0 can be found such that |s_{N_1}(t) − s_{N_1}(a)| < ε/3 whenever |t − a| < δ. Finally then, for all such t,

|f(t) − f(a)| = |f(t) − s_{N_1}(t) + s_{N_1}(t) − s_{N_1}(a) − (f(a) − s_{N_1}(a))|
≤ |f(t) − s_{N_1}(t)| + |f(a) − s_{N_1}(a)| + |s_{N_1}(t) − s_{N_1}(a)|
< ε/3 + ε/3 + ε/3 = ε.

This completes the proof.
2.1.5. Example. The sum f(t) of the Fourier series in Example 2.1.3 is not continuous. By the above result, therefore, Σ_{k=−∞}^{∞} |c_k| = ∞, which may also be verified directly.

If Σ_{k=−∞}^{∞} |c_k| = ∞ (which is always the case if f(t) is not continuous on R), then the partial sums s_N(t) = Σ_{k=−N}^{N} c_k e^{jkω_0 t} may not provide a satisfactory approximation of f(t) near the points of discontinuity. A famous phenomenon in this respect is the Gibbs phenomenon discussed in Section 2.5.
We end this section with an important connection between f(t) and its Fourier coefficients c_k. So far we assumed the Fourier coefficients c_k given and the signal f(t) to be the resulting sum. But what if we are given f(t) and want to find the corresponding Fourier series (assuming one exists)? For the sake of simplicity we restrict attention for the moment to signals f(t) that have a finite Fourier series expansion f(t) = Σ_{k=−N}^{N} c_k e^{jkω_0 t}. The claim is that the Fourier coefficients are uniquely determined by f(t) through the integral

c_k = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt.   (2.8)

This in fact is quite easily established:

(1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt = (1/T) ∫_{−T/2}^{T/2} Σ_{n=−N}^{N} c_n e^{jnω_0 t} e^{−jkω_0 t} dt
= Σ_{n=−N}^{N} c_n (1/T) ∫_{−T/2}^{T/2} e^{j(n−k)ω_0 t} dt
= c_k.

In the last integral we made use of Example 1.2.6, which showed that in the above sum all terms for n ≠ k are zero. Only for n = k does the above integral have a nonzero value. In that case the integrand e^{j(n−k)ω_0 t} is 1 for all t and its integral over [−T/2, T/2] hence equals T.
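Formula (2.8) is also easy to test numerically, along the lines of Problem 1.18. The sketch below is not part of the original notes; it recovers one coefficient of a small trigonometric polynomial, splitting the integrand into real and imaginary parts as in (1.2) and using quad with function handles (available in recent Matlab versions).

T = 2; w0 = 2*pi/T; k = 1;
g = @(t) (3 + 2*exp(j*w0*t)).*exp(-j*k*w0*t);   % integrand of (2.8)
ck = (quad(@(t) real(g(t)), -T/2, T/2) + ...
      j*quad(@(t) imag(g(t)), -T/2, T/2))/T;
disp(ck)                         % approximately 2, the coefficient of e^{j w0 t}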


2.2 The fundamental theorem of Fourier series

Let f(t) be a T-periodic signal. Equation (2.8) suggests comparing f(t) with the Fourier series

Σ_{k=−∞}^{∞} c_k e^{jkω_0 t},

where ω_0 = 2π/T is the fundamental frequency and c_k are the Fourier coefficients determined by (2.8). The Fourier series thus obtained from f(t) is called the (complex) Fourier series of f(t) and the coefficients c_k are the (complex) Fourier coefficients of f(t). To emphasize the fact that the Fourier coefficients are determined by f(t), we shall from now on denote the Fourier coefficients of f(t) by f_k rather than c_k. The question now is to which function the Fourier series of f(t) converges. We shall see that for piecewise smooth T-periodic f(t) the Fourier series converges for every t with limit (f(t+) + f(t−))/2. In particular, for signals f(t) that are continuous, the signal f(t) and its Fourier series are one and the same thing.
To begin with, we show that the Fourier coefficients f_k of a piecewise smooth periodic signal f(t) tend to zero as k → ±∞. This is based on the following more general result.
2.2.1. Lemma (Riemann-Lebesgue). If f(t) is piecewise smooth on [a, b], then

lim_{|ω|→∞} ∫_a^b f(t) e^{−jωt} dt = 0.

Proof. Suppose first that f(t) is continuously differentiable on [a, b]. Then we can use partial integration to obtain

∫_a^b f(t) e^{−jωt} dt = −(1/(jω)) [ f(t) e^{−jωt} ]_a^b + (1/(jω)) ∫_a^b f′(t) e^{−jωt} dt.

Since |e^{−jωt}| = 1 we can derive from this the bound

| ∫_a^b f(t) e^{−jωt} dt | ≤ (1/|jω|) (|f(b)| + |f(a)|) + (1/|jω|) ∫_a^b |f′(t)| dt.

It is immediate that the right-hand side goes to zero as |ω| → ∞, which proves the claim.
If f(t) is not continuously differentiable, then, since f(t) is piecewise smooth, we may split [a, b] into a finite set of subintervals [t_i, t_{i+1}] (i = 1, . . . ) such that f(t) is continuously differentiable on each of these subintervals. Similarly as done above (using partial integration) it follows that lim_{|ω|→∞} ∫_{t_i}^{t_{i+1}} f(t) e^{−jωt} dt = 0. Hence lim_{|ω|→∞} ∫_a^b f(t) e^{−jωt} dt = 0.

An immediate consequence is that piecewise smooth T-periodic signals f(t) satisfy

lim_{|k|→∞} ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt = 0,

i.e., the Fourier coefficients f_k tend to zero as |k| → ∞. In fact, looking at the proof of the Riemann-Lebesgue lemma, we may conclude that |f_k| ≤ C/|k|. The Riemann-Lebesgue lemma also implies that lim_{|ω|→∞} ∫_a^b f(t) cos(ωt) dt = 0 and lim_{|ω|→∞} ∫_a^b f(t) sin(ωt) dt = 0.


2.2.2. Lemma. If f(t) is piecewise smooth on [−T/2, T/2], then

lim_{a→∞} ∫_{−T/2}^{T/2} f(t) ( sin(at)/(πt) ) dt = (f(0+) + f(0−))/2.

Proof. It suffices to prove that

lim_{a→∞} ∫_0^{T/2} f(t) ( sin(at)/(πt) ) dt = f(0+)/2.

Indeed, if the above holds then replacing t with −t readily gives

lim_{a→∞} ∫_{−T/2}^0 f(t) ( sin(at)/(πt) ) dt = lim_{a→∞} ∫_0^{T/2} f(−t) ( sin(at)/(πt) ) dt = f(0−)/2.

Define I(a) = ∫_0^{T/2} f(t) ( sin(at)/t ) dt and express I(a) as a sum I(a) = I_1(a) + f(0+) I_2(a) with

I_1(a) = ∫_0^{T/2} ( (f(t) − f(0+))/t ) sin(at) dt,
I_2(a) = ∫_0^{T/2} ( sin(at)/t ) dt.

We will show that lim_{a→∞} I_1(a) = 0 and that lim_{a→∞} I_2(a) = π/2.
To calculate the limit of I_2(a) we make use of the standard integral

∫_0^∞ (sin t)/t dt = π/2.

This gives

lim_{a→∞} I_2(a) = lim_{a→∞} ∫_0^{T/2} ( sin(at)/t ) dt = {τ = at} = lim_{a→∞} ∫_0^{aT/2} (sin τ)/τ dτ = π/2.

Now take an ε > 0. We show that |I_1(a)| < ε for large enough a. Since f′(0+) = lim_{t↓0} (f(t) − f(0+))/t exists, the function (f(t) − f(0+))/t is bounded on (0, T/2], that is,

|f(t) − f(0+)| ≤ Mt,   t ∈ [0, T/2],

for some M > 0. Choose δ > 0 such that δ < ε/(2M) and δ < T/2. Then

| ∫_0^δ ( (f(t) − f(0+))/t ) sin(at) dt | ≤ M ∫_0^δ |sin(at)| dt ≤ Mδ < ε/2.

We have found that

|I_1(a)| = | ∫_0^δ ( (f(t) − f(0+))/t ) sin(at) dt + ∫_δ^{T/2} ( (f(t) − f(0+))/t ) sin(at) dt |
< ε/2 + | ∫_δ^{T/2} ( (f(t) − f(0+))/t ) sin(at) dt |.

On the interval [δ, T/2] the function (f(t) − f(0+))/t is piecewise smooth since the only possible singularity is at t = 0 and this is not in the interval. The Riemann-Lebesgue lemma therefore applies, which gives

lim_{a→∞} ∫_δ^{T/2} ( (f(t) − f(0+))/t ) sin(at) dt = 0.

So for sufficiently large a we have that

| ∫_δ^{T/2} ( (f(t) − f(0+))/t ) sin(at) dt | < ε/2.

Then, finally, |I_1(a)| < ε/2 + ε/2 = ε, implying that lim_{a→∞} I_1(a) = 0.

The previous results allow us to state and prove the following fundamental result.

2.2.3. Theorem (The Fourier series theorem). Let f(t) be a T-periodic signal and suppose it is piecewise smooth on [−T/2, T/2]. Then for every t ∈ R there holds that

(f(t+) + f(t−))/2 = Σ_{k=−∞}^{∞} f_k e^{jkω_0 t},

where f_k are the Fourier coefficients of f(t) defined as

f_k = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt.   (2.9)

Proof. We need to show that (f(t+) + f(t−))/2 = lim_{N→∞} s_N(t) for every t, where

s_N(t) = Σ_{k=−N}^{N} f_k e^{jkω_0 t}.   (2.10)

First we derive an integral representation for s_N(t) by substituting the defining integral for f_k, namely

f_k = (1/T) ∫_{−T/2}^{T/2} f(τ) e^{−jkω_0 τ} dτ,

in (2.10). This gives

s_N(t) = Σ_{k=−N}^{N} f_k e^{jkω_0 t} = Σ_{k=−N}^{N} (1/T) ∫_{−T/2}^{T/2} f(τ) e^{jkω_0 (t−τ)} dτ
= (1/T) ∫_{−T/2}^{T/2} f(τ) Σ_{k=−N}^{N} e^{jkω_0 (t−τ)} dτ
= {see (1.12)} = (1/T) ∫_{−T/2}^{T/2} f(τ) ( sin((N + 1/2)ω_0(t − τ)) / sin(ω_0(t − τ)/2) ) dτ
= {x = t − τ} = (1/T) ∫_{t−T/2}^{t+T/2} f(t − x) ( sin((N + 1/2)ω_0 x) / sin(ω_0 x/2) ) dx
= {interval-shift} = (1/T) ∫_{−T/2}^{T/2} f(t − x) ( sin((N + 1/2)ω_0 x) / sin(ω_0 x/2) ) dx.

This establishes the integral representation of s_N(t),

s_N(t) = (1/T) ∫_{−T/2}^{T/2} f(t − x) ( sin((N + 1/2)ω_0 x) / sin(ω_0 x/2) ) dx.

To apply Lemma 2.2.2 we rearrange the integral as

s_N(t) = (1/T) ∫_{−T/2}^{T/2} f(t − x) ( x / sin(ω_0 x/2) ) · ( sin((N + 1/2)ω_0 x) / x ) dx.

Now let

g(x) = f(t − x) · x / sin(ω_0 x/2).

Since lim_{x→0} x/sin(ω_0 x/2) = 2/ω_0 we have that g(0+) = 2f(t−)/ω_0 and g(0−) = 2f(t+)/ω_0. Therefore

lim_{N→∞} s_N(t) = lim_{N→∞} (1/T) ∫_{−T/2}^{T/2} g(x) ( sin((N + 1/2)ω_0 x)/x ) dx
= {Lemma 2.2.2 with a = (N + 1/2)ω_0} = (π/T) · (g(0+) + g(0−))/2
= (2π/(ω_0 T)) · (f(t+) + f(t−))/2 = (f(t+) + f(t−))/2.

This completes the proof.
2.2.4. Example (The sawtooth). Figure 2.1 shows the graph of the sawtooth with period T. It is the T-periodic signal f(t) which on one period [0, T) is given by

f(t) = t − T/2,   (0 ≤ t < T).

Figure 2.1: The sawtooth, taking values between −T/2 and T/2.

Figure 2.2: Fourier series approximation Σ_{k=−N}^{N} f_k e^{jkω_0 t} of the sawtooth for N = 4.

The Fourier coefficients of f(t) can be calculated explicitly:

f_k = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt = {interval-shift} = (1/T) ∫_0^T f(t) e^{−jkω_0 t} dt
= (1/T) ∫_0^T (t − T/2) e^{−jkω_0 t} dt = {k ≠ 0} = −(1/(jkω_0 T)) ∫_0^T (t − T/2) de^{−jkω_0 t}
= −[ (t − T/2) e^{−jkω_0 t} ]_0^T /(jkω_0 T) + (1/(jkω_0 T)) ∫_0^T e^{−jkω_0 t} dt
= j/(kω_0) + 0 = j/(kω_0),

f_0 = (1/T) ∫_0^T (t − T/2) dt = 0.

The sawtooth f(t) is continuous everywhere except at t = 0, ±T, ±2T, . . . . At these values t = nT the signal satisfies f(nT+) = f(0+) = −T/2 and f(nT−) = f(0−) = T/2, see Figure 2.1. According to the Fourier theorem, at these points of discontinuity the Fourier series equals (f(nT+) + f(nT−))/2 = 0. On the interval [0, T) the Fourier series therefore equals

Σ_{k≠0} ( j/(kω_0) ) e^{jkω_0 t} = t − T/2   if 0 < t < T,
                                  0         if t = 0.

This establishes the claims in Example 2.1.3 as well, because multiplying by ω_0/j gives

lim_{N→∞} Σ_{k=−N, k≠0}^{N} (1/k) e^{jkω_0 t} = ω_0 (t − T/2)/j   if 0 < t < T,
                                               0                 if t = 0.
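Partial sums such as the one in Figure 2.2 are easily plotted; the sketch below is not part of the original notes and uses T = 2 and N = 4 as arbitrary choices.

T = 2; w0 = 2*pi/T; N = 4;
t = linspace(-T, 2*T, 3000);
sN = zeros(size(t));
for k = [-N:-1 1:N]
  sN = sN + (j/(k*w0))*exp(j*k*w0*t);   % f_k = j/(k w0), f_0 = 0
end
saw = mod(t, T) - T/2;                  % the sawtooth itself
plot(t, real(sN), t, saw, '--')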



2.3 Real Fourier series

For real-valued signals f(t) the Fourier coefficients obey the symmetry rule f̄_k = f_{−k}. This follows from

f̄_k = (1/T) ∫_{−T/2}^{T/2} f̄(t) e^{jkω_0 t} dt = {f(t) is real} = (1/T) ∫_{−T/2}^{T/2} f(t) e^{jkω_0 t} dt = f_{−k}.

The complex Fourier series of a real-valued function can then be rewritten as a real Fourier series, that is, as a sum of sinusoids. This goes as follows:

f(t) = lim_{N→∞} Σ_{k=−N}^{N} f_k e^{jkω_0 t}
= f_0 + lim_{N→∞} Σ_{k=1}^{N} ( f_k e^{jkω_0 t} + f_{−k} e^{−jkω_0 t} )
= f_0 + 2 Σ_{k=1}^{∞} Re( f_k e^{jkω_0 t} ),

since f_{−k} e^{−jkω_0 t} is the complex conjugate of f_k e^{jkω_0 t}, so that the two add up to twice its real part. The equality f̄_k = f_{−k} for k = 0 states that f̄_0 = f_0, i.e., that f_0 is real. Define a_k and b_k as

a_k = 2 Re f_k,   b_k = −2 Im f_k,   (k = 0, 1, 2, 3, . . . ),   (2.11)

so that 2f_k = a_k − jb_k. Then 2 Re(f_k e^{jkω_0 t}) = a_k cos(kω_0 t) + b_k sin(kω_0 t), which finally establishes that f(t) is indeed a sum of sinusoids,

f(t) = (1/2)a_0 + Σ_{k=1}^{∞} ( a_k cos(kω_0 t) + b_k sin(kω_0 t) ).   (2.12)

It will be clear that f(t) is a real-valued T-periodic signal with T = 2π/ω_0. The series (2.12) is known as the real Fourier series and the coefficients a_k and b_k are the real Fourier coefficients. The term a_k cos(kω_0 t) + b_k sin(kω_0 t) is sometimes referred to as the k-th harmonic of f(t). In summary:

2.3.1. Theorem (Fourier series theorem, real-valued case). Let f(t) be a real-valued T-periodic signal and suppose it is piecewise smooth on [−T/2, T/2]. Then for every t ∈ R there holds that

(f(t−) + f(t+))/2 = (1/2)a_0 + Σ_{k=1}^{∞} ( a_k cos(kω_0 t) + b_k sin(kω_0 t) ),

where a_k and b_k are the real Fourier coefficients of f(t) given by

a_k = (2/T) ∫_{−T/2}^{T/2} f(t) cos(kω_0 t) dt,   k = 0, 1, . . . ,   (2.13)
b_k = (2/T) ∫_{−T/2}^{T/2} f(t) sin(kω_0 t) dt,   k = 1, 2, . . . .   (2.14)

Proof. Let f_k be the complex Fourier coefficients of f(t). In the above we showed that a_k = 2 Re f_k and b_k = −2 Im f_k. Therefore

a_k = 2 Re f_k = 2 Re (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt = (2/T) ∫_{−T/2}^{T/2} f(t) cos(kω_0 t) dt,

and

b_k = −2 Im f_k = −2 Im (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt = (2/T) ∫_{−T/2}^{T/2} f(t) sin(kω_0 t) dt.

Figure 2.3: f(t) = |sin(πt)|.

2.3.2. Example. Consider the signal f(t) = |sin(πt)| (see Figure 2.3). This signal is periodic with period T = 1. In addition the signal is even, which is to say that f(−t) = f(t). This means that the functions g(t) := f(t) sin(kω_0 t) are odd, i.e., that g(−t) = −g(t). Hence the integrals for b_k,

b_k = (2/T) ∫_{−T/2}^{T/2} f(t) sin(kω_0 t) dt,   k = 1, 2, . . . ,

are all zero. The Fourier series of f(t) apparently contains only cosine terms. With T = 1 and ω_0 = 2π,

a_k = (2/T) ∫_{−T/2}^{T/2} f(t) cos(kω_0 t) dt = 2 ∫_{−1/2}^{1/2} |sin(πt)| cos(2kπt) dt
= {integrand is even} = 4 ∫_0^{1/2} sin(πt) cos(2kπt) dt
= 2 ∫_0^{1/2} ( sin((2k + 1)πt) − sin((2k − 1)πt) ) dt
= [ −(2/π) cos((2k + 1)πt)/(2k + 1) + (2/π) cos((2k − 1)πt)/(2k − 1) ]_0^{1/2}
= (2/π) ( 1/(2k + 1) − 1/(2k − 1) ) = (4/π) · 1/(1 − 4k²).

The signal f(t) is continuous and piecewise smooth on R, so we conclude that

|sin(πt)| = 2/π + (4/π) Σ_{k=1}^{∞} cos(2kπt)/(1 − 4k²)   for every t ∈ R.
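A numerical check of this expansion (a small sketch, not part of the original notes):

t = linspace(-1.5, 1.5, 1000);
s = 2/pi*ones(size(t));
for k = 1:200
  s = s + (4/pi)*(1/(1-4*k^2))*cos(2*pi*k*t);
end
plot(t, s, t, abs(sin(pi*t)), '--')     % the two curves coincide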

Incidentally, the trigonometric formula that was used in the above example, sin(α) cos(β) = (1/2) sin(α + β) + (1/2) sin(α − β), is easily obtained with the help of Euler's formula. Indeed,

sin(α) cos(β) = Im(e^{jα}) Re(e^{jβ})
= ( (e^{jα} − e^{−jα})/(2j) ) · ( (e^{jβ} + e^{−jβ})/2 )
= ( e^{j(α+β)} + e^{j(α−β)} − e^{−j(α−β)} − e^{−j(α+β)} )/(4j)
= ( (e^{j(α+β)} − e^{−j(α+β)}) + (e^{j(α−β)} − e^{−j(α−β)}) )/(4j)
= (1/2) sin(α + β) + (1/2) sin(α − β).

All standard trigonometric formulae may be similarly derived.


According to the Fourier theorem, any piecewise smooth periodic signal f (t) may be reconstructed from its Fourier coefficients fk , except at the points of discontinuity. Possibly after
redefining f (t) at the discontinuities, one can say that f (t) is fully determined once its Fourier
coefficients are known.
The sequence of Fourier coefficients may be regarded as a discrete-time signal g[k] := fk .
The doubly infinite sequence fk is called the line spectrum of f (t) and we say that the line
spectrum describes f (t) in the frequency domain, and that f (t) itself describes the signal in
the time domain.


The values that a line spectrum takes are usually complex numbers. These may be expressed in polar form as f_k = |f_k| e^{jφ_k}. The sequence |f_k| is then referred to as the amplitude spectrum of f(t), and the sequence φ_k as the phase spectrum of f(t). The phase, as always, is unique up to an integer multiple of 2π. If f(t) is real-valued, then f̄_k = f_{−k}; hence |f_k| = |f_{−k}| and φ_{−k} = −φ_k. Real-valued signals hence have an amplitude spectrum that is even and a phase spectrum that is odd (up to multiples of 2π).

Figure 2.4: The amplitude spectrum and phase spectrum of the sawtooth.

2.3.3. Example. In Example 2.2.4 we found the Fourier coefficients of the sawtooth to be f_k = j/(kω_0) for k ≠ 0 and f_k = 0 for k = 0. The amplitude spectrum is therefore |f_k| = 1/|kω_0|, except for k = 0 where it is zero. The phase spectrum φ_k = arg f_k equals π/2 for positive k, and −π/2 for negative k. The phase of f_0 = 0 is not really defined, but in such cases we take it to be zero. Figure 2.4 shows the amplitude and phase spectrum of the sawtooth signal for a certain ω_0. The sawtooth is a real-valued function and this is in accordance with the fact that the amplitude spectrum is an even function of k and that the phase spectrum is an odd function of k.

Certain operations on signals in the time domain are more conveniently described in the frequency domain. An important such operation is the convolution of two signals, discussed in the next section. We end this section with a list of time domain signal operations and their corresponding operation in the frequency domain. Though each operation on its own is straightforward, the combination of these operations allows one to derive Fourier expansions of quite a wide class of signals (see the problems in Section 2.6).
Signal operations

1. Linearity: αf(t) + βg(t) ↔ αf_k + βg_k
The line spectrum d_k of d(t) = αf(t) + βg(t) for any constants α, β ∈ C equals

d_k = (1/T) ∫_{−T/2}^{T/2} ( αf(t) + βg(t) ) e^{−jkω_0 t} dt
= α (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt + β (1/T) ∫_{−T/2}^{T/2} g(t) e^{−jkω_0 t} dt = αf_k + βg_k.


2. Time-shift: f(t − τ) ↔ e^{−jkω_0 τ} f_k
The line spectrum d_k of the shifted signal f(t − τ) follows from

d_k = (1/T) ∫_{−T/2}^{T/2} f(t − τ) e^{−jkω_0 t} dt
= {v = t − τ} = (1/T) ∫_{−T/2−τ}^{T/2−τ} f(v) e^{−jkω_0 (v+τ)} dv
= {interval-shift} = e^{−jkω_0 τ} (1/T) ∫_{−T/2}^{T/2} f(v) e^{−jkω_0 v} dv
= e^{−jkω_0 τ} f_k.

The amplitude spectrum |d_k| = |e^{−jkω_0 τ}| |f_k| = |f_k| is left invariant under a time-shift. However, the phase spectrum arg(d_k) = arg(f_k) − kω_0 τ changes linearly in the amount of the shift τ. A shift in time hence amounts to a linear phase shift in frequency.
3. Time reversal: f(−t) ↔ f_{−k}
The line spectrum d_k of f(−t) follows as

d_k = (1/T) ∫_{−T/2}^{T/2} f(−t) e^{−jkω_0 t} dt
= {v = −t} = (1/T) ∫_{−T/2}^{T/2} f(v) e^{jkω_0 v} dv
= (1/T) ∫_{−T/2}^{T/2} f(v) e^{−j(−k)ω_0 v} dv
= f_{−k}.

Time reversal hence corresponds to frequency reversal.

4. Conjugation: f̄(t) ↔ f̄_{−k}
The line spectrum d_k of f̄(t) follows as

d_k = (1/T) ∫_{−T/2}^{T/2} f̄(t) e^{−jkω_0 t} dt,

which is the complex conjugate of (1/T) ∫_{−T/2}^{T/2} f(t) e^{jkω_0 t} dt = f_{−k}. Hence d_k = f̄_{−k}. Conjugation in the time domain results in the frequency domain in conjugation combined with frequency reversal.


Property           Time domain: f(t) = Σ_{k=−∞}^{∞} f_k e^{jkω_0 t}     Frequency domain: f_k = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−jkω_0 t} dt

Linearity          αf(t) + βg(t)                                       αf_k + βg_k
Time-shift         f(t − τ), (τ ∈ R)                                   e^{−jkω_0 τ} f_k
Time-reversal      f(−t)                                               f_{−k}
Conjugation        f̄(t)                                                f̄_{−k}
Frequency-shift    e^{jnω_0 t} f(t), (n ∈ Z)                           f_{k−n}

Table 2.1: Properties of the Fourier series


5. Frequency shift: e j n0 t f (t) f kn
For a given n N, the signal g(t) with line spectrum fkn is determined as
g(t) =

f kn e j k0 t = {m = k n} =

f m e j (m+n)0 t

m=

k=

= e j n0 t

n
f m e j m0 t = e j n0 t f (t) = e j 0 t f (t).

m=

A shift in frequency by n corresponds in time domain to multiplication by the harmonic


signal e j n0 t .

2.4 Convolution and Parseval's theorem

2.4.1. Definition. The convolution or convolution product of two T-periodic signals f(t) and g(t) is the T-periodic signal (f ∗ g)(t) defined as

(f ∗ g)(t) = (1/T) ∫_{−T/2}^{T/2} f(τ) g(t − τ) dτ.   (2.15)

Convolution products in the time domain reduce to ordinary products of the respective line spectra in the frequency domain:

2.4.2. Theorem (Convolution theorem for periodic signals). Let f(t) and g(t) be two T-periodic piecewise smooth signals with line spectra f_k and g_k respectively. Then (f ∗ g)(t) is piecewise smooth, continuous, and its line spectrum (f ∗ g)_k satisfies

(f ∗ g)_k = f_k g_k,   (k ∈ Z).


Proof. We omit the proof that (f ∗ g)(t) is piecewise smooth and continuous, since the proof is technical but otherwise straightforward.
The line spectrum (f ∗ g)_k obeys

(f ∗ g)_k = (1/T) ∫_{−T/2}^{T/2} ( (1/T) ∫_{−T/2}^{T/2} f(τ) g(t − τ) dτ ) e^{−jkω_0 t} dt
= (1/T²) ∫_{−T/2}^{T/2} ∫_{−T/2}^{T/2} f(τ) g(t − τ) e^{−jkω_0 t} dτ dt
= {change order of integration}
= (1/T) ∫_{−T/2}^{T/2} f(τ) ( (1/T) ∫_{−T/2}^{T/2} g(t − τ) e^{−jkω_0 t} dt ) dτ
= {see Table 2.1, time-shift}
= (1/T) ∫_{−T/2}^{T/2} f(τ) e^{−jkω_0 τ} g_k dτ = ( (1/T) ∫_{−T/2}^{T/2} f(τ) e^{−jkω_0 τ} dτ ) g_k = f_k g_k.

Remark. Since f_k g_k = g_k f_k there holds that f ∗ g = g ∗ f, i.e., convolution products commute.

Figure 2.5: (a) a jumpy signal; (b) averaged with α = 0.03; (c) averaged with α = 0.09.
2.4.3. Example (Sliding window averaging). For a given T-periodic signal f(t) we construct the signal f̃(t) by averaging f(t) around t over an interval of a fixed length αT, α ∈ (0, 1), i.e., we consider

f̃(t) = (1/(αT)) ∫_{t−αT/2}^{t+αT/2} f(τ) dτ.

Averaging f(t) this way filters out high-frequency noise. It is to be expected, then, that f̃(t) is somewhat smoother than f(t), but as long as α is not too large the graph of the averaged f̃(t) should retain roughly the same shape as the graph of f(t). Figure 2.5(a) shows an example of a jumpy signal f(t). Figure 2.5(b) shows f̃(t) for the case that α = 0.03. In plot (c) of that figure the average was taken over a wider interval (α = 0.09) and as expected the plot is smoother than the one in (b).
The signal f̃(t) can be considered as the convolution of f with a suitable function g(t):

f̃(t) = (1/(αT)) ∫_{t−αT/2}^{t+αT/2} f(τ) dτ = {v = t − τ} = (1/(αT)) ∫_{−αT/2}^{αT/2} f(t − v) dv
= (1/T) ∫_{−T/2}^{T/2} f(t − v) g(v) dv = (f ∗ g)(t),

for

g(t) = 1/α   if |t| ≤ αT/2,
       0     if αT/2 < |t| ≤ T/2.

In the frequency domain the process of averaging hence means multiplying the line spectrum f_k with the line spectrum g_k of g(t):

g_k = (1/T) ∫_{−T/2}^{T/2} g(t) e^{−jkω_0 t} dt = (1/(αT)) ∫_{−αT/2}^{αT/2} e^{−jkω_0 t} dt
= {k ≠ 0} = (1/(jkω_0 αT)) ( e^{jkω_0 αT/2} − e^{−jkω_0 αT/2} )
= sin(kω_0 αT/2)/(kω_0 αT/2) = {ω_0 T/2 = π} = sin(kπα)/(kπα),

g_0 = (1/(αT)) ∫_{−αT/2}^{αT/2} dt = 1.

Therefore

f̃_k = ( sin(kπα)/(kπα) ) f_k.

Note that the function sin(kπα)/(kπα) tends to zero as k → ±∞. The high-frequency harmonics f_k e^{jkω_0 t} are therefore more attenuated than the lower-frequency harmonics. This agrees with our understanding of averaging. Also, the greater the averaging interval, the smaller is sin(kπα)/(kπα) for large k, i.e., the more are the high-frequency harmonics attenuated. Again this agrees with our understanding of averaging.
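The attenuation factor sin(kπα)/(kπα) is easy to explore numerically. The sketch below is not part of the original notes; it applies the factor to the line spectrum of a square-wave-like test signal (DC term omitted) and is only meant to visualize the smoothing.

T = 1; w0 = 2*pi/T; alpha = 0.09; K = 200;
t = linspace(0, 2*T, 2000);
f = zeros(size(t)); fav = zeros(size(t));
for k = [-K:-1 1:K]
  fk = (1-exp(-j*k*pi))/(2*pi*j*k);     % coefficients of a square wave
  gk = sin(k*pi*alpha)/(k*pi*alpha);
  f = f + fk*exp(j*k*w0*t);
  fav = fav + fk*gk*exp(j*k*w0*t);      % line spectrum multiplied by g_k
end
plot(t, real(f), t, real(fav), '--')    % the averaged signal is smoother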

As a T-periodic signal is fully determined by its line spectrum, it should be possible to express any property that f(t) may have in terms of its line spectrum. It is for example possible to express the power

P_f = (1/T) ∫_{−T/2}^{T/2} |f(t)|² dt

of a T-periodic signal f(t) in terms of the f_k's.


2.4.4. Theorem (Parseval's theorem for periodic signals). If f(t) is a piecewise smooth T-periodic signal, then

P_f = Σ_{k=−∞}^{∞} |f_k|².

Proof. Note that P_f may be obtained from the following expression for t = 0:

(1/T) ∫_{−T/2}^{T/2} f(τ) f̄(τ − t) dτ.

The above expression is the convolution product of f(t) and g(t) = f̄(−t). We have that P_f = (f ∗ g)(0). The line spectrum g_k of g(t) = f̄(−t) equals g_k = f̄_k (see Table 2.1), so by the convolution theorem (f ∗ g) has line spectrum (f ∗ g)_k = f_k f̄_k = |f_k|². Consequently

(f ∗ g)(t) = Σ_{k=−∞}^{∞} |f_k|² e^{jkω_0 t}.

Substitute t = 0 and the result follows.

2.4.5. Example. Let f(t) be the T-periodic sawtooth signal of Example 2.2.4. That is, f(t) = t − T/2 on [0, T) and periodically continued elsewhere. In Example 2.2.4 we found that f_k = j/(kω_0) for k ≠ 0 and that f_0 = 0. So on the one hand the power P_f equals

P_f = (1/T) ∫_0^T (t − T/2)² dt = T²/12,

and on the other hand, by Parseval's theorem,

P_f = Σ_{k∈Z, k≠0} 1/(kω_0)² = (T²/(4π²)) Σ_{k∈Z, k≠0} 1/k² = (T²/(2π²)) Σ_{k=1}^{∞} 1/k².

We conclude that

(T²/(2π²)) Σ_{k=1}^{∞} 1/k² = T²/12,

in other words, that

Σ_{k=1}^{∞} 1/k² = π²/6.

2.5. The Gibbs phenomenon

35

2.5 The Gibbs phenomenon



j k0 t
Since f (t) =
, it is tempting to think that for large N the partial sums sN (t) =
k= f k e
N
j k0 t
form a good approximation of f (t). Generally the quality of the approximation
k=N f k e
depends not only on N but very much on the value of t as well. In particular the approximation
near points of discontinuity may be unsatisfactory, no matter how large N is. An overshoot
phenomenon called the Gibbs phenomenon then occurs. The Gibbs phenomenon is illustrated
by an example.
1
2

2
t

Figure 2.6: A square wave with period 2 .


2.5.1. Example (Square wave). The T -periodic square wave f (t), with T = 2 , is defined
on (, ) as

1
if 0 t < ,
f (t) =
1 if t < 0

. The f k follow
and f (0) = 0, see Figure 2.6. The square wave is real-valued so that fk = f k
easily,

 0


1
j kt
j kt
e
dt +
e
dt
fk =
2
0

1 cos(k )
1
(e j kt e j kt ) dt =
=
2
j k
 0
0
if k is even,
=
2
if k is odd.
j k
The (2N 1)-th partial sum s2N1 (t) is equal to
s2N1 (t) =

N1

l=0

N1

2
4
(e(2l+1) j t e(2l+1) j t ) =
sin((2l + 1)t).
(2l + 1) j
(2l + 1)
l=0

It may be shown that s2N1 (t) takes on its maximal value at t = /(2N). Figure 2.7 shows
plots of the partial sums S2N1 (t) for N = 4, 8, 12, 16 on the interval [2 , 2 ], and the peak
values of s2N1 (t) are collected in Table 2.2. Note that the overshoot does not converge to zero
but to a value of about 0.17898.


Chapter 2. Periodic signals and their line spectra

36

s2N1 ( 2N
)

4
8
16
32
64
128
256

1.180284
1.179305
1.179061
1.179000
1.178985
1.178981
1.178980

Table 2.2: The Gibbs phenomenon. Peak value of s2N1 (t).

Figure 2.7: The partial sums s2N1 (t) for N = 4 and N = 8 (left) and N = 12 and N = 16
(right).

summable, i.e., if
If the Fourier coefficients fk are absolutely
k= | f k | < , then the
|
f
|
<
,
then
the maximal approxiGibbs phenomenon does not occur. Indeed, if
k
k=
mation error,



f k e j k0 t 
| f k |,
max | f (t) s N (t)| = max
tR

tR

|k|>N

|k|>N

which converges to zero as N . In such cases one says that the convergence from sN (t)
to f (t) is uniform in t.

2.6 Problems
2.1 Express sin2 (0 t + /3) as a superposition of complex harmonic signals and as superposition of sinusoids.
2.2 Suppose a T -periodic signal f (t) is such that its Fourier coefficients fk satisfy fk =

2.6. Problems

37

f k for all integers k. Show that f (t) is imaginary-valued (that is, that j f (t) is realvalued).
2.3 Given is a T -periodic signal f (t). Suppose, in addition that f (t) is even. Show that fk
is real and that fk = f k for any integer k.
2.4 Let f (t) be a T -periodic signal that on period [0, T ] is given by f (t) = rectT /2 (t T /2).
(a) Sketch the graph of f (t).
(b) Determine the Fourier coefficients fk of f (t).
(c) Sketch the amplitude and phase spectrum of f (t).
(d) Determine the real Fourier series of f (t).
(e) Write f (t) as an infinite sum of sinusoids.
2.5 Suppose f (t) is a 2 -periodic signal with line spectrum fk = 1/(k 2 + 1).
(a) Show that f (t) is real and even.
(b) Determine the line spectrum of f (t) cos2 (0 t).
(c) Determine the line spectrum of f (2t).
(d) Determine the phase spectrum of f (2t T /2).
2.6 Let f (t) be the 2 -periodic signal such that
f (t) = t 2

( t ).

(a) Determine the complex Fourier coefficients of f (t) and write down the Fourier
series of f (t).
(b) Determine the real Fourier series of f (t).
(c) What is the third harmonic of f (t)?
(d) What is the amplitude spectrum of f (t)?
(e) Calculate 1 14 +

1
9

1
16

+ .

2.7 Let f (t) be the 2 -periodic signal such that


f (t) = (t 3)2

(3 t 3 + ).

Determine the line spectrum of f (t).


2.8 Let f (t) be the 2 -periodic signal such that
f (t) = e4 j t t 2

( t ).

Determine the line spectrum of f (t).

Chapter 2. Periodic signals and their line spectra

38

2.9 Let f (t) be the 2 -periodic signal such that


f (t) = e2 j t t 2

( < t ).

Determine the line spectrum of f (t).


2.10 Let f (t) be the -periodic signal given by f (t) = sin2 t. Determine the second and third
harmonic of f (t).
2.11 Let f (t) be the T -periodic signal such that
f (t) = rectT /2 (t)

(T /2 t T /2).

Determine the line spectrum of f (t).


2.12 Given is the T -periodic signal f (t) that on one interval [0, T ] equals f (t) = |t T /2|.
(a) Show that f (t) has a real-valued line spectrum.
(b) Calculate the power of the first harmonic of f (t).
2.13 Let f (t) be the T -periodic signal with line spectrum fk and let 0 = 2/T .
(a) Determine the line spectrum of f (t) cos2 (0 t).
(b) Show that g(t) = f (t)e j 0 t/2 is periodic with period 2T .
(c) Determine the line spectrum and power of g(t).
2.14 Let f (t) be a T -periodic signal and let g(t) be the signal given by
g(t) =

1
a

t+a/2

f (u) du.

ta/2

Here we assume that 0 < a < T .


(a) Show that g(t) is T -periodic.
(b) Determine the line spectrum of g(t).
(c) What can you tell about g(t) for the cases that a = T and that a > T ?
2.15 Determine the power of the following signals.
(a) f (t) = cos(0 t) + 2 sin(0 t).
(b) f (t) = | sin(0 t)|.
2.16 Given are the T -periodic signals f (t) and g(t) with respective line spectra fk and gk , and
powers P f and Pg . In addition we are given that fk gk = 0 for every k, in other words,
the spectra of f and g do not overlap. Show that Pf +g = P f + Pg .

2.6. Problems

39

2.17 Let fn (t) be the T -periodic signal given by fn (t) = e j n0 t , where 0 = 2/T . Determine ( f n f m )(t) for every n, m Z.
2.18 The T -periodic signals f (t) and g(t) on the interval (0, T ] are given by f (t) =
rectT /2 (t T /4) and g(t) = rectT /2 (t 3T /4). Determine ( f g)(t).
2.19 Let f (t) be a real T -periodic signal with real Fourier series


1
ak cos(k0 t) + bk sin(k0 t) .
f (t) = a0 +
2
k=1

Which coefficients ak , bk are guaranteed to be zero if


(a) f (t) is even, that is, if f (t) = f (t) for all t,
(b) f (t) is odd, that is, if f (t) = f (t) for all t,
(c) f (t) has period T /2. (Explain.)
2.20 Consider a real-valued signal f (t) and its real Fourier series
2.3.1). Show
 (Theorem
2
2
Parsevals theorem for the real Fourier series: Pf = 14 a02 + 12
(a
+
b
k ).
k=1 k
More involved problems:
2.21 Suppose a given piecewise smooth signal f (t) is such that f (t + T /2) = f (t) for all t
and a certain fixed T > 0. Show that f (t) is periodic and that f2k = 0 for every integer
k.
2.22 Determine the period and line spectrum of the periodic signal
f (t) =

sin(2t) + sin(3t)
.
sin(t)

 T /2
2.23 Suppose we are given a T -periodic signal f (t) that satisfies T /2 f (t) dt = 0. The
t
signal g(t) is defined through integration of f (t) as g(t) = 0 f ( ) d.
(a) Show that g(t) is T -periodic.
T
Suppose in addition that 0 t f (t) dt = 0, then the line spectrum gk of g(t) follows from
the line spectrum fk of f (t) as

f k /( j k0) if k  = 0,
gk =
0
if k = 0.
(You do not need to show this.)
(b) Find a signal h(t) such that g(t) = ( f h)(t).

1
2.24 Find
n=1 n 4 . (Hint: Use Problem 2.6.)

Chapter 2. Periodic signals and their line spectra

40

Matlab problems:
2.25 In Example 2.3.2 we found the real line spectrum of f (t) = | sin( t)|,
ak =

1
4
,
1 4k 2

bk = 0.

To calculate in Matlab the partial sums


N
a0 
+
ak cos(kw0 t)
2
k=1

(2.16)

we open a file with name, say, mysum.m and enter the following code
function sn = mysum(t,N)
w0=2*pi;
sn=2/pi;
for k=1:N,
sn=sn+ (4/pi)*(1/(1-4*k^2))*cos(k*w0*t);
end

Then the sum (2.16) can be computed by typing the following commands at the Matlab
prompt.
t=0:0.01:1;
N=5;
sn=mysum(t,N);
plot(t,sn)
hold on
plot(t,abs(sin(pi*t)),red)
hold off

%
%
%
%
%
%
%

Discretized time
Calculate partial sum
Plot it
Keep this plot
Add a plot of f(t)

Try this Matlab code and then similarly plot the sum of the first N harmonics for N =
2, 5, 10 of the Fourier series of
f (t) = t ( t),

(0 t ).

3
Non-periodic signals and their continuous
spectra
This chapter is centered around a version of the Fourier theorem for non-periodic signals. Under
certain conditions, non-periodic signals f (t) can be seen as a continuous superposition of
harmonic signals, that is to say, as an integral of weighted harmonic signals

c()e j t d.
f (t) =

(cf. (2.2).) We assume throughout this chapter that the signals f (t) are piecewise smooth, and
that
f (t) =

f (t+) + f (t)
2

for every t. This may always be achieved by redefining f (t) at the points of discontinuity, if
necessary.

3.1 The Fourier integral theorem


The proof of the Fourier series theorem (Theorem 2.2.3) relies on two preliminary lemmas
(Lemma 2.2.1 and Lemma 2.2.2). Strengthened versions of these two lemmas will be used in
the Fourier integral theorem presented shortly. The two lemmas are the following.
3.1.1. Lemma. Suppose f (t) is absolutely integrable. Then

f (t)e j t dt = 0.
lim
||

(3.1)

3.1.2. Lemma. Suppose f (t) is absolutely integrable. Then



sin(at)
dt = f (0).
f (t)
lim
a
t

(3.2)


41

Chapter 3. Non-periodic signals and their continuous spectra

42

The proofs of the above two lemmas are almost identical to the proofs of Lemma 2.2.3 and
Lemma 2.2.2, and are therefore omitted here. Be aware that the lemmas assume our standing
assumption that f (t) = f (t+)+2 f (t) and that f (t) is piecewise smooth. Also note that the above
two lemmas reduce to Lemma 2.2.1 and Lemma 2.2.2 for the case that f (t) has finite duration1 .
In the above lemmas we assume that f (t) is absolutely integrable, which is something we have
not yet defined. A signal f (t) is said to be absolutely integrable if

| f (t)| dt < .

Roughly speaking this means that f (t) should go to zero fast enough as t . This
condition is needed in the proofs in order to make sense of the integrals (3.1) and (3.2). Usually,
if f (t) is not absolutely integrable, then (3.1) is not defined.
 For example if f (t) = 1, then
f (t) is not absolutely integrable, and (3.1) is not defined: e j t dt =?. With that out of
the way we can prove the famous result:
3.1.3. Theorem (The Fourier integral theorem). Let f (t) be an absolutely integrable signal. Then

1
F()e j t d,
(3.3)
f (t) =
2
where F() is the Fourier Transform of f (t), defined as

f (t)e j t dt.
F() =

(3.4)


Proof. Substituting the integral expression of F() in F()e j t d gives

 a
1
1
F()e j t d = lim
f ( )e j (t ) d d
a 2 a
2

 a
1
f ( )
e j (t ) d d
= {change order of integr.} = lim
a 2
a

e j a(t ) e j a(t )
1
d
f ( )
= lim
a 2
j (t )

sin(a(t ))
1
d.
f ( )
= lim
a
t
That the order of integration may be changed is due to the fact that f (t) is absolutely integrable.
Since


sin(a(t ))
sin(av)
d = {v = t } =
dv,
f ( )
f (t v)
t

we see that


sin(av)
1
1
j t
dv = {Lemma 3.1.2} = f (t).
F()e d = lim
f (t v)
a
2

v
1 Finite duration means that the signal is zero for all |t| large enough.

3.1. The Fourier integral theorem

43

Note the striking symmetry between the expressions for f (t) and F(). As it turns out, absolute
integrability of f (t) is enough to ensure the Fourier integral theorem to be valid, i.e., we need
not impose anything similar on F().
3.1.4. Example. The rectangular pulse recta (t) as defined in Definition 1.4.1 is bounded and
of finite duration, so it is absolutely integrable. Its Fourier transform F() equals

F() =

recta (t)e

j t


dt =

a/2

a/2

e j t dt =

Hence according to (3.3) we have that



1
sin(a/2) j t
recta (t) =
e d.
2
/2

sin(a/2)
.
/2

(3.5)


The function F() is known under a variety of names. It is called the Fourier transform of
f (t), and sometimes it is referred to as the spectrum or frequency spectrum of f (t). A plot of
F() as a function of is also quite often called the spectrum or frequency spectrum of f (t).
Since F() is generally complex-valued, a plot of F() consists generally of two parts, one
of its amplitude versus frequency, and one its phase versus frequency. The Fourier transform
F() is said to describe f (t) in the frequency domain or the -domain. For T -periodic signals
we found in the previous chapter that the frequency spectrum is a line spectrum which may
be regarded as a discrete signal defined at the frequencies k0 in the -domain. Non-periodic
signals as we see have a frequency spectrum that is build up from a continuum of frequencies.
The Fourier transform can reveal properties of the signal f (t) that may not be apparent so
easily from f (t) itself. Consider Example 3.1.4, where we computed the Fourier transform of
the rectangular pulse recta (t). Figure 3.1 shows for three values of a the corresponding recta (t)
and its Fourier transform. What we notice is that for small a the Fourier transform is smeared
out over a wide frequency range (Figure 3.1a,b). More important for our understanding of the
Fourier transform is to understand what happens when a is large, such as Figure 3.1(e,f). In
that case recta (t) is constant equal to 1 for a wide time-span. As we see from Figure 3.1(f), this
apparently implies that the Fourier transform is practically build up from the single frequency
= 0 only: For all other frequencies the Fourier transform F() is very small. Stated differently, the signal recta (t) for large a has its frequency content concentrated around = 0.
This, in hind side, is actually not so surprising, since F(0) being relatively large meansloosely
speakingthat

1
F()e j t d
f (t) =
2
approximately equals
 +
2
1
F(0)e0t
F()e j t d
2
2

Chapter 3. Non-periodic signals and their continuous spectra

44

f 1 (t) = rect1/2 (t)

F1 () =

sin( 21 /2)
/2

0
5
5

(a)
5

f 2 (t) = rect2 (t)

50

50

F2 () =

(b)

sin(2/2)
/2

0
55

(c)

f 3 (t) = rect5 (t)

50

50

F3 () =

(d)

sin(5/2)
/2

0
55
5

5 (e)

f 4 (t) = rect5 (t) cos(0 t), 0 = 5

50

50

(f)

F4 () = 1 (F3 ( + 0 ) + F3 ( 0 ))
2

4
2

0
55

(g)

50

50

(h)

Figure 3.1: Examples of signals fi (t) and their Fourier transforms Fi () .


which is constant as a function of time. For similar reasons it is to be expected that a signal
f (t) like
f (t) = recta (t) cos(0 t)
has its frequency content concentrated around frequency = 0 , that is, has a Fourier
transform F4 () with spikes around = 0 . Indeed, if we do the computation of F4 () then
we get what is shown in Figure 3.1(h).
3.1.5. Example. In the city of Vlissingen the water level f (t) of the sea is measured every
ten minutes. Figure 3.2 depicts the water level for a time span of three days and sixty days.
The first measurement in both plots is from 1 September 1989 at ten minutes past 11am. With
numerical recipes it is possible to compute the Fourier transform F() of f (t). The two plots
in the bottom half of the figure show |F()| over two ranges of frequencies. Note the huge

= 2. This tells us that f (t) is close to periodic, with period


spike of |F()| just to the left of 2
T 1/2, which means half a day. It represents the fluctuation of the water level due to the

= 4 and 2
= 6.
moons gravitational pull. Also note the little humps in |F()| at about 2
(We have something more to say about this in Chapter 4.) Can you explain the spike of |F()|

= 1?
at precisely 2


3.2. Fourier transform properties and examples

45

f (t) [meters]

f (t) [meters]

4
2
0

-2
-4

2
0

-2

-4

20

40

t [days]
100

|F()|

80

|F()|

60

t [days]

60
40
20
0
0

6
4
2
0
0

0.5

[cycles per day]

1.5

[cycles per day]

Figure 3.2: Vlissingen seawater level f (t) and its |F()|.

3.2 Fourier transform properties and examples


Fourier transform of f (t) was defined as the frequency domain function F() =
The

j t
dt. Often Fourier transform is also used to refer to the mapping F that sends
f (t)e
f (t) to F():

f (t)e j t dt.
F { f (t)} =

Likewise, the inverse Fourier transform refers either to the mapping F 1 that sends F() to
f (t) or refers to f (t) itself, seen as the result of a given F().
The connection between f (t) and F() is conveniently expressed as a transform pair
F

f (t) F().
3.2.1. Example. The rectangular pulse recta (t), (a > 0) has Fourier transform (see Example 3.1.4)
2 sin( a2 )
.
F() =

By the Fourier integral theorem it follows that



sin( a2 ) j t
e d.
recta (t) =

Note that F() is not absolutely integrable. If we interchange and t and replace with ,
then we get the odd looking result that

sin( a2 t) j t
e
dt = recta () = recta ().
t

Chapter 3. Non-periodic signals and their continuous spectra

46

The Fourier transform of the signal sin(at/2)/t apparently is recta () even though
sin(at/2)/t is not absolutely integrable. Moreover

sin( a2 t)
1
=
recta ()e j t d.
t
2
The formulas of the Fourier integral theorem remain valid in this case.

Several properties and rules of calculus are collected next.


F

1. Linearity: a1 f 1 (t) + a2 f 2 (t) a1 F1 () + a2 F2 ().


This rule holds for any complex a1 and a2 and it states that the Fourier transform F is a
linear mapping.
F

2. Reciprocity: F(t) 2 f ().


Interchange in (3.3) and (3.4) the symbols and t and then replace with , then

F(t)e j t dt,
2 f () =

so that F(t) 2 f ().


F

3. Conjugation: f (t) F ().

F { f (t)} =

f (t)e j t dt =

4. Time-scaling: f (at)

1
F( a ),
|a|



f (t)e j t dt

= F ().
F

(a R, a  = 0). In particular f (t) F().

If a > 0, then

F { f (at)} =

f (at)e

j t

1
dt = { = at} =
a

f ( )e j /a d =

1
F( ).
a a

If a < 0 then the integral gains a minus sign since the boundaries of integration and
swap.
F

5. Time-shift: f (t ) F()e j .

f (t )e j t dt = {v = t }
F { f (t )} =


f (v)e j (v+ ) dt = F()e j .
=

3.2. Fourier transform properties and examples

47

6. Frequency-shift: f (t)e j 0 t F( 0 ).

f (t)e j (0 )t dt = F( 0 ).
F { f (t)e j 0 t } =

This readily gives the modulation theorem:


1
1
F
f (t) cos(0 t) = ( f (t)e j 0 t + f (t)e j 0 t ) (F( 0 ) + F( + 0 )).
2
2
The use of the modulation theorem in AM radio transmission is discussed in Section 3.6.
F

7. Differentiation with respect to time: f  (t) j F().







j t
j t
f (t)e
dt = f (t)e
+ j
F { f (t)} =

f (t)e j t dt = j F(),

provided that limt f (t) = 0, which is usually the case if f (t) is absolutely integrable.
t
F
8. Integration with respect to time: f ( ) d
t
Let g(t) = f ( )d , then g (t) = f (t) and

lim g(t) =


f ( ) d = {for = 0} =

F()
,
j

provided that F(0) = 0.

f ( )e j d = F(0) = 0,

and, moreover, g(t) 0 as t because f (t) is assumed absolutely integrable.


Using partial integration we get that

F {g(t)} =

g(t)e j t dt = g(t)

e j t 
1
+
j
j

f (t)e j t dt =

F()
.
j

9. Differentiation with respect to frequency: j t f (t) F  ().


Partial integration gives




j t
j t
F ()e d = F()e
jt

F()e j t d = 2 j t f (t),

provided that lim F() = 0, but this is always the case for absolutely integrable
f (t) (see Lemma 3.1.1).
Note the symmetry between rules 5 and 6, and between rules 7 and 9. Table 3.1 collects
the various properties. Table 3.2 brings together some of the more standard Fourier transform
pairs. In the derivation of these transform pairs extensive use is made of the above properties.
The following examples clarify Table 3.2.

Chapter 3. Non-periodic signals and their continuous spectra

48

Property

Time domain
f (t) =

1
2

F()e j t d

Freq. domain
F() =

f (t)e j t dt

a1 f 1 (t) + a2 f 2 (t)

a1 F1 () + a2 F2 ()

Reciprocity

F(t)

2 f ()

Conjugation

f (t)

F ()

Time-scaling

f (at)

1
F( a )
|a|

Time-shift

f (t )

F()e j

Frequency-shift

f (t)e j 0 t

F( 0 )

f  (t)

j F()

Linearity

Differentiation (time)
Integration (time)
Differentiation (freq.)

t

f ( ) d

j t f (t)

Condition

F()
j

F ()

Table 3.1: Some standard Fourier transform properties.

a R, a  = 0

lim f (t) = 0

F(0) = 0

3.2. Fourier transform properties and examples

recta (t) =

f (t)
1 if |t| < 12 a,

1
0 if |t| > 2 a

1 |t|/a if |t| < a


triana (t) =

0
if |t| > a
ea|t|
t n at
e
n!

1(t)

tn! eat 1(t)


n

eat

49

F()
2 sin( 21 a)

a>0

4 sin2 ( 21 a)
a2

a R, a > 0

2a
a 2 +2

Re a > 0

1
(a+ j )n+1

Re a > 0

1
(a+ j )n+1

Re a < 0

2 /4a

a R, a > 0

recta ()

a R, a > 0

sin( a2 t)
t

Condition

Table 3.2: Some standard Fourier transform pairs.

3.2.2. Example (Rectangular and triangular pulse). The Fourier Transform of the rectangular pulse is derived in Example 3.1.4. The triangular pulse triana (t) is defined in Definition 1.4.1. Suppose f (t) is given as
1
f (t) = recta (t + a/2) recta (t a/2).
Note that F(0) =

f (t) dt = 0 and that

1
triana (t) =
a

f ( ) d.

Recall that recta (t) 2 sin(a/2)/. Hence based on integration in time (Rule 8) and time
shift (Rule 5) we get that
4 sin2 ( a2 )
1 2 sin( a2 ) a j /2
a j /2
(e
e
)=
.
triana (t)
j
a
a2
F

Chapter 3. Non-periodic signals and their continuous spectra

50


3.2.3. Example. In Example 1.2.7 we saw that for Re a > 0 there holds that 0 eat dt =
1/a. An immediate consequence is the frequency spectrum of f (t) = eat 1(t). Since Re(a +
j ) = Re(a) > 0 we have that


1
F
at
(a+ j )t
.
e
1(t) dt =
e(a+ j )t dt =
e 1(t)
a + j

0
Differentiating with respect to frequency (Rule 9) n times gives
 n
1
d
1
F
( j t)n eat 1(t)
= (1)n j n n!
,
d a + j
(a + j )n+1

(Re a > 0).

Therefore
1
t n at
F
e 1(t)
,
n!
(a + j )n+1

Re a > 0.

(3.6)

These transform pairs are for the cases where Re a > 0. If Re a < 0 then similarly it may be
shown that

1
t n at
F
e 1(t)
,
n!
(a + j )n+1

(Re a < 0).

The inverse Fourier transform of 1/(a + j )n+1 hence depends rather dramatically on a. If
n
Re a > 0 then the inverse Fourier transform is tn! eat 1(t) which is zero for all negative time,
n
but if Re a < 0 then the inverse Fourier transform is tn! eat 1(t) which is zero for positive
time.

3.2.4. Example. Suppose Re a > 0. Then
ea|t| = eat 1(t) + eat 1(t).
By linearity,
F

ea|t|

1
(a + j ) + (a + j )
2a
1
2a
+
=
=
= 2
.
2
2
a + j a + j
(a + j )(a + j )
a
a + 2


3.2.5. Example. To determine the frequency spectrum of the Gaussian function eat , (a > 0)
we make use of the following standard integral


2
et dt = .
2

eat e j t dt. As eat is an even function of t, we get that



2
eat cos(t) dt.
F() = 2

Let F() =

3.3. Examples

51

Now differentiate and use partial integration




1
2

at 2
F () = 2
te
sin(t) dt =
sin(t) deat
a 0
0
at 2

=
e
cos(t) dt = F().
a 0
2a
We end up with a first order differential equation

F  () = F().
2a
Next separate the variables,

F  ()
= .
F()
2a
After integrating both sides from = 0, we get that ln |F()| ln |F(0)| = 2 /4a, or,
F() = F(0)e

2 /4a

The value of F(0) follows as






at 2
2
.
e
dt = { = t/ a} =
e d =
F(0) =
a
a

Finally, then, we obtain



2 /4a
e
.
F() =
a
It is interesting to see that the Fourier transform of the Gaussian function is again a Gaussian
function.


3.3 Examples
Often the frequency spectrum F() can be found through a combination of the rules of Table 3.2.
3.3.1. Example. The frequency spectrum of recta (t) cos(0 t) and eat cos(0 t) 1(t) may be
obtained using the modulation theorem (Rule 6).
F

recta (t) cos(0 t)

sin( 12 a( 0 )) sin( 12 a( + 0 ))
+
,
0
+ 0

(a > 0)

(See Figure 3.1(h).) Likewise we find that


F

eat cos(0 t) 1(t)

a + j
,
(a + j )2 + 02

(Re a > 0).




Chapter 3. Non-periodic signals and their continuous spectra

52

3.3.2. Example. With help of the reciprocity rule (Rule 2) it follows that
a2

F
ea|| ,
2
+t
a

(Re a > 0),

and the modulation theorem gives


cos(0 t) F a|0 |
(e

+ ea|+0 | ),
a2 + t 2
2a

(Re a > 0).

To find the spectrum of


f (t) =

sin2 (at)
.
t2

we shall use the reciprocity rule (Rule 2). Since


F

trian2a (t)

4 sin2 (a)
2
= f (),
2
2a
a

we have by the reciprocity rule that


2
F
f (t) 2 trian2a () = 2 trian2a ().
a
Hence
sin2 (at) F
a trian2a ().
t2


The most important family of Fourier transforms are the rational functions of frequency.
3.3.3. Example (Rational functions in frequency domain). In this example we calculate the
inverse Fourier transform of a rational function of the form
F() =

pm ( j )m + pm1 ( j )m1 + + p1 ( j ) + p0
P( j )
=
.
Q( j )
qn ( j )n + qn1 ( j )n1 + + q1 ( j ) + q0

The coefficients pi and qi are assumed real. We shall further assume that the rational function
is strictly proper which means that the degree of the numerator P is less than that of the
denominator Q. Additionally we assume that Q has no zeros on the imaginary axis, i.e., that
Q( j )  = 0 for every R. In Chapter 4 when we consider generalized Fourier transforms,
we will have a way of coping with imaginary zeros of Q and also the case that P/Q is not
strictly proper can then be dealt with.
For rational functions there is a straightforward algorithm that always leads to an explicit
form of the inverse Fourier transform f (t). Here we illustrate it by example. Suppose that
F() =

6 j
.
( j + 1)(4 + 2 )

(3.7)

3.3. Examples

53

Now substitute s = j and perform a partial fraction expansion


6s
3
2
1
=

.
(s + 1)(4 s 2 )
s+2 s +1 s2
A partial fraction expansion of a rational function is an expansion of the rational function as a
sum of simple terms of the form /(s )k , such as done above. Appendix A discusses partial
fraction expansion in more detail and shows for example that a partial fraction expansion of a
strictly proper rational function always exists, and it explains how to construct a partial fraction
expansion. Continuing with the example, we see that
F() =

2
1
3

.
j + 2
j + 1
j 2

The inverse Fourier transform now follows from Table 3.2 as being
f (t) = (3e2t 2et ) 1(t) + e2t 1(t).


3.3.4. Example (Maple example). In Maple-V4 the inverse Fourier transform of (3.7) can be
produced by the following three commands.
with(inttrans):
F:=6*I*omega/(I*omega+1)/(4+omega^2):
invfourier(F,omega,t);

The outcome in Maples notation is


e2t Heaviside(t) + 3e2t Heaviside(t) 2et Heaviside(t)
In Maple, Heaviside(t) denotes the unit step 1(t) and the capital I refers to the imaginary unit
j . With the command with(inttrans) Maple acquires access to many Fourier related
commands. For example the Fourier transform of et 1(t) can now be obtained with a single
command
fourier(exp(-t)*Heaviside(t),t,omega);

The outcome is 1/(I + 1).

Chapter 3. Non-periodic signals and their continuous spectra

54

3.4 Convolution and correlation


An important operation on signals is the multiplication of the frequency spectra of two signals.
This operation is important in applications. Two applications that rely on this are the sampling
theorem and amplitude modulation discussed in Sections 3.5 and 3.6. In the previous chapter
we found that taking products of line spectra of periodic signals, corresponds in the time domain
to taking convolutions. For continuous frequency spectra a similar correspondence exists, but
with the definition of convolution appropriately adapted.
3.4.1. Definition (Convolution for non-periodic signals). The convolution or convolution
product of two signals f (t) and g(t) is the signal ( f g)(t) defined as

( f g)(t) =

f ( )g(t )d.


Convolution products commute, since



( f g)(t) =


f ( )g(t ) d = {v = t } =

g(v) f (t v) dv = (g f )(t).

Sufficient for the existence of ( f g)(t) is that f (t) is bounded while g(t) is absolutely
integrable, or the other way around. Another important class of signals for which convolutions
exist is the class of causal signals. These play a prominent role in the theory of systems,
considered in Chapter 5.

f (t)
0

Figure 3.3: An example of a causal signal.

3.4.2. Definition. A signal f (t) is causal if f (t) = 0 for all t < 0. (See Figure 3.3.)

A signal of the form f (t) 1(t) is causal because 1(t) = 0 for t < 0. The name causal may
seem a bit unfortunate, since there is not really a cause or an effect. The justification for
this name will emerge later when we consider systems in Chapter 5. As claimed, convolutions

3.4. Convolution and correlation

55

of causal signals are defined for every t. Indeed if f (t) and g(t) are causal, then



f ( ) 1( )g(t ) 1(t ) d
( f g)(t) = ( f 1) (g 1) (t) =


f ( )g(t ) 1(t ) d
=
0

0
if t < 0,
= t
if t 0,
0 f ( )g(t ) d

 t
f ( )g(t )d 1(t).
=
0

The convolution of two causal signals apparently is itself causal, and since for each t the integration above is over a finite interval [0, t] it follows that the convolution exists for every t and
every piecewise smooth f (t) and g(t).
3.4.3. Example. Convolution with the unit step amounts to integration:

 t
f ( ) 1(t ) d =
f ( ) d.
( f 1)(t) =

3.4.4. Example. Consider the rectangular pulse recta (t). The convolution of recta (t) with
itself is given by

(recta recta )(t) =


recta ( ) recta (t ) d =

a/2
a/2

recta (t ) d.

In the last integral replace t with v and what follows is



(recta recta )(t) =

t+a/2

recta (v) dv.

(3.8)

ta/2

The integrand recta (v) is 1 on the interval [a/2, a/2] and it is zero elsewhere. If the interval
[t a/2, t + a/2] over which is integrated has no overlap with the interval [a/2, a/2] where
the integrand recta (v) is nonzero, then the integral (3.8) will be zero. We distinguish four cases
for t corresponding to how the two intervals overlap: t < a, t > a and t [a, 0) and
t [0, a].
If t < a then t + a/2 < a/2, so that [t a/2, t + a/2] is entirely to the left of
[a/2, a/2]. The integral (3.8) is therefore zero for all t < a. Similarly, if t > a then
a/2 < t a/2 so the interval [t a/2, t + a/2] is entirely to the right of [a/2, a/2], and
(3.8) is therefore 0.
If t [a, 0) then the interval [t a/2, t + a/2] is to the left of
[a/2, a/2] but they
 t+a/2
overlap on the interval [a/2, t + a/2]. The integral (3.8) then equals a/2 1 dt = a + t.

Chapter 3. Non-periodic signals and their continuous spectra

56

If t [0, a/2] then the interval [t a/2, t + a/2] is to the right of


[a/2, a/2] with an
 a/2
overlap on the interval [t a/2, a/2]. The integral (3.8) now becomes ta/2 1 dt = a t.
In summary:

a |t| if |t| < a,
(recta recta )(t) =
0
if |t| > a.
The right-hand side may be recognized as a triana (t). We found that
(recta recta )(t) = a triana (t).

(3.9)


Next we formulate and prove the convolution theorem for the Fourier integral. In fact we
consider two versions of the convolution theorem, one for convolution in the time domain
and one for convolution in the frequency domain. In the proofs we shall silently assume that
changing the order of integration is allowed. It is allowed but we do not prove it.
F

3.4.5. Theorem (Convolution theorem in the time domain). Suppose that f (t) F()
F
and g(t) G(). Then
F

( f g)(t) F()G().
Proof. Determine the Fourier transform of ( f g)(t) as follows:



j t
e
f ( )g(t ) d dt
F {( f g)(t)} =




j t
f ( )
e
g(t ) dt d
=


F
f ( )G()e j d = F()G().
= {g(t ) G()e j 0 t } =

3.4.6. Theorem (Convolution theorem in the frequency domain). Suppose that f (t)
F
F() and g(t) G(). Then
F

f (t)g(t)

1
(F G)().
2

Proof. In the Fourier transform



f (t)g(t)e j t dt
F { f (t)g(t)} =

we substitute f (t) for its Fourier integral



1
F(u)e j ut du.
f (t) =
2

3.4. Convolution and correlation

57

Then, after changing the order of integration,


 
1
F(u)e j ut du g(t)e j t dt
F { f (t)g(t)} =
2


1
F(u)
g(t)e j (u)t dt du
=
2


1
1
F(u)G( u)du =
=
(F G)().
2
2

(3.10)

3.4.7. Example. In Example 3.4.4 it was shown that (recta recta )(t) = a triana (t). Application of the convolution theorem gives
F {triana (t)} =

4 sin2 ( 12 a)
1
(F {recta (t)})2 =
a
a2

This is in accordance with Table 3.2.


F

3.4.8. Example. Given that f (t) = eat 1(t) 1/(a + j ) for any Re a > 0, it follows
by the convolution theorem that



 t
1
a a(t )
e
e
d
1(t) = teat 1(t).
=
(
f

f
)(t)
=
F 1
(a + j )2
0


In the case of periodic signals we found a way to express the power of a periodic signal in terms
of its line spectrum. The result was called Parsevals theorem. Similarly there is a Parsevals
theorem for non-periodic signals that expresses the energy content of a non-periodic signal in
terms of its frequency spectrum. The energy content of a signal f (t) was defined as

| f (t)|2 dt.
Ef =

3.4.9. Theorem (Parseval). Let f (t) be a signal with E f < . Then



Ef =

| f (t)|2 dt =

1
2

|F()|2 d.


Proof. The rule f (t)g(t)




f (t)g(t)e

j t

1
(F
2

1
dt =
2

G)() when written out becomes

F(v)G( v) dv.

Chapter 3. Non-periodic signals and their continuous spectra

58

Now take = 0,


1
f (t)g(t) dt =
F(v)G(v) dv.
2

For a more symmetrical version, replace g(t) with g (t) and the corresponding Fourier transform G() with G () (See Rule 3). Then we get


1

f (t)g (t)dt =
F(v)G (v) dv.
(3.11)
2

This is an important equality. The result follows if we take g(t) = f (t).


3.4.10. Example. In Example 3.3.2 (for a = 1) we derived the Fourier transform pair
sin2 t
t2

trian2 (). With the help of Parseval we then get




1
sin4 t
dt =
t4
2

( trian2 ())2 d =
0

1
2
(1 )2 d = .
2
3


An operation closely related to the convolution product is the cross correlation of two signals.
3.4.11. Definition (Cross correlation). Let f1 (t) and f2 (t) be two signals with E f1 < and
E f2 < . The cross correlation 1,2 (t) of f 1 (t) and f2 (t) is defined as

f 1 (t + ) f 2 ( ) d.
1,2 (t) =

The Fourier transform of 1,2 (t) follows from the convolution theorem on noting that 1,2 (t)
is the convolution product of the signals f (t) = f1 (t) and g(t) = f2 (t) with respective
frequency spectra F1 () and F2 (). Hence
F

1,2 (t) F1 ()F2 ().

(3.12)

If f 2 (t) = f 1 (t) = f (t) then (t) = 1,1 (t) is called the autocorrelation of f (t). The spectrum
of (t) is therefore equal to F()F () = |F()|2 , which is called the energy spectrum or
spectral density of f (t). The inverse Fourier transform now yields the formula


1

f (t + ) f ( ) d =
|F()|2 e j t d.
(t) =
2

Substitute t = 0 and what follows is again Parsevals equality. Moreover it follows that


1
1
2 j t
|F()| |e | d =
|F()|2 d = (0),
|(t)|
2
2
In other words, the auto correlation is maximal for t = 0.

3.5. Shannons sampling theorem

59

f (t)

f [n]

g(t)

Figure 3.4: Original signal, sampled signal, held signal (via a zero order hold).

3.5 Shannons sampling theorem


Communication between the continuous-time we live in and the discrete-time world of computers, is done through sampling and holding devices. As explained briefly on page 1, sampling is
the act of taking values of a continuous-time signal f (t) at multiples of a fixed sampling period
T , resulting in a discrete-time signal f [n] := f (nT ), (n Z). A holding device is any device
that takes a discrete-time signal f [n] and produces a continuous-time signal. The most obvious
holding device is the zero order hold, which produces the piecewise constant continuous-time
signal g(t) such that g(nT + t) = f [n] for every t [0, T ) and n Z. Figure 3.4 illustrates
the idea.

f (t)

g(t)

2T 3T 4T 5T 6T 7T 8T

Figure 3.5: The signals f (t) and g(t) = f (t) + sin(T t) have identical samples.
It is to be expected that with sampling some information of the original continuous-time
signal f (t) is lost. It is unlikely that the samples f [n] are enough to reconstruct by some
sort of holding device the signal f (t). For example if of the signal f (t) shown in Figure 3.5
we are only given its samples f (nT ), then we can not be sure that the samples come from
f (t) and not from g(t) = f (t) + sin( T t), because g(t) and f (t) are identical at the sampling
instances t = nT . However, in this example the signal g(t) contains a term sin(T t) which
is a signal whose frequency may be unrealistically high if T is very small. If we know that
the samples are taken from a signal that does not have such high frequencies, then we can
discard g(t). In this section, we show that signals that are bandlimited can be reconstructed
from their samples f (nT ) provided that the sampling time is small enough, i.e., provided that

Chapter 3. Non-periodic signals and their continuous spectra

60

the sampling frequency s := 2/T is high enough.


3.5.1. Definition (bandlimited signals). A signal f (t) is bandlimited if F() = 0 for all
|| > b for some b > 0. The smallest such value b is the bandwidth of f (t).

Bandlimited thus implies that the signal f (t) is not build up from very high frequencies, so f (t)
is smooth and not very jumpy. The bandwidth is the highest frequency in f (t). Formulated this
way makes it possible to say that sinusoids have a bandwidth as welleven though sinusoids
are not Fourier transformableand that the bandwidth equals their frequency. A pathological
case of sampling is when we sample a sinusoid sin(b t) precisely at its zeros:

.
This happens when the sampling frequency s =

2
T

satisfies

s = 2b .

(3.13)

The value 2b is known as the Nyquist rate. To allow for reconstruction of a signal f (t) with
bandwidth b it suffices to take the sampling frequency higher than 2b :
F

3.5.2. Theorem (Shannons sampling theorem). Let f (t) F() and suppose that
F() = 0 for all || > b . Let f [n] = f (nT ), n Z be the sampled signal of f (t)
with sampling frequency s = 2/T . If s > 2b , then f (t) is uniquely determined by its
samples f [n] via

f (t) =

f [n]

n=

sin(s (t nT )/2)
.
s (t nT )/2

(3.14)

Proof. As F() = 0 for all || > s /2 > b we have that


f (t) = F

1
{F()} =
2

F()e

j t

1
d =
2

s /2

F()e j t d.

s /2

On the interval [s /2, s /2] we express F() as a Fourier series with period s ,
F() =

Fn e j nT ,

(s /2, s /2),

n=

in which T = 2/s . Note that T here is precisely the sampling period. Since F() has its
support on (s /2, s /2) we may multiply with the rectangular pulse rects without changing
the result,
F() =


n=

Fn e j nT rects (),

R.

(3.15)

3.5. Shannons sampling theorem

61

The Fourier coefficients Fn can be expressed as



1 s /2
Fn =
F()e j nT d
= {F() = 0 for || > s /2}
s s /2


2 1
1
j nT
F()e
d
=
F()e j nT d
=
s
s 2
= {inverse Fourier transform of f (t)}
2
f (nT ) = T f [n].
=
s
Now replace n by n and we see that (3.15) becomes
F() = T

f [n]e j nT rects ().

(3.16)

n=

From Table 3.2 we know that


sin(s t/2) F
rects (),
t
and with the help of the time-shift rule we then get
sin(s (t nT )/2) F
e j nT rects ().
(t nT )
With all this we can apply the inverse Fourier transform term by term to (3.16) and we get what
we wanted to show,
f (t) = T

f [n]

n=


sin(s (t nT )/2)
sin(s (t nT )/2)
=
.
f [n]
(t nT )
s (t nT )/2
n=

This completes the proof.


Compact discs store sampled signals that are sampled with a frequency of 44.1103 Hz. Knowing Shannons sampling theorem it should be no surprise that a frequency of 44.1 103 Hz is
about twice as much as what human hearing can detect. It is in fact a bit more than twice
the bandwidth of the human ear, but then again, signals stored on CDs are not actually reconstructed by the reconstruction formula (3.14), but by something more realistic. The ideal
reconstruction formula is not practical since its reconstructed f (t) as given by (3.14) depends
on all future and past f [n]. This would mean that the CD has to be read in its entirety first
before sound is produced! Not very practical.
Remark: Viewing it from a different angle, the signal f (t) defined by the reconstruction
formula (3.14) is the signal of smallest bandwidth that interpolates given points (nT, f [n]),
n Z.

Chapter 3. Non-periodic signals and their continuous spectra

62

f [n]

f [n] sin((tn))
(tn)

n
t
Figure 3.6: Graph of f [n]

sin((t n))
.
(t n)

3.5.3. Example. For T = 1 the reconstruction formula (3.14) becomes


f (t) =


n=

f [n]

sin((t n))
.
(t n)

in this sum is a function that is zero at all sampling instances t Z


Each term f [n] sin((tn))
(tn)
except at t = n where it equals f [n], see Figure 3.6.


(a)

2c

2c

2c

Frequency spectrum of m(t)

(b)

2c

Frequency spectrum of xam (t)

(c)

2c

2c

Frequency spectrum of y(t)


Figure 3.7: Frequency spectrum of m(t), xam (t) and y(t).

3.6. Amplitude modulation

63

3.6 Amplitude modulation


An interesting application of the Fourier transform and its properties is amplitude modulation
(AM). AM is a technique used for transmission of signals via electromagnetic waves over long
distances.
That we can listen to a multitude of radio programs transmitted via electromagnetic waves
is due to the fact that each broadcast station has its own (commonly government-assigned)
frequency-band within which they have to transmit their programs. Essentially all that a radio
does is allow us to roam the spectrum of electromagnetic waves and select a frequency band. In
the case of AM signals, the frequency bands are somewhere in the range of 30kHz to 300kHz.
The mathematics of AM is quite straightforward.
At the broadcast station the piece of music m(t) which is some bandlimited signal, is
multiplied with a sinusoid cos(c t) with frequency somewhere in the range of 30-300kHz. The
resulting product
xam (t) = m(t) cos(c t)
is called the amplitude modulated signal, and by the frequency-shift rule we see that it has
frequency spectrum

1
X am () = M( c ) + M( + c ) .
2
Now, m(t) is bandlimited with bandwidth much smaller than c , see Figure 3.7(a). The frequency spectrum of the modulated signal xam (t) therefore looks like the one in Figure 3.7(b).
It is a signal whose spectrum is more or less centered around c . This signal now is transmitted. Incidentally, since the two lobes of Xam () have the same shape as that of M() it will be
intuitively clear that amplitude modulation incurs no loss of information. Indeed, the original
signal m(t) can be completely recovered from its modulated xam (t). This is done at the users
end. The receiver multiplies the received signal xam (t) once again with cos(c t). The resulting
product
y(t) = xam (t) cos(c t)
has, by the frequency-shift rule, a frequency spectrum

1
Y () = X am ( c ) + X am ( + c )
2
1
1
1
= M( 2c ) + M() + M( + 2c ).
4
2
4
The frequency spectrum of y(t) is shown in Figure 3.7(c). The original signal m(t) now follows
as
M() = 2Y () rect2c (),
that is, removal of the high-frequency components of y(t) and then doubling the signal, brings
back m(t).
For reasons of exposition we glossed over several practical problems, such as how the
receiver knows the carrier angular frequency c .

Chapter 3. Non-periodic signals and their continuous spectra

64

3.7 Problems
3.1 Let f (t) be the signal

f (t) =

if 5 < t < 6,
if t < 5 or t > 6.

et
0

Determine the frequency spectrum of f (t). Are f (t) and its frequency spectrum absolutely integrable?
F

3.2 Let f (t) F() and 0 > 0. Determine the frequency spectra of the following
signals.
(a) f (t) sin(0 t),
(b) f (at)e j 0 t ,

(a  = 0),

(c) Re( f (t)),


(d) Im( f (t)).
3.3 Determine the frequency spectra of the following signals.
sin 4t
,
t
(b) triana (2t)
(a)

(a > 0),

(c) eat 1(t t0 ),


(d) te

at 2

(Re(a) > 0),

(a > 0),

(e) eat sin(0 t) 1(t),


1
,
(f)
1 + t2
1
.
(g) 2
t + 2t + 2
F

(Re(a) > 0),

3.4 Let f (t) F(). Determine the frequency spectra of the following signals.
(a) 2 f (3t 1),
(b) e2 j t f (t 2),
(c) t f (t),
(d) f ( 12 t),
(e) f (1 t),
(f) f (t) cos2 (0 t).

3.7. Problems

65

3.5 The signal f (t) is given by



1
(1 + cos( t/T )) if |t| T ,
f (t) = 2
0
if |t| > T .
Here T > 0. Determine the frequency spectrum of f (t).
F

3.6 Let f (t) F(). Determine f (t) for the cases that F() equals
(a) F() = rect2a ( 0 ) + rect2a ( + 0 ),
(b) F() =
(c) F() =

2+ j
,
4+5 j 2
9
.
(1+ j )2 (2+ j )

3.7 Determine the convolution ( f g)(t) for the following signals.


(a) f (t) = eat 1(t) and g(t) = ebt 1(t) with a > 0 and b > 0,

(b) f (t) = eat and g(t) = 1(t 1) (Re(a) > 0),


sin(t)
sin(t)
and g(t) =
with > 0 and > 0,
(c) f (t) =
t
t
(d) f (t) = rect2 (t) and g(t) = 1(t).
3.8 Given is the bandlimited signal with frequency spectrum
F() = || rect2 ().

(a) Is the signal f (t) uniquely determined by its samples at the time instance t =
0, 12 , 1, 32 , . . . ?
Motivate your answer.
(b) Determine f [n] for n Z.
(c) Determine the energy content of f (t).
3.9 Given are the signals
f (t) = e|t|

and

h(t) =

sin(at)
t

(a > 0).

Let g(t) be the convolution g(t) = ( f h)(t).


(a) For which values of a is the convolution uniquely determined by its samples at
t = 0, 1, 2, . . . ?
3.10 A signal f (t) is given whose frequency spectrum is
F() =

1
j + b

with b a real constant. Determine the frequency spectrum G() of the following signals
g(t).

Chapter 3. Non-periodic signals and their continuous spectra

66

(a) g(t) = f (5t 4),


(b) g(t) = t 2 f (t),
(c) g(t) = e2 j t f (t),
(d) g(t) = cos(4t) f (t),
(e) g(t) = f  (t),
(f) g(t) = ( f f )(t),
(g) g(t) = f 2 (t),
(h) g(t) =

1
.
j tb
F

3.11 Suppose f (t) F(). Determine f (t) for the cases that F() equals
(a) F() = j trian2 (),
(b) F() = e j t0 rect8 (),
(c) F() = cos() rect2 ().
More involved problems:
3.12 Let f (t) be an absolutely integrable signal and let the signal g(t) be given by
g(t) =

1
a

t+a/2

f (u) du.

ta/2

(a) Show that g(t) is absolutely integrable.


(b) Express the frequency spectrum of g(t) in terms of the frequency spectrum of f (t).
3.13 Let f (t) and g(t) be two continuously differentiable signals and suppose that the convolution h(t) = ( f g)(t) is well defined.
(a) Show that h  (t) = ( f  g)(t) = ( f g )(t).
(b) Show that h(t) is polynomial if g(t) is a polynomial.
Matlab problems:
3.14 One interesting byproduct of Shannons sampling theorem is that it gives a way how
to approximate F() of signals f (t) that are given by samples f [n]. The reasoning is
as follows. Of all f (t) that interpolate given samples f [n] at t = nT , the f (t) given
by (3.14) is the one with smallest bandwidth. This loosely speaking means that (3.14)
is the smoothest function that interpolates the samples. If the samples f [n] faithfully
capture the underlying f (t), then of all possible f (t), this smoothest interpolant (3.14)

3.7. Problems

67

is the most likely candidate. In the proof of Shannons sampling theorem we derived an
explicit expresseion for F() of (3.14):
F() = T

f [n]e j nT rects ().

n=

The Matlab macro fouriertrans computes this F() at some equidistant frequency
grid of [0, s ].
function [w,F]=fouriertrans(t,f,N);
%[w,F]=fouriertrans(t,f,N). Continuous-time Fourier transform
%
%
IN: t, vector of equidistant sampled time
%
f, vector of sampled function values f(t)
%
N, number of grid points (preferably N=2^k)
% OUT: w, row vector of sampled frequencies
%
F, row vector of sampled Fourier trans. F(w)
if N < length(f)
disp(Not all data is used.);
end
T=t(2)-t(1);
% sampling time
ws=2*pi/T;
% sampling frequency
points=1:(N/2);
w=(points-1)*ws/N;
% N/2 grid points from [0,ws/2]
F=T*fft(f(:),N);
% note: f(:) is a row vector
F=F(points).*exp(-j*t(1)*w); % T sum f[n]e^(-jnwT)rect_ws(w)

It makes use of the famous fast Fourier transform which is an ingenious trick to speed
up computation of discrete-to-discrete Fourier transforms (not considered in this course).
To find the amplitude transfer of f (t) = et 1(t)+sin(10t) we type at the Matlab prompt
T=0.1;
t=0:T:20;
f=exp(-t)+sin(10*t);
[w,F]=fouriertrans(t,f,512);
plot(w,abs(F));

Now let
f (t) =

%
%
%
%
%

The sampling time


Discretized time-axis
Sampled function
Do the computation
Plot magnitude transfer

t 1 if t [0, 2),
0
if t  [0, 2).

Make plots of the amplitude transfer of F() on the frequency interval [0, 20]. Do this
numerically with the help of fouriertrans and compare it with the exact |F()|
(use Maple). You may need to experiment with T and the length of the time-interval
over which f (t) is sampled.

68

Chapter 3. Non-periodic signals and their continuous spectra

4
Generalized functions and Fourier transforms
The Fourier theory of the two preceding chapters is still rather limited in scope since the signals
to which it applies are required to be piecewise smooth and absolutely integrable, with the single
exception of f (t) = sin(t)/t. The unit step 1(t) for one could not be Fourier transformed. A
signal f (t) whose F() is identical to 1 for all frequencies (assuming this makes sense) is
another signal that could not be dealt with.
In this chapter we shall see that with the introduction of the delta function or, more generally, with the introduction of the generalized functions we will have a way of dealing with
the above mentioned problems. A thorough treatment of delta functions is beyond the scope of
this course. We have to limit the treatment at various stages and appeal to intuitive arguments.
The properties in the end lead to a set of rules of calculus for delta functions that we may apply
without heading into problems.

4.1 The delta function


In applications we often encounter signals that have a very short duration but nevertheless have
a definite impact. An example is when you hit someone in his or her face (try it, its fun).
Such signals are called impulses. The mathematical function that represents an impulse is the
Dirac delta function or the unit pulse. The delta function (t) is often introduced as the limit as
n of


n
rn (t) =
0

n
if |t| <
if |t| >

1
,
2n
1
.
2n

(4.1)
1
2n

1
2n

As n goes to infinity, the rectangular pulses rn (t) become spikier and spikier, with
their spike

around t = 0, see Figure 4.1. However, the area between the spike and the x-axis, rn (t) dt
equals 1. We now naively define the delta function (t) as the limit
(t) = lim rn (t)

(4.2)

69

Chapter 4. Generalized functions and Fourier transforms

70

Figure 4.1: A series of rn (t) for n = 1, n = 2, n = 3 and n = 5.


and we think of the delta function as a function that is zero everywhere except at t = 0 where
it has a spike so large that
 0+
(t) dt = 1.
0

The delta function is usually depicted as done in Figure 4.2, i.e., depicted by the zero function
with a fat arrow pointing upwards at t = 0. The idea to see the delta function as a spike in this

Figure 4.2: The delta function (t).


sense is helpful, but mathematically it is far from sound. After all,

0
if t  = 0,
lim rn (t) =
n
if t = 0
and the integral of a function that is zero everywhere except for one point, is zero. Still,
for many applications with delta functions it is enough to see (t) as the limit (4.2). Many
tentative problems in calculations involving delta functions may be avoided if we stick to the
rule the-last-limit-you-take. By that is meant that in calculations with delta functions, first (t)
is replaced with the well defined rn (t) and only at the very last step the limit n is taken.
With this rule, for example, the following quintessential property of the delta function may be
derived.
4.1.1. Lemma. If f (t) is continuous at t = 0, then

(t) f (t) dt = f (0).

4.1. The delta function

71

Proof. First replace (t) with rn (t). The mean-value theorem gives that



rn (t) f (t) dt = n

1/(2n)

1/(2n)

1
f (t) dt = n( f (n )) = f (n )
n

for some n ( 1
, 1 ) depending on n. As a last step we take the limit n :
2n 2n



(t) f (t) dt = lim

rn (t) f (t) dt = lim f (n ) = { f (t) is continuous} = f (0).


n

4.1.1 Properties of the delta function


Delta functions can be added, they can be multiplied with regular functions, they can be integrated etcetera. In this subsection we review the more important properties and rules for delta
functions.
The scaled and shifted delta function (at b) we take to be defined as (at b) =
limn rn (at b). For t = b/a the argument at b is zero, so rn (at b) as a function of t
is centered around t = b/a and looks like
n
1
at b = 2n

at b = 1
2n

0
b/a

This is very much like a shifted copy of rn (t) with the difference that the spike does not have a
unit area. The width of the spike of rn (at b) is easily seen to be /|an|, so the area of the spike
is 1/|a|. In the limit as n the spike therefore approaches 1/|a| times the delta function
that has its spike at t = b/a:
(at b) =

b
1
(t ).
|a|
a

(4.3)

We can now generalize Lemma 4.1.1. If f (t) is continuous, then




b
1
(t ) f (t) dt
(at b) f (t) dt =
a

|a|

1
1
b
b
b
f ( ).
( ) f ( + ) d =
= { = t } =
a
|a|
a
|a| a
An immediate special case is that

(t b) f (t) dt = f (b),

(if f (t) is continuous at t = b).

(4.4)

Chapter 4. Generalized functions and Fourier transforms

72

(t b)
0

t =b

Figure 4.3: Shifted delta function (t b).


This property is known as the sifting property of the delta function. It is the property that out of
all values { f (t) : t R} that f (t) can take, the value at t = b is sifted out. Note that (t b)
has its spike at t = b, see Figure 4.3, therefore another way to interpret the sifting property is

that it says that (t b) f (t) dt equals f (t) at that t where (t b) has its spike.
It is also possible to determine the convolution product ( f )(t) of a signal f (t) and the
delta function (t).

( f )(t) =

(t ) f ( ) d = {sifting property} = f (t).

(4.5)

Here we used the fact that (t ) = ( t) as a function of has its spike at = t. A final
useful property to have is the following.
4.1.2. Lemma. If f (t) is continuous at t = b, then
f (t)(t b) = f (b)(t b).

(4.6)

Proof (idea). This is another instance were we shall use the rule the-last-limit-you-take. The
1
1
1
function f (t)rn (t b) is zero
 for all |t b| > 2n . On the interval [b 2n , b + 2n ] it generally
has a spike. Since limn f (t)rn (t b) dt = f (b) it means that the area of this spike
approaches f (b) as n , so that f (t)rn (t) approaches f (b) times the delta function (t
b).
4.1.3. Example.
1. t(t) = 0(t) = 0, i.e. the zero signal.
2. e j 0 t (t b) = e j 0 b (t b).
3. (t) = (t). This is because of the scaling property for a = 1 (Table 4.1).
4.

t

( ) d

(t).

= (1 )(t) =

1(t) for all t = 0. So the step is an indefinite integral of




4.2. Generalized derivatives

73

Property

Condition


Sifting

(t b) f (t) dt = f (b)

f (t)(t b) = f (b)(t b)

Convolution

( f )(t) = f (t)

Scaling

1
(at b) = |a|
(t ab )
t
( ) d = 1(t)

f (t) continuous at t = b
f (t) continuous at t = b

t = 0

Table 4.1: Properties and rules of calculus for the delta function.

4.2 Generalized derivatives


If two continuous signals f (t) and g(t) satisfy
 t
g( )d
f (t) = f (a) +
a

for a certain a R or a = , then we know that f (t) is continuously differentiable with


derivative f  (t) = g(t). The integral equality, however, can also hold for non-differentiable
functions f (t), and, even in that case we shall say that f (t) is differentiable and that g(t) is its
(generalized) derivative, notation: f  (t) = g(t). This generalization allows us to differentiate
discontinuous functions.
4.2.1. Example.
a) Let f (t) = |t| and consider the signal sgn(t) defined as

if t > 0,
1
sgn(t) := 0
if t = 0,

1 if t < 0.
The function sgn(t) is the generalized derivative of f (t) = |t|, because
 t
sgn( ) d.
|t| =
0

b) Let f (t) = 1(t). We showed earlier that


 t
1(t) =
( ) d.

So (t) is the generalized derivative of 1(t).

Chapter 4. Generalized functions and Fourier transforms

74

It may be shown that the product rule and chain rule of differentiation remain valid for derivatives in the generalized sense.

f (t)

f  (t)
1
1

Figure 4.4: f (t) = 1(t) 1(t 1) + et 1(t 1) and its generalized derivative f  (t).
4.2.2. Example. Consider the signal f (t) depicted in Figure 4.4. It is the signal
f (t) = 1(t) 1(t 1) + et 1(t 1).
Its generalized derivative f  (t) equals
f  (t) = (t) (t 1) et 1(t 1) + et (t 1)
= (t) (t 1) et 1(t 1) + e (t 1)
= (t) (1 e )(t 1) et 1(t 1).

The generalized derivative f  (t) is depicted in Figure 4.4(b).

We note that differentiation of a function at a point of discontinuity results in a delta function.


If f (t) at t = b jumps from f (b) to f (b+), then in the generalized derivative a delta function
( f (b+) f (b))(t b) shows up.
4.2.3. Example. The functions y(t) = et 1(t) and u(t) = 1(t) are a (generalized) solution
of the differential equation
y(t) + y  (t) = u  (t)
because

y(t) + y  (t) = et 1(t) + et 1(t) + et (t) = (t) = u  (t).




4.3. Generalized Fourier transforms

75

4.3 Generalized Fourier transforms


It makes sense to define the Fourier transform of the delta function using the sifting property,
that is,

(t)e j t dt = 1.
(4.7)
F {(t)} =

From Theorem 3.1.3 we know that absolutely integrable signals can be recovered from their
Fourier transform through an inverse Fourier transform which is in the form of an integral. In
the case of the delta function, however, this integral

1
e j t d
2
diverges. In a proper setup the inverse Fourier transform can be given a meaning and can be
shown to equal (t), see Section 4.4. For now however we simply define F {(t)} = 1. Its
implication that (t) is build up from all frequencies with equal weight F() = 1 is bit
difficult to interpret. Delta functions in the frequency domain () have a more appealing
interpretation. Consider F() = () and apply it to the inverse Fourier transform (assuming
this makes sense)

1 j 0t
1
1
e =
.
()e j t d = {sifting property} =
f (t) =
2
2
2
 1 j t
e
dt is now not defined, but also
This is a constant signal. The Fourier transform 2
in this case it is possible to give it a meaning and show that the Fourier transform equals
F() = (), see Section 4.4.
Summarizing, delta functions in one domain correspond to constant functions in the other.
4.3.1. Example.

a) $\delta(t-b) \overset{\mathcal{F}}{\longleftrightarrow} e^{-j\omega b}$.
   This is a direct consequence of the time-shift rule and the fact that $\delta(t) \overset{\mathcal{F}}{\longleftrightarrow} 1$.

b) $e^{j\omega_0 t} \overset{\mathcal{F}}{\longleftrightarrow} 2\pi\,\delta(\omega - \omega_0)$.
   This is a direct consequence of the frequency-shift rule and the fact that $1 \overset{\mathcal{F}}{\longleftrightarrow} 2\pi\,\delta(\omega)$.

c) $\cos(\omega_0 t) \overset{\mathcal{F}}{\longleftrightarrow} \pi\big(\delta(\omega + \omega_0) + \delta(\omega - \omega_0)\big)$.
   It follows from the Modulation theorem (Page 47).

That the Fourier transform of f(t) = cos(ω₀t) equals

$$F(\omega) = \pi\big(\delta(\omega + \omega_0) + \delta(\omega - \omega_0)\big),$$


agrees with our understanding of what the Fourier transform F(ω) entails. The above function F(ω) consists of two spikes, one spike at frequency ω₀ and one at −ω₀. Its frequency content is therefore concentrated at the frequencies ±ω₀ only, and does not depend on any other frequency. Indeed, cos(ω₀t) is like that.

As a last example we consider the frequency spectrum of a piecewise smooth T-periodic signal f(t) with line spectrum f_k. By Theorem 2.2.3 the signal f(t) may be written as the sum of harmonic signals

$$f(t) = \sum_{k=-\infty}^{\infty} f_k\,e^{jk\omega_0 t} \qquad (-\infty < t < \infty).$$

With the help of Table 4.2 and the superposition principle (i.e., that the Fourier transform of a sum of $f_k e^{jk\omega_0 t}$ is the sum of the Fourier transforms of the $f_k e^{jk\omega_0 t}$) we find that

$$F(\omega) = 2\pi \sum_{k=-\infty}^{\infty} f_k\,\delta(\omega - k\omega_0).$$

The Fourier transform of a periodic signal consists of a train of delta functions. That the superposition principle applies will not be shown here.

Table 4.2 collects some generalized Fourier transform pairs, including some that we did not yet treat. The rules that hold for the classical Fourier transform remain valid if we extend it with the Fourier transform pairs of Table 4.2 (proof is omitted). In those rules any derivative should now be understood to mean the generalized derivative.

    f(t)            F(ω)
    δ(t)            1
    1               2π δ(ω)
    δ(t − b)        e^{−jωb}
    e^{jω₀t}        2π δ(ω − ω₀)
    cos(ω₀t)        π(δ(ω − ω₀) + δ(ω + ω₀))
    sgn(t)          2/(jω)
    1(t)            1/(jω) + π δ(ω)

Table 4.2: Some generalized Fourier transform pairs.

4.3.2. Example. Let f(t) = e^{−t} 1(t). Then f′(t) = e^{−t} δ(t) − e^{−t} 1(t) = δ(t) − e^{−t} 1(t). The Fourier transform of f′(t) equals 1 − 1/(1 + jω) = jω/(1 + jω). Via the differentiation rule, the Fourier transform of f′(t) equals the Fourier transform F(ω) = 1/(1 + jω) of f(t) multiplied by jω. Indeed, this gives the same result.
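As a quick numerical cross-check of this example (a sketch of our own, not part of the original text; the truncation point t = 50 is an arbitrary choice where e^{−t} is negligible), one can approximate the Fourier integral of f(t) = e^{−t} 1(t) in Matlab and compare it with F(ω) = 1/(1 + jω):

    % Approximate F(w) = int_0^inf exp(-t) exp(-j w t) dt, truncated at t = 50,
    % and compare with the exact transform 1/(1 + jw).
    w = 2;                                      % an arbitrary test frequency
    Fnum = integral(@(t) exp(-t).*exp(-1j*w*t), 0, 50);
    Fexact = 1/(1 + 1j*w);
    disp(abs(Fnum - Fexact))                    % of the order of machine precision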



Even the convolution theorems remain valid. An important consequence of the convolution theorem has to do with Rule 8 on page 47 about integration with respect to time. In that rule it was assumed that F(0) = 0. Now, in the generalized sense, we may dispense with this assumption and generalize the rule as follows. First we express $\int_{-\infty}^{t} f(\tau)\,d\tau$ as a convolution

$$\int_{-\infty}^{t} f(\tau)\,d\tau = (f * 1)(t).$$

Now let $f(t) \overset{\mathcal{F}}{\longleftrightarrow} F(\omega)$, then application of the convolution theorem in the time-domain (see Theorem 3.4.5) yields

$$\int_{-\infty}^{t} f(\tau)\,d\tau \;\overset{\mathcal{F}}{\longleftrightarrow}\; F(\omega)\,\mathcal{F}\{1(t)\} = F(\omega)\Big(\frac{1}{j\omega} + \pi\,\delta(\omega)\Big) = \frac{F(\omega)}{j\omega} + \pi F(0)\,\delta(\omega).$$

Note that for F(0) = 0 we recover Rule 8 on Page 47.

4.4 Introduction to generalized functions

In this section we shall indicate (and not more than that) how to arrive at a more solid foundation of delta functions, generalized Fourier transforms and generalized derivatives.

Delta functions and derivatives of discontinuous functions cannot properly be seen as functions. A proper foundation of such functions therefore needs a generalization of the notion of function. By far the most popular such generalization is that of distributions or generalized functions. As a motivation we shall take a closer look at the result of Lemma 4.1.1 that

$$\int_{-\infty}^{\infty} \delta(t)\,\phi(t)\,dt = \phi(0) \qquad (4.8)$$

for every φ(t) continuous at t = 0. That is, the delta function maps continuous φ(t) to φ(0). You may want to convince yourself that no other function can have this property. Since (4.8) characterizes δ(t) uniquely, we may take that as the definition of the delta function. That is, we define the delta function not by its function values as one would normally do, but by the way it acts on φ(t): the delta function is a linear mapping that sends continuous φ(t) to φ(0). When, like here, a function is defined through how it acts on φ(t), then we say that it is a generalized function or distribution. The delta function defined like this, as a generalized function, is properly defined. Any piecewise smooth function can be seen as a generalized function, but not every generalized function is a piecewise smooth function. This leads to the generalization of the concept of function that we alluded to in the beginning of this section. Formally:

4.4.1. Definition. A test function φ(t) is a function that is infinitely often continuously differentiable and has finite duration. A generalized function or distribution is a linear continuous mapping that sends test functions φ(t) to complex numbers.



We shall not go into the type of continuity here. Up to this definition we allowed for general continuous φ(t), but restricting the φ(t) to test functions has advantages. The standard test function is

$$\phi_0(t) = \begin{cases} e^{-\frac{1}{1-t^2}} & \text{if } |t| < 1,\\ 0 & \text{elsewhere.} \end{cases}$$

This is shown in Figure 4.5(a). By shifting and scaling it is possible to make many more test functions.

Figure 4.5: Some test functions: φ₀(t) (with maximum 1/e at t = 0) and scaled, shifted versions c φ₀(at − b).


Any piecewise smooth function f (t) gives rise to a generalized function

f (t)(t) dt.
(t) 

(4.9)

Such mappings identify f (t) uniquely for almost all t and this is the reason that one usually
makes no distinction between a function f (t) and the generalized function (4.9), even though
one is a function and the other is a mapping.
4.4.2. Example.
1. The unit step

0 (t) dt.

1(t) seen as a generalized function, maps (t) to

1(t)(t) dt =


2. The generalized function f (t) = t2 maps (t) to 0 t 2 (t)dt. Note that this integral is
defined for every test function. (If we would have allowed every continuous (t) then
the integral does not always exist. This is an instance where we see that test functions
are convenient.)

3. The delta function (t) maps (t) to (0).




The game is next to generalize the notions like sum, product, limit, etcetera, available for regular functions to that of generalized functions. Of course we want the generalization to be such that these notions for regular functions are the same as when seen as generalized functions. A complete list of all generalizations is too much for an introductory exposition like ours. We shall confine ourselves to the notion of limit, which is an intriguing one, and later we shall generalize the derivatives and Fourier transforms.


4.4.3. Definition. Let f_n(t) be a sequence of functions (possibly generalized functions). We say that f_n(t) has a generalized limit for n → ∞ if for each test function φ(t) the limit $\lim_{n\to\infty} \int_{-\infty}^{\infty} f_n(t)\,\phi(t)\,dt$ exists. The generalized limit f(t) is denoted as f(t) = "lim"_{n→∞} f_n(t).

If the f_n(t) have a limit in a regular sense to a regular function f(t), say,

$$\lim_{n\to\infty}\ \max_{t\in[a,b]} |f_n(t) - f(t)| = 0 \qquad \text{for all } a < b,$$

then f_n(t) also has a limit in the generalized sense, with the same limit f(t) = "lim"_{n→∞} f_n(t). But with generalized limits we are able to take limits that hitherto could not be taken.
Figure 4.6: sin(at)/(πt) for a = 10.


4.4.4. Example.

1. By Lemma 3.1.2 there holds for any absolutely integrable piecewise smooth function f(t) that $\lim_{a\to\infty} \int_{-\infty}^{\infty} f(t)\,\frac{\sin(at)}{\pi t}\,dt = f(0)$. Now test functions are piecewise smooth and absolutely integrable, so we infer that

$$\lim_{a\to\infty} \int_{-\infty}^{\infty} \frac{\sin(at)}{\pi t}\,\phi(t)\,dt = \phi(0).$$

Here we recognize the defining property of the delta function. We remarkably have that "lim"_{a→∞} sin(at)/(πt) = δ(t). See Figure 4.6.

2. "lim"_{a→∞} e^{jat} = 0. This is a direct consequence of Lemma 3.1.1.

3. "lim"_{a→∞} rect_a(t) = 1. Indeed, $\int_{-\infty}^{\infty} \mathrm{rect}_a(t)\,\phi(t)\,dt = \int_{-a/2}^{a/2} \phi(t)\,dt$ and this converges to $\int_{-\infty}^{\infty} \phi(t)\,dt$ as a → ∞.

4. Less standard is the generalized limit "lim"_{a→∞} cos(at)/(πt) = 0. The function cos(at)/(πt) has a pole at t = 0. In that case the integral $\int_{-\infty}^{\infty} \frac{\cos(at)}{\pi t}\,\phi(t)\,dt$ has to be replaced by its principal value $\lim_{\epsilon\downarrow 0} \int_{|t|>\epsilon} \frac{\cos(at)}{\pi t}\,\phi(t)\,dt$. So

$$\lim_{\epsilon\downarrow 0} \int_{|t|>\epsilon} \frac{\cos(at)}{\pi t}\,\phi(t)\,dt = \lim_{\epsilon\downarrow 0}\Big(\int_{-\infty}^{-\epsilon} \frac{\cos(at)}{\pi t}\,\phi(t)\,dt + \int_{\epsilon}^{\infty} \frac{\cos(at)}{\pi t}\,\phi(t)\,dt\Big)$$
$$= \{t = -\tau \text{ in the first integral}\} = \lim_{\epsilon\downarrow 0}\Big(\int_{\epsilon}^{\infty} \frac{\cos(a\tau)\,\phi(-\tau)}{-\pi\tau}\,d\tau + \int_{\epsilon}^{\infty} \frac{\cos(at)\,\phi(t)}{\pi t}\,dt\Big) = \lim_{\epsilon\downarrow 0} \int_{\epsilon}^{\infty} \cos(at)\,\frac{\phi(t) - \phi(-t)}{\pi t}\,dt,$$

and this limit exists because

$$\lim_{t\downarrow 0} \frac{\phi(t) - \phi(-t)}{t} = 2\,\phi'(0)$$

exists. We conclude that the principal value of $\int_{-\infty}^{\infty} \phi(t)/(\pi t)\,dt$ exists for every test function φ(t). In the limit a → ∞ it equals zero:

$$\lim_{\epsilon\downarrow 0} \int_{\epsilon}^{\infty} \cos(at)\,\frac{\phi(t)-\phi(-t)}{\pi t}\,dt = \operatorname{Re} \int_{-\infty}^{\infty} e^{jat}\,\frac{\phi(t)-\phi(-t)}{\pi t}\,1(t)\,dt = \Big\{\text{Lemma 3.1.1 for } f(t) = \tfrac{\phi(t)-\phi(-t)}{\pi t}\,1(t)\Big\} \to 0 \quad\text{as } |a|\to\infty.$$

"lim"n n rect1/n (t) = (t).


"lim"a
"lim" 0

sin(at)
t

= (t).

2
1 et /
2

= (t).

"lim"a e j at = 0.
"lim"a

cos(at)
t

= 0.

Table 4.3: Some generalized limits.
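The first limit in Table 4.3 is easy to test numerically (a sketch of our own; the bump test function φ₀ of Figure 4.5 and the values of n are arbitrary choices): the action of n rect_{1/n}(t) on φ₀(t) should approach φ₀(0) = e^{−1} ≈ 0.3679.

    % Action of n*rect_{1/n}(t) on the standard bump test function phi0:
    % n * integral of phi0 over [-1/(2n), 1/(2n)] should tend to phi0(0) = exp(-1).
    phi0 = @(t) exp(-1./(1 - t.^2)) .* (abs(t) < 1);
    for n = [10 100 1000]
        I = integral(@(t) n*phi0(t), -1/(2*n), 1/(2*n));
        fprintf('n = %4d:  %.6f   (phi0(0) = %.6f)\n', n, I, exp(-1));
    end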

4.4.1 Generalized derivatives

Suppose that f(t) = "lim"_{n→∞} f_n(t) and that every f_n(t) is a regular differentiable function with derivative f_n′(t). We shall say that then f(t) has generalized derivative f′(t) defined as

$$f'(t) = \text{"lim"}_{n\to\infty}\ f_n'(t).$$


A definition like this is only then sensible if "lim"_{n→∞} f_n′(t) exists and only depends on f(t) and not on f_n(t). Every sequence of f_n(t) that converges to f(t) should give the same result f′(t). This is the case, which can be seen with partial integration,

$$\lim_{n\to\infty} \int_{-\infty}^{\infty} f_n'(t)\,\phi(t)\,dt = \lim_{n\to\infty}\Big( f_n(t)\,\phi(t)\Big|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f_n(t)\,\phi'(t)\,dt\Big) = -\lim_{n\to\infty} \int_{-\infty}^{\infty} f_n(t)\,\phi'(t)\,dt = -\int_{-\infty}^{\infty} f(t)\,\phi'(t)\,dt$$

(the boundary term vanishes because test functions have finite duration). This shows that the definition of f′(t) only depends on f(t) as it should, and it also shows that "lim"_{n→∞} f_n′(t) exists whenever "lim"_{n→∞} f_n(t) exists.
4.4.5. Example. The unit step 1(t) may be expressed as the generalized limit as n → ∞ of

$$f_n(t) = \frac{1}{2} + \frac{\arctan(nt)}{\pi}.$$

(Convince yourself of this.) The derivative of f_n(t) is

$$f_n'(t) = \frac{1}{\pi}\,\frac{n}{1 + (nt)^2},$$

which may be shown to converge (in the generalized sense) to the delta function. We found once again that 1′(t) equals "lim"_{n→∞} (1/π) n/(1 + (nt)²) = δ(t).
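The convergence of f_n′(t) to the delta function can likewise be made plausible numerically (again a sketch with the bump test function φ₀; our own illustration, not in the original text): the action of f_n′(t) on φ₀(t) should approach φ₀(0).

    % f_n'(t) = (1/pi) * n/(1 + (n t)^2) acting on the bump test function phi0;
    % the integrals approach phi0(0) = exp(-1) as n grows (roughly like 1/n).
    phi0 = @(t) exp(-1./(1 - t.^2)) .* (abs(t) < 1);
    fnp  = @(t,n) (n/pi)./(1 + (n*t).^2);
    for n = [10 100 1000]
        I = integral(@(t) fnp(t,n).*phi0(t), -1, 1);
        fprintf('n = %4d:  %.6f   (phi0(0) = %.6f)\n', n, I, exp(-1));
    end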

4.4.2 Generalized Fourier transforms

Like with generalized derivatives we shall define generalized Fourier transforms through limits. Suppose that f(t) = "lim"_{n→∞} f_n(t) and that every f_n(t) is piecewise smooth and absolutely integrable so that it has a Fourier transform F_n(ω). Now f(t) need not be absolutely integrable and as such the Fourier theory of Chapter 3 generally does not apply to f(t). We shall say that f(t) := "lim"_{n→∞} f_n(t) has a generalized Fourier transform F(ω) defined as

$$F(\omega) = \text{"lim"}_{n\to\infty}\ F_n(\omega).$$

A definition like this is only then sensible if "lim"_{n→∞} F_n(ω) exists and only depends on f(t) and not on f_n(t). Every sequence of f_n(t) that converges to f(t) should give the same "lim"_{n→∞} F_n(ω) and this limit should exist. As it stands this is not the case. The workaround is to replace the test functions by what are called tempered test functions φ(t), which are the infinitely often differentiable functions that are polynomially bounded in that t^n φ^{(m)}(t) is bounded for every n, m ∈ ℕ. It may be shown that the Fourier transform is a bijection on the set of tempered test functions, and that is a property that we shall need. Consider

$$\int_{-\infty}^{\infty} F_n(\omega)\,\phi(\omega)\,d\omega = \int_{-\infty}^{\infty}\Big(\int_{-\infty}^{\infty} f_n(t)\,e^{-j\omega t}\,dt\Big)\phi(\omega)\,d\omega$$
$$= \{\text{change order of integration}\} = \int_{-\infty}^{\infty} f_n(t)\Big(\int_{-\infty}^{\infty} \phi(\omega)\,e^{-j\omega t}\,d\omega\Big)dt = \int_{-\infty}^{\infty} f_n(t)\,\psi(t)\,dt$$


where $\psi(t) := \int_{-\infty}^{\infty} \phi(\omega)\,e^{-j\omega t}\,d\omega$, which equals $2\pi\,\mathcal{F}^{-1}\{\phi(\omega)\}$ evaluated at −t. In the limit, therefore,

$$\int_{-\infty}^{\infty} F(\omega)\,\phi(\omega)\,d\omega = \lim_{n\to\infty} \int_{-\infty}^{\infty} f_n(t)\,\psi(t)\,dt.$$

Since the Fourier transform is a bijection on the set of tempered test functions, it follows that ψ(t) is a tempered test function as well, so that the above generalized limit exists,

$$\int_{-\infty}^{\infty} F(\omega)\,\phi(\omega)\,d\omega = \int_{-\infty}^{\infty} f(t)\,\psi(t)\,dt.$$

This expression for F(ω) only depends on f(t) as it should, and moreover is well defined whenever f(t) = "lim"_{n→∞} f_n(t) exists.
4.4.6. Example.

1. $\mathcal{F}\{\delta(t)\} = \text{"lim"}_{n\to\infty}\ \mathcal{F}\{n\,\mathrm{rect}_{1/n}(t)\} = \text{"lim"}_{n\to\infty}\ \dfrac{\sin(\omega/(2n))}{\omega/(2n)} = 1.$

2. Since $\lim_{a\to\infty} \int_{-\infty}^{\infty} \mathrm{rect}_a(t)\,\phi(t)\,dt = \lim_{a\to\infty} \int_{-a/2}^{a/2} \phi(t)\,dt = \int_{-\infty}^{\infty} \phi(t)\,dt$ holds for every test function φ(t), we get that

$$\text{"lim"}_{a\to\infty}\ \mathrm{rect}_a(t) = 1.$$

Then because of Table 4.3 we arrive at

$$\mathcal{F}\{1\} = \text{"lim"}_{a\to\infty}\ \mathcal{F}\{\mathrm{rect}_a(t)\} = \text{"lim"}_{a\to\infty}\ \frac{2\sin(\frac{\omega a}{2})}{\omega} = 2\pi\,\delta(\omega).$$

The two pairs $\delta(t) \overset{\mathcal{F}}{\longleftrightarrow} 1$ and $1 \overset{\mathcal{F}}{\longleftrightarrow} 2\pi\,\delta(\omega)$ suggest that the reciprocity rule remains valid for generalized Fourier transforms. That is indeed the case (proof is omitted).
4.4.7. Example.

1. To determine the Fourier transform of the unit step 1(t) we shall make use of the fact that

$$\text{"lim"}_{a\to\infty}\ \mathrm{rect}_a(t - a/2) = 1(t).$$

(The pulse rect_a(t − a/2) has height 1 precisely on the interval (0, a).) The Fourier transform of rect_a(t − a/2) is easily obtained,

$$\mathcal{F}\{\mathrm{rect}_a(t - a/2)\} = \frac{1 - e^{-j\omega a}}{j\omega}.$$

Since 1(t) is the generalized limit of rect_a(t − a/2) as a → ∞, we get, using Table 4.3, that

$$\mathcal{F}\{1(t)\} = \text{"lim"}_{a\to\infty}\ \frac{1 - e^{-j\omega a}}{j\omega} = \text{"lim"}_{a\to\infty}\ \frac{1 - \cos(a\omega) + j\sin(a\omega)}{j\omega} = \frac{1}{j\omega} - \text{"lim"}_{a\to\infty}\ \frac{\cos(a\omega)}{j\omega} + \text{"lim"}_{a\to\infty}\ \frac{\sin(a\omega)}{\omega} = \frac{1}{j\omega} + 0 + \pi\,\delta(\omega).$$


We found the (generalized) Fourier transform pair

$$1(t) \;\overset{\mathcal{F}}{\longleftrightarrow}\; \frac{1}{j\omega} + \pi\,\delta(\omega).$$

2. The Fourier transform of the signal f(t) = sgn(t) subsequently follows readily on noting that

$$\operatorname{sgn}(t) = 2\cdot 1(t) - 1.$$

This directly gives the Fourier transform pair

$$\operatorname{sgn}(t) \;\overset{\mathcal{F}}{\longleftrightarrow}\; \frac{2}{j\omega} + 2\pi\,\delta(\omega) - 2\pi\,\delta(\omega) = \frac{2}{j\omega}.$$

For absolutely integrable functions the generalized Fourier transform and the normal Fourier transform are the same. For this reason the adjective generalized is often omitted.
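As an aside, the pair for sgn(t) can also be checked directly against the generalized-limit definition; the following small computation is our own illustration, not part of the original text. Take f_ε(t) = e^{−ε|t|} sgn(t), which converges to sgn(t) in the generalized sense as ε ↓ 0. Then

$$\mathcal{F}\{e^{-\epsilon|t|}\,\operatorname{sgn}(t)\} = \frac{1}{\epsilon + j\omega} - \frac{1}{\epsilon - j\omega} = \frac{-2j\omega}{\epsilon^2 + \omega^2} = \frac{2}{j\omega}\cdot\frac{\omega^2}{\epsilon^2 + \omega^2} \;\longrightarrow\; \frac{2}{j\omega} \qquad (\epsilon \downarrow 0),$$

in agreement with the pair just derived.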

4.5 Problems

4.1 Given is a continuous function f(t).
    (a) Let g₁(t) = δ(2t + 4). Determine (f ∗ g₁)(t).
    (b) Let g₂(t) = δ(2t − 1). Determine f(t)g₂(t).

4.2 Determine the derivative in the generalized sense of the following signals.
    (a) t rect₂(t),
    (b) (sin t) 1(t),
    (c) t rect₂(t − 1),
    (d) e^{jt} 1(t − π),
    (e) rect₁(t) trian₁(t).

4.3 Let f(t) and g(t) be two continuously differentiable signals. Determine the generalized derivative of f(t) 1(t) + g(t) 1(−t).

4.4 Determine the frequency spectrum of the following signals
    (a) sin(ω₀t),
    (b) e^{jω₀t} 1(t),
    (c) cos(ω₀t) 1(t),
    (d) sin(ω₀t) 1(t),
    (e) e^{jω₀t} sgn(t).

4.5 Determine the frequency spectrum of the signal f(t) given by

$$f(t) = \begin{cases} 0 & \text{if } t < 0,\\ t & \text{if } 0 \le t < 1,\\ 1 & \text{if } t \ge 1. \end{cases}$$

4.6 Determine the frequency spectrum of the following signals
    (a) (rect₂ ∗ 1)(t),
    (b) $\int_{-\infty}^{t} \frac{\sin\tau}{\pi\tau}\,d\tau$.

4.7 Determine in the time-domain the signal f(t) whose frequency spectrum equals
    (a) F(ω) = 1/(ω² + 1),
    (b) F(ω) = cos ω,
    (c) F(ω) = 1(ω),
    (d) F(ω) = 1/(1 − jω).

4.8 Let f(t) ↔ F(ω). Verify that the differentiation rule f′(t) ↔ jωF(ω) is also valid for the signals

    1(t − b),    sgn(t),    e^{jω₀t}.

4.9 Determine in the time-domain the convolution product (f ∗ g)(t) for the cases that
    (a) f(t) = e^{−t} 1(t), g(t) = sgn(t),
    (b) f(t) = 1(2t + 1), g(t) = e^{−|t|}.

4.10 Consider Example 3.1.5. Explain the little humps in |F(ω)| at ω ≈ 2π, 4π and 6π.

More involved problems for Chapter 4. (These problems use Section 4.4.)

4.11 Show that "lim"_{n→∞} n f(nt) = δ(t) for any piecewise smooth absolutely integrable f(t) that satisfies $\int_{-\infty}^{\infty} f(t)\,dt = 1$.

4.12 Prove that "lim"_{n→∞} e^{−|t|/n} = 1 and use that to show that F{1} = 2π δ(ω).

5  Linear time-invariant systems

Figure 5.1: A system with input u(t) and output y(t) = T{u(t)}.


An important problem area in electrical engineering is the analysis and design of analog and
digital filters. Analog filters are devices that change frequency properties of continuous-time
signals, and digital filters change frequency properties of discrete-time signals. As such, filters
are often characterized by what is called their frequency characteristic, which specifies for each
frequency how the amplitude and phase of the input signal u(t) relates to the amplitude and
phase of the output signal y(t). Figure 5.1 shows a block diagram description of a filter. The
arrow labelled u(t) pointing to the box indicates that u(t) is the input of the filter, that is, that
the signal u(t) is given to the filter, and the arrow labeled y(t) pointing away from the box
indicates that y(t) is the output of the filter, i.e., that y(t) is seen as being produced by the filter,
commonly depending on the input u(t). The internal structure of a filter is often not important.
This is why the filter is simply depicted by a black box, labeled T . Mathematically, a filter will
be understood as a function that maps input signals to output signals,

$$y = T(u), \qquad\text{or}\qquad u(t) \;\overset{T}{\longmapsto}\; y(t).$$

To emphasize that u and y are functions of time, we shall normally write y(t) = T {u(t)},
though this notation is debatable.
The idea of inputs causing outputs is not limited to filters. A wide variety of disciplines,
including econometrics, physics, electrical engineering and chemical engineering and others,
have a need for input signals and output signals, and the theory within which such problems
are treated has the generic name system theory and not merely filter theory, and a filter T is
often referred to as a system. We are not concerned with a formal setup of system theory, save
to say that there is a lot more to it than mentioned here. In this course we think of a system T

as being a mapping from inputs to outputs. At a later stage we shall amend this by allowing for
the effect of initial conditions.
In this chapter we restrict attention to continuous-time systems, which are systems whose
inputs and outputs are continuous-time signals. The output y(t) is often referred to as the
response of the system. In addition we assume that the system is linear and time-invariant,
two important properties defined shortly. Precisely these two properties allow a system to be
characterized in frequency domain.

5.1 LTI systems in time domain


We begin with the definitions of two important properties.
5.1.1. Definition (Linear system). A continuous-time system T is linear if for any a₁, a₂ ∈ ℂ and any two inputs u₁(t) and u₂(t) there holds that

$$T\{a_1 u_1(t) + a_2 u_2(t)\} = a_1\,T\{u_1(t)\} + a_2\,T\{u_2(t)\}. \qquad (5.1)$$


A system T with the property (5.1) is sometimes said to obey the superposition principle.
5.1.2. Definition (Time-invariant system). A continuous-time system T is time-invariant if for every t₀ ∈ ℝ and every input u(t), the corresponding output y(t) = T{u(t)} satisfies

$$u(t - t_0) \;\overset{T}{\longmapsto}\; y(t - t_0).$$


Roughly speaking, a system is time-invariant if the response to an input does not depend on the
time the input is applied; an input applied today will yield the same response as when applied
tomorrow.
A system that is both linear and time-invariant, is said to be an LTI system. LTI is an
acronym of Linear Time-Invariant.
5.1.3. Example. Consider the RC-network shown in Figure 5.2. We interpret the RC-network
as a system with the voltage delivered by the voltage source as the input u(t) of the system,
and the voltage across the capacitor as the output y(t).
The input and output are related by a differential equation that may be obtained using Kirchhoff's voltage law and the voltage-current relations of resistors and capacitors. Kirchhoff's voltage law gives that

$$u(t) = v_R(t) + y(t) = R\,i(t) + y(t). \qquad (5.2)$$

The voltage across the capacitor equals

$$y(t) = \frac{q(t)}{C} = \Big\{q(t)\ \text{is the charge},\ q(t) = \int_{-\infty}^{t} i(\tau)\,d\tau\Big\} = \frac{1}{C}\int_{-\infty}^{t} i(\tau)\,d\tau. \qquad (5.3)$$


Figure 5.2: An RC-network (Example 5.1.3): voltage source u(t), resistance R with voltage v_R(t), current i(t), and capacitor voltage y(t) = v_C(t).


Differentiation with respect to time gives

$$C\,\frac{dy(t)}{dt} = i(t).$$

Substitute this expression for i(t) in (5.2) and we get a differential equation in the input and output

$$\frac{dy(t)}{dt} + \alpha\,y(t) = \alpha\,u(t), \qquad (5.4)$$

in which α = 1/(RC). This is a first-order ordinary linear differential equation with constant coefficients, and with the right-hand side known. Solutions y(t) of this differential equation are not unique. Indeed, the associated homogeneous equation

$$\frac{dy(t)}{dt} + \alpha\,y(t) = 0$$

has a non-trivial solution e^{−αt}. Hence if y(t) is a solution of (5.4), then also y(t) + c e^{−αt} is a solution of (5.4) for any constant c. In fact, it may be shown that every solution of (5.4) is necessarily of the form y(t) + c e^{−αt} with c constant (see Appendix A). The solution is unique up to a constant c. However, because of (5.3) we have some more knowledge about y(t):

$$\lim_{t\to-\infty} y(t) = \lim_{t\to-\infty} \frac{1}{C}\int_{-\infty}^{t} i(\tau)\,d\tau = 0.$$

To find the solution y(t) that obeys the above condition, we shall use a handy trick: the left-hand side of (5.4) may be written in the form

$$\frac{dy(t)}{dt} + \alpha\,y(t) = e^{-\alpha t}\,\frac{d}{dt}\big(e^{\alpha t}\,y(t)\big).$$

So the differential equation (5.4) becomes (with τ replacing t),

$$\frac{d}{d\tau}\big(e^{\alpha\tau}\,y(\tau)\big) = \alpha\,e^{\alpha\tau}\,u(\tau).$$


Now integrate the left-hand side and use that lim_{t→−∞} y(t) = 0,

$$\int_{-\infty}^{t} \frac{d}{d\tau}\big(e^{\alpha\tau}\,y(\tau)\big)\,d\tau = e^{\alpha\tau}\,y(\tau)\Big|_{\tau=-\infty}^{\tau=t} = e^{\alpha t}\,y(t).$$

We found that

$$e^{\alpha t}\,y(t) = \int_{-\infty}^{t} \alpha\,e^{\alpha\tau}\,u(\tau)\,d\tau.$$

Divide both sides by e^{αt} and we arrive at the solution

$$y(t) = \int_{-\infty}^{t} \alpha\,e^{-\alpha(t-\tau)}\,u(\tau)\,d\tau. \qquad (5.5)$$

Apparently, the differential equation with the condition that lim_{t→−∞} y(t) = 0 has a unique solution. To beautify formula (5.5) we define the function h(t) = α e^{−αt} 1(t). Because of the unit step 1(t), we may write (5.5) as

$$y(t) = \int_{-\infty}^{\infty} h(t-\tau)\,u(\tau)\,d\tau,$$

but this we recognize as a convolution y(t) = (h ∗ u)(t). Written as a convolution, it is easy to see that the system is an LTI system. Linearity is a consequence of the fact that

$$(h * (a_1 u_1 + a_2 u_2))(t) = a_1\,(h * u_1)(t) + a_2\,(h * u_2)(t),$$

and time-invariance follows from the fact that

$$u(t - t_0) \;\overset{T}{\longmapsto}\; \int_{-\infty}^{\infty} h(t-\tau)\,u(\tau - t_0)\,d\tau = \{\text{substitute } v = \tau - t_0\} = \int_{-\infty}^{\infty} h(t - t_0 - v)\,u(v)\,dv = (h * u)(t - t_0) = y(t - t_0),$$

for any t₀.

In the previous example we saw that the output y(t) could be expressed as a convolution of the
input u(t) and a fixed signal h(t). We shall now demonstrate that this is always the case for LTI
systems. This is quite a remarkable result.
Assume first that we know that the response y(t) to an input u(t) equals a convolution

$$y(t) = (h * u)(t),$$

for some fixed but possibly unknown function h(t). If we apply as input the delta function u(t) = δ(t), then the response y(t) to this input is precisely the function h(t),

$$y(t) = (h * \delta)(t) = \int_{-\infty}^{\infty} h(t-\tau)\,\delta(\tau)\,d\tau = h(t).$$

This is interesting, because it shows that we can uncover h(t) from its input-output behavior.


5.1.4. Definition (Impulse response). The impulse response of a system T is defined as

$$h(t) = T\{\delta(t)\}.$$

Now suppose that the system T is an arbitrary LTI system. We aim to show that y(t) = (h ∗ u)(t) where h(t) is the impulse response of the system. The response to a sum of weighted, shifted delta-functions

$$u(t) = \sum_{k=1}^{M} c_k\,\delta(t - t_k), \qquad (c_k \in \mathbb{R},\ t_k \in \mathbb{R}),$$

is

$$y(t) = T\Big\{\sum_{k=1}^{M} c_k\,\delta(t-t_k)\Big\} = \{\text{linearity}\} = \sum_{k=1}^{M} c_k\,T\{\delta(t-t_k)\} = \{\text{time-invariance}\} = \sum_{k=1}^{M} c_k\,h(t-t_k),$$

where h(t) = T{δ(t)} is the impulse response of the system. Loosely speaking integration is the same as summation, so, likewise, the response to an input of the form

$$u(t) = (c * \delta)(t) = \int_{-\infty}^{\infty} c(\tau)\,\delta(t-\tau)\,d\tau \qquad (5.6)$$

is

$$y(t) = T\Big\{\int_{-\infty}^{\infty} c(\tau)\,\delta(t-\tau)\,d\tau\Big\} = \{\text{linearity}\} = \int_{-\infty}^{\infty} c(\tau)\,T\{\delta(t-\tau)\}\,d\tau = \{\text{time-invariance}\} = \int_{-\infty}^{\infty} c(\tau)\,h(t-\tau)\,d\tau = (c * h)(t).$$

Here we assumed that the superposition principle applies over a continuum, which is practically always the case and will be silently assumed from now on. Note that the function c(t) in (5.6) is in fact equal to the input itself, c(t) = u(t), because (c ∗ δ)(t) = c(t). So we found the following important result.

5.1.5. Theorem. If T is a continuous-time LTI system, then for any input u(t), the output y(t) is given as

$$y(t) = (h * u)(t),$$

where h(t) = T{δ(t)} is the impulse response of the system.


The system in Example 5.1.3 has impulse response h(t) = α e^{−αt} 1(t).
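Theorem 5.1.5 is easy to see at work numerically. The following sketch (our own; the grid spacing dt and the choice u(t) = 1(t) are arbitrary) approximates (h ∗ u)(t) for this RC-system by a discrete convolution and compares it with the exact response (1 − e^{−αt}) 1(t) that will be derived in Example 5.1.19:

    % Discretized convolution y = (h*u)(t) for the RC-system, with alpha = 1.
    alpha = 1; dt = 1e-3; t = 0:dt:10;
    h = alpha*exp(-alpha*t);            % impulse response sampled for t >= 0
    u = ones(size(t));                  % unit step sampled for t >= 0
    y = conv(h, u)*dt;                  % Riemann-sum approximation of (h*u)(t)
    y = y(1:numel(t));                  % keep the portion on [0, 10]
    max(abs(y - (1 - exp(-alpha*t))))   % about dt/2: pure discretization error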


5.1.6. Example (Distortionless system). A system is distortionless if the response y(t) to an input u(t) is given by

$$y(t) = K\,u(t - t_0),$$

with K and t₀ constants, K > 0, t₀ ≥ 0. This is an LTI system. The graph of the output y(t) has the same shape as that of the input u(t), but possibly delayed with positive amount t₀. The impulse response of a distortionless system is

$$h(t) = K\,\delta(t - t_0),$$

and, hence, there holds that

$$K\,u(t - t_0) = K \int_{-\infty}^{\infty} \delta(t - \tau - t_0)\,u(\tau)\,d\tau.$$

This is in accordance with the sifting property of the delta function.

5.1.7. Example (The integrator). The integrator system is the system whose output is the input integrated,

$$y(t) = \int_{-\infty}^{t} u(\tau)\,d\tau.$$

The impulse response equals

$$h(t) = \int_{-\infty}^{t} \delta(\tau)\,d\tau = 1(t),$$

i.e., the unit step. Indeed y(t) = (u ∗ 1)(t).

Often it is possible to argue that a system is linear and time-invariant, without having much idea
of the inner workings of the system. Theorem 5.1.5 states that then the system is completely
determined once its impulse response is determined.
5.1.8. Example (The echo system). The echo system is a system with as input u(t) the transmitted signal (for example, the voice of a singer in a concert hall) and as output y(t) the sum
of reflected signals (what a person in the audience in the concert hall hears).
The echo system is time-invariant, after all, the time of day that the concert begins probably
has no effect on the performance. The echo system is also linear (within reason), since the
reflected sound of one singer will generally not depend on what another singer happens to be
singing at that time.
By Theorem 5.1.5 the echo system is, hence, completely determined by its impulse response T{δ(t)}. Thinking of the concert hall, the impulse response is something like the effect
of a gun shot. A reasonable first model is to assume that the sound of a gun shot is reflected


Figure 5.3: The echo system's impulse response $\sum_{k=0}^{\infty} 2^{-k}\,\delta(t - kt_0)$ and step response $\sum_{k=0}^{\infty} 2^{-k}\,1(t - kt_0)$.

(echoed) every t₀ seconds, for some t₀ > 0, and that with each reflection its amplitude is halved, say. This means that the impulse response of the echo system is (see Figure 5.3)

$$h(t) = \sum_{k=0}^{\infty} \frac{1}{2^k}\,\delta(t - k t_0).$$

Now for any other less violent signal u(t) the total of reflections y(t) follows as

$$y(t) = (h * u)(t) = \sum_{k=0}^{\infty} \frac{1}{2^k}\,u(t - k t_0).$$

The impulse response and the response y(t) to the unit step u(t) = 1(t) are shown in Figure 5.3.
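In discrete form this superposition of echoes is easy to implement. The following sketch (our own; the echo delay t₀, the sample spacing and the test input are arbitrary choices) truncates the sum at the last echo that falls inside the simulated time window:

    % Echo system: y(t) = sum_{k>=0} 2^(-k) u(t - k*t0), evaluated on a grid.
    dt = 1e-3; t = 0:dt:5; t0 = 0.5;
    u = double(t >= 0 & t <= 0.2);          % a short burst as test input
    y = zeros(size(t));
    for k = 0:floor(t(end)/t0)              % later echoes fall outside the window
        shift = round(k*t0/dt);             % delay k*t0 expressed in samples
        y(shift+1:end) = y(shift+1:end) + 2^(-k)*u(1:end-shift);
    end
    plot(t, u, t, y)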

Systems in practical situations are real, and, in addition, they are causal or non-anticipating. Causality expresses that the output at any time t₀ does not depend on future input values u(t), t > t₀. Formally:

5.1.9. Definition. A system T is causal or non-anticipating, if for any two inputs u₁(t) and u₂(t) and corresponding outputs y₁(t) = T{u₁(t)} and y₂(t) = T{u₂(t)} there holds for every fixed t₀ ∈ ℝ that

$$\text{if } u_1(t) = u_2(t)\ \ \forall\, t < t_0, \quad\text{then}\quad y_1(t) = y_2(t)\ \ \forall\, t < t_0. \qquad (5.7)$$


5.1.10. Example. The system y(t) = u(t − 1) is an example of a delay system. If t is in units of seconds, then y(t) = u(t − 1) expresses that the response y(t) equals the input but then one second delayed. So this is a causal system. Causality may also be verified formally through the definition. Indeed, if u₁(t) = u₂(t) for all t < t₀, then for any t < t₀ we have that

$$y_1(t) = u_1(t-1) = \{\text{since } t-1 < t_0\} = u_2(t-1) = y_2(t).$$


Likewise the system y(t) = u(t + 1) expresses that at time t the output equals the input one second into the future. This is not a causal system. If we want to formally show that the system is noncausal, then it suffices to find one counterexample of (5.7). Suppose that u₁(t) = 0 and u₂(t) = 1(t), then u₁(t) = u₂(t) for all t < 0, but y₁(t) = 0 and y₂(t) = 1(t + 1) and they are not the same for t = −1/2 < 0: the system is not causal.

The response of an LTI system to the zero signal u(t) = 0 for all t, is the zero signal y(t) = 0 for all t.
As a causal signal is zero for all t < 0, we see that the response y(t) of a causal LTI system to
a causal input, is zero for all t < 0. Therefore the response y(t) is then causal as well.
We know that an LTI system is fully determined by its impulse response h(t). Therefore
it should be possible to express causality in terms of the impulse response. This can indeed be
done, in fact the condition is easy.
Figure 5.4: The impulse response h(t) of a causal system is a causal signal.
5.1.11. Theorem. An LTI system T is causal if and only if its impulse response h(t) is a causal signal. (See Figure 5.4.)

Proof. Suppose T is a causal system. Then, as argued above, its response to a causal input is causal. The delta function is considered a causal signal, hence its response h(t) is causal.

Conversely, suppose that h(t) is a causal signal. Consider any two inputs u₁(t) and u₂(t) and any t₀ ∈ ℝ and suppose that

$$u_1(t) = u_2(t) \qquad \forall\, t < t_0.$$

Then the response y₁(t) to u₁(t) equals

$$y_1(t) = (h * u_1)(t) = \{h(t)\ \text{causal}\} = \int_{-\infty}^{t} u_1(\tau)\,h(t-\tau)\,d\tau$$

and the response y₂(t) to u₂(t) equals

$$y_2(t) = (h * u_2)(t) = \{h(t)\ \text{causal}\} = \int_{-\infty}^{t} u_2(\tau)\,h(t-\tau)\,d\tau.$$

Now, if t < t₀ we have that y₁(t) = y₂(t) because

$$y_1(t) - y_2(t) = \int_{-\infty}^{t} \big(u_1(\tau) - u_2(\tau)\big)\,h(t-\tau)\,d\tau$$

and u₁(τ) − u₂(τ) = 0 for every τ ∈ (−∞, t) for every t < t₀.


5.1.12. Definition. A system is real if u(t) ∈ ℝ for all t ∈ ℝ implies that y(t) = T{u(t)} ∈ ℝ for all t ∈ ℝ.

It is not difficult to see that an LTI system is real if and only if its impulse response is real-valued.
The next property that we consider concerns stability of a system. Stability can be defined
in many different ways, depending on what is deemed important. For LTI systems an often
used version of stability is BIBO-stability.
5.1.13. Definition. An LTI system T is BIBO-stable if each bounded input results in a
bounded output.

BIBO is an acronym of Bounded-Input-Bounded-Output. In this course, stability will always
be understood in the sense of BIBO-stability. The next theorem shows how BIBO-stability can
be characterized by the systems impulse response. For the moment we shall assume that the
impulse response has no delta function components.
5.1.14. Theorem. An LTI system whose impulse response h(t) has no delta function components, is BIBO-stable if and only if

$$\int_{-\infty}^{\infty} |h(t)|\,dt < \infty. \qquad (5.8)$$
Proof. The condition (5.8) is that h(t) is absolutely integrable. First we show that absolute integrability of h(t) is enough to ensure BIBO stability. Suppose u(t) is bounded, that is,

$$|u(t)| < M \qquad \forall\, t \in \mathbb{R}$$

and a certain constant M > 0. Then also y(t) is bounded, because

$$|y(t)| = \Big|\int_{-\infty}^{\infty} h(\tau)\,u(t-\tau)\,d\tau\Big| \le \int_{-\infty}^{\infty} |h(\tau)\,u(t-\tau)|\,d\tau \le M \int_{-\infty}^{\infty} |h(\tau)|\,d\tau = M I,$$

with $I := \int_{-\infty}^{\infty} |h(\tau)|\,d\tau < \infty$. The bound M I is independent of time, hence, y(t) is bounded.

Next we show that absolute integrability of h(t) is also necessary for BIBO-stability. Consider the input signal

$$u(t) = \operatorname{sgn}(h(-t)),$$

that is, u(t) = 1 for the time instances where h(−t) > 0 and u(t) = −1 for the time instances where h(−t) < 0. This signal u(t) is bounded, and its response y(t) at t = 0 equals

$$y(0) = (h * u)(0) = \int_{-\infty}^{\infty} h(0-\tau)\,u(\tau)\,d\tau = \int_{-\infty}^{\infty} |h(-\tau)|\,d\tau = \int_{-\infty}^{\infty} |h(t)|\,dt.$$

If the system is BIBO-stable then |y(0)| is finite, so by the above, the impulse response h(t) is necessarily absolutely integrable.


5.1.15. Example. The integrator $y(t) = \int_{-\infty}^{t} u(\tau)\,d\tau$ of Example 5.1.7 is not stable, because h(t) = 1(t) and

$$\int_{-\infty}^{\infty} |1(t)|\,dt = \int_{0}^{\infty} 1\,dt = \infty.$$

5.1.16. Example. The RC-network of Example 5.1.3 is stable. Indeed, its impulse response is h(t) = α e^{−αt} 1(t) with α = 1/(RC) a positive constant. Therefore

$$\int_{-\infty}^{\infty} |h(t)|\,dt = \int_{0}^{\infty} \alpha\,e^{-\alpha t}\,dt = \frac{\alpha}{\alpha} = 1 < \infty.$$


If h(t) contains delta function components, then it is still possible to characterize BIBO-stability. This can be done as follows. Write h(t) in the form

$$h(t) = h_1(t) + \sum_{n=0}^{N} a_n\,\delta(t - t_n), \qquad (a_n \in \mathbb{C},\ t_n \in \mathbb{R},\ t_0 < t_1 < \cdots < t_N). \qquad (5.9)$$

Here, h₁(t) is the signal obtained from h(t) by removing all delta function components, and $h_2(t) = \sum_{n=0}^{N} a_n\,\delta(t-t_n)$ is the sum of delta functions. Then the response y(t) to an input u(t) can be written as

$$y(t) = (h * u)(t) = (h_1 * u)(t) + \sum_{n=0}^{N} a_n\,u(t - t_n).$$

It may be shown that the system is BIBO-stable if and only if the two subsystems $y_1(t) = (h_1 * u)(t)$ and $y_2(t) = \sum_{n=0}^{N} a_n\,u(t - t_n)$ are BIBO-stable. Without proof we summarize:

5.1.17. Lemma. The LTI system with impulse response (5.9), with h₁(t) having no delta function components, is BIBO-stable if and only if $\int_{-\infty}^{\infty} |h_1(t)|\,dt < \infty$ and $\sum_{n=0}^{N} |a_n| < \infty$.

In most applications the number N of delta function components is finite. In such cases $\sum_{n=0}^{N} |a_n|$ is always finite and then BIBO-stability of the system is the same as BIBO-stability of the system with the delta function components removed. Stated differently, adding a finite number of delta function components to the impulse response has no effect on the system's stability properties.
5.1.18. Example. The echo system of Example 5.1.8 has impulse response $h(t) = \sum_{k=0}^{\infty} (\frac{1}{2})^k\,\delta(t - kt_0)$. As $\sum_{k=0}^{\infty} (1/2)^k = 2$ is finite, the echo system is BIBO-stable.

If the singer uses a microphone and if the microphone picks up its own reflected signal delayed by t₀ and amplified by a factor 2, say, then the impulse response is $h(t) = \sum_{k=0}^{\infty} 2^k\,\delta(t - kt_0)$. This system is not BIBO-stable since $\sum_{k=0}^{\infty} 2^k = \infty$. This problem occurs all too often at pop concerts when bands try to test their equipment.



Once the impulse response of an LTI system is known, then we know the system completely and we can calculate the response to any input signal. In particular it is possible to determine the system's step response. This is the response to the unit step u(t) = 1(t). The step response of a given system is in this course denoted by g(t), that is,

$$g(t) = (h * 1)(t) = \int_{-\infty}^{t} h(\tau)\,d\tau. \qquad (5.10)$$
Figure 5.5: Three typical step responses.


5.1.19. Example. The step response of the RC-network of Example 5.1.3 may be seen as the response to a constant voltage switched on at t = 0. The impulse response was found to be h(t) = α e^{−αt} 1(t). As the impulse response is causal, also the step response is causal,

$$g(t) = \int_{-\infty}^{t} \alpha\,e^{-\alpha\tau}\,1(\tau)\,d\tau = \Big(\int_{0}^{t} \alpha\,e^{-\alpha\tau}\,d\tau\Big)\,1(t) = (1 - e^{-\alpha t})\,1(t).$$

The step response looks like that of Figure 5.5(a).
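Numerically, (5.10) is just a running integral of the impulse response. A sketch of our own, with α = 1:

    % Step response as the running integral of the impulse response, alpha = 1.
    alpha = 1; dt = 1e-3; t = 0:dt:10;
    h = alpha*exp(-alpha*t);             % impulse response samples for t >= 0
    g = cumtrapz(t, h);                  % g(t) = integral of h from 0 to t
    max(abs(g - (1 - exp(-alpha*t))))    % matches (1 - e^{-alpha t}) up to tiny error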


Figure 5.5 shows three of the more common step responses of BIBO-stable systems. After some transitional behavior, the step response of a BIBO-stable system settles down and converges to a constant as t → ∞.

If the (generalized) derivative of an input u(t) is known, then an alternative expression of the response y(t) may be used that is sometimes easier to work with than the convolution (h ∗ u)(t). We assume that lim_{t→−∞} u(t) = 0, which in practical situations is always the case (for example, the voltage input of Example 5.1.3 was not active 100 years ago). Then

$$\int_{-\infty}^{t} u'(\tau)\,d\tau = u(\tau)\Big|_{-\infty}^{t} = u(t).$$

With help of the unit step we recognize in the above a convolution

$$u(t) = \int_{-\infty}^{\infty} u'(\tau)\,1(t-\tau)\,d\tau.$$


Hereby the signal u(t) is written as a continuous superposition of shifted step functions. By time-invariance the response to 1(t − τ) is g(t − τ). The response to u(t) can hence be derived from the (extended) superposition principle, as

$$y(t) = T\Big\{\int_{-\infty}^{\infty} u'(\tau)\,1(t-\tau)\,d\tau\Big\} = \int_{-\infty}^{\infty} u'(\tau)\,g(t-\tau)\,d\tau.$$

5.1.20. Example. Consider again the RC-network of Example 5.1.3. We shall calculate the response to the causal input u(t) given as

$$u(t) = \begin{cases} 0 & \text{if } t < 0,\\ t & \text{if } 0 \le t \le 1,\\ 1 & \text{if } t > 1. \end{cases}$$

Note that the generalized derivative of u(t) is u′(t) = 1(t) − 1(t − 1), i.e., the rectangular pulse on (0, 1).

The system is causal and the input signal is causal, hence so is its response y(t). The response y(t) follows as

$$y(t) = \int_{-\infty}^{\infty} u'(\tau)\,g(t-\tau)\,d\tau = \int_{0}^{1} g(t-\tau)\,d\tau = \{v = t - \tau\} = \int_{t-1}^{t} g(v)\,dv$$
$$= \int_{t-1}^{t} (1 - e^{-\alpha v})\,1(v)\,dv = \Big(v + \frac{1}{\alpha}\,e^{-\alpha v}\Big)\,1(v)\Big|_{t-1}^{t}$$
$$= \Big(t + \frac{1}{\alpha}(e^{-\alpha t} - 1)\Big)\,1(t) - \Big(t - 1 + \frac{1}{\alpha}(e^{-\alpha(t-1)} - 1)\Big)\,1(t-1).$$

Because of the step functions 1(t) and 1(t − 1) it is best to distinguish the three cases t < 0, t ∈ [0, 1] and t > 1:

$$y(t) = \begin{cases} 0 & \text{if } t < 0,\\ t + \frac{1}{\alpha}(e^{-\alpha t} - 1) & \text{if } 0 \le t \le 1,\\ 1 + \frac{1}{\alpha}(e^{-\alpha} - 1)\,e^{-\alpha(t-1)} & \text{if } t > 1. \end{cases} \qquad (5.11)$$

Figure 5.6 shows the response y(t). It may not be immediately clear, but the response is continuously differentiable, while the input clearly is not.

Figure 5.6: The response y(t) (Equation (5.11)).
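The closed form (5.11) can be cross-checked against a direct numerical convolution (h ∗ u)(t) (a sketch of our own, with α = 1):

    % Compare (5.11), with alpha = 1, against the discretized convolution (h*u)(t).
    alpha = 1; dt = 1e-3; t = 0:dt:5;
    u = min(t, 1);                                   % u(t) = t on [0,1], then 1
    h = alpha*exp(-alpha*t);
    y = conv(h, u)*dt;  y = y(1:numel(t));
    y511 = (t + (exp(-alpha*t) - 1)/alpha).*(t <= 1) + ...
           (1 + (exp(-alpha) - 1)/alpha*exp(-alpha*(t - 1))).*(t > 1);
    max(abs(y - y511))                               % small discretization error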


5.2 LTI systems in frequency domain


Theorem 5.1.5 formulates the connection between inputs and outputs of LTI systems in time domain terms. Concretely, the output y(t) was shown to be the time domain convolution y(t) = (h ∗ u)(t) of the input u(t) and the system's impulse response h(t). In frequency domain, the connection between input and output is even simpler. This is due to the convolution theorem for the Fourier integral: let Y(ω), H(ω) and U(ω) denote the Fourier transforms of y(t), h(t) and u(t), respectively, then by the convolution theorem the input relates to the output as

$$Y(\omega) = H(\omega)\,U(\omega). \qquad (5.12)$$

An LTI system, therefore, is uniquely determined by the spectrum of its impulse response, assuming the Fourier transform H(ω) of h(t) exists. In such cases H(ω) is called the frequency response of the system. An important class of systems that have a frequency response are the BIBO-stable systems.

5.2.1. Lemma. A BIBO-stable LTI system has a frequency response H(ω) defined for all ω.

Proof. BIBO-stable systems have an impulse response of the form $h(t) = h_1(t) + \sum_{n=0}^{N} a_n\,\delta(t - t_n)$ with h₁(t) absolutely integrable and $\sum_{n=0}^{N} |a_n| < \infty$. Now, the Fourier transform H₁(ω) of h₁(t) exists for all ω because h₁(t) is absolutely integrable. By superposition, then, the frequency response H(ω) of the system follows as $H(\omega) = H_1(\omega) + \sum_{n=0}^{N} a_n\,e^{-j\omega t_n}$. It is defined for all frequencies.
The (complex) frequency response H(ω) may be written in polar form as

$$H(\omega) = |H(\omega)|\,e^{j\Phi(\omega)}.$$

We refer to |H(ω)| as the amplitude transfer of the system, and to Φ(ω) as the system's phase transfer. For real systems, the impulse response h(t) is real-valued. In such cases the frequency response has the property that $H(-\omega) = \overline{H(\omega)}$. Hence the amplitude transfer is even as a function of frequency, and the phase transfer is an odd function of frequency. Consequently, in a graphical representation of amplitude transfer and phase transfer it suffices to plot only for non-negative values of frequencies ω ≥ 0. This is common practice (have a look at the booklet that came with your stereo equipment or set of loudspeakers and probably you will find one or more of such plots). In the remainder of this section we review the response of an LTI system to four special types of inputs.

1. Response to a harmonic signal. The response to a harmonic input signal u(t) = e^{jω₀t} may be obtained via the convolution product (h ∗ u)(t),

$$y(t) = \int_{-\infty}^{\infty} h(\tau)\,e^{j\omega_0(t-\tau)}\,d\tau = \Big(\int_{-\infty}^{\infty} h(\tau)\,e^{-j\omega_0\tau}\,d\tau\Big)\,e^{j\omega_0 t} = H(\omega_0)\,e^{j\omega_0 t}.$$


A prerequisite for the above to hold is of course that H(ω₀) exists, and this is not always the case. For example, the integrator has impulse response h(t) = 1(t) and (generalized) frequency response H(ω) = 1/(jω) + π δ(ω) and this is not defined for ω = ω₀ = 0. Indeed, the harmonic signal for ω₀ = 0 is the constant signal u(t) = 1, and it will be clear that then the output $y(t) = \int_{-\infty}^{t} u(\tau)\,d\tau$ of the integrator is not defined.

For an LTI system T there apparently holds that

$$T\{e^{j\omega_0 t}\} = H(\omega_0)\,e^{j\omega_0 t}, \qquad (5.13)$$

but only for those values ω₀ for which H(ω₀) exists. Because of (5.13) one sometimes refers to e^{jω₀t} as an eigenfunction of the system, with eigenvalue H(ω₀). Each ω₀ ∈ ℝ gives rise to an eigenfunction e^{jω₀t} and eigenvalue H(ω₀). LTI systems have a continuum of eigenfunctions and eigenvalues.
2. Response to a sinusoid. Signals and systems are in practical situations always real. It is therefore more interesting to consider sinusoidal signals than the complex harmonic signals considered above.

Suppose the LTI system is real and take as input the sinusoid u(t) = cos(ω₀t + φ₀). The sinusoid cos(ω₀t + φ₀) may be written as

$$u(t) = \frac{1}{2}\big(e^{j\phi_0}\,e^{j\omega_0 t} + e^{-j\phi_0}\,e^{-j\omega_0 t}\big).$$

Then, by linearity of the system,

$$T\{u(t)\} = \frac{1}{2}\,e^{j\phi_0}\,T\{e^{j\omega_0 t}\} + \frac{1}{2}\,e^{-j\phi_0}\,T\{e^{-j\omega_0 t}\},$$

and, with the help of (5.13), we arrive at

$$T\{u(t)\} = \frac{1}{2}\,e^{j\phi_0}\,H(\omega_0)\,e^{j\omega_0 t} + \frac{1}{2}\,e^{-j\phi_0}\,H(-\omega_0)\,e^{-j\omega_0 t}.$$

By assumption the system is real, so that $H(-\omega_0) = \overline{H(\omega_0)}$. This finally allows us to write

$$T\{u(t)\} = \frac{1}{2}\,e^{j\phi_0}\,H(\omega_0)\,e^{j\omega_0 t} + \overline{\frac{1}{2}\,e^{j\phi_0}\,H(\omega_0)\,e^{j\omega_0 t}} = \operatorname{Re}\big(e^{j\phi_0}\,H(\omega_0)\,e^{j\omega_0 t}\big) = |H(\omega_0)|\,\cos(\omega_0 t + \phi_0 + \Phi(\omega_0)).$$

The output is again a sinusoid, but with amplitude |H(ω₀)| and initial phase φ₀ + Φ(ω₀) = φ₀ + arg H(ω₀). (A numerical illustration of this sinusoidal response follows after this list.)
3. Response to a periodic signal. Let u(t) be a T-periodic signal with line spectrum u_n. Then u(t) has a Fourier series expansion

$$u(t) = \sum_{n=-\infty}^{\infty} u_n\,e^{jn\omega_0 t}$$

with ω₀ = 2π/T. Because of (5.13) and the superposition principle of linear systems, we have that

$$y(t) = T\{u(t)\} = \sum_{n=-\infty}^{\infty} u_n\,H(n\omega_0)\,e^{jn\omega_0 t}.$$

The output y(t) is also periodic with again period T, and its line spectrum y_n equals y_n = H(nω₀) u_n.
4. Response to the unit step (the step response). The step response g(t) of an LTI system is introduced on page 95. If the LTI system is BIBO-stable, then its frequency response is defined for all frequencies and is in fact continuous in ω. In this case the frequency spectrum G(ω) of g(t) equals (in the generalized sense)

$$g(t) = (h * 1)(t) \;\overset{\mathcal{F}}{\longleftrightarrow}\; G(\omega) = H(\omega)\Big(\frac{1}{j\omega} + \pi\,\delta(\omega)\Big) = \frac{H(\omega)}{j\omega} + \pi H(0)\,\delta(\omega).$$

That is,

$$G(\omega) = \frac{H(\omega)}{j\omega} + \pi H(0)\,\delta(\omega). \qquad (5.14)$$
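The sinusoidal response of item 2 can be illustrated numerically for the RC-network of Example 5.1.3 (a sketch of our own; α = 1, ω₀ = 2 and the simulation horizon are arbitrary choices). Once the transient has died out, the simulated output matches |H(ω₀)| cos(ω₀t + Φ(ω₀)):

    % RC-network, H(w) = alpha/(jw + alpha): steady-state response to cos(w0 t).
    alpha = 1; w0 = 2; dt = 1e-3; t = 0:dt:30;
    h = alpha*exp(-alpha*t);
    u = cos(w0*t);
    y = conv(h, u)*dt;  y = y(1:numel(t));        % simulated response for t >= 0
    H0 = alpha/(1j*w0 + alpha);
    yss = abs(H0)*cos(w0*t + angle(H0));          % predicted steady state
    max(abs(y(t > 10) - yss(t > 10)))             % small once the transient is gone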

The last example is again about the RC-network of Example 5.1.3.

5.2.2. Example. In Example 5.1.19 we found the step response of the RC-network of Example 5.1.3 without reference to frequency domain. Since in frequency domain LTI systems amount to a simple multiplication of the spectra of input and impulse response, it is likely that the computations are somewhat easier in frequency domain. In this example we shall rederive the step response, but now with help of frequency domain arguments.

We copy the differential equation (5.4) that relates the input u(t) and output y(t),

$$\frac{d}{dt}\,y(t) + \alpha\,y(t) = \alpha\,u(t),$$

with α = 1/(RC) > 0. Since the system is BIBO-stable, we know that the frequency response exists. It may be found by Fourier transforming both sides of the equation. As differentiation in time domain corresponds to multiplication in frequency domain by jω, we get that

$$(j\omega + \alpha)\,Y(\omega) = \alpha\,U(\omega).$$

Therefore

$$Y(\omega) = \frac{\alpha}{j\omega + \alpha}\,U(\omega) = H(\omega)\,U(\omega).$$

This means that the frequency response of the system is

$$H(\omega) = \frac{\alpha}{j\omega + \alpha}.$$


The inverse Fourier transform is h(t) = α e^{−αt} 1(t), which is the impulse response as found earlier. The step response g(t) is the inverse Fourier transform of

$$G(\omega) = \{\text{see (5.14)}\} = \frac{\alpha}{j\omega(j\omega + \alpha)} + \pi\,\delta(\omega).$$

To find the inverse Fourier transform we apply partial fraction expansion,

$$G(\omega) = \frac{1}{j\omega} - \frac{1}{j\omega + \alpha} + \pi\,\delta(\omega) = \Big(\frac{1}{j\omega} + \pi\,\delta(\omega)\Big) - \frac{1}{j\omega + \alpha}.$$

From the table of standard and generalized Fourier transform pairs we read that

$$g(t) = 1(t) - e^{-\alpha t}\,1(t) = (1 - e^{-\alpha t})\,1(t).$$

This is in accordance with what was established in Example 5.1.19.

Figure 5.7: An example of the amplitude transfer of an ideal filter.

5.3 Ideal filters

In the previous section we saw that an LTI system is fully determined by its frequency response. A real system is an ideal filter if its frequency response H(ω) is zero outside some finite interval [ω_a, ω_b]. See Figure 5.7. Filters like these are ideal since they cannot be built. This is due to the fact that an ideal filter is necessarily a non-causal system. Another reason to consider them ideal is that they have the desirable property that they block any harmonic input u(t) = e^{jω₀t} whose angular frequency is outside the frequency band [ω_a, ω_b]. In many applications this is what one wants to have; audio equalizers are a case in point and so is amplitude modulation (see Section 3.6).

Even though they cannot be built, it is important to understand the behavior of ideal filters, since this will help in the analysis and design of more practical filters that approximate the ideal filters. Later, in Chapter 7, at which time we have some more techniques available, we shall design more practical filters.

We limit the discussion to low-pass filters. These are the ideal filters whose frequency response is zero outside a frequency band [0, ω_b] around zero.


A typical example of an ideal low-pass filter is the LTI system with frequency response

$$H(\omega) = \begin{cases} e^{-j\omega t_0} & \text{if } |\omega| < \omega_c,\\ 0 & \text{if } |\omega| > \omega_c. \end{cases}$$

(The amplitude transfer |H(ω)| equals 1 on (−ω_c, ω_c), and the phase arg H(ω) = −ωt₀ is linear there.) The value ω_c > 0 is the cut-off frequency of the filter. That the filter cannot be built follows from the inverse Fourier transform h(t) of H(ω),

$$h(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} H(\omega)\,e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\omega_c}^{\omega_c} e^{j\omega(t - t_0)}\,d\omega = \frac{\sin(\omega_c(t - t_0))}{\pi(t - t_0)}.$$

It is a non-causal signal, implying that the system is not causal (see Figure 5.8). The impulse response h(t) has a maximum of ω_c/π at t = t₀.

Figure 5.8: Graph of sin(ω_c(t − t₀))/(π(t − t₀)): the maximum is ω_c/π at t = t₀ and consecutive zero crossings are 2π/ω_c apart.
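This inverse transform is easy to spot-check numerically (a sketch of our own; ω_c = 5, t₀ = 1 and the evaluation point are arbitrary choices):

    % h(t) = (1/2pi) * integral over [-wc, wc] of exp(jw(t - t0)) dw
    % should equal sin(wc*(t - t0))/(pi*(t - t0)).
    wc = 5; t0 = 1; t = 2.3;                      % any t different from t0
    hnum = integral(@(w) exp(1j*w*(t - t0)), -wc, wc)/(2*pi);
    hex  = sin(wc*(t - t0))/(pi*(t - t0));
    disp([real(hnum), hex])                       % the two values agree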
Finally, consider two special inputs to the ideal low-pass filter.
1. Let u(t) be a bandlimited signal whose frequency spectrum U(ω) is zero for all |ω| > ω_c. Then the frequency spectrum of y(t) satisfies

$$Y(\omega) = H(\omega)\,U(\omega) = e^{-j\omega t_0}\,U(\omega),$$

and, hence,

$$y(t) = u(t - t_0).$$

Apparently, the transfer of this type of inputs is distortionless.


2. Let u(t) be a T-periodic signal, with Fourier series

$$u(t) = \sum_{n=-\infty}^{\infty} u_n\,e^{jn\omega_0 t}$$

with ω₀ = 2π/T. By Item 3 on page 98 we have that the response y(t) equals

$$y(t) = \sum_{n=-\infty}^{\infty} u_n\,H(n\omega_0)\,e^{jn\omega_0 t}.$$

Let N be that (unique) integer number such that Nω₀ ≤ ω_c < (N + 1)ω₀. Then y(t) follows as

$$y(t) = \sum_{n=-N}^{N} u_n\,e^{jn\omega_0(t - t_0)} = u_N(t - t_0),$$

where u_N(t) denotes the N-th partial sum of the Fourier series expansion of u(t). All harmonic components of u(t) whose frequencies satisfy |ω| > ω_c are filtered out by the ideal low-pass filter.

5.4 Problems

5.1 A system T is described by

$$y(t) = T\{u(t)\} = a\,u(t) + b\,u(t - 1).$$

    (a) Show that the system is an LTI system.
    (b) Determine its frequency response.
    (c) Determine the impulse response.
    (d) Is the system causal?
    (e) Is the system BIBO-stable?

5.2 A system T is described by

$$y(t) = T\{u(t)\} = \int_{t-1}^{t+1} u(\tau)\,d\tau.$$

    (a) Show that the system is an LTI system.
    (b) Determine the impulse response and frequency response.
    (c) Is the system causal?
    (d) Is the system BIBO-stable?
    (e) Find the response y(t) to the input u(t) = rect₂(t).

5.4. Problems

103

5.3 Consider the system y(t) = u(αt).

    (a) Show that the system is linear.
    (b) For which values of α is the system time-invariant?
    (c) For which values of α is the system causal?

5.4 (a) Is the system y(t) = u²(t) linear?
    (b) Is the system y(t) = u²(t) time-invariant?
    (c) Is the system y(t) = t²u(t) linear?
    (d) Is the system y(t) = t²u(t) time-invariant?
5.5 We are given the frequency response of an LTI system,

$$H(\omega) = \frac{1}{j\omega\,(j\omega + 1)}.$$

    (a) Sketch the impulse response of the system.
    (b) Is the system BIBO-stable?

5.6 Let T be the LTI system with impulse response

$$h(t) = t\,e^{-t}\,1(t).$$

    (a) Is the system causal?
    (b) Determine the frequency response.
    (c) Is the system BIBO-stable?
    (d) Determine the response to the input u(t) = sin(ω₀t).
    (e) Determine the step response g(t).
    (f) Show that g′(t) = h(t).
5.7 Let T be the LTI system whose frequency response is given by

$$H(\omega) = \frac{1 - j\omega}{1 + j\omega}.$$

    (a) Investigate the amplitude transfer of the system.
    (b) Show that the energy content of an input u(t) equals the energy content of the corresponding response y(t).
    (c) Determine the impulse response.
    (d) Determine the step response.
    (e) Find the response y(t) to the input u(t) = rect_T(t). Here T > 0.
    (f) Determine $\int_{-\infty}^{\infty} y(t)\,dt$, where y(t) is the response found in Part (e).


5.8 All we know of the system T is that it is an LTI system and that the response y(t) to the input u(t) = e^{−t} 1(t) is

$$y(t) = \big(e^{-2t} + e^{-t} + e^{-\frac{1}{2}t}\big)\,1(t).$$

Determine the impulse response.

5.9 The LTI system with frequency response

$$H(\omega) = (1 - |\omega|)\,\mathrm{rect}_2(\omega)$$

is an example of an ideal low-pass filter.

    (a) Determine the impulse response.

The system is given a periodic signal with period T = 3 which on one period [0, T) equals u(t) = rect_{T/2}(t − T/2).

    (b) Find the response y(t) to this signal.
5.10 Suppose T is an LTI system with step response

$$g(t) = 1(t) + e^{-2t}\,1(t).$$

    (a) Determine the impulse response and frequency response.
    (b) Is the system BIBO-stable?

The system is given a 2π-periodic input u(t) whose line spectrum is

$$u_k = \frac{1}{1 + jk}, \qquad (k \in \mathbb{Z}).$$

    (c) Is the periodic signal u(t) real-valued?

Let y(t) be the response to the given input.

    (d) Determine the first harmonic of y(t).

5.11 Two LTI systems T₁ and T₂ with impulse responses h₁(t) and h₂(t), respectively, are connected in series by letting the output y₁(t) of the first system, T₁, be the input u₂(t) of the second system, T₂. See Figure 5.9. The map from u₁(t) to y₂(t) now defines a system. Let h(t) denote the impulse response of this system. Show that

$$h(t) = (h_1 * h_2)(t).$$


Figure 5.9: Series connection of two systems: u₁(t) → T₁ → y₁(t) = u₂(t) → T₂ → y₂(t).


Matlab problems:

5.12 In Example 5.1.3 it was shown that the frequency response of the system of Figure 5.2 is H(ω) = α/(jω + α), in which α = 1/(RC) > 0.

    (a) Let α = 1. Plot the amplitude and phase transfer of H(ω) using abs and phase. It shows that the RC-network is a low-pass filter.

    (b) Let α = 1. Plot on time interval [0, 50] the input

$$u(t) = \cos(\tfrac{1}{5}t) + \cos(5t)$$

and plot on the same time interval its response y(t). Is the plot of y(t) a surprise? (Explain.)
(Explain.)


6  The Laplace transform

A drawback of the Fourier transform

$$\int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt$$

is that many signals that we wish to consider simply do not have a Fourier transform. The unit step 1(t), for example, only has a Fourier transform in the generalized sense, and e^{t} 1(t) does
not have a Fourier transform at all. The Laplace transform can be seen as an extension of the
Fourier transform. It is an extension that allows us to consider a much wider family of signals,
but which still inherits most of the useful properties and insights of the Fourier transform. As
it turns out, it gives rise to some useful new properties and insights as well. In accordance with
the Fourier transform, the two-sided Laplace transform of a signal f(t) is defined as

$$F(s) = \int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt.$$

In contrast to the Fourier transform, however, in the Laplace transform we allow for general
complex-valued s and not just imaginary-valued s = jω, ω ∈ ℝ. This simple extension makes it
possible to take Laplace transforms of signals that hitherto were not (Fourier) transformable.
In many cases we deal with causal signals, f(t) = 0 for t < 0. Assuming a causal signal f(t) contains no delta function components, then the Laplace transform reduces to

$$\int_{-\infty}^{\infty} f(t)\,e^{-st}\,dt = \int_{0}^{\infty} f(t)\,e^{-st}\,dt.$$

The latter expression

$$\int_{0}^{\infty} f(t)\,e^{-st}\,dt$$

is the one-sided Laplace transform. In this course we only consider the one-sided Laplace
transform, from now on referred to as the Laplace transform. It is important to realize that the
Laplace transform may also be used for non-causal signals. Taking the Laplace transform of a
non-causal signal, however, means that all values f (t), t < 0 are lost in the transformation.

The Laplace transform will therefore be of use only if we are not concerned with past time
function values f (t), t < 0; a situation that is often the case.
Later when functions with delta function components are allowed, we shall have to amend
the definition of the Laplace transform slightly. In the following section we consider piecewise
smooth signals.

6.1 Laplace transforms of piecewise smooth signals


6.1.1. Definition. The Laplace transform F(s) of a piecewise smooth signal f(t) is defined as

$$F(s) = \int_{0}^{\infty} f(t)\,e^{-st}\,dt \qquad (6.1)$$

for those s ∈ ℂ for which this integral exists.

Generally the Laplace transform of a given signal f(t) is defined only for a subset of the complex numbers s. If we know of f(t) that it is exponentially bounded for t > 0, that is, if we know that

$$|f(t)| \le C\,e^{at} \qquad \forall\, t > 0,$$

for some real numbers C and a, then the Laplace transform exists for every Re s > a. Indeed, for those s the integral of (6.1) converges even absolutely,

$$\int_{0}^{\infty} |f(t)\,e^{-st}|\,dt = \int_{0}^{\infty} |f(t)|\,e^{-\operatorname{Re}(s)\,t}\,dt \le \int_{0}^{\infty} C\,e^{(a - \operatorname{Re}(s))\,t}\,dt = \frac{C}{\operatorname{Re}(s) - a} < \infty.$$
All polynomials in t are exponentially bounded for t > 0, all exponential functions of the form
ebt are exponentially bounded. All piecewise smooth periodic signals are bounded, hence,
exponentially bounded for t > 0 as well, and all products and sums of exponentially bounded
signals are again exponentially bounded. The Laplace transform therefore applies to many
signals, many more than can be handled with the Fourier transform.

6.1.2. Example. The Laplace transform of f (t) = 1 is F(s) = 0 est dt. This Laplace

st 
transform exists iff Re s > 0 in which case F(s) = 0 est dt = es 0 = 1/s.

This example is instructive in that it demonstrates a fundamental feature of Laplace transforms:
For a given signal f (t) the set of values of s for which the Laplace integral (6.1) exists, adheres
to one of the following three situations.
1. There is a σ ∈ ℝ such that the Laplace transform exists for every Re s > σ, while for Re s < σ the Laplace transform never exists. This number σ is the abscissa of convergence.

2. The Laplace transform exists for every s ∈ ℂ. Example: $f(t) = e^{-t^2}$.


3. For no value of s ∈ ℂ does the Laplace transform exist. Example: $f(t) = e^{t^2}$.

In the second of the three situations mentioned above, the abscissa of convergence is taken to be σ = −∞; in the third situation the abscissa of convergence is taken to be σ = +∞. The general statement is this:

6.1.3. Lemma. For any signal f(t) there is a σ ∈ ℝ, possibly σ = ±∞, such that F(s) exists for all Re s > σ, and does not exist for any Re s < σ.

Proof. The statement is equivalent to saying that

if F(s₀) exists, then F(s) exists for all Re s > Re s₀.

This we shall prove. So, suppose s₀ is such that F(s₀) exists. Then Φ(t) defined as $\Phi(t) = \int_0^t e^{-s_0\tau}\,f(\tau)\,d\tau$ converges to F(s₀) as t → ∞, which, in particular, means that Φ(t) is bounded on [0, ∞). This we soon need. Now suppose that Re s > Re s₀. Then

$$F(s) = \lim_{M\to\infty} \int_0^M e^{-st}\,f(t)\,dt = \lim_{M\to\infty} \int_0^M e^{-(s-s_0)t}\,e^{-s_0 t}\,f(t)\,dt = \lim_{M\to\infty} \int_0^M e^{-(s-s_0)t}\,\Phi'(t)\,dt$$
$$= \lim_{M\to\infty}\Big( e^{-(s-s_0)t}\,\Phi(t)\Big|_{t=0}^{t=M} + (s - s_0)\int_0^M e^{-(s-s_0)t}\,\Phi(t)\,dt \Big).$$

Since Φ(t) is bounded, and Re(s − s₀) > 0, we see that the last limit exists. Therefore F(s) exists whenever Re s > Re s₀.
6.1.4. Example.

1. The unit step 1(t) has Laplace transform

$$\int_{0}^{\infty} e^{-st}\,1(t)\,dt = \lim_{N\to\infty} \int_0^N e^{-st}\,dt = \lim_{N\to\infty} \frac{e^{-st}}{-s}\Big|_{t=0}^{t=N} = \{\operatorname{Re} s > 0\} = \frac{1}{s}.$$

The above limit exists only if Re s > 0. The abscissa of convergence is therefore σ = 0.

2. The causal exponential function e^{at} 1(t) (with a complex) has Laplace transform

$$\int_{0}^{\infty} e^{-st}\,e^{at}\,1(t)\,dt = \int_{0}^{\infty} e^{-(s-a)t}\,dt = \frac{1}{s - a} \qquad (\operatorname{Re}(s) > \operatorname{Re}(a)).$$

The abscissa of convergence is σ = Re(a).


3. The Laplace transform of f(t) = e^{at} cos(bt) 1(t), with a complex and b real, follows similarly as above:

$$\int_{0}^{\infty} e^{-st}\,e^{at}\cos(bt)\,dt = \frac{1}{2}\int_{0}^{\infty}\big(e^{-(s-a-jb)t} + e^{-(s-a+jb)t}\big)\,dt$$
$$= \{\operatorname{Re}(s - a \mp jb) > 0\} = \frac{1}{2}\Big(\frac{1}{s-a-jb} + \frac{1}{s-a+jb}\Big) = \frac{s-a}{(s-a)^2 + b^2}.$$

The abscissa of convergence is σ = Re(a ± jb) = Re a.
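These transforms are easy to spot-check numerically (a sketch of our own; the values of a, b and s are arbitrary, chosen with Re s > Re a so that the integral converges):

    % Check L{e^{at} cos(bt) 1(t)} = (s-a)/((s-a)^2 + b^2) at one point s.
    a = -0.5; b = 3; s = 1 + 2j;                  % Re s > Re a
    Fnum = integral(@(t) exp(a*t).*cos(b*t).*exp(-s*t), 0, 100);
    Fexact = (s - a)/((s - a)^2 + b^2);
    disp(abs(Fnum - Fexact))                      % of the order of the tolerance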


Figure 6.1: Region of convergence {s ∈ ℂ : Re(s) > σ} (the open half-plane to the right of the line Re s = σ).


Given a function f(t) and its abscissa of convergence σ, we shall refer to the set {s ∈ ℂ : Re s > σ} as the region of convergence of the Laplace transform of f(t), see Figure 6.1. On the region of convergence the Laplace transform exists. On the boundary {s ∈ ℂ : Re(s) = σ} of this region the Laplace transform may or may not exist, depending on f(t) and on the value of s.

Remark: if f(t) is causal and its abscissa of convergence is less than zero, then the imaginary axis is part of the region of convergence, and consequently, then,

$$\int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt = \int_{0}^{\infty} f(t)\,e^{-j\omega t}\,dt = \int_{0}^{\infty} f(t)\,e^{-st}\,dt$$

exists with s = jω. In these cases the Fourier transform exists and coincides with the Laplace transform for s = jω.

6.1.5. Example. Consider the causal signal f(t) = e^{−at} 1(t) with Re a > 0. The Laplace transform (see Example 6.1.4) equals 1/(s + a) with abscissa of convergence σ = Re(−a) < 0. Hence the Fourier transform is 1/(jω + a).


The Laplace transform of 1(t) is 1/s with abscissa of convergence σ = 0. The Fourier transform of 1(t), however, equals 1/(jω) + π δ(ω). This demonstrates that when the boundary of the region of convergence is the imaginary axis, then the (generalized) Fourier transform cannot always be obtained from the Laplace transform through substitution of s = jω.

The mapping that sends a signal f(t) to its Laplace transform F(s) is denoted by L. That is,

$$\mathcal{L}\{f(t)\} = \int_{0}^{\infty} f(t)\,e^{-st}\,dt.$$

Both F(s) and the mapping L are referred to as the Laplace transform.
In Chapter 3 we saw that a time domain signal f (t) can be recovered from its Fourier
transform through what was called the inverse Fourier transform. Also the Laplace transform
has an inverse; however, since the Laplace transform only considers positive time values t > 0,
the inverse Laplace transform necessarily can only recover f (t) for t > 0. Without proof we
state that causal signals are uniquely determined by their Laplace transform, where uniqueness
is in the sense of generalized functions. For piecewise smooth signals this means that the
Laplace transform uniquely characterizes the signal except at its points of discontinuity.

6.2 Signals with delta components

In Chapter 4 the Fourier transform was extended in such a way that generalized functions such as the delta function could be given a meaningful Fourier transform. In the extension of the Laplace transform we shall confine ourselves to signals f(t) of the form

$$f(t) = g(t) + \sum_{n=0}^{N} a_n\,\delta(t - t_n). \qquad (6.2)$$

Here g(t) denotes a piecewise smooth signal, the coefficients a_n are (complex) numbers and the t_n are arbitrary time instances t_n ∈ ℝ.

The Laplace transform of the signal f(t) of (6.2) is now taken to be

$$\mathcal{L}\{f(t)\} = \mathcal{L}\{g(t)\} + \sum_{n=0}^{N} a_n\,\mathcal{L}\{\delta(t - t_n)\}.$$

The Laplace transform of g(t) is the Laplace transform of a piecewise smooth signal as dealt with in the previous section. It will be no surprise that for the Laplace transform of δ(t − t_n) we shall use the sifting property of delta functions: if t_n ≠ 0, then

$$\mathcal{L}\{\delta(t - t_n)\} = \int_{0}^{\infty} \delta(t - t_n)\,e^{-st}\,dt = \int_{-\infty}^{\infty} \delta(t - t_n)\,e^{-st}\,1(t)\,dt = \{1(t)\ \text{is continuous at } t = t_n \neq 0\} = \begin{cases} 0 & \text{if } t_n < 0,\\ e^{-st_n} & \text{if } t_n > 0. \end{cases}$$



If t_n = 0, then we end up with the integral ∫₀^∞ δ(t)e^{−st} dt, which has no meaning since the delta function δ(t) has its spike precisely at t = 0, which is on the boundary of the interval over which is integrated. To accommodate this problem it is customary to adjust the definition of the Laplace transform by expanding slightly the interval over which is integrated. The Laplace transform will henceforth be understood as
\[
F(s) = \int_{0-}^\infty f(t) e^{-st}\,dt := \lim_{\epsilon \downarrow 0} \int_{-\epsilon}^\infty f(t) e^{-st}\,dt. \tag{6.3}
\]

Consequently, for any t_n ∈ R, possibly t_n = 0, we get that
\[
L\{ \delta(t - t_n) \} = \int_{0-}^\infty \delta(t-t_n) e^{-st}\,dt
= \begin{cases} 0 & \text{if } t_n < 0, \\ e^{-st_n} & \text{if } t_n \geq 0. \end{cases} \tag{6.4}
\]

In particular the Laplace transform of the delta function δ(t) is equal to 1. For piecewise smooth signals f(t) it makes no difference whether integration in (6.3) begins at 0− or 0 or even 0+, but for generalized functions it does make a difference, and opting for the choice 0− means that we want to be able to take effects of delta function components δ(t) fully into account.

6.3 Properties of the Laplace transform


Following is a list of properties and rules of calculus for the Laplace transform. Only those properties are proved whose derivation is substantially different from that of the corresponding Fourier transform property. In what follows, F(s) denotes the Laplace transform of f(t), and σ denotes the abscissa of convergence of F(s).
1. Linearity
\[
L\{ a_1 f_1(t) + a_2 f_2(t) \} = a_1 L\{ f_1(t) \} + a_2 L\{ f_2(t) \}.
\]
2. Time scaling
\[
L\{ f(at) \} = \frac{1}{a} F\!\left(\frac{s}{a}\right), \qquad (a > 0,\ \operatorname{Re}(s) > a\sigma).
\]
3. Time-shift
\[
L\{ f(t - t_0)\,1(t - t_0) \} = F(s) e^{-st_0}, \qquad (t_0 > 0,\ \operatorname{Re}(s) > \sigma).
\]
4. Shift in the s-domain
\[
L\{ f(t) e^{s_0 t} \} = F(s - s_0), \qquad (\operatorname{Re}(s) > \operatorname{Re}(s_0) + \sigma).
\]
5. Differentiation with respect to time
\[
L\{ f'(t) \} = s F(s) - f(0-), \qquad (\operatorname{Re}(s) > \sigma). \tag{6.5}
\]


Proof. We prove this only for the case that f(t) is differentiable in the classical sense. We shall further assume that on the region of convergence f(t)e^{−st} → 0 for t → ∞. This is practically always the case. Let ε > 0. Partial integration gives that
\[
\int_{-\epsilon}^\infty f'(t) e^{-st}\,dt = \int_{-\epsilon}^\infty e^{-st}\,d f(t)
= \left[ f(t) e^{-st} \right]_{t=-\epsilon}^{t=\infty} + s \int_{-\epsilon}^\infty f(t) e^{-st}\,dt
= -f(-\epsilon) e^{s\epsilon} + s \int_{-\epsilon}^\infty f(t) e^{-st}\,dt.
\]
Now take the limit ε ↓ 0, then we see that L{f'(t)} = −f(0−) + sF(s).
If f(t) is piecewise smooth, then the rule (6.5) remains valid, even if the derivative f'(t) exists only in the generalized sense.
6. Integration with respect to time. Let g(t) = ∫_{0−}^t f(τ) dτ. Then
\[
L\{ g(t) \} = \frac{F(s)}{s}, \qquad (\operatorname{Re}(s) > \max(0, \sigma)).
\]
The derivation of this rule is postponed till we treat the convolution theorem for Laplace transforms (see Example 6.3.6).
7. Differentiation with respect to s
\[
L\{ -t f(t) \} = F'(s), \qquad (\operatorname{Re}(s) > \sigma).
\]

6.3.1. Example. Let g(t) = f'(t). Then by Rule 5 we have that G(s) = sF(s) − f(0−). The Laplace transform of the second derivative can be obtained by applying Rule 5 twice:
\[
L\{ f^{(2)}(t) \} = L\{ g'(t) \} = s G(s) - g(0-) = s\left( s F(s) - f(0-) \right) - f'(0-)
= s^2 F(s) - s f(0-) - f'(0-).
\]
Repeated use (n times) of the differentiation rule will give
\[
L\{ f^{(n)}(t) \} = s^n F(s) - s^{n-1} f(0-) - s^{n-2} f'(0-) - \cdots - f^{(n-1)}(0-). \tag{6.6}
\]
If f(t) is causal, then f^{(k)}(0−) = 0, so then
\[
L\{ f^{(n)}(t) \} = s^n F(s).
\]

6.3.2. Example. Repeated use of Rule 7 gives
\[
L\{ (-t)^n f(t) \} = F^{(n)}(s) \qquad (n = 0, 1, \ldots). \tag{6.7}
\]



6.3.3. Example.
a) Consider the signal f1(t) = e^{at} and the causal signal f2(t) = e^{at} 1(t), and realize that they have the same Laplace transform F1(s) = F2(s) = 1/(s−a). The derivatives of f1(t) and f2(t) are
\[
f_1'(t) = a e^{at}, \qquad f_2'(t) = a e^{at}\,1(t) + \delta(t).
\]
The derivative of f2(t) is a generalized derivative since f2(t) is discontinuous at t = 0. With help of Rule 5 we find that
\[
L\{ f_1'(t) \} = \frac{s}{s-a} - f_1(0-) = \frac{s}{s-a} - 1 = \frac{a}{s-a},
\qquad
L\{ f_2'(t) \} = \frac{s}{s-a} - f_2(0-) = \frac{s}{s-a}.
\]
These findings may also be obtained from direct calculation of L{f1'(t)} and L{f2'(t)}.
b) We know that L{e^{at}} = 1/(s−a). Differentiate n times in the s-domain and we arrive at
\[
L\{ (-t)^n e^{at} \} = \frac{(-1)^n n!}{(s-a)^{n+1}},
\]
hence
\[
L\left\{ \frac{t^n e^{at}}{n!} \right\} = \frac{1}{(s-a)^{n+1}}.
\]

Some of the more commonly used Laplace transform pairs and properties are collected in
Tables 6.1 and 6.2.
6.3.4. Example. The inverse Laplace transform of rational functions may be determined with the help of partial fraction expansion (see Appendix A). The method is the same as for determining the inverse Fourier transform of rational functions. Let F(s) be given as
\[
F(s) = \frac{6s}{(s+1)(s^2-4)}.
\]
The poles of this rational function are s1 = −1, s2 = 2 and s3 = −2. Verify yourself that the partial fraction expansion of F(s) is
\[
F(s) = \frac{2}{s+1} + \frac{1}{s-2} - \frac{3}{s+2}.
\]
Now, from Table 6.2 we can directly write down the inverse Laplace transform,
\[
f(t) = 2 e^{-t} + e^{2t} - 3 e^{-2t}, \qquad (t > 0).
\]


Property             | f(t)                  | F(s) = ∫_{0−}^∞ f(t)e^{−st} dt   | Condition
---------------------|-----------------------|----------------------------------|--------------------
Linearity            | a1 f1(t) + a2 f2(t)   | a1 F1(s) + a2 F2(s)              | Re s > max(σ1, σ2)
Time-scaling         | f(at)                 | (1/a) F(s/a)                     | a > 0, Re s > aσ
Time-shift           | f(t−t0) 1(t−t0)       | F(s) e^{−st0}                    | t0 > 0, Re s > σ
Shift in s-domain    | f(t) e^{s0 t}         | F(s − s0)                        | Re s > Re s0 + σ
Differentiation (t)  | f'(t)                 | s F(s) − f(0−)                   | Re s > σ
                     | f''(t)                | s² F(s) − s f(0−) − f'(0−)       | Re s > σ
Integration (t)      | ∫_{0−}^t f(τ) dτ      | F(s)/s                           | Re s > max(0, σ)
Differentiation (s)  | −t f(t)               | F'(s)                            | Re s > σ

Table 6.1: Standard Laplace transform properties.

f(t), (t > 0)                  | F(s) = ∫_{0−}^∞ f(t)e^{−st} dt | Region of conv.
-------------------------------|--------------------------------|----------------
e^{at}                         | 1/(s−a)                        | Re s > Re a
tⁿ/n!  (n = 0, 1, …)           | 1/s^{n+1}                      | Re s > 0
tⁿe^{at}/n!  (n = 0, 1, …)     | 1/(s−a)^{n+1}                  | Re s > Re(a)
cos(bt)                        | s/(s²+b²)                      | Re s > 0
sin(bt)                        | b/(s²+b²)                      | Re s > 0
e^{at} cos(bt)                 | (s−a)/((s−a)²+b²)              | Re s > Re a
e^{at} sin(bt)                 | b/((s−a)²+b²)                  | Re s > Re a
δ(t)                           | 1                              | s ∈ C

Table 6.2: Standard Laplace transform pairs.


In Section 3.4 we saw that the convolution of two causal signals f(t) and g(t) is again causal,
\[
(f * g)(t) = \left( \int_0^t f(\tau) g(t-\tau)\,d\tau \right) 1(t).
\]
Since the Laplace transform only deals with the causal part of a signal (i.e., the part f(t) for t ≥ 0), it is natural to define convolutions in this respect as
\[
(f * g)(t) = \int_0^t f(\tau) g(t-\tau)\,d\tau, \qquad (t > 0). \tag{6.8}
\]
We stress that the signals f(t) and g(t) are allowed to be non-causal. Also, we want to allow delta components in f(t) and g(t), and so we have to extend the interval [0, t] over which is integrated in (6.8) slightly,
\[
(f * g)(t) = \int_{0-}^{t+} f(\tau) g(t-\tau)\,d\tau. \tag{6.9}
\]

6.3.5. Theorem (Convolution theorem for the Laplace transform). Let f(t) and g(t) be signals with Laplace transforms F(s) and G(s) respectively. Then
\[
L\{ (f * g)(t) \} = F(s) G(s),
\]
where (f ∗ g)(t) is the convolution product (6.9).

For piecewise smooth signals f(t) and g(t) the proof of this theorem is the same as the proof of the convolution theorem for the Fourier transform. It may be shown that the result is still valid when f(t) and g(t) contain delta function components.
6.3.6. Example.
a) Consider the unit step 1(t) and an arbitrary signal f(t). Then
\[
L\{ (f * 1)(t) \} = L\left\{ \int_{0-}^t f(\tau)\,d\tau \right\} = \frac{F(s)}{s}.
\]
(Compare with Rule 6 on page 113.)
b) Consider the delta function δ(t) and an arbitrary signal f(t). Then
\[
L\{ (f * \delta)(t) \} = F(s) \cdot 1 = F(s).
\]
In other words (f ∗ δ)(t) = f(t) for all t > 0.


Sometimes it comes in handy to express the final value f(∞),
\[
f(\infty) := \lim_{t\to\infty} f(t),
\]
directly in terms of F(s). This can be done provided that the final value exists.


6.3.7. Theorem (Final value theorem). Let f(t) be a signal whose final value f(∞) exists, let F(s) denote the Laplace transform of f(t), and assume that F(s) is defined for all Re s > 0. Then
\[
\lim_{s \to 0} s F(s) = f(\infty).
\]

Proof. (Sketch) Suppose that f(t) is piecewise smooth, then by the differentiation rule (Rule 5) it holds that
\[
s F(s) = \int_{0-}^\infty e^{-st} f'(t)\,dt + f(0-).
\]
Now take the limit s → 0 and change limit and integration,
\[
\lim_{s\to 0} s F(s) = \lim_{s\to 0} \int_{0-}^\infty e^{-st} f'(t)\,dt + f(0-) = \int_{0-}^\infty f'(t)\,dt + f(0-)
= \left[ f(t) \right]_{0-}^{\infty} + f(0-) = f(\infty). \tag{6.10}
\]

Adding delta function components a_n δ(t − t_n), t_n ≥ 0, has no effect on the result. Indeed,
\[
\lim_{t\to\infty} \left( f(t) + \sum_n a_n \delta(t - t_n) \right) = f(\infty)
\]
is independent of the delta components, and so is the limit
\[
\lim_{s\to 0} s\,L\left\{ f(t) + \sum_n a_n \delta(t - t_n) \right\}
= \lim_{s\to 0} \left( s F(s) + s \sum_n a_n e^{-s t_n} \right) = \lim_{s\to 0} s F(s).
\]

6.3.8. Example. Let f(t) be a signal with Laplace transform
\[
F(s) = \frac{5}{s(s^2+2s+5)}.
\]
To find f(t) we use partial fraction expansion,
\[
\frac{5}{s(s^2+2s+5)} = \frac{1}{s} - \frac{s+2}{s^2+2s+5}
= \frac{1}{s} - \frac{s+1}{(s+1)^2+4} - \frac{1}{(s+1)^2+4}.
\]

From Table 6.2 we copy
\[
L\{1\} = \frac{1}{s}, \qquad
L\left\{ \tfrac{1}{2} e^{-t} \sin(2t) \right\} = \frac{1}{(s+1)^2+4}, \qquad
L\{ e^{-t} \cos(2t) \} = \frac{s+1}{(s+1)^2+4},
\]
so that
\[
f(t) = 1 - e^{-t}\left( \tfrac{1}{2}\sin(2t) + \cos(2t) \right), \qquad (t > 0).
\]

From this the final value f(∞) can be seen to exist and it equals lim_{t→∞} f(t) = 1. This value is indeed equal to what the final value theorem states:
\[
\lim_{s\to 0} s F(s) = \lim_{s\to 0} \frac{5}{s^2+2s+5} = 1.
\]


We end this chapter with a discussion of Laplace transforms of periodic signals, at least, periodic on [0, ∞), which is to say that f(t + T) = f(t) for all t ≥ 0. Here T is the period of f(t). The Laplace transform of a T-periodic signal can be written as
\[
F(s) = \int_{0-}^\infty f(t) e^{-st}\,dt = \sum_{m=0}^\infty \int_{mT}^{mT+T} f(t) e^{-st}\,dt
= \{ \tau = t - mT \} = \sum_{m=0}^\infty \int_{0-}^{T} f(mT+\tau) e^{-s(mT+\tau)}\,d\tau
= \sum_{m=0}^\infty e^{-msT} \int_{0-}^{T} f(\tau) e^{-s\tau}\,d\tau.
\]

Here we recognize a geometric series with ratio e^{−sT}. The geometric series converges provided that |e^{−sT}| < 1, i.e., Re s > 0, and then has limit
\[
\sum_{m=0}^\infty e^{-msT} = \frac{1}{1-e^{-sT}} \qquad (\operatorname{Re} s > 0).
\]
Therefore
\[
F(s) = L\{ f(t) \} = \frac{F_T(s)}{1-e^{-sT}} \qquad (\operatorname{Re}(s) > 0), \tag{6.11}
\]
where
\[
F_T(s) = \int_{0-}^{T} f(t) e^{-st}\,dt.
\]
The Laplace transform can apparently be expressed by a Laplace-type integral over one period.

[Figure 6.2: A sawtooth signal, with tick marks at T, 2T and 3T.]


6.3.9. Example. Figure 6.2 shows the plot of a sawtooth signal. It is the T-periodic signal f(t) defined on one period [0, T) as f(t) = t and otherwise periodically extended. Via the integral
\[
F_T(s) = \int_0^T t e^{-st}\,dt = \frac{1-(1+sT)e^{-sT}}{s^2},
\]


we obtain the Laplace transform of f(t):
\[
L\{ f(t) \} = \frac{F_T(s)}{1-e^{-sT}} = \frac{1-(1+sT)e^{-sT}}{s^2(1-e^{-sT})}
= \frac{1}{s^2} - \frac{T e^{-sT}}{s(1-e^{-sT})}.
\]
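Formula (6.11) can be verified numerically. The sketch below compares the closed form with a direct, truncated integral over many periods at an arbitrary test point s with Re s > 0; the truncation length 50T is an arbitrary choice.

T = 1; s = 2 + 3j;                                     % test point, Re s > 0
FT = integral(@(t) t.*exp(-s*t), 0, T);                % F_T(s), one period
Fclosed = FT/(1 - exp(-s*T));                          % formula (6.11)
Fdirect = integral(@(t) mod(t,T).*exp(-s*t), 0, 50*T); % truncated direct integral
abs(Fclosed - Fdirect)                                 % small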

See Figure 6.3. Every periodic signal has an abscissa of convergence equal to 0. Note that the denominator s(1 − e^{−sT}) is zero for every s = jkω0, k ∈ Z, with ω0 = 2π/T. This is perhaps not that surprising since we know that periodic signals have a generalized Fourier transform whose spectrum is a series of infinite peaks at integer multiples of ω0.

[Figure 6.3: Plot of |F(s)| for F(s) = T/(s(1 − e^{−sT})), T = 1, over Re s ∈ [0, 5], Im s ∈ [−5, 5].]

6.4 Problems

6.1 Determine f(t), (t > 0), whose Laplace transform equals
(a) 1/(s−2)
(b) s/(s−2)
(c) s/(s−2)²
(d) s³/(s−2)³
(e) 1/(s²+2s+2)
(f) s/(s²+2s+2)


6.2 Let f(t) = (1 + 1(t−1)) cos(t). Verify that
\[
L\{ f'(t) \} = s L\{ f(t) \} - f(0-).
\]
6.3 Determine the Laplace transform of the following signals
(a) f(t) = |sin(ω0 t)|,
(b) f(t) = ⌊t⌋.
Here ⌊t⌋ denotes the entier (integer part) of t, defined as ⌊t⌋ = max{n ∈ Z : n ≤ t}.

6.4 Determine the inverse Laplace transform of the signals
(a) 1/(s²+1)²,
(b) (s²−3s+2)/(s²−7s+12),
(c) (1+e^{−s})/(s²+1),
(d) e^{−(s+a)t0}/(s+a).

6.5 (a) Determine the inverse Laplace transform of
\[
F(s) = \frac{1-e^{-as}}{s(1-e^{-bs})}, \qquad (0 < a < b,\ \operatorname{Re} s > 0).
\]
(b) Sketch the inverse Laplace transform of
\[
F(s) = \frac{1}{(s+a)(1-e^{-sT})} \qquad (a > 0,\ T > 0,\ \operatorname{Re} s > 0).
\]

6.6 Suppose f(t) is a periodic signal with period T and let F(s) be the Laplace transform of f(t). Determine lim_{s→0} sF(s).
6.7 Use the Laplace transform to determine (f ∗ g)(t) for f(t) = tⁿ 1(t) and g(t) = t^m 1(t).
More involved problems:
6.8 Let α > 0. Determine all s ∈ C for which the Laplace transform of the signal f(t) = 1/(1 + t^α) exists. (Hint: distinguish various cases of α.)

7 Systems described by ordinary linear differential equations

[Figure 7.1: An input-state-output system: input u(t), output y(t), and internal state signals x1(t), x2(t), ….]


In Chapter 5 we saw that the Fourier transform is of help in the analysis of analog filters, that
is, of continuous time LTI systems. In that chapter the system was modeled as a black box with
only the interconnection between the input and output considered important, and without any
regard for the inner workings of the system. The mathematical modeling of a system was kept
to a minimum, and was directed towards a description of the impulse response or frequency
response.
This chapter deals with continuous-time systems whose input and output are related via a set of linear differential equations with constant coefficients. Besides the input u(t) and output y(t) now also other signals x_k(t), k = 1, 2, …, n enter the scene. These signals x_k(t) form what is called the state of the system and they can be understood as some form of memory of the system. As Figure 7.1 suggests, the state components x_k(t) are often interpreted as being internal to the system. The output y(t) now not only depends on the input u(t) but on the state as well,
\[
y = T(u, x_1, x_2, \ldots, x_n).
\]
In this chapter the Laplace transform will be used frequently.

7.1 State variables and state equations


7.1.1. Example. Figure 7.2 shows a mechanical system in which a mass m is connected to a
wall via a spring with spring constant k. On the mass an exterior force F(t) is exerted.


[Figure 7.2: A mechanical system: a mass m connected to a wall via a spring; q(t) is the position of the mass and F(t) the exterior force.]


Let q(t) denote the position of the mass with respect to the point where the spring exerts no force on the mass.
We assume that we have control over the exterior force F(t) and hence take that as the input, u(t) = F(t). As output we take the speed of the mass, y(t) = q̇(t).
The equations of motion are
\[
m \ddot{q}(t) + k q(t) = u(t). \tag{7.1}
\]
(For brevity, the derivative of q(t) with respect to time is denoted by q̇(t), and q̈(t) denotes the second derivative, etc.)
From t = t0 an input force u(t) = F(t) is exerted on the mass. It may be intuitively clear that if we know at t = t0 the position q(t0) of the mass and the speed q̇(t0) of the mass, then the position and speed of the mass are fully determined for all t ≥ t0 for any given input force. Define the signals x1(t) and x2(t) as
\[
x_1(t) = q(t), \qquad x_2(t) = \dot{q}(t).
\]

The second-order differential equation (7.1) can be alternatively expressed by 2 first-order differential equations
\[
\dot{x}_1(t) = x_2(t), \qquad
\dot{x}_2(t) = -\frac{k}{m} x_1(t) + \frac{1}{m} u(t).
\]
In matrix notation this is
\[
\dot{x}(t) = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u(t),
\]
where x(t) is the 2 × 1 vector x(t) = [x1(t); x2(t)]. Similarly, the output y(t) = q̇(t) = x2(t) in matrix notation becomes
\[
y(t) = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix}.
\]


In summary, we found a description of the system of the form
\[
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t), \tag{7.2}
\]
in which
\[
A = \begin{bmatrix} 0 & 1 \\ -k/m & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 1/m \end{bmatrix}, \qquad
C = \begin{bmatrix} 0 & 1 \end{bmatrix}
\]
are matrices, and in which only first order derivatives of the signals show up.
This chapter considers systems with one input u(t) and one output y(t) that can be related via a vector differential equation of the form (7.2). In the above example the variable x(t) is a vector with two entries, x1(t) and x2(t). In this chapter we shall allow for variables x(t) with an arbitrary number of entries,
\[
x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{bmatrix}.
\]
This vector-valued signal x(t) is called a state of the system. The first set of equations in (7.2),
\[
\dot{x}(t) = A x(t) + B u(t),
\]
are called the state equations. Now in general A is an n × n matrix and B is a column vector of the same dimension as the state x(t). The second equation in (7.2) is y(t) = Cx(t) and is called the output equation of the system. In fact we shall allow for a more general output equation
\[
y(t) = C x(t) + D u(t).
\]
Here C is a row vector with as many entries as the state x(t), and D is a scalar. Note that the output equation involves no derivatives. In summary, then, the systems considered in this chapter are the ones with representation
\[
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t). \tag{7.3}
\]

This representation is known as an input-state-output representation. Why would we want to consider input-state-output representations? One of the motivations is that they lend themselves well to simulation.
7.1.2. Example. Consider again the mechanical system of Figure 7.2. In Example 7.1.1 we rewrote the system in the form (7.2), where x(t) = [x1(t); x2(t)] = [q(t); q̇(t)] is the vector of position and speed, and u(t) = F(t) is the external force.
Suppose we wish to simulate this system for the input force
\[
u(t) = F(t) = 1(t-20).
\]


[Figure 7.3: Simulation of the mechanical system of Example 7.1.1: the input u(t) steps from 0 to 1 at t = 20; the output y(t) is plotted over t = 0 to t = 40.]


See Figure 7.3. In order to do simulation on a computer we discretize the differential equation: Let h be some small number. Then,
\[
x(t+h) \approx x(t) + h \dot{x}(t) = x(t) + h\left( A x(t) + B u(t) \right).
\]
Therefore for the sampled signals x[n] = x(nh), u[n] = u(nh), (n ∈ N), there approximately holds that
\[
x[n+1] = x[n] + h\left( A x[n] + B u[n] \right).
\]
This is an explicit recurrence relation, in a form ready for computation once the input u[n] and initial conditions x[0] are known. The Matlab command lsim makes use of a similar recurrence relation.
k=1;                  % Some spring constant
m=1;                  % Some mass
A=[0 1; -k/m 0];      % The input-state-output data
B=[0; 1/m];
C=[0 1];
D=0;
x0=[0.2; 0.1];        % Some initial state x(0)
h=0.1;                % Integration step-size
t=0:h:40;             % Discretized time interval [0,40]
u=stepfun(t,20);      % Discretized unit step 1(t-20)
lsim(A,B,C,D,u,t,x0); % Do the simulation

The output y(t) produced by lsim is shown in Figure 7.3.
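For comparison, the Euler recurrence x[n+1] = x[n] + h(Ax[n] + Bu[n]) can also be coded directly. A minimal sketch, reusing the variables A, B, C, D, x0, h, t and u defined above:

N = length(t);
x = x0; y = zeros(1,N);
for n = 1:N
  y(n) = C*x + D*u(n);        % output equation
  x = x + h*(A*x + B*u(n));   % forward Euler step of the state equation
end
plot(t,y)                     % close to the lsim output for small h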

There is an abundance of systems that have an input-state-output representation. Examples include electrical circuits with resistors, capacitors and inductors. In this case the rule of thumb in assigning the state variables is this: take the current through each of the inductors as a state component, and take the voltage across each of the capacitors as a state component x_k(t). We demonstrate this in the following example.


[Figure 7.4: Electrical circuit (Example 7.1.3), with source voltage u(t), currents i(t), i1(t), i2(t), components R, L, C1, C2, and output voltage y(t).]
7.1.3. Example. Consider the electrical circuit of Figure 7.4. The voltage across the voltage source is taken to be the input u(t), and as output y(t) we take the voltage across the resistor R and capacitor C2 as shown in the figure.
With Kirchhoff's laws and the current-voltage relations of the various components, it is possible to determine the voltages and currents for any of the components. As state vector we take
\[
x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}
= \begin{bmatrix} i(t) \\ v_{C_1}(t) \\ v_{C_2}(t) \end{bmatrix},
\]
in which x1(t) is the current i(t) = i_L(t) through the inductor L, x2(t) is the voltage v_{C1}(t) across the capacitor C1, and x3(t) is the voltage v_{C2}(t) across the capacitor C2.
Let i1(t) and i2(t) denote the currents through the capacitors C1 and C2 respectively, and let v_L(t) be the voltage across the inductor. The following three equations in x1(t), x2(t) and x3(t) are readily verified:
\[
u(t) = R x_1(t) + v_L(t) + R i_1(t) + x_2(t),
\qquad
0 = R i_1(t) + x_2(t) - R i_2(t) - x_3(t),
\qquad
x_1(t) = i_1(t) + i_2(t). \tag{7.4}
\]

The current-voltage relations of the inductor and two capacitors yield
\[
v_L(t) = L \dot{x}_1(t), \qquad
x_2(t) = x_2(t_0) + \frac{1}{C_1}\int_{t_0}^t i_1(\tau)\,d\tau, \qquad
x_3(t) = x_3(t_0) + \frac{1}{C_2}\int_{t_0}^t i_2(\tau)\,d\tau.
\]

The second and third of these equations can be made into differential equations through differentiation with respect to t,


i 1 (t) = C1 x2 (t),
i 2 (t) = C2 x3 (t).
Next substitute i1 (t) and i2 (t) in the equations (7.4), and we arrive at
L x1 (t) + RC1 x2 (t) = Rx 1 (t) x2 (t) + u(t),
RC 1 x2 (t) RC2 x3 (t) = x3 (t) x2 (t),
C1 x2 (t) + C2 x3 (t) = x1 (t).
These are three ordinary differential equations of first order. In matrix notation this is
\[
\underbrace{\begin{bmatrix} L & 0 & 0 \\ 0 & RC_1 & -RC_2 \\ 0 & C_1 & C_2 \end{bmatrix}}_{V}
\begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{bmatrix}
= \begin{bmatrix} -R & -1 & 0 \\ 0 & -1 & 1 \\ 1 & 0 & 0 \end{bmatrix}
\begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{bmatrix}
+ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(t).
\]
Finally, multiply both sides of the equality from the left with V^{-1}, then we end up with the state equations
\[
\dot{x}(t) = A x(t) + B u(t),
\]
in which (using Maple)
\[
A = V^{-1} \begin{bmatrix} -R & -1 & 0 \\ 0 & -1 & 1 \\ 1 & 0 & 0 \end{bmatrix}
= \begin{bmatrix} -R/L & -1/L & 0 \\ 1/(2C_1) & -1/(2RC_1) & 1/(2RC_1) \\ 1/(2C_2) & 1/(2RC_2) & -1/(2RC_2) \end{bmatrix},
\qquad
B = V^{-1} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1/L \\ 0 \\ 0 \end{bmatrix}.
\]

and
L
B = 0
0

The output is y(t) = Ri2 (t)+vC2 (t) = RC2 x3 (t)+x3 (t). Since x3 (t) =
x3 (t)) we find the output equation y(t) = Cx(t) with
C=

1
1
x (t)+ 2RC
(x2 (t)
2C 2 1
2


1
R 1 1 .
2

Now we wrote the circuit in the input-state-output representation (7.3). Note that D = 0 in this
example.



[Figure 7.5: A simulation diagram (Example 7.1.4): the input u(t) enters through gains 7 and 8, two integrators produce x1(t) and x2(t), and the output y(t) = x1(t) is fed back through gains −5 and −6.]


In Example 7.1.1 we saw that the second-order differential equation m q̈(t) + kq(t) = u(t) may be rearranged as an input-state-output equation. Interestingly, every linear differential equation with constant coefficients
\[
y^{(n)}(t) + q_{n-1} y^{(n-1)}(t) + \cdots + q_1 y^{(1)}(t) + q_0 y(t)
= p_n u^{(n)}(t) + p_{n-1} u^{(n-1)}(t) + \cdots + p_1 u^{(1)}(t) + p_0 u(t) \tag{7.5}
\]
may be rearranged as an input-state-output equation. This partly explains the interest in input-state-output equations.
7.1.4. Example (Simulation diagrams). Consider the differential equation
\[
\ddot{y}(t) + 5\dot{y}(t) + 6y(t) = 7\dot{u}(t) + 8u(t). \tag{7.6}
\]

As explained, the input-state-output representations are a convenient form for simulation. For
ODEs like (7.6) there is another representation that is suitable for simulation purposes. These
are the so called simulation diagrams. Figure 7.5 shows an example of a simulation diagram.
A simulation diagram is built up from three blocks: adders, integrators and gains.
[Diagram: the three building blocks. Adder: inputs a(t), b(t), c(t), output a(t) + b(t) + c(t). Integrator (drawn as a triangle): input ẋ(t), output x(t). Gain: input u(t), output Bu(t).]
Integrators and gains are simple LTI systems. Note that the integrator is represented by a
triangle instead of a box. An adder can have many inputs, but has one output, which is the
sum of the inputs. The name simulation diagram derives from the fact that each of the three
blocks can be well simulated on a computer. For adders and gains this is obvious. Integration
is also doable on a computer, as opposed to numeric differentiation, which is error prone.
In order to find a simulation diagram of (7.6) we rearrange the equation so that only the highest order derivative of the output is on the left-hand side,
\[
\ddot{y}(t) = -5\dot{y}(t) - 6y(t) + 7\dot{u}(t) + 8u(t).
\]


Next we integrate twice so as to get rid of the derivatives on the left-hand side,
\[
y(t) = \iint \left( -5\dot{y}(t) - 6y(t) + 7\dot{u}(t) + 8u(t) \right)
= \int \left( -5y(t) + 7u(t) + \int \left( -6y(t) + 8u(t) \right) \right).
\]
As a final step we assign to each integral on the right-hand side a state variable x_k(t), k = 1, 2:
\[
y(t) = \underbrace{\int \Big( -5y(t) + 7u(t) + \underbrace{\int \left( -6y(t) + 8u(t) \right)}_{x_2(t)} \Big)}_{x_1(t)}.
\]
The state components assigned this way satisfy
\[
y(t) = x_1(t), \qquad
\dot{x}_1(t) = -5y(t) + 7u(t) + x_2(t), \qquad
\dot{x}_2(t) = -6y(t) + 8u(t).
\]
Verify yourself that this agrees with the simulation diagram of Figure 7.5. From the simulation diagram we can directly write down an equivalent input-state-output equation:
\[
\dot{x}(t) = \begin{bmatrix} -5 & 1 \\ -6 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 7 \\ 8 \end{bmatrix} u(t), \qquad
y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t).
\]


The general procedure to go from a high-order ODE (7.5) to a first-order input-state-output representation is no more complicated than demonstrated in the above example. To avoid clutter we shall limit the derivation of the simulation diagram of ODEs (7.5) to the case that n = 3. The general case works exactly alike but is a bit more messy. So, consider the ODE
\[
\dddot{y}(t) + q_2 \ddot{y}(t) + q_1 \dot{y}(t) + q_0 y(t)
= p_3 \dddot{u}(t) + p_2 \ddot{u}(t) + p_1 \dot{u}(t) + p_0 u(t).
\]
The first step is to reorder the ODE as
\[
\dddot{y}(t) = p_3 \dddot{u}(t) + [\,p_2 \ddot{u}(t) - q_2 \ddot{y}(t)\,] + [\,p_1 \dot{u}(t) - q_1 \dot{y}(t)\,] + [\,p_0 u(t) - q_0 y(t)\,]
\]
and then integrate 3 times in order to get rid of the derivatives on the left-hand side,
\[
y(t) = p_3 u(t) + \int \left( p_2 u(t) - q_2 y(t) + \int \left( p_1 u(t) - q_1 y(t) + \int \left( p_0 u(t) - q_0 y(t) \right) \right) \right).
\]
To each integral we now assign a state component, x1(t), x2(t) and x3(t): x3(t) is the innermost integral, x2(t) the middle one, and x1(t) the outermost one.


This way the signals y(t), u(t) and the state components satisfy
\[
y(t) = p_3 u(t) + x_1(t), \qquad
\dot{x}_1(t) = p_2 u(t) - q_2 y(t) + x_2(t), \qquad
\dot{x}_2(t) = p_1 u(t) - q_1 y(t) + x_3(t), \qquad
\dot{x}_3(t) = p_0 u(t) - q_0 y(t).
\]
This corresponds to the simulation diagram of Figure 7.6 and from the simulation diagram in turn we may form the input-state-output representation,
\[
\dot{x}(t) = \begin{bmatrix} -q_2 & 1 & 0 \\ -q_1 & 0 & 1 \\ -q_0 & 0 & 0 \end{bmatrix} x(t)
+ \begin{bmatrix} p_2 - q_2 p_3 \\ p_1 - q_1 p_3 \\ p_0 - q_0 p_3 \end{bmatrix} u(t), \qquad
y(t) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x(t) + p_3 u(t).
\]

[Figure 7.6: Simulation diagram for n = 3: the input u(t) enters through gains p0, p1, p2 and p3, three integrators produce x3(t), x2(t) and x1(t), and the output y(t) is fed back through gains −q0, −q1 and −q2.]


This settles the case that n = 3. Any idea what the general case looks like? (A sketch is given below.)
The state x(t) assigned in this manner normally fails to have a physical interpretation, such as the currents and voltages of Example 7.1.3. Note that both the simulation diagram and the input-state-output equations do not contain derivatives of the input u(t), even though the ODE (7.5) does contain derivatives of the input. This is an advantage of simulation diagrams and input-state-output equations over ODEs, certainly when non-differentiable inputs are considered, such as the unit step u(t) = 1(t).

7.2 Solution of input-state-output equations


In this section we derive the general solution of the input-state-output equations
\[
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t),
\]


where A ∈ R^{n×n}, B ∈ R^{n×1}, C ∈ R^{1×n}, D ∈ R and x(t) is an n-dimensional signal. We assume that the input u(t) is given, and that the state x(t) and output y(t) are to be found. Once we know the state x(t), the output y(t) follows directly from the output equation y(t) = Cx(t) + Du(t), so the main concern in this respect are the state equations, ẋ(t) = Ax(t) + Bu(t).
7.2.1. Example (Variation of constants). In this example we provide the general solution x(t) for the case that n = 1. That is, A and B are assumed scalar, A = a, B = b, and the state consists of only one entry, x(t) = x1(t). Consider, then, the differential equation
\[
\dot{x}_1(t) = a x_1(t) + b u(t). \tag{7.7}
\]
For the zero input this differential equation reduces to the homogeneous equation ẋ1(t) = ax1(t). It is immediate that
\[
x_1(t) = z e^{at}, \qquad z \in \mathbb{C},
\]
is a solution of the homogeneous equation ẋ1(t) = ax1(t) for any constant z. The trick of variation of constants is now to express the candidate solution of (7.7) as
\[
x_1(t) = z(t) e^{at},
\]
where now z(t) is an arbitrary function of time. Any signal x1(t) can be expressed as x1(t) = z(t)e^{at} because e^{at} is invertible. Now
\[
\dot{x}_1(t) = a x_1(t) + b u(t)
\iff \dot{z}(t) e^{at} + a z(t) e^{at} = a z(t) e^{at} + b u(t)
\iff \dot{z}(t) e^{at} = b u(t)
\iff \dot{z}(t) = e^{-at} b u(t)
\iff z(t) = z_0 + \int_0^t e^{-a\tau} b u(\tau)\,d\tau, \quad (z_0 \in \mathbb{C}),
\]
and so the general solution x1(t) = z(t)e^{at} follows as
\[
x_1(t) = e^{at}\left( z_0 + \int_0^t e^{-a\tau} b u(\tau)\,d\tau \right)
= e^{at} z_0 + \int_0^t e^{a(t-\tau)} b u(\tau)\,d\tau. \tag{7.8}
\]

As we shall see, the variation of constants formula of the above example also works for the general n-dimensional case. In the above example, the general solution (7.8) relies on the exponential function e^{at}. In the n-dimensional case the role of the exponential function e^{at} is taken over by the matrix exponential e^{At}, where A ∈ R^{n×n}. Analogous with the Taylor series expansion of e^a,
\[
e^a = \sum_{k=0}^\infty \frac{1}{k!} a^k = 1 + a + \frac{1}{2!} a^2 + \frac{1}{3!} a^3 + \cdots,
\]


we define the matrix exponential e^A of a matrix A ∈ R^{n×n} as
\[
e^A = \sum_{k=0}^\infty \frac{1}{k!} A^k = I + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \cdots. \tag{7.9}
\]

This series is well defined for every matrix A. In view of the scalar case it is tempting to think that for matrices A a solution of ẋ(t) = Ax(t) is x(t) = e^{At}z for any constant column vector z ∈ Rⁿ. That is indeed correct, but before we can show that we have to review some properties of the matrix exponential.
7.2.2. Lemma. The matrix exponential has the following properties.
1. e⁰ = I for the zero matrix 0 ∈ R^{n×n}. Here I denotes the identity matrix in R^{n×n}.
2. If F ∈ R^{n×n} commutes with A ∈ R^{n×n} (that is, if AF = FA), then e^A e^F = e^{A+F}.
3. e^A is invertible for every A ∈ R^{n×n} and its inverse is e^{−A}.
4. Let A ∈ R^{n×n} and t ∈ R. Then
\[
\frac{d}{dt} e^{At} = A e^{At} = e^{At} A.
\]

Proof.
1. Trivial.
2.
\[
e^A e^F = \left( \sum_{k=0}^\infty \frac{1}{k!} A^k \right)\left( \sum_{m=0}^\infty \frac{1}{m!} F^m \right)
= \sum_{k=0}^\infty \sum_{m=0}^\infty \frac{1}{k!\,m!} A^k F^m.
\]
We see that the coefficient of A^k F^m is 1/(k! m!). On the other hand, using AF = FA,
\[
e^{A+F} = \sum_{n=0}^\infty \frac{1}{n!} (A+F)^n
= \sum_{n=0}^\infty \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} A^k F^{n-k}
= \sum_{n=0}^\infty \sum_{k=0}^n \frac{1}{k!\,(n-k)!} A^k F^{n-k}.
\]
So also in this expression the coefficient of the factor A^k F^m is 1/(k! m!). Therefore e^A e^F = e^{A+F}.
3. Apply Item 2 with F = −A.
4. The Taylor series (7.9) can be seen to converge absolutely. Because of that, differentiation and summation in the Taylor series of e^{At} may be interchanged,
\[
\frac{d}{dt} e^{At} = \frac{d}{dt} \sum_{k=0}^\infty \frac{1}{k!} A^k t^k
= \sum_{k=0}^\infty \frac{1}{k!} A^k \frac{d}{dt} t^k
= \sum_{k=1}^\infty \frac{1}{(k-1)!} A^k t^{k-1}
= \{ m = k-1 \} = \sum_{m=0}^\infty \frac{1}{m!} A^{m+1} t^m = e^{At} A.
\]
Since A commutes with e^{At} we further have that e^{At} A = Σ_{n=0}^∞ (1/n!) tⁿ A^{n+1} = A e^{At}.


With these properties of the matrix exponential we can redo the variation of constants trick of Example 7.2.1. That is, we express the candidate solution x(t) of the state equation ẋ(t) = Ax(t) + Bu(t) as
\[
x(t) = e^{At} z(t),
\]
where z(t) is an arbitrary function of time. Any function x(t) can be expressed like this since e^{At} is invertible for every A ∈ R^{n×n} and t ∈ R. Now
\[
\dot{x}(t) = A x(t) + B u(t)
\iff A e^{At} z(t) + e^{At} \dot{z}(t) = A e^{At} z(t) + B u(t)
\iff e^{At} \dot{z}(t) = B u(t)
\iff \dot{z}(t) = e^{-At} B u(t)
\]
\[
\iff z(t) = z(t_0) + \int_{t_0}^t e^{-A\tau} B u(\tau)\,d\tau, \quad (z(t_0) \in \mathbb{R}^n)
\]
\[
\iff x(t) = e^{At}\left( e^{-At_0} x(t_0) + \int_{t_0}^t e^{-A\tau} B u(\tau)\,d\tau \right)
= e^{A(t-t_0)} x(t_0) + \int_{t_0}^t e^{A(t-\tau)} B u(\tau)\,d\tau.
\]

We shall have to be a bit careful with possible delta function components in the input. The way around this problem is to replace the initial state x(t0) with
\[
x(t_0-) := \lim_{h \downarrow 0} x(t_0 - h).
\]
In the event of a jump-discontinuity at t0 the initial state x(t0−) is hence understood to be the state just before the jump. Summary:

7.2.3. Theorem. Let A ∈ R^{n×n} and B ∈ Rⁿ. The state equation ẋ(t) = Ax(t) + Bu(t) with given initial condition x(t0−) ∈ Rⁿ and input u(t) has a unique solution x(t) determined as
\[
x(t) = e^{A(t-t_0)} x(t_0-) + \int_{t_0-}^t e^{A(t-\tau)} B u(\tau)\,d\tau. \tag{7.10}
\]
Consequently the output y(t) = Cx(t) + Du(t) is uniquely determined as
\[
y(t) = C e^{A(t-t_0)} x(t_0-) + \int_{t_0-}^t C e^{A(t-\tau)} B u(\tau)\,d\tau + D u(t). \tag{7.11}
\]

7.2.4. Example. Let A = \begin{bmatrix} 1 & 0 \\ 0 & -2 \end{bmatrix}. Then
\[
e^{At} = (At)^0 + \frac{1}{1!}(At) + \frac{1}{2!}(At)^2 + \cdots
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
+ \frac{1}{1!}\begin{bmatrix} t & 0 \\ 0 & -2t \end{bmatrix}
+ \frac{1}{2!}\begin{bmatrix} t^2 & 0 \\ 0 & (-2t)^2 \end{bmatrix} + \cdots
\]
\[
= \begin{bmatrix} 1 + t + \frac{1}{2!}t^2 + \cdots & 0 \\ 0 & 1 + (-2t) + \frac{1}{2!}(-2t)^2 + \cdots \end{bmatrix}
= \begin{bmatrix} e^t & 0 \\ 0 & e^{-2t} \end{bmatrix}.
\]


In Matlab the matrix exponential of a constant matrix can be computed with the expm command. For example, the matrix exponential of A = [1 2; 3 4] can be found with

A=[1 2; 3 4];
expm(A)

In Maple the matrix exponential eAt can be found with


with(linalg);
A:=matrix([[1,2],[3,4]]);
exponential(A,t);


In the above example the matrix exponential e^{At} could be found because the matrix A is diagonal. For general matrices there are other techniques to find the entries of the matrix exponential. One such technique uses the Laplace transform. The next section, among other things, explains how this works.

7.3 Solutions of state and output for positive time


In the previous section we derived the general solution of (7.3) for all time. Often, however, we are only interested in solutions y(t) for positive time, and we interpret negative time as the past, as something we do not care about.
If we want to determine the response y(t) in (7.3) to a given input u(t), for positive time only, then we need not use the method of the previous section but may use the Laplace transform instead. First we should find a solution x(t) of the state equations
\[
\dot{x}(t) = A x(t) + B u(t).
\]
However, solutions x(t) of this differential equation are not unique. If we know x(0−), then with the Laplace transform we can find a unique x(t), t > 0, that satisfies the state equation.
We define the Laplace transform of a vector or matrix of signals entry-by-entry, that is, the Laplace transform L{x(t)} of an n-dimensional state vector x(t) is defined as
\[
L\{x(t)\} = \begin{bmatrix} L\{x_1(t)\} \\ L\{x_2(t)\} \\ \vdots \\ L\{x_n(t)\} \end{bmatrix}.
\]
Because of the differentiation rule, there holds for the derivative that
\[
L\{\dot{x}(t)\} = s\,L\{x(t)\} - x(0-).
\]
It is not difficult to verify that
\[
L\{A x(t)\} = A\,L\{x(t)\}. \tag{7.12}
\]


Now let X(s) denote the Laplace transform of x(t) and let U(s) denote the Laplace transform of u(t). Taking Laplace transforms of left and right-hand side of the state equations ẋ(t) = Ax(t) + Bu(t) gives us
\[
s X(s) - x(0-) = A X(s) + B U(s). \tag{7.13}
\]
This we may rearrange as follows,
\[
(sI - A) X(s) = x(0-) + B U(s). \tag{7.14}
\]
Here I stands for the n × n identity matrix. Now, if s is not a root of the characteristic equation det(sI − A) = 0 of A, then sI − A has an inverse. Multiplying both left and right-hand side of (7.14) from the left with (sI − A)^{-1} results in
\[
X(s) = (sI - A)^{-1} x(0-) + (sI - A)^{-1} B U(s). \tag{7.15}
\]

This is a set of algebraic equations, which is usually easier to work with than the state equation
itself. Once the above algebraic equation is solved, the underlying time-domain signal x(t)
follows from the inverse Laplace transform.
7.3.1. Corollary. Let A ∈ R^{n×n}. Then L{e^{At}} = (sIₙ − A)^{-1}. So for t > 0 the matrix exponential equals L^{-1}{(sI − A)^{-1}}.
Proof. Let t0 = 0 and let u(t) = 0 for all t, then (7.10) reduces to x(t) = e^{At}x(0−) and (7.15) reduces to X(s) = (sIₙ − A)^{-1}x(0−). So necessarily L{e^{At}} = (sIₙ − A)^{-1}.


7.3.2. Example. We shall calculate the matrix exponential e^{At} of A = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}. In the Laplace domain we have that
\[
(sI_2 - A) = \begin{bmatrix} s-1 & 1 \\ -1 & s-1 \end{bmatrix}.
\]
Now, recall that the inverse of an arbitrary 2 × 2 matrix \begin{bmatrix} a & b \\ c & d \end{bmatrix} is \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. So
\[
(sI_2 - A)^{-1} = \frac{1}{(s-1)^2+1} \begin{bmatrix} s-1 & -1 \\ 1 & s-1 \end{bmatrix}
= \begin{bmatrix} \frac{s-1}{(s-1)^2+1} & \frac{-1}{(s-1)^2+1} \\ \frac{1}{(s-1)^2+1} & \frac{s-1}{(s-1)^2+1} \end{bmatrix}.
\]
In order to determine the matrix exponential we need only take the inverse Laplace transforms entry-by-entry:
\[
e^{At} = \begin{bmatrix} e^t \cos(t) & -e^t \sin(t) \\ e^t \sin(t) & e^t \cos(t) \end{bmatrix}. \tag{7.16}
\]
Since we used the Laplace transform (which deals with positive time only) we formally derived the matrix exponential only for positive time. It is easy to believe, and indeed true, that (7.16) in fact holds for all time t ∈ R.


If we are given the input u(t) and initial state x(0−) then we need not determine the matrix exponential first in order to find x(t).

7.3.3. Example. Suppose we have a continuous-time system with state x(t) = [x1(t); x2(t)] and state equations
\[
\dot{x}(t) = \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t).
\]
As input we take u(t) = δ(t − 1), and we suppose that the initial state is x(0−) = [0; 1].
We shall calculate x(t) for t > 0. Let, as always, X(s) denote the Laplace transform of x(t). The Laplace transform U(s) of u(t) = δ(t−1) is U(s) = e^{−s}. Therefore X(s) := [X1(s); X2(s)] equals
\[
X(s) = (sI_2 - A)^{-1}\left( x(0-) + B U(s) \right)
= \frac{1}{(s-1)^2+1}\begin{bmatrix} s-1 & -1 \\ 1 & s-1 \end{bmatrix}
\left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{-s} \right).
\]
So for X1(s) and X2(s) we find
\[
\begin{bmatrix} X_1(s) \\ X_2(s) \end{bmatrix}
= \frac{1}{(s-1)^2+1}\begin{bmatrix} s-1 & -1 \\ 1 & s-1 \end{bmatrix}\begin{bmatrix} e^{-s} \\ 1 \end{bmatrix}
= \begin{bmatrix} \dfrac{-1 + e^{-s}(s-1)}{(s-1)^2+1} \\[2mm] \dfrac{s-1+e^{-s}}{(s-1)^2+1} \end{bmatrix}.
\]
Verify yourself that X1(s) and X2(s) are the Laplace transforms of the signals
\[
x_1(t) = -e^t \sin t + e^{t-1}\cos(t-1)\,1(t-1),
\qquad
x_2(t) = e^t \cos t + e^{t-1}\sin(t-1)\,1(t-1), \qquad (t > 0).
\]

7.3.1 The output


In Example 7.3.3, the Laplace transform was used to find the state x(t) for all positive time, given an initial state x(0−) and an input u(t). In this subsection we discuss the use of the Laplace transform to find the output y(t). Let, again, X(s) and U(s) denote the Laplace transforms of x(t) and u(t). As found in (7.15), the Laplace transform of the state equation is
\[
X(s) = (sI - A)^{-1} x(0-) + (sI - A)^{-1} B U(s).
\]
In the end we are interested in the output y(t). The output in time domain is given by the output equation y(t) = Cx(t) + Du(t). So if Y(s) denotes the Laplace transform of y(t), then we have in the s-domain that
\[
Y(s) = C X(s) + D U(s).
\]
Substitute X(s) with (7.15) and we end up with the following result.


7.3.4. Lemma. Let U(s), X(s) and Y(s) denote the Laplace transforms of u(t), x(t) and y(t). Then y(t) satisfies (7.3) for t > 0 if and only if
\[
Y(s) = C(sI-A)^{-1} x(0-) + \left( C(sI-A)^{-1}B + D \right) U(s). \tag{7.17}
\]

Now that the output is characterized in the s-domain, it is also possible to express the output in time-domain terms. For that we simply determine the inverse Laplace transform of (7.17). To this end define K(s) and H(s) as
\[
K(s) = C(sI-A)^{-1}, \tag{7.18}
\]
\[
H(s) = C(sI-A)^{-1}B + D. \tag{7.19}
\]
Realize that K(s) is a row vector and that H(s) for each s is a scalar. The Laplace transform Y(s) can now be written as
\[
Y(s) = K(s)\,x(0-) + H(s)\,U(s). \tag{7.20}
\]
It will be no surprise that we will need the inverse Laplace transforms of K(s) and H(s). Suppose we managed to find the inverse Laplace transforms k(t) and h(t) of K(s) and H(s). Then because of the convolution theorem we get the time domain characterization of the output
\[
y(t) = k(t)\,x(0-) + (h * u)(t). \tag{7.21}
\]

7.3.5. Example. Consider the system
\[
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t),
\]
in which
\[
A = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & -1 \end{bmatrix}, \qquad
D = 1.
\]
The inverse of sI − A is
\[
(sI-A)^{-1} = \{ \det(sI-A) = s(s+2)+1 = (s+1)^2 \}
= \frac{1}{(s+1)^2}\begin{bmatrix} s+2 & 1 \\ -1 & s \end{bmatrix}.
\]
Therefore
\[
K(s) = C(sI-A)^{-1}
= \frac{1}{(s+1)^2}\begin{bmatrix} 1 & -1 \end{bmatrix}\begin{bmatrix} s+2 & 1 \\ -1 & s \end{bmatrix}
= \frac{1}{(s+1)^2}\begin{bmatrix} s+3 & 1-s \end{bmatrix},
\]
and
\[
H(s) = C(sI-A)^{-1}B + D = K(s)B + 1 = \frac{1-s}{(s+1)^2} + 1
= \{\text{partial fraction expansion}\} = 1 - \frac{1}{s+1} + 2\,\frac{1}{(s+1)^2}.
\]
Since K(s) and H(s) are rational functions (this is always the case) it is easy to find the inverse Laplace transform. The inverse Laplace transforms are
\[
k(t) = \begin{bmatrix} (2t+1)e^{-t} & (2t-1)e^{-t} \end{bmatrix} 1(t)
\]
and
\[
h(t) = \delta(t) + (-1+2t)e^{-t}\,1(t).
\]
For a given initial state x(0−) = [x1(0−); x2(0−)] and input u(t) we finally obtain the general form of the output
\[
y(t) = \left( (2t+1)e^{-t} x_1(0-) + (2t-1)e^{-t} x_2(0-) \right) 1(t) + (h * u)(t).
\]
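The transfer function H(s) can be double-checked with ss2tf, which converts state-space data to numerator and denominator coefficients (a sketch):

A = [0 1; -1 -2]; B = [0; 1]; C = [1 -1]; D = 1;
[num, den] = ss2tf(A, B, C, D)
% num = [1 1 2], den = [1 2 1]: H(s) = (s^2+s+2)/(s+1)^2,
% which indeed equals 1 - 1/(s+1) + 2/(s+1)^2 found above.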


7.3.2 Solution of ODEs for positive time


Often systems are written in input-state-output form, but sometimes systems are given by a high-order differential equation. In this case we may, if we want, rewrite the system in input-state-output form, but if we need to find solutions y(t) we may as well apply the Laplace transform directly to the differential equation. The difference is that now the initial conditions are not of the form x(0−); most likely the initial conditions are in terms of y(0−), y'(0−), y''(0−) etcetera. A typical application is that of set-point change.
7.3.6. Example (Set-point change). Consider again the RC-network of Example 5.1.3. The differential equation that relates the input voltage u(t) and output voltage y(t) across the capacitor was found to be
\[
y^{(1)}(t) + \alpha y(t) = \alpha u(t), \qquad \left( \alpha = \frac{1}{RC} \right). \tag{7.22}
\]

Suppose that for a long time the voltage u(t) has been equal to a constant value of one. It is easy to believe that then y(t) (the voltage across the capacitor) settles to a constant value as well. In fact we show this to be true in Subsection 7.5.1 on steady-state behavior. Assuming that y(t) is constant gives us that y^{(1)}(t) = 0, and so from (7.22) we see that necessarily the output settles to a value of y(t) = u(t) = 1.
Now, at t = 0, we suddenly increase the voltage from 1 to 2 and keep it constant from that point onwards,
\[
u(t) = 2, \qquad t > 0.
\]

What will happen with y(t)? That is, what is y(t) for t > 0? Intuition tells us that the response y(t) is unique, but we also know that the general solution y(t) of ODE (7.22) is not unique. It is readily verified (see Appendix B) that the general solution for t > 0 is
\[
y(t) = 2 + \mu e^{-\alpha t}, \qquad \mu \in \mathbb{R}.
\]


As argued, we have shortly before the set-point change at t = 0 that y(0−) = 1. This gives us μ:
\[
1 = y(0) = 2 + \mu e^0 = 2 + \mu.
\]
So μ = −1, and we found the (unique) response to the set-point change,
\[
y(t) = \begin{cases} 1 & \text{if } t < 0, \\ 2 - e^{-\alpha t} & \text{if } t > 0. \end{cases}
\]
[Figure: plot of y(t).]
In words, after u(t) is increased from 1 to 2, the output y(t) grows continuously and exponentially from 1 upwards and settles to a final value of 2.

Another example of a set-point change is shown in Figure 7.3. Changing the reference temperature of your central heating system is another instance of a set-point change. Set-point changes are very common. With the Laplace transform we can redo the previous example, but more succinctly. Indeed, initial conditions can then be taken into account right from the start and, importantly, we can solve the problem without having to find a particular solution.
7.3.7. Example (Set-point change via Laplace). To find the solution y(t) of (7.22) for u(t) = 2 we shall use the Laplace transform. Recall that
\[
L\{ y^{(1)}(t) \} = s Y(s) - y(0-).
\]
So, taking the Laplace transform of Equation (7.22) gives
\[
s Y(s) - y(0-) + \alpha Y(s) = \alpha U(s).
\]
By assumption y(0−) = 1, and u(t) = 2 for t > 0, giving U(s) = 2/s. Therefore
\[
s Y(s) - 1 + \alpha Y(s) = 2\alpha/s.
\]
This is an algebraic equation, with solution
\[
Y(s) = \frac{1 + 2\alpha/s}{s+\alpha} = \frac{s + 2\alpha}{s(s+\alpha)}
= \{\text{partial fraction expansion}\} = \frac{2}{s} - \frac{1}{s+\alpha}.
\]
Its inverse Laplace transform yields the time-domain y(t) that we are after,
\[
y(t) = 2 - e^{-\alpha t}, \qquad t > 0.
\]
This agrees with what was found earlier.
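The set-point response is easily reproduced in simulation. A minimal sketch (α = 1 is an arbitrary choice; the scalar state-space system below is ẋ = −αx + αu, y = x with x(0−) = 1):

alpha = 1;
t = 0:0.01:6;
u = 2*ones(size(t));                     % set-point u(t) = 2 for t > 0
y = lsim(-alpha, alpha, 1, 0, u, t, 1);  % initial state y(0-) = 1
plot(t, y, t, 2 - exp(-alpha*t), '--')   % coincides with 2 - e^{-alpha t}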

In Example 7.3.6 we found the solution by assuming that y(t) is continuous at t = 0. Only then can we say that y(0) = 2 + μe⁰ = 2 + μ, which we needed to determine y(t). This assumption is not generally valid. In certain systems y(t) may jump as the result of a jump in u(t), so the procedure in Example 7.3.6 is not generally applicable. The use of the Laplace transform in Example 7.3.7 does not rely on any continuity assumption. The Laplace transform is generally applicable. The following example demonstrates a case where y(t) jumps.


7.3.8. Example. Consider the ODE
\[
y^{(2)}(t) - 4y(t) = u^{(2)}(t) \tag{7.23}
\]
and suppose that u(t) = 1(t) and that we are given the initial conditions y(0−) = 0, y'(0−) = 1. To find y(t) for t > 0 we apply the Laplace transform to the above equation. Using the differentiation rule (Page 115) we find that
\[
L\{ y^{(2)}(t) \} = s^2 Y(s) - s y(0-) - y^{(1)}(0-) = s^2 Y(s) - 1,
\]
and
\[
L\{ u^{(2)}(t) \} = s^2 U(s) - s u(0-) - u^{(1)}(0-) = s^2 U(s) = s^2 \cdot \frac{1}{s} = s.
\]
Next take the Laplace transform of (7.23), and what we get is
\[
(s^2 Y(s) - 1) - 4Y(s) = s^2 U(s) = s.
\]
This is a linear algebraic equation in Y(s) with solution
\[
Y(s) = \frac{s+1}{s^2-4} = \frac{s+1}{(s-2)(s+2)} = \frac{3/4}{s-2} + \frac{1/4}{s+2}.
\]
The output now follows from the inverse Laplace transform,
\[
y(t) = \frac{3}{4} e^{2t} + \frac{1}{4} e^{-2t}, \qquad (t > 0).
\]
At t = 0 the output jumps from y(0−) = 0 to y(0+) = 1.
[Figure: plot of y(t), with y(0−) = 0, y'(0−) = 1, jumping to y(0+) = 1.]

[Figure 7.7: An initially at rest system T, with input u(t), output y(t) and internal state components x_k(t).]


7.4 Initially at rest signals and systems


In Chapter 5 we considered a system to be a mapping from inputs u(t) to outputs y(t),
\[
u(t) \overset{T}{\longmapsto} y(t).
\]
However, in input-state-output systems and systems described by ODEs the output not only depends on the input but also on initial conditions, whether initial conditions x(0−) from a state x(t) or initial conditions y(0−), y'(0−) in the form of derivatives of the output. Nevertheless there still remains the desire to associate with ODEs and input-state-output equations a system in the old sense, that is, a system where the output y(t) is uniquely determined by the input u(t). In other words, where the system is a mapping from inputs to outputs.
7.4.1. Example. In Example 7.1.1 we considered a mechanical system consisting of a mass connected to a wall via a spring. Due to friction (even though not modeled in the example) the mass will eventually come to a standstill if no force is applied for a long time. Just before we start experimenting with the input force it is likely that we find the mass at rest. In other words, if we apply a force F(t) from t = 0 onwards, then the state of the system will have been at rest prior to that,
\[
x(t) = \begin{bmatrix} q(t) \\ \dot{q}(t) \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad t < 0.
\]
Intuitively it may be clear that when the system is initially at rest, the position and speed of the mass are fully determined by the input force. In particular, the output y(t) (the speed) is fully determined by the input force u(t) = F(t). That is, the output is a function of the input.


[Figure 7.8: An initially at rest signal: f(t) = 0 for all t < t0.]


Motivated by this example we shall say that a signal f(t) is initially at rest if there is a t0 ∈ R such that f(t) = 0 for all t < t0, see Figure 7.8. Furthermore we shall say that a system is initially at rest if it is defined for initially at rest inputs u(t), and the response y(t) to each initially at rest input u(t) is initially at rest.
Suppose now that u(t) is initially at rest for t < t0. Then, when applied to an input-state-output system (7.3), we obtain for the output the general solution (see Theorem 7.2.3)
\[
y(t) = C e^{A(t-t_0)} x(t_0-) + \int_{t_0-}^t C e^{A(t-\tau)} B u(\tau)\,d\tau + D u(t).
\]


Since u(t) = 0 for all t < t0, the above integral is zero for all t < t0, so the output for t < t0 is completely determined by the initial condition x(t0−) as
\[
y(t) = C e^{A(t-t_0)} x(t_0-), \qquad t < t_0.
\]
A signal like this is initially at rest if and only if Ce^{A(t−t0)}x(t0−) is identically zero for all t. We can therefore summarize as follows.
7.4.2. Lemma. In an initially at rest system described by input-state-output equations (7.3), the response y(t) to an input u(t) initially at rest for t < t0 is determined as
\[
y(t) = \int_{t_0-}^t C e^{A(t-\tau)} B u(\tau)\,d\tau + D u(t). \tag{7.24}
\]

We can take the result a bit further. For the inputs that are initially at rest for t < t0 the integral in (7.24) is the same as
\[
y(t) = \int_{-\infty}^t C e^{A(t-\tau)} B u(\tau)\,d\tau + D u(t). \tag{7.25}
\]
The latter equation (7.25) has the advantage over (7.24) that it is independent of t0, that is, we do not need to know t0 for (7.25) to be valid. Moreover, from this integral (7.25) we can see that it is a convolution. Indeed, define h(t) as
\[
h(t) = C e^{At} B\,1(t) + D\,\delta(t),
\]
then (7.25) is nothing but
\[
y(t) = (h * u)(t). \tag{7.26}
\]
Now that we wrote the system as a convolution, we may copy a whole series of results of Chapter 5 on systems described by convolutions. For one, they are linear and time-invariant. Also, since h(t) = Ce^{At}B 1(t) + Dδ(t) is a causal signal, we see that initially at rest systems (7.3) are causal (Theorem 5.1.11). In the following we shall assume that t0 = 0.
In Lemma 7.3.4 we found that
\[
Y(s) = C(sI-A)^{-1} x(0-) + \left( C(sI-A)^{-1}B + D \right) U(s).
\]
Now, for initially at rest systems we have that Ce^{At}x(0−) ≡ 0, so initially-at-rest systems in the s-domain are characterized by
\[
Y(s) = \left( C(sI-A)^{-1}B + D \right) U(s). \tag{7.27}
\]
Comparing (7.26) with (7.27) we see that the impulse response h(t) has Laplace transform
\[
H(s) = C(sI-A)^{-1}B + D.
\]


7.4.3. Definition. The transfer function H (s) of an LTI system is defined as the Laplace transform H (s) = L{h(t)} of the impulse response h(t) of the system.

The transfer function is a convenient notion. As we saw in Chapter 5, it is possible to express in principle all properties of a system in terms of its impulse response. As we shall shortly see, some of the more important properties can be seen by inspecting the transfer function.
For systems described by an ODE
\[
y^{(n)}(t) + q_{n-1} y^{(n-1)}(t) + \cdots + q_1 y^{(1)}(t) + q_0 y(t)
= p_n u^{(n)}(t) + p_{n-1} u^{(n-1)}(t) + \cdots + p_1 u^{(1)}(t) + p_0 u(t) \tag{7.28}
\]

the initially at rest behavior is very compactly characterized by the transfer function. Formally we should first rewrite the ODE as an input-state-output equation, and then work out the necessary formulas. However we know that the transfer function H(s) of the initially at rest system relates an input U(s) and output Y(s) via
\[
Y(s) = H(s) U(s),
\]
and this allows us to take a shortcut.
7.4.4. Lemma. The initially at rest system described by (7.28) has rational transfer function
\[
H(s) = \frac{p_n s^n + p_{n-1} s^{n-1} + \cdots + p_1 s + p_0}{s^n + q_{n-1} s^{n-1} + \cdots + q_1 s + q_0}.
\]

Proof. Take any smooth causal input u(t). Because of causality of u(t) there holds that u^{(k)}(0−) = 0 for all k. The response y(t) is also causal, so also for the output there holds that y^{(k)}(0−) = 0 for every k. The differentiation rule (Page 112) then gives
\[
L\{ y^{(k)}(t) \} = s^k Y(s), \qquad L\{ u^{(k)}(t) \} = s^k U(s).
\]
Now take the Laplace transform of (7.28),
\[
(s^n + q_{n-1}s^{n-1} + \cdots + q_1 s + q_0)\,Y(s) = (p_n s^n + p_{n-1}s^{n-1} + \cdots + p_1 s + p_0)\,U(s).
\]
So necessarily
\[
Y(s) = \frac{p_n s^n + p_{n-1}s^{n-1} + \cdots + p_1 s + p_0}{s^n + q_{n-1}s^{n-1} + \cdots + q_1 s + q_0}\,U(s).
\]

7.4.5. Example. The transfer function of the initially at rest system described by
\[
y^{(2)}(t) - 4y(t) = u(t)
\]

is
\[
H(s) = \frac{1}{s^2-4}.
\]
The impulse response can be obtained through partial fraction expansion of H(s),
\[
H(s) = \frac{1}{(s+2)(s-2)} = \frac{-1/4}{s+2} + \frac{1/4}{s-2}.
\]
From Table 6.2 we then get
\[
h(t) = -\frac{1}{4} e^{-2t} + \frac{1}{4} e^{2t}, \qquad (t > 0).
\]
We also know that h(t) is zero for all t < 0, so h(t) = (−¼ e^{−2t} + ¼ e^{2t}) 1(t).
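Again residue can carry out the partial fraction step; a sketch:

[r, p] = residue(1, [1 0 -4])
% poles p = [2; -2] with residues r = [0.25; -0.25], consistent
% with h(t) = (1/4)e^{2t} - (1/4)e^{-2t} for t > 0.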

7.4.6. Example. Consider again the RC-network as shown in Figure 5.2 on Page 87. In Example 5.1.3 we found that u(t) and y(t) are related by the differential equation
\[
y'(t) + \alpha y(t) = \alpha u(t), \qquad \left( \alpha = \frac{1}{RC} > 0 \right).
\]
We assume the system is initially at rest, i.e., there is no charge on the capacitor and no current in the circuit before an input is applied. Then the system is determined by its transfer function
\[
H(s) = \frac{\alpha}{s+\alpha}.
\]
The impulse response now follows directly:
\[
h(t) = \alpha e^{-\alpha t}\,1(t).
\]
The derivation of the impulse response is now much easier than in Chapter 5.

7.4.7. Example. Consider the RCL-network of Figure 7.9. The voltage across the voltage source is taken to be the input u(t) of the system and the voltage v_R(t) across the resistor as the output of the system. We assume the system is initially at rest, i.e., there is no charge on the capacitor, no flux in the inductor and no current in the circuit before an input is applied.
Let x1(t) be the voltage v_C(t) across the capacitor, and as second state component x2(t) we take the current through the inductor. Verify yourself that the system is described by
\[
\dot{x}(t) = A x(t) + B u(t), \qquad y(t) = C x(t) + D u(t),
\]
in which
\[
A = \begin{bmatrix} -\alpha & -1/C \\ 1/L & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} \alpha \\ 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} -1 & 0 \end{bmatrix}, \qquad
D = 1.
\]

[Figure 7.9: RCL-network (Example 7.4.7), with source voltage u(t), currents i1(t) and i2(t), resistor R, and output y(t) = v_R(t).]


Here is defined as = 1/(RC).
The transfer function is H (s) = C(s I A)1 B + D:

  
 s + 1/C 1
s2 + 2
,
+1= 2
H (s) = 1 0
0
1/L
s
s + s + 2


( := 1/ LC).

The transfer function is a rational function so the inverse Laplace transform can be found through partial fraction expansion. For simplicity suppose that ω = α/2, that is, that 2R = √(L/C). Then s² + αs + ω² = (s + α/2)² and
\[
H(s) = 1 - \frac{\alpha s}{(s+\alpha/2)^2}
= 1 - \frac{\alpha}{s+\alpha/2} + \frac{\tfrac{1}{2}\alpha^2}{(s+\alpha/2)^2}.
\]
Back transformation finally gives us the impulse response
\[
h(t) = \delta(t) - \alpha\left( 1 - \frac{\alpha}{2} t \right) e^{-\alpha t/2}\,1(t).
\]

In the above examples we saw that the transfer function H(s) is a rational function of s. This is trivially the case for systems described by ODEs (7.28) and it is also always the case for systems that have an input-state-output representation. In addition these transfer functions are proper, meaning that the degree of the numerator is less than or equal to the degree of the denominator of H(s).
7.4.8. Theorem. Let T be the initially at rest system described by input-state-output equations (7.3). Then the transfer function is of the form
\[
H(s) = \frac{P(s)}{Q(s)},
\]
where P(s) and Q(s) are polynomials in s with real coefficients and such that deg P ≤ deg Q.


Proof. We know that H(s) = C(sI−A)^{-1}B + D. We shall determine E(s) := (sI−A)^{-1}B with the help of Cramer's rule. The vector E(s) obeys
\[
(sI - A) E(s) = B.
\]
By Cramer's rule the k-th entry E_k(s) of E(s) equals
\[
E_k(s) = \frac{\det M_k(s)}{\det(sI-A)}.
\]
Here M_k(s) is derived from sI − A with its k-th column replaced by B. The denominator is the characteristic polynomial of A. The entries E_k(s) are therefore rational functions of s. Then also the transfer function
\[
H(s) = C(sI-A)^{-1}B + D = C E(s) + D
\]
is a rational function. Finally consider the limit
\[
\lim_{|s|\to\infty} H(s) = \lim_{|s|\to\infty} C(sI-A)^{-1}B + D
= \lim_{|s|\to\infty} \frac{1}{s}\,C\left(I - \tfrac{1}{s}A\right)^{-1}B + D = D.
\]
This shows that lim_{|s|→∞} H(s) is bounded, but that means that H(s) is proper.

7.5 BIBO-stable systems and their steady-state behavior


As noted on several occasions, the impulse response h(t) and the transfer function H (s), both
uniquely determine the initially at rest system. In particular it should be possible to determine
BIBO-stability of the system on the basis of H (s) alone. That we shall do in this section.
In the previous section we found that the transfer function H (s) of a input-state-output
system can be expressed as
H (s) =

P(s)
,
Q(s)

(7.29)

where deg(P) deg(Q). We shall assume that the polynomials P(s) and Q(s) do not have
common zeros. This may always be achieved by removing common factors of P(s) and Q(s).
The impulse response may be found through partial fraction expansion of H(s). In Appendix A it is explained how a partial fraction expansion may be constructed. The general form of the partial fraction expansion of P(s)/Q(s) is
\[
\frac{P(s)}{Q(s)} = A_0 + \sum_{k=1}^{M}\left( \frac{A_{k,1}}{s-s_k} + \frac{A_{k,2}}{(s-s_k)^2} + \cdots + \frac{A_{k,m_k}}{(s-s_k)^{m_k}} \right),
\]
where A0 and the A_{k,l} are constants (generally complex-valued) and the s_k are the zeros of Q(s) with multiplicity m_k. Assuming we found the coefficients A0 and A_{k,l}, the impulse response follows from Table 6.2,
follows from Table 6.2,
\[
h(t) = A_0 \delta(t) + \sum_{k=1}^{M}\left( A_{k,1} e^{s_k t} + A_{k,2}\frac{t}{1!} e^{s_k t} + \cdots + A_{k,m_k}\frac{t^{m_k-1}}{(m_k-1)!} e^{s_k t} \right) 1(t).
\]


This has some implications for BIBO-stability. We know that a system is BIBO-stable if and only if the impulse response (after removal of the delta function component) is absolutely integrable. It may be shown that this is equivalent to the zeros s_k having negative real part. The reasoning is as follows. The function h1(t) = h(t) − A0 δ(t) consists of terms of the form
\[
A_{k,l}\,\frac{t^{l-1}}{(l-1)!}\,e^{s_k t}\,1(t). \tag{7.30}
\]
Now, if all s_k have negative real part, then all functions (7.30) are absolutely integrable. In that case h1(t) is a sum of absolutely integrable functions and, hence, is itself absolutely integrable. If, on the other hand, some s_k have zero or positive real part, then lim_{t→∞} (t^{l−1}/(l−1)!) e^{s_k t} 1(t) is not zero, and in such cases it may be shown that h1(t) cannot converge to zero and is not absolutely integrable. These intuitive arguments may be properly proved, but we will not do that here.
7.5.1. Definition. The poles s_k ∈ C of a rational function H(s) are the zeros s_k of its inverse 1/H(s).

7.5.2. Theorem. An LTI system with proper rational transfer function H(s) is BIBO-stable if and only if all poles s_k of H(s) have negative real part, Re s_k < 0.

7.5.3. Example.
1. The initially at rest system y^{(1)}(t) + 2y(t) = u(t) is BIBO-stable because the pole s1 = −2 of H(s) = 1/(s+2) has negative real part.
2. The initially at rest system y^{(2)}(t) − 2y(t) = u(t) is not BIBO-stable since the poles of H(s) = 1/(s²−2) are s1 = √2, s2 = −√2, and one of them has nonnegative real part: Re s1 ≥ 0.
3. The initially at rest system y^{(1)}(t) = u(t) is not BIBO-stable because the pole s1 = 0 of H(s) = 1/s has nonnegative real part, Re s1 ≥ 0. Indeed, this is the integrator system: If u(t) = 1(t) then y(t) = t 1(t), which is unbounded.

The RC-network of Examples 5.1.3 and 7.4.6 has transfer function H(s) = α/(s + α), where α = 1/(RC) > 0. The transfer function has one pole at s = −α, which has negative real part; hence, by Theorem 7.5.2, the system is BIBO-stable.

7.5.1 Steady-state behavior


In Chapter 5 we had a look at the response to a harmonic input u(t) = ej 0 t and we found that
the output was a harmonic signal as well,
y(t) = H ( j 0)e j o t = H ( j 0)u(t).


Here H(jω0) is the Laplace transform of the impulse response of the system evaluated at jω0, and the function G(ω) = H(jω) was called the frequency response of the system; it equals the Fourier transform of the impulse response.
Harmonic inputs u(t) = e^{jω0 t} are somewhat artificial since no input in practice will ever have been active since the beginning of time. It is more appropriate to consider harmonic inputs switched on at t = 0, i.e., u(t) = e^{jω0 t} 1(t).
Suppose the system is BIBO-stable. The response y(t) to the input u(t) = e^{jω0 t} 1(t) is
\[
y(t) = (h * u)(t) = \int_{-\infty}^{\infty} h(\tau) e^{j\omega_0 (t-\tau)}\,1(t-\tau)\,d\tau
= \int_{-\infty}^{t} h(\tau) e^{j\omega_0 (t-\tau)}\,d\tau
\]
\[
= \left( \int_{-\infty}^{\infty} h(\tau) e^{-j\omega_0 \tau}\,d\tau \right) e^{j\omega_0 t}
- \int_{t}^{\infty} h(\tau) e^{j\omega_0 (t-\tau)}\,d\tau
= H(j\omega_0) e^{j\omega_0 t} + y_{tr}(t),
\]
where
\[
y_{tr}(t) = -\int_{t}^{\infty} h(\tau) e^{j\omega_0 (t-\tau)}\,d\tau.
\]

Because of BIBO-stability there holds that



|h( )e j (t )| d 0
|ytr (t)|

as t .

We conclude that in BIBO-stable systems for large values of t the response to ej 0 t 1(t) looks
like H ( j )e j t , and in the limit t is indistinguishable from H ( j )ej t . For this reason
yst (t) = H ( j )e j t is called the steady-state response. The difference between y(t) and the
steady-state response is ytr (t) and it is referred to as the transient response. The transient
response tends to zero as t .
Any T -periodic input u(t) switched on at t = 0 can be written as u(t) = f (t) 1(t), in
which f (t) is T -periodic over the whole time axis R. Let fk denote the line spectrum of f (t).
Then


f k e j k0 t 1(t).
u(t) =
k=

We determine the steady-state response to this input for a given BIBO-stable system T . We
assume that the infinite superposition principle applies,

y(t) =

f k T {e j k0 t 1(t)}.

k=

In steady-state there holds that


T {e j k0 t 1(t)} = H ( j k0)e j k0 t ,
hence
yst (t) =


k=

f k H ( j k0)e j k0 t .


Figure 7.10: An RCL-network with input voltage u(t) and branch currents i_1(t), i_2(t), i_3(t) (Example 7.5.4).

Figure 7.11: Impedance equivalent network, with impedances R, Ls and 1/(Cs) and Laplace-transformed signals U(s), I_1(s), I_2(s), I_3(s) (Example 7.5.4).


In steady-state therefore the response to a T-periodic input is again T-periodic, and the line spectrum of the steady-state response is f_k H(jk\omega_0).

7.5.4. Example. Consider the RCL-network of Figure 7.10. We shall assume that the network is initially at rest. At t = 0 the voltage source is switched on, and the resulting current i_3(t) through the capacitor C is the output of the system.

To determine the transfer function H(s) it is not necessary to write down an input-state-output representation of the system or some high-order differential equation that relates input and output. We may instead apply the Laplace transform directly to Kirchhoff's laws and the current-voltage relations of the various components. To this end we form an alternative network where each time-domain signal is replaced with its Laplace transform, and where each component (resistor, capacitor, etc.) is replaced with its impedance, see Figure 7.11. The impedance Z(s) of a component is the ratio of the voltage across the component, V(s), and the current through the component, I(s). For resistors R, inductors L and capacitors C the respective impedances Z_R(s), Z_L(s) and Z_C(s) are
\[
\text{resistor: } Z_R(s) = \frac{V_R(s)}{I_R(s)} = \frac{R I_R(s)}{I_R(s)} = R,
\]
\[
\text{inductor: } Z_L(s) = \frac{V_L(s)}{I_L(s)} = \{v_L(t) = L \tfrac{d}{dt} i_L(t)\} = \frac{Ls I_L(s)}{I_L(s)} = Ls,
\]
\[
\text{capacitor: } Z_C(s) = \frac{V_C(s)}{I_C(s)} = \{C \tfrac{d}{dt} v_C(t) = i_C(t)\} = \frac{V_C(s)}{Cs V_C(s)} = \frac{1}{Cs}.
\]

The impedance equivalent network of the network of Figure 7.10 is shown in Figure 7.11. The advantage of working with impedances is that all components then act like resistors, that is, the voltage over a component is simply the current through it multiplied by something (namely the impedance).

The remaining equations that are needed to fully determine the system are the network equations: the sum of voltages in each loop is zero, and the sum of currents at each node is zero:
\[
\begin{cases}
sL I_2(s) = (R + 1/(Cs)) I_3(s), \\
U(s) = R I_1(s) + sL I_2(s), \\
I_1(s) = I_2(s) + I_3(s).
\end{cases} \tag{7.31}
\]


The transfer function H(s) satisfies I_3(s) = H(s)U(s), so if we set U(s) = 1, then I_3(s) is the transfer function, and it is uniquely determined since (7.31) are three equations in three unknowns. With Maple or by hand we find that
\[
H(s) = I_3(s) = \frac{LCs^2}{2RLCs^2 + (R^2C + L)s + R}.
\]
Suppose now that R = 0.5 Ω, L = 0.25 H and C = 1 F. Then
\[
H(s) = \frac{s^2}{s^2 + 2s + 2} = \frac{s^2}{(s+1)^2 + 1}.
\]


The poles of H(s) are the zeros of (s+1)^2 + 1, i.e., the poles are s_{1,2} = -1 \pm j. Note that the poles have negative real part, hence, the system is BIBO-stable.

We next calculate the response to a constant voltage u(t) = 1(t) switched on at t = 0. The response y(t) = i_3(t) in this case has Laplace transform
\[
I_3(s) = H(s)U(s) = H(s)\frac{1}{s} = \frac{s}{s^2 + 2s + 2} = \frac{s}{(s+1)^2 + 1}
= \frac{s+1}{(s+1)^2 + 1} - \frac{1}{(s+1)^2 + 1}.
\]
The current i_3(t) follows from Table 6.2,
\[
i_3(t) = e^{-t}(\cos t - \sin t) 1(t).
\]
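This step response is easily cross-checked numerically. A minimal Matlab sketch, using the classic Control Toolbox calling convention also used in Problem 7.13:

num = [1 0 0]; den = [1 2 2];     % H(s) = s^2/(s^2 + 2s + 2)
t = 0:0.01:6;
y = step(num, den, t);            % simulated step response
plot(t, y, t, exp(-t).*(cos(t) - sin(t)), '--');   % the two curves should overlap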
Remark: The impulse response can similarly be obtained from a partial fraction expansion of H(s). However, since we know the step response already, there is an easier way to find the impulse response h(t); it is simply the derivative of the step response:
\[
h(t) = \mathcal{L}^{-1}\{H(s)\}(t) = \mathcal{L}^{-1}\Big\{ s \frac{H(s)}{s} \Big\}(t) = \frac{d}{dt} \mathcal{L}^{-1}\Big\{ \frac{H(s)}{s} \Big\}(t)
= \frac{d}{dt}\Big( e^{-t}(\cos t - \sin t) 1(t) \Big)
\]
\[
= -e^{-t}(\cos t - \sin t) 1(t) + e^{-t}(-\sin t - \cos t) 1(t) + e^{-t}(\cos t - \sin t)\delta(t)
= \delta(t) - 2e^{-t}\cos t \; 1(t).
\]

Next we compute the response to
\[
u(t) = \begin{cases} 0 & \text{if } t < 0, \\ t & \text{if } 0 \leq t \leq 1, \\ 1 & \text{if } t > 1. \end{cases}
\]
The Laplace transform U(s) of the input is
\[
U(s) = \int_0^{\infty} e^{-st} u(t) \, dt = -\frac{1}{s}\int_0^{\infty} u(t) \, de^{-st}
= -\frac{1}{s}\Big[ u(t) e^{-st} \Big]_0^{\infty} + \frac{1}{s}\int_0^{\infty} e^{-st} u'(t) \, dt
= \frac{1}{s}\int_0^1 e^{-st} \, dt = \frac{1}{s^2}(1 - e^{-s}).
\]
The Laplace transform of the output i_3(t) therefore equals
\[
I_3(s) = H(s)U(s) = \frac{1 - e^{-s}}{s^2 + 2s + 2}.
\]


The inverse Laplace transform of \frac{1}{s^2 + 2s + 2} is e^{-t}\sin(t) 1(t), hence,
\[
i_3(t) = e^{-t}\sin(t) 1(t) - e^{-(t-1)}\sin(t-1) 1(t-1).
\]


As a final example we consider a periodic input switched on at t = 0. As input we take the T-periodic signal that on one period [0, T) is given by
\[
u(t) = \begin{cases} 0 & \text{if } 0 \leq t < T/4, \\ 1 & \text{if } T/4 \leq t < 3T/4, \\ 0 & \text{if } 3T/4 \leq t < T, \end{cases}
\]
and periodically extended elsewhere. As is customary for causal periodic inputs, we write u(t) as u(t) = f(t) 1(t), where f(t) is a T-periodic signal on the whole time axis R. The line spectrum f_k of f(t) follows from
\[
f_k = \frac{1}{T}\int_{T/4}^{3T/4} e^{-jk\omega_0 t} \, dt = \frac{1}{jk\omega_0 T}\big( e^{-jk\omega_0 T/4} - e^{-3jk\omega_0 T/4} \big)
= \frac{1}{jk\omega_0 T}\big( e^{-jk\pi/2} - e^{-3jk\pi/2} \big) = \frac{e^{-jk\pi/2}}{jk\omega_0 T}\big( 1 - (-1)^k \big) \qquad (k \neq 0),
\]
and, for k = 0,
\[
f_0 = \frac{1}{T}\int_{T/4}^{3T/4} 1 \, dt = \frac{1}{2}.
\]
For k even (k \neq 0) we have that f_k = 0, while for k = 2n+1,
\[
f_{2n+1} = \frac{2 e^{-j(n+\frac{1}{2})\pi}}{j(2n+1)\omega_0 T} = \frac{2(-1)^{n-1}}{(2n+1)\omega_0 T} = \frac{(-1)^{n-1}}{(2n+1)\pi}.
\]
The line spectrum of the steady-state response y_{st}(t) is y_k = f_k H(jk\omega_0), so we end up with the following expression for the steady state,
\[
y_{st}(t) = \sum_{k=-\infty}^{\infty} f_k H(jk\omega_0) e^{jk\omega_0 t}
= \underbrace{\tfrac{1}{2} H(0)}_{0} + \sum_{n=-\infty}^{\infty} \frac{(-1)^{n-1}}{(2n+1)\pi} H(j(2n+1)\omega_0) e^{j(2n+1)\omega_0 t}.
\]
For computation of y_{st}(t) a computer is needed. Figure 7.12(a) shows the actual response y(t) to the square wave u(t), found through simulation. Part (b) shows a plot of the partial sum approximation of the steady state, \sum_{n=-20}^{20} f_n H(jn\omega_0) e^{jn\omega_0 t}. Note the transient behavior of y(t) in Part (a).



Figure 7.12: (a) Response to the square wave. (b) Approximation of its steady-state response, \sum_{n=-20}^{20} f_n H(jn\omega_0) e^{jn\omega_0 t}.
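The partial sum of Figure 7.12(b) is straightforward to evaluate in Matlab. A minimal sketch; the period T = 2π (so ω_0 = 1) is an assumption made here for illustration, as the text does not fix T:

T = 2*pi; w0 = 2*pi/T;               % assumed period, not given in the text
H = @(s) s.^2./(s.^2 + 2*s + 2);     % transfer function of the example
t = 0:0.01:3*T;
yst = zeros(size(t));                % the f_0 H(0) term vanishes since H(0) = 0
for n = -20:20
    k = 2*n + 1;                     % only odd harmonics contribute
    fk = (-1)^(n-1)/(k*pi);
    yst = yst + fk*H(1j*k*w0).*exp(1j*k*w0*t);
end
plot(t, real(yst));                  % the imaginary part vanishes up to rounding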

7.6 Butterworth filters

The fact that BIBO-stability is so succinctly characterized in the s-domain (namely, the poles of the transfer function must have negative real part) makes it possible to design filters in the s-domain in such a way that they behave well in the reality of the time domain. An important filter design problem is the design of low-pass filters and band-pass filters. In Section 5.3 we discussed the ideal low-pass filter in some detail, and one of the findings was that it cannot be built since it is necessarily non-causal. In this section we shall introduce the Butterworth filters. These filters approximate the behavior of ideal low-pass or band-pass filters, but they are causal, they are BIBO-stable and they can be built, whether digitally on a computer or as an analog device consisting of capacitors, resistors, and possibly operational amplifiers. We focus on low-pass filters.

We aim to approximate the ideal low-pass filter with frequency response
\[
H_{ideal}(j\omega) = \begin{cases} 1 & \text{if } |\omega| < 1, \\ 0 & \text{if } |\omega| > 1, \end{cases}
\]
with a system whose frequency response satisfies
\[
|H_n(j\omega)|^2 = \frac{1}{1 + \omega^{2n}}. \tag{7.32}
\]
The reasoning is this: if n is large then ω^{2n} is small if 0 < ω < 1, so then |H_n(jω)| ≈ 1, while for ω > 1 the number ω^{2n} is large, so that then |H_n(jω)| ≈ 0. For large n the amplitude |H_n(jω)| therefore roughly approximates that of the ideal low-pass filter. Figure 7.13 shows plots of |H_n(jω)| for n = 2, n = 4 and n = 8. In order to find a BIBO-stable system that has frequency response H_n(jω), we shall express (7.32) in terms of the transfer function. To do


Figure 7.13: Amplitude transfer of Butterworth filters |H_n(jω)| for n = 2, 4, 8.


this, we first rewrite the left-hand side of (7.32),
\[
|H_n(j\omega)|^2 = H_n(j\omega) \overline{H_n(j\omega)} = H_n(j\omega) H_n(-j\omega) = H_n(s) H_n(-s) \quad \text{for } s = j\omega,
\]
and then rewrite the right-hand side of (7.32),
\[
\frac{1}{1 + \omega^{2n}} = \frac{1}{1 + (s/j)^{2n}} = \frac{1}{1 + (-s^2)^n}, \quad \text{for } s = j\omega.
\]
In terms of the transfer function the identity (7.32) therefore is
\[
H_n(s) H_n(-s) = \frac{1}{1 + (-s^2)^n}. \tag{7.33}
\]
This equation we shall solve for H_n(s). For BIBO-stability we need H_n(s) to have its poles to the left of the imaginary axis. To this end we factor the right-hand side of (7.33) as
\[
\frac{1}{1 + (-s^2)^n} = \prod_{k=1}^{2n} \frac{1}{s - s_k},
\]
where s_k are the zeros of 1 + (-s^2)^n, so
\[
(-s_k^2)^n = -1.
\]
This has solutions
\[
(-s_k^2)^n = -1 = e^{j\pi(2k-1)}, \qquad -s_k^2 = e^{j\pi(2k-1)/n}, \quad k = 1, 2, \ldots, n,
\]
so that
\[
s_k = j e^{j\pi(k-1/2)/n}, \qquad k = 1, 2, \ldots, 2n.
\]
We see that the poles s_k lie evenly distributed on the unit circle in the complex plane, see Figure 7.14. For BIBO-stability we need that H_n(s) has poles only to the left of the imaginary axis. It will be no surprise, then, that we choose H_n(s) to be the product of factors 1/(s - s_k) for those s_k that lie on the left of the imaginary axis,
\[
H_n(s) = \prod_{k=1}^{n} \frac{1}{s - s_k}.
\]


Figure 7.14: Zeros s_k, (k = 1, 2, \ldots, 12), of 1 + (-s^2)^n for n = 6.


As a consequence H_n(-s) is the rational function with its poles -s_k all to the right of the imaginary axis, and so the poles of the product H_n(s)H_n(-s) cover all poles s_k of 1/(1 + (-s^2)^n). Now (7.33) is satisfied and we are done.

For n = 1, 2, 3, 4 one obtains
\[
H_1(s) = \frac{1}{s+1}, \qquad
H_2(s) = \frac{1}{s^2 + \sqrt{2}\, s + 1}, \qquad
H_3(s) = \frac{1}{(s+1)(s^2 + s + 1)},
\]
\[
H_4(s) = \frac{1}{\big(s^2 + \sqrt{2+\sqrt{2}}\; s + 1\big)\big(s^2 + \sqrt{2-\sqrt{2}}\; s + 1\big)}.
\]
The filter with transfer function H_n(s) obtained this way is known as the n-th order Butterworth filter, and since H_n(s) is rational it means that Butterworth filters are described by a differential equation. For example, the second-order Butterworth filter is described by
\[
\ddot{y}(t) + \sqrt{2}\, \dot{y}(t) + y(t) = u(t).
\]
Consequently Butterworth filters can be simulated. Also it is possible to combine capacitors, resistors and operational amplifiers in an electrical network in such a way that the network has H_n(s) as its transfer function. These networks hence work as low-pass filters.
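The construction above is easy to verify numerically. A minimal Matlab sketch: build the denominator of H_n(s) from the left half-plane poles s_k and, if the Signal Processing Toolbox is available, compare with the built-in butter:

n = 4;
k = 1:n;
sk = 1j*exp(1j*pi*(k - 1/2)/n);   % the n left half-plane poles derived above
den = real(poly(sk));             % expand prod(s - s_k); coefficients are real
[bz, az] = butter(n, 1, 's');     % analog Butterworth filter with cutoff 1 rad/s
disp([den; az]);                  % the two rows of coefficients should agree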

7.7 Problems

7.1 Construct the simulation diagrams and then determine the input-state-output representations of
(a) y^{(1)}(t) + y(t) = 2u(t).
(b) y^{(1)}(t) + y(t) = u^{(1)}(t) + 2u(t).


(c) y^{(4)}(t) = u(t).

7.2 Consider the state equations
\[
\dot{x}(t) = \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t).
\]
(a) Determine e^{At}.
(b) Suppose we are given the initial condition x(2) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and that u(t) = e^{-t} for t > 2. Find x(t) for t > 2.

7.3 Determine e^{At} for A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} and verify that \frac{d}{dt} e^{At} = A e^{At}.
7.4 We are given an initially at rest system described by
\[
\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t),
\]
in which
\[
A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & 1 \end{bmatrix}, \qquad
D = 1.
\]
(a) Determine the transfer function H(s).
(b) Determine the step response.
(c) Find the response y(t) to the input u(t) = e^{-t} 1(t).
7.5 We are given an initially at rest system described by
\[
\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t),
\]
in which
\[
A = \begin{bmatrix} 1 & 2 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}, \qquad
D = 0.
\]
(a) Determine the transfer function H(s).
(b) Is the system BIBO-stable?
(c) Determine the impulse response.

7.6 Suppose T is an LTI system with transfer function
\[
H(s) = \frac{s^2}{s^2 + 2s + 5}.
\]


(a) Determine the impulse response.
(b) Is the system BIBO-stable?
(c) Determine the response y(t) to the rectangular pulse u(t) = rect_2(t-1).

The following problems assume knowledge of Appendix B.

7.7 Determine the basis solutions of
(a) \frac{L}{R} y^{(1)}(t) + y(t) = u(t).
(b) y^{(4)}(t) + 3y^{(2)}(t) + 2y(t) = 2u(t).
(c) y^{(3)}(t) + 3y^{(2)}(t) + 3y^{(1)}(t) + y(t) = 0.
7.8 We are given a continuous-time system described by
\[
\ddot{y}(t) - \tfrac{1}{4} y(t) = \tfrac{1}{2} \dot{u}(t) - \tfrac{1}{2} u(t), \qquad t \in \mathbb{R}. \tag{7.34}
\]
(a) Determine the basis solutions.
(b) Find the transfer function of the initially at rest system.
(c) Is the initially at rest system (7.34) BIBO-stable?
(d) Find the simulation diagram.
(e) Find an input-state-output representation of the system.

Now suppose that u(t) is given as
\[
u(t) = t - \lfloor t \rfloor. \tag{7.35}
\]
(f) Determine the output y(t) for t > 0 with initial conditions y(0) = 0, \dot{y}(0) = 1.
7.9 Consider the continuous-time system described by
\[
y^{(2)}(t) + 4y^{(1)}(t) = u^{(1)}(t) + 4u(t). \tag{7.36}
\]
(a) Determine the basis solutions.
(b) Find the simulation diagram of (7.36).
(c) Determine the transfer function of the initially-at-rest system described by (7.36).
(d) Is the initially-at-rest system (7.36) BIBO-stable?

Suppose now that u(t) equals
\[
u(t) = 1(t). \tag{7.37}
\]
(e) Determine y(t) for t > 0 with initial conditions y(0) = 1 and \dot{y}(0) = 4.


7.10 Given is the system described by the ODE
\[
\ddot{y}(t) - 2\dot{y}(t) - 3y(t) = \dot{u}(t) - \alpha u(t). \tag{7.38}
\]
The value of α is left unspecified.

(a) Determine the basis solutions.
(b) Construct the simulation diagram.
(c) Express the system as an input-state-output representation.
(d) Determine the transfer function via the input-state-output representation.
(e) Determine the transfer function directly from the ODE (7.38).
(f) For which value(s) of α is the initially at rest system BIBO-stable?
More involved problems:

7.11 Show that y_{part}(t) = (h * u)(t) is a particular solution of the ODE (7.5), where h(t) is the impulse response of its initially at rest behavior.

7.12 Let k ∈ N and let C^k denote the set of signals f(t) for which f^{(k)}(t) exists and is continuous. Consider system (7.5). Show that u(t) ∈ C^k implies that y(t) ∈ C^{k+r}, where r := deg(Q) - deg(P) is the relative degree of the transfer function H(s) = P(s)/Q(s) of the system.
Matlab problems:

7.13 Systems described by differential equations (7.5) are in Matlab represented by the arrays of the coefficients,

den=[ 1 qn-1 ... q0 ]
num=[ pn pn-1 ... p0 ]

In Matlab there are hundreds of macros at our disposal to analyze systems. For instance, we may compute poles and generate plots of the impulse response and step response. Here is an example.

den=[1 2 4];                  % y'' + 2y' + 4y = u' + 2u
num=[1 2];
roots(den);                   % compute the poles of the system
impulse(num,den);             % plot the impulse response h(t)
step(num,den);                % plot the step response (h*1)(t)
[A,B,C,D]=tf2ss(num,den);     % find an input-state-output repr.

Type this in and then type in the commands


t=0:0.1:20;
noise=rand(1,length(t))/5-1/10;
plot(t,noise);
x=sin(t)+noise;
y=lsim(num,den,x,t);
plot(t,y,t,x);

Explain what these commands do. What can you say about possible noise reduction of the system? Finally type in and interpret

[ampl,myphase,w]=bode(num,den);   % determine amplitude and phase of H(j*w)
                                  % at a suitable vector of frequencies w
plot(w,ampl);                     % plot |H(jw)| against w
plot(w,myphase);                  % plot arg H(jw) against w

7.14 Reproduce the two Matlab plots of Figure 7.12. (You probably want to have a look at the Matlab example on page 124.)

7.15 Simulate the initially at rest system T_2 described by
\[
y^{(2)}(t) + y^{(1)}(t) + y(t) = u^{(1)}(t) + u(t),
\]
over the time interval [0, 10] for the inputs
(a) u(t) = 1(t),
(b) u(t) = (1 - e^{-t}) 1(t).

Simulate the step response of the initially at rest system described by
\[
y^{(3)}(t) + 2y^{(2)}(t) + 2y^{(1)}(t) + y(t) = u^{(1)}(t) + u(t).
\]
Is this the same as the response found in (7.15b)? (Explain.) What is its steady-state response?

8 The z-transform and Discrete-time Systems

The Fourier and Laplace transforms are foremost of use in the analysis of continuous-time signals and systems. In this chapter we treat a transform that plays a similar role in the analysis of discrete-time signals and systems: the z-transform. The z-transform assigns to a discrete-time signal f[n] a function F(z), defined in the complex z-plane as the two-sided series
\[
F(z) = \sum_{n=-\infty}^{\infty} f[n] z^{-n}.
\]
The treatment of the z-transform follows the same lines as that of the other transforms introduced in this course. First we define the z-transform and comment on its convergence properties (Section 8.1). Next, a number of important properties and rules for the z-transform are reviewed (Section 8.2). In Section 8.4 we treat the convolution product in a form that suits the z-transform. The one-sided z-transform, analogous to the one-sided Laplace transform, is the topic of Section 8.5. Finally, in Section 8.6 the use of the z-transform in the analysis of LTI discrete-time systems is discussed.
Like with the Laplace transform, we shall not develop a general inverse z-transform. We are only concerned with the reconstruction of a discrete-time signal whose given z-transform is a rational function. This is the most useful class of z-transforms, and no general inverse z-transform theory is needed to find the inverse z-transform of such functions. With the help of partial fraction expansion the inverse z-transform of a rational function can be determined.

The two discrete-time signals most fundamental in signal theory are the discrete delta function δ[n], also known as the unit pulse, and the discrete unit step 1[n]. The unit pulse δ[n] is defined as
\[
\delta[n] = \begin{cases} 0 & \text{if } n \neq 0, \\ 1 & \text{if } n = 0, \end{cases}
\]
and the discrete unit step 1[n] is defined as



\[
1[n] = \begin{cases} 1 & \text{if } n \geq 0, \\ 0 & \text{if } n < 0. \end{cases}
\]

8.1 Definition and convergence of the z-transform

Figure 8.1: Continuous-time signal f(t), sampled signal f[n] and combed signal \sum_n f[n]\delta(t - nT).


Suppose we have a continuous-time signal f(t) and its sampled signal f[n] := f(nT), (n ∈ Z), which is sampled with a sampling period T. Using the delta-train or impulse train
\[
\sum_{n=-\infty}^{\infty} \delta(t - nT),
\]
it is possible to interpret a discrete-time sampled signal f[n] as a continuous-time signal. Indeed, multiplying f(t) with the delta-train gives
\[
f(t) \sum_{n=-\infty}^{\infty} \delta(t - nT) = \sum_{n=-\infty}^{\infty} f(t)\delta(t - nT) = \sum_{n=-\infty}^{\infty} f[n]\delta(t - nT), \tag{8.1}
\]
and this is a continuous-time signal uniquely determined by the samples f[n], see Figure 8.1. This way we can interpret every discrete-time signal as a continuous-time signal. For the frequency spectrum of (8.1) we find that
\[
F(\omega) = \sum_{n=-\infty}^{\infty} f[n] \mathcal{F}\{\delta(t - nT)\} = \sum_{n=-\infty}^{\infty} f[n] e^{-jn\omega T}.
\]
The frequency spectrum apparently is a Fourier series in the frequency domain with period 2π/T, that is, with fundamental frequency T. If we now take z = e^{j\omega T}, then we arrive at \sum_{n=-\infty}^{\infty} f[n] z^{-n}. Remember that the Laplace transform was introduced as the Fourier transform for s = jω and that we subsequently allowed for arbitrary s, not just imaginary s = jω. We do the same thing here. Even though z is introduced as z = e^{j\omega T}, we shall from now on allow arbitrary complex-valued z.


8.1.1. Definition. Let f[n] be a discrete-time signal. The z-transform F(z) of f[n] is defined as
\[
F(z) = \sum_{n=-\infty}^{\infty} f[n] z^{-n},
\]
for those values z ∈ C for which the series converges.

You may recognize in the z-transform a Laurent series, that is to say, a two-sided power series, where besides the positive powers also the negative powers of the variable z occur. The convergence properties of these series are similar to those of power series. To analyze the convergence properties, we split the two-sided z-transform into two parts, an anti-causal part \sum_{n=-\infty}^{-1} f[n]z^{-n} and a causal part \sum_{n=0}^{\infty} f[n]z^{-n},
\[
F(z) = \underbrace{\cdots + f[-3]z^3 + f[-2]z^2 + f[-1]z}_{\text{anti-causal part, } F_-(z)} + \underbrace{f[0] + f[1]z^{-1} + f[2]z^{-2} + \cdots}_{\text{causal part, } F_+(z)}.
\]
Both the anti-causal part and the causal part are infinite sums, both of which have to exist in order for F(z) to be defined. We shall denote the anti-causal part of the z-transform by F_-(z), and the causal part is denoted by F_+(z). So in case the z-transform converges, then we have that
\[
F(z) = \sum_{n=-\infty}^{\infty} f[n] z^{-n} = F_-(z) + F_+(z). \tag{8.2}
\]

8.1.2. Example. Suppose f[n] equals
\[
f[n] = \begin{cases} 1 & \text{if } n < 0, \\ \dfrac{1}{3^n} & \text{if } n \geq 0. \end{cases}
\]
Then
\[
F_-(z) = \sum_{n=-\infty}^{-1} z^{-n} = \sum_{n=1}^{\infty} z^n = \{\text{if } |z| < 1\} = \frac{z}{1-z},
\]
and
\[
F_+(z) = \sum_{n=0}^{\infty} \frac{1}{3^n} z^{-n} = \sum_{n=0}^{\infty} \frac{1}{(3z)^n} = \{\text{if } 1/|3z| < 1\} = \frac{1}{1 - \frac{1}{3z}} = \frac{3z}{3z - 1}.
\]
The anti-causal part F_-(z) exists for all |z| < 1, and the causal part F_+(z) exists for all |z| > 1/3. The z-transform then exists whenever 1/3 < |z| < 1. The set of such values z forms a ring-shaped region in the complex plane, see Figure 8.2.



Figure 8.2: A ring-shaped region of convergence {z ∈ C : R_1 < |z| < R_2}.

In the above example the set of values z where the z-transform is defined turned out to be a ring-shaped region. It may be shown that this is always the case. In other words, there are radii R_1 and R_2 such that F(z) exists in the interior of the ring
\[
\{ z \in \mathbb{C} : R_1 < |z| < R_2 \},
\]
while it does not exist in the interior of its complement {z : |z| < R_1 or |z| > R_2}. We call the ring {z ∈ C : R_1 < |z| < R_2} the region of convergence of F(z). The z-transform converges on this region of convergence, and it may also converge for some points on the boundary of this region. If R_1 = 0 and R_2 > 0, then the region of convergence is a disc with radius R_2, possibly with the point z = 0 removed. If R_2 = ∞, then the region of convergence is the complement of the disc with radius R_1, see Figure 8.3. An example of such a case is when f[n] is initially at rest. This is an important case; a case we shall prove.

8.1.3. Definition. A signal f[n] is initially at rest if f[n] = 0 for all n < n_0, for a certain n_0 (see Figure 8.4).

8.1.4. Lemma. Let f[n] be an initially at rest signal. The z-transform F(z) of f[n] exists either for all z ∈ C, or there is an R_1 ≥ 0 such that F(z) exists for all |z| > R_1 and F(z) does not exist for any |z| < R_1. (See Figure 8.3.) Moreover, on the region of convergence |z| > R_1 the z-transform F(z) = \sum_{n=n_0}^{\infty} f[n]z^{-n} converges absolutely.

Proof. Let R_1 := inf{|z_0| : F(z_0) exists}. By definition of R_1, the z-transform F(z) does not exist for any |z| < R_1. Suppose now that |z| > R_1. Then there is a z_0 for which R_1 ≤ |z_0| < |z| and such that F(z_0) exists. Since F(z_0) = \sum_{n=n_0}^{\infty} f[n]z_0^{-n} exists, we must have that \lim_{n\to\infty} |f[n]z_0^{-n}| = 0. In particular this shows that |f[n]z_0^{-n}| is bounded by some M


Figure 8.3: Region of convergence for R_2 = ∞.

Figure 8.4: An example of an initially at rest signal (at rest up to n_0).


(independent of n). With this we can show that F(z) converges absolutely,
\[
|F(z)| = \Big| \sum_{n=n_0}^{\infty} f[n] z^{-n} \Big| = \Big| \sum_{n=n_0}^{\infty} f[n] z_0^{-n} \Big(\frac{z_0}{z}\Big)^{n} \Big|
\leq \sum_{n=n_0}^{\infty} M \Big|\frac{z_0}{z}\Big|^{n} = \{\text{since } |z_0/z| < 1\} = M \Big|\frac{z_0}{z}\Big|^{n_0} \frac{1}{1 - |z_0/z|} < \infty.
\]

The mapping \mathcal{Z} that sends f[n] to its z-transform F(z),
\[
\mathcal{Z}\{f[n]\} = F(z),
\]
is called the z-transform. Rather confusingly, the mapping \mathcal{Z} goes under the same name as the function F(z). If V is the region of convergence, then the link between f[n] and its z-transform F(z) is conveniently expressed as a transform pair with mention of the region of convergence,
\[
f[n] \longleftrightarrow F(z), \qquad (z \in V).
\]
We say that F(z) describes f[n] in the z-domain. The z-domain is the complex plane.


8.1.5. Example. Let f[n] be the signal
\[
f[n] = \begin{cases} 2^n & \text{if } n < 0, \\ 0 & \text{if } n = 0, \\ 1/n & \text{if } n > 0. \end{cases}
\]
The anti-causal part is the power series
\[
\sum_{n=-\infty}^{-1} 2^n z^{-n} = \{k = -n\} = \sum_{k=1}^{\infty} (z/2)^k.
\]
This is a geometric series with ratio z/2, hence it converges for |z| < 2 and diverges for |z| ≥ 2. The causal part is the series
\[
\sum_{n=1}^{\infty} \frac{z^{-n}}{n}.
\]
This can be seen as a power series in the variable 1/z. Its convergence radius is 1 (see Example 1.6.5). Therefore the causal part converges for |z| > 1 and diverges for |z| < 1. The region of convergence of the z-transform of f[n] is the ring-shaped region 1 < |z| < 2.

8.1.6. Example. Let f[n] be the causal signal f[n] = a^n 1[n], with a ∈ C. This is an initially-at-rest signal. The anti-causal part of f[n] is zero, so the z-transform F_-(z) of the anti-causal part is identically zero as well and is defined for every z ∈ C. The z-transform is a geometric series,
\[
F(z) = \sum_{n=-\infty}^{\infty} a^n 1[n] z^{-n} = \sum_{n=0}^{\infty} a^n z^{-n} = \sum_{n=0}^{\infty} (a/z)^n.
\]
The ratio in this geometric series is a/z, so for convergence we need that |z| > |a|. The region of convergence is the complement of the disc with radius |a|. On the region of convergence the geometric series converges with limit 1/(1 - \frac{a}{z}) = z/(z-a). We found the transform pair
\[
a^n 1[n] \longleftrightarrow \frac{z}{z-a}, \qquad (|z| > |a|). \tag{8.3}
\]
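The pair (8.3) is easy to check numerically: far enough outside the disc |z| ≤ |a| the truncated series already matches z/(z-a) to machine precision. A minimal Matlab sketch (the values a = 0.8 and z0 = 1.5 are arbitrary test data):

a = 0.8; z0 = 1.5;                 % test point with |z0| > |a|
n = 0:200;
Fapprox = sum(a.^n .* z0.^(-n));   % truncated version of the series in (8.3)
Fexact = z0/(z0 - a);
disp([Fapprox, Fexact]);           % the two numbers should agree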


8.1.7. Example. Suppose f[n] is the discrete-time signal defined as
\[
f[n] = -a^n 1[-n-1] = \begin{cases} 0 & \text{if } n \geq 0, \\ -a^n & \text{if } n < 0. \end{cases}
\]
Here a is some nonzero complex number. The causal part of the z-transform consists of zero elements only and hence is the zero function, well defined for any z ∈ C. For the anti-causal part we find that
\[
F_-(z) = -\sum_{n=-\infty}^{-1} a^n z^{-n} = -\sum_{n=1}^{\infty} (z/a)^n.
\]
This is a geometric series with ratio z/a. It converges if and only if |z| < |a|, in which case its limit is -\frac{z}{a}/(1 - \frac{z}{a}) = z/(z-a). The region of convergence is the disc with radius |a|,
\[
-a^n 1[-n-1] \longleftrightarrow \frac{z}{z-a}, \qquad (|z| < |a|).
\]




8.1.8. Example. The z-transform of the discrete-time unit pulse δ[n] is easily obtained. Indeed, since δ[n] is zero except for n = 0, where it is 1, we find immediately that
\[
\sum_{n=-\infty}^{\infty} \delta[n] z^{-n} = 1.
\]
The series converges for any z ∈ C. So there holds
\[
\delta[n] \longleftrightarrow 1, \qquad (z \in \mathbb{C}). \tag{8.4}
\]


It is important to mention with a z-transform its region of convergence. This is evident from Examples 8.1.6 and 8.1.7. In both examples the z-transform was found to be z/(z-a), even though the discrete-time signals f[n] in these examples were anything but the same. From this it is clear that for a given F(z) we cannot determine the inverse z-transform if we do not know its region of convergence. For initially at rest signals, however, we know that the region of convergence is necessarily the complement of a disc with a certain radius. In that case, as we shall see, knowledge of F(z) is enough to find f[n]; no specific knowledge of the region of convergence is needed. We return to this problem in Section 8.3.

8.2 Properties of the z-transform

The properties and rules of calculus that we treat in this section make use of the transform pairs f[n] ↔ F(z) and g[n] ↔ G(z). Some rules are derived; others are left to the reader to verify.


Properties and rules of calculus

1. Linearity: a f[n] + b g[n] ↔ a F(z) + b G(z).
This rule holds for any two complex numbers a and b and states that \mathcal{Z} is a linear mapping.

2. Time-reversal: f[-n] ↔ F(1/z).
With time-reversal is meant the operation in the n-domain that replaces n with -n.

3. Conjugation: f^*[n] ↔ (F(z^*))^*.
Proof:
\[
\sum_{n=-\infty}^{\infty} f^*[n] z^{-n} = \sum_{n=-\infty}^{\infty} \big( f[n] (z^*)^{-n} \big)^* = \Big( \sum_{n=-\infty}^{\infty} f[n] (z^*)^{-n} \Big)^* = (F(z^*))^*.
\]

4. Time shift (shift in the n-domain): f[n-k] ↔ z^{-k} F(z).
Proof:
\[
\sum_{n=-\infty}^{\infty} f[n-k] z^{-n} = \{m = n-k\} = \sum_{m=-\infty}^{\infty} f[m] z^{-m-k} = z^{-k} F(z).
\]

5. Scaling in the z-domain: a^n f[n] ↔ F(z/a), (a ≠ 0).
Proof:
\[
\sum_{n=-\infty}^{\infty} a^n f[n] z^{-n} = \sum_{n=-\infty}^{\infty} f[n] (z/a)^{-n}.
\]

6. Differentiation in the z-domain: n f[n] ↔ -z F'(z).
Proof:
\[
\sum_{n=-\infty}^{\infty} n f[n] z^{-n} = \sum_{n=-\infty}^{\infty} f[n] (-z) (z^{-n})' = -z \sum_{n=-\infty}^{\infty} f[n] (z^{-n})' = -z F'(z).
\]
In the last equality we interchanged summation and differentiation. It may be shown that this is allowed for all z in the region of convergence, which has to do with the fact that on the region of convergence the z-transform converges absolutely.
8.2.1. Example.


a) Any discrete-time signal f[n] can be expressed as a sum of shifted, weighted discrete-time delta pulses as
\[
f[n] = \sum_{k=-\infty}^{\infty} f[k] \delta[n-k]. \tag{8.5}
\]
Convergence of this series is no problem. Indeed, for any n all terms f[k]δ[n-k], (k ∈ Z), are zero except when k = n, in which case f[k]δ[n-k] = f[n]δ[0] = f[n]. Now δ[n] ↔ 1, so by the time-shift rule we find that
\[
\delta[n-k] \longleftrightarrow z^{-k}.
\]
If we take in (8.5) the z-transform of both the left and right-hand side, then we re-derive the z-transform:
\[
\mathcal{Z}\{f[n]\} = \mathcal{Z}\Big\{ \sum_{k=-\infty}^{\infty} f[k]\delta[n-k] \Big\} = \sum_{k=-\infty}^{\infty} f[k] \mathcal{Z}\{\delta[n-k]\} = \sum_{k=-\infty}^{\infty} f[k] z^{-k}.
\]

b) The z-transform of the discrete-time unit step 1[n] equals z/(z-1) with region of convergence |z| > 1 (see Example 8.1.6 with a = 1). By the differentiation rule, we have that
\[
n \, 1[n] \longleftrightarrow -z \frac{d}{dz} \frac{z}{z-1} = \frac{z}{(z-1)^2}, \qquad (|z| > 1).
\]
Application of the time-shift rule gives (n-1) 1[n-1] ↔ 1/(z-1)^2. Then if we differentiate once again we get that
\[
n(n-1) \, 1[n-1] \longleftrightarrow \frac{2z}{(z-1)^3}, \qquad (|z| > 1).
\]
Note that n(n-1) 1[n-1] = n(n-1) 1[n], so n(n-1) 1[n] ↔ 2z/(z-1)^3. We may repeat this process any number of times, say k times, and this will bring us to
\[
n(n-1)\cdots(n-k+1) \, 1[n] \longleftrightarrow k! \, \frac{z}{(z-1)^{k+1}}, \qquad (|z| > 1).
\]
This can be expressed more succinctly as
\[
\binom{n}{k} 1[n] \longleftrightarrow \frac{z}{(z-1)^{k+1}}, \qquad (|z| > 1).
\]
The binomial coefficients are defined as \binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k!}. Note that \binom{n}{k} = 0 if k > n. Application of the scaling rule finally yields the transform pair
\[
a^n \binom{n}{k} 1[n] \longleftrightarrow \frac{a^k z}{(z-a)^{k+1}}, \qquad (|z| > |a|).
\]



f[n]                        F(z)                     Region of convergence
δ[n-k]                      z^{-k}                   z ≠ 0 (or every z if k ≤ 0)
a^n 1[n]                    z/(z-a)                  |z| > |a|
a^n \binom{n}{k} 1[n]       a^k z/(z-a)^{k+1}        |z| > |a|
a^{n-1} 1[n-1]              1/(z-a)                  |z| > |a|

Table 8.1: Some z-transform pairs.

8.3 The inverse z-transform of rational functions

In the introduction we mentioned that we shall not derive a general inverse z-transform. This is not to say that an inverse does not exist, only that a general inverse requires a mathematical language that is too advanced for an introductory course on signals and systems. We mostly encounter z-transforms F(z) that are rational in z, and for such functions it is possible to derive the inverse z-transform without requiring deep mathematics. A rational function F(z) can be written in the form
\[
F(z) = \frac{p_m z^m + p_{m-1} z^{m-1} + \ldots + p_1 z + p_0}{q_l z^l + q_{l-1} z^{l-1} + \ldots + q_1 z + q_0}. \tag{8.6}
\]
The numerator and denominator are polynomials in the complex variable z. The numerator polynomial we denote by P(z) and the denominator polynomial we denote by Q(z),
\[
F(z) = \frac{P(z)}{Q(z)}.
\]
We assume that p_m ≠ 0 and q_l ≠ 0, so P(z) is of degree m and Q(z) is of degree l. In addition we assume that P(z) and Q(z) have no common zeros, which means that there is no polynomial D(z) of degree 1 or more such that P(z)/D(z) is polynomial and Q(z)/D(z) is polynomial.

The problem that we consider is to find a discrete-time signal f[n] such that f[n] ↔ F(z). Without knowledge of the region of convergence we can in general not uniquely determine f[n]. However, if we restrict attention to signals that are initially at rest, then the region of convergence is necessarily the complement of some disc, and this knowledge, as we shall soon see, is enough to determine f[n] uniquely and the region of convergence uniquely.

8.3.1. Definition. The poles of a rational function \frac{P(z)}{Q(z)} are the zeros of its inverse \frac{Q(z)}{P(z)}.

Since we assume that P(z) and Q(z) have no common zeros, the poles are simply the zeros of Q(z). The poles z_k are those numbers z where P(z)/Q(z) is not defined. Now, on the region of convergence the z-transform F(z) = P(z)/Q(z) is by definition well-defined, so


we can conclude that the region of convergence does not contain any pole z_k of F(z). If, for example, the z_k in Figure 8.5 are the poles of F(z), then the largest possible convergence region is the complement of the disc with center at z = 0 and radius max_k |z_k|, such as indicated in Figure 8.5.

Figure 8.5: The largest possible region of convergence.

Interestingly, this largest possible region of convergence is the actual region of convergence:
8.3.2. Lemma. Suppose F(z) is a rational function with poles z_k ∈ C. If F(z) is the z-transform of an initially at rest signal f[n], then the region of convergence of F(z) is {z ∈ C : |z| > max_k |z_k|}.

Proof. The inverse z-transform f[n] obtained through partial fraction expansion has the mentioned region of convergence. Suppose there is another initially at rest signal g[n] with F(z) as its z-transform and which exists for at least one z ∈ C. Suppose, to obtain a contradiction, that g[n] differs from f[n]. As both f[n] and g[n] are initially at rest, also their difference h[n] = f[n] - g[n] is initially at rest. By assumption h[n] is not identically zero, so there is a time index n_0 ∈ Z for which h[n_0] ≠ 0 while h[n] = 0, n < n_0, see Figure 8.4. Now,
\[
\mathcal{Z}\{f[n] - g[n]\} = \mathcal{Z}\{h[n]\} = \sum_{n=n_0}^{\infty} h[n] z^{-n}.
\]
If z is large enough, then z is in the convergence regions of both f[n] and g[n], so then the left-hand side of the above equation is defined and then equal to F(z) - F(z) = 0. This leads to a contradiction, since the right-hand side is not identically zero for large z, because
\[
\lim_{z \to \infty} z^{n_0} \sum_{n=n_0}^{\infty} h[n] z^{-n} = \lim_{z \to \infty} \Big( h[n_0] + h[n_0+1]\frac{1}{z} + h[n_0+2]\frac{1}{z^2} + \cdots \Big) = h[n_0] \neq 0.
\]
This is a contradiction, so our assumption that g[n] differs from f[n] is wrong. They are the same.
8.3.3. Example. Suppose
\[
F(z) = \frac{z^2}{(2z-1)(z-1)}.
\]
There are two poles, z_1 = 1/2 and z_2 = 1. The region of convergence of F(z), seen as the z-transform of an initially at rest signal, hence is |z| > 1.

To determine the initially at rest signal f[n] given its rational z-transform F(z) we shall use partial fraction expansion. The following examples demonstrate the method. Table 8.1 is indispensable to these examples.
8.3.4. Example.
a) Let F(z) be given by
\[
F(z) = \frac{z^4}{z^2 - 1}.
\]
Motivated by the formulae of Table 8.1, where a factor z appears in the numerator, we shall perform partial fraction expansion on F(z)/z rather than on F(z).

Now F(z)/z = z^3/(z^2-1) is not strictly proper. It is not even proper. So first we have to separate a polynomial part (see Appendix A),
\[
\frac{F(z)}{z} = \frac{z^3}{z^2-1} = \{\text{division with remainder}\} = z + \frac{z}{z^2-1}.
\]
The term z/(z^2-1) is strictly proper, and so has a partial fraction expansion,
\[
\frac{F(z)}{z} = z + \frac{1}{2}\frac{1}{z-1} + \frac{1}{2}\frac{1}{z+1}.
\]
Therefore,
\[
F(z) = z^2 + \frac{1}{2}\frac{z}{z-1} + \frac{1}{2}\frac{z}{z+1}. \tag{8.7}
\]
The signal f[n] may now be determined by taking the inverse z-transform of each of the three terms on the right-hand side of (8.7). Using Table 8.1 we get that
\[
f[n] = \delta[n+2] + \tfrac{1}{2} 1[n] + \tfrac{1}{2}(-1)^n 1[n].
\]

b) Let F(z) be the function
\[
F(z) = \frac{1}{z^2 + 2z + 2}.
\]


The denominator has the factorization z^2 + 2z + 2 = (z+1)^2 + 1 = (z+1+j)(z+1-j). Partial fraction expansion of F(z)/z gives
\[
\frac{F(z)}{z} = \frac{1}{z(z+1+j)(z+1-j)} = \frac{1}{2}\frac{1}{z} - \frac{1+j}{4}\frac{1}{z+1+j} - \frac{1-j}{4}\frac{1}{z+1-j}.
\]
Then F(z) follows as
\[
F(z) = \frac{1}{2} - \frac{1+j}{4}\frac{z}{z+1+j} - \frac{1-j}{4}\frac{z}{z+1-j},
\]
and then from Table 8.1 we may determine the inverse z-transform,
\[
f[n] = \tfrac{1}{2}\delta[n] + \tfrac{1}{4}\big( (-1-j)^{n+1} + (-1+j)^{n+1} \big) 1[n]
= \tfrac{1}{2}\delta[n] + \tfrac{1}{2} \mathrm{Re}\,(-1+j)^{n+1} \, 1[n].
\]
If so desired, the expression for f[n] can be rewritten in real form. To this end express -1+j as -1+j = \sqrt{2}\, e^{j\theta} with \theta = 3\pi/4. Then we get that
\[
f[n] = \tfrac{1}{2}\delta[n] + \tfrac{1}{2}(\sqrt{2})^{n+1} \cos\big( \tfrac{3\pi}{4}(n+1) \big) 1[n].
\]




There is an easy test to verify whether f[n] obtained from F(z) makes sense. The idea is this. If f[n] is at rest up to n = n_0 (see Figure 8.4), then F(z) = \sum_{n=n_0}^{\infty} f[n]z^{-n}, with f[n_0] ≠ 0. Consequently
\[
\lim_{z\to\infty} z^{n_0} F(z) = \lim_{z\to\infty} \sum_{n=n_0}^{\infty} f[n] z^{-(n-n_0)} = \{k = n - n_0\} = \lim_{z\to\infty} \sum_{k=0}^{\infty} f[k+n_0] z^{-k} = f[n_0].
\]
This tells us two things:

- We may determine f[n_0] through \lim_{|z|\to\infty} z^{n_0} F(z),
- the time index n_0 is that (unique) integer for which \lim_{|z|\to\infty} z^{n_0} F(z) exists with a nonzero limit.

In Example 8.3.4 we considered F(z) = z^4/(z^2-1). The only time index n_0 for which \lim_{|z|\to\infty} z^{n_0} \frac{z^4}{z^2-1} exists and is nonzero is n_0 = -2,
\[
\lim_{|z|\to\infty} z^{-2} \frac{z^4}{z^2-1} = \lim_{|z|\to\infty} \frac{z^2}{z^2-1} = \{\text{divide numerator and denominator by the highest power } z^2\} = \lim_{|z|\to\infty} \frac{1}{1 - 1/z^2} = 1.
\]
Verify yourself that no other n_0 will make \lim_{|z|\to\infty} z^{n_0} \frac{z^4}{z^2-1} finite and nonzero. It is now easy to show that


8.3.5. Theorem (initial value theorem). The initially at rest f[n] with z-transform
\[
F(z) = \frac{p_m z^m + p_{m-1} z^{m-1} + \ldots + p_1 z + p_0}{q_l z^l + q_{l-1} z^{l-1} + \ldots + q_1 z + q_0}, \qquad p_m \neq 0, \; q_l \neq 0,
\]
is initially at rest up to n_0 = l - m, and f[n_0] = \lim_{|z|\to\infty} z^{l-m} F(z) = p_m/q_l.

In particular, f[n] is causal iff F(z) is proper.


8.3.6. Example. We calculate the causal signal f[n] whose z-transform equals the strictly proper
\[
F(z) = \frac{1}{z(z-2)^2}.
\]
After partial fraction expansion of F(z)/z we obtain
\[
F(z) = \frac{1}{4}\Big( 1 + \frac{1}{z} - \frac{z}{z-2} + \frac{z}{(z-2)^2} \Big).
\]
The inverse z-transform follows, again, from Table 8.1,
\[
f[n] = \frac{1}{4}\Big( \delta[n] + \delta[n-1] - 2^n 1[n] + n 2^{n-1} 1[n] \Big).
\]
The degree of the denominator of F(z) is 3, and the degree of the numerator is 0. So f[n] is zero for all n < 3 - 0 = 3. Indeed, f[n] is like that. Note that \lim_{|z|\to\infty} z^3 F(z) = \lim_{|z|\to\infty} \frac{z^3}{z(z-2)^2} = \lim_{|z|\to\infty} \frac{1}{(1-2/z)^2} = 1, which equals f[3] = 1.
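A numeric cross-check is possible with Matlab's impz (Signal Processing Toolbox), which computes the inverse z-transform of a causal rational F(z) written in powers of z^{-1}. A minimal sketch for this example, using F(z) = 1/(z(z-2)^2) = z^{-3}/(1 - 2z^{-1})^2:

b = [0 0 0 1];                 % numerator z^{-3}, in powers of z^{-1}
a = [1 -4 4];                  % (1 - 2z^{-1})^2 = 1 - 4z^{-1} + 4z^{-2}
f = impz(b, a, 10);            % f(1) corresponds to f[0], f(2) to f[1], ...
n = (0:9)';
fClosed = ((n==0) + (n==1) - 2.^n + n.*2.^(n-1))/4;
disp([f fClosed]);             % the two columns should coincide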

8.4 Convolution

With each of the transforms considered so far it was possible to define a convolution product of two signals whose transform is the product of the transforms of the two signals. Also for the z-transform there is such a convolution. We assume in the following that F(z) and G(z) are the z-transforms of discrete-time signals f[n] and g[n]. We try to find a signal h[n] whose z-transform is the product F(z)G(z). Written out, this is
\[
\sum_{n=-\infty}^{\infty} h[n] z^{-n} = \Big( \sum_{l=-\infty}^{\infty} f[l] z^{-l} \Big) \Big( \sum_{k=-\infty}^{\infty} g[k] z^{-k} \Big).
\]
The right-hand side is the product of two infinite series. On the intersection of the two regions of convergence of F(z) and G(z) the two infinite series converge absolutely, which allows us to interchange product and summation,
\[
\sum_{n=-\infty}^{\infty} h[n] z^{-n} = \sum_{l=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} f[l] g[k] z^{-l-k}.
\]


To find h[n] we equate equal powers of z. The coefficient of z^{-n} on the left-hand side, h[n], must equal the sum of the products f[l]g[k] for which l + k = n. That is,
\[
h[n] = \sum_{l=-\infty}^{\infty} f[l] g[n-l].
\]
It is now clear how we should define convolutions with respect to the z-transform.

8.4.1. Definition. The convolution or convolution product (f * g)[n] of two discrete-time signals f[n] and g[n] is defined as
\[
(f * g)[n] = \sum_{l=-\infty}^{\infty} f[l] g[n-l]. \tag{8.8}
\]

Verify yourself that convolution products commute. The convolution product of Definition 8.4.1 is an infinite series. If f[n] and g[n] are causal signals then convergence of the convolution product is trivial, since then the convolution product (f * g)[n] for each n is a finite sum,
\[
(f * g)[n] = \sum_{l=-\infty}^{\infty} f[l] g[n-l] = \{f[l] = 0 \text{ for } l < 0\} = \sum_{l=0}^{\infty} f[l] g[n-l]
= \{g[n-l] = 0 \text{ for } l > n\} = \sum_{l=0}^{n} f[l] g[n-l].
\]

By the above construction, the z-transform of a convolution product is the product of the z-transforms. We formulate this as a theorem.

8.4.2. Theorem (Convolution theorem for the z-transform). Let f[n] and g[n] be two discrete-time signals with z-transforms F(z) and G(z), respectively. Then
\[
(f * g)[n] \longleftrightarrow F(z) G(z).
\]

8.4.3. Example. Let f[n] and g[n] be the discrete-time signals
\[
f[n] = 2^{-n} 1[n], \qquad g[n] = 3^{-n} 1[n].
\]
The z-transform of f[n] is F(z) = z/(z - 1/2) with region of convergence |z| > 1/2. The z-transform of g[n] is G(z) = z/(z - 1/3) with region of convergence |z| > 1/3. The convolution product has z-transform
\[
F(z)G(z) = \frac{z^2}{(z - 1/2)(z - 1/3)}.
\]
After partial fraction expansion of F(z)G(z)/z, etcetera, the convolution may be shown to be
\[
(f * g)[n] = \big( 3 \cdot 2^{-n} - 2 \cdot 3^{-n} \big) 1[n].
\]



8.4.4. Example. The sum
\[
\sum_{l=-\infty}^{n} f[l]
\]
may be seen as the convolution with the discrete unit step 1[n]. Indeed,
\[
(f * 1)[n] = \sum_{l=-\infty}^{\infty} f[l] 1[n-l] = \sum_{l=-\infty}^{n} f[l].
\]
As 1[n] ↔ z/(z-1), we get from the convolution theorem that
\[
\sum_{l=-\infty}^{n} f[l] \longleftrightarrow \frac{z}{z-1} F(z).
\]
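For causal signals the convolution of Theorem 8.4.2 is exactly what Matlab's conv computes. A minimal numeric check of Example 8.4.3:

n = 0:30;
f = 2.^(-n); g = 3.^(-n);          % samples of f[n] and g[n] for n >= 0
h = conv(f, g);                    % (f*g)[n] for n = 0,...,60 (causal case)
hClosed = 3*2.^(-n) - 2*3.^(-n);   % closed form found above
disp(max(abs(h(1:31) - hClosed))); % should be of the order of eps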


8.5 The one-sided z-transform

Let f[n] be some discrete-time signal. The z-transform of the causal part of f[n],
\[
F_+(z) = \sum_{n=0}^{\infty} f[n] z^{-n},
\]
is known as the one-sided z-transform or the unilateral z-transform of f[n], and is sometimes written as \mathcal{Z}_+\{f[n]\}. Note that the one-sided z-transform equals the z-transform of f[n] 1[n]. The region of convergence of a one-sided transform hence equals the region of convergence of the z-transform of a causal signal, of which we know that it is the complement of a disc with a certain radius and center at z = 0.

It is clear that the inverse of the one-sided z-transform can only recover positive-time function values, f[n], n ≥ 0. In case F_+(z) is rational we can again use partial fraction expansion in combination with Table 8.1 to find f[n], (n ≥ 0).

In the following we review some properties and rules of calculus for the one-sided z-transform. These rules are the same as for the two-sided z-transform, except for the time-shift rule.
Properties of the one-sided z-transform

1. Linearity: a f[n] + b g[n] ↔ a F_+(z) + b G_+(z).
This holds for every two complex numbers a and b.

2. Conjugation: f^*[n] ↔ (F_+(z^*))^*.

3. Time-shift (shift in the n-domain):
\[
f[n-k] \longleftrightarrow z^{-k} F_+(z) + f[-k] + z^{-1} f[-k+1] + \cdots + z^{-k+1} f[-1], \qquad (k > 0).
\]
This can be seen as follows:
\[
\sum_{n=0}^{\infty} f[n-k] z^{-n} = \{m = n-k\} = \sum_{m=-k}^{\infty} f[m] z^{-m-k}
= z^{-k} \sum_{m=0}^{\infty} f[m] z^{-m} + f[-k] + z^{-1} f[-k+1] + \cdots + z^{-k+1} f[-1]
\]
\[
= z^{-k} F_+(z) + f[-k] + z^{-1} f[-k+1] + \cdots + z^{-k+1} f[-1].
\]

4. Scaling in the z-domain: a^n f[n] ↔ F_+(z/a), (a ≠ 0).

5. Differentiation in the z-domain: n f[n] ↔ -z F_+'(z).
This follows from the fact that
\[
\sum_{n=0}^{\infty} n f[n] z^{-n} = \sum_{n=0}^{\infty} f[n] \Big( -z \tfrac{d}{dz} z^{-n} \Big) = -z \sum_{n=0}^{\infty} f[n] \tfrac{d}{dz} z^{-n} = -z \tfrac{d}{dz} F_+(z).
\]
In the last equality we interchanged differentiation and summation, which is allowed on the region of convergence.

6. Convolution: \sum_{l=0}^{n} f[l] g[n-l] ↔ F_+(z) G_+(z).
Note that the convolution product used here coincides with the convolution product of Definition 8.4.1 for the case of causal signals f[n] 1[n] and g[n] 1[n].
8.5.1. Example. Suppose f[n] satisfies
\[
f[n] = 2 f[n-1] - f[n-2] \qquad (n \geq 0), \tag{8.9}
\]
with initial conditions
\[
f[-1] = 0, \qquad f[-2] = -1.
\]
Using the one-sided z-transform it is possible to find the signal f[n] that satisfies this difference equation for n ≥ 0. The z-transform of the difference equation (8.9) yields
\[
\mathcal{Z}_+\{f[n]\} = 2 \mathcal{Z}_+\{f[n-1]\} - \mathcal{Z}_+\{f[n-2]\}. \tag{8.10}
\]
From the time-shift rule we know that
\[
\mathcal{Z}_+\{f[n-1]\} = z^{-1} F_+(z) + f[-1], \qquad
\mathcal{Z}_+\{f[n-2]\} = z^{-2} F_+(z) + f[-2] + z^{-1} f[-1],
\]
and since the initial conditions are given, f[-1] = 0, f[-2] = -1, the equation (8.10) becomes
\[
F_+(z) = 2 z^{-1} F_+(z) - \big( z^{-2} F_+(z) - 1 \big).
\]
The solution F_+(z) of this algebraic equation is
\[
F_+(z) = \frac{1}{1 - 2z^{-1} + z^{-2}} = \frac{z^2}{z^2 - 2z + 1} = \frac{z^2}{(z-1)^2}.
\]
After partial fraction expansion of F_+(z)/z we obtain
\[
F_+(z) = \frac{z}{z-1} + \frac{z}{(z-1)^2}.
\]
Finally from Table 8.1 we read that f[n] equals
\[
f[n] = 1 + n, \qquad (n \geq 0).
\]
Repeated application of the recurrence relation (8.9), beginning at n = 0, also allows us to generate one at a time the function values f[0], f[1], f[2], ...:
\[
\begin{aligned}
n = 0 &: \quad f[0] = 2 f[-1] - f[-2] = 2 \cdot 0 + 1 = 1, \\
n = 1 &: \quad f[1] = 2 f[0] - f[-1] = 2 \cdot 1 - 0 = 2, \\
n = 2 &: \quad f[2] = 2 f[1] - f[0] = 2 \cdot 2 - 1 = 3, \\
&\;\;\vdots
\end{aligned}
\]
We have more to say about difference equations like (8.9) when we consider discrete-time systems.
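A short Matlab loop confirms the closed form f[n] = 1 + n found above:

N = 10;
f = zeros(1, N+2);          % f(1) holds f[-2], f(2) holds f[-1], f(k+3) holds f[k]
f(1) = -1; f(2) = 0;        % initial conditions f[-2] = -1, f[-1] = 0
for k = 3:N+2
    f(k) = 2*f(k-1) - f(k-2);
end
disp(f(3:end));             % prints 1 2 3 ... 11, in agreement with f[n] = 1 + n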


8.6 Discrete-time systems

The z-transform is an important tool in the analysis of discrete-time systems for much the same reasons as why the Fourier and Laplace transforms are important for continuous-time systems. In this course we confine attention to discrete-time systems that are LTI systems, i.e., that are both linear and time-invariant. Discrete-time LTI systems are also called discrete-time filters. The response y[n] of a discrete-time system T to an input u[n] will be denoted by
\[
y[n] = T\{u[n]\},
\]
and is sometimes conveniently expressed as u[n] → y[n].

The following definitions of linearity, time-invariance and causality of discrete-time systems are the discrete-time counterparts of the continuous-time notions of linearity, time-invariance and causality as considered in Chapter 5.


Property            f[n]                          F_+(z) = \sum_{n=0}^{\infty} f[n] z^{-n}        Region
Linearity           a_1 f_1[n] + a_2 f_2[n]       a_1 F_{1+}(z) + a_2 F_{2+}(z)                   |z| > max(ρ_1, ρ_2)
Conjugation         f^*[n]                        (F_+(z^*))^*                                    |z| > ρ
Time-shift          f[n-1]                        z^{-1} F_+(z) + f[-1]                           |z| > ρ
                    f[n-2]                        z^{-2} F_+(z) + f[-2] + z^{-1} f[-1]            |z| > ρ
                    f[n+1]                        z F_+(z) - z f[0]                               |z| > ρ
                    f[n+2]                        z^2 F_+(z) - z^2 f[0] - z f[1]                  |z| > ρ
Scaling (z-dom.)    a^n f[n]                      F_+(z/a)                                        a ≠ 0, |z| > |a| ρ
Differentiation     n f[n]                        -z F_+'(z)                                      |z| > ρ
Convolution         \sum_{l=0}^{n} f[l] g[n-l]    F_+(z) G_+(z)                                   |z| > max(ρ_f, ρ_g)

Table 8.2: Some one-sided z-transform properties. (Here ρ denotes the convergence radius of F_+(z).)


8.6.1. Definition. A discrete-time system T is linear if for every pair of input signals u_1[n] and u_2[n] and every complex a_1 and a_2 there holds that
\[
T\{a_1 u_1[n] + a_2 u_2[n]\} = a_1 T\{u_1[n]\} + a_2 T\{u_2[n]\}.
\]

8.6.2. Definition. A discrete-time system T is time-invariant if for each input u[n] and corresponding output y[n] = T\{u[n]\} there holds that for any k ∈ Z,
\[
u[n-k] \to y[n-k].
\]

8.6.3. Definition. A discrete-time system T is causal or non-anticipating if for every k ∈ Z and every two input signals u_1[n] and u_2[n] and corresponding output signals,
\[
y_1[n] = T\{u_1[n]\}, \qquad y_2[n] = T\{u_2[n]\},
\]
there holds that
\[
\text{if } u_1[n] = u_2[n] \text{ for } n < k, \text{ then } y_1[n] = y_2[n] \text{ for } n < k.
\]




Discrete-time systems that are both linear and time-invariant are called discrete-time LTI systems. As with continuous-time systems, a discrete-time LTI system is causal if and only if the response to every causal input is causal.

In the description of discrete-time LTI systems in the time domain an important role is played by the response to the unit pulse δ[n]. The response to the unit pulse is called the (discrete-time) impulse response. Let h[n] be the impulse response. Then, by time-invariance, the response to a shifted unit pulse δ[n-k] is h[n-k]. Now, any input signal u[n] can be seen as a superposition of shifted pulses δ[n-k] (see Example 8.2.1a):
\[
u[n] = \sum_{k=-\infty}^{\infty} u[k] \delta[n-k].
\]
Since the system is LTI there holds that
\[
y[n] = T\{u[n]\} = \sum_{k=-\infty}^{\infty} u[k] T\{\delta[n-k]\} = \sum_{k=-\infty}^{\infty} u[k] h[n-k].
\]
We see that the response y[n] to an input u[n] is a discrete-time convolution
\[
y[n] = (h * u)[n]. \tag{8.11}
\]


Apparently every LTI system is described by a convolution (8.11) and as such is completely characterized by its impulse response. This is a beautiful result. The description of LTI systems is even simpler in the z-domain. Applying the z-transform to (8.11) namely gives
\[
Y(z) = H(z) U(z). \tag{8.12}
\]
Here u[n] ↔ U(z), y[n] ↔ Y(z) and h[n] ↔ H(z). The z-transform H(z) of the impulse response is known as the transfer function.

8.6.4. Example (Moving averaging). Suppose T is the system whose output y[n] follows from the input u[n] via
\[
y[n] = \frac{u[n] + 2u[n-1] + u[n-2]}{4}.
\]
The system calculates for each n a weighted average of the input at times n, n-1 and n-2. Verify yourself that the system is linear and time-invariant and that the system is causal. The impulse response is the output y[n] for the case that the input u[n] is the unit pulse u[n] = δ[n]. The impulse response therefore is
\[
h[n] = \frac{\delta[n] + 2\delta[n-1] + \delta[n-2]}{4},
\]
and the transfer function H(z) follows as
\[
H(z) = (1 + 2z^{-1} + z^{-2})/4.
\]


The discrete-time system considered in the above example has the property that only finitely many function values h[n], (n ∈ Z), are nonzero. Systems like these are called finite impulse response filters (FIR filters). Examples of infinite impulse response filters (IIR filters) are considered in the following subsection. An IIR filter is a filter whose impulse response h[n] has infinitely many nonzero function values h[n] ≠ 0.
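In Matlab, LTI filters of this kind are applied with filter. A minimal sketch of the moving-average system of Example 8.6.4:

b = [1 2 1]/4; a = 1;                % H(z) = (1 + 2z^{-1} + z^{-2})/4
u = [zeros(1,5) ones(1,20)];         % a step input, delayed by 5 samples
y = filter(b, a, u);                 % y[n] = (u[n] + 2u[n-1] + u[n-2])/4
stem(0:24, y);                       % the smoothed step response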

8.6.1 Initially at rest systems described by difference equations

The most important class of discrete-time systems are the ones in which the input and output are interrelated via a high-order difference equation with constant coefficients,
\[
y[n] + q_1 y[n-1] + \cdots + q_{N-1} y[n-N+1] + q_N y[n-N]
= p_0 u[n] + p_1 u[n-1] + \cdots + p_{N-1} u[n-N+1] + p_N u[n-N]. \tag{8.13}
\]
Now, it may be clear that this way the output y[n] is not uniquely determined by the input u[n]. For example, if u[n] is identically zero, then any constant y[n] = c is a solution of the difference equation
\[
y[n] - y[n-1] = u[n].
\]
As in the continuous-time case it may however be argued that the most natural vantage point is to assume that the output is initially at rest when given an initially at rest input. To see more clearly how this works we reorder the difference equation into an explicit recurrence relation
\[
y[n] = p_0 u[n] + p_1 u[n-1] + \cdots + p_{N-1} u[n-N+1] + p_N u[n-N]
- q_1 y[n-1] - \cdots - q_{N-1} y[n-N+1] - q_N y[n-N]. \tag{8.14}
\]
For any given time index n the value y[n] on the left-hand side solely depends on previously computed input and output values u[k], (k ≤ n), and y[k], (k < n). One at a time the values y[n] may now be computed.

8.6.5. Example. Let u[n] = δ[n]. This input is at rest for all n < 0. We assume that the output is initially at rest as well. Therefore the right-hand side of (8.14) is zero if n ≪ 0. Now step-by-step we increase the value of n and with each increase we evaluate the value of y[n] through (8.14):
\[
\begin{aligned}
(n \ll 0) &: \quad y[n] = 0, \\
&\;\;\vdots \\
n = -1 &: \quad y[-1] = p_0 \delta[-1] + \cdots + p_N \delta[-1-N] - q_1 y[-2] - \cdots - q_N y[-1-N] = 0, \\
n = 0 &: \quad y[0] = p_0 \delta[0] + \cdots + p_N \delta[-N] - q_1 y[-1] - \cdots - q_N y[-N] = p_0, \\
n = 1 &: \quad y[1] = p_0 \delta[1] + p_1 \delta[0] + \cdots + p_N \delta[1-N] - q_1 y[0] - \cdots - q_N y[1-N] = p_1 - q_1 p_0, \\
n = 2 &: \quad \text{etcetera.}
\end{aligned}
\]


An initially at rest system described by a difference equation (8.13) refers to a system where the output y[n] is computed as indicated by the above scheme. You may want to verify that this way the output is uniquely determined by the input, and that in addition the system is linear and time-invariant. We may therefore conclude that initially at rest systems described by a difference equation (8.13) are LTI systems. In particular they have an impulse response that fully characterizes them. Recall that the impulse response is the output y[n] if we take as input the unit pulse, u[n] = δ[n]. The output computed in Example 8.6.5 therefore is nothing but the system's impulse response. It is generally easier to find the impulse response via the z-transform. Note that the impulse response h[n] as found in Example 8.6.5 turned out to be a causal signal. Like in the continuous-time case this means that initially at rest systems described by (8.13) are causal systems.

8.6.6. Lemma. The initially at rest system described by (8.13) is a causal LTI system and it has a proper rational transfer function
\[
H(z) = \frac{p_0 + p_1 z^{-1} + \cdots + p_{N-1} z^{-N+1} + p_N z^{-N}}{1 + q_1 z^{-1} + \cdots + q_{N-1} z^{-N+1} + q_N z^{-N}}. \tag{8.15}
\]

Proof. The impulse response is the output when we take as input u[n] = δ[n]. That is,
\[
h[n] + q_1 h[n-1] + \cdots + q_{N-1} h[n-N+1] + q_N h[n-N]
= p_0 \delta[n] + p_1 \delta[n-1] + \cdots + p_{N-1} \delta[n-N+1] + p_N \delta[n-N].
\]
Next take z-transforms of the left and right-hand sides. The time-shift rule states that \mathcal{Z}\{f[n-k]\} = z^{-k}\mathcal{Z}\{f[n]\}, and \mathcal{Z}\{\delta[n-k]\} = z^{-k}. So we obtain
\[
(1 + q_1 z^{-1} + \cdots + q_N z^{-N}) H(z) = p_0 + p_1 z^{-1} + \cdots + p_N z^{-N},
\]
and the transfer function follows. By the Initial Value Theorem 8.3.5, the impulse response h[n] is at rest up to n = 0 (and h[0] = p_0). The impulse response hence is causal, implying that the system is causal.
8.6.7. Example. Suppose T is an initially at rest system in which the input and output are related via
\[
y[n] - y[n-1] + \tfrac{1}{4} y[n-2] = u[n] - u[n-1]. \tag{8.16}
\]
We shall determine the impulse response with the help of the z-transform. The transfer function is
\[
H(z) = \frac{1 - z^{-1}}{1 - z^{-1} + \frac{1}{4} z^{-2}} = \frac{z^2 - z}{z^2 - z + \frac{1}{4}} = \frac{z^2 - z}{(z - \frac{1}{2})^2}.
\]
The partial fraction expansion of H(z)/z is
\[
\frac{H(z)}{z} = \frac{z-1}{(z-\frac{1}{2})^2} = \frac{1}{z - \frac{1}{2}} - \frac{\frac{1}{2}}{(z - \frac{1}{2})^2},
\]
so that
\[
H(z) = \frac{z}{z - \frac{1}{2}} - \frac{1}{2} \frac{z}{(z - \frac{1}{2})^2}.
\]
The impulse response therefore is
\[
h[n] = (1 - n) \big( \tfrac{1}{2} \big)^n 1[n].
\]
The impulse response is causal and has infinitely many nonzero entries. The system is an IIR system.
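Again impz (Signal Processing Toolbox) provides a quick numeric confirmation; a minimal sketch:

b = [1 -1]; a = [1 -1 1/4];    % H(z) = (1 - z^{-1})/(1 - z^{-1} + z^{-2}/4)
n = (0:10)';
h = impz(b, a, 11);
hClosed = (1 - n).*(1/2).^n;   % closed form derived above
disp(max(abs(h - hClosed)));   % should be of the order of eps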

You may wonder why we confine the form of difference equations to that of (8.13), and why we do not consider difference equations like y[n] = u[n+1]. The reason is that (8.13) encompasses all causal systems described by a finite difference equation with constant coefficients. All other non-equivalent such difference equations describe noncausal systems. This can be seen from the transfer function. The transfer function (8.15) is proper, which is equivalent to causality of the system. The transfer function of, for example, y[n] = u[n+1] is H(z) = z, which is nonproper, hence, that system is noncausal.


8.6.2 BIBO-stable discrete-time systems

A discrete-time system T is BIBO-stable if the response y[n] to any bounded input u[n] is bounded. Completely analogous to the continuous-time case there holds the following.

8.6.8. Lemma. An LTI system T with impulse response h[n] is BIBO-stable if and only if \sum_{n \in \mathbb{Z}} |h[n]| < \infty.

Proof. Suppose h[n] is absolutely summable. Let u[n] be a bounded input, i.e., there is an M > 0 such that |u[n]| < M for all n. Then y[n] is bounded because
\[
|y[n]| = \Big| \sum_{k=-\infty}^{\infty} h[k] u[n-k] \Big| \leq M \sum_{k=-\infty}^{\infty} |h[k]| < \infty,
\]
and the bound is independent of n.

That absolute summability of h[n] is also necessary for BIBO-stability can be seen as follows. Assume the system is BIBO-stable and consider the bounded input u[n] = sgn h[-n]. Then, by BIBO-stability, we have that |y[0]| < ∞, but
\[
y[0] = (h * u)[0] = \sum_{k=-\infty}^{\infty} h[k] u[0-k] = \sum_{k=-\infty}^{\infty} |h[k]|,
\]
so h[n] is necessarily absolutely summable.


For initially at rest systems described by the difference equation (8.13) it is easy to characterize BIBO-stability in terms of the transfer function.

8.6.9. Lemma. An initially at rest system described by a difference equation (8.13) is BIBO-stable if and only if all poles z_k of the transfer function have absolute value less than one, |z_k| < 1.

Proof (sketch). With each pole z_k of the transfer function there corresponds a term n^m (z_k)^n 1[n] in the impulse response. Now n^m (z_k)^n 1[n] is absolutely summable if and only if |z_k| < 1. Therefore, if all poles z_k have absolute value less than one, then the impulse response is a sum of absolutely summable functions, hence is itself absolutely summable, implying BIBO-stability. Conversely, if one or more poles z_k have absolute value 1 or more, then the term n^m (z_k)^n 1[n] appearing in the impulse response is not absolutely summable. It is possible to show that then the impulse response cannot be absolutely summable, implying instability.
8.6.10. Example. Consider the initially at rest system described by
\[
y[n] - \alpha y[n-1] = u[n-1].
\]
The transfer function of this system is H(z) = \frac{z^{-1}}{1 - \alpha z^{-1}} = \frac{1}{z - \alpha}, so its impulse response is h[n] = \alpha^{n-1} 1[n-1]. Now
\[
\sum_{n=-\infty}^{\infty} |h[n]| = \sum_{n=1}^{\infty} |\alpha|^{n-1} = \{\text{assuming } |\alpha| < 1\} = \frac{1}{1 - |\alpha|}.
\]
From this expression we conclude that the system is BIBO-stable if and only if |α| < 1.

The same conclusion can be drawn, without doing any computation, by inspecting the poles of H(z). The transfer function H(z) = 1/(z - α) has one pole, z_1 = α, and by the above theorem the system is therefore BIBO-stable if and only if this pole z_1 = α has absolute value less than one.

8.6.11. Example. A typical example of an unstable system is that of multiplying rabbits. Let y[n] denote the number of pairs of rabbits that a farmer has in month n. Now suppose that rabbits have to be two months old before they are mature and that from that time onwards each pair of rabbits produces one new pair every month! This we can capture by the difference equation
\[
y[n] = y[n-1] + y[n-2] + u[n]. \tag{8.17}
\]
The difference equation expresses that in month n the farmer has as many pairs of rabbits as she had the previous month, y[n-1], plus y[n-2] newborn pairs of rabbits due to the number of pairs of rabbits y[n-2] that are mature in month n. The input u[n] is the number of pairs that the farmer introduces to the group.

If the farmer had been a mathematician, she would have calculated the poles of the system first. To calculate the poles of the system, we rearrange Equation (8.17) as
\[
y[n] - y[n-1] - y[n-2] = u[n].
\]
The transfer function now follows as
\[
H(z) = \frac{1}{1 - z^{-1} - z^{-2}} = \frac{z^2}{z^2 - z - 1}.
\]
The poles of the transfer function are z_1 = \tfrac{1}{2} + \tfrac{1}{2}\sqrt{5} \approx 1.618 and z_2 = \tfrac{1}{2} - \tfrac{1}{2}\sqrt{5} \approx -0.618. As we see, the first pole in absolute value exceeds one. This system is therefore not BIBO-stable. The number of rabbits will explode.
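A quick Matlab simulation shows the explosion; a minimal sketch with one newborn pair introduced in month 0 (cf. Problem 8.10):

u = [1 zeros(1,11)];           % u[n] = delta[n]: one pair in month 0
y = filter(1, [1 -1 -1], u);   % y[n] - y[n-1] - y[n-2] = u[n]
disp(y);                       % 1 1 2 3 5 8 13 21 ... the Fibonacci numbers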


8.7 Problems

8.1 Determine the one-sided z-transform of f[n] and its region of convergence for
(a) f[n] = δ[n],
(b) f[n] = δ[n-5],
(c) f[n] = 1[n],
(d) f[n] = n b^n,
(e) f[n] = \frac{1}{n!}.

8.2 Determine the basis solutions of
(a) y[n] - \tfrac{1}{4} y[n-2] = u[n].

Chapter 8. The z -transform and Discrete-time Systems

184

5
(b) y[n] y[n 1] + 16
y[n 2] = u[n].

(c) y[n] 14 y[n 2] = u[n] + u[n 1].


8.3 Solve the initial value problems for y[n], (n 0) of
(a) y[n] = y[n 1] + y[n 2], with y[0] = 1 and y[1] = 1.
(b) y[n] 14 y[n 2] = u[n] + u[n 1] with u[n] = 1[n] and y[2] = y[1] = 1.
8.4 Determine the transfer function and region of convergence and impulse response for the
initially at rest systems described by
(a) y[n] = u[n N], (N > 0)
5
y[n 2] + u[n],
(b) y[n] = y[n 1] 16

(c) y[n] = 14 y[n 2] + u[n] + u[n 1].


8.5 Show that the one-sided z-transform satisfies
Z+

f [n + k] z k F+ (z) z k f [0] z k1 f [1] z f [k 1],

(if k > 0).

8.6 Are the following initially at rest systems BIBO-stable?


(a) y[n] 14 y[n 2] = u[n],
(b) y[n] 14 y[n + 2] = u[n],
5
y[n 2] = u[n 2],
(c) y[n] y[n 1] + 16

(d) y[n] + 52 y[n 1] + y[n 2] = u[n 1] + 2u[n 2],


(e) y[n] y[n 1] = u[n] u[n 1] for some constant .
8.7 We are given the initially at rest system
y[n] +

3
y[n 1] y[n 2] = u[n 1] + u[n 2].
2

(a) Determine the basis solutions.


(b) Determine the transfer function.
(c) Determine the impulse response.
(d) Is the system BIBO-stable?
(e) Now suppose the system is not initially at rest, but that y[2] = 0 and y[1] =
1. What is the response y[n] for n 0 to the input u[n] = [n].
8.8 We are given an initially at rest discrete-time system described by
3
3
y[n] + y[n 1] y[n 2] = u[n 1] + u[n 2].
4
2

8.7. Problems

(a)
(b)
(c)
(d)
(e)

185

Determine the basis solutions.


Determine the impulse response.
Find the transfer function and its region of existence.
Is the system BIBO-stable?
Now suppose that the system is not at rest, but that y[2] = 0 and y[1] = 2.
What is the response y[n] (n 0) to the input u[n] = 1 n Z.

8.9 Consider the discrete-time system described by


y[n] y[n 2] = u[n],

n Z.

(8.18)

(a) Determine the basis solutions.


(b) Determine the impulse response of the initially at rest system
(c) Is the initially at rest system BIBO-stable?
8.10 Consider Example 8.6.11. How many rabbits will the farmer have in month n if she starts
out with one newborn pair of rabbits in month 0 and further leaves the rabbits at what
they do best.
More involved problems:
8.11 (Complicated calculation.) Solve the initial value problems for y[n], (n 0), of
(a) The exponential smoother:
y[n] = ay[n 1] + (1 a)u[n] with u[n] = cos(n/2)
and y[1] = 0 and a = 3/3
(b) y[n] = 56 y[n 1] 16 y[n 2] + u[n] u[n 1], with u[n] = 1[n], and y[n] = 0
for all n < 0.
(c) y[n] = 14 y[n 2] + u[n] + u[n 1], with u[n] = 1 for all n Z and y[2] =
y[1] = 1.
(d) y[n] = 14 y[n 2] + u[n] + u[n 1], with u[n] = (1/2)n 1[n] and y[0] = 1 and
y[1] = 0.
8.12 The initially at rest discrete-time system described by (8.13) is well-defined for initially
at rest inputs. Show that for all such inputs the initially at rest system is
(a) linear,
(b) time-invariant,
(c) causal.
8.13 Suppose that y[n] is a solution of a homogeneous equation
y[n] + q1 y[n 1] + + qN1 y[n + N + 1] + qN y[n N] = 0.
Show that then (x y)[n] is a solution of this homogeneous equation for every x[n] of
finite duration (that is, x[n] = 0 for all n > M for some M). Here the convolution is as
defined for the discrete-time signals (8.8).

186

Chapter 8. The z -transform and Discrete-time Systems

8.14 Consider a discrete-time system T . Let y[n] be the step response y[n] = T {1[n]}. Let
H (z) be the systems transfer function.
z
(a) Suppose that T is the initially at rest system (8.13). Show that Y (z) = H (z)z1
and determine the region of convergence of Y (z).

(b) Now suppose that all we know of T is that its impulse response is initially at rest
and that the region of convergence of H (z) is |z| > . Prove that Y (z) exists for
all |z| > max(1, ).

A
Partial Fraction Expansion

Consider the identity


1
1
1
1
= 2 +
+ 2
(s + 1)(s + 2)(s + 3)
s+1 s+2 s+3

It is easy to verify that the above identity is correct: multiply both left and right-hand side by
(s + 1)(s + 2)(s + 3), and the identity reduces to the polynomial identity
1
1
1 = (s + 2)(s + 3) (s + 1)(s + 3) + (s + 1)(s + 2),
2
2
whose validity is subsequently easily verified.
In this appendix we briefly review the problem of partial fraction expansion. The partial
fraction expansion of a rational function, like
1
,
(s + 1)(s + 2)(s + 3)

(A.1)

is an expansion of the rational function as a sum of elementary terms of the form /(s )k ,
such as,
1
1
+
+ 2 .
s+1 s+2 s+3
1
2

(A.2)

More generally, the partial fraction expansion of a rational function


pm s m + pm1 s m1 + + p1 s + p0
P(s)
=
Q(s)
qn s n + qn1 s n1 + + q1 s + q0
is an expansion of the form
M

Ak,1
Ak,2
Ak,m k

P(s)
= A0 +
+
+

+
Q(s)
s sk
(s sk )2
(s sk )m k
k=1

187

(A.3)

Appendix A. Partial Fraction Expansion

188

with A0 , Ak,l , sk C and m k , M N. Two properties are immediate. Firstly, the right-hand
side of (A.3) is not defined for s = sk , so the left-hand side, P(sk )/Q(sk ) is not defined as well.
Therefore the sk are necessarily zeros of the polynomial Q(s). Secondly, the limit as |s|
of the right-hand side is finite,
lim A0 +

|s|

M

Ak,1
Ak,2
Ak,m k

+
+ +
= A0 ,
2
s sk
(s sk )
(s sk )m k
k=1

(A.4)

so the left-hand side P(s)/Q(s) is also finite in the limit |s| . This is the case if and only
if the degree of P(s) is less than or equal to the degree of Q(s). Rational functions P(s)/Q(s)
with deg P(s) deg Q(s) are called proper rational functions.
A.0.1. Theorem. Every proper rational function P(s)/Q(s) has a partial fraction expansion.
More concretely, let sk , (k = 1, 2, . . . , M) denote the zeros of Q(s). Then P(s)/Q(s) has
a partial fraction expansion of the form
M

Ak,1
P(s)
Ak,2
Ak,m k

= A0 +
+
+ +
2
Q(s)
s sk
(s sk )
(s sk )m k
k=1

where M is the number of different zeros of Q(s), mk is the multiplicity of zero sk of Q(s), and
A0 and the Ak,l are (complex) constants.


A.1 When P(s)/Q(s) is strictly proper


A rational function P(s)/Q(s) is strictly proper if the degree of P(s) is less than the degree of
Q(s),
pn1 s n1 + + p1 s + p0
P(s)
=
,
Q(s)
qn s n + qn1 s n1 + + q1 s + q0

(qn  = 0).

Strictly proper rational functions tend to zero as |s| , so in view of (A.4), we have that
A0 = 0.
In this section we demonstrate partial fraction expansion techniques for strictly proper rational
functions.
A.1.1. Example. Let P(s)/Q(s) = 1/((s + 1)(s + 2)). The zeros of Q(s) are s1 = 1
and s2 = 2 and they both have multiplicity one. Therefore by the above theorem there are
constants A = A1,1 and B = A2,1 such that
A
B
1
=
+
.
(s + 1)(s + 2)
s+1 s+2
To determine the values of A and B we may multiply left and right-hand side by (s + 1)(s + 2)
to obtain,
1 = (s + 2) A + (s + 1)B = s(A + B) + (2A + B).

A.1. When P(s)/Q(s) is strictly proper

189

Next equate equal powers:


s1 : 0 = A + B
s 0 = 1 : 1 = 2A + B.
These are two equations in two unknowns, and its solution is
A = 1,

B = 1.

1
=
We found the partial fraction expansion (s+1)(s+2)

1
s+1

1
.
s+2

A.1.2. Example. The method of the previous example is generally applicable, but if Q(s) has
many zeros, then the method becomes unwieldy. In such cases it is often easier to work with a
direct method, such as the one demonstrated on the following example. The method assumes
that all poles of F(s) have multiplicity 1.
Suppose
F(s) =

s+4
.
(s + 1)(s + 2)(s + 3)

We see that F(s) is strictly proper, so F(s) has the partial fraction expansion,
A
B
C
s+4
=
+
+
(s + 1)(s + 2)(s + 3)
s+1 s+2 s+3
1
which has a pole at
for some constants A, B and C. Note that A is the coefficient of s+1
s = 1. Now to find A we simply evaluate at this pole s = 1 the function F(s) with the term
(s + 1) removed:


3
1 + 4
s+4

= .
=
A=

2
(s + 1) (s + 2)(s + 3) s=1 (1 + 2)(1 + 3)
1
1
and s+3
may be directly determined as
Likewise the coefficients B and C of s+2


2 + 4
s+4

= 2,
=
B=

(s + 1) (s + 2) (s + 3) s=2 (2 + 1)(2 + 3)

and
C=

s+4
(s + 1)(s + 2) (s + 3)






s=3

1
3 + 4
= .
(3 + 1)(3 + 2)
2

So the partial fraction expansion is


3/2
2
1/2
s+4
=
+
+
.
(s + 1)(s + 2)(s + 3)
s+1 s+2 s+3
This method works for rational functions F(s) whose poles have multiplicity 1.

Appendix A. Partial Fraction Expansion

190

The exposition in the previous example was deliberately taken rather graphical as this makes
the method easier to perform by hand. Mathematically, we did nothing but compute
3
1
A = lim (s + 1)F(s) = , B = lim (s + 2)F(s) = 2, C = lim (s + 3)F(s) = .
s1
s2
s3
2
2
If F(s) has a multiple pole then a similar result holds.
A.1.3. Lemma. Suppose sk is a zero of a polynomial Q(s) with multiplicity mk . Then the
coefficient Ak,m k of the term of highest order
Ak,m k
(s sk )m k
in the partial fraction expansion of F(s) = P(s)/Q(s) equals
Ak,m k = lim (s sk )m k F(s).
ssk

A.1.4. Example. Suppose


s+4
(s + 1)2 (s + 2)
The multiplicity of the zero s1 = 1 is m 1 = 2. So the partial fraction expansion of F(s) is of
the form
B
C
A
+
.
(A.5)
+
F(s) =
2
s + 1 (s + 1)
s+2
F(s) =

1
Since the multiplicity of the zero s1 = 1 is 2, the coefficient B of the highest order term (s+1)
2
equals


s+4
1 + 4
2

= 3.
=
B = lim (s + 1) F(s) =

s1
(s + 1)2 (s + 2) s=1 1 + 2

Now that B is known we may bring it to the left-hand side of (A.5),


C
A
1
+
.
=
2
(s + 1)
s+1 s+2
The left-hand side may be simplified to
F(s) 3

2s 2
2
(s + 4) 3(s + 2)
1
=
=
.
=
2
2
2
(s + 1)
(s + 1) (s + 2)
(s + 1) (s + 2)
(s + 1)(s + 2)
We have reduced the problem to one of lower order. We leave it to the reader to verify that
F(s) 3

A
C
2
=
+
(s + 1)(s + 2)
s+1 s+2
for A = 2 and C = 2. The partial fraction expansion of F(s) is now determined,

2
3
2
s+4
=
+
.
+
2
2
(s + 1) (s + 2)
s + 1 (s + 1)
s+2


A.2. When P(s)/Q(s) is proper

191

A.2 When P(s)/Q(s) is proper


A rational function P(s)/Q(s) is proper if the degree of P(s) is less than or equal to the degree
of Q(s). So proper rational functions are of the form
P(s)
pn s n + pn1 s n1 + + p1 s + p0
=
,
Q(s)
qn s n + qn1 s n1 + + q1 s + q0

(qn  = 0).

(A.6)

Partial fraction expansion of a proper rational function can easily be reduced to that of a strictly
proper rational function. Indeed, if P(s)/Q(s) is proper, then we may always express it as
P(s) A0 Q(s) + A0 Q(s)
P(s) A0 Q(s)
P(s)
=
=
+ A0
Q(s)
Q(s)
Q(s)
in which A0 is chosen in such a way that
P(s) A0 Q(s)
Q(s)
is strictly proper.
A.2.1. Example. Suppose
s2
P(s)
= 2
.
Q(s)
s + 3s + 2
The degree of the numerator P(s) is the same as the degree of the denominator Q(s). Now
s 2 A0 (s 2 + 3s + 2)
P(s)
=
+ A0 .
Q(s)
s 2 + 3s + 2
For A0 = 1 we numerator polynomial s2 A0 (s 2 + 3s + 2) drops degree,
s 2 (s 2 + 3s + 2)
3s 2
P(s)
=
+1= 2
+ 1.
2
Q(s)
s + 3s + 2
s + 3s + 2
1
1
Now s 3s2
2 +3s+2 is strictly proper and it has partial fraction expansion s+1 4 s+2 (verify this
yourself). Then the partial fraction expansion of P(s)/Q(s) follows as

1
1
s2
=1+
4
.
2
s + 3s + 2
s+1
s+2


Appendix A. Partial Fraction Expansion

192

A.3 When P(s)/Q(s) is non-proper; long division


A rational function P(s)/Q(s) is non-proper if the degree of the numerator P(s) exceeds the
degree of the denominator Q(s),
pm s m + pm1 s m1 + + p1 s + p0
P(s)
,
=
Q(s)
qm1 s m1 + + q1 s + q0

( pm  = 0).

In cases like these we may express the rational function as a polynomial plus a strictly proper
part,
P(s)
= L polynomial (s) + Fstrictly proper (s).
Q(s)
On the strictly proper part we may again perform partial fraction expansion. The polynomial
part and strictly proper part may be obtained through long division.
A.3.1. Example. Suppose
s4
P(s)
= 2
.
Q(s)
s + 3s + 2
The degree of the numerator P(s) exceeds that of the denominator Q(s), i.e., P(s)/Q(s) is
non-proper. The polynomial part and strictly proper part follow from long division,
s 2 + 3s + 2 / s 4
s 4 + 3s 3 + 2s 2

\ s 2 3s + 7

3s 3 2s 2
3s 3 9s 2 6s
7s 2 + 6s
7s 2 + 21s +14
15s 14
Then, finally,
15s 14
s4
= s 2 3s + 7 + 2
2
s + 3s + 2
s + 3s + 2
15s 14
= s 2 3s + 7 +
(s + 1)(s + 2)
16
1

.
= s 2 3s + 7 +
s+1 s +2


B
Solution of ODEs and ODEs
This appendix reviews time-domain solutions of ordinary linear differential equations with
constant coefficients. In Section B.2 time-domain solutions of ordinary difference equations
are discussed.

B.1 Solution of ODEs


We saw in Section 7.1 of Chapter 7 that any differential equation of the form
y (n) (t) + qn1 y (n1) (t) + q1 y (1) (t) + q0 y(t)
=

pn u (n) (t) + pn1 u (n1) (t) + p1 u (1) (t) + p0 u(t)

(B.1)

has an input-state-output representation. In Chapter 7 we derived the general solution of the


input-state-output equations, so in a way we solved the ODE (B.1) as well. In this section we
review how the general solution of the input-state-output equations and their solution techniques
of the previous section specialize to the ordinary high-order linear differential equations (B.1).
It is customary in mathematics to associate with a differential equation (B.1) a homogeneous
equation and a characteristic equation. The homogeneous equation is the equation (B.1) in
which the input u(t) is put equal to zero,
y (n) (t) + qn1 y (n1) (t) + + q1 y (1) (t) + q0 y(t) = 0,
and the associated characteristic equation is the polynomial equation
n + qn1 n1 + + q1 + q0 = 0.

(B.2)

The fundamental theorem of algebra states that a polynomial equation of degree n has exactly
n roots, counting multiplicities. Now if 1 is a root of the characteristic equation (B.2), then
y(t) = e1 t is a solution of the homogeneous equation. Indeed, if y(t) = e1 t , then
y (n) (t) + qn1 y (n1) (t) + + q1 y (1) (t) + q0 y(t)
1 t
+ + q1 1 e1 t + q0 e1 t
= n1 e1 t + qn1 n1
1 e

= n1 + qn1 n1
+ + q1 1 + q0 e1 t = 0.
1

193

Appendix B. Solution of ODEs and ODEs

194

B.1.1. Example.
1. The characteristic equation of
y (2) (t) b2 y(t) = 0
is 2 b2 = 0. Its roots are 1 = b and 2 = b. Hence y1 (t) = ebt and y2 (t) = ebt
are solutions of y(2) (t) b2 y(t) = 0. By linearity, then,
y(t) = 1 ebt + 2 ebt ,
is a solution of y(2) (t) b2 y(t) = 0 for any 1 , 2 C.
2. The characteristic equation of
y (3) (t) 3y (2) (t) + 2y (1) (t) = 0
is 3 32 + 2 = 0. Since
3 32 + 2 = ( 1)( 2)
we see that 1 = 0, 2 = 1 and 3 = 2 are the characteristic roots. Then e0t = 1, and
et and e2t are three solutions of the homogeneous equation, and then by linearity every
y(t) of the form
y(t) = 1 + 2 et + 3 e2t ,

1 , 2 , 3 C,

is a solution of the homogeneous equation.


3. The characteristic equation of
y (3) (t) = 0
is 3 = 0. This has one root 1 = 0 with multiplicity 3. Now y1 (t) = e1 t = e0t = 1
is obviously a solution of the homogeneous equation, but so are y(t) = t and y(t) = t2 .
Apparently not every solution is a linear combination of exponential functions.


In the last of the three examples we saw that not every solution of the homogeneous equation
is a sum of exponential functions. This has something to do with the fact that the multiplicity
of the characteristic root 1 in that example is more than 1. The general result is this:
B.1.2. Theorem. To each characteristic root i of multiplicity mi , the m i functions
yi,k (t) = t k ei t ,

(k = 0, 1, . . . , m i 1)

are solutions of the homogeneous equation. These solutions yi,k (t) are called the basis solutions.
Furthermore, y(t) is a solution of the homogeneous equation if and only if it is a linear
combination of the basis solutions,

i,k yi,k (t), i,k C.
(B.3)
y(t) =
i,k

B.1. Solution of ODEs

195

Proof. (Idea only). That each basis solution satisfies the homogeneous equation is not difficult
to verify. There are exactly n basis solutions, and they can be seen to be linearly independent.
Therefore the general solution (B.3) form an n-dimensional subspace. The homogeneous solutions are the solutions for u(t) = 0, so from Theorem 7.2.3, Equation (7.11) we know that
the general solution of the homogeneous equation is CeAt x(0), x(0) Rn , and this forms an
n-dimensional subspace as well. The solution sets {y(t) : y(t) = CeAt x(0), x(0) Rn } and
(B.3) must therefore be the same.
B.1.3. Example.
1. The characteristic equation of
y (n) (t) = 0
is n = 0. It has one root 1 = 0 with multiplicity n. The basis solutions hence are
y1,1 (t) = 1, y1,2 (t) = t, y1,3 (t) = t 2 ,

... ,

y1,n (t) = t n1 .

The general solution of y(n) (t) = 0 is therefore y(t) = 1,1 + 1,2 t + + 1,n t n1 , that
is, the solutions are the polynomials in t of degree n 1 or less.
2. The characteristic equation of
y (3) (t) 4y (2) (t) + 5y (1) (t) 2y(t) = 0

(B.4)

is 3 42 + 5 2 = 0. Since
3 42 + 5 2 = ( 1)2 ( 2)
we obtain as basis solutions
y1,1 (t) = et ,

y1,2 (t) = tet ,

y2,1 (t) = e2t .

The general solution of (B.4) then is


y(t) = 1,1 et + 1,2 tet + 2,1 e2t ,

1,1 , 1,2 , 2,1 C.




B.1.1 Particular solutions


Up to now we considered only homogeneous equations, which are ODEs (B.1) with zero input.
In this subsection we consider ODE (B.1) for the case that the input u(t) is not the zero function.
Suppose that for a given input u(t) we found one solution ypart (t) of the ODE (B.1). How
does the general solution of the ODE look like?

Appendix B. Solution of ODEs and ODEs

196

B.1.4. Lemma. Suppose u(t) is given and let ypart (t) be one solution of the ODE (B.1). Then
the general solution y(t) of (B.1) is
y(t) = ypart (t) + yhom (t)
where yhom (t) is any solution of the associated homogeneous equation.
Proof. Prove it yourself!
In our quest for the general solution it therefore suffices to find one solution of the ODE. All
others then follow by adding the general solution of the homogeneous equation. One solution
ypart (t) of the ODE is commonly called a particular solution. Generally it is difficult to find a
particular solution in which case one has to settle for the integral expression (7.11) or solve it
numerically. For certain input signals u(t) it is however possible to make an educated guess.
The following three examples demonstrate three such cases.
B.1.5. Example (Constant inputs). If the input is constant
u(t) = c,
then we may contemplate a constant particular solution ypart (t). As all derivatives of a constant
signal are zero, the ODE (B.1) for constant input and output reduce to
q0 y(t) = p0 u(t).
If q0  = 0 then apparently
ypart (t) =

p0
c
q0

is a constant particular solution.


Often we are only concerned with solutions for positive time. Consider the ODE
y (2) (t) 4y(t) = u(t)
and suppose that u(t) = 1(t). Since we are interested in the signals for positive time, we may
consider the input u(t) = 1(t) to be a constant 1. A particular solution follows as ypart (t) =
1/(4) = 1/4. Hence the general solution y(t) is
1
y(t) = + 1,1 e2t + 2,1 e2t ,
4

1,1 , 2,1 C.


B.1.6. Example (Exponential inputs). The constant input of the previous example is a degenerate case of an exponential input u(t) = es0 t . For exponential inputs u(t) = es0 t we
contemplate a particular solution of the form
ypart (t) = Aes0 t ,

for some A C.

B.1. Solution of ODEs

197

The left-hand side of the ODE (B.1) then becomes


(n)
(n1)
(t) + qn1 ypart
(t) + + q0 ypart (t) = A s0n + qn1 s0n1 + + q0 )es0 t
ypart

while the right-hand side of the ODE is


pn u (n) (t) + pn1 u (n1) (t) + + p0 u(t) = pn s0n + pn1 s0n1 + + p0 es0 t .

Equating the two sides yields A,


A=

pn s0n + pn1 s0n1 + + p0


s0n + qn1 s0n1 + + q0

For A to exist we shall need to assume that s0 is not a characteristic root, otherwise the above
denominator is zero. For s0 = 0 we recover that case of constant inputs.
Consider the ODE
y (2) (t) 4y(t) = u(t)
with input u(t) = es0 t . Then as long as s0 is not a characteristic root, we obtain as particular
solution
1
e s0 t .
ypart (t) = 2
s0 4
Like in the previous example, the general solution then is
y(t) =

s02

1
es0 t + 1,1 e2t + 2,1 e2t ,
4

1,1 , 2,1 C.


B.1.7. Example (Polynomial inputs). If the input u(t) is a polynomial in t, then the righthand side of the ODE (B.1), pn u (n) (t) + pn1 u (n1) (t) + + p0 u(t) is a polynomial in t as
well. The ODE is then of the form
(n)

y (t) + qn1 y

(n1)

(t) + + q0 y(t) =

M


k t k ,

(k C).

k=0

The claim is that there is a particular solution which is polynomial in t. The method is best
demonstrated on an example. Consider the ODE
y (2) (t) + y (1) (t) + 2y(t) = u (1) (t) + u(t).

(B.5)

Suppose that u(t) = t 2 , so the ODE becomes y(2) (t) + y (1) (t) + 2y(t) = 2t + t 2 . Differentiate
both sides as often as needed up to the point where the right-hand side becomes constant.
Original equation: y(2) (t) + y (1) (t) + 2y(t) = 2t + t 2
(3)

(2)

(1)

differentiate once: y (t) + y (t) + 2y (t) = 2 + 2t


differentiate once again: y(4) (t) + y (3) (t) + 2y (2) (t) = 2.

(B.6)
(B.7)
(B.8)

Appendix B. Solution of ODEs and ODEs

198

The last equation (B.8) has a solution y(2) (t) = 1. Now we use that in the preceding equation
(B.7) to solve for y(1) (t). Since y (3) (t) = 0 we obtain from (B.7) that
y (1) (t) =

1
1
(2 + 2t) y (3) (t) y (2) (t) = t + .
2
2

Now that y (1) (t) is determined we return to Equation (B.6) and solve that for y(t),
y(t) =

1
1
3
1
1
1
(2t + t 2 ) y (2) (t) y (1) (t) = (2t + t 2 1 (t + )) = t 2 + t .
2
2
2
2
2
4

This is a particular solution.


The characteristic
equation
of (B.5) is 2 + + 2 = 0 and its roots are complex, 1 =

1
1
1
1
2 + j 2 7 and 2 = 2 j 2 7. The general solution of (B.5) for u(t) = t2 hence is

1
3
1
1
1
1
1
y(t) = t 2 + t + 1,1 e( 2 + j 2 7)t + 2,1 e( 2 j 2 7)t
2
2
4

(1,1 , 2,1 C).




B.2 Solution of difference equations


The discrete-time counterpart of differential equations is difference equations. In this section
we discuss the difference equation of the form
y[n] + q1 y[n 1] + + qN1 y[n N + 1] + qN y[n N]
= p0 u[n] + p1 u[n 1] + + pN1 u[n N + 1] + pN u[n N].

(B.9)

First consider its homogeneous equation


y[n] + q1 y[n 1] + + qN1 y[n N + 1] + qN y[n N] = 0

(B.10)

and associated characteristic equation


1 + q1 1 + + q N1 1N + q N N = 0.

(B.11)

Any root of this equation gives rise to a solution of the homogeneous equation (B.10). Indeed
if 1 is a root of (B.11), then y[n] = n1 is a solution of the homogeneous equation:
y[n] + q1 y[n 1] + + qN1 y[n N + 1] + qN y[n N]
N+1
n
+ q0 N
= (1 + q1 1
1 + + q N1 1
1 )1 = 0.

B.2.1. Example.

B.2. Solution of difference equations

199

1. The characteristic equation of


y[n] b2 y[n 2] = 0
is 1 b2 2 = 0. Its roots are 1 = b and 2 = b. Hence y1 [n] = bn and y2 [n] =
(b)n are solutions of y[n] b2 y[n 2] = 0. By linearity, then,
y[n] = 1 bn + 2 (b)n ,
is a solution of y[n] b2 y[n 2] = 0 for any 1 , 2 C.
2. The characteristic equation of
y[n] 3y[n 1] + 2y[n 2] = 0
is 1 31 + 22 = 0. Since
1 2
1
( 3 + 2) = 2 ( 1)( 2)
2

we see that 1 = 1 and 2 = 2 are the characteristic roots. Then 1n = 1 and 2n are two
solutions of the homogeneous equation, and then by linearity every y[n] of the form
1 31 + 22 =

y[n] = 1 + 2 2n ,

1 , 2 , C,

is a solution of the homogeneous equation.


3. The characteristic equation of
y[n] 2y[n 1] + y[n 2] = 0
is 1 21 + 2 = 12 (2 2 + 1) = 0. This has one root 1 = 1 with multiplicity
2. Now y1 [n] = 1n = 1 is trivially a solution of the homogeneous equation, but so is
y[n] = n, (verify this yourself). Apparently not every solution is a linear combination of
powers of n.


In the last of the three examples we saw that not every solution of the homogeneous equation
is a sum of powers n of n. This has something to do with the fact that the multiplicity of the
characteristic root 1 in that example is more than 1. Without proof we state the general result.
B.2.2. Theorem. To each characteristic root i with multiplicity mi , the m i functions
yi,k [n] = n k ki ,

(k = 0, 1, . . . , m i 1)

are solutions of the homogeneous equation. These solutions yi,k [n] are called basis solutions.
Furthermore, y[n] is a solution of the homogeneous equation if and only if it is a linear
combination of the basis solutions,

i,k yi,k [n], i,k C.
(B.12)
y[n] =
i,k

Appendix B. Solution of ODEs and ODEs

200

B.2.3. Example.
1. The characteristic equation of
y[n] 2y[n 1] + y[n 2] = 0
is 1 1 + 2 = 12 (2 2 + 1) = 0. It has one root 1 = 1 with multiplicity 2. The
basis solutions hence are
y1,1 [n] = 1n = 1, y1,2 [n] = n1n = n.
The general solution of y[n] 2y[n 1] + y[n 2] = 0 is y[n] = 1,1 + 1,2 n, with
1,1 , 1,2 C.
2. The characteristic equation of
y[n] 5y[n 1] + 8y[n 2] 4y[n 3] = 0
is

1
(3
3

(B.13)

52 + 8 4) = 0. Since

3 52 + 8 4 = ( 1)( 2)2
we obtain as basis solutions
y1,1 [n] = 1n = 1,

y2,1 [n] = 2n ,

y2,2 [n] = n2n .

The general solution of (B.13) then is


y[n] = 1,1 + 2,1 2n + 2,2 n2n ,

1,1 , 2,1 , 2,2 C.




Particular solutions
Up to now we considered only homogeneous equations of (B.9), that is to say the case that u[n]
is the zero function. Suppose now that for a given (nonzero) u[n] we found one solution ypart [n]
of (B.9). Then, as with the continuous-time case, it may be shown that the general solution y[n]
of (B.9) is
y[n] = ypart [n] + yhom [n]
where yhom [n] is any solution of the associated homogeneous equation. In order to find all
solutions of (B.9) it therefore suffices to find one solution ypart [n]. All others then follow by
adding the general solution of the homogeneous equation. A solution ypart [n] of the difference
equation is called a particular solution.
In general it is difficult to find an explicit formula for the particular solution, with a few
exceptions.

B.2. Solution of difference equations

201

B.2.4. Example (exponential inputs). Suppose u[n] = bn , for some constant b  = 0. We


contemplate a particular solution of the form
ypart [n] = Abn ,

for some A C.

The left-hand side of (B.9) then becomes


ypart [n] + q1 ypart [n 1] + + qN ypart [n N]

= Abn 1 + q1 b1 + q2 b2 + + q N bN )
while the right-hand side of the difference equation (B.9) is

p0 bn + p1 bn1 + + p N bnN = bn p0 + p1 b1 + + p N bN .
Equating the two sides yields A,
A=

p0 + p1 b1 + p2 b2 + + p N bN
= H (b).
1 + q1 b1 + q2 b2 + + q N bN

For A to exist we shall need to assume that b is not a characteristic root, otherwise the above
denominator is zero.
Consider the difference equation
y[n] 4y[n 2] = u[n 2]
with input u(t) = bn . Then if b is not a characteristic root, b  = 2, we obtain as particular
solution
ypart [n] =

b2
1
bn .
bn = 2
2
1 4b
b 4


If we know some initial conditions and we are only interested in solutions for positive time
(n 0) then the one-sided z-transform comes in handy, see Chapter 8.

202

Appendix B. Solution of ODEs and ODEs

C
Selected tables

For easy reference the often needed tables that are scattered throughout the notes are collected
in this appendix.

Property

Time domain: f (t)



j k0 t
f (t) =
k= f k e

Frequency domain: fk
 T /2
f k = T1 T /2 f (t)e j k0 t dt

Linearity

f (t) + g(t)

fk + gk

Time-shift

f (t ), ( R)

e j k0 f k

Time-reversal

f (t)

fk

Conjugation

f (t)

f k

Frequency-shift

e j n0 t f (t), (n Z)

fkn

Table 2.1: Properties of the Fourier series. (Page 31.)

203

Appendix C. Selected tables

204

Property

Time domain
f (t) =

1
2

F()e j t d

Freq. domain
F() =

f (t)e j t dt

a1 f 1 (t) + a2 f 2 (t)

a1 F1 () + a2 F2 ()

Reciprocity

F(t)

2 f ()

Conjugation

f (t)

F ()

Time-scaling

f (at)

1
F( a )
|a|

Time-shift

f (t )

F()e j

Frequency-shift

f (t)e j 0 t

F( 0 )

f  (t)

j F()

Linearity

Differentiation (time)
Integration (time)
Differentiation (freq.)

t

f ( ) d

j t f (t)

Condition

F()
j

F ()

Table 3.1: Some standard Fourier transform properties. (Page 48.)

a R, a  = 0

lim f (t) = 0

F(0) = 0

205

recta (t) =

f (t)

F()

1 if |t| < 12 a,

1
0 if |t| > 2 a

1 |t|/a if |t| < a


triana (t) =

0
if |t| > a
ea|t|
t n at
e
n!

1(t)

tn! eat 1(t)


n

eat

2 sin( 12 a)

a>0

4 sin2 ( 12 a)
a2

a R, a > 0

2a
a 2 +2

Re a > 0

1
(a+ j )n+1

Re a > 0

1
(a+ j )n+1

Re a < 0

2 /4a

a R, a > 0

recta ()

a R, a > 0

sin( a2 t)
t

Condition

Table 3.2: Some standard Fourier transform pairs. (Page 49.)

"lim"n n rect1/n (t) = (t).


"lim"a
"lim" 0

sin(at)
t

= (t).

2
1 et /
2

= (t).

"lim"a e j at = 0.
"lim"a

cos(at)
t

= 0.

Table 4.3: Some generalized limits. (Page 80.)

Appendix C. Selected tables

206

Property
Sifting

Condition


(t b) f (t) dt = f (b)

f (t)(t b) = f (b)(t b)

Convolution

( f )(t) = f (t)

Scaling

1
(at b) = |a|
(t ba )
t
( ) d = 1(t)

f (t) continuous at t = b
f (t) continuous at t = b

t = 0

Table 4.1: Properties and rules of the delta function. (Page 73.)

f (t)

F()

(t)

2 ()

(t b)

e j b

e j 0 t

2 ( 0 )

cos(0 t)

(( 0 ) + ( + 0 ))

sgn(t)

2
j

1(t)

1
j

+ ()

Table 4.2: Some generalized Fourier transform pairs. (Page 76.)

207

f (t)

Property

F(s) =

a1 f 1 (t) + a2 f 2 (t)


0

f (t)est dt

Condition

a1 F1 (s) + a2 F2 (s)

Re s > max(1 , 2 )

f (at)

1
F( a )
a

a > 0, Re s >

f (t t0 ) 1(t t0 )

F(s)est0

t0 > 0, Re s >

Shift in s-domain

f (t)es0 t

F(s s0 )

Re s > Re s0 +

Differentiation (t)

f  (t)

s F(s) f (0)

Re s >

f  (t)

s 2 F(s) s f (0) f  (0)

Re s >

F(s)
s

Re s > max(0, )

F (s)

Re s >

Linearity
Time-scaling
Time-shift

t

Integration (t)

Differentiation (s)

f ( ) d

t f (t)

Table 6.1: Standard Laplace transform properties. (Page 115.)

f (t), (t > 0)

t n at
e
n!


0

f (t)est dt

Region of conv.

1
sa

Re s > Re a

1
s n+1

Re s > 0

1
(sa)n+1

Re s > Re(a)

cos(bt)

s
s 2 +b2

Re s > 0

sin(bt)

b
s 2 +b2

Re s > 0

eat cos(bt)

sa
(sa)2 +b2

Re s > Re a

eat sin(bt)

b
(sa)2 +b2

Re s > Re a

(t)

s C

eat
tn
n!

F(s) =

(n = 0, 1, . . . )
(n = 0, 1, . . . )

Table 6.2: Standard Laplace transform pairs. (Page 115.)

Appendix C. Selected tables

208

f [n]z n

f [n]

Linearity

a1 f 1 [n] + a2 f 2 [n]

a1 F1+ (z) + a2 F2+ (z)

|z| > max(1 , 2 )

f [n]

(F+ (z ))

|z| >

f [n 1]

z1 F+ (z) + f [1]

|z| >

f [n 2]

z 2 F+ (z) + f [2] + z 1 f [1]

|z| >

f [n + 1]

z F+ (z) z f [0]

|z| >

f [n + 2]

z 2 F+ (z) z 2 f [0] z f [1]

|z| >

Scaling (z-dom.)

an f [n]

F+ (z/a)

a  = 0, |z| > |a|

Differentiation

n f [n]

z F+ (s)

|z| >

F+ (z)G + (z)

|z| > max( f , g )

Conjugation
Time-shift

Convolution

n
l=0

F+ (z) =

Property

f [l]g[n l]

n=0

|z| >

Table 8.2: Some one-sided z-transform properties. (Page 177.)

f [n]

F(z)

Region of convergence

[n k]

z k

z  = 0 (or every z if k 0)

a n 1[n]
n
n
a 1[n]
k

z
za

|z| > |a|

ak z
(za)k+1

|z| > |a|

a n1 1[n 1]

1
za

|z| > |a|

Table 8.1: Some z-transform pairs. (Page 168.)

D
Exam examples

Date: 18 November 1998


Place: A101
Time: 13.30 16.30

1. Consider the differential equation


y (2) (t) 4y (1) (t) + 4y(t) = u (2) (t).

(D.1)

(a) Determine the basis solutions.


(b) Represent (D.1) by a simulation diagram.
(c) Express (D.1) in input-state-output form.
(d) Find the transfer function of the initially at rest system (D.1).
(e) Is the initially at rest system (D.1) BIBO stable?
(f) Does the initially at rest system (D.1) have a frequency response?
Now suppose that u is given as
u(t) = t 1(t)

t 0.

(D.2)

(g) Determine the output y(t) for t > 0, with initial conditions y(0) = 0, y (0) = ,
where is some constant.
2. Consider the difference equation
y[n] +

1
5
y[n 1] + y[n 2] = u[n],
6
6

n Z.

(D.3)

(a) Determine the basis solutions.


(b) What it the transfer function H (z) of the initially at rest system (D.3), and what is
its region of convergence?
209

Appendix D. Exam examples

210

(c) Determine the impulse response of the initially at rest system (D.3).
(d) Is the initially at rest system (D.3) BIBO-stable?
Suppose now that
u[n] =

1
n
2

1[n], n Z.

(e) Determine the output y(n) for n 0 with initial conditions y[2] = 3 and
y[1] = 0.
3.

(a) Let T = 2. Determine the Fourier coefficients fk of the T -periodic signal f (t) that
on [1, 1] equals
f (t) = 1 |t|
(b) Compute

(1 t < 1)

1
m=0 (2m+1)2

4. Suppose f (t) and g(t) are continuously differentiable functions with derivatives f (t)
and g (t) respectively. What is the the generalized derivative of f (t) 1(t) + g(t) 1(t).
5. In Problem 1g you determined an output y(t) for all t > 0. What is this output y(t) for
all t R.?
6.

(a) Give the definition of causality for continuous-time systems.


(b) Let be some real constant. For which values of is the linear system y(t) =
u(t) causal?

problem: 1(a) 1(b) 1(c) 1(d) 1(e) 1(f) 1(g) 2(a) 2(b) 2(c) 2(d) 2(e) 3(a) 3(b) 4 5 6(a) 6(b)
points:

The grade is proportional to the sum of points earned. Total number of points: 42.

3 3

211

Date: 27 November 1998


Place: TW-D 103a
Time: 9.00 12.00

1. Consider the differential equation


y (2) (t) + 4y (1) (t) + 4y(t) = u (2) (t).

(D.4)

(a) Determine the basis solutions.


(b) Represent (D.4) by a simulation diagram.
(c) Express (D.4) in input-state-output form.
(d) Find the transfer function of the initially at rest system (D.4).
(e) Is the initially at rest system (D.4) BIBO stable?
(f) Does the initially at rest system (D.4) have a frequency response?
Now suppose that u is given as
u(t) = 1(t).

(D.5)

(g) Determine the output y(t) for t > 0, with initial conditions y(0) = 1, y (0) =
0.
2. Consider the difference equation
y[n] y[n 2] = u[n] u[n 1],

n Z.

(D.6)

(a) Determine the basis solutions.


(b) What is the transfer function H (z) of the initially at rest system (D.6), and what is
its region of convergence?
(c) Determine the impulse response of the initially at rest system (D.6).
(d) Is the initially at rest system (D.6) BIBO-stable?
Suppose now that
u[n] = 1[n],

n Z.

(e) Determine the output y[n] for n 0 with initial conditions y[2] = 1 and
y[1] = 0.
3.

(a) Let T = 2. Determine the Fourier coefficients fk of the T -periodic signal f (t) that
on [0, 2) equals

1 if 0 t < 1
f (t) =
1 if 1 t < 2

Appendix D. Exam examples

212

(b) Formulate Parsevals theorem for periodic signals.



1
(c) Compute
m=0 (2m+1)2 using the Fourier series of 3a.
4. Let f (t) = et 1(t) + (1 t) 1(t), where is some constant.
(a) Sketch the graph of f (t) for some > 0.
(b) Determine the generalized derivative of f (t).
(c) For which value(s) of does the generalized derivative not contain a delta function
component?
5. Consider the system y(t) = tu(t), (t R).
(a) Is the system linear?
(b) Is the system time-invariant?

1(a) 1(b) 1(c) 1(d) 1(e) 1(f) 1(g) 2(a) 2(b) 2(c) 2(d) 2(e) 3(a) 3(b) 3(c) 4(a) 4(b) 4(c) 5(a) 5(b)
2

The grade is proportional to the sum of points earned. Total number of points: 42.

213

Test exam for Chapters 1-5.


1. Demonstrate that f (t) = sin(0 t) + cos(0 t + /4) is a sinusoid.
2. Suppose f (t) is a 2-periodic signal given by
f (t) = |t 1|,

0t 2

(a) Show that the Fourier coefficients fk of f (t) equal

k even, k  = 0

0
2
f k = k 2 2 k odd

1
k=0
2

(b) Determine the power of f (t)



1
4
(c) Show that
n=0 (2n+1)4 = 96
1
3. A signal f (t) is given whose Fourier transform is F() = j +b
with b a real constant.
Determine the Fourier transform G() of g(t) = f (5t 4).

4. Let f (t) = 1(t 1) trian4 (t). Determine the generalized derivative f  (t) and sketch
f (t) and f  (t).
5. Let R. Consider the system
 t++1
u( ) d.
y(t) =
t+

(a)
(b)
(c)
(d)

Show that the system is linear


Show that the system is time-invariant
Determine the impulse response of the system
For which values of is the system causal?

6. Consider the system with frequency response


H () = rect1 ()
(a) Sketch |H ()|
(b) Let u(t) be the periodic sawtooth with fundamental frequency 0 = 3/4:
T
2

j j k0 t
e
u(t) =
k0
kZ, k =0

T
T
2

Determine the response y(t) of the system to this input u(t).


problem: 1 2(a) 2(b) 2(c) 3 4 5(a) 5(b) 5(c) 5(d) 6(a) 6(b)
points:

2 3

Appendix D. Exam examples

214

Test exam for Chapters 6-8 & Appendices


1. Consider the continuous-time system
y (t) y (t) 6y(t) = u(t)
u(t)

(D.7)

The value of R is left unspecified.


(a) Determine the basis solutions
(b) Construct the simulation diagram
(c) Determine the transfer function
(d) For which value(s) of is the initially at rest system BIBO-stable?
(e) Now suppose that u(t) = 1(t) and that we are given initial conditions y(0) =
1, y (0) = 0. Determine the response y(t) for t > 0 to this input and initial
conditions.
2. Consider the discrete-time system
y[n] 2y[n 1] + y[n 2] = u[n].
(a) Determine the basis solutions
(b) Determine the transfer function
(c) Is the initially at rest system BIBO-stable?
(d) Now suppose that u[n] = 2n n, and that we are given initial conditions y[1] = 0,
y[2] = 1. Determine the response y[n] for n 0 to this input and initial
conditions.


1 2
3. Let A =
. Determine e At .
2 1
4. It is possible to define steady-state solutions for discrete-time systems much the same
way as is done for continuous-time systems. Let u[n] be a causal harmonic input u[n] =
e j 0 n 1[n] and let y[n] be the response to this input. Show that for BIBO-stable discretetime systems with transfer function H (z) we have that


lim  y[n] H (e j 0 )e j 0 n  = 0.
n

(Consequently we call H (e j 0 )e j 0 n the steady-state solution.)


problem: 1(a) 1(b) 1(c) 1(d) 1(e) 2(a) 2(b) 2(c) 2(d) 3 4
points:

3 3

E
Solutions

Chapter 1
1.1 1.2 No.
1.3 1.4 The amplitude is

2.

1.5 1.6

T
4

1.7 1.8 P f = 4
1.9

(a) (b) P f .
(a) E f = 4T .

1.10

(b) E f =

7
6

1.11 1.12 1.13 T = 24


1.14 1.15

1 cos((N+1/2))cos(/2)
2
sin(/2)

or, equivalently,

sin(N/2) sin((N+1)/2)
sin(/2)

1.16 -

215

Appendix E. Solutions

216

Chapter 2
2.1 sin2 (0 t + /3) =

1
2

1
4

cos(20 t) +

1
2

3 sin(20 t)

2.2 2.3 2.4

(a) -

0
(b) ck = 1

k
1
k

k
k
k
k

=0
even, k = 0
= 4l + 1, l Z,
= 4l + 3, l Z

(c) (d)

(1)m+1
1 
+
cos((2m + 1)0 t)
2 m=0 (2m + 1)

(e) f (t) =
2.5

1 
(1)m+1
cos((2m + 1)0 t)
+
2 m=0 (2m + 1)

(a) k 4 + 15
.
k 6 5k 4 + 19k 2 + 25
1
(c) f k = 2
k +1

0 k even
(d) k =
k odd

2(1)k
k = 0
.
(a) f k = 2k 2
k=0
3
The Fourier series hence is
(b) f k =

2.6


2(1)k j k0 t
2
+
e
3
k2
k=
k =0

(b)

4(1)k
2 
+
cos(k0 t)
3
k2
k=1

4
(c) cos(30 t)
9

2
k = 0
2
(d) | f k | = k 2
k=0
3
(e)
2.7 f k =

2
12

4(1)k e3 j k
k2
2
3

k = 0
k=0

217


2.8 f k =

2.9 f k =

4(1)k
(k4)2
2
3

k = 4
k=4


2

2(1)k

cos(2 2 )
(1)k sin(2 2 ) +
3
2 k
(2 k)
(2 k)2

2.10 The second harmonic is 12 cos(2t) and the third harmonic is 0.

k even
0
2.11 f k = 1
k = 4l + 1, l Z

1 k = 4l + 3, l Z
2.12

(a) (b)

2T 2
4

(a) fk =

2.13

1
4 f k2

1
2 fk

1
4 f k+2

(b) (c) g2l = 0 and g2l+1 = fl . P f = Pg .


2.14

(a) -



k0 a
2
fk .
sin
(b) g0 = f 0 en gk =
0 ka
2


k0 a
2
(c) In both cases g 0 = f 0 , and gk =
fk .
sin
0 ka
2

2.15

(a) P f =
(b) P f =

5
2
1
2

2.16 2.17 ( f n f m )(t) = 0 if n = m, ( f n f m )(t) = fn (t) if n = m.




1
t 

2.18 ( f g)(t) =  .
2 T
(a) b k = 0 for every k = 0, 1, 2,

2.19

(b) ak = 0 for every k = 0, 1, 2,


(c) ak = bk = 0 for every odd k = 1, 3, 5, .
2.20 2.21 2
j kt
2.22
k=2 e
2.23

(a) (b) h(t) = 2

2.24

4
.
90


sin(k0 t)
k0
k=1

Appendix E. Solutions

218

Chapter 3
1
(e6( j 1) e5( j 1) ). f (t) is absolutely integrable, but F() is not abso1 j
lutely integrable.

3.1 F() =

3.2

3.3

1
(F( 0 ) F( + 0 )),
2j


0
1
F
,
(b)
|a|
a

1
F() + F () ,
(c)
2

1
F() F () .
(d)
2j
(a)

(a) rect8 (),


8 sin2 ( 14 a)
,
a2
e(a+ j )t0
(c)
,
a + j
 2
(d)
e 4a ,
2a j a


1
1
1
,

(e)
2 j a + j ( 0 ) a + j ( + 0 )

(b)

(f) e || ,
(g) e j e|| .
3.4

2 j /3
F( ),
e
3
3
(b) F( + 2)e 2 j (+2) ,
1
(c) F  (),
j
(d) 2F(2),
(a)

(e) e j F(),
1
1
1
(f) F() + F( 20 ) + F( + 20 ).
2
4
4
sin(T ) T 2 sin(T )
2 2
.

T 2
2 sin(at)
cos(0 t),
3.6 (a)
t
1
(b) (et + 2e 4t ) 1(t),
3
(c) (9e t + 9te t + 9e 2t ) 1(t).
3.5

3.7

(a) ( f g)(t) =

1
(eat 1(t) + ebt 1(t)),
a+b

219

ea(t1)
,
a
sin(t)
(if ),
(c) ( f g)(t) =
t
(d) ( f g)(t) = (t + 1) rect2 (t) + 1(t 1).

(b) ( f g)(t) =

3.8

(a) Yes,

2 4 2
n
n 2
(b) f [n] =
8

n2

2 4 2
n 2

n = 4l,
n = 4l + 1,
n = 4l + 2,
n = 4l + 3,

2
.
3
(a) a < ,
(c)

3.9
3.10

(a)

1
4 j /5 ,
j +5b e

(b)

2
,
( j +b)3

(c)

1
j (2)+b ,

(d)

1
1
2 j (4)+b

1
1
2 j (+4)+b ,

1
(e) 2 j +b
,

(f)

1
,
( j +b)2

1
(g) sgn(b) j +2b
,

(b > 0)
2e b 1()
.
(h)
b
1() (b < 0)
2e

3.11

(a)
(b)
(c)

3.12

2 (t)
2 sin(t) cos(t)
2 sin
t 2
t 3
1 sin(4(tt 0))

tt0

t sin(t)
(t 2 1)

(a) 2 sin( a
2 )
(b)
F().
a

3.13 -

Chapter 4
4.1

(a)
(b)

4.2

f (t+2)
2 ,
1
1
2 f ( 2 )(t

12 ).

(a) rect2 (t) (t + 1) (t 1),

Appendix E. Solutions

220

(b) cos(t) 1(t),


(c) rect2 (t 1) 2(t 2),
(d) j e j t 1(t ) (t ),
(e)

1
2 (t

+ 1/2) 12 (t 1/2) sgn(t) rect1 (t).

4.3 f  (t) 1(t) + g  (t) 1(t) + (g(0+) f (0))(t).


4.4

(a)
(b)
(c)

(( 0 ) ( + 0 )),

1
j (0 ) + ( 0 ),

+ 2 ( + 0 ) + 2 (
j (2 02 )

(d)

0
2 02

(e)

2
j (0 ) ,

4.5

e j 1
2

4.6

(a)

2 sin
j 2

(b)

rect2 ()
j

4.7

(a)
(b)

j
2
j
4

2 j (

+ 0 )

2 j (

+ ().
+ 2(),

+ 2 ().


sgn(t) 1 + e |t| ,
(sgn(t + 1) + sgn(t 1)),

(c) 21j t + 12 (t),


(d)

j jt
2e

sgn(t).

4.8 4.9

(a) sgn(t) 2e t 1(t),


1


(b) sgn(t + 12 ) 1 e |t+ 2 | + 1.

4.10 4.11 4.12 -

Chapter 5
5.1

(a) (b) H () = a + be j ,
(c) h(t) = a(t) + b(t 1),
(d) Yes,
(e) Yes.

5.2

(a) (b) H () =

2 sin
,

h(t) = rect2 (t)

0 ),

0 ),

221

(c) No
(d) Yes
(e) y(t) = 2 trian2 (t)
5.3

(a) (b) Iff = 1


(c) Iff = 1

5.4

(a) No
(b) Yes
(c) Yes
(d) No

5.5

(a) (b) No

5.6

(a) Yes,
(b) H () =

1
(1+ j )2

(c) Yes,
(d) y(t) =

(102 ) sin(0 t)+20 cos(0 t)


,
(1+02 )2

(e) g(t) = (1 te t et ) 1(t).


(f) 5.7

(a) The amplitude transfer is constant equal to one: |H ()| = 1.


(b) (c) h(t) = 2e t 1(t) (t),
(d) g(t) := (1 2e t ) 1(t),
(e) g(t + T2 ) g(t T2 ), where g(t) is the step response found above

(f) y(t) dt = Y (0) = T .

5.8 h(t) = 3(t) e 2t 1(t) + 12 et/2 1(t).


5.9

(a) h(t) =
(b) y(t) =

5.10

2 sin2 (t/2)
,
t 2
1
2
2t
2 3 cos( 3 ).

(a) h(t) = 2(t) 2e 2t 1(t) and H () =


(b) Yes,
(c) Yes,
(d)

5.11 -

8
5

cos t

4
5

sin t

2(1+ j )
2+ j ,

Appendix E. Solutions

222

Chapter 6
(a) e 2t

6.1

(b) (t) + 2e 2t
(c) (2t + 1)e 2t
(d) (t) + (4t 2 + 12t + 6)e 2t
(e) sin(t)e t
(f) (cos(t) sin(t))e t
6.2 6.3

6.4

(a)

0 1+es/0
s 2 +02 1es/0

(b)

es
.
s(1es )

(a) ( 12 sin t 12 t cos t) 1(t)


(b) (t) + 2(e 3t + 3e 4t ) 1(t)
(c) sin t (1(t) 1(t ))

(d) e at 1(t t0 )


a
6.5 (a)
n=0 recta (t nb 2 ) =
n=0 1(t nb) 1(t nb a)
 a(tnT )
(b)
1(t nT )
n=0 e

T
6.6 T1 0 f (t)dt
6.7

n!m!
n+m+1
(n+m+1)! t

1(t).

6.8 F(s) exists iff Re s 0, except s = 0 for wich F(s) exists iff > 1.

Chapter 7
7.1

(a) (b) -

(c)  1 2t 2 t 2 2t 2 t 
e + e
e e
7.2 (a) 31 2t 31 t 32 2t 31 t
e

e
3
3
3e + 3e
 2(t2)

+ (6t 7)e (t2)
1 4e
(b) x(t) = 9
for t > 2
4e 2(t2) (3t 2)e (t2)
 t t 
7.3 e At = e0 teet .
7.4

(a) H (s) =
(b) (t)

s1
s2 ,
2t
+ e 1(t)

(c) y(t) = ( 23 et + 13 e2t ) 1(t).


7.5

(a) H (s) =

s
(s+1)(s2) ,

223

(b) No,
(c) h(t) = 13 (et + 2e 2t ) 1(t).
7.6

(a) h(t) = (t) e t (2 cos(2t) + 3/2 sin(2t)) 1(t)


(b) Yes
(c) g(t) g(t 2), where g(t) is the step response g(t) = (cos(2t) 1/2 sin(2t))e t 1(t)

7.7

(a) e L t
(b) e j t , e j t e j

2t

e j

2t

(c) e t , tet , t 2 et
7.8

(a) e 2 t , e 2 t
(b)

s/21/2
s 2 1/4

(c) No
(d)

 

01
1/2

x(t) +
x(t)
= 1
u(t)
1/2
(e)
0
4

y(t) = x 1 (t)
(f) 2e t/2 + 2t 2, (t 0)
7.9

(a) e 4t , 1
(b) (c) H (s) =

1
s

(d) No.
(e) t + e 4t
7.10

(a) e 3t , et
(b)
 
 

21
1
x(t)
=
x(t) +
u(t)
30

(c)

y(t) = 1 0 x(t)
(d)

s
s 2 2s3

(e)

s
s 2 2s3

(f) BIBO-stable iff = 3


7.11 7.12 -

Appendix E. Solutions

224

Chapter 8
8.1

(a) F(z) = 1, for every z


(b) F(z) = z 5 , z = 0
(c) F(z) =
(d)

bz
,
(zb)2

z
1z ,

|z| > 1

|z| > |b|

(e) F(z) = e 1/z , z = 0.


8.2

(a) y1 [n] = ( 12 )n , y2 [n] = ( 12 )n


(b) y1 [n] = ( 12 +

1
4

j )n , y2 [n] = ( 12

8.3

(c) Same as (a)


1 1 n
1
5)( 2 + 2 5) + ( 12
(a) ( 12 + 10

(b) 3( 12 )n + 13 ( 12 )n + 83 ) 1[n]

8.4

(a) H (z) =

1
zN

1
4

1
10

j )n

5)( 12

1
2

n
5)

, z = 0, h(t) = (n N)

|z| > 54 , h(t) = 56 ( 54 )n 1[n] + 16 ( 14 )n 1[n]


2 +z
(c) H (z) = z 2z1/4
, |z| > 14 h(t) = 3( 12 )n 12 ( 12 )n 1[n]

(b) H (z) =

z2
,
z 2 z5/16

8.5 8.6

(a) Yes
(b) No
(c) Yes
(d) Yes!
(e) BIBO-stable iff = 1

8.7

(a) (2) n , ( 12 )n
(b)
(c)

z+1
z 2 + 32 z1

4
3

2
n
15 (2)

+ 65 ( 12 )n

1[n]

(d) No
(e) [n] + 75 (2)n +
8.8

11 1
10 2n

(a) ( 32 )n , ( 12 )n
(b) ( 12 )n1 1[n 1]
(c) H (z) =

1
z 12

, |z| >

1
2

(d) Yes
(e) 9/8( 12 )n 5/8( 32 )n (n 0)
8.9

(a) (1) n , 1n = 1
(b) h(n) = (1/2 + 1/2(1) n ) 1[n]
(c) No

225

1 1 n
1
8.10 See Problem 8.3: ( 12 + 10
5)( 2 + 2 5) + ( 12

n+2
8.11 (a) a1a
cos(n/2) + a sin(n/2) 1[n]
2 +1 a

(b) ( 12 )n 2( 13 )n 1[n]

1
10

1 1 n
5)( 2 2 5)

(c) 8/3 5/8( 12 )n + 5/24( 12 )n


(d) 5( 12 )n + 5( 12 )n + 22 n( 12 )n
8.12

(a) (b) (c) -

8.13 8.14

(a) |z| = maxk |z k | where z k are the poles of H (z)z/(z 1).


(b) -

226

Appendix E. Solutions

Index
F , 43
L, 108
T , 83
Z, 163
Z+ , 174
-domain, 41
k-th harmonic, 24
z-domain, 163
z-transform, 161
mapping, 163
one-sided, 174
time-reversal, 165
unilateral, 174
T -periodic, 7

discrete-time system, 177


causal part, 161
characteristic equation, 193
comparison test, 10
complex Fourier series, 19
conjugation, 166
continuous-time
delta function, 67
linear system, 84
LTI system, 84
signal, 1
time-invariant system, 84
unit pulse, 67
continuous-time system
initially at rest, 139
continuous function, 4
continuous superposition, 39
convergence
uniform, 33
convolution
aperiodic signal, 52
discrete-time, 173
frequency domain, 54
periodic signal, 28
convolution theorem
z-transform, 173
Fourier-transform, 54
Laplace transform, 113
cross correlation, 55
cut-off frequency, 99

abscissa of convergence, 106


absolutely convergent, 10
absolutely integrable, 40
absolutely summable, 10
adder, 125
amplitude, 6
amplitude modulated signal, 60
amplitude modulation, 60
amplitude transfer, 95
analog filter, 83
angular frequency, 6
anti-causal part, 161
antiderivative, 5
aperiodic, 8
autocorrelation, 56

delta-train, 160
delta function, 67, 75
continuous-time, 67
discrete-time, 159
differentiable function, 4
differentiation rule
z-transform, 166
Fourier transform, 45
Laplace transform, 110

bandlimited, 57
bandwidth, 57
basis solution
continuous-time, 194
discrete-time, 199
BIBO-stable, 181
Butterworth filter, 153
causal, 52, 89

227

Index

228

digital filter, 83
discrete-time
BIBO-stable, 181
causal system, 177
delta-function, 159
filter, 176
impulse response, 178
initially at rest system, 179
linear system, 176
LTI system, 177
non-anticipating system, 177
signal, 1
system, 176
time-invariant system, 177
transfer function, 178
unit pulse, 159
unit step, 159
discrete delta function, 159
distortionless, 88
energy, 9
energy content, 9
energy signal, 9
energy spectrum, 56
even signal, 25
exponentially bounded, 106
filter
continuous time, 83
discrete-time, 176
ideal, 98
final value theorem, 114
finite duration, 40
finite impulse response, 178
FIR filter, 178
Fourier coefficient, 16
Fourier integral theorem, 40
Fourier series, 16
complex, 19
real, 24
Fourier transform, 41, 43
generalized, 79
inverse, 43
frequency
fundamental, 15
frequency characteristic, 83
frequency domain, 26
frequency response, 95
frequency spectrum, 41

function
delta, 75
generalized, 75
tempered test, 79
test, 75
fundamental frequency, 15
gain, 125
generalized derivative, 71
generalized Fourier transform, 79
generalized function, 67, 75
geometric series, 10
ratio, 10
Gibbs phenomenon, 32
harmonic
k-th, 24
harmonic signal, 3
homogeneous equation
continuous-time, 193
discrete-time, 198
ideal filter, 98
IIR filter, 178
impedance, 146
impulse response
continuous-time system, 87
discrete-time system, 178
impulse train, 160
initially at rest, 179
continuous-time signal, 139
continuous-time system, 139
discrete-time signal, 162
discrete-time system, 179
initial value theorem, 171
input-state-output representation, 121
integrator, 125
integrator system, 88
Laplace transform, 105, 106, 108
one-sided, 105
two-sided, 105
Laurent series, 161
linearity
continuous-time system, 84
discrete-time system, 176
linear time-invariant, 84
line spectrum, 26
LTI system

Index

continuous-time, 84
discrete-time, 177
matrix exponential, 129
modulation theorem, 45
non-anticipating
continuous-time system, 89
discrete-time system, 177
non-proper, 192
Nyquist rate, 57
odd signal, 25
one-sided z-transform, 174
one-sided Laplace transform, 105
output equation, 121
Parseval theorem
T -periodic signals, 31
non-periodic case, 55
partial fraction expansion, 187
particular solution
continuous-time, 196
discrete-time, 200
periodic signal, 7
convolution, 28
phase, 6
phase transfer, 95
piecewise smooth, 6
power, 9
power signal, 9
primitive, 5
proper, 188, 191
strictly, 50
pulse
rectangular, 8
triangular, 8
unit, 67, 159
ratio, 10
rational function, 50
real Fourier series, 24
real harmonic signal, 6
real signal, 2
real system, 90
reciprocity, 44
rectangular pulse, 8
region of convergence
z-transform, 162

229

Laplace transform, 108


response
impulse, 87
steady state, 146
step, 93
transient, 146
RiemannLebesgue lemma, 19
sampling, 1
sampling frequency, 57
sampling theorem, 58
sampling time, 1
sawtooth, 23
scaling
z-domain, 166
time-, 44
Shannons sampling theorem, 58
sifting property, 70
signal, 1
T -periodic, 7
absolutely integrable, 40
aperiodic, 8
bandlimited, 57
causal, 52
continuous-time, 1
discrete-time, 1
energy, 9
even, 25
harmonic, 3
initially at rest, 139, 162
non-periodic, 8
odd, 25
periodic, 7
piecewise smooth, 6
power, 9
real, 2
real harmonic, 6
sinusoidal, 6
simulation diagram, 125
sinusoids, 6
smooth
piecewise, 6
spectral density, 56
spectrum, 41
state, 121
state equations, 121
steady state response, 146
step response, 93

230

strictly proper, 50, 188


superposition, 15
continuous, 39
superposition principle, 84
system
causal, 89, 177
continuous-time, 84
discrete-time, 176
non-anticipating, 89
real, 90
tempered test function, 79
test function, 75
tempered, 79
time-invariance
continuous-time system, 84
discrete-time system, 177
time-reversal
z-transform, 165
time-shift, 166
time domain, 26
transfer function
continuous-time, 140
discrete-time, 178
transform pair, 163
z-transform, 163
Fourier transform, 43
transient response, 146
triangular pulse, 8
two-sided Laplace transform, 105
uniform convergence, 33
unilateral z-transform, 174
unit pulse, 67, 159
unit step
continuous-time, 8
discrete-time, 159

Index

Potrebbero piacerti anche