
Analogue and digital signals

Yuriy Zakharov

We can say that signals are physical variables carrying information; signal processing then allows us to extract this information. Mathematically, signals are represented as functions of one or more variables. In this course we will concentrate on signals of one variable, namely time. As our main purpose is signal processing for communications, the signals we deal with are those that appear in different parts of a communication transmitter and receiver. Usually they are electrical signals. However, nowadays most signal processing is performed by digital processors or computers, so it is convenient to treat signals as mathematical functions with certain properties. The material related to this lecture can be found, for example, in [1] and [2].

A. Continuous-time versus discrete-time signals

If we know a signal x(t) at any time t, we say that it is a continuous-time signal. If the signal is known only at discrete moments, e.g., t = kT, where T is a positive number and k ranges over the set of integers, k = 0, ±1, ±2, ..., then we have a discrete-time signal. The transformation of a continuous-time signal into a discrete-time signal is known as sampling, and the interval T is known as the sampling interval.
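As a minimal sketch of the sampling idea (the lecture gives no code; the function names and the 1 Hz example signal are my own illustrative choices), a discrete-time signal can be obtained by evaluating x(t) at t = kT:

```python
import math

def sample(x, T, k_range):
    """Sample a continuous-time signal x(t) at the instants t = k*T."""
    return [x(k * T) for k in k_range]

# Example: sample a 1 Hz sinusoid at 8 samples per second (T = 0.125 s).
x = lambda t: math.sin(2 * math.pi * 1.0 * t)
samples = sample(x, 0.125, range(8))
# samples[2] is x(0.25) = sin(pi/2) = 1
```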

B. Continuous-amplitude versus discrete-amplitude signals

We will also distinguish between continuous-amplitude and discrete-amplitude signals. Thus, there are four different types of signals, depending on whether the time and amplitude are continuous or discrete. Signals with both time and amplitude continuous are usually called analogue signals. Signals with both time and amplitude discrete are called digital signals. When we discuss software implementation of a communication receiver (in other words, software radio), this implies that we deal with digital signals, i.e., signals whose time and amplitude are discrete. Thanks to advances in computers and integrated circuits, digital signal processing, which deals with digital signals, is the most important branch of signal processing. So, we need all signals to be in digital form. If the signal to be processed is in analogue form, it is first converted to a discrete-time signal by sampling at discrete instants in time. The discrete-time signal is then converted to a digital signal by a process called quantization. The whole procedure is called analogue-to-digital (A/D) conversion. We should admit that, in theory, quantized signals (in other words, discrete-amplitude signals) are very often not distinguished from continuous-amplitude signals. There are two reasons for that:

1) Firstly, quantization is a very difficult operation for accurate mathematical analysis.

2) Secondly, quantization error can be easily made very small, practically negligible.

In such cases, the discrete-time / continuous-amplitude signal model is very useful, and we will use it in this course. However, when we are interested in practical implementation of signal processing, and especially fixed-point software implementation, we should always take the quantization error into account.
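The second point above, that the quantization error can be made practically negligible, can be illustrated with a toy uniform quantizer (a sketch under my own assumptions; the lecture does not specify a quantizer):

```python
def quantize(x, step):
    """Uniform quantizer: round each sample to the nearest multiple of step."""
    return [round(v / step) * step for v in x]

samples = [0.12, -0.57, 0.99, 0.303]
q = quantize(samples, step=0.1)
errors = [abs(a - b) for a, b in zip(samples, q)]
# The quantization error never exceeds half the step size, and shrinking
# the step (i.e., adding bits) makes it as small as we like.
assert max(errors) <= 0.05 + 1e-9
```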

C. Random versus deterministic signals

Signals can also be classified as random or non-random (deterministic). The theory of random signals, or random processes, simplifies the analysis of signals and allows the development of simple signal processing algorithms. In fact, signal processing for communications deals mostly with random signals (random processes): in order to carry information, a signal must be random, since deterministic signals are totally predictable and therefore cannot carry any information. The theory of random processes, which is a part of probability theory, is of significant importance for modern signal processing and communications. We will consider some properties of random signals in this course.

D. Periodic versus aperiodic signals

A signal x(t) can be periodic with a fundamental period T or aperiodic (non-periodic). If a signal x(t) is periodic, then x(t) = x(t + T). For example, the signal

x(t) = A sin(ω₀t + ϕ)    (1)

is a periodic signal with period T = 2π/ω₀, amplitude A and phase ϕ. Sinusoids are very important in communications because they are used as carriers. When a receiver receives such a signal, it should extract the transmitted information. Very often the information is in the amplitude, e.g., A = ±1, and we need to find out whether it is +1 or −1. This process is called demodulation. However, to make the demodulation reliable, we first need to find the frequency ω₀ and the phase ϕ. These two parameters are called nuisance parameters, because we are not interested in the parameters themselves. The process of finding such nuisance parameters is called parameter estimation. In a receiver, parameter estimation is performed by a synchronisation block. In our example, we need to perform frequency estimation (frequency synchronisation) and phase estimation (phase synchronisation).

Is the sinusoid random or deterministic? If all its parameters, such as the amplitude A, the frequency ω₀, and the phase ϕ, are a priori known, the signal is deterministic: we know everything about the signal and can accurately predict it at any moment. However, if some parameters are random, then the signal itself is random.
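The point above can be sketched in code: a sinusoid whose phase is drawn at random is a random signal, since each realization differs. This is an illustrative sketch under my own assumptions (uniform phase, fixed amplitude and frequency); the lecture does not prescribe these choices.

```python
import math, random

random.seed(0)

def sinusoid_realization(A=1.0, omega0=2 * math.pi):
    """A sinusoid with a uniformly random phase is a random process:
    each call yields a different, equally valid realization x(t)."""
    phi = random.uniform(0, 2 * math.pi)   # random nuisance parameter
    return lambda t: A * math.sin(omega0 * t + phi)

x1 = sinusoid_realization()
x2 = sinusoid_realization()
# The two realizations generally differ at the same time instant.
assert x1(0.0) != x2(0.0)
```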


E. Description of random variables

Consider a random experiment which can be repeated many times under the same conditions. If a result of the experiment is given by a number x, we call x a random variable.

To describe a random variable, we need to describe all possible values of the variable and the probabilities of these values. The relationship between the values of a random variable and their probabilities is called the distribution of the random variable. There are discrete and continuous random variables. A discrete random variable takes on values from a finite set, {x^(1), ..., x^(n)}, and its distribution is the probability mass function (pmf):

p_i = Pr{x = x^(i)},   i = 1, ..., n.    (2)

It is obvious that

Σ_{i=1}^{n} p_i = 1.    (3)
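Properties (2) and (3) can be checked on a concrete pmf. The fair-die example below is my own illustration, not from the lecture:

```python
# pmf of a fair six-sided die: p_i = 1/6 for each of the n = 6 values.
pmf = {x: 1.0 / 6.0 for x in range(1, 7)}

# Property (3): the probabilities must sum to one.
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# Probabilities of events follow directly from the pmf, e.g. Pr{x <= 2}.
p_le_2 = sum(p for x, p in pmf.items() if x <= 2)
assert abs(p_le_2 - 1.0 / 3.0) < 1e-12
```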

A continuous random variable takes on values from a continuous set. The probability that the random variable takes on a specific value from the set is often equal to zero. A continuous random variable is described by the probability density function (pdf) p_x(x); the probability that the random variable x takes on a value from the interval [a, b] is

Pr{a ≤ x ≤ b} = ∫_a^b p_x(x) dx.    (4)

In addition to property (4), there are two other important properties of the pdf. Firstly, the pdf is a nonnegative function:

p_x(x) ≥ 0.    (5)

Secondly, the integral of p_x(x) over all possible x is equal to 1:

∫_{−∞}^{+∞} p_x(x) dx = 1.    (6)

A discrete random variable can also be described by a pdf (instead of a pmf). This requires the use of the Dirac delta function δ(x); the pmf in (2) is equivalent to the pdf

p_x(x) = Σ_{i=1}^{n} p_i δ(x − x^(i)).    (7)

Note that a constant (non-random) value c can be considered as a random variable x with the pdf δ(x − c).

1) Example 1: Binary random variable:

p_x(x) = (1/2) δ(x + 1) + (1/2) δ(x − 1).    (8)

2) Example 2: Uniform pdf:

p_x(x) = 1/(b − a) for a < x < b, and p_x(x) = 0 otherwise.    (9)

3) Example 3: Gaussian (or normal) pdf:

p_x(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)).    (10)

The parameter µ is known as the mean; the parameter σ² is known as the variance.
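Properties (4)–(6) can be verified numerically for the example pdfs. The crude midpoint Riemann sum below is my own illustrative check (the interval ±10σ for the Gaussian is an assumption that captures essentially all of the probability mass):

```python
import math

def riemann(f, a, b, n=100_000):
    """Crude midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Uniform pdf (9) on (a, b) = (0, 2): height 1/(b - a) = 0.5.
uniform = lambda x: 0.5 if 0 < x < 2 else 0.0
# Property (6): the pdf integrates to one.
assert abs(riemann(uniform, -1, 3) - 1.0) < 1e-6

# Gaussian pdf (10) with mean mu = 0 and variance sigma^2 = 1.
gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
# Integrating over +/-10 sigma captures essentially all the mass.
assert abs(riemann(gauss, -10, 10) - 1.0) < 1e-6
```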

F. Real versus complex signals

Signals in nature are real, or real-valued, signals. However, in signal processing and communications we often deal with complex, or complex-valued, signals. Such signals have a real and an imaginary part:

x(t) = y(t) + j·z(t),   j = √−1.    (11)

We say that y(t) is the real part of x(t) and z(t) is the imaginary part of x(t). This is a very useful mathematical model that allows us to simplify signal analysis and synthesis.

1) Example: Complex exponential: One of the most important complex-valued signals is the complex exponential

x(t) = A e^{j(ω₀t + ϕ)} = A cos(ω₀t + ϕ) + j A sin(ω₀t + ϕ).    (12)

The last relationship is due to Euler's formula. The signal x(t) is a periodic signal with the period

T = 2π/ω₀.    (13)

We will meet such signals in many applications.
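Relations (12) and (13) can be checked numerically with Python's complex arithmetic (the parameter values below are arbitrary illustrative choices):

```python
import cmath, math

A, omega0, phi = 2.0, 2 * math.pi * 5.0, 0.3   # illustrative values

def x(t):
    """Complex exponential x(t) = A * exp(j*(omega0*t + phi)), as in (12)."""
    return A * cmath.exp(1j * (omega0 * t + phi))

t = 0.123
# Euler's formula: real and imaginary parts are the cosine and sine terms.
assert abs(x(t).real - A * math.cos(omega0 * t + phi)) < 1e-9
assert abs(x(t).imag - A * math.sin(omega0 * t + phi)) < 1e-9

# Periodicity with T = 2*pi/omega0, as in (13).
T = 2 * math.pi / omega0
assert abs(x(t + T) - x(t)) < 1e-9
```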

G. Power and energy of a signal

The instantaneous power associated with a signal x(t) is |x(t)|². The signal energy over a time interval of length T is defined as

E_T = ∫_{−T/2}^{T/2} |x(t)|² dt.    (14)

The average power over a time interval of length T is defined as

P_T = (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt = E_T / T.    (15)

Often we are interested in the signal energy and the average power over the signal period T. If a signal exists over the infinite time interval t ∈ (−∞, +∞), to find its energy and average power we need to take the limits of (14) and (15) as T → ∞:

E = lim_{T→∞} ∫_{−T/2}^{T/2} |x(t)|² dt,    (16)

P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt.    (17)
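As a numerical sketch of (15), the average power of the sinusoid A sin(ω₀t) over one period is A²/2. The midpoint-rule integrator and the parameter values are my own illustrative assumptions:

```python
import math

def average_power(x, T, n=10_000):
    """Approximate P_T = (1/T) * integral over [-T/2, T/2] of |x(t)|^2 dt,
    as in (15), using a midpoint Riemann sum."""
    h = T / n
    return sum(abs(x(-T / 2 + (i + 0.5) * h)) ** 2 for i in range(n)) * h / T

A, omega0 = 3.0, 2 * math.pi * 50.0
x = lambda t: A * math.sin(omega0 * t)
T = 2 * math.pi / omega0   # one period, from (13)

# Over one period, the average power of A*sin(omega0*t) is A^2 / 2 = 4.5.
assert abs(average_power(x, T) - A * A / 2) < 1e-6
```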


H. Operations with signals

1) Time-shifting: The signal x(t − t₀) represents a time-shifted version of the signal x(t). If t₀ = 0, there is no time-shift. If t₀ > 0, the signal is delayed by t₀ seconds. If t₀ < 0, we have an advanced replica of x(t). Physically this is not possible, as we cannot obtain a signal before it exists. However, in theory and practice such an operation is useful if, for example, the signal x(t) is itself a delayed version of another signal. In communications, a signal can be delayed when propagating through a radio channel. In such a case, the receiver should estimate the delay t₀ and compensate for it before starting the demodulation. This procedure is known as timing synchronisation.

2) Reflection: The signal x(−t) is obtained from x(t) by a reflection about t = 0, i.e., by reversing x(t). This operation happens in a tape recorder when the rewind switch is pushed, i.e., the tape plays from the end to the beginning.

3) Time-scaling: The signal x(2t) can be described as x(t) compressed in time by a factor of 2. The signal x(t/2) can be described as x(t) expanded by a factor of 2. In general, if time is scaled by a parameter η, then x(ηt) is a compressed version of x(t) if |η| > 1 (the compressed signal exists in a smaller time interval), and x(ηt) is an expanded version of x(t) if |η| < 1 (the signal exists in a larger time interval). The time-scaling operation happens when you play back a tape recording at a faster or slower speed than the speed used for recording. In communications, time-scaling happens when there is a Doppler effect, i.e., when the receiver or transmitter is moving.

I. Summary

1) Signals can be continuous-time or discrete-time. Signals can be continuous-amplitude or discrete-amplitude.

2) Continuous-time, continuous-amplitude signals are called analogue signals. Discrete-time, discrete-amplitude signals are called digital signals.

3) Signals that satisfy the condition x(t) = x(t + T) are called periodic signals with fundamental period T.

4) The complex exponential x(t) = A e^{j(ω₀t + ϕ)} is periodic with period T = 2π/ω₀.

5) The energy of a signal over a time interval t ∈ [−T/2, +T/2] is defined as

E_T = ∫_{−T/2}^{T/2} |x(t)|² dt.    (18)

6) The average power over a time interval t ∈ [−T/2, +T/2] is defined as

P_T = (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt.    (19)

7) Signals can be deterministic or random. A random signal is often called a random process.

8) A random variable is described by the probability density function (pdf).

9) The signal x(t − t₀) represents a time-shifted version of the signal x(t).

10) The signal x(−t) is obtained from x(t) by a reflection about t = 0.

11) The signal x(ηt) is a time-scaled version of x(t). If |η| > 1, then x(ηt) is a compressed version of x(t), whereas if 0 < |η| < 1, then x(ηt) is an expanded version of x(t).

REFERENCES

[1] S. S. Soliman and M. D. Srinath, Continuous and Discrete Signals and Systems, 2nd ed., Prentice Hall, 1998.
[2] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall PTR, 1993.