
Chapter 3 Pulse Modulation

We migrate from analog modulation (continuous in both time and value) to digital modulation (discrete in both time and value) through pulse modulation (discrete in time but possibly continuous in value).

3.1 Pulse Modulation


Families of pulse modulation
Analog pulse modulation
A periodic pulse train is used as the carrier (in analogy with sinusoidal carriers).
Some characteristic feature of each pulse, such as amplitude, duration, or position, is varied in a continuous manner in accordance with the sampled message signal.
Digital pulse modulation
Some characteristic feature of the carrier is varied in a digital manner in accordance with the sampled, digitized message signal.
Po-Ning Chen@cm.nctu Chapter 3-2

3.2 Sampling Theorem

Ts : sampling period, fs = 1/Ts : sampling rate

Ideally sampled signal:
$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)$$

Its Fourier transform:
$$G_\delta(f) = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)\, e^{-j2\pi f t}\,dt = \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi n T_s f}$$

Po-Ning Chen@cm.nctu Chapter 3-3

3.2 Sampling Theorem



Given: $G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi n T_s f}$

Claim: $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)$

Po-Ning Chen@cm.nctu Chapter 3-4

3.2 Spectrum of Sampled Signal

Let $L(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)$, and notice that it is periodic with period $f_s$.

By Fourier series expansion,
$$L(f) = \sum_{n=-\infty}^{\infty} c_n \exp\!\Big(j 2\pi \frac{n}{f_s} f\Big), \quad \text{where } c_n = \frac{1}{f_s} \int_{-f_s/2}^{f_s/2} L(f) \exp\!\Big(-j 2\pi \frac{n}{f_s} f\Big) df.$$

$$c_n = \frac{1}{f_s} \int_{-f_s/2}^{f_s/2} L(f) \exp\!\Big(-j \frac{2\pi n}{f_s} f\Big) df = \int_{-f_s/2}^{f_s/2} \sum_{m=-\infty}^{\infty} G(f - m f_s) \exp\!\Big(-j \frac{2\pi n}{f_s} f\Big) df$$

Po-Ning Chen@cm.nctu Chapter 3-5


$$c_n = \int_{-f_s/2}^{f_s/2} \sum_{m=-\infty}^{\infty} G(f - m f_s) \exp\!\Big(-j\frac{2\pi n}{f_s} f\Big) df, \quad \text{let } s = f - m f_s$$
$$= \sum_{m=-\infty}^{\infty} \int_{-f_s/2 - m f_s}^{f_s/2 - m f_s} G(s) \exp\!\Big(-j\frac{2\pi n}{f_s}(s + m f_s)\Big) ds$$
$$= \sum_{m=-\infty}^{\infty} \int_{-f_s/2 - m f_s}^{f_s/2 - m f_s} G(s) \exp\!\Big(-j\frac{2\pi n}{f_s} s\Big) ds \qquad (\text{since } e^{-j2\pi n m} = 1)$$
$$= \int_{-\infty}^{\infty} G(s) \exp\!\Big(-j\frac{2\pi n}{f_s} s\Big) ds = g(-nT_s)$$

Therefore
$$L(f) = \sum_{n=-\infty}^{\infty} g(-nT_s) \exp\!\Big(j 2\pi \frac{n}{f_s} f\Big) = \sum_{m=-\infty}^{\infty} g(mT_s) \exp(-j 2\pi m T_s f), \quad \text{where } m = -n,$$
which equals $G_\delta(f)$; this proves the claim.

Po-Ning Chen@cm.nctu Chapter 3-6

3.2 First Important Conclusion for Sampling

Uniform sampling in the time domain results in a periodic spectrum with a period equal to the sampling rate:
$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s) \;\Longleftrightarrow\; G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - m f_s)$$

Po-Ning Chen@cm.nctu Chapter 3-7

3.2 Reconstruction from Sampling


Take $f_s = 2W$ and apply an ideal lowpass filter of bandwidth $W$ and gain $1/(2W)$:
$$G(f) = \frac{1}{2W}\, G_\delta(f) \quad \text{for } |f| \le W.$$

Po-Ning Chen@cm.nctu Chapter 3-8

3.2 Aliasing due to Sampling

When $f_s < 2W$, $G(f)$ cannot be reconstructed from the undersampled samples: the shifted replicas of the spectrum overlap (alias).

Po-Ning Chen@cm.nctu Chapter 3-9

3.2 Second Important Conclusion for Sampling


A band-limited signal of finite energy with bandwidth W can be completely described by its samples taken at a sampling rate fs ≥ 2W.
2W is commonly referred to as the Nyquist rate.
How do we reconstruct a band-limited signal from its samples (with fs ≥ 2W)?

Po-Ning Chen@cm.nctu Chapter 3-10

$$g(t) = \int_{-\infty}^{\infty} G(f)\, e^{j2\pi f t}\, df = \int_{-W}^{W} G(f)\, e^{j2\pi f t}\, df$$
$$= \frac{1}{f_s} \int_{-W}^{W} \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi n T_s f}\, e^{j2\pi f t}\, df$$
$$= \frac{1}{f_s} \sum_{n=-\infty}^{\infty} g(nT_s) \int_{-W}^{W} e^{j2\pi (t - nT_s) f}\, df$$
$$= \frac{2W}{f_s} \sum_{n=-\infty}^{\infty} g(nT_s)\, \frac{\sin[2\pi W(t - nT_s)]}{2\pi W (t - nT_s)}$$
$$= \sum_{n=-\infty}^{\infty} g(nT_s)\, \big(2WT_s\, \mathrm{sinc}[2W(t - nT_s)]\big)$$

2WTs sinc[2W(t - nTs)] plays the role of an interpolation function for the samples.

Po-Ning Chen@cm.nctu Chapter 3-11
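As a concrete illustration of the interpolation formula above, the following sketch (NumPy; the test signal, bandwidth, and truncation to finitely many samples are illustrative assumptions, not from the slides) samples a band-limited signal at fs = 2W and rebuilds it with the 2WTs·sinc kernel.

```python
import numpy as np

W = 4.0            # message bandwidth (Hz), illustrative value
fs = 2 * W         # Nyquist-rate sampling
Ts = 1 / fs

def g(t):
    # a signal band-limited to W: sum of tones below W
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-200, 201)           # finitely many samples approximate the infinite sum
samples = g(n * Ts)

t = np.linspace(-2, 2, 1001)       # reconstruction grid
# g(t) = sum_n g(nTs) * 2WTs * sinc(2W(t - nTs)); np.sinc(x) = sin(pi x)/(pi x)
kernel = 2 * W * Ts * np.sinc(2 * W * (t[:, None] - n[None, :] * Ts))
g_rec = kernel @ samples

print("max reconstruction error:", np.max(np.abs(g_rec - g(t))))  # small; due only to truncating the sum
```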

3.2 Band-Unlimited Signals


The signals encountered in practice are often not strictly band-limited.
Hence, there is always some aliasing after sampling.
To combat the effects of aliasing, a lowpass anti-aliasing filter is used to attenuate the frequency components outside [-fs/2, fs/2].
In this case, the signal after passing through the anti-aliasing filter is treated as band-limited with bandwidth fs/2 (i.e., fs = 2W). Hence,
$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\Big(\frac{t}{T_s} - n\Big)$$

Po-Ning Chen@cm.nctu Chapter 3-12

3.2 Interpolation in terms of filtering
Observe that
$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\Big(\frac{t}{T_s} - n\Big)$$
is indeed a convolution between $g_\delta(t)$ and $\mathrm{sinc}(t/T_s)$:
$$g_\delta(t) * \mathrm{sinc}\!\Big(\frac{t}{T_s}\Big) = \int_{-\infty}^{\infty} g_\delta(\tau)\,\mathrm{sinc}\!\Big(\frac{t-\tau}{T_s}\Big) d\tau$$
$$= \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(\tau - nT_s)\,\mathrm{sinc}\!\Big(\frac{t-\tau}{T_s}\Big) d\tau$$
$$= \sum_{n=-\infty}^{\infty} g(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\,\mathrm{sinc}\!\Big(\frac{t-\tau}{T_s}\Big) d\tau$$
Po-Ning Chen@cm.nctu Chapter 3-13

(Continuing from the previous slide.)
$$g_\delta(t) * \mathrm{sinc}\!\Big(\frac{t}{T_s}\Big) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\Big(\frac{t}{T_s} - n\Big) = g(t)$$

Reconstruction filter (interpolation filter): $h(t) = \mathrm{sinc}(t/T_s)$, with $H(f) = T_s\,\mathrm{rect}(T_s f)$.

[Block diagram: $g_\delta(t)$ → ideal lowpass filter $H(f)$ with cutoff $f_s/2$ → $g(t)$.]

Po-Ning Chen@cm.nctu Chapter 3-14

3.2 Physical Realization of Reconstruction Filter
An ideal lowpass filter is not physically realizable.
Instead, we can use an anti-aliasing filter of bandwidth W,
and use a sampling rate fs > 2W. Then the spectrum of a
reconstruction filter can be shaped like:

Po-Ning Chen@cm.nctu Chapter 3-15

[Figure: spectra illustrating reconstruction with a physically realizable filter]
Signal spectrum after the anti-aliasing filter of bandwidth W
Signal spectrum after sampling with fs > 2W
Ideal filter of bandwidth fs/2
The physically realizable reconstruction filter

$$g_\delta(t) * h_{\text{realizable}}(t) \;\longleftrightarrow\; G_\delta(f)\,H_{\text{realizable}}(f) \;\approx\; G_\delta(f)\,H_{\text{ideal}}(f) \;\longleftrightarrow\; g_\delta(t) * h_{\text{ideal}}(t)$$

Po-Ning Chen@cm.nctu Chapter 3-16

3.3 Pulse-Amplitude Modulation (PAM)
PAM
The amplitude of regularly spaced pulses is varied in
proportion to the corresponding sample values of a
continuous message signal. Notably, the top of each pulse
is maintained flat. So this is
PAM, not natural sampling for
which the message signal is
directly multiplied by a
periodic train of rectangular
pulses.

Po-Ning Chen@cm.nctu Chapter 3-17

3.3 Pulse-Amplitude Modulation (PAM)


The operation of generating a PAM-modulated signal is often referred to as sample and hold.
This sample-and-hold process can also be analyzed from a filtering standpoint:
$$s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) = m_\delta(t) * h(t)$$
where
$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s).$$

Po-Ning Chen@cm.nctu Chapter 3-18
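The sample-and-hold operation s(t) = Σ m(nTs) h(t − nTs) can be sketched directly. The snippet below is a minimal illustration (the test message, Ts, T, and the dense simulation grid are arbitrary assumed values) that builds a flat-top PAM waveform.

```python
import numpy as np

Ts = 1.0e-3          # sampling period (s), illustrative
T = 0.3e-3           # pulse (hold) duration, T < Ts
fs_sim = 1.0e6       # dense simulation grid rate

def m(t):            # an arbitrary low-frequency test message
    return np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 450 * t)

t = np.arange(0, 20e-3, 1 / fs_sim)
s = np.zeros_like(t)
for n in range(int(t[-1] / Ts) + 1):
    hold = (t >= n * Ts) & (t < n * Ts + T)   # rectangular pulse h(t - nTs) of width T
    s[hold] = m(n * Ts)                       # flat top held at the sample value

print("PAM waveform computed;", s.size, "points, duty cycle T/Ts =", T / Ts)
```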

3.3 Pulse-Amplitude Modulation (PAM)
From the filtering standpoint, the spectrum S(f) can be derived as
$$S(f) = M_\delta(f)\, H(f) = \Big(f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\Big) H(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\, H(f)$$

M(f) is the spectrum of the message signal with bandwidth W (or of the message after an anti-aliasing filter of bandwidth W), and fs ≥ 2W.
Po-Ning Chen@cm.nctu Chapter 3-19

3.3 Pulse-Amplitude Modulation (PAM)


Over the range $[-W, W]$ occupied by $M(f)$:
$$S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\, H(f) = f_s M(f) H(f) + f_s \sum_{|k| \ge 1} M(f - k f_s)\, H(f)$$
A reconstruction (lowpass) filter removes the $|k| \ge 1$ terms, leaving $M(f)H(f)$; an equalizer with response $1/H(f)$ then recovers $M(f)$:
$$M(f)H(f) \;\xrightarrow{\;1/H(f)\;}\; M(f)$$
Po-Ning Chen@cm.nctu Chapter 3-20

3.3 Feasibility of Equalizer Filter
The distortion of M(f) is due to the factor H(f) in M(f)H(f), where
$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \quad\Longleftrightarrow\quad H(f) = T\,\mathrm{sinc}(fT)\, e^{-j\pi f T}.$$
The required equalizer is
$$E(f) = \frac{1}{H(f)} = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}\, e^{\,j\pi f T}, & |f| \le W \\ 0, & \text{otherwise.} \end{cases}$$
Question: Is the above E(f) feasible (physically realizable)?

Po-Ning Chen@cm.nctu Chapter 3-21

Note that $1/T > 1/T_s = f_s > 2W$.
Write $E(f) = \tilde{E}(f)\, e^{\,j\pi f T}$, where
$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise.} \end{cases}$$
[Plot of $\tilde{E}(f)$ for the example T = 1, W = 1/8: nearly flat, rising from 1 at f = 0 to about 1.026 at |f| = 1/8.]

This gives an equalizer of the form
$$i(t) \;\to\; \tilde{E}(f) \;\to\; o_1(t) \;\to\; \text{time advance by } T/2 \text{ (i.e., } e^{\,j\pi f T}) \;\to\; o(t)$$

$\tilde{E}(f)$ alone is a lowpass filter, but the cascade is non-realizable. Why?
Because "$o_1(t) = 0$ for $t < 0$" does not imply "$o(t) = 0$ for $t < 0$": $o(t) = o_1(t + T/2)$ is a time advance.

Po-Ning Chen@cm.nctu Chapter 3-22

3.3 Feasibility of Equalizer Filter
[Causal system: i(t) → h(t) → o(t)]

A reasonable assumption for a feasible linear filter


system is that:

For any i (t ) satisfying i (t ) = 0 for t < 0, we have o(t ) = 0 for t < 0.

A necessary and sufficient condition for the above


assumption to hold is that h(t) = 0 for t < 0.

Po-Ning Chen@cm.nctu Chapter 3-23

Simplified Proof:
(Sufficiency) If $h(t) = 0$ for $t < 0$, then
$$o(t) = \int_{-\infty}^{\infty} h(\tau)\, i(t - \tau)\, d\tau = \int_{0}^{\infty} h(\tau)\, i(t - \tau)\, d\tau.$$
If, in addition, $i(t) = 0$ for $t < 0$, then for $t < 0$ every term $i(t - \tau)$ with $\tau \ge 0$ vanishes, so $o(t) = 0$ for $t < 0$.

(Necessity) If $\int_{-\infty}^{-a} h(\tau)\, d\tau \ne 0$ for some $a > 0$, take
$$i(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0. \end{cases}$$
Then $o(-a) = \int_{-\infty}^{-a} h(\tau)\, d\tau \ne 0$, which means a nonzero output appears at $t = -a < 0$ even though the input is zero for all $t < 0$.
Therefore $\int_{-\infty}^{-a} h(\tau)\, d\tau = 0$ for every $a > 0$, and
$$\frac{d}{da} \int_{-\infty}^{-a} h(\tau)\, d\tau = -h(-a) = 0 \quad \text{for } a > 0,$$
i.e., $h(t) = 0$ for $t < 0$.

Po-Ning Chen@cm.nctu Chapter 3-24

3.3 Aperture Effect
The distortion of M(f) due to the factor H(f) in M(f)H(f), where
$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \quad\Longleftrightarrow\quad H(f) = T\,\mathrm{sinc}(fT)\, e^{-j\pi f T},$$
is very similar to the distortion caused by the finite size of the scanning aperture in television; hence it is named the aperture effect.
If T/Ts ≤ 0.1, the amplitude distortion is less than 0.5%, in which case the equalizer may not be necessary.

Po-Ning Chen@cm.nctu Chapter 3-25
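The 0.5% figure can be checked numerically: with fs = 2W, the worst-case in-band loss of H(f) occurs at f = W = 1/(2Ts), so the relative amplitude distortion is 1 − sinc(T/(2Ts)). A quick check (a sketch, not part of the original slides):

```python
import numpy as np

ratio = 0.1                               # T / Ts
loss_at_W = np.sinc(ratio / 2)            # |H(W)| / T = sinc(W*T) with W = 1/(2*Ts)
distortion = 1 - loss_at_W
print(f"amplitude distortion at f = W for T/Ts = {ratio}: {100*distortion:.3f} %")
# prints about 0.41 %, i.e., below the 0.5 % quoted above
```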

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad \frac{1}{T} > \frac{1}{T_s} = f_s > 2W.$$

Example: for T = 1, Ts = 10, W = 0.04,
$$\tilde{E}(f) = \begin{cases} \dfrac{1}{\mathrm{sinc}(f)}, & |f| \le 0.04 \\ 0, & \text{otherwise.} \end{cases}$$
[Plot of $\tilde{E}(f)$: nearly flat, rising from 1 at f = 0 to about 1.00264 at |f| = 0.04.]

Po-Ning Chen@cm.nctu Chapter 3-26

3.3 Pulse-Amplitude Modulation
Final notes on PAM
PAM is rather stringent in its system requirement, such
as short duration of pulse.
Also, the noise performance of PAM may not be
sufficient for long distance transmission.
Accordingly, PAM is often used as a means of message processing for time-division multiplexing, from which conversion to some other form of pulse modulation is subsequently made. Details will be discussed in Section 3.9.

Po-Ning Chen@cm.nctu Chapter 3-27

3.4 Other Forms of Pulse Modulation


Pulse-Duration Modulation (or Pulse-Width Modulation)
Samples of the message signal are used to vary the
duration of the pulses.
Pulse-Position Modulation
The position of a pulse relative to its unmodulated time
of occurrence is varied in accordance with the message
signal.

Po-Ning Chen@cm.nctu Chapter 3-28

Pulse trains

PDM

PPM

Po-Ning Chen@cm.nctu Chapter 3-29

3.4 Other Forms of Pulse Modulation


Comparisons between PDM and PPM
PPM is more power efficient because excessive pulse
duration consumes considerable power.
Final note
It is expected that PPM is immune to additive noise,
since additive noise only perturbs the amplitude of the
pulses rather than the positions.
However, since the pulse cannot be made perfectly
rectangular in practice (namely, there exists a non-zero
transition time in pulse edge), the detection of pulse
positions is somehow still affected by additive noise.

Po-Ning Chen@cm.nctu Chapter 3-30

See slide 2-162: the figure of merit is proportional to $D^2$, where $D = \dfrac{B_{T,\text{Carson}}}{2W} - 1 = \dfrac{B_{n,\text{Carson}}}{2} - 1$ (with $B_n = B_T/W$).

3.5 Bandwidth-Noise Trade-Off


PPM seems to be the better form of analog pulse modulation from the noise-performance standpoint. However, its noise performance is very similar to that of (analog) FM modulation:
Its figure of merit is proportional to the square of the transmission bandwidth (i.e., 1/T) normalized with respect to the message bandwidth W (i.e., Bn = BT/W).
There exists a threshold effect as the SNR is reduced.

Question: Can we do better than the square law in figure-


of-merit improvement? Answer: Yes, by means of Digital
Communication, we can realize an exponential law!

Po-Ning Chen@cm.nctu Chapter 3-31

3.6 Quantization Process


Transform the continuous amplitude m = m(nTs) into a discrete approximate amplitude v = v(nTs).

Such a discrete approximation is adequate in the sense that the human ear or eye can only detect finite intensity differences.

Po-Ning Chen@cm.nctu Chapter 3-32

3.6 Quantization Process
We may drop the time instance nTs for convenience, when
the quantization process is memoryless and instantaneous
(hence, the quantization at time nTs is not affected by earlier
or later samples of the message signal.)
Types of quantization
Uniform
Quantization step sizes are of equal length.
Non-uniform
Quantization step sizes are not of equal length.

Po-Ning Chen@cm.nctu Chapter 3-33

An alternative classification of quantization


Midtread
Midrise

midtread midrise

Po-Ning Chen@cm.nctu Chapter 3-34

3.6 Quantization Noise

Uniform midtread
quantizer

Po-Ning Chen@cm.nctu Chapter 3-35

3.6 Quantization Noise


Define the quantization noise to be Q = M − V = M − g(M), where g(·) is the quantizer.
Let the message M be uniformly distributed in (−mmax, mmax), so M has zero mean.
Assume g(·) is symmetric and of the midrise type; then V = g(M) also has zero mean, and so does Q = M − V.
The step size of the quantizer is then given by
$$\Delta = \frac{2\, m_{\max}}{L}$$
where L is the total number of representation levels.

Po-Ning Chen@cm.nctu Chapter 3-36

3.6 Quantization Noise
Assume g(·) assigns the midpoint of each step interval as the representation level. Then
$$\Pr\{Q \le q\} = \Pr\Big\{(M \bmod \Delta) - \frac{\Delta}{2} \le q\Big\} = \begin{cases} 0, & q < -\dfrac{\Delta}{2} \\ \dfrac{q}{\Delta} + \dfrac{1}{2}, & -\dfrac{\Delta}{2} \le q < \dfrac{\Delta}{2} \\ 1, & q \ge \dfrac{\Delta}{2} \end{cases}$$
or, in terms of the pdf,
$$f_Q(q) = \frac{1}{\Delta}\,\mathbf{1}\!\left\{-\frac{\Delta}{2} \le q < \frac{\Delta}{2}\right\}.$$

Po-Ning Chen@cm.nctu Chapter 3-37

3.6 Quantization Noise


So, the output signal-to-noise ratio is equal to
$$\mathrm{SNR}_O = \frac{P}{\int_{-\Delta/2}^{\Delta/2} \frac{1}{\Delta}\, q^2\, dq} = \frac{P}{\Delta^2/12} = \frac{P}{\frac{1}{12}\big(\frac{2 m_{\max}}{L}\big)^2} = \frac{3P}{m_{\max}^2}\, L^2.$$

The transmission bandwidth of a quantized (PCM) system is conceptually proportional to the number of bits required per sample, i.e., R = log2(L).
We then conclude that SNR_O ∝ 4^R, which increases exponentially with the transmission bandwidth.

Po-Ning Chen@cm.nctu Chapter 3-38
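A quick Monte Carlo sketch (NumPy; the sinusoidal test signal, seed, and sample count are illustrative assumptions) confirming SNR_O = 3P·L²/m_max², i.e., about 1.8 + 6R dB for a full-load sinusoid:

```python
import numpy as np

rng = np.random.default_rng(0)
m_max, R = 1.0, 8
L = 2 ** R
delta = 2 * m_max / L

# full-load sinusoid: uniformly distributed phase
m = m_max * np.sin(2 * np.pi * rng.random(1_000_000))
v = (np.floor(m / delta) + 0.5) * delta          # uniform midrise quantizer
q = m - v

snr_sim = 10 * np.log10(np.mean(m**2) / np.mean(q**2))
snr_formula = 10 * np.log10(3 * np.mean(m**2) / m_max**2 * L**2)
print(snr_sim, snr_formula, 1.8 + 6 * R)         # all close to 49.8 dB
```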

Example 3.1 Sinusoidal Modulating Signal
Let m(t) = Am cos(2π fc t). Then
$$P = \frac{A_m^2}{2} \quad \text{and} \quad m_{\max} = A_m \;\Rightarrow\; \mathrm{SNR}_O = \frac{3 (A_m^2/2)}{A_m^2}\, L^2 = \frac{3}{2} L^2 = \frac{3}{2}\, 4^R \approx (1.8 + 6R)\ \text{dB}$$

L     R    SNR_O (dB)
32    5    31.8
64    6    37.8
128   7    43.8
256   8    49.8

* Note that in this example we assume a full-load quantizer, in which no quantization loss is encountered due to saturation.
Po-Ning Chen@cm.nctu Chapter 3-39

3.6 Quantization Noise


In the previous analysis of quantization error, we assume
the quantizer assigns the mid-point of each step interval to
be the representative level.
Questions:
Can the quantization noise power be further reduced by
adjusting the representative levels?
Can the quantization noise power be further reduced by
adopting a non-uniform quantizer?

Po-Ning Chen@cm.nctu Chapter 3-40

3.6 Optimality of Scalar Quantizers

Representation levels: v1, v2, ..., v(L-1), vL
Partition cells: I1, I2, ..., I(L-1), IL, with
$$\bigcup_{k=1}^{L} I_k = [-A, A).$$
Notably, a cell Ik need not be a single consecutive interval.

Let d(m, vk) be the distortion incurred by representing m by vk.
Goal: find {Ik} and {vk} such that the average distortion D = E[d(M, g(M))] is minimized.
Po-Ning Chen@cm.nctu Chapter 3-41

3.6 Optimality of Scalar Quantizers


Solution:
$$\min_{\{v_k\}} \min_{\{I_k\}} D = \min_{\{v_k\}} \min_{\{I_k\}} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\, f_M(m)\, dm$$

(I) For fixed {vk}, determine the optimal {Ik}.
(II) For fixed {Ik}, determine the optimal {vk}.

(I) If d(m, vk) ≤ d(m, vj) for all j, then m should be assigned to Ik rather than Ij:
$$I_k = \{m \in [-A, A) : d(m, v_k) \le d(m, v_j) \text{ for all } 1 \le j \le L\}$$

Po-Ning Chen@cm.nctu Chapter 3-42

(II) For fixed {Ik}, determine the optimal {vk}:
$$\min_{\{v_k\}} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\, f_M(m)\, dm$$
Since
$$\frac{\partial}{\partial v_j} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\, f_M(m)\, dm = \frac{\partial}{\partial v_j} \int_{I_j} d(m, v_j)\, f_M(m)\, dm = \int_{I_j} \frac{\partial d(m, v_j)}{\partial v_j}\, f_M(m)\, dm,$$
a necessary condition for the optimal vj is
$$\int_{I_j} \frac{\partial d(m, v_j)}{\partial v_j}\, f_M(m)\, dm = 0.$$

The Lloyd-Max algorithm repeatedly applies (I) and (II) in search of the optimal quantizer.
Po-Ning Chen@cm.nctu Chapter 3-43

Example: Mean-Square Distortion


d(m, vk) = (m − vk)²

(I) $I_k = \{m \in [-A, A) : (m - v_k)^2 \le (m - v_j)^2 \text{ for all } 1 \le j \le L\}$
should be a consecutive interval (bounded by the midpoints between adjacent representation levels).

Representation levels: v1, v2, ..., v(L-1), vL
Partition cells: I1, I2, ..., I(L-1), IL
Po-Ning Chen@cm.nctu Chapter 3-44

Example: Mean-Square Distortion
(II) A necessary condition for the optimal vj is
$$\int_{m_j}^{m_{j+1}} \frac{\partial (m - v_j)^2}{\partial v_j}\, f_M(m)\, dm = -2 \int_{m_j}^{m_{j+1}} (m - v_j)\, f_M(m)\, dm = 0$$
$$\Rightarrow\quad v_{j,\text{optimal}} = \frac{\int_{m_j}^{m_{j+1}} m\, f_M(m)\, dm}{\int_{m_j}^{m_{j+1}} f_M(m)\, dm} = E[\,M \mid m_j \le M < m_{j+1}\,]$$

Exercise: What are the best {mk} and {vk} if M is uniformly distributed over [-A, A)?
Hint:
$$D = \frac{1}{2A} \min_{\{m_k\}} \sum_{k=1}^{L} \int_{m_k}^{m_{k+1}} \Big(m - \frac{m_k + m_{k+1}}{2}\Big)^2 dm.$$
Po-Ning Chen@cm.nctu Chapter 3-45
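A minimal Lloyd-Max sketch for mean-square distortion (NumPy only; a zero-mean Gaussian source, sample-based averages, L = 8, and the initial levels are all assumed illustrative choices): step (I) re-partitions at the midpoints between levels, step (II) moves each level to the conditional mean of its cell.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal(200_000)      # stand-in for f_M via a large sample
L = 8
v = np.linspace(-2, 2, L)                   # initial representation levels

for _ in range(100):
    # (I) nearest-level partition: boundaries are midpoints of adjacent levels
    edges = (v[:-1] + v[1:]) / 2
    cell = np.digitize(samples, edges)
    # (II) centroid condition: v_j = E[M | M in I_j]
    v_new = np.array([samples[cell == j].mean() for j in range(L)])
    if np.max(np.abs(v_new - v)) < 1e-6:
        break
    v = v_new

mse = np.mean((samples - v[cell]) ** 2)
print("levels:", np.round(v, 3), " distortion:", round(mse, 4))
```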

3.7 Pulse-Code Modulation

(anti-alias)

Po-Ning Chen@cm.nctu Chapter 3-46

3.7 Pulse-Code Modulation
Non-uniform quantizers used for telecommunication (ITU-
T G.711)
ITU-T G.711: Pulse Code Modulation (PCM) of Voice
Frequencies (1972)
It consists of two laws: the A-law (mainly used in Europe) and the μ-law (mainly used in the US and Japan).
This design helps protect weak signals, which occur more frequently in, say, human voice.

Po-Ning Chen@cm.nctu Chapter 3-47

3.7 Quantization Laws
A-law
13-bit uniform quantization
Conversion to an 8-bit code
μ-law
14-bit uniform quantization
Conversion to an 8-bit code
These two are referred to as compression laws, since they use 8 bits to (lossily) represent 13- (or 14-)bit information.
Po-Ning Chen@cm.nctu Chapter 3-48

3.7 A-law in G.711
A-law (A = 87.6):
$$F_{A\text{-law}}(m) = \begin{cases} \operatorname{sgn}(m)\,\dfrac{A |m|}{1 + \ln A}, & |m| \le \dfrac{1}{A} & \text{(linear mapping)} \\ \operatorname{sgn}(m)\,\dfrac{1 + \ln(A |m|)}{1 + \ln A}, & \dfrac{1}{A} \le |m| \le 1 & \text{(logarithmic mapping)} \end{cases}$$

Po-Ning Chen@cm.nctu Chapter 3-49

[Plot of F_A-law(m): output versus input m over [-1, 1]; approximately linear near the origin and logarithmic toward |m| = 1.]

Po-Ning Chen@cm.nctu Chapter 3-50

[Plot: a piecewise-linear approximation to the A-law. Horizontal axis: 13-bit uniform quantization input (-4096 ... 4096), with segment (chord) boundaries at ±64, ±128, ±256, ±512, ±1024, ±2048, ±4096; vertical axis: 8-bit PCM code output (-128 ... 128).]

Po-Ning Chen@cm.nctu Chapter 3-51

Compressor of A-law (assume nonnegative m)

Input Values Compressed Code Word


Chord Step
Bits:11 10 9 8 7 6 5 4 3 2 1 0 Bits: 6 5 4 3 2 1 0
0 0 0 0 0 0 0 a b c d x 0 0 0 a b c d
0 0 0 0 0 0 1 a b c d x 0 0 1 a b c d
0 0 0 0 0 1 a b c d x x 0 1 0 a b c d
0 0 0 0 1 a b c d x x x 0 1 1 a b c d
0 0 0 1 a b c d x x x x 1 0 0 a b c d
0 0 1 a b c d x x x x x 1 0 1 a b c d
0 1 a b c d x x x x x x 1 1 0 a b c d
1 a b c d x x x x x x x 1 1 1 a b c d

E.g., (3968)10 → (1111,1000,0000)2 → (111,1111)2 → (127)10
E.g., (2176)10 → (1000,1000,0000)2 → (111,0001)2 → (113)10

Po-Ning Chen@cm.nctu Chapter 3-52

Expander of A-law (assume nonnegative m)

Compressed Code Word Raised Output Values


Chord Step
Bits:6 5 4 3 2 1 0 Bits:11 10 9 8 7 6 5 4 3 2 1 0
0 0 0 a b c d 0 0 0 0 0 0 0 a b c d 1
0 0 1 a b c d 0 0 0 0 0 0 1 a b c d 1
0 1 0 a b c d 0 0 0 0 0 1 a b c d 1 0
0 1 1 a b c d 0 0 0 0 1 a b c d 1 0 0
1 0 0 a b c d 0 0 0 1 a b c d 1 0 0 0
1 0 1 a b c d 0 0 1 a b c d 1 0 0 0 0
1 1 0 a b c d 0 1 a b c d 1 0 0 0 0 0
1 1 1 a b c d 1 a b c d 1 0 0 0 0 0 0

E.g., (113)10 → (111,0001)2 → (1000,1100,0000)2 → (2240)10
In other words, the expanded value is the midpoint of the input interval that maps to this code word:
[(1000,1000,0000)2 + (1001,0000,0000)2] / 2 = [(2176)10 + (2304)10] / 2 = (2240)10.

Po-Ning Chen@cm.nctu Chapter 3-53
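The chord/step tables above can be turned into a few lines of code. The sketch below is my own illustrative implementation of the tabulated rules (positive inputs only, no sign bit); it reproduces the worked examples: 3968 → code 127, 2176 → code 113, and code 113 expands to the segment midpoint 2240.

```python
def alaw_compress(x: int) -> int:
    """13-bit magnitude (0..4095) -> 7-bit chord/step code, per the compressor table."""
    assert 0 <= x < 4096
    for chord in range(7, 0, -1):            # find the position of the leading 1 (bits 11..5)
        if x >= (1 << (chord + 4)):
            step = (x >> chord) & 0xF        # the four bits a b c d after the leading 1
            return (chord << 4) | step
    return x >> 1                            # chord 000: bits 5..1 are a b c d

def alaw_expand(code: int) -> int:
    """7-bit code -> 12-bit output (midpoint of the matching input interval)."""
    chord, step = code >> 4, code & 0xF
    if chord == 0:
        return (step << 1) | 1
    return ((0x10 | step) << chord) | (1 << (chord - 1))

print(alaw_compress(3968), alaw_compress(2176))   # 127 113
print(alaw_expand(113))                           # 2240
```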

3.7 μ-law in G.711
μ-law (μ = 255):
$$F_{\mu\text{-law}}(m) = \operatorname{sgn}(m)\, \frac{\ln(1 + \mu |m|)}{\ln(1 + \mu)} \quad \text{for } |m| \le 1.$$

It is approximately linear at low |m|.

It is approximately logarithmic at large |m|.

Po-Ning Chen@cm.nctu Chapter 3-54

[Plot of F_μ-law(m): output versus input m over [-1, 1]; a steep rise near the origin, saturating toward ±1.]

Po-Ning Chen@cm.nctu Chapter 3-55

[Plot: a piecewise-linear approximation to the μ-law. Horizontal axis: 14-bit uniform quantization input (-8159 ... 8159), with segment boundaries at ±31, ±95, ±223, ±479, ±991, ±2015, ±4063, ±8159; vertical axis: 8-bit PCM code output (-128 ... 128).]

14-bit uniform quantization (2^13 = 8192)


Po-Ning Chen@cm.nctu Chapter 3-56

Compressor of μ-law (assume nonnegative m)
Raised Input Values Compressed Code Word
Chord Step
Bits:12 11 10 9 8 7 6 5 4 3 2 1 0 Bits: 6 5 4 3 2 1 0
0 0 0 0 0 0 0 1 a b c d x 0 0 0 a b c d
0 0 0 0 0 0 1 a b c d x x 0 0 1 a b c d
0 0 0 0 0 1 a b c d x x x 0 1 0 a b c d
0 0 0 0 1 a b c d x x x x 0 1 1 a b c d
0 0 0 1 a b c d x x x x x 1 0 0 a b c d
0 0 1 a b c d x x x x x x 1 0 1 a b c d
0 1 a b c d x x x x x x x 1 1 0 a b c d
1 a b c d x x x x x x x x 1 1 1 a b c d
Raised Input = Input + 33 = Input + 21H
(For negative m, the raised input becomes Input − 33.)
An additional (7th) sign bit is used to indicate whether the input signal is positive (1) or negative (0).
Po-Ning Chen@cm.nctu Chapter 3-57

Expander of μ-law (assume nonnegative m)

Compressed Code Word Raised Output Values


Chord Step
Bits:6 5 4 3 2 1 0 Bits:12 11 10 9 8 7 6 5 4 3 2 1 0
0 0 0 a b c d 0 0 0 0 0 0 0 1 a b c d 1
0 0 1 a b c d 0 0 0 0 0 0 1 a b c d 1 0
0 1 0 a b c d 0 0 0 0 0 1 a b c d 1 0 0
0 1 1 a b c d 0 0 0 0 1 a b c d 1 0 0 0
1 0 0 a b c d 0 0 0 1 a b c d 1 0 0 0 0
1 0 1 a b c d 0 0 1 a b c d 1 0 0 0 0 0
1 1 0 a b c d 0 1 a b c d 1 0 0 0 0 0 0
1 1 1 a b c d 1 a b c d 1 0 0 0 0 0 0 0
Output = Raised Output − 33
Note that the combination of a compressor and an expander is called a compander.
Po-Ning Chen@cm.nctu Chapter 3-58

Comparison of the A-law and μ-law specified in G.711.
[Plot: the A-law and μ-law compression curves over the input range (-1, 1); the two curves nearly coincide.]

Po-Ning Chen@cm.nctu Chapter 3-59

3.7 Coding
After the quantizer provides a symbol representing one of
256 possible levels (8 bits of information) at each sampled
time, the encoder will transform the symbol (or several
symbols) into a code character (or code word) that is
suitable for transmission over a noisy channel.
Example: binary code.
[Figure: the 8-bit code word 1 1 1 0 0 1 0 0 represented as a two-level waveform (figure legend: 0 = change, 1 = no change).]

Po-Ning Chen@cm.nctu Chapter 3-60

3.7 Coding
Example: ternary code (pseudo-binary code).
[Figure: the bit sequence 0 0 0 1 1 0 1 1 encoded onto three levels A, B, C, yielding the code character A C A B B C B B.]
Through the help of coding, the receiver may be able to
detect (or even correct) the transmission errors due to noise.
For example, it is impossible to receive ABABBABB,
since this is not a legitimate code word (character).
Po-Ning Chen@cm.nctu Chapter 3-61

3.7 Coding
Example of an error-correcting code: the three-times repetition code (used to protect the Bluetooth packet header).

0 0 0 1 1 0 1 1 → 000,000,000,111,111,000,111,111
The majority law can then be applied at the receiver to correct a one-bit error in each triple.
Channel (error-correcting) codes are designed to compensate for channel noise, while line codes are simply used as the electrical representation of a binary data stream on the line.

Po-Ning Chen@cm.nctu Chapter 3-62

3.7 Line codes
(a) Unipolar nonreturn-to-zero
(NRZ) signaling
(b) Polar nonreturn-to-zero (NRZ)
signaling
(c) Unipolar return-to-zero (RZ)
signaling
(d) Bipolar return-to-zero (BRZ)
signaling
(e) Split-phase (Manchester code)

Po-Ning Chen@cm.nctu Chapter 3-63

3.7 Derivation of PSD


From slide 1-117 of Chapter 1, the general formula for the PSD is
$$\mathrm{PSD} = \lim_{T\to\infty} \frac{1}{2T}\, E\big[\,S_{2T}(f)\, S_{2T}^*(f)\,\big], \quad \text{where } s_{2T}(t) = s(t)\,\mathbf{1}\{|t| \le T\}.$$
For a line-coded signal, $s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b)$, where $g(t) = 0$ outside $[0, T_b)$;
hence
$$S(f) = G(f) \sum_{n=-\infty}^{\infty} a_n\, e^{-j2\pi f n T_b} \quad\text{and}\quad S_{2NT_b}(f) = G(f) \sum_{n=-N}^{N-1} a_n\, e^{-j2\pi f n T_b}.$$
$$\mathrm{PSD} = \lim_{N\to\infty} \frac{1}{2NT_b}\, |G(f)|^2 \sum_{n=-N}^{N-1} \sum_{m=-N}^{N-1} E[a_n a_m^*]\, e^{-j2\pi f (n-m) T_b}$$
Po-Ning Chen@cm.nctu Chapter 3-64

$$\mathrm{PSD} = \lim_{N\to\infty} \frac{1}{2NT_b}\, |G(f)|^2 \sum_{n=-N}^{N-1} \sum_{m=-N}^{N-1} E[a_n a_m^*]\, e^{-j2\pi f (n-m) T_b}$$
$$= |G(f)|^2 \lim_{N\to\infty} \frac{1}{2NT_b} \sum_{m=-N}^{N-1} \sum_{n=-\infty}^{\infty} R_a(n-m)\, e^{-j2\pi f (n-m) T_b}$$
$$= |G(f)|^2 \lim_{N\to\infty} \frac{1}{2NT_b} \sum_{m=-N}^{N-1} \sum_{k=-\infty}^{\infty} R_a(k)\, e^{-j2\pi f k T_b}$$
$$= |G(f)|^2\, \frac{1}{T_b} \sum_{k=-\infty}^{\infty} R_a(k)\, e^{-j2\pi f k T_b}$$

For i.i.d. $\{a_n\}$ with mean $\mu_a$ and variance $\sigma_a^2$,
$$\frac{1}{T_b} \sum_{k=-\infty}^{\infty} R_a(k)\, e^{-j2\pi f k T_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b} \sum_{k=-\infty}^{\infty} e^{-j2\pi f k T_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big).$$
Po-Ning Chen@cm.nctu Chapter 3-65

3.7 Power spectra of line codes

Unipolar nonreturn-to-zero (NRZ) signaling
Also named on-off signaling.
Disadvantage: waste of power due to its nonzero mean (the PSD does not approach zero at zero frequency).
$\{a_n\}_{n=-\infty}^{\infty}$ is an i.i.d. equiprobable 0/1 sequence, and
$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b \\ 0, & \text{otherwise.} \end{cases}$$

Po-Ning Chen@cm.nctu Chapter 3-66

3.7 Power spectra of line codes
PSD of unipolar NRZ:
$$\mathrm{PSD}_{\text{U-NRZ}} = |G(f)|^2 \left[ \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right]$$
with $|G(f)|^2 = A^2 T_b^2\,\mathrm{sinc}^2(f T_b)$, $\sigma_a^2 = 1/4$, $\mu_a = 1/2$:
$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2(f T_b) + \frac{A^2}{4}\,\mathrm{sinc}^2(f T_b) \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big)$$
$$= \frac{A^2 T_b}{4}\,\mathrm{sinc}^2(f T_b)\left(1 + \frac{1}{T_b}\,\delta(f)\right)$$
(only the k = 0 impulse survives, since sinc²(k) = 0 for every nonzero integer k).
Po-Ning Chen@cm.nctu Chapter 3-67

3.7 Power spectra of line codes
Polar nonreturn-to-zero (NRZ) signaling
The PSD of unipolar NRZ above suggests that a zero-mean data sequence is preferred.
$\{a_n\}_{n=-\infty}^{\infty}$ is an i.i.d. equiprobable ±1 sequence, and
$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b \\ 0, & \text{otherwise.} \end{cases}$$
$$\mathrm{PSD}_{\text{P-NRZ}} = |G(f)|^2 \left[ \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right] = A^2 T_b\,\mathrm{sinc}^2(f T_b)$$
Po-Ning Chen@cm.nctu Chapter 3-68
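The closed-form PSD can be checked against a periodogram estimate of a simulated polar NRZ waveform. This is a sketch with arbitrary simulation parameters (NumPy only); the averaged estimate should follow A²Tb·sinc²(fTb) apart from estimation noise.

```python
import numpy as np

rng = np.random.default_rng(2)
A, Tb = 1.0, 1.0
sps = 16                      # samples per bit
dt = Tb / sps
nbits, ntrials = 256, 200

acc = 0.0
for _ in range(ntrials):
    a = rng.choice([-1.0, 1.0], size=nbits)       # i.i.d. equiprobable +/-1
    s = np.repeat(A * a, sps)                     # polar NRZ waveform
    S = np.fft.rfft(s) * dt                       # approximate continuous-time spectrum
    acc += np.abs(S) ** 2 / (nbits * Tb)          # periodogram, accumulated over trials

psd_est = acc / ntrials
f = np.fft.rfftfreq(nbits * sps, d=dt)
psd_formula = A**2 * Tb * np.sinc(f * Tb) ** 2
for k in (0, 32, 64, 128):                        # compare at a few frequencies
    print(f"f={f[k]:.3f}  est={psd_est[k]:.4f}  formula={psd_formula[k]:.4f}")
```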

3.7 Power spectra of line codes
Unipolar return-to-zero (RZ) signaling
An attractive feature of this line code is the presence of delta functions at f = -1/Tb, 0, 1/Tb in the PSD, which can be used for bit-timing recovery at the receiver.
Disadvantage: it requires 3 dB more power than polar return-to-zero signaling.
$\{a_n\}_{n=-\infty}^{\infty}$ is an i.i.d. equiprobable 0/1 sequence, and
$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise.} \end{cases}$$

Po-Ning Chen@cm.nctu Chapter 3-69

3.7 Power spectra of line codes
PSD of unipolar RZ:
$$\mathrm{PSD}_{\text{U-RZ}} = |G(f)|^2 \left[ \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right]$$
with $|G(f)|^2 = \dfrac{A^2 T_b^2}{4}\,\mathrm{sinc}^2\!\Big(\dfrac{f T_b}{2}\Big)$, $\sigma_a^2 = 1/4$, $\mu_a = 1/2$:
$$= \frac{A^2 T_b^2}{4}\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big)\left[ \frac{1}{4T_b} + \frac{1}{4T_b^2} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right]$$
$$= \frac{A^2 T_b}{16}\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big)\left[ 1 + \frac{1}{T_b} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right]$$
Po-Ning Chen@cm.nctu Chapter 3-70

3.7 Power spectra of line codes
Bipolar return-to-zero (BRZ) signaling
Also named alternate mark inversion (AMI) signaling: successive 1s are represented by pulses of alternating polarity, and 0s by no pulse.
No DC component and relatively insignificant low-frequency components in the PSD.
$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise} \end{cases} \quad \text{and } a_n \in \{-1, 0, +1\}.$$

Po-Ning Chen@cm.nctu Chapter 3-71

3.7 Power spectra of line codes
PSD of BRZ
{an} is no longer i.i.d. For equiprobable data bits,
$$E[a_n^2] = \tfrac12 (0) + \tfrac14 (-1)^2 + \tfrac14 (+1)^2 = \tfrac12$$
$$E[a_n a_{n+1}] = \tfrac14 (-1) = -\tfrac14$$
$$E[a_n a_{n+2}] = \tfrac1{16}(1)(1) + \tfrac1{16}(1)(-1) + \tfrac1{16}(-1)(1) + \tfrac1{16}(-1)(-1) = 0$$
$$\vdots$$
$$E[a_n a_{n+m}] = 0 \quad \text{for } m > 1.$$
Po-Ning Chen@cm.nctu Chapter 3-72

$$\mathrm{PSD}_{\text{BRZ}} = |G(f)|^2\, \frac{1}{T_b} \sum_{k=-\infty}^{\infty} R_a(k)\, e^{-j2\pi f k T_b}$$
$$= \frac{A^2 T_b^2}{4}\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big) \cdot \frac{1}{T_b} \left[ \frac12 - \frac14\, e^{\,j2\pi f T_b} - \frac14\, e^{-j2\pi f T_b} \right]$$
$$= \frac{A^2 T_b}{8}\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big)\, \big(1 - \cos(2\pi f T_b)\big)$$

Po-Ning Chen@cm.nctu Chapter 3-73

3.7 Power spectra of line codes
Split-phase (Manchester code)
This signaling suppresses the DC component and has relatively insignificant low-frequency components, regardless of the signal statistics.
Notably, for BRZ the DC component is suppressed only when the data have the right statistics.
$\{a_n\}_{n=-\infty}^{\infty}$ is an i.i.d. equiprobable ±1 sequence, and
$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ -A, & T_b/2 \le t < T_b \\ 0, & \text{otherwise.} \end{cases}$$
Po-Ning Chen@cm.nctu Chapter 3-74

3.7 Power spectra of line codes
PSD of the Manchester code:
$$\mathrm{PSD}_{\text{Manchester}} = |G(f)|^2 \left[ \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right]$$
with $|G(f)|^2 = A^2 T_b^2\,\mathrm{sinc}^2\!\Big(\dfrac{f T_b}{2}\Big) \sin^2\!\Big(\dfrac{\pi f T_b}{2}\Big)$, $\sigma_a^2 = 1$, $\mu_a = 0$:
$$= A^2 T_b\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big) \sin^2\!\Big(\frac{\pi f T_b}{2}\Big)$$

Po-Ning Chen@cm.nctu Chapter 3-75


Adjust A so that $\int_{-\infty}^{\infty} \mathrm{PSD}(f)\, df = 1$ (normalize the transmission power):
$$\mathrm{PSD}_{\text{U-NRZ, normalized}} = \frac{T_b}{2}\,\mathrm{sinc}^2(f T_b) + \frac{1}{2}\,\delta(f)$$
$$\mathrm{PSD}_{\text{P-NRZ, normalized}} = T_b\,\mathrm{sinc}^2(f T_b)$$
$$\mathrm{PSD}_{\text{U-RZ}} = \frac{T_b}{4}\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big)\left[ 1 + \frac{1}{T_b} \sum_{k=-\infty}^{\infty} \delta\!\Big(f - \frac{k}{T_b}\Big) \right]$$
$$\mathrm{PSD}_{\text{BRZ}} = T_b\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big) \sin^2(\pi f T_b)$$
$$\mathrm{PSD}_{\text{Manchester}} = T_b\,\mathrm{sinc}^2\!\Big(\frac{f T_b}{2}\Big) \sin^2\!\Big(\frac{\pi f T_b}{2}\Big)$$
Po-Ning Chen@cm.nctu Chapter 3-76

From the integration standpoint,
$$\int \mathrm{PSD}(f)\, df = \int \mathrm{PSD}\!\Big(\frac{f'}{T_b}\Big) \frac{df'}{T_b} \quad \text{for } f' = f T_b, \qquad \text{and} \qquad T_b\,\delta(f T_b)\, df = \delta(f')\, df'.$$
In terms of the normalized frequency f':
$$\mathrm{PSD}_{\text{U-NRZ, normalized}} = \frac12\,\mathrm{sinc}^2(f') + \frac12\,\delta(f')$$
$$\mathrm{PSD}_{\text{P-NRZ, normalized}} = \mathrm{sinc}^2(f')$$
$$\mathrm{PSD}_{\text{U-RZ}} = \frac14\,\mathrm{sinc}^2\!\Big(\frac{f'}{2}\Big) + \frac14 \sum_{k=-\infty}^{\infty} \mathrm{sinc}^2\!\Big(\frac{k}{2}\Big)\,\delta(f' - k)$$
$$\mathrm{PSD}_{\text{BRZ}} = \mathrm{sinc}^2\!\Big(\frac{f'}{2}\Big) \sin^2(\pi f')$$
$$\mathrm{PSD}_{\text{Manchester}} = \mathrm{sinc}^2\!\Big(\frac{f'}{2}\Big) \sin^2\!\Big(\frac{\pi f'}{2}\Big)$$
Po-Ning Chen@cm.nctu Chapter 3-77

[Plot: the normalized PSDs of U-NRZ, P-NRZ, U-RZ, BRZ, and Manchester signaling versus the normalized frequency f' = f Tb, for 0 ≤ f' ≤ 2.]

Po-Ning Chen@cm.nctu Chapter 3-78

3.7 Differential encoding with unipolar NRZ line
coding
1 = no change and 0 = change.
Input sequence: on; differentially encoded sequence: dn, with
$$d_n = \bar{d}_{n-1} \oplus o_n = d_{n-1} \oplus \bar{o}_n$$
(i.e., dn = d(n-1) when on = 1, and dn is the complement of d(n-1) when on = 0).

Po-Ning Chen@cm.nctu Chapter 3-79
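A small sketch of this differential encoding rule (1 = no change, 0 = change) and the matching decoder, which recovers on from transitions in dn and is therefore insensitive to an overall polarity inversion (the test sequence is arbitrary):

```python
def diff_encode(bits, d0=0):
    d, out = d0, []
    for o in bits:
        d = d if o == 1 else 1 - d          # 1 = no change, 0 = change
        out.append(d)
    return out

def diff_decode(d_bits, d0=0):
    prev, out = d0, []
    for d in d_bits:
        out.append(1 if d == prev else 0)   # same level -> 1, transition -> 0
        prev = d
    return out

o = [1, 0, 0, 1, 1, 0, 1]
d = diff_encode(o)
print(d, diff_decode(d), diff_decode([1 - b for b in d], d0=1))  # decoding is polarity-insensitive
```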

3.7 Regeneration
Regenerative repeater for PCM system
It can completely remove the distortion if the decision
making device makes the right decision (on 1 or 0).

Po-Ning Chen@cm.nctu Chapter 3-80

3.7 Decoding & Filtering
After regenerating the received pulses one last time, the receiver decodes them and regenerates the original message signal (with an acceptable quantization error).
Finally, a lowpass reconstruction filter whose cutoff frequency is equal to the message bandwidth W is applied at the end (to remove the unnecessary high-frequency components due to quantization).

Po-Ning Chen@cm.nctu Chapter 3-81

3.8 Noise Consideration in PCM Systems


Two major noise sources in PCM systems
(Message-independent) Channel noise
(Message-dependent) Quantization noise
The quantization noise is often under the designer's control, and can be made negligible by taking an adequate number of quantization levels.

Po-Ning Chen@cm.nctu Chapter 3-82

3.8 Noise Consideration in PCM Systems
The main effect of channel noise is to introduce bit errors.
Notably, the symbol error rate is quite different from
the bit error rate.
A symbol error may be caused by a one-bit error, a two-bit error, a three-bit error, and so on; so, in general, one cannot derive the symbol error rate from the bit error rate (or vice versa) unless some special assumption is made.
Considering the reconstruction of original analog signal,
a bit error in the most significant bit is more harmful
than a bit error in the least significant bit.

Po-Ning Chen@cm.nctu Chapter 3-83

3.8 Error Threshold


Eb/N0
Eb: Transmitted signal energy per information bit
E.g., information bit is encoded using three-times
repetition code, in which each code bit is transmitted
using one BPSK symbol with symbol energy Ec.
Then Eb = 3 Ec.
N0: One-sided noise spectral density
The bit-error-rate is a function of Eb/N0 and transmission
speed (and implicitly bandwidth, etc).

Po-Ning Chen@cm.nctu Chapter 3-84

3.8 Error Threshold
Influence of Eb/N0 on the BER at a transmission rate of 10^5 bps:

Eb/N0 (dB)   BER      About one error in every
4.3          10^-2    10^-3 second
8.4          10^-4    10^-1 second
10.6         10^-6    10 seconds
12.0         10^-8    20 minutes
13.0         10^-10   1 day
14.0         10^-12   3 months

The output signal-to-noise ratio of an analog FM receiver without pre/de-emphasis is typically 40-50 dB. Pre/de-emphasis may reduce the requirement by 13 dB.
Po-Ning Chen@cm.nctu Chapter 3-85

3.8 Error Threshold


Error threshold
The minimum Eb/N0 that achieves the required BER.
By knowing the error threshold, one can always add a regenerative repeater before Eb/N0 drops below the threshold; hence, long-distance transmission becomes feasible.
In analog transmission, by contrast, the distortion accumulates along a long-distance link.

Po-Ning Chen@cm.nctu Chapter 3-86

3.9 Time-division multiplexing
An important feature of the sampling process is the conservation of time: in principle, the communication link is used only at the sampling instants.
Hence, it is feasible to put samples of other messages between adjacent samples of this message on a time-shared basis.
This forms the time-division multiplexing (TDM) system: a joint utilization of a common communication link by a plurality of independent message sources.

Po-Ning Chen@cm.nctu Chapter 3-87

3.9 Time-division multiplexing

The commutator (1) takes a narrow sample of each of the N


input messages at a rate fs slightly higher than 2W, where W
is the cutoff frequency of the anti-aliasing filter, and (2)
interleaves these N samples inside the sampling interval Ts.

Po-Ning Chen@cm.nctu Chapter 3-88

3.9 Time-division multiplexing

The price we pay for TDM is that N samples be squeezed in


a time slot of duration Ts.

Po-Ning Chen@cm.nctu Chapter 3-89

3.9 Time-division multiplexing


Synchronization is essential for a satisfactory operation of
the TDM system.
One possible procedure to synchronize the transmitter
and receiver clocks is to set aside a code element or
pulse at the end of a frame, and to transmit this pulse
every other frame only.

Po-Ning Chen@cm.nctu Chapter 3-90

Example 3.2 The T1 system
T1 system
Carries 24 voice channels (64 kbps each) with regenerative repeaters spaced at approximately 2-km intervals.
Each voice signal is essentially limited to the band from 300 to 3100 Hz.
Anti-aliasing filter with W = 3.1 kHz
Sampling rate = 8 kHz (> 2W = 6.2 kHz)
The ITU-T G.711 μ-law is used with μ = 255.
Each frame consists of 24 × 8 + 1 = 193 bits, where a single bit is added at the end of the frame for the purpose of synchronization.

Po-Ning Chen@cm.nctu Chapter 3-91

Example 3.2 The T1 system


In addition to carrying the 193 bits per frame every 1/(8 kHz) = 125 μs (i.e., 193 × 8000 = 1.544 Mbps), a telephone system must also pass signaling information such as dial pulses and on/off-hook status.
The least significant bit of each voice channel is deleted in every sixth frame, and a signaling bit is inserted in its place.

Po-Ning Chen@cm.nctu Chapter 3-92

3.10 Digital multiplexers

The introduction of digital multiplexer enables us to


combine digital signals of various natures, such as
computer data, digitized voice signals, digitized facsimile
and television signals.
Po-Ning Chen@cm.nctu Chapter 3-93

3.10 Digital multiplexers


The multiplexing of digital signals is accomplished by
using a bit-by-bit interleaving procedure with a selector
switch that sequentially takes a (or more) bit from each
incoming line and then applies it to the high-speed common
line.

Po-Ning Chen@cm.nctu Chapter 3-94

3.10 Digital multiplexers
Digital multiplexers are categorized into two major groups.
1. 1st Group: Multiplex digital computer data for TDM
transmission over public switched telephone network.
Require the use of modem technology.
2. 2nd Group: Multiplex low-bit-rate digital voice data
into high-bit-rate voice stream.
Accommodate in the hierarchy that is varying from
one country to another.
Usually, the hierarchy starts at 64 Kbps, named a
digital signal zero (DS0).

Po-Ning Chen@cm.nctu Chapter 3-95

3.10 North American digital TDM hierarchy


The first level hierarchy
Combine 24 DS0 to obtain a primary rate DS1 at 1.544
Mb/s (T1 transmission)
The second-level multiplexer
Combine 4 DS1 to obtain a DS2 with rate 6.312 Mb/s
The third-level multiplexer
Combine 7 DS2 to obtain a DS3 at 44.736 Mb/s
The fourth-level multiplexer
Combine 6 DS3 to obtain a DS4 at 274.176 Mb/s
The fifth-level multiplexer
Combine 2 DS4 to obtain a DS5 at 560.160 Mb/s
Po-Ning Chen@cm.nctu Chapter 3-96

3.10 North American digital TDM hierarchy
The combined bit rate is higher than the multiple of the
incoming bit rates because of the addition of bit stuffing
and control signals.

Po-Ning Chen@cm.nctu Chapter 3-97

3.10 North American digital TDM hierarchy


Basic problems involved in the design of a multiplexing system:
Synchronization must be maintained to properly recover the interleaved digital signals.
Framing must be designed so that the individual signals can be identified at the receiver.
Variation in the bit rates of the incoming signals must be considered in the design.
For example, a 0.01% variation in the propagation delay, produced by a 1-degree decrease in temperature, results in 100 fewer pulses in a 1000-km cable in which each pulse occupies about 1 meter of the cable.
Po-Ning Chen@cm.nctu Chapter 3-98

3.10 Digital multiplexers
Synchronization and rate variation problems may be
resolved by bit stuffing.
Example 3.3. AT&T M12 (second-level multiplexer)
24 control bits are stuffed, and separated by sequences
of 48 data bits (12 from each DS1 input).

Po-Ning Chen@cm.nctu Chapter 3-99

Po-Ning Chen@cm.nctu Chapter 3-100

Example 3.3 AT&T M12 multiplexer
The control bits are labeled F, M, and C.
Frame markers: In sequence of F0F1F0F1F0F1F0F1, where F0
= 0 and F1 = 1.
Subframe markers: In sequence of M0M1M1M1, where M0 = 0
and M1 = 1.
Stuffing indicators: In sequences of CI CI CI CII CII CII CIII CIII
CIII CIV CIV CIV, where all three bits of Cj equal 1s indicate
that a stuffing bit is added in the position of the first
information bit associated with the first DS1 bit stream that
follows the F1-control bit in the same subframe, and three 0s
in CjCjCj imply no stuffing.
The receiver should use majority law to check whether a
stuffing bit is added.
Po-Ning Chen@cm.nctu Chapter 3-101

Example 3.3 AT&T M12 multiplexer


These stuffed bits can be used to balance (or maintain) the
nominal input bit rates and nominal output bit rates.
S = nominal bit stuffing rate
The rate at which stuffing bits are inserted when both
the input and output bit rates are at their nominal
values.
fin = nominal input bit rate
fout = nominal output bit rate
M = number of bits in a frame
L = number of information bits (input bits) for one input
stream in a frame

Po-Ning Chen@cm.nctu Chapter 3-102

Example 3.3 AT&T M12 multiplexer
For M12 framing,
$$f_{\text{in}} = 1.544 \text{ Mbps}, \quad f_{\text{out}} = 6.312 \text{ Mbps}, \quad M = 288 \times 4 + 24 = 1176 \text{ bits}, \quad L = 288 \text{ bits}.$$
Duration of a frame:
$$\frac{M}{f_{\text{out}}} = S\,\frac{L - 1}{f_{\text{in}}} + (1 - S)\,\frac{L}{f_{\text{in}}}$$
(when a stuffing bit is inserted, one of the L bit positions carries a stuffed bit instead of an information bit)
$$\Rightarrow\quad S = L - M\,\frac{f_{\text{in}}}{f_{\text{out}}} = 288 - 1176 \times \frac{1.544}{6.312} = 0.334601$$
Po-Ning Chen@cm.nctu Chapter 3-103

Example 3.3 AT&T M12 multiplexer


Allowable tolerance to maintain the nominal output bit rate
A sufficient condition for the existence of an S ∈ [0, 1] matching the nominal output bit rate:
$$\max_{S\in[0,1]} \left[ S\,\frac{L-1}{f_{\text{in}}} + (1-S)\,\frac{L}{f_{\text{in}}} \right] \;\ge\; \frac{M}{f_{\text{out}}} \;\ge\; \min_{S\in[0,1]} \left[ S\,\frac{L-1}{f_{\text{in}}} + (1-S)\,\frac{L}{f_{\text{in}}} \right]$$
$$\Longleftrightarrow\quad \frac{L}{f_{\text{in}}} \ge \frac{M}{f_{\text{out}}} \ge \frac{L-1}{f_{\text{in}}} \quad\Longleftrightarrow\quad \frac{L}{M}\, f_{\text{out}} \ge f_{\text{in}} \ge \frac{L-1}{M}\, f_{\text{out}}$$
$$\frac{288}{1176} \times 6.312 = 1.5458 \;\ge\; f_{\text{in}} \;\ge\; \frac{287}{1176} \times 6.312 = 1.54043 \quad (\text{Mbps})$$

Po-Ning Chen@cm.nctu Chapter 3-104

Example 3.3 AT&T M12 multiplexer
This results in an allowable tolerance range of
$$1.5458 - 1.54043 = 6.312/1176 = 5.36735 \text{ kbps}.$$
In terms of ppm (pulses per million pulses),
$$\frac{10^6 - b_{\text{ppm}}}{1.54043} = \frac{10^6}{1.544} = \frac{10^6 + a_{\text{ppm}}}{1.5458}$$
$$\Rightarrow\quad a_{\text{ppm}} \approx 1164.8 \quad\text{and}\quad b_{\text{ppm}} \approx 2312.18$$
This tolerance is already much larger than the expected variation in the bit rate of an incoming DS1 bit stream.
Po-Ning Chen@cm.nctu Chapter 3-105
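The numbers above follow from a couple of lines of arithmetic; the sketch below recomputes S, the allowable f_in range, and the corresponding ppm deviations (small differences from the slide's figures come only from rounding the 1.5458/1.54043 bounds before converting to ppm):

```python
f_in_nom, f_out = 1.544e6, 6.312e6       # bits/s
M, L = 1176, 288

S = L - M * f_in_nom / f_out
f_in_max = L * f_out / M                 # S = 0 (no stuffing needed)
f_in_min = (L - 1) * f_out / M           # S = 1 (stuff in every frame)

print(f"S = {S:.6f}")
print(f"f_in range: {f_in_min/1e6:.5f} .. {f_in_max/1e6:.5f} Mb/s "
      f"(width {(f_in_max - f_in_min)/1e3:.5f} kb/s)")
print(f"tolerance: +{1e6*(f_in_max/f_in_nom - 1):.1f} ppm / "
      f"-{1e6*(1 - f_in_min/f_in_nom):.1f} ppm")
```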

3.11 Virtues, limitations, and modifications of


PCM
Virtues of PCM systems
Robustness to channel noise and interference
Efficient regeneration of coded signal along the transmission
path
Efficient exchange of increased channel bandwidth for
improved signal-to-noise ratio, obeying an exponential law.
Uniform format for different kinds of baseband signal
transmission; hence, facilitate their integration in a common
network.
Message sources are easily dropped or reinserted in a TDM
system.
Secure communication through the use of encryption/decryption.
Po-Ning Chen@cm.nctu Chapter 3-106

3.11 Virtues, limitations, and modifications of PCM
Two limitations of PCM systems (in the past):
Complexity
Bandwidth
Nowadays, with the advance of VLSI technology, and with the availability of wideband communication channels (such as optical fiber) and compression techniques (to reduce the bandwidth demand), these two limitations are greatly relieved.

Po-Ning Chen@cm.nctu Chapter 3-107

3.12 Delta modulation


Delta Modulation (DM)
The message is oversampled (at a rate much higher than
the Nyquist rate) to purposely increase the correlation
between adjacent samples.
Then, the difference between adjacent samples is
encoded instead of the sample value itself.

Po-Ning Chen@cm.nctu Chapter 3-108

Po-Ning Chen@cm.nctu Chapter 3-109

3.12 Math analysis of delta modulation


Let m[n] = m(nTs), and let mq[n] be the DM approximation of m(t) at time nTs. Then
$$m_q[n] = m_q[n-1] + e_q[n] = \sum_{j=-\infty}^{n} e_q[j],$$
where
$$e_q[n] = \Delta\, \operatorname{sgn}\!\big(m[n] - m_q[n-1]\big).$$
The transmitted code word is $\big\{\big[(e_q[n]/\Delta) + 1\big]/2\big\}_{n=-\infty}^{\infty}$ (i.e., one bit per sample).

Po-Ning Chen@cm.nctu Chapter 3-110
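A minimal linear delta modulator and its accumulator-based receiver, directly implementing the recursion above (the test signal, rate, and step size are illustrative assumptions; a practical receiver would follow the accumulator with a lowpass filter):

```python
import numpy as np

fs, delta = 8000.0, 0.1          # oversampling rate and fixed step size (illustrative)
t = np.arange(0, 0.02, 1 / fs)
m = np.sin(2 * np.pi * 100 * t)  # message, heavily oversampled

mq_prev, bits, mq = 0.0, [], []
for sample in m:
    eq = delta * np.sign(sample - mq_prev) if sample != mq_prev else delta
    bits.append(1 if eq > 0 else 0)          # transmitted code word: (eq/delta + 1)/2
    mq_prev += eq                            # m_q[n] = m_q[n-1] + e_q[n]
    mq.append(mq_prev)

# receiver: accumulate +/-delta according to the received bits
rec = np.cumsum([delta if b else -delta for b in bits])
print("max |m - m_q| =", np.max(np.abs(m - np.array(mq))))
print("encoder and receiver staircases match:", np.allclose(rec, mq))
```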

3.12 Delta modulation
$$m_q[n] = m_q[n-1] + e_q[n] = \sum_{j=-\infty}^{n} e_q[j], \quad \text{where } e_q[n] = \Delta\, \operatorname{sgn}\!\big(m[n] - m_q[n-1]\big).$$
The principal virtue of delta modulation is its simplicity: it only requires the use of a comparator, a one-bit quantizer, and an accumulator.
[Block diagram of the DM transmitter and receiver; the input m(t) has bandwidth W.]

Po-Ning Chen@cm.nctu Chapter 3-111

3.12 Delta modulation


Distortions due to delta modulation
Slope overload distortion
Granular noise

Po-Ning Chen@cm.nctu Chapter 3-112

3.12 Delta modulation
Slope overload distortion
To avoid slope overload distortion, we require
$$\frac{\Delta}{T_s} \ge \max \left| \frac{dm(t)}{dt} \right|$$
(otherwise the staircase cannot track the signal: the slope-overload condition).
So, increasing the step size Δ reduces the slope-overload distortion.
An alternative solution is to use a dynamic (adaptive) Δ. (A delta modulator with fixed step size is often referred to as a linear delta modulator, its fixed slope being a basic attribute of linearity.)

Po-Ning Chen@cm.nctu Chapter 3-113

3.12 Delta modulation


Granular noise
mq[n] will hunt around a relatively flat segment of m(t).
A remedy is to reduce the step size.

A tradeoff in the choice of step size therefore results between slope-overload distortion and granular noise.

Po-Ning Chen@cm.nctu Chapter 3-114

3.12 Delta-sigma modulation
Delta-sigma modulation
In fact, the delta modulation distortion can be reduced
by increasing the correlation between samples.
This can be achieved by integrating the message signal
m(t) prior to delta modulation.
The integration process is equivalent to a pre-
emphasis of the low-frequency content of the input
signal.

Po-Ning Chen@cm.nctu Chapter 3-115

3.12 Delta-sigma modulation


A side benefit of
integration-before-
delta-modulation,
which is named
delta-sigma
modulation, is that
the receiver design Move the accumulator to the transmitter.
is further simplified
(at the expense of a
more complex
transmitter).

Po-Ning Chen@cm.nctu Chapter 3-116

3.12 Delta-sigma modulation

A straightforward
structure

Since integration is
a linear operation,
the two integrators
before comparator
can be combined
into one after
comparator.

Po-Ning Chen@cm.nctu Chapter 3-117

3.12 Math analysis of delta-sigma modulation


Let $i[n] = \int_{-\infty}^{nT_s} m(t)\, dt$, and let $i_q[n]$ be the DM approximation of $i(t) = \int_{-\infty}^{t} m(\tau)\, d\tau$ at time $nT_s$.
Then $i_q[n] = i_q[n-1] + \varepsilon_q[n]$, where $\varepsilon_q[n] = \Delta\, \operatorname{sgn}\!\big(i[n] - i_q[n-1]\big)$.
The transmitted code word is $\{[(\varepsilon_q[n]/\Delta) + 1]/2\}_{n=-\infty}^{\infty}$.
Since
$$\varepsilon_q[n] = i_q[n] - i_q[n-1] \approx i[n] - i[n-1] = \int_{(n-1)T_s}^{nT_s} m(t)\, dt \approx m(nT_s)\, T_s,$$
we only need a lowpass filter to smooth out the received signal at the receiver end. (See the previous slide.)
Po-Ning Chen@cm.nctu Chapter 3-118

3.12 Delta modulation
Final notes
Delta modulation trades channel bandwidth (e.g., much
higher sampling rate) for reduced system complexity
(e.g., the receiver only demands a lowpass filter).
Can we trade increased system complexity for a reduced
channel bandwidth? Yes, by means of prediction
technique.
In Section 3.13, we will introduce the basics of
prediction technique. Its application will be addressed in
subsequent sections.

Po-Ning Chen@cm.nctu Chapter 3-119

3.13 Linear prediction

Consider a finite-duration impulse response (FIR) discrete-time filter of prediction order p, which forms the linear prediction
$$\hat{x}[n] = \sum_{k=1}^{p} w_k\, x[n-k].$$

Po-Ning Chen@cm.nctu Chapter 3-120

3.13 Linear prediction
Design objective
Find the filter coefficients w1, w2, ..., wp so as to minimize the index of performance
$$J = E\big[e^2[n]\big], \quad \text{where } e[n] = x[n] - \hat{x}[n].$$

Po-Ning Chen@cm.nctu Chapter 3-121

Let {x[n]} be stationary with autocorrelation function R_X(k). Then
$$J = E\left[ \Big( x[n] - \sum_{k=1}^{p} w_k\, x[n-k] \Big)^2 \right]$$
$$= E[x^2[n]] - 2 \sum_{k=1}^{p} w_k\, E\big[x[n]\, x[n-k]\big] + \sum_{k=1}^{p} \sum_{j=1}^{p} w_k w_j\, E\big[x[n-k]\, x[n-j]\big]$$
$$= R_X[0] - 2\sum_{k=1}^{p} w_k R_X[k] + 2\sum_{k=1}^{p}\sum_{j>k} w_k w_j R_X[k-j] + \sum_{k=1}^{p} w_k^2 R_X[0]$$
Setting the partial derivatives to zero,
$$\frac{\partial J}{\partial w_i} = -2 R_X[i] + 2\sum_{j=i+1}^{p} w_j R_X[i-j] + 2\sum_{k=1}^{i-1} w_k R_X[k-i] + 2 w_i R_X[0] = -2 R_X[i] + 2 \sum_{j=1}^{p} w_j R_X[i-j] = 0$$

Po-Ning Chen@cm.nctu Chapter 3-122

$$\sum_{j=1}^{p} w_j R_X[i-j] = R_X[i] \quad \text{for } 1 \le i \le p.$$

The above optimality equations are called the Wiener-Hopf equations for linear prediction.
They can be rewritten in matrix form as
$$\begin{bmatrix} R_X[0] & R_X[1] & \cdots & R_X[p-1] \\ R_X[1] & R_X[0] & \cdots & R_X[p-2] \\ \vdots & \vdots & \ddots & \vdots \\ R_X[p-1] & R_X[p-2] & \cdots & R_X[0] \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_p \end{bmatrix} = \begin{bmatrix} R_X[1] \\ R_X[2] \\ \vdots \\ R_X[p] \end{bmatrix}$$
or $\mathbf{R}_X \mathbf{w} = \mathbf{r}_X$, with optimal solution $\mathbf{w}_o = \mathbf{R}_X^{-1} \mathbf{r}_X$.

Po-Ning Chen@cm.nctu Chapter 3-123
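A NumPy sketch of the Wiener-Hopf solution, here for a process whose autocorrelation is assumed to be R_X[k] = ρ^|k| (an illustrative AR(1)-like model, not from the slides); the optimal order-p predictor then concentrates its weight on the first tap (w1 ≈ ρ).

```python
import numpy as np

rho, p = 0.9, 4
R = lambda k: rho ** abs(k)                      # assumed autocorrelation model

R_X = np.array([[R(i - j) for j in range(p)] for i in range(p)])   # Toeplitz matrix
r_X = np.array([R(i) for i in range(1, p + 1)])

w_o = np.linalg.solve(R_X, r_X)                  # w_o = R_X^{-1} r_X
J_min = R(0) - w_o @ r_X                         # resulting minimum mean-square error
print("w_o =", np.round(w_o, 4), " J_min =", round(J_min, 4))
# For this model, w_o ~= [rho, 0, 0, 0] and J_min ~= 1 - rho**2
```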

3.13 Toeplitz (square) matrix


Any square matrix of the form
$$\begin{bmatrix} a_0 & a_1 & \cdots & a_{p-1} \\ a_1 & a_0 & \cdots & a_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{p-1} & a_{p-2} & \cdots & a_0 \end{bmatrix}_{p \times p}$$
is said to be Toeplitz.
Such a Toeplitz matrix is uniquely determined by the p elements a0, a1, ..., a(p-1).

Po-Ning Chen@cm.nctu Chapter 3-124

3.13 Linear adaptive predictor
The optimal w_o can only be obtained with knowledge of the autocorrelation function.
Question: What if the autocorrelation function is unknown?
Answer: Use a linear adaptive predictor.

Po-Ning Chen@cm.nctu Chapter 3-125

3.13 The idea behind linear adaptive predictor


To minimize J, we should update each wi toward the bottom of the J-bowl.
Define the gradient
$$g_i = \frac{\partial J}{\partial w_i}.$$
When gi > 0, wi should be decreased; on the contrary, wi should be increased when gi < 0.
Hence, we may define the update rule as
$$w_i[n+1] = w_i[n] - \frac{1}{2}\,\mu\, g_i[n],$$
where μ is a chosen constant step size and the factor 1/2 is included only for convenience of analysis.
Po-Ning Chen@cm.nctu Chapter 3-126

gi[n] can be approximated by replacing the ensemble averages with instantaneous values:
$$g_i[n] = \frac{\partial J}{\partial w_i} = -2 R_X(i) + 2\sum_{j=1}^{p} w_j R_X(i-j)$$
$$\approx -2\, x[n]\, x[n-i] + 2 \sum_{j=1}^{p} w_j[n]\, x[n-j]\, x[n-i] = -2\, x[n-i] \Big( x[n] - \sum_{j=1}^{p} w_j[n]\, x[n-j] \Big)$$
Therefore
$$w_i[n+1] = w_i[n] + \mu\, x[n-i] \Big( x[n] - \sum_{j=1}^{p} w_j[n]\, x[n-j] \Big) = w_i[n] + \mu\, x[n-i]\, e[n].$$

Po-Ning Chen@cm.nctu Chapter 3-127

3.13 Structure of linear adaptive predictor

Po-Ning Chen@cm.nctu Chapter 3-128

3.13 Least mean square
The pair below constitutes the popular least-mean-square (LMS) algorithm for linear adaptive prediction:
$$w_j[n+1] = w_j[n] + \mu\, x[n-j]\, e[n]$$
$$e[n] = x[n] - \sum_{j=1}^{p} w_j[n]\, x[n-j]$$

Po-Ning Chen@cm.nctu Chapter 3-129
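An LMS adaptive-predictor sketch implementing exactly this pair of equations (NumPy; the AR(1) test signal, step size μ, and run length are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
N, p, mu, rho = 20000, 3, 0.01, 0.9

# generate an AR(1) test signal x[n] = rho*x[n-1] + noise
x = np.zeros(N)
for n in range(1, N):
    x[n] = rho * x[n - 1] + rng.standard_normal()

w = np.zeros(p)
err = np.zeros(N)
for n in range(p, N):
    past = x[n - p:n][::-1]               # x[n-1], ..., x[n-p]
    e = x[n] - w @ past                   # e[n] = x[n] - sum_j w_j[n] x[n-j]
    w += mu * past * e                    # w_j[n+1] = w_j[n] + mu x[n-j] e[n]
    err[n] = e

print("learned w:", np.round(w, 3))                                       # close to [rho, 0, 0]
print("prediction MSE (last half):", round(np.mean(err[N//2:] ** 2), 3))  # near the driving-noise variance
```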

3.14 Differential pulse-code modulation


Basic idea behind differential pulse-code modulation:
Adjacent samples often exhibit a high degree of correlation.
If we can remove this redundancy before encoding, a more efficient coded signal results.
One way to remove the redundancy is to use linear prediction.
Specifically, we encode e[n] instead of m[n], where
$$e[n] = m[n] - \hat{m}[n]$$
and $\hat{m}[n]$ is the linear prediction of m[n].
Po-Ning Chen@cm.nctu Chapter 3-130

$$\text{Quantization noise power} = \frac{1}{12}\Big(\frac{2\, m_{\max}}{L}\Big)^2 = \frac{m_{\max}^2}{3 L^2}$$
3.14 DPCM
For DPCM, the quantization error is on e[n], rather than on m[n] as for PCM. Since e[n] has a smaller dynamic range (a smaller m_max for the same L), the quantization error q[n] is expected to be smaller.

Po-Ning Chen@cm.nctu Chapter 3-131

3.14 DPCM
Derivation:
$$e_q[n] = e[n] + q[n]$$
$$m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n] = m[n] + q[n]$$
So we have the same relation between mq[n] and m[n] as in PCM, but with a smaller q[n].

Po-Ning Chen@cm.nctu Chapter 3-132

3.14 DPCM
Notes
DM system can be treated as a special case of DPCM.

Prediction filter = single delay


Quantizer => single-bit

Po-Ning Chen@cm.nctu Chapter 3-133

3.14 DPCM
Distortions due to DPCM
Slope overload distortion
The input signal changes too rapidly for the prediction
filter to track it.
Granular noise

Po-Ning Chen@cm.nctu Chapter 3-134

3.14 Processing Gain
The DPCM system can be described by
$$m_q[n] = m[n] + q[n],$$
so the output signal-to-noise ratio is
$$\mathrm{SNR}_O = \frac{E[m^2[n]]}{E[q^2[n]]}.$$
We can rewrite SNR_O as
$$\mathrm{SNR}_O = \frac{E[m^2[n]]}{E[e^2[n]]} \cdot \frac{E[e^2[n]]}{E[q^2[n]]} = G_p\, \mathrm{SNR}_Q,$$
where $e[n] = m[n] - \hat{m}[n]$ is the prediction error.

Po-Ning Chen@cm.nctu Chapter 3-135

3.14 Processing Gain


In this terminology,
$$G_p = \frac{E[m^2[n]]}{E[e^2[n]]} \quad \text{(processing gain)}$$
$$\mathrm{SNR}_Q = \frac{E[e^2[n]]}{E[q^2[n]]} \quad \text{(signal-to-quantization-noise ratio)}$$
Notably, SNR_Q can be treated as the SNR of the system $e_q[n] = e[n] + q[n]$.

Po-Ning Chen@cm.nctu Chapter 3-136

3.14 Processing Gain
Usually, the contribution of SNR_Q to SNR_O is fixed and limited: one additional bit in the quantizer results in a 6-dB improvement.
G_p is the processing gain due to prediction: the better the prediction, the larger G_p.

Po-Ning Chen@cm.nctu Chapter 3-137

3.14 DPCM
Final notes on DPCM
Comparing DPCM with PCM in the case of voice
signals, the improvement is around 4-11 dB, depending
on the prediction order.
The greatest improvement occurs in going from no
prediction to first-order prediction, with some additional
gain resulting from increasing the prediction order up to
4 or 5, after which little additional gain is obtained.
For the same sampling rate (8 kHz) and signal quality, DPCM may provide a saving of about 8-16 kbps compared to standard PCM (64 kbps).
Po-Ning Chen@cm.nctu Chapter 3-138

3.14 DPCM
Source: IEEE Communications Magazine, September 1997.
[Chart: speech quality (unacceptable / poor / fair / good / excellent) versus bit rate (2-64 kb/s) for various speech coders, including PCM (G.711), ADPCM (G.726, G.727), G.728, G.729, G.723.1, IS-641, GSM, GSM/2, IS-54, IS-96, JDC, JDC/2, FS-1016, FS-1015, and MELP 2.4. Quality generally improves with bit rate; ADPCM at 32 kb/s achieves quality comparable to 64-kb/s PCM.]
Po-Ning Chen@cm.nctu Chapter 3-139

3.15 Adaptive differential pulse-code modulation


Adaptive prediction is used in DPCM.
Can we also combine adaptive quantization into DPCM to yield a voice quality comparable to PCM at a bit rate of 32 kbps? The answer is YES, as the previous figure shows.
32 kbps: 4 bits per sample at an 8-kHz sampling rate
64 kbps: 8 bits per sample at an 8-kHz sampling rate
So, "adaptive" in ADPCM means being responsive to the changing level and spectrum of the input speech signal.

Po-Ning Chen@cm.nctu Chapter 3-140

3.15 Adaptive quantization
Adaptive quantization refers to a quantizer that operates with a time-varying step size Δ[n].
Δ[n] is adjusted according to the power of the input sample m[n] (power = variance when m[n] is zero-mean), i.e., proportional to
$$\sigma[n] = \sqrt{E[m^2[n]]}.$$
In practice, we can only obtain an estimate of E[m²[n]].

Po-Ning Chen@cm.nctu Chapter 3-141

3.15 Adaptive quantization


The estimate of E[m2[n]] can be done in two ways:
Adaptive quantization with forward estimation (AQF)
Estimate based on unquantized samples of the input
signals.
Adaptive quantization with backward estimation (AQB)
Estimate based on quantized samples of the input
signals.

Po-Ning Chen@cm.nctu Chapter 3-142

3.15 AQF
AQF is in principle a more accurate estimator. However it
requires
an additional buffer to store unquantized samples for the
learning period.
explicit transmission of level information to the receiver
(the receiver, even without noise, only has the quantized
samples).
a processing delay (around 16 ms for speech) due to
buffering and other operations from the use of AQF.
The above requirements can be relaxed by using AQB.

Po-Ning Chen@cm.nctu Chapter 3-143

3.15 AQB

A possible drawback of a feedback system is its potential instability.
However, stability of this system can be guaranteed if mq[n] is bounded.
Po-Ning Chen@cm.nctu Chapter 3-144

3.15 APF and APB
Likewise, the prediction approach used in ADPCM can be
classified into:
Adaptive prediction with forward estimation (APF)
Prediction based on unquantized samples of the input
signals.
Adaptive prediction with backward estimation (APB)
Prediction based on quantized samples of the input
signals.
The pros and cons of APF/APB are the same as those of AQF/AQB.
APB and AQB are the preferred combination in practical applications.

Po-Ning Chen@cm.nctu Chapter 3-145

3.15 ADPCM

Adaptive prediction
with backward
estimation (APB).

Po-Ning Chen@cm.nctu Chapter 3-146

3.16 Computer experiment: Adaptive delta
modulation This figure may be incorrect.
e[n ] eq [n ]

In this section,
eq [n 1]
the simplest form
of ADPCM
modulation with
AQB is
simulated,
namely, ADM
with AQB.
Comparison with
LDM (linear DM)
where step size is
fixed will also be
performed.
Po-Ning Chen@cm.nctu Chapter 3-147

3.16 Computer experiment: Adaptive delta


modulation I thus fixed it in this slide.
e[n ] eq [n ]

In this section, eq [n 1]
the simplest form
of ADPCM
modulation with
AQB is
simulated,
namely, ADM
with AQB.
Comparison with
LDM (linear DM)
where step size is
fixed will also be
performed.
Po-Ning Chen@cm.nctu Chapter 3-148

3.16 Computer experiment: Adaptive delta modulation
$$\Delta[n] = \begin{cases} \Delta[n-1]\left(1 + \dfrac{1}{2}\,\dfrac{e_q[n-1]}{e_q[n]}\right), & \Delta[n-1] \ge \Delta_{\min} \\ \Delta_{\min}, & \Delta[n-1] < \Delta_{\min} \end{cases}$$
where Δ[n] is the step size at iteration n, and eq[n] is the one-bit quantizer output, which equals ±1.
Simulation setup:
$$m(t) = 10 \sin\!\Big(2\pi \frac{f_s}{100}\, t\Big), \qquad \Delta_{\text{LDM}} = 1, \qquad \Delta_{\min} = \frac{1}{8}.$$
Po-Ning Chen@cm.nctu Chapter 3-149
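A compact simulation sketch of this adaptive step-size rule against a fixed-step LDM at the same one-bit-per-sample rate (the signal and step parameters follow the setup quoted above; the rest of the harness is an assumption of mine). A lower tracking error at the same rate is equivalent to comparable quality at a lower rate, as the next slide observes.

```python
import numpy as np

fs, N = 100.0, 400                       # m(t) = 10 sin(2*pi*(fs/100)*t), sampled at fs
t = np.arange(N) / fs
m = 10 * np.sin(2 * np.pi * (fs / 100) * t)

def run(adaptive, delta0=1.0, dmin=1/8):
    mq, delta, eq_prev, out = 0.0, delta0, 1.0, []
    for x in m:
        eq = 1.0 if x >= mq else -1.0                    # one-bit quantizer output (+/-1)
        if adaptive:
            # step-size recursion as written on the slide above
            delta = delta * (1 + 0.5 * eq_prev / eq) if delta >= dmin else dmin
        mq += delta * eq                                 # accumulator
        eq_prev = eq
        out.append(mq)
    return np.array(out)

for name, adaptive in (("LDM", False), ("ADM", True)):
    rms = np.sqrt(np.mean((m - run(adaptive)) ** 2))
    print(f"{name}: rms tracking error = {rms:.3f}")
```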

3.16 Computer experiment: Adaptive delta


modulation

LDM ADM

Observation: ADM can achieve performance comparable to that of LDM at a much lower bit rate.
Po-Ning Chen@cm.nctu Chapter 3-150

3.17 MPEG audio coding standard
The ADPCM and various voice coding techniques
introduced above did not consider the human auditory
perception.
In practice, taking human auditory perception into account can further improve the system performance (from the human standpoint).
The MPEG-1 standard is capable of achieving transparent, perceptually lossless compression of stereophonic audio signals at high sampling rates.
Human subjective tests show that a 6-to-1 compression ratio is perceptually indistinguishable from the original.
Po-Ning Chen@cm.nctu Chapter 3-151

3.17 Characteristics of human auditory system


Psychoacoustic characteristics of the human auditory
system
Critical band
The inner ear will scale the power spectra of
incoming signals non-linearly in the form of limited
frequency bands called the critical bands.
Roughly, the inner ear can be modeled as 25
selective overlapping band-pass filters with
bandwidth < 100Hz for the lowest audible
frequencies and up to 5kHz for the highest audible
frequencies.

Po-Ning Chen@cm.nctu Chapter 3-152

3.17 Characteristics of human auditory system
Auditory masking
When a low-level signal (the maskee) and a high-
level signal (the masker) occur simultaneously (in
the same critical band), and are close to each other in
frequency, the low-level signal will be made
inaudible (i.e., masked) by the high-level signal, if
the low-level one lies below a masking threshold.

Po-Ning Chen@cm.nctu Chapter 3-153

3.17 Characteristics of human auditory system


The masking threshold is frequency-dependent.
[Figure: within a critical band, the signal-to-mask ratio (SMR) and the SNR of an R-bit quantizer are indicated.]
NMR (noise-to-mask ratio) = SMR − SNR.


Po-Ning Chen@cm.nctu Chapter 3-154

3.17 MPEG audio coding standard

Po-Ning Chen@cm.nctu Chapter 3-155

3.17 MPEG audio coding standard


Time-to-frequency mapping network
Divide the audio signal into a proper number of subbands, which is
a compromise design for computational efficiency and perceptual
performance.
Psychoacoustic model
Analyze the spectral content of the input audio signal and thereby
compute the signal-to-mask ratio.
Quantizer-coder
Decide how to apportion the available number of bits for the
quantization of the subband signals.
Frame packing unit
Assemble the quantized audio samples into a decodable bit stream.

Po-Ning Chen@cm.nctu Chapter 3-156

3.18 Summary and discussion
Sampling: transforms an analog waveform into a discrete-time, continuous-amplitude signal.
Nyquist rate
Quantization: transforms a discrete-time, continuous-amplitude signal into discrete data.
Humans can only detect finite intensity differences.
PAM, PDM and PPM
TDM (time-division multiplexing)
PCM, DM, DPCM, ADPCM
Additional considerations in MPEG audio coding
Po-Ning Chen@cm.nctu Chapter 3-157
