INTRODUCTION
_______________________________________
A digital transmission system may or may not include conversions
between analog and digital signals (sampling, A/D and D/A
conversion).
The transmitter end of the transmission chain converts a digital
bit-stream into an analog waveform which is sent to the physical
channel, which is in practice analog. The receiving end converts
the received analog waveform back to digital format.
The transmission chain includes:
Source coding/decoding: Reducing the bit-rate of the
information signal by reducing the redundancy, i.e.,
compression.
Channel coding/decoding: Error control coding,
compensating the effect of bit errors that inevitably take
place in a practical transmission channel.
In any 'sensible' channel, it is possible to get arbitrarily
low bit-error-rate by increasing redundancy in the
transmitted signal and using error control coding.
One of the central results of information theory is that
source coding and channel coding can, in principle, be
carried out independently of each other.
Modulation/demodulation: converting a digital signal into
an analog waveform.
Channel, which distorts the transmitted signal and adds noise
and interference to it.
When designing the system, the primary target is to minimize the
used bandwidth and/or the transmitted signal power.
83050E/3
[Figure: block diagram of a digital transmission system: analog or digital input, transmission chain, channel with noise and interference, synchronization, analog or digital output; performance is measured by the bit-error-rate.]
Definition of Information
__________________________________________________
Definition: h(a_m) = log2(1/p_m) = -log2(p_m) >= 0
Why logarithm? Because the information of independent symbols should add, and the logarithm turns the product of their probabilities into a sum.
Entropy
__________________________________________________
Source symbol a_i   Probability p_i
Space 0.186
A 0.064
B 0.013
C 0.022
D 0.032
E 0.103
F 0.021
G 0.015
H 0.047
I 0.058
J 0.001
K 0.005
L 0.032
M 0.020
N 0.057
O 0.063
P 0.015
Q 0.001
R 0.048
S 0.051
T 0.080
U 0.023
V 0.008
W 0.017
X 0.001
Y 0.016
Z 0.001
(1) q = 1/2
(2) q = 0.1
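The definitions above can be checked numerically. A minimal sketch (function names are illustrative) computing the binary entropy for q = 1/2 and q = 0.1, and the entropy of the letter table:

```python
import math

def entropy(probs):
    """H(X) = -sum(p * log2(p)) in bits; 0*log2(0) is taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def binary_entropy(q):
    """Entropy of a binary source with P(1) = q, P(0) = 1 - q."""
    return entropy([q, 1 - q])

print(binary_entropy(0.5))   # 1.0 bit, the maximum for a binary source
print(binary_entropy(0.1))   # ~0.469 bits, a biased source carries less information

# Letter probabilities from the table above (space, A..Z)
letter_probs = [0.186, 0.064, 0.013, 0.022, 0.032, 0.103, 0.021, 0.015,
                0.047, 0.058, 0.001, 0.005, 0.032, 0.020, 0.057, 0.063,
                0.015, 0.001, 0.048, 0.051, 0.080, 0.023, 0.008, 0.017,
                0.001, 0.016, 0.001]
print(entropy(letter_probs))  # about 4.1 bits/symbol, well below log2(27) ~ 4.75
```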
More Examples
__________________________________________________
H(X) = -0·log2(0) - 1·log2(1) = 0   (with the convention 0·log2(0) = 0)
H(X) <= log2(K), where K is the number of source symbols
Huffman Code
__________________________________________________
[Flowchart: Huffman code construction: combine the two least probable symbols, assigning 0 and 1 to the two combined codewords; if more than one node remains, repeat; otherwise stop.]
H(X) <= L(X) < H(X) + 1
When blocks of n source symbols are encoded jointly, the bound per symbol tightens to
H(X) <= L(X) < H(X) + 1/n.
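A minimal sketch of the Huffman construction described above, computing only the codeword lengths and checking the bound H(X) <= L(X) < H(X) + 1. The heap-based formulation and the example probabilities are illustrative choices:

```python
import heapq, math

def huffman_lengths(probs):
    """Return binary Huffman codeword lengths for the given probabilities."""
    # Heap items: (probability, tie-breaker, symbol indices in this subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tie = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # two least probable nodes...
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:                 # ...every symbol below them gets one bit longer
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, tie, s1 + s2))
        tie += 1
    return lengths

probs = [0.4, 0.2, 0.2, 0.1, 0.1]
lengths = huffman_lengths(probs)
H = -sum(p * math.log2(p) for p in probs)
L = sum(p * l for p, l in zip(probs, lengths))
print(H, L)   # the average codeword length L satisfies H <= L < H + 1
```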
Run-Length Codes
__________________________________________________
Lempel-Ziv Codes
__________________________________________________
Principle
The method uses a codebook consisting of a number of
source symbol sequences.
The source symbol stream is scanned one symbol at a
time. Scanning continues until the beginning of the
still-uncoded symbol sequence is no longer found in the
codebook.
This sequence can be represented as the concatenation
of one of the words in the codebook and one additional
symbol. This new symbol sequence will be added to the
codebook.
The same process is repeated starting from the
beginning of the uncoded source symbol stream.
Dictionary location   Dictionary contents   Codeword
 1 (0001)             0                     0000 0
 2 (0010)             1                     0000 1
 3 (0011)             00                    0001 0
 4 (0100)             001                   0011 1
 5 (0101)             10                    0010 0
 6 (0110)             000                   0011 0
 7 (0111)             101                   0101 1
 8 (1000)             0000                  0110 0
 9 (1001)             01                    0001 1
10 (1010)             010                   1001 0
11 (1011)             00001                 1000 1
12 (1100)             100                   0101 0
13 (1101)             0001                  0110 1
14 (1110)             0100                  1010 0
15 (1111)             0010                  0100 0
16                    01001                 1110 1
Each codeword is the 4-bit dictionary location of the longest previously seen prefix, followed by the one new symbol.
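The parsing principle above can be sketched as follows. This follows the LZ78 variant, where each phrase is the longest already-known prefix plus one new symbol; the exact codeword bit format varies between implementations:

```python
def lz78_parse(data):
    """Split `data` into LZ78 phrases, each a previous dictionary
    entry extended by one new symbol."""
    dictionary = {"": 0}          # index 0 = the empty phrase
    phrases = []                  # (index of known prefix, new symbol)
    phrase = ""
    for sym in data:
        if phrase + sym in dictionary:
            phrase += sym         # keep extending while the prefix is known
        else:
            phrases.append((dictionary[phrase], sym))
            dictionary[phrase + sym] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing phrase already in the dictionary
        phrases.append((dictionary[phrase], ""))
    return phrases

print(lz78_parse("000101110010100101"))
```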
X = Y = {0, 1}. The conditional probabilities between input and output (transition probabilities) are p_{Y|X}(y|x) = 1 - p for y = x and p for y != x, where p is the bit-error probability.
[Figure: transition graph with branches x=0 -> y=0 and x=1 -> y=1 of probability 1 - p, and crossover branches of probability p.]
H(X|y) = E[-log2 p_{X|Y}(X|y)] = -Σ_{x∈X} p_{X|Y}(x|y)·log2 p_{X|Y}(x|y)
H(X|Y) = Σ_{y∈Y} H(X|y)·p_Y(y) = -Σ_{y∈Y} p_Y(y) Σ_{x∈X} p_{X|Y}(x|y)·log2 p_{X|Y}(x|y)
C = r_s·C_s, where r_s is the symbol rate and C_s the capacity per channel use.
I(X;Y) = H(Y) + p·log2(p) + (1 - p)·log2(1 - p)
Cs = 1 + p·log2(p) + (1 - p)·log2(1 - p)
According to the channel coding theorem, arbitrarily reliable transmission is possible whenever the information rate satisfies R < C.
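The binary symmetric channel capacity per channel use, Cs = 1 + p·log2(p) + (1 - p)·log2(1 - p), can be evaluated with a short sketch:

```python
import math

def bsc_capacity(p):
    """Capacity per channel use of a binary symmetric channel with
    bit-error probability p."""
    if p in (0.0, 1.0):
        return 1.0   # noiseless (or perfectly inverted) channel
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

for p in (0.0, 0.01, 0.1, 0.5):
    print(p, bsc_capacity(p))
# Cs = 1 for p = 0 and drops to 0 at p = 0.5, where the output
# is statistically independent of the input.
```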
H(N) <= (1/2)·log2(2πeσ²)
The upper bound is achieved if and only if N is Gaussian
distributed.
H(Y|X) = -Σ_{x∈X} p_X(x) ∫_Y f_{Y|X}(y|x)·log2 f_{Y|X}(y|x) dy
Mutual information and channel capacity are defined according to
the earlier models based on the definitions of entropy and
conditional entropy. The channel capacity depends naturally on the
channel alphabet X and the noise level. Examples later.
H(Y|X) = -∫_X f_X(x) ∫_Y f_{Y|X}(y|x)·log2 f_{Y|X}(y|x) dy dx
Y =X +N
I(X;Y) = H(Y) - H(Y|X) = H(Y) - H(N)
H(N) = (1/2)·log2(2πeσ²)
H(Y) <= (1/2)·log2(2πe(S + σ²)), where S is the signal power
Cs = (1/2)·log2(2πe(S + σ²)) - (1/2)·log2(2πeσ²) = (1/2)·log2(1 + S/σ²)
[Figure: example constellations: 2-AM, 8-AM, 16-AM, 8-AMPM, 32-AMPM.]
The figures show also the true channel capacity for the case
of continuous-valued input and output.
Complex Constellations:
B(f) = 1 for |f| <= W
       0 for |f| > W
[Figure: ideal lowpass response over -W … W.]
C = W·log2(1 + S/N)
Example: C = W·log2(1 + S/N) = 3300 Hz · log2(1 + 10000)
       ≈ 43.9 kbps
SNR/dB C/kbps
20 22.0
30 32.9
40 43.9
50 54.8
60 65.8
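The table values follow directly from C = W·log2(1 + SNR); a sketch reproducing them for the 3300 Hz telephone channel:

```python
import math

def shannon_capacity(W_hz, snr_db):
    """C = W * log2(1 + SNR), with the SNR given in dB."""
    snr = 10 ** (snr_db / 10)
    return W_hz * math.log2(1 + snr)

# Reproduce the table above for a W = 3300 Hz channel
for snr_db in (20, 30, 40, 50, 60):
    print(snr_db, round(shannon_capacity(3300, snr_db) / 1000, 1), "kbps")
```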
TRANSMISSION CHANNELS
Transmission Media
[Figure: far-end crosstalk (FEXT) coupling between transmitter (TX) and receiver (RX) pairs in a cable.]
Radio Channel
[Figure: radio channel with a direct path and a reflected path.]
[Figure: Doppler spectrum of the received carrier, spread around fc from fc - fD to fc + fD.]
The following figure shows a model for a two-path fading
channel, which includes the delays and complex gains of the
two paths, as well as the modulation by the Doppler
spectrum.
[Figure: two-path fading channel model: the input u(t) passes through delays τ1 and τ2 with complex gains A1 and A2, producing r1(t) and r2(t), which are summed at the output.]
Example
λ = c/f = (3·10^8 m/s) / (10^9 Hz) = 0.3 m,
the velocity
v = 27.8 m/s
and the maximum Doppler shift
fD = v/λ = (27.8 m/s) / (0.3 m) = 92.6 Hz.
The bandwidth of the received carrier is then about 185 Hz,
since the Doppler shift can act in either direction,
depending on whether the reflected beam arrives from the
front or from behind.
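The arithmetic of the example as a sketch; the 100 km/h vehicle speed is an assumed source of v = 27.8 m/s:

```python
def max_doppler_shift(carrier_hz, speed_mps):
    """fD = v / lambda, where lambda = c / fc."""
    c = 3e8                        # speed of light, m/s
    wavelength = c / carrier_hz
    return speed_mps / wavelength

fd = max_doppler_shift(1e9, 100 / 3.6)   # 1 GHz carrier, 100 km/h
print(fd)   # ~92.6 Hz; the received spectrum spans about 2*fd
```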
Bandwidth is limited by
Regulation (especially in case of radio communications)
Bandwidth of the medium.
[Figure: a binary signal and a 16-level signal with symbol interval T.]
Pulse Waveforms
x(t) = Σ_k a_k·p(t - kT)
G_x(f) = (1/T)·|P(f)|²·G_a(e^{j2πfT})
Redundancy
There are many (at least tens of) different line coding
methods, often based on ad hoc principles.
In the following, we consider mostly the case of binary data.
Codes can be classified, e.g., by the signal levels used, as
follows:
unipolar: +a, 0
polar (antipodal): +a, -a
bipolar (pseudoternary): +a, 0, -a.
[Figure: example pulse shapes on the interval -T/2 … T/2 for the three signal-level classes.]
AMI-Line Code
0 => 0
1 => + and - alternately
Example: Incoming   0 1 1 1 1 1 1 0 0 0 0
         AMI-coded  0 + - + - + - 0 0 0 0
Received Decoded
+ 1
0 0
- 1
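A sketch of the AMI rule above (0 -> 0, 1 -> alternating ±1) and of decoding by magnitude:

```python
def ami_encode(bits):
    """Alternate Mark Inversion: 0 -> 0, 1 -> +1/-1 alternately."""
    out, polarity = [], +1
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(polarity)
            polarity = -polarity
    return out

def ami_decode(symbols):
    """Decoding needs only the magnitude: 0 -> 0, +1/-1 -> 1."""
    return [abs(s) for s in symbols]

bits = [0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
coded = ami_encode(bits)
print(coded)            # [0, 1, -1, 1, -1, 1, -1, 0, 0, 0, 0]
assert ami_decode(coded) == bits
```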
HDBk Codes
0: a transmitted 0-symbol
B: a valid AMI + or - symbol
V: a + or - symbol violating the AMI principle, i.e., it
   has the same polarity as the most recent +/- symbol
Block Codes
2^k <= L^n
kBnT Codes
n   N   log2(N)   k   Efficiency (k/n)
2 2 1 1 50%
4 6 2.58 2 50%
6 20 4.32 4 67%
8 70 6.13 6 75%
10 252 7.97 7 70%
Input: 1 0 0 0 1 1 0 0 1
Coded: 1 0 0 1 0 1 1 0 0 1 1
[Figure: PAM system block diagram. Transmitter: bits Bn, coder, symbols Ak, transmit filter g(t), signal S(t); channel b(t) with additive noise N(t). Receiver: received signal R(t), receive filter f(t), output Q(t), sampler driven by timing recovery, samples Qk, slicer (decision device), estimated symbols, decoder, bits Bn.]
Transmitter Blocks
The transmitter filter forms a continuous-time signal from the
symbol sequence A_m. The impulse response of the filter is
g(t); in the following it is also called the transmitted
pulse shape.
S(t) = Σ_{m=-∞}^{∞} A_m·g(t - mT)
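The superposition S(t) = Σ A_m g(t - mT) can be evaluated directly; the rectangular pulse here is an illustrative choice, not a pulse shape endorsed elsewhere in the notes:

```python
def pam_signal(symbols, g, T, t):
    """S(t) = sum_m A_m * g(t - m*T), for one time instant t."""
    return sum(a * g(t - m * T) for m, a in enumerate(symbols))

def rect(t, T=1.0):
    """Rectangular transmit pulse of duration T (illustrative)."""
    return 1.0 if 0 <= t < T else 0.0

symbols = [+1, -1, +3, +1]
# Sampling mid-symbol reproduces the symbol values, since the
# rectangular pulses do not overlap.
samples = [pam_signal(symbols, rect, 1.0, 0.5 + m) for m in range(4)]
print(samples)
```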
Example: [Figure: a symbol sequence Ak with levels -1 … +3 is fed to the transmit filter with pulse shape g(t), producing the waveform S(t) over 0 … 4T.]
Channel
h(t) = b(t) ∗ g(t) = ∫ b(τ)·g(t - τ) dτ
Example
B(f) = 1 for |f| < W
       0 for |f| >= W
[Figure: ideal lowpass channel response over -W … W.]
Receiver Blocks
In general, the receiver design is more critical than the
transmitter design, because the channel attenuates and distorts
the signal, and the signal must be recovered as well as
possible to minimize the bit error rate.
Receiver filter
1. Filters out the adjacent channels and out-of-band noise and
interference.
2. Shapes the overall pulse.
3. As an equalizer, compensates for the linear distortion of the
channel, e.g., by using the inverse transfer function. The
transfer function of the channel is usually unknown, so
adaptive methods are important.
B(f) = 1 for |f| < W
       0 for |f| >= W
G(f) = 1/(2W) for |f| < W
       0 for |f| >= W
g(t) = sin(2πWt)/(2πWt) = sinc(2Wt)
It has zero crossings at each multiple of T = 1/(2W), except at
t = 0. The pulses are overlapping, but the requirements are
fulfilled anyway.
This is not a practical solution, as will be discussed later,
but it illustrates the principle.
[Figure: the sinc pulse plotted over -3 … 3 symbol intervals.]
Intersymbol Interference
Let's consider two adjacent symbols, a0 = 1 and a1 = 2. The
corresponding pulses and their effects on the overall
waveform are shown in the following figure.
[Figure: the two pulses and their superposition, plotted over -3 … 3 symbol intervals.]
p(0) = 1
p(mT) = 0 when m = ±1, ±2, …
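A numerical check of the criterion for the sinc pulse discussed earlier, with time measured in symbol intervals T:

```python
import math

def sinc_pulse(t, T=1.0):
    """g(t) = sin(pi*t/T) / (pi*t/T), the ideal bandlimited pulse."""
    if t == 0:
        return 1.0
    x = math.pi * t / T
    return math.sin(x) / x

# Nyquist criterion: p(0) = 1 and p(mT) = 0 for all m != 0,
# so samples taken at t = kT see no intersymbol interference.
print([sinc_pulse(m) for m in range(-3, 4)])
```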
[Figure: a pulse satisfying the Nyquist criterion, plotted over -3 … 3 symbol intervals.]
[Figure: example binary sequences 1 1 0 1 1 and the corresponding intervals (T, 2T) between zero crossings.]
[Figure: pulse spectrum with passband edge f_p and stopband edge f_s around 1/(2T).]
P(f) = T                                        for 0 <= |f| <= (1-α)/(2T)
P(f) = (T/2)·{1 - sin[(πT/α)·(|f| - 1/(2T))]}   for (1-α)/(2T) <= |f| <= (1+α)/(2T)
P(f) = 0                                        for |f| > (1+α)/(2T)
The impulse response can be shown to be:
p(t) = [sin(πt/T)/(πt/T)] · [cos(παt/T)/(1 - (2αt/T)²)]
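A sketch of the raised-cosine impulse response above, with the singular points t = ±T/(2α) handled by their limiting value; the default roll-off α = 0.5 is an illustrative choice:

```python
import math

def raised_cosine(t, T=1.0, alpha=0.5):
    """p(t) = sinc(t/T) * cos(pi*alpha*t/T) / (1 - (2*alpha*t/T)^2)."""
    x = t / T
    denom = 1.0 - (2.0 * alpha * x) ** 2
    if abs(denom) < 1e-12:
        # t = +/- T/(2*alpha): the limiting value (pi/4)*sinc(1/(2*alpha));
        # here x = +/- 1/(2*alpha) != 0, so the sinc factor is well defined.
        return (math.pi / 4.0) * math.sin(math.pi * x) / (math.pi * x)
    s = 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    return s * math.cos(math.pi * alpha * x) / denom

# The pulse keeps the Nyquist property: p(0) = 1 and p(mT) = 0 for m != 0
print(raised_cosine(0.0), raised_cosine(1.0), raised_cosine(2.0))
```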
Eye Diagram
An eye diagram consists of many synchronized, overlaid
traces of small sections (a few symbols) of a signal. It is
assumed that symbols are random and independent, so all
the possible symbol combinations are expected to have
occurred.
They are used both for checking the system operation and for
evaluating system performance in research and development
work.
[Figure: eye diagram with annotated measures a, b and c.]
Eye Diagram (continued)
The effect of excess bandwidth (raised-cosine pulses, 2-level
PAM):
[Figure: eye diagrams for 25% and 100% excess bandwidth.]
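A sketch of how the overlaid traces can be formed from a sampled waveform; the function name and the toy waveform are illustrative:

```python
def eye_traces(signal, samples_per_symbol, span_symbols=2):
    """Slice a sampled waveform into overlapping segments of
    `span_symbols` symbol intervals; plotting all segments on the
    same axes produces the eye diagram."""
    seg = samples_per_symbol * span_symbols
    return [signal[i:i + seg]
            for i in range(0, len(signal) - seg + 1, samples_per_symbol)]

# Toy waveform: 8 samples per symbol, alternating polarity
sps = 8
signal = [(+1 if (k // sps) % 2 == 0 else -1) for k in range(8 * sps)]
traces = eye_traces(signal, sps)
print(len(traces), len(traces[0]))
```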
U_k = [N(t) ∗ f(t)]_{t=kT}