
CLASS NO. 22
Coding: a systematic scheme for the replacement of the original information symbol sequence by a sequence of code symbols, in such a way as to permit its reconstruction.

- Cryptography: to preserve secrecy
- Source coding: to compress data
- Line coding: to improve spectral characteristics
- Error-control coding: to permit robust transmission of data
  - Error-detection coding: allows re-transmission of erroneous data
  - Forward Error Correction (FEC) coding: to correct errors even without a feedback channel

Channel coding comprises:
- Error detection and retransmission
- Forward error correction

Terminal Connectivity
- Simplex
- Half duplex
- Full duplex

Figure : Terminal connectivity classifications (a) Simplex (b) Half-duplex (c) Full-duplex
- To reduce the error probability (BER)
- To provide a coding gain: a reduction in the bit energy to noise density ratio (Eb/N0) required in the coded system compared to an uncoded system, for a given BER and at the same data rate
Figure : BER (log scale) versus bit energy to noise density ratio (dB) for uncoded and coded systems; the coding gain is the separation between the two curves at the required BER.
Add redundancy, such that the original information can be reconstructed from the corrupted signal.
e.g. for a 2-bit binary message we add check bits (a.k.a. parity bits):

Information bits : Check bits
0 0 : 0 0 0
0 1 : 1 1 0
1 0 : 0 1 1
1 1 : 1 0 1

Possible transmitted words are called codewords.
Suppose 0 1 1 1 0 is transmitted and 0 0 1 1 0 is received:
- The error is detected - the received word is not in the codeword list
- The error may be corrected by choosing the "closest" codeword
Error correction is only possible through the introduction of redundancy, i.e. check bits.
The Hamming distance between two words is the number of bit positions in which they differ.
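A minimal Python sketch of this nearest-codeword idea for the 2-bit example above (the codeword list comes from the table; function and variable names are illustrative, not from any particular library):

```python
# Minimal sketch: detect/correct an error by nearest-codeword (minimum Hamming distance) decoding.
# Codewords = 2 information bits followed by 3 check bits, as in the table above.
CODEWORDS = ["00000", "01110", "10011", "11101"]

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the codeword closest (in Hamming distance) to the received word."""
    return min(CODEWORDS, key=lambda c: hamming_distance(c, received))

received = "00110"            # 01110 was sent; one bit was corrupted
print(received in CODEWORDS)  # False -> error detected
print(decode(received))       # '01110' -> error corrected
```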
Note the use of redundancy for error correction in other contexts:
- spelling mistakes
- noisy TV pictures
Automatic Repeat Request

ARQ vs. FEC

ARQ is much simpler than FEC and requires far less redundancy (only enough to detect errors, not to correct them).

ARQ is sometimes not possible if

A reverse channel is not available or the delay with ARQ
would be excessive

The retransmission strategy is not conveniently
implemented

The expected number of errors, without correction,
would require excessive retransmissions


Figure : Automatic Repeat Request (ARQ) (a) Stop-and-wait ARQ (b) Continuous ARQ with pullback (c) Continuous ARQ with selective repeat
Adding extra (redundant) bits to the data stream so that the decoder can reduce or correct errors at the output of the receiver.
Disadvantage: the extra bits increase the data rate and, consequently, the bandwidth of the encoded signal.
FEC Codes can be classified into two broad categories
1 Block Codes
2 Convolutional Codes

In the case of block codes, the encoder transforms each k-bit data block into a larger block of n bits, called code bits or channel symbols.

The (n-k) bits added to each data block are called redundant bits, parity bits or check bits.

They carry no new information.

The ratio of redundant bits to data bits, (n-k)/k, is called the redundancy of the code.

The ratio of data bits to total bits, k/n, is called the code rate.
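For example, for the (6, 3) block code of Table 1 later in these notes (n = 6, k = 3), the redundancy is (n - k)/k = (6 - 3)/3 = 1 and the code rate is k/n = 3/6 = 1/2.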
Figure : Comparison of typical coded versus uncoded error performance

Trade-Off 1: Error Performance vs. Bandwidth

Trade-Off 2: Power versus Bandwidth

Coding gain (dB) = (Eb/N0)u [dB] - (Eb/N0)c [dB]

where the subscripts u and c refer to the uncoded and coded systems.
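As a hypothetical worked example (the numbers are illustrative, not taken from these notes): if an uncoded system requires (Eb/N0)u = 9.6 dB to reach the required BER, while the coded system reaches the same BER at the same data rate with (Eb/N0)c = 7.6 dB, the coding gain is 9.6 - 7.6 = 2 dB.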

Trade-Off 3: Data Rate versus Bandwidth


Trade-Off 4: Capacity versus Bandwidth
The set of all binary n-tuples forms a vector space over the binary field {0, 1}, with modulo-2 addition and multiplication:

Addition        Multiplication
0 + 0 = 0       0 · 0 = 0
0 + 1 = 1       0 · 1 = 0
1 + 0 = 1       1 · 0 = 0
1 + 1 = 0       1 · 1 = 1

Vector Subspaces
A subset S of the vector space is called a subspace if:
- The all-zeros vector is in S
- The sum of any two vectors in S is also in S (closure/linearity property), e.g. { 0000 0101 1010 1111 }
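A quick Python check of the closure property for this example subset (a sketch; bitwise XOR of the integer representations plays the role of modulo-2 vector addition):

```python
# Verify that S = {0000, 0101, 1010, 1111} is a subspace of the space of binary 4-tuples:
# it contains the all-zeros vector, and the mod-2 sum of any two members stays in S.
S = {0b0000, 0b0101, 0b1010, 0b1111}

contains_zero = 0 in S
closed = all((a ^ b) in S for a in S for b in S)   # ^ is bitwise XOR = mod-2 addition
print(contains_zero and closed)                    # True -> S is a subspace
```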
The subset chosen for the code should include as many elements as possible, to reduce the redundancy, but they should be as far apart as possible, to maintain good error performance.
Figure : Linear block-code structure
Message Vector Codeword
000 000000
100 110100
010 011010
110 101110
001 101001
101 011101
011 110011
111 000111
Table 1: Assignment of Codewords to Messages
If k is large, a lookup table implementation of the encoder
becomes prohibitive
Let the set of 2^k codewords {U} be described as:

    U = m1 V1 + m2 V2 + ... + mk Vk

In general, the generator matrix can be defined by the following k x n array, whose rows are the vectors V1, V2, ..., Vk:

        | V1 |   | v11  v12  ...  v1n |
    G = | V2 | = | v21  v22  ...  v2n |
        | .. |   |  :    :         :  |
        | Vk |   | vk1  vk2  ...  vkn |

Generation of codeword U:

    U = mG

Example:
Let the generator matrix be:

        | V1 |   | 1 1 0 1 0 0 |
    G = | V2 | = | 0 1 1 0 1 0 |
        | V3 |   | 1 0 1 0 0 1 |

Generate codeword U4 for the fourth message vector 1 1 0 in Table 1:

    U4 = [1 1 0] G = 1·V1 + 1·V2 + 0·V3
       = 110100 + 011010 + 000000
       = 101110   (codeword for the message vector 110)
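A short Python sketch of codeword generation U = mG over the binary field, using the generator matrix above (names are illustrative; all arithmetic is taken modulo 2):

```python
# Encode message vectors with U = mG over GF(2) (all sums and products are modulo 2).
G = [
    [1, 1, 0, 1, 0, 0],   # V1
    [0, 1, 1, 0, 1, 0],   # V2
    [1, 0, 1, 0, 0, 1],   # V3
]

def encode(m, G):
    """Return the codeword U = mG, with arithmetic taken mod 2."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

print(encode([1, 1, 0], G))   # [1, 0, 1, 1, 1, 0] -> 101110, as computed above
print(encode([0, 1, 1], G))   # [1, 1, 0, 0, 1, 1] -> 110011, as in Table 1
```

Applied to all eight message vectors, the same function reproduces every row of Table 1.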

A convolutional code is described by three integers, n, k, and K, where the ratio k/n is the rate of the code.

The integer K is the constraint length; it represents the number of k-tuple stages in the encoding shift register.

The encoder has memory: the n-tuple emitted by the convolutional encoding procedure is not only a function of the current input k-tuple, but also of the previous K-1 input k-tuples.

Figure: Convolutional encoder with constraint length K and rate k/n
Figure (example 1): Convolutional encoder (rate 1/2, K=3)
Encoder register contents and output at each time step (input sequence m = 1 0 1, followed by zeros to flush the register):

Time   Register contents   Output u1 u2
t1     1 0 0               1 1
t2     0 1 0               1 0
t3     1 0 1               0 0
t4     0 1 0               1 0
t5     0 0 1               1 1
t6     0 0 0               0 0
Output Sequence: 11 10 00 10 11 00
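The same encoding can be expressed as a short Python sketch of the shift-register operation (an illustrative implementation with assumed names, not notation from these notes); it reproduces the output sequence above:

```python
# Rate-1/2, K=3 convolutional encoder: upper adder taps 1,1,1; lower adder taps 1,0,1.
def conv_encode(bits, K=3):
    """Shift bits through a K-stage register (initially all zeros), emitting (u1, u2) pairs."""
    reg = [0] * K
    out = []
    for b in bits:
        reg = [b] + reg[:-1]                 # new bit enters at the left
        u1 = reg[0] ^ reg[1] ^ reg[2]        # upper modulo-2 adder (connections 111)
        u2 = reg[0] ^ reg[2]                 # lower modulo-2 adder (connections 101)
        out.append(f"{u1}{u2}")
    return " ".join(out)

print(conv_encode([1, 0, 1, 0, 0, 0]))       # 11 10 00 10 11 00
print(conv_encode([1, 1, 0, 1, 1, 0, 0]))    # 11 01 01 00 01 01 11
```

The second call anticipates the m = 1 1 0 1 1 state-table example worked out later in these notes.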
A convolutional encoder may be represented by a set of n generator polynomials, one for each modulo-2 adder. Continuing with the same example, we can write the generator polynomial g1(X) for the upper connections and g2(X) for the lower connections:


    g1(X) = 1 + X + X^2
    g2(X) = 1 + X^2

    U(X) = m(X) g1(X) interlaced with m(X) g2(X)

where U(X) is the output sequence. For m = 1 0 1, i.e. m(X) = 1 + X^2, the encoder output can be found as:

    m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
    m(X) g2(X) = (1 + X^2)(1 + X^2)     = 1 + X^4

    m(X) g1(X) = 1 + X   + 0·X^2 + X^3   + X^4
    m(X) g2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4

    U(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4
    U    = 11 10 00 10 11
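The polynomial view can be checked with a small mod-2 polynomial multiplication in Python (a sketch; coefficient lists are written lowest power first):

```python
# Multiply two binary polynomials, coefficients taken modulo 2, lowest power first.
def polymul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 1]       # m(X)  = 1 + X^2
g1 = [1, 1, 1]       # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]       # g2(X) = 1 + X^2

p1 = polymul_gf2(m, g1)   # [1, 1, 0, 1, 1] -> 1 + X + X^3 + X^4
p2 = polymul_gf2(m, g2)   # [1, 0, 0, 0, 1] -> 1 + X^4
U  = " ".join(f"{a}{b}" for a, b in zip(p1, p2))
print(U)                  # 11 10 00 10 11  (interlaced output, as above)
```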
For the encoder shown in Figure (example 1), show the state changes and the resulting codeword sequence U for m = 1 1 0 1 1, followed by two zeros to flush the register. Assume the initial contents of the register are all zeros.
Input bit mi   Register contents   State at time ti   State at ti+1   Branch word at ti (u1 u2)
----           000                 00                 00              ----
1              100                 00                 10              1 1
1              110                 10                 11              0 1
0              011                 11                 01              0 1
1              101                 01                 10              0 0
1              110                 10                 11              0 1
0              011                 11                 01              0 1
0              001                 01                 00              1 1
Output Sequence: 11 01 01 00 01 01 11
Figure : Encoder state diagram (the state is the contents of the K-1 leftmost register stages) (rate 1/2, K=3)
Figure : Tree representation of the encoder (rate 1/2, K=3)
The tree diagram adds the dimension of time to the state diagram.
Figure : Encoder trellis diagram (rate 1/2, K=3)
The trellis diagram, by exploiting the repetitive structure of the code, provides a more manageable encoder description.
Figure : Decoder trellis diagram (rate 1/2, K=3)
The Viterbi Convolutional Decoding Algorithm
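The notes end at this heading. As an illustration only, a minimal hard-decision Viterbi decoder for the same rate-1/2, K=3 code is sketched below; the state convention (the two most recent input bits) and all names are assumptions made here, not taken from the original slides.

```python
# Hard-decision Viterbi decoder for the rate-1/2, K=3 code above (adder taps 111 and 101).
# State = (previous input bit, bit before that); a sketch with assumed conventions.
def viterbi_decode(pairs):
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}   # encoder starts in state 00
    paths = {s: [] for s in states}
    for r1, r2 in pairs:
        new_metric = {s: INF for s in states}
        new_paths = {}
        for (s1, s2), m in metric.items():
            if m == INF:
                continue
            for b in (0, 1):                       # hypothesised input bit
                u1, u2 = b ^ s1 ^ s2, b ^ s2       # expected branch word for this transition
                branch = (u1 != r1) + (u2 != r2)   # Hamming distance to the received pair
                nxt = (b, s1)
                if m + branch < new_metric[nxt]:   # keep the survivor path into each state
                    new_metric[nxt] = m + branch
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)             # final state with the smallest path metric
    return paths[best]

received = [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1), (0, 0)]   # '11 10 00 10 11 00'
print(viterbi_decode(received))   # [1, 0, 1, 0, 0, 0] -> message 1 0 1 plus flush zeros
```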
