
LOW DENSITY PARITY CHECK CODES

BLOCK DIAGRAM OF A GENERAL COMMUNICATION SYSTEM
WHAT IS CODING?
Coding is the conversion of information into another
form for some purpose.
Source Coding : The purpose is to reduce the
redundancy in the information. (e.g. ZIP, JPEG,
MPEG-2)
Channel Coding : The purpose is to combat channel
noise.

CHANNEL CODING
Channel encoding : The addition of
redundant symbols so that data errors can be corrected.
Modulation : Conversion of symbols to a
waveform for transmission.
Demodulation : Conversion of the waveform
back to symbols, usually one at a time.
Decoding : Using the redundant symbols to
correct errors.
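To make the chain concrete, here is a minimal end-to-end sketch (illustrative, not from the slides): a 3-fold repetition encoder, a bit-flipping channel standing in for the modulate/demodulate steps, and a majority-vote decoder.

```python
import random

def encode(bits, n=3):
    """Channel encoding: repeat each information bit n times."""
    return [b for b in bits for _ in range(n)]

def channel(symbols, p=0.1):
    """Stand-in for modulate -> noisy channel -> demodulate:
    each hard symbol is flipped with probability p."""
    return [b ^ (random.random() < p) for b in symbols]

def decode(symbols, n=3):
    """Decoding: majority vote over each block of n repeated symbols."""
    return [int(sum(symbols[i:i + n]) > n // 2)
            for i in range(0, len(symbols), n)]

random.seed(0)
msg = [random.randint(0, 1) for _ in range(1000)]
rx = decode(channel(encode(msg)))
errors = sum(a != b for a, b in zip(msg, rx))
print(f"post-decoding bit errors: {errors} / {len(msg)}")
```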
WHY CHANNEL CODING?
Trade-off between Bandwidth, Energy and
Complexity.
Coding provides the means of patterning signals so
as to reduce their energy or bandwidth
consumption for a given error performance.

CHANNELS
The Binary Symmetric Channel (BSC)
The Binary Erasure Channel (BEC)
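A short sketch of the two channel models; the crossover probability p and erasure probability e are illustrative parameters, and erasures are marked with None.

```python
import random

def bsc(bits, p=0.05):
    """Binary Symmetric Channel: each bit is flipped with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def bec(bits, e=0.05):
    """Binary Erasure Channel: each bit is erased (None) with probability e,
    otherwise delivered intact; the receiver knows *where* it lost bits."""
    return [None if random.random() < e else b for b in bits]

random.seed(1)
word = [1, 0, 1, 1, 0, 0, 1, 0]
print("BSC output:", bsc(word))
print("BEC output:", bec(word))
```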
HOW TO EVALUATE CODE PERFORMANCE?
Need to consider Code Rate (R), SNR (Eb/No), and
Bit Error Rate (BER).
Coding Gain is the saving in Eb/No required to
achieve a given BER with coding compared to
without coding.
Generally, the lower the code rate, the higher the
coding gain.
Better codes provide higher coding gains, but are
usually more complicated and have higher decoding
complexity.
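As a worked example: uncoded BPSK needs roughly Eb/No = 9.6 dB to reach BER = 10^-5; if a coded system reaches the same BER at, say, 6.6 dB (an illustrative figure), the coding gain is 9.6 dB − 6.6 dB = 3.0 dB at that BER.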

SHANNON'S CODING THEOREMS
If a code has rate R > C, where C is the
channel capacity, then the probability of
error in decoding this code is bounded
away from 0. (In other words, at any rate
R > C, reliable communication is not
possible.)
The theorem gives the maximum rate at which
information can be transmitted over a
communications channel of a
specified bandwidth in the presence
of noise.

$$C = \max_{p(x)} \big[ H(x) - H(x \mid y) \big]$$
SHANNON'S CODING THEOREMS
STATEMENT OF THE THEOREM

$$C = B \log_2 \left( 1 + \frac{S}{N} \right)$$

where:
C is the channel capacity in bits per second,
B is the bandwidth of the channel in hertz,
S is the average received signal power over the bandwidth,
N is the average noise or interference power over the bandwidth,
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication signal.
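A quick sketch of evaluating this formula in Python; the 3 kHz / 30 dB figures are illustrative (roughly a telephone-grade channel).

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), with S/N given in dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers: a 3 kHz channel at 30 dB SNR.
print(f"C = {shannon_capacity(3000, 30):,.0f} bit/s")  # ~29,902 bit/s
```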
COMMON FORWARD ERROR CORRECTION CODES
Convolutional Codes
Block Codes (e.g. Reed-Solomon Code)
Trellis-Coded-Modulation (TCM)
Concatenated Codes



LINEAR BLOCK CODES
The parity bits of a linear block code are linear
combinations of the message bits. Therefore, we
can represent the encoder as a linear system
described by matrices.

BASIC DEFINITIONS
Linearity: if $m_1 \rightarrow c_1$ and $m_2 \rightarrow c_2$, then $m_1 \oplus m_2 \rightarrow c_1 \oplus c_2$,
where m is a k-bit information sequence,
c is an n-bit codeword, and
$\oplus$ is a bit-by-bit mod-2 addition without carry.
Linear code: The sum of any two codewords is a
codeword.
Observation: The all-zero sequence is a codeword in
every Linear Block Code.
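A small sketch verifying the linearity property, using bitwise XOR for the mod-2 addition and a 3-fold repetition encoder as an illustrative stand-in for any linear encoder.

```python
def xor(a, b):
    """Bit-by-bit mod-2 addition without carry."""
    return [x ^ y for x, y in zip(a, b)]

def encode(m):
    """Repetition-code encoder: a tiny example of a linear map m -> c."""
    return [b for b in m for _ in range(3)]

m1, m2 = [1, 0], [1, 1]
# Linearity: encode(m1 XOR m2) == encode(m1) XOR encode(m2)
assert encode(xor(m1, m2)) == xor(encode(m1), encode(m2))
# The all-zero message maps to the all-zero codeword.
assert encode([0, 0]) == [0] * 6
print("linearity holds for this example")
```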

BASIC DEFINITIONS (CONTD)
Def: The weight of a codeword $c_i$, denoted by $w(c_i)$, is the
number of nonzero elements in the codeword.
Def: The minimum weight of a code, $w_{\min}$, is the smallest
weight of the nonzero codewords in the code.
Theorem: In any linear code, $d_{\min} = w_{\min}$, since
$d(c_1, c_2) = w(c_1 \oplus c_2)$ and $c_1 \oplus c_2$ is itself a codeword.
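A brute-force check of the theorem on a toy linear code; the four hardcoded codewords below are an illustrative example, not from the slides.

```python
from itertools import combinations

# A toy linear code (codewords hardcoded for illustration).
code = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 1, 1)]

def weight(c):
    """w(c): number of nonzero elements in the codeword."""
    return sum(c)

def distance(a, b):
    """Hamming distance = weight of the mod-2 sum of a and b."""
    return sum(x != y for x, y in zip(a, b))

w_min = min(weight(c) for c in code if any(c))
d_min = min(distance(a, b) for a, b in combinations(code, 2))
print(f"w_min = {w_min}, d_min = {d_min}")  # equal, as the theorem says
assert w_min == d_min
```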


Systematic codes
Any linear block code can be put in systematic form,
in which the codeword is laid out as:

[ n−k check bits | k information bits ]
THE ERROR-CONTROL PARADIGM
Noisy channels give rise to data errors: transmission or storage systems.
Need powerful error-control coding (ECC) schemes: linear or non-linear.
Linear EC Codes: Generated through simple generator or parity-check
matrix.

Binary linear codes:
Binary information vector (length k): $u = (u_1, u_2, u_3, u_4)$
Code vector (word) (length n): $x = uG$, with $Hx^T = 0$

$$G = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{pmatrix} \qquad H = \begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}$$

Key property: Minimum distance of the code, $d_{\min}$, the smallest separation
between two codewords.
Rate of the code: R = k/n
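A sketch of how these definitions fit together, using the G and H above (which form a (7, 4) Hamming code); the brute-force enumeration is illustrative.

```python
import numpy as np
from itertools import product

# Generator and parity-check matrices from the slide ((7,4) Hamming code).
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 0, 1, 1]])
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])

codewords = []
for u in product([0, 1], repeat=4):      # all 2^k information vectors
    x = np.array(u) @ G % 2              # x = uG (mod 2)
    assert not (H @ x % 2).any()         # H x^T = 0 for every codeword
    codewords.append(x)

d_min = min(int(x.sum()) for x in codewords if x.any())  # d_min = w_min
print(f"R = 4/7, d_min = {d_min}")       # d_min = 3 for this code
```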
LDPC CODES
More than 40 years of research (1948-1994) centered around:
Weights of errors that a code is guaranteed to correct:
bounded-distance decoding corrects up to $t \le \lfloor (d_{\min} - 1)/2 \rfloor$ errors
Bounded-distance decoding cannot achieve the Shannon limit
Trade-off minimum distance for efficient decoding
Low-Density Parity-Check (LDPC) Codes

Gallager 1963, Tanner 1981, MacKay 1996

1. Linear block codes with sparse (small fraction of ones) parity-check matrix
2. Have a natural representation in terms of bipartite graphs
3. Simple and efficient iterative decoding in the form of belief propagation
(Pearl, 1988)
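A minimal sketch of point 2: the parity-check matrix induces a bipartite (Tanner) graph whose adjacency lists can be read straight off H; the small H used here is the (7, 4) example matrix from the next slide.

```python
# Parity-check matrix of a small code (the (7,4) example from the next slide).
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

# Build adjacency lists for the bipartite graph: one entry per edge,
# i.e. per nonzero entry of H. Rows -> check nodes, columns -> variable nodes.
check_to_vars = [[j for j, h in enumerate(row) if h] for row in H]
var_to_checks = [[i for i, row in enumerate(H) if row[j]]
                 for j in range(len(H[0]))]

print("check-node neighborhoods:", check_to_vars)
print("variable-node neighborhoods:", var_to_checks)
# Constant degrees on both sides would make the code regular;
# here the variable-node degrees vary, so this code is irregular.
```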


THE CODE GRAPH AND ITERATIVE DECODING
[Figure: Tanner graph with variable nodes connected to check nodes
(irregular degrees/codes)]

$$H = \begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 \end{pmatrix}$$
Most important consequence of graphical description:
efficient iterative decoding
Message passing:
Variable nodes: communicate to check nodes their
reliability (log-likelihoods)
Check nodes: decide which variables are not reliable
and suppress their inputs
Small number of edges in graph = low complexity
Nodes on left/right with constant degree: regular code
Otherwise, codes termed irregular
Can adjust degree distribution of variables/checks
Best performance over standard channels: long, irregular, random-like LDPC codes
Have $d_{\min}$ proportional to the length of the code, but correct many more errors
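The slides describe soft message passing with log-likelihoods; as a compact illustrative stand-in, the sketch below implements Gallager's hard-decision bit-flipping decoder, which performs the same iterative variable/check exchange on the Tanner graph of the H shown above.

```python
def bit_flip_decode(H, y, max_iters=20):
    """Iterative hard-decision decoding on the Tanner graph of H.
    Each round: check nodes flag unsatisfied parities, then the variable
    node(s) involved in the most failed checks flip their bits."""
    x = list(y)
    n = len(x)
    for _ in range(max_iters):
        # Check nodes: which parity constraints are violated?
        syndrome = [sum(H[i][j] * x[j] for j in range(n)) % 2
                    for i in range(len(H))]
        if not any(syndrome):
            return x  # all checks satisfied: x is a codeword
        # Variable nodes: count failed checks each bit participates in.
        fails = [sum(syndrome[i] for i in range(len(H)) if H[i][j])
                 for j in range(n)]
        worst = max(fails)
        for j in range(n):
            if fails[j] == worst:
                x[j] ^= 1  # flip the least reliable bit(s)
    return x  # may still contain errors if max_iters was reached

H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
received = [0, 0, 1, 0, 0, 0, 0]      # all-zero codeword, one channel flip
print(bit_flip_decode(H, received))   # recovers the all-zero codeword
```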
CONSTRUCTION
LDPC codes are defined by a
sparse parity-check matrix. This sparse
matrix is often randomly generated,
subject to the sparsity constraints. These
codes were first designed by Gallager in
1962.
In this graph, n variable nodes at the top
of the graph are connected to (n − k)
constraint nodes at the bottom of the
graph. This is a popular way of
graphically representing an (n, k) LDPC
code.
Specifically, all lines connecting to a
variable node (box with an '=' sign) have
the same value, and all values
connecting to a factor node (box with a
'+' sign) must sum, modulo two, to zero
(in other words, they must sum to an
even number).
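A sketch of such a random generation, assuming a fixed column weight of 3 (a common choice in Gallager's original designs); practical constructions impose further constraints, such as avoiding short cycles, that are not handled here.

```python
import random

def random_sparse_H(n, k, col_weight=3, seed=0):
    """Randomly generate an (n-k) x n parity-check matrix: each of the
    n columns gets exactly col_weight ones, placed in random rows."""
    rng = random.Random(seed)
    rows = n - k
    H = [[0] * n for _ in range(rows)]
    for j in range(n):
        for i in rng.sample(range(rows), col_weight):
            H[i][j] = 1
    return H

H = random_sparse_H(n=12, k=6)
for row in H:
    print(row)
# Density = col_weight / (n - k); for this tiny example it is 0.5,
# but it falls toward "low density" as the code length grows.
print("fraction of ones:", sum(map(sum, H)) / (6 * 12))
```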
CONTD.,
This LDPC code fragment
represents a 3-bit message
encoded as six bits.
Redundancy is used here to
increase the chance of
recovering from channel errors.
This is a (6, 3) linear code,
with n = 6 and k = 3.
In this matrix, each row
represents one of the three
parity-check constraints, while
each column represents one of
the six bits in the received
codeword.
CONT.,
In this example, the eight codewords can be obtained by putting the parity-
check matrix H into the form $[-P^T \mid I_{n-k}]$ through basic row operations.
From this, the generator matrix G can be obtained as $G = [I_k \mid P]$ (noting
that in the special case of this being a binary code, $-P = P$).

Finally, by multiplying all eight possible 3-bit strings by G, all eight valid
codewords are obtained. For example, the codeword for the bit-string '101'
is obtained as $x = (1, 0, 1)\,G$.
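The specific H and G from the original figure are not reproduced in the text, so this sketch uses a hypothetical (6, 3) parity-check matrix and carries out the procedure just described: row-reduce H over GF(2) into the form [Pᵀ | I], read off G = [I | P], and list all eight codewords, including the one for '101'.

```python
import numpy as np
from itertools import product

# Illustrative (6,3) parity-check matrix (hypothetical example).
H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

m, n = H.shape          # m = n - k parity checks
k = n - m

# Row-reduce H (mod 2) so its last m columns become the identity:
# H -> [P^T | I]. Assumes this is possible without column swaps
# (true here; a column permutation may be needed in general).
Hs = H.copy()
for i in range(m):
    col = n - m + i
    pivot = next(r for r in range(i, m) if Hs[r, col])  # find a pivot row
    Hs[[i, pivot]] = Hs[[pivot, i]]                     # swap it into place
    for r in range(m):
        if r != i and Hs[r, col]:
            Hs[r] = (Hs[r] + Hs[i]) % 2                 # clear the column

P = Hs[:, :k].T                                         # P^T is on the left
G = np.hstack([np.eye(k, dtype=int), P])                # G = [I_k | P]
assert not (G @ H.T % 2).any()                          # rows of G satisfy H
print("G =\n", G)

# All eight codewords; the row for u = (1, 0, 1) is the '101' codeword.
for u in product([0, 1], repeat=k):
    print(u, "->", np.array(u) @ G % 2)
```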
