
Low Density Parity-Check Code: Encoding and Decoding in AWGN Channel

Muhammad Asaduzzaman #1, Mohammad Abu Raihan Miah *2, M. Masud Reza *3
#1 EEE (0706062), 2 EEE (0706116), 3 EEE (0706120)
1,2,3 Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh
1 asadzaman_007@yahoo.com
2 raihan_0507@yahoo.com
3 masud_buet_eee@yahoo.com
Abstract— Low density parity check (LDPC) codes are a class of linear block codes. A sparse parity-check matrix H is used for encoding and decoding for error correction. There are two classes of LDPC codes: regular and irregular. In this paper we present basic LDPC encoding and decoding for the regular case. Encoding was performed using the accumulate approach. The encoded data was modulated with an 8PSK scheme and then passed through an AWGN channel. The corrupted signal was then demodulated and passed to the LDPC decoder, which used the belief propagation algorithm to give a hard decision. Finally, a curve of BER vs. SNR was plotted.

Keywords— low density parity-check codes, belief propagation, accumulate approach, Tanner graph, regular LDPC, hard-decision decoding.
I. INTRODUCTION
Low-density parity-check (LDPC) codes are a class of linear block codes. The name comes from the characteristic of their parity-check matrix, which contains only a few 1s in comparison to the number of 0s. Their main advantages are that sparse codes provide performance very close to the Shannon limit over many different channels, and that they admit decoding algorithms of linear time complexity. They are used over noisy communication channels to reduce the probability of loss of information.
LDPC codes were first developed by Robert Gallager in his doctoral dissertation at MIT in 1960 [1]. Due to the computational effort required to implement the encoder and decoder for such codes, and the introduction of Reed-Solomon codes, LDPC codes were ignored for many years. Gallager's work was largely forgotten until it was rediscovered in 1996 by MacKay [2]. However, in the last few years, advances in low-density parity-check codes have seen them surpass turbo codes in terms of error floor and performance in the higher code-rate range, leaving turbo codes better suited only for the lower code rates. Nowadays LDPC codes are used in many modern applications such as 10GBase-T Ethernet, Wi-Fi, WiMAX and Digital Video Broadcasting (DVB).
LDPC codes can be regular or irregular. In a regular LDPC code every code digit is contained in the same number of parity-check equations, and each equation contains the same number of code symbols; thus the number of 1s in every row of the parity-check matrix H is constant and the number of 1s in every column is constant. For example, the H matrix of the (8, 4) code used later in this paper has exactly four 1s in every row and two 1s in every column. An LDPC code which is not regular is called irregular. We have implemented a regular LDPC code in our project.
Section II discusses the encoding procedure of LDPC codes using the accumulate approach and shows how two different code rates can be achieved. Section III describes the modulation of the coded signal: the 8PSK modulation scheme is introduced, the signal is passed through an AWGN channel, and it is demodulated at the receiving end. In Section IV we discuss how to decode an LDPC code using the belief propagation algorithm; only the hard-decision decoder is presented. Section V shows the Bit Error Rate (BER) vs. Signal to Noise Ratio (SNR) plot for the described communication system.
II. ENCODING
Since an LDPC code is a linear block code (LBC), we can find a generator matrix G and use it for encoding. The parity-check matrix H of an LDPC code is sparse; however, the G obtained from H will in general not be sparse, so the straightforward method of encoding an LBC is costly in terms of the number of operations. Several methods have therefore been developed for fast encoding, among them: (a) the accumulate approach, and (b) the lower triangular modification approach [3]. As our H matrix is not large, we do not use the lower triangular modification approach, though it could have reduced the cost further.
Accumulate Approach: The accumulate approach has a fast encoding algorithm. First of all, we assign a value to each of our check nodes; in our code we give each check node the value zero, i.e., an even number of 1s is connected to every check node. The message consists of the message nodes appended by the values of the check nodes. To illustrate the difference between this modified version of LDPC and the original version, consider Figure 1. If Figure 1 represents an original LDPC code, then c1, c2, c3, c4 are information bits and c5, c6, c7, c8 are parity bits which have to be calculated from the parity-check equations in f1, f2, f3, f4. The code rate is 4/8 = 1/2. If, instead, Figure 1 represents a modified LDPC code, then all of c1, c2, ..., c8 are information bits, while f1, f2, f3, f4 are redundant bits calculated from c1, c2, ..., c8. Since f1 is connected to c2, c4, c5, c8, we have f1 = c2 + c4 + c5 + c8 (modulo 2), and so on. The codeword in this case is [c1 ... c8 f1 ... f4]^T, and the code rate is 8/12 = 2/3. Though the code rate is higher in the modified case, decoding becomes a major problem: if the channel is an erasure channel, the values of the check nodes themselves might be erased. In the original LDPC code, on the contrary, the check nodes are dependencies, not values; a check node defines a relationship among its connected message nodes. We use the algorithm with code rate 1/2 in our project implementation.
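As a concrete illustration of the modified (rate-2/3) scheme described above, the sketch below computes the four check values f1..f4 for an 8-bit message using the connections of Figure 1 (the same connections as the H matrix given in Section IV). This is only a minimal sketch in Python/NumPy, not the code used in the project.

```python
import numpy as np

# Connection matrix of Figure 1 (also the H matrix of Section IV):
# row i lists the message nodes connected to check node f_{i+1}.
H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],   # f1 <- c2, c4, c5, c8
              [1, 1, 1, 0, 0, 1, 0, 0],   # f2 <- c1, c2, c3, c6
              [0, 0, 1, 0, 0, 1, 1, 1],   # f3 <- c3, c6, c7, c8
              [1, 0, 0, 1, 1, 0, 1, 0]])  # f4 <- c1, c4, c5, c7

def accumulate_encode(c):
    """Modified (rate-2/3) scheme: c1..c8 are all information bits and
    f_i is the XOR of the message bits connected to check node i."""
    c = np.asarray(c) % 2
    f = H.dot(c) % 2                      # f1 = c2 + c4 + c5 + c8 (mod 2), etc.
    return np.concatenate([c, f])         # codeword [c1 ... c8 f1 ... f4]^T

print(accumulate_encode([1, 1, 1, 0, 1, 1, 0, 0]))   # here f = [0 0 0 0]
```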
For the rate-1/2 code, the parity-check equations are solved by Gaussian elimination and the parity bits are calculated. Table I shows the information bits and the corresponding encoded bits; each block carries 4 information bits and produces 8 encoded bits.
TABLE I
INFORMATION BITS AND ENCODED BITS

Information bits | Encoded bits
0 0 0 0          | 0 0 0 0 1 0 1 1
0 0 0 1          | 0 0 0 1 0 0 1 1
0 0 1 0          | 0 0 1 0 1 1 1 1
0 0 1 1          | 0 0 1 1 0 1 1 1
0 1 0 0          | 0 1 0 0 0 1 0 1
0 1 0 1          | 0 1 0 1 1 1 0 1
0 1 1 0          | 0 1 1 0 0 0 0 1
0 1 1 1          | 0 1 1 1 1 0 0 1
1 0 0 0          | 1 0 0 0 1 1 0 1
1 0 0 1          | 1 0 0 1 0 1 0 1
1 0 1 0          | 1 0 1 0 1 0 0 1
1 0 1 1          | 1 0 1 1 0 0 0 1
1 1 0 0          | 1 1 0 0 0 0 1 1
1 1 0 1          | 1 1 0 1 1 0 1 1
1 1 1 0          | 1 1 1 0 0 1 1 1
1 1 1 1          | 1 1 1 1 1 1 1 1
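As a quick consistency check of Table I, the following sketch (Python/NumPy assumed) verifies that every encoded word satisfies all four parity-check equations of the H matrix given in Section IV, i.e. H c^T = 0 (mod 2):

```python
import numpy as np

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

# Encoded words of Table I (the first 4 bits are the information bits).
codewords = np.array([[0,0,0,0,1,0,1,1], [0,0,0,1,0,0,1,1],
                      [0,0,1,0,1,1,1,1], [0,0,1,1,0,1,1,1],
                      [0,1,0,0,0,1,0,1], [0,1,0,1,1,1,0,1],
                      [0,1,1,0,0,0,0,1], [0,1,1,1,1,0,0,1],
                      [1,0,0,0,1,1,0,1], [1,0,0,1,0,1,0,1],
                      [1,0,1,0,1,0,0,1], [1,0,1,1,0,0,0,1],
                      [1,1,0,0,0,0,1,1], [1,1,0,1,1,0,1,1],
                      [1,1,1,0,0,1,1,1], [1,1,1,1,1,1,1,1]])

# Every row must give an all-zero syndrome.
assert not (codewords.dot(H.T) % 2).any()
print("all 16 encoded words of Table I satisfy H c = 0")
```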
III. 8PSK MODULATION AND DEMODULATION
8PSK modulation uses eight carrier phases separated by 45°. The LDPC-encoded data is modulated with the 8PSK scheme and then passed through an AWGN channel. At the receiving end the signal is demodulated. We vary the Signal to Noise Ratio (SNR) from 0 to 15 dB; for each value the demodulated bits are sent to the LDPC decoder for error correction.
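The following is a minimal sketch of such an 8PSK-over-AWGN link in Python/NumPy. The Gray bit-to-symbol mapping, the noise normalization and the nearest-phase hard demodulation are assumptions for illustration; the paper does not specify these details.

```python
import numpy as np

def awgn_8psk_link(bits, snr_db, rng=None):
    """Sketch: 8PSK modulate a bit stream, add AWGN, hard-demodulate.
    Assumes len(bits) is a multiple of 3 (3 bits per 8PSK symbol)."""
    rng = rng or np.random.default_rng(0)
    gray = np.array([0, 1, 3, 2, 6, 7, 5, 4])        # assumed Gray labelling
    inv = np.argsort(gray)                            # label -> phase index

    bits = np.asarray(bits).reshape(-1, 3)
    labels = bits[:, 0] * 4 + bits[:, 1] * 2 + bits[:, 2]
    tx = np.exp(1j * 2 * np.pi * inv[labels] / 8)     # eight phases 45 deg apart

    # AWGN with per-symbol SNR = snr_db (one possible definition).
    noise_var = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape)
                                      + 1j * rng.standard_normal(tx.shape))
    rx = tx + noise

    # Hard demodulation: pick the nearest of the eight phases, map back to bits.
    est_idx = np.round(np.angle(rx) / (2 * np.pi / 8)).astype(int) % 8
    est_labels = gray[est_idx]
    out = np.stack([(est_labels >> 2) & 1, (est_labels >> 1) & 1,
                    est_labels & 1], axis=1)
    return out.reshape(-1)
```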
IV. DECODING
Decoding is done in an iterative process. Different authors call it by different names: the sum-product algorithm, the belief propagation algorithm, or the message-passing algorithm. As with other decoders there are two cases: hard-decision and soft-decision algorithms. We present only the hard-decision part. This algorithm is not a perfect maximum-likelihood decoder, but the empirical results are record-breaking [4].
A. Hard-decision Decoder
Any linear block code (LBC) can be represented by a Tanner graph. A Tanner graph is a bipartite graph whose nodes are separated into two sets: variable nodes and check nodes (also known as digit nodes and subcode nodes, or message nodes and check nodes). The Tanner graph maps the parity-check matrix H: in a sparse graph code, the nodes of the graph represent the transmitted bits and the constraints they satisfy. We use an (8, 4) LBC to illustrate the hard-decision decoder.
Fig. 1 Belief Propagation Algorithm for a regular LDPC.

In Figure 1, the square boxes indicate check nodes while the oval shapes indicate variable nodes. The corresponding parity-check matrix is

    H = [ 0 1 0 1 1 0 0 1
          1 1 1 0 0 1 0 0
          0 0 1 0 0 1 1 1
          1 0 0 1 1 0 1 0 ]
For example, an error-free codeword of H is c = [1 1 1 0 1 1 0 0]^T. Suppose we receive y = [1 0 1 0 1 1 0 0]^T, so c2 has been flipped. A quick syndrome check (sketched below) confirms that c satisfies all four checks while y violates checks f1 and f2.
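The following minimal syndrome check (Python/NumPy assumed, purely for illustration) verifies both statements:

```python
import numpy as np

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

c = np.array([1, 1, 1, 0, 1, 1, 0, 0])   # error-free codeword
y = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # received word, c2 flipped

print(H.dot(c) % 2)   # [0 0 0 0] -> all checks satisfied
print(H.dot(y) % 2)   # [1 1 0 0] -> checks f1 and f2 are violated
```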
Our decoding algorithm works as follows:
1. All variable nodes send a message to their corresponding check nodes. In our case, each variable node sends its received value to two check nodes.
2. Each check node then calculates a response and sends it to each connected variable node. While calculating what the value of a specific variable node should be, all the other variable nodes connected to that check node are assumed to be correct; for example, the response of f1 to c2 is c4 + c5 + c8 (mod 2). Table II lists this process for our example. If all the check nodes are satisfied, no more iterations are required and the decoding process finishes here; otherwise we go through the following steps.
3. After receiving the responses from the check nodes, each variable node decides whether to keep its value or to flip it. We use a majority rule: the value the node holds and the two values returned from its check nodes are considered, and the new value of the variable node is the majority of the three.
4. Steps 2 and 3 are repeated until either all checks are satisfied in step 2 or a fixed maximum number of iterations is reached.
TABLE II
CHECK NODE ACTIVITIES FOR THE HARD-DECISION DECODER FOR THE CODE OF FIGURE 1

Check node | Activities
f1         | receive: c2 → 0, c4 → 0, c5 → 1, c8 → 0
           | send:    1 → c2, 1 → c4, 0 → c5, 1 → c8
f2         | receive: c1 → 1, c2 → 0, c3 → 1, c6 → 1
           | send:    0 → c1, 1 → c2, 0 → c3, 0 → c6
f3         | receive: c3 → 1, c6 → 1, c7 → 0, c8 → 0
           | send:    1 → c3, 1 → c6, 0 → c7, 0 → c8
f4         | receive: c1 → 1, c4 → 0, c5 → 1, c7 → 0
           | send:    1 → c1, 0 → c4, 1 → c5, 0 → c7
TABLE IIIA
MESSAGE NODE DECISIONS FOR THE HARD-DECISION DECODER FOR THE CODE OF FIGURE 1

Message node | y_i | Messages from check nodes | Decision
c1           | 1   | f2 → 0, f4 → 1            | 1
c2           | 0   | f1 → 1, f2 → 1            | 1
c3           | 1   | f2 → 0, f3 → 1            | 1
c4           | 0   | f1 → 1, f4 → 0            | 0
c5           | 1   | f1 → 0, f4 → 1            | 1
c6           | 1   | f2 → 0, f3 → 1            | 1
c7           | 0   | f3 → 0, f4 → 0            | 0
c8           | 0   | f1 → 1, f3 → 0            | 0
TABLE IIIB
MESSAGE NODE DECISIONS FOR THE HARD-DECISION DECODER FOR THE CODE OF FIGURE 1

Message node | y_i | Messages from check nodes | Decision
c1           | 1   | f2 → 1, f4 → 1            | 1
c2           | 1   | f1 → 1, f2 → 1            | 1
c3           | 1   | f2 → 1, f3 → 1            | 1
c4           | 0   | f1 → 0, f4 → 0            | 0
c5           | 1   | f1 → 1, f4 → 1            | 1
c6           | 1   | f2 → 1, f3 → 1            | 1
c7           | 0   | f3 → 0, f4 → 0            | 0
c8           | 0   | f1 → 0, f3 → 0            | 0
We can see from Tables IIIA and IIIB that after the first iteration the error at bit 2 is corrected.
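The whole procedure of this section can be sketched as a bit-flipping loop. The code below is a minimal Python/NumPy illustration of the hard-decision algorithm described in steps 1–4 (the iteration limit and the use of NumPy are assumptions, not taken from the paper); applied to the received word of the example, it returns the corrected codeword.

```python
import numpy as np

H = np.array([[0, 1, 0, 1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1, 0, 1, 0]])

def hard_decision_decode(y, H, max_iter=10):
    """Hard-decision (bit-flipping) decoding as in steps 1-4 of Section IV."""
    v = np.array(y) % 2                          # current variable-node values
    for _ in range(max_iter):
        if not (H.dot(v) % 2).any():             # step 2 exit: all checks satisfied
            break
        decisions = v.copy()
        for j in range(H.shape[1]):              # step 3: majority vote per bit
            checks = np.where(H[:, j] == 1)[0]   # check nodes connected to c_{j+1}
            # response of each check node: XOR of the *other* connected variables
            responses = [(H[i].dot(v) - v[j]) % 2 for i in checks]
            votes = [v[j]] + responses
            decisions[j] = 1 if sum(votes) * 2 > len(votes) else 0
        v = decisions
    return v

y = np.array([1, 0, 1, 0, 1, 1, 0, 0])           # received word of the example
print(hard_decision_decode(y, H))                 # -> [1 1 1 0 1 1 0 0]
```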
V. BIT ERROR RATE CALCULATION
For the AWGN channel, the bit error rate is calculated at different SNR values. The bit error rate (BER) is the number of bit errors divided by the total number of bits transferred during the studied time interval.
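As a simple sketch of this definition (Python/NumPy assumed):

```python
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    """BER = number of bit errors / total number of transferred bits."""
    tx_bits, rx_bits = np.asarray(tx_bits), np.asarray(rx_bits)
    return np.count_nonzero(tx_bits != rx_bits) / tx_bits.size

print(bit_error_rate([1, 0, 1, 1], [1, 1, 1, 1]))   # one error in four bits -> 0.25
```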
From the curve we can observe that, as expected, the BER decreases as the signal power is increased. For SNR greater than 11 dB the counted error was zero, hence the curve is not defined there. The calculation was done for SNR from 0 to 15 dB.
Fig. 2 BER vs. SNR plot for the described communication system.
VI. CONCLUSION
This paper discusses the very basic concepts of low density parity check codes. Different modifications are possible in both encoding and decoding. As irregular LDPC codes can approach the Shannon limit more closely, we expect to work with irregular LDPC codes in the future. As Luby et al. [5] described in their work, irregularity incorporates a greater degree of freedom: message nodes with high degree tend to correct their values faster, and these nodes then provide good information to the check nodes, which subsequently provide good information to the message nodes. Therefore, irregular LDPC codes have the potential to provide a wave effect where high-degree message nodes are corrected first, followed by slightly lower degree nodes, and so on. We also expect to work with different encoding algorithms, such as the Richardson and Urbanke algorithm [6], which has lower complexity with a very small increase in BER. We coded the modified algorithm for accumulate encoding, but decoding for the modified encoding is trickier than for the rate-1/2 decoder which we used in this work; we would like to complete this rate-2/3 decoder in the near future. Though LDPC codes require a huge computational effort, due to their magnificent error-correction capabilities they and their derivative classes of codes have been used extensively in the last two decades.
REFERENCES
[1] R. Gallager, "Low-Density Parity-Check Codes," IRE Trans. Information Theory, pp. 21-28, Jan. 1962.
[2] David J.C. MacKay and Radford M. Neal, "Near Shannon Limit Performance of Low Density Parity Check Codes," Electronics Letters, July 1996.
[3] Tuan Ta, "A Tutorial on Low Density Parity Check Codes."
[4] David J.C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
[5] M. Luby, M. Mitzenmacher, A. Shokrollahi, and D. Spielman, "Improved Low Density Parity-Check Codes Using Irregular Graphs," IEEE Trans. Inform. Theory, vol. 47, pp. 399-431, Mar. 1999.
[6] T. Richardson and R. Urbanke, "Efficient Encoding of Low Density Parity Check Codes," IEEE Trans. Inform. Theory, vol. 47, pp. 638-656, 2001.
