
JOURNAL OF TELECOMMUNICATIONS, VOLUME 12, ISSUE 1, JANUARY 2012 17

New efficient decoder for product and concatenated block codes


Abderazzak Farchane and Mostafa Belkasmi
AbstractIn this paper a new decoder for product and concatenated codes is introduced. The proposed decoding algorithm outperforms the one of Chase-Pyndiah by 1dB for Parallel concatenated codes and 0.4dB for product codes. As regards, the performances of serially concatenated codes are similar for the two decoders. The proposed decoder use predetermined scaling factor, which must be re-optimized whenever we change the decoder circumstances. To overcome this problem, we have adapted this parameter to the circumstances of the channel, the codes and modulation scheme. Moreover, our decoder does not need to normalize the extrinsic information at each decoding stage. So, this reduces the complexity of the proposed decoder. Index Terms Parallel concatenated ocdes, Generalized serially concatenated codes, Chase decoding, Chase-Pyndiah decoder, turbo codes, iterative decoding.

1 INTRODUCTION
Interest in product codes increased with the introduction of turbo decoding [2]. One of the reasons is that product codes are closely related to concatenated codes and multilevel codes [6], [3]: a solution that works for product codes can easily be extended to concatenated codes and multilevel codes. Various methods for computing the soft values used in the iterative decoding of product codes have been proposed. In 1994, Pyndiah et al. [11] proposed a new iterative decoding algorithm based on a soft-output version of Chase decoding [4]. The results obtained for product codes based on BCH codes are similar to those obtained with convolutional turbo codes [2].

We have proposed a new soft-input soft-output iterative decoder for product codes, inspired by the Chase-Pyndiah decoder [11]. Like the Chase-Pyndiah algorithm, our algorithm is based on list decoding, and it can use any list decoding algorithm. In its first version, our decoder uses a scaling factor to weight the extrinsic information. This scaling factor is pre-determined experimentally; the drawback of such a pre-determined parameter is that it must be re-optimized whenever the application changes. To overcome this problem, we have developed a scaling factor adapted to the circumstances of the decoder; the adapted parameter performs as well as the pre-determined one. The advantage of the proposed decoder over the concurrent decoder is that it can be applied directly to other schemes, such as the generalized serially concatenated block codes presented in [5] and the parallel concatenated block codes of [7].

This decoder is based on the Chase-Pyndiah one [11]. Unlike the latter, the proposed decoder uses a new decoding scheme and adapts the scaling factors to the circumstances of the decoder. The performance obtained by Monte Carlo simulation shows that our decoder is better than the Chase-Pyndiah one for product codes: it outperforms the latter by about 0.4 dB for some BCH codes. Regarding parallel concatenated block (PCB) codes, our decoder outperforms the decoder of [7] by almost 1.0 dB for some PCB codes based on BCH codes. For the generalized serially concatenated block (GSCB) codes [5], the proposed decoder performs similarly to the decoder of [5]. Unlike the Chase-Pyndiah decoder, the proposed decoder allows us to analyze the convergence behavior of the iterative decoding algorithm using EXIT (extrinsic information transfer) charts.

The remaining part of this paper is organized as follows: in Section 2, we introduce the proposed decoder; in Section 3, we present its performance; in Section 4, we adapt the scaling factor to the circumstances of the decoder; the last section concludes this paper.

2 THE PROPOSED DECODING ALGORITHM

[A. Farchane is with the Department of Computer Science, École Nationale Supérieure d'Informatique et d'Analyse des Systèmes, Rabat, Morocco. M. Belkasmi is with the Department of Computer Science, École Nationale Supérieure d'Informatique et d'Analyse des Systèmes, Rabat, Morocco. © 2012 JOT, www.journaloftelecommunications.co.uk]

2.1 Component decoder

The decoding of product codes is done by decoding the rows, then the columns, of the code matrix. As with turbo codes, product codes can be decoded by an iterative process; in this case the decoding of rows and columns must be repeated, and a reliability must be associated with the weighted decision of each decoded symbol.

We consider a transmission using BPSK modulation, coded by a block code with code rate k_i/n_i (i = 1 or 2). When the channel is perturbed by white Gaussian noise, the input of the decoder is Y = C + B, where Y = (y_1, ..., y_j, ..., y_n) is the observed vector; C = (c_1, ..., c_j, ..., c_n), c_j = ±1, is the emitted codeword corresponding to a row or column of the coding matrix; D = (d_1, ..., d_j, ..., d_n) is the decided codeword; and B = (b_1, ..., b_j, ..., b_n) is the white noise, whose components b_j have zero mean and variance σ². The reliability of component y_j, expressed as the log-likelihood ratio (LLR) of the received sequence, is defined by:

r_j = ln [ Pr(c_j = +1 | y_j) / Pr(c_j = -1 | y_j) ] = (2/σ²) y_j    (1)

Decoding of the rows (or columns) is realized using a list decoding algorithm, which determines the most likely codewords. Then, among those codewords, it selects the codeword closest, in terms of Euclidean distance, to the reliability of the received sequence R = (r_1, ..., r_j, ..., r_n). The decoder assigns a weighting to each component d_j of the decided codeword in order to measure the reliability of each decision. This reliability is evaluated by the logarithm of the likelihood ratio associated with the decision d_j at the output of the decoder, defined by:

Λ_j = ln [ Pr(d_j = +1 | R) / Pr(d_j = -1 | R) ]    (2)

The sign of Λ_j gives the decision d_j, and the absolute value of Λ_j measures the reliability of this decision. By using Bayes' rule and taking into account that the noise is Gaussian, one can show [11] that the LLR associated with d_j is equal to:

Λ_j = ln [ Σ_{C^q ∈ S_j^{+1}} exp(-‖R - C^q‖²/2σ²) / Σ_{C^q ∈ S_j^{-1}} exp(-‖R - C^q‖²/2σ²) ]    (3)

where S_j^i represents the set of codewords having a bit equal to i (i = ±1) in position j.

The number of codewords of a block code is generally immense, so computing the LLR of a bit using relation (3) is relatively complex. When the signal-to-noise ratio is sufficiently high, relation (3) can be simplified by keeping in the numerator and the denominator only the two codewords C^{min(+1)} and C^{min(-1)} having the minimum distance from R and belonging respectively to S_j^{+1} and S_j^{-1}. Expression (3) of the LLR then simplifies to:

Λ_j = (1/2σ²) ( ‖R - C^{min(-1)}‖² - ‖R - C^{min(+1)}‖² )    (4)

By introducing the components of the vector R, and supposing that σ is constant, we can normalize Λ_j with respect to the constant 2/σ² and write the LLR in the following form [11]:

Λ_j = (1/4) ( ‖R - C^{min(-1)}‖² - ‖R - C^{min(+1)}‖² ) = r_j + w_j    (5)

The LLR of a bit is thus equal to the sum of the reliability r_j at the input of the decoder and a quantity w_j independent of r_j. The quantity w_j is analogous to the extrinsic information of convolutional turbo codes.

To determine the simplified expression of the LLR of a bit at the output of the decoder, it is necessary to determine the two codewords C^{min(+1)} and C^{min(-1)} having the minimum distance from R and having opposite signs in position j. For this we use a list decoding algorithm (such as the Chase algorithm [4]), which determines a subset of codewords among which the two searched codewords can be found. Sometimes the two codewords cannot both be found in the subset determined by the list decoding. This means that all codewords in the subset give the same decision on the j-th element d_j of the vector D: they vote for the same candidate. In this case the decision confirms the decoder input; consequently, the reliability of the decision must be increased, while the sign of the decision d_j is given by the decoder. We propose a formula that computes the reliability Λ_j of the j-th element of the decision by taking into account the reliability of the decoder input and the sign of the decision:

Λ_j = (1/2) σ_R d_j + r_j d_j    (6)

where σ_R represents the standard deviation of the decoder input.
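As a concrete illustration of equations (4)-(6), the sketch below computes per-bit soft outputs from a list of candidate codewords. This is our own illustrative code, not the authors' implementation: the function name is hypothetical, `codewords` stands for the subset found by a list (e.g. Chase) decoder in BPSK (+1/-1) form, and the sample standard deviation of R is used as an estimate of σ_R in the equation (6) fallback.

```python
import numpy as np

def soft_output(R, codewords):
    """Per-bit soft output from a candidate codeword list, per eqs (4)-(6)."""
    R = np.asarray(R, dtype=float)
    C = np.asarray(codewords, dtype=float)       # shape (K, n), entries +1/-1
    dists = ((R - C) ** 2).sum(axis=1)           # squared Euclidean distances to R
    best = C[np.argmin(dists)]                   # decided codeword D
    sigma_R = np.std(R)                          # estimate of the input std deviation
    lam = np.empty(R.size)
    for j in range(R.size):
        plus = dists[C[:, j] == +1]
        minus = dists[C[:, j] == -1]
        if plus.size and minus.size:
            # eq (5): normalized LLR = (||R - C_min(-1)||^2 - ||R - C_min(+1)||^2) / 4
            lam[j] = (minus.min() - plus.min()) / 4.0
        else:
            # eq (6): no competing codeword in the list; the decision is confirmed,
            # so its reliability is boosted using the input standard deviation
            lam[j] = 0.5 * sigma_R * best[j] + R[j] * best[j]
    return lam, best
```

For a position where both signs appear in the list, the output is the scaled distance difference of (5); for a position where all candidates agree, the (6) fallback applies.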

2.2 Iterative decoding of concatenated coding schemes

Let us consider the decoding of the rows and columns of a product code, as described in Section 2.1, transmitted over a Gaussian channel using BPSK signaling. On receiving the matrix [R] corresponding to a transmitted codeword [C], the first decoder performs the soft decoding of the rows (or columns), using the matrix [R] as input. Soft-input decoding is performed using a list decoding algorithm, and the soft output is computed using (5) or (6). By subtracting the soft input from the soft output we obtain the extrinsic information [W(2)], where the index 2 indicates that we are considering the extrinsic information for the second decoding of the product code, computed during the first decoding. The soft input for the decoding of the columns (or rows) at the second decoding is given by:

[R(2)] = [R] + α(2)[W(2)]    (7)

where α(2) is a scaling factor used to reduce the effect of the extrinsic information in the soft decoder during the first decoding steps, when the BER is relatively high. It takes a small value in the first decoding steps and increases as the BER tends to zero. The decoding procedure described above is then generalized by cascading elementary decoders. The turbo decoding algorithm described in this paper applies to any product code or concatenated code based on linear block codes. The results presented here concern Bose-Chaudhuri-Hocquenghem (BCH) product codes. The principle of the iterative decoding is illustrated by the scheme in Figure 1. Unlike the Chase-Pyndiah scheme [11], our algorithm normalizes the received sequence Y, and it does not need to normalize the extrinsic information.

Figure 1: Proposed decoder scheme

The next section deals with the performance of the proposed decoder.

3 THE PERFORMANCE OF THE PROPOSED DECODER

We have evaluated the performance of the proposed decoder over an AWGN channel with BPSK modulation, for product codes and concatenated codes. We use the Chase decoder as the list decoder. The parameter α(p) affects the performance of these codes; for this reason, we have tested several values of this parameter for several codes. In the figure legends, "cp" denotes the Chase-Pyndiah decoder and "proposed" denotes the proposed decoder.

3.1 Turbo Product Codes

For the product codes, we have used the pre-determined values of α(p) shown in Table 1:

TABLE 1: THE PRE-DETERMINED VALUES OF α(p) FOR PRODUCT CODES

iteration | 1          | 2          | 3          | 4          | 5          | 6
α(p)      | 0.00, 0.13 | 0.15, 0.18 | 0.20, 0.25 | 0.30, 0.35 | 0.40, 0.45 | 0.50, 0.55

iteration | 7          | 8          | 9          | 10         | 11
α(p)      | 0.60, 0.65 | 0.70, 0.72 | 0.75, 0.77 | 0.80, 0.82 | 0.85, 0.87

The Chase decoder uses a set of 16 test sequences. The results for BCH(63, 51)² and BCH(127, 113)² are shown in Figures 2 and 3 for 1, 2, 3, 4, 6 and 11 iterations. According to the obtained performance, we observe that up to the 4th iteration the two decoders are similar. Beyond the 4th iteration, however, the Chase-Pyndiah decoder saturates, whereas the proposed decoder further improves the BER. The proposed decoder outperforms the Chase-Pyndiah one by 0.4 dB at BER = 10^-5 for the code BCH(63, 51)². Besides, the proposed decoder does not need to normalize the extrinsic information at each decoding stage, which lightens it. Nevertheless, the number of iterations needed for the convergence of this decoder is 11.

Figure 2: Performance of the proposed decoder for the product code BCH(63, 51)²

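The iterative procedure of Section 2.2, driven by a per-half-iteration scaling schedule such as the values in Table 1, can be sketched as follows. This is a skeleton under stated assumptions: `decode_rows` and `decode_cols` are hypothetical stand-ins for the soft-in/soft-out component decoders, not the paper's actual row/column decoders.

```python
import numpy as np

def turbo_decode(R, decode_rows, decode_cols, alphas):
    """Iterative decoding skeleton per eq (7): R(p) = R + alpha(p) * W(p)."""
    R = np.asarray(R, dtype=float)
    W = np.zeros_like(R)                  # extrinsic information, zero before the first pass
    for p, alpha in enumerate(alphas):
        R_in = R + alpha * W              # eq (7): channel values plus scaled extrinsic
        decode = decode_rows if p % 2 == 0 else decode_cols
        soft = decode(R_in)               # soft output of this half-iteration
        W = soft - R_in                   # extrinsic = soft output minus soft input
    return np.sign(R + W)                 # final hard decision
```

Alternating rows and columns with a growing alpha reproduces the behavior described in the text: the extrinsic information is discounted early, when the BER is high, and trusted more as the iterations proceed.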

Figure 3: Performance of the proposed decoder for the product code BCH(127, 113)²

3.2 Parallel Concatenated Block Codes

We have used the proposed decoder to decode the parallel concatenated block codes of [7], and we have evaluated the performance of these codes using the pre-determined values of the parameter α(p) shown in Table 2:

TABLE 2: THE PRE-DETERMINED VALUES OF α(p) FOR PARALLEL CONCATENATED BLOCK CODES

iteration | 1          | 2          | 3          | 4          | 5
α(p)      | 0.00, 0.18 | 0.24, 0.26 | 0.28, 0.31 | 0.34, 0.38 | 0.42, 0.44

iteration | 6          | 7          | 8          | 9          | 10
α(p)      | 0.46, 0.48 | 0.52, 0.55 | 0.60, 0.65 | 0.67, 0.70 | 0.72, 0.75

For parallel concatenated codes, the Chase decoder uses a set of 18 test sequences (for details, see the Appendix). The results obtained with the proposed decoder, together with the results of [7], for PCB-BCH(75, 51) and PCB-BCH(141, 113) with M = 100, are shown in Figures 4 and 5 for 1, 2, 3, 4, 7 and 10 iterations. According to the obtained performance, we observe that up to the 7th iteration the proposed decoder performs better than the Chase-Pyndiah one. The latter saturates at the 7th iteration, whereas the proposed decoder further improves the BER. The proposed decoder outperforms the Chase-Pyndiah one by about 1.0 dB for the code PCB-BCH(75, 51), and by 0.2 dB for the code PCB-BCH(141, 113). Moreover, our decoder is less complex than the Chase-Pyndiah one. It needs 10 iterations to converge for this type of codes. The following section presents the performance of serially concatenated codes.

Figure 4: Performance of the proposed decoder for the parallel concatenated code PCB-BCH(75, 51), with M = 100.

Figure 5: Performance of the proposed decoder for the parallel concatenated code PCB-BCH(141, 113), with M = 100.

3.3 Generalized Serially Concatenated Block Codes

As for parallel concatenated codes, we have applied the proposed decoder to the generalized serially concatenated block codes of [5]. To evaluate the performance of these codes, we have chosen for the parameter α(p) the values shown in Table 3:

TABLE 3: THE PRE-DETERMINED VALUES OF α(p) FOR GENERALIZED SERIALLY CONCATENATED BLOCK CODES

iteration | 1          | 2          | 3          | 4          | 5
α(p)      | 0.00, 0.20 | 0.25, 0.25 | 0.30, 0.30 | 0.35, 0.35 | 0.40, 0.40

iteration | 6          | 7          | 8          | 9          | 10
α(p)      | 0.45, 0.45 | 0.50, 0.50 | 0.60, 0.60 | 0.70, 0.70 | 0.80, 0.80

The performance of the proposed decoder is evaluated over an AWGN channel; the results obtained are shown in Figures 6 and 7.
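Tables 1-3 each list two α values per iteration; we read them (our assumption, not stated explicitly in the text) as one value per half-iteration, i.e. one per component decoding pass. Under that reading, the schedule consumed by the iterative decoder can be obtained by flattening the table; the sketch below transcribes Table 3 (GSCB codes).

```python
# Transcription of Table 3: iteration -> (alpha for first pass, alpha for second pass).
ALPHA_GSCB = {
    1: (0.00, 0.20), 2: (0.25, 0.25), 3: (0.30, 0.30), 4: (0.35, 0.35),
    5: (0.40, 0.40), 6: (0.45, 0.45), 7: (0.50, 0.50), 8: (0.60, 0.60),
    9: (0.70, 0.70), 10: (0.80, 0.80),
}

def alpha_schedule(table):
    """Flatten an iteration -> (alpha_1, alpha_2) table into one list,
    ordered by iteration, for consumption by the decoding loop."""
    return [a for it in sorted(table) for a in table[it]]
```

The same helper applies unchanged to the Table 1 and Table 2 values.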

Figure 6: Performance of the proposed decoder for the generalized serially concatenated code GSCB-BCH(63, 39), with M = 100.

Figure 7: Performance of the proposed decoder for the generalized serially concatenated code GSCB-BCH(127, 99), with M = 100.

Comparing the results obtained for the generalized serially concatenated codes using our decoder with the results obtained in [5], we remark that the proposed decoder is comparable to the Chase-Pyndiah one. Besides, our decoder is less complex than the Chase-Pyndiah one. It needs 10 iterations to converge for generalized serially concatenated codes.

4 THE ADAPTED PARAMETER α(p)

The parameter α(p) plays a vital role in the decoding performance. In the works [11], [10], [9], this parameter was pre-determined experimentally: its values are chosen such that BER = 10^-5 is attained with the minimum number of iterations. This process is laborious. To overcome this problem, we have adapted the parameter to the circumstances of the product codes and turbo-like codes. The following formula gives the expression of α(p):

α(p) = σ²_{W(p-1)}    (8)

where σ²_{W(p-1)} denotes the variance of the normalized extrinsic information delivered by the previous decoder. The performance obtained using the adapted parameter α(p) of (8) is comparable to that obtained with the pre-determined parameter; therefore, we do not need to re-optimize this parameter when we change the application. Figures 8, 9 and 10 show the performance obtained using the adapted parameter.

Figure 8: Performance of the proposed decoder for the product code BCH(63, 51)², with adapted α(p)

Figure 9: Performance of the proposed decoder for the parallel concatenated code PCB-BCH(141, 113), with adapted α(p) and M = 100.
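Equation (8) can be implemented directly by measuring the empirical variance of the previous half-iteration's extrinsic information. This is a minimal sketch under our reading of (8); the normalization of the extrinsic information is assumed to happen upstream, as in the decoder scheme of Figure 1.

```python
import numpy as np

def adapted_alpha(W_prev):
    """Eq (8): alpha(p) is taken as the variance of the normalized extrinsic
    information W(p-1) delivered by the previous component decoder."""
    return float(np.var(np.asarray(W_prev, dtype=float)))
```

Each half-iteration would call this on the extrinsic matrix it just produced, replacing the hand-tuned table lookup with a value adapted to the channel, the code and the modulation.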

Figure 10: Performance of the proposed decoder for the generalized serially concatenated code GSCB-BCH(127, 99), with adapted α(p) and M = 100.

5 CONCLUSION

In this paper, we have introduced a new decoder for product and concatenated codes. This decoder uses a scaling parameter α(p), like the Chase-Pyndiah decoder. For the latter, this parameter is pre-determined experimentally and must be re-optimized whenever we change the code, the modulation or the application. To overcome this problem, we have adapted this parameter to the circumstances of the decoder. The performance obtained with the adapted parameter is slightly better than that obtained with the pre-determined one. The proposed decoding algorithm outperforms the Chase-Pyndiah one by about 1.0 dB for a parallel concatenated code and 0.4 dB for a product code. Moreover, it does not need to normalize the extrinsic information at each decoding stage, which reduces its complexity. In a future work we will show that the proposed decoder allows us to analyze the convergence behavior of the iterative decoding using EXIT charts.

APPENDIX

The number of test sequences Y_l used by Chase-Pyndiah decoding is 18. Let I1, I2, I3, I4 and I5 denote the positions of the five least reliable symbols at the input of the component decoder, sorted in increasing reliability order. The first test sequence, Y0, is the hard decision of the input of the Chase-Pyndiah decoder; the other test sequences are given below, with the flipped positions of each sequence between brackets.

Y1 (I1), Y2 (I2), Y3 (I1, I2), Y4 (I3), Y5 (I1, I3), Y6 (I4), Y7 (I2, I3), Y8 (I1, I4), Y9 (I1, I2, I3), Y10 (I1, I5), Y11 (I2, I3, I4), Y12 (I1, I2, I3, I4), Y13 (I1, I3, I5), Y14 (I1, I2, I4, I5), Y15 (I1, I3, I4, I5), Y16 (I2, I3, I4, I5), Y17 (I1, I2, I3, I4, I5)

REFERENCES

[1] R. Pyndiah and A. Picart. Adapted iterative decoding of product codes. IEEE Global Telecommunications Conference (GLOBECOM), 1999.
[2] C. Berrou, A. Glavieux and P. Thitimajshima. Near Shannon limit error-correcting coding and decoding: Turbo-codes. IEEE International Conference on Communications (ICC'93), 2/3, May 1993.
[3] A. R. Calderbank. Multilevel codes and multistage decoding. IEEE Transactions on Information Theory, 37(3):222-229, Mar 1989.
[4] D. Chase. A class of algorithms for decoding block codes with channel measurement information. IEEE Transactions on Information Theory, 18(1):170-182, Jan 1972.
[5] A. Farchane and M. Belkasmi. Generalized serially concatenated codes: construction and iterative decoding. IJMCS, 6(2), 2010.
[6] H. Imai and S. Hirakawa. A new multilevel coding method using error-correcting codes. IEEE Transactions on Information Theory, 1977.
[7] M. Belkasmi and A. Farchane. Iterative decoding of parallel concatenated block codes. ICCCE'08, May 13-15, 2008.
[8] P. A. Martin and D. P. Taylor. On adaptive reduced-complexity iterative decoding.
[9] R. Pyndiah and A. Picart. Performance of turbo-decoded product codes used in multilevel coding. IEEE Proc. of ICC'96, Dallas, June 1996.
[10] R. Pyndiah, A. Picart and A. Glavieux. Performance of block turbo-coded 16-QAM and 64-QAM modulations. IEEE Proc. of GLOBECOM'95, Singapore, Nov. 1995.
[11] R. Pyndiah, A. Glavieux, A. Picart and S. Jacq. Near optimum decoding of product codes. GLOBECOM'94, Nov. 1994.

A. Farchane received his licence diploma in Computer Science and Engineering in June 2001 and his Master diploma in Computer Science and Telecommunications from University Mohammed V-Agdal, Rabat, Morocco, in 2003. Currently he is pursuing his PhD in Computer Science and Engineering at ENSIAS (École Nationale Supérieure d'Informatique et d'Analyse des Systèmes), Rabat, Morocco. His areas of interest are Information and Coding Theory.

M. Belkasmi is a professor at ENSIAS (École Nationale Supérieure d'Informatique et d'Analyse des Systèmes) and head of the Telecom and Embedded Systems Team at SIME Lab. He received his PhD from Toulouse University (France) in 1991. His current research interests include mobile and wireless communications, interconnections for 3G and 4G, and Information and Coding Theory.
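The test-sequence construction described in the Appendix can be sketched as follows. The pattern list transcribes Y0 through Y17, with I1..I5 represented as 0-based indices into the sorted least-reliable positions; the function name and BPSK representation are illustrative assumptions.

```python
import numpy as np

# Subsets of {I1..I5} flipped by each test sequence, transcribed from the
# Appendix: the empty tuple is Y0 (plain hard decision), then Y1..Y17.
PATTERNS = [(), (0,), (1,), (0, 1), (2,), (0, 2), (3,), (1, 2), (0, 3),
            (0, 1, 2), (0, 4), (1, 2, 3), (0, 1, 2, 3), (0, 2, 4),
            (0, 1, 3, 4), (0, 2, 3, 4), (1, 2, 3, 4), (0, 1, 2, 3, 4)]

def chase_test_sequences(r):
    """Generate the 18 Chase test sequences for a soft input vector r."""
    r = np.asarray(r, dtype=float)
    y0 = np.where(r >= 0, 1, -1)                 # hard decision Y0, BPSK form
    least = np.argsort(np.abs(r))[:5]            # I1..I5, increasing reliability
    seqs = []
    for pat in PATTERNS:
        y = y0.copy()
        y[least[list(pat)]] *= -1                # flip the selected positions
        seqs.append(y)
    return seqs
```

Each test sequence is then re-encoded or algebraically decoded by the component decoder to populate the candidate list used in Section 2.1.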