
TURBO CODE CONCEPTS MADE EASY, OR HOW I LEARNED TO CONCATENATE AND REITERATE

Bernard Sklar
Communications Engineering Services, Tarzana, California

ABSTRACT
The goal of this paper is to describe the main ideas behind the new class of codes called turbo codes, whose performance in terms of bit-error probability has been shown to be very close to the Shannon limit. The benefits of these concatenated codes are illustrated in the context of likelihood functions and a posteriori probabilities. Since the algorithms needed to implement the decoders have been well documented by others, they are only referenced here, not described in detail. A numerical example, using a simple concatenated coding scheme, provides a vehicle for explaining how error performance can be improved when soft outputs from the decoders are used in an iterative decoding process.
EVOLUTION OF TURBO CODES
Concatenated coding schemes were first proposed by Forney [1] for achieving large coding gains by combining two or more relatively simple component or building-block codes. The resulting codes had the error-correction capability of much longer codes, and they were endowed with a structure that permitted relatively easy to moderately complex decoding. A serial concatenation of codes is most often used for power-limited systems such as deep-space probes. The most popular of these schemes consists of a Reed-Solomon outer (applied first, removed last) code followed by a convolutional inner (applied last, removed first) code [2]. A turbo code can be thought of as a refinement of the concatenated encoding structure together with an iterative algorithm for decoding the associated code sequence.
Turbo codes were introduced by Berrou, Glavieux, and Thitimajshima [3, 4]. For a bit-error probability of 10^-5 and code rate = 1/2, the authors report a required Eb/N0 of 0.7 dB. The codes are constructed by applying two or more component codes to different interleaved versions of the same information sequence. For any single traditional code, the final step at the decoder yields hard-decision decoded bits (or, more generally, decoded symbols). In order for a concatenated scheme such as a turbo code to work properly, the decoding algorithm should not limit itself to passing hard decisions among the decoders. To best exploit the information learned from each decoder, the decoding algorithm must effect an exchange of soft decisions rather than hard decisions. For a system with two component codes, the concept behind turbo decoding is to pass soft decisions from the output of one decoder to the input of the other decoder, and to iterate this process several times so as to produce better decisions.

LOG-LIKELIHOOD RATIO
Let the binary logical elements 1 and 0 be represented electronically by voltages +1 and -1, respectively. This pair of transmitted voltages is assigned to the data variable d, which may now take on the values d = +1 and d = -1. Let the binary 0 (or the voltage value -1) be the null element under addition. The log-likelihood ratio (LLR) of d conditioned on x is defined as
L(d|x) = log [ P(d = +1|x) / P(d = -1|x) ]    (1)

where x is a continuous-valued random variable representing data plus noise, and P(d = i|x) is the conditional a posteriori probability (APP) of the data. L(d|x) is a real number representing a soft decision out of a detector. Using Bayes' theorem [5], we write

L(d|x) = log [ p(x|d = +1) / p(x|d = -1) ] + log [ P(d = +1) / P(d = -1) ]    (2)

where p(x|d = i) is the conditional probability density function (pdf) of x and P(d = i) is the a priori probability of d. We represent Eq. (2) in simplified notation as

L(d|x) = L(x|d) + L(d)    (3)

where L(x|d) is the LLR of the channel measurement of x under the alternate conditions that d = +1 or d = -1 may have been transmitted, and L(d) is the a priori LLR of the data bit d. To further simplify the notation, we represent Eq. (3) as

L(d̂) = Lc(x) + L(d)    (4)

where the notation Lc(x) emphasizes that this LLR term is the result of a channel measurement made at the detector. Eqs. (1) through (4) were developed with only a data detector in mind. Next, the introduction of a decoder will typically yield decision-making benefits. For a systematic code, it can be shown [3] that the LLR (soft output) L(d̂) out of the decoder is equal to

L(d̂) = L'(d̂) + Le(d̂)    (5)

where L'(d̂) is the LLR of a data bit out of the detector (input to the decoder), and Le(d̂), called the extrinsic LLR, represents extra knowledge that is gleaned from the decoding process. From Eqs. (4) and (5), we write

L(d̂) = Lc(x) + L(d) + Le(d̂)    (6)

The soft decision L(d̂) is a real number that provides a hard decision as well as the reliability of that decision. The sign of L(d̂) denotes the hard decision; that is, for positive values of L(d̂) decide +1, for negative values decide -1. The magnitude of L(d̂) denotes the reliability of that decision.
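As a small concrete illustration of Eq. (6) and the sign/magnitude interpretation, the following Python fragment is a sketch written for this discussion; the numeric values are arbitrary and are not taken from the paper.

```python
L_c = 1.5    # channel measurement LLR, Lc(x)
L_a = 0.0    # a priori LLR L(d); zero when 1s and 0s are equally likely
L_e = -0.4   # extrinsic LLR Le(d^) gleaned from the decoding process

L_soft = L_c + L_a + L_e            # Eq. (6): soft decision out of the decoder
hard = +1 if L_soft > 0 else -1     # the sign gives the hard decision (+1 or -1)
reliability = abs(L_soft)           # the magnitude gives the reliability of that decision
```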



PRINCIPLES OF ITERATIVE DECODING


Figure 1. Soft Input/Soft Output Decoder (systematic code)
(Inputs: channel values and a priori values. Outputs: a posteriori values and extrinsic values, with feedback of the extrinsic values for the next iteration.)

For the first decoding iteration of the soft input/soft output decoder in Fig. 1, one generally assumes the binary data to be equally likely, yielding an initial a priori LLR value of L(d) = 0 for the a priori term L(d) in Eq. (6). The output L(d̂) of the Fig. 1 decoder is made up of the LLR from the detector, L'(d̂), and the extrinsic LLR output, Le(d̂), representing knowledge gleaned from the decoding process. For iterative decoding, the extrinsic likelihood is fed back to the decoder input, to serve as a refinement of the a priori value for the next iteration. Consider the two-dimensional code (product code) depicted in Fig. 2.
Figure 2. Two-Dimensional Product Code
(Array: k1 rows and k2 columns of data d; n2-k2 columns of horizontal parity; n1-k1 rows of vertical parity; blocks holding the horizontal and vertical extrinsic values.)

The configuration can be described as a data array made up of k1 rows and k2 columns. Each of the k1 rows contains a code vector made up of k2 data bits and n2-k2 parity bits. Similarly, each of the k2 columns contains a code vector made up of k1 data bits and n1-k1 parity bits. The various portions of the structure are labeled d for data, ph for horizontal parity (along the rows), and pv for vertical parity (along the columns). Additionally, there are blocks labeled Leh and Lev which house the extrinsic LLR values learned from the horizontal and vertical decoding steps, respectively. This product code is a simple example of a concatenated code. Its structure encompasses two separate encoding steps: horizontal and vertical. The iterative decoding algorithm for this code proceeds as follows:

1. Set the a priori information L(d) = 0.
2. Decode horizontally, and using Eq. (6) obtain the horizontal extrinsic information as shown below:
   Leh(d̂) = L(d̂) - Lc(x) - L(d)
3. Set L(d) = Leh(d̂).
4. Decode vertically, and using Eq. (6) obtain the vertical extrinsic information as shown below:
   Lev(d̂) = L(d̂) - Lc(x) - L(d)
5. Set L(d) = Lev(d̂).
6. If there have been enough iterations to yield a reliable decision, go to step 7; otherwise, go to step 2.
7. The soft output is
   L(d̂) = Lc(x) + Leh(d̂) + Lev(d̂)    (7)

TWO-DIMENSIONAL SINGLE-PARITY CODE
At the encoder, let the data bits and parity bits take on the values shown in Fig. 3, where the relationships between the data and parity, expressed as binary digits (0, 1), are as follows:

di ⊕ dj = pij    (8)

where ⊕ denotes modulo-2 addition.
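As an illustration of Eq. (8), the following minimal Python sketch (written for this discussion; the array layout and variable names are assumptions) builds the horizontal and vertical parity bits for the 2 x 2 block of data bits used in the Fig. 3 example.

```python
# Single-parity product-code encoder for a 2 x 2 data block (illustrative sketch).
# Data layout: row 1 -> d1 d2, row 2 -> d3 d4 (binary digits 0/1).
d = [1, 0, 0, 1]                     # d1 d2 d3 d4, the Fig. 3 example

p12 = d[0] ^ d[1]                    # horizontal parity of row 1, per Eq. (8)
p34 = d[2] ^ d[3]                    # horizontal parity of row 2
p13 = d[0] ^ d[2]                    # vertical parity of column 1
p24 = d[1] ^ d[3]                    # vertical parity of column 2

codeword = d + [p12, p34, p13, p24]  # transmitted order: d1 d2 d3 d4 p12 p34 p13 p24
print(codeword)                      # -> [1, 0, 0, 1, 1, 1, 1, 1]
```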

As shown in Fig. 3, the transmitted symbols are represented by the sequence d1 d2 d3 d4 p12 p34 p13 p24. At the receiver input, the received symbols are represented by the sequence {xi, xij}, where xi = di + n for each received data signal, xij = pij + n for each received parity signal, and n represents independent and identically distributed noise. For notational simplicity, we shall denote the received sequence with a single index, as {xk}, where k can be treated as a time index. Using the relationships developed in Eqs. (2) through (4), and assuming Gaussian noise, we can write the LLR for the channel measurement of a received signal xk as the log-ratio of the Gaussian likelihoods p(xk|dk = +1) and p(xk|dk = -1), which reduces to

Lc(xk) = 2 xk / σ²

where the natural logarithm is used. If we further make the simplifying assumption that the noise variance σ² is unity, then

Lc(xk) = 2 xk    (13)

Consider the following example, where the data sequence d1 d2 d3 d4 is made up of the bits 1 0 0 1, as shown in Fig. 3. By the use of Eq. (8), it is seen that the parity sequence p12 p34 p13 p24 must be made up of the bits 1 1 1 1. Thus, the transmitted sequence is

{di, pij} = 1 0 0 1 1 1 1 1

which is shown in Fig. 3 as the encoder output. Expressed in terms of bipolar voltage values, the transmitted sequence is

+1 -1 -1 +1 +1 +1 +1 +1

Let us assume that the noise transforms this data-plus-parity sequence into the received sequence

{xk} = 0.75, 0.05, 0.10, 0.15, 1.25, 1.0, 3.0, 0.5

From Eq. (13), our assumed channel measurements yield the LLR values

{Lc(xk)} = 1.5, 0.1, 0.2, 0.3, 2.5, 2.0, 6.0, 1.0

which are shown in Fig. 3 as the decoder input measurements. It should be noted that, if hard decisions are made on the {xk} or the {Lc(xk)} values shown above, such a detection process would result in two errors, since d2 and d3 would each be incorrectly classified as binary 1.

Figure 3. Product Code Example
(Panels: encoder output binary digits; decoder input log-likelihood ratios Lc(x).)

LOG-LIKELIHOOD ALGEBRA
To best explain the iterative feedback of soft decisions, we introduce the idea of a log-likelihood algebra. For statistically independent d, we define the sum of two log-likelihood ratios (LLRs) as follows [6]:

L(d1) ⊞ L(d2) = L(d1 ⊕ d2) = log [ (e^L(d1) + e^L(d2)) / (1 + e^(L(d1)+L(d2))) ]    (14)

≈ (-1) × sgn[L(d1)] × sgn[L(d2)] × min(|L(d1)|, |L(d2)|)    (15)

where the natural logarithm is used. There are three addition operations in Eq. (14): + is used for ordinary addition, ⊕ is used for modulo-2 addition of the data expressed as binary digits, and ⊞ is used for log-likelihood addition. The sum of two LLRs is denoted by the operator ⊞, where such an addition is defined as the LLR of the modulo-2 sum of the underlying data bits. Eq. (15) is an approximation of Eq. (14) that will prove useful later in a numerical example. The sum of LLRs as described by Eq. (14) or (15) yields the following interesting results when one of the LLRs is very large or very small:

L(d) ⊞ ∞ = -L(d)    and    L(d) ⊞ 0 = 0

Note that the log-likelihood algebra described here differs slightly from that of Hagenauer's reference [6] because of choosing the null element differently. In this paper, the null element of the binary set (0, 1) has been chosen to be the 0.

EXTRINSIC LIKELIHOODS
For the product code in Fig. 3, consider the soft output L(d̂1) for the received signal corresponding to data d1:

L(d̂1) = Lc(x1) + L(d1) + {[Lc(x2) + L(d2)] ⊞ Lc(x12)}    (16)

or, in general, the soft output L(d̂i) for the received signal corresponding to data di is:

L(d̂i) = Lc(xi) + L(di) + {[Lc(xj) + L(dj)] ⊞ Lc(xij)}    (17)

where Lc(xi), Lc(xj), and Lc(xij) are the channel LLR measurements of the received signals corresponding to di, dj, and pij, respectively. L(di) and L(dj) are the LLRs of the a priori probabilities of di and dj, respectively, and {[Lc(xj) + L(dj)] ⊞ Lc(xij)} is the extrinsic LLR contribution from the code. The horizontal and vertical calculations for Leh(d̂) and Lev(d̂), respectively, are:

Leh(d̂1) = [Lc(x2) + L(d2)] ⊞ Lc(x12)    (18a)
Lev(d̂1) = [Lc(x3) + L(d3)] ⊞ Lc(x13)    (18b)
Leh(d̂2) = [Lc(x1) + L(d1)] ⊞ Lc(x12)    (19a)
Lev(d̂2) = [Lc(x4) + L(d4)] ⊞ Lc(x24)    (19b)
Leh(d̂3) = [Lc(x4) + L(d4)] ⊞ Lc(x34)    (20a)
Lev(d̂3) = [Lc(x1) + L(d1)] ⊞ Lc(x13)    (20b)
Leh(d̂4) = [Lc(x3) + L(d3)] ⊞ Lc(x34)    (21a)
Lev(d̂4) = [Lc(x2) + L(d2)] ⊞ Lc(x24)    (21b)

The LLR values shown in Fig. 3 are entered into the Leh(d̂) expressions in Eqs. (18) through (21), and the L(d) values are initially set equal to zero, yielding:

Leh(d̂1) = (0.1 + 0) ⊞ 2.5 = -0.1 = new L(d1)    (22)
Leh(d̂2) = (1.5 + 0) ⊞ 2.5 = -1.5 = new L(d2)    (23)
Leh(d̂3) = (0.3 + 0) ⊞ 2.0 = -0.3 = new L(d3)    (24)
Leh(d̂4) = (0.2 + 0) ⊞ 2.0 = -0.2 = new L(d4)    (25)

where the log-likelihood addition has been calculated using the approximation in Eq. (15). We next proceed with the first vertical calculations, using the Lev(d̂) expressions in Eqs. (18) through (21). Now the values of L(d) can be refined by using the new L(d) values gleaned from the first horizontal calculations, shown in Eqs. (22) through (25). From this, we obtain the following values:

Lev(d̂1) = 0.1, Lev(d̂2) = -0.1, Lev(d̂3) = -1.4, Lev(d̂4) = 1.0

The results of the first full iteration of the two decoding steps (horizontal and vertical) are shown below.

Original LLR measurements:
  1.5   0.1
  0.2   0.3

After first horizontal decoding, Leh(d̂):
 -0.1  -1.5
 -0.3  -0.2

After first vertical decoding, Lev(d̂):
  0.1  -0.1
 -1.4   1.0

Each decoding step improves the original LLR channel measurements. This is seen by calculating the decoder output LLR using Eq. (7). The original LLRs plus the horizontal extrinsic LLRs yield the following improved LLR values:

  1.4  -1.4
 -0.1   0.1

The original LLRs plus both the horizontal and vertical extrinsic LLRs yield the following improved LLR values:

  1.5  -1.5
 -1.5   1.1

For this example, it is seen that the horizontal parity alone yields the correct hard decisions out of the decoder, but with very low confidence for data bits d3 and d4. After enhancing the decoder LLRs with the vertical extrinsic LLRs, the new LLR values have better confidence. Let us pursue one additional horizontal and vertical decoding iteration to see if there are any significant changes in the results. We again use the relationships shown in Eqs. (18) through (21) and proceed with the second horizontal calculations for Leh(d̂), using the new L(d) resulting from the first vertical calculations listed above. From this, we obtain the following values: Leh(d̂1) = 0, Leh(d̂2) = -1.6, Leh(d̂3) = -1.3, Leh(d̂4) = 1.2. Next, we proceed with the second vertical calculations for Lev(d̂), using the new L(d) from the second horizontal calculations. This yields the values: Lev(d̂1) = 1.1, Lev(d̂2) = -1.0, Lev(d̂3) = -1.5, and Lev(d̂4) = 1.0. After the second iteration of horizontal and vertical decoding calculations, the soft-output likelihood values are again calculated from Eq. (7), rewritten below:

L(d̂) = Lc(x) + Leh(d̂) + Lev(d̂)    (26)

The final horizontal and vertical extrinsic LLRs, and the resulting decoder LLRs, are displayed below.

Final horizontal extrinsic LLRs, Leh(d̂):
  0.0  -1.6
 -1.3   1.2

Final vertical extrinsic LLRs, Lev(d̂):
  1.1  -1.0
 -1.5   1.0

Resulting decoder LLRs, L(d̂):
  2.6  -2.5
 -2.6   2.5

For this example, the second full iteration of both codes (yielding a total of four iterations) suggests a modest improvement over one full iteration. Now there seems to be a balancing of the confidence values among each of the four data decisions.
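The arithmetic of Eqs. (15) and (18) through (26) is compact enough to be reproduced in a few lines of code. The following Python sketch is an illustration written for this discussion, not code from the original paper; the variable names and data structures are assumptions. It applies the approximate log-likelihood addition of Eq. (15) and the horizontal/vertical extrinsic exchange to the Fig. 3 measurements.

```python
import numpy as np

def llr_add(a, b):
    """Approximate log-likelihood addition of Eq. (15): a [+] b."""
    return -np.sign(a) * np.sign(b) * min(abs(a), abs(b))

# Channel LLRs from Fig. 3: data d1..d4, then parities p12, p34, p13, p24.
Lc_d = np.array([1.5, 0.1, 0.2, 0.3])            # Lc(x1)..Lc(x4)
Lc_p = {'p12': 2.5, 'p34': 2.0, 'p13': 6.0, 'p24': 1.0}

# (partner data index, parity) used by each data bit, per Eqs. (18)-(21):
horiz = [(1, 'p12'), (0, 'p12'), (3, 'p34'), (2, 'p34')]   # horizontal code
vert  = [(2, 'p13'), (3, 'p24'), (0, 'p13'), (1, 'p24')]   # vertical code

L_apriori = np.zeros(4)                          # step 1: L(d) = 0
for it in range(2):                              # two full iterations
    Leh = np.array([llr_add(Lc_d[j] + L_apriori[j], Lc_p[p]) for j, p in horiz])
    L_apriori = Leh                              # step 3: feed horizontal extrinsic forward
    Lev = np.array([llr_add(Lc_d[j] + L_apriori[j], Lc_p[p]) for j, p in vert])
    L_apriori = Lev                              # step 5: feed vertical extrinsic forward

L_soft = Lc_d + Leh + Lev                        # Eq. (26)
print(Leh, Lev)   # second-iteration extrinsics, ~[0, -1.6, -1.3, 1.2] and ~[1.1, -1.0, -1.5, 1.0]
print(L_soft)     # final decoder LLRs, ~[2.6, -2.5, -2.6, 2.5] -> hard decisions 1 0 0 1
```

On its first pass the sketch reproduces the hand calculations of Eqs. (22) through (25), and after the second pass it reproduces (to within floating-point rounding) the extrinsic and decoder LLRs quoted above.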

COMPONENT CODES
We next apply the ideas of concatenation and iteration to the implementation of turbo codes. We form a turbo code by the parallel concatenation of component codes (building blocks) [3, 7]. First, a reminder of a simple binary rate 1/2 convolutional encoder with constraint length K and memory K-1. The input to the encoder at time k is a bit dk, and the corresponding codeword is the bit pair (uk, vk), where

uk = Σ_{i=0}^{K-1} g1i dk-i   mod 2,   g1i = 0, 1    (27)

vk = Σ_{i=0}^{K-1} g2i dk-i   mod 2,   g2i = 0, 1    (28)

G1 = {g1i} and G2 = {g2i} are the code generators, and dk is represented as a binary digit. This encoder has a finite impulse response (FIR), and gives rise to the familiar nonsystematic convolutional code (NSC), an example of which is seen in Fig. 4.

Figure 4. Nonsystematic Convolutional (NSC) Code

In this example, the constraint length is K = 3, and the two code generators are described by G1 = {111} and G2 = {101}. It is well known that, at large Eb/N0, the error performance of a NSC is better than that of a systematic code with the same memory. At small Eb/N0, it is generally the other way around. A class of infinite impulse response (IIR) convolutional codes has been proposed [3] for the building blocks of a turbo code. Such codes are also referred to as recursive systematic convolutional (RSC) codes because previously encoded information bits are continually fed back to the encoder's input. For high code rates, RSC codes result in better error performance than the best NSC codes at any Eb/N0. A binary, rate 1/2, RSC code is obtained from a NSC code by using a feedback loop, and setting one of the two outputs (uk or vk) equal to dk. Fig. 5 illustrates an example of such a RSC code, with K = 3, where ak is recursively calculated as

ak = dk + Σ_{i=1}^{K-1} gi' ak-i   mod 2    (29)


Figure 5. Recursive Systematic Convolutional Code

where gi' is respectively equal to g1i if uk = dk, and to g2i if vk = dk. We assume that the input bit dk takes on values of 0 or 1 with equal probability. It can be shown that ak exhibits the same equally-likely probability. The trellis structure is identical for the RSC code of Fig. 5 and the NSC code of Fig. 4, and these two codes have the same free distance. However, the two output sequences {uk} and {vk} do not correspond to the same input sequence {dk} for the RSC and NSC codes. For the same code generators, we can say that RSC codes do not modify the output weight distribution of the codewords compared to NSC codes; they only change the mapping between input data sequences and output codeword sequences.

CONCATENATION OF RSC CODES
Consider the parallel concatenation of two RSC encoders of the type shown in Fig. 5. Good turbo codes have been constructed with component codes having quite short constraint lengths (K = 3 to 5). An example of such a turbo encoder is shown in Fig. 6, where the switch yielding vk punctures the code, making it rate 1/2. Without the switch, the code would be rate 1/3.
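To make the parallel-concatenation idea concrete, here is a minimal Python sketch written for this discussion rather than taken from the paper. The generator assignments (feedback taps drawn from G1, parity taps from G2), the fixed pseudo-random interleaver, and the omission of trellis termination are assumptions made for illustration, consistent with Eq. (29) and the Fig. 5 / Fig. 6 description.

```python
import random

def rsc_encode(bits, g_fb=(1, 1), g_fwd=(1, 0, 1)):
    """Rate-1/2 RSC encoder, K = 3, in the style of Fig. 5 / Eq. (29).

    g_fb holds the feedback taps acting on the state bits (a_{k-1}, a_{k-2});
    g_fwd = (g0, g1, g2) forms the parity from (a_k, a_{k-1}, a_{k-2}).
    Returns (systematic bits, parity bits); trellis termination is omitted."""
    a1 = a2 = 0                                   # shift-register state a_{k-1}, a_{k-2}
    parity = []
    for d in bits:
        a = d ^ (g_fb[0] & a1) ^ (g_fb[1] & a2)   # Eq. (29): a_k
        p = (g_fwd[0] & a) ^ (g_fwd[1] & a1) ^ (g_fwd[2] & a2)
        parity.append(p)
        a1, a2 = a, a1
    return list(bits), parity

def turbo_encode(bits, interleaver):
    """Parallel concatenation of two identical RSC encoders (Fig. 6 style).
    Unpunctured, the output is rate 1/3: (systematic, parity1, parity2)."""
    _, v1 = rsc_encode(bits)
    _, v2 = rsc_encode([bits[i] for i in interleaver])
    return list(bits), v1, v2

data = [1, 0, 1, 1, 0, 0, 1, 0]
pi = list(range(len(data)))
random.Random(0).shuffle(pi)            # a fixed pseudo-random interleaver
u, v1, v2 = turbo_encode(data, pi)
# Puncturing to rate 1/2 would transmit u plus alternate bits of v1 and v2.
```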

Figure 6. Parallel Concatenation of Two RSC Encoders
(Encoders C1 and C2 in parallel, with an interleaver ahead of C2; the switch yielding vk punctures the code to rate 1/2.)

Additional concatenations can be continued with multiple component codes. In general, the component encoders need not be identical with regard to constraint length and rate. In designing turbo codes, the goal is to choose the best component codes by maximizing the effective free distance of the code [8]. At large values of Eb/N0, this is tantamount to maximizing the minimum-weight codeword. However, at low values of Eb/N0 (the region of greatest interest), optimizing the weight distribution of the codewords is more important than maximizing the minimum weight [7].

The turbo encoder in Fig. 6 produces codewords from each of two component encoders. The weight distribution for the codewords out of this parallel concatenation depends on how the codewords from one of the component encoders are combined with codewords from the other encoder(s). Intuitively, we would like to avoid pairing low-weight codewords from one encoder with low-weight codewords from the other encoder. Many such pairings can be avoided by proper design of the interleaver. An interleaver that permutes the data in a random fashion provides better performance than the familiar block interleaver [9]. If the encoders are not recursive, a low-weight codeword generated by the input sequence d = (0 0 ... 0 0 1 0 0 ... 0 0) with a single binary 1 will always appear again at the input of the second encoder, for any choice of interleaver. In other words, the interleaver would not have an important function in the codeword weight distribution if the codes were not recursive. However, if the component codes are recursive, a weight-1 input sequence generates an infinite impulse response (infinite-weight output). Therefore, for the case of recursive codes, the weight-1 input sequence does not yield the minimum-weight codeword out of the encoder. The encoded output weight is kept finite only by trellis termination, a process that forces the coded sequence to terminate in the zero state. In effect, the convolutional code is converted to a block code. For the Fig. 6 encoder, the minimum-weight codeword for each component encoder is generated by the weight-3 input sequence (0 0 ... 0 0 1 1 1 0 0 ... 0 0) with three consecutive 1's. Another input that produces fairly low-weight codewords is the weight-2 sequence (0 0 ... 0 0 1 0 0 1 0 0 ... 0 0). However, after the permutations of an interleaver, either of these deleterious input patterns is not likely to appear again at the input to another encoder, so it is unlikely that a minimum-weight codeword will be combined with another minimum-weight codeword. The important aspect of the building blocks used in turbo codes is that they are recursive (the systematic aspect is merely incidental). It is the RSC code's IIR property that protects against those low-weight encodings which cannot be remedied by an interleaver. One can argue that turbo code performance is determined largely by the minimum-weight codewords that result from the weight-2 input sequence. The argument is that weight-1 inputs can be ignored, since they yield large codeword weights due to the IIR code structure, and for input sequences having weight 3 and larger, the interleaver makes such deleterious input words relatively rare [8-12].

A FEEDBACK DECODER
The Viterbi algorithm (VA) is an optimal decoding method for minimizing the probability of sequence error. Unfortunately, the VA is not able to yield the a posteriori probability (APP) or soft-decision output for each decoded bit. A relevant algorithm for doing this has been proposed by Bahl et al. [13]. The Bahl algorithm was modified by Berrou et al. [3] for use in decoding RSC codes. The APP of a decoded data bit dk can be derived from the joint probability λk_i(m), defined by

λk_i(m) = P{dk = i, Sk = m | R1_N}    (30)

where Sk is the state at time k, and R1_N is the received sequence from time k = 1 through some time N. Thus, the APP for a decoded data bit dk, represented as a binary digit, is equal to

P{dk = i | R1_N} = Σm λk_i(m),   i = 0, 1    (31)

The log-likelihood function is written as the logarithm of the ratio of APPs, as follows:

L(d̂k) = log [ Σm λk_1(m) / Σm λk_0(m) ]    (32)

The decoder can make a decision by comparing L(d̂k) to a zero threshold: d̂k = 1 if L(d̂k) > 0, and d̂k = 0 if L(d̂k) < 0. For a systematic code, the LLR L(d̂k) associated with each decoded bit dk can be described as the sum of the LLR of dk out of the detector and of other LLRs generated by the decoder (extrinsic information), as was expressed in Eqs. (5) and (6). Consider the detection of a noisy data sequence that stems from the encoder of Fig. 6. The decoder is shown in Fig. 7.

Figure 7. Feedback Decoder
(Decoders DEC1 and DEC2 with an interleaver between them and a feedback loop carrying extrinsic information; inputs are xk and the demultiplexed redundant information.)

For a discrete memoryless Gaussian channel and binary modulation, the decoder input is made up of a couple Rk of two random variables xk and yk. Where dk and vk are bits, we can express the received bit-to-bipolar-pulse conversion at time k as follows:

xk = (2dk - 1) + ik    and    yk = (2vk - 1) + qk    (33)

where ik and qk are two independent noises with the same variance σ². The redundant information, yk, is demultiplexed and sent to decoder DEC1 as y1k when vk = v1k, and to decoder DEC2 as y2k when vk = v2k. When the redundant information of a given encoder (C1 or C2) is not emitted, the corresponding decoder input is set to zero. Note that the output of DEC1 has an interleaver structure identical to the one used at the transmitter between the two encoders. This is because the information processed by DEC1 is the noninterleaved output of C1 (corrupted by channel noise). Conversely, the information processed by DEC2 is the noisy output of C2, whose input is the same data going into C1, but permuted by the interleaver. DEC2 makes use of the DEC1 output, provided this output is time-ordered in the same way as the input to C2.
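As a small illustration of the Eq. (33) channel model, the sketch below is written only for this discussion; the placeholder bit sequences, the noise level, the alternating puncturing pattern, and the variable names are assumptions, and the component decoders themselves are not implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0                                  # noise standard deviation (assumed)

d  = rng.integers(0, 2, size=8)              # data bits (placeholder sequence)
v1 = rng.integers(0, 2, size=8)              # parity bits of C1 (stand-ins; a real
v2 = rng.integers(0, 2, size=8)              # encoder would produce these)

# Eq. (33): bit-to-bipolar conversion plus independent Gaussian noises i_k, q_k.
x  = (2 * d  - 1) + sigma * rng.normal(size=8)
y1 = (2 * v1 - 1) + sigma * rng.normal(size=8)
y2 = (2 * v2 - 1) + sigma * rng.normal(size=8)

# Assumed rate-1/2 puncturing: transmit v1 on even k and v2 on odd k; the decoder
# input for a parity value that was not emitted is set to zero, as described above.
y1_in = np.where(np.arange(8) % 2 == 0, y1, 0.0)
y2_in = np.where(np.arange(8) % 2 == 1, y2, 0.0)

Lc_x = (2 / sigma**2) * x                    # channel LLRs for the systematic bits
```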

DECODING WITH A FEEDBACK LOOP
We rewrite Eq. (6) for the soft-decision output at time k, with the a priori LLR L(dk) initially set to zero. This follows from the assumption that the data bits are equally likely:

L(d̂k) = Lc(xk) + Le(d̂k)    (34)

where L(d̂k) is the soft-decision output, and Lc(xk) is the LLR channel measurement, stemming from the ratio of likelihood functions p(xk|dk = i) of the discrete memoryless channel. Le(d̂k) = L(d̂k)|xk=0 is a function of the redundant information. It is the extrinsic information supplied by the decoder, and does not depend on the decoder input xk. Ideally, Lc(xk) and Le(d̂k) are corrupted by uncorrelated noise, and thus Le(d̂k) may be used as a new observation of dk by another decoder for an iterative process. The fundamental principle for feeding back information to another decoder is: never feed a decoder with information that stems from itself (because the input and output corruption will be highly correlated). For the Gaussian channel, we use the natural logarithm in Eq. (34) to describe the channel LLR, Lc(xk), as was done in Eqs. (11) and (12). We rewrite the Eq. (12) LLR result below:

Lc(xk) = (2/σ²) xk    (35)

Both decoders, DEC1 and DEC2, use the modified Bahl algorithm [7]. If the inputs L1(d̂k) and y2k to decoder DEC2 are independent, then the log-likelihood L2(d̂k) at the output of DEC2 can be written as

L2(d̂k) = f[L1(d̂k)] + Le2(d̂k)    (36)

with

L1(d̂k) = (2/σ²) xk + Le1(d̂k)    (37)

where f[·] indicates a functional relationship. The extrinsic information Le2(d̂k) out of DEC2 is a function of the sequence {L1(d̂n)}, n ≠ k. Since L1(d̂n) depends on the observation R1_N, the extrinsic information Le2(d̂k) is correlated with the observations xk and yk. Nevertheless, the greater |n - k| is, the less correlated are L1(d̂n) and the observations xk, yk. Thus, due to the interleaving between DEC1 and DEC2, the extrinsic information Le2(d̂k) and the observations xk, y1k are weakly correlated. Therefore, they can be jointly used for the decoding of bit dk, and Le2(d̂k) acts as a diversity effect. In general, Le2(d̂k) will have the same sign as dk. Therefore, Le2(d̂k) may improve the LLR associated with each decoded data bit. The algorithmic details for computing the LLR, L(d̂k), of the a posteriori probability (APP) for each bit have been described by several authors [3, 4, 6], and suggestions for decreasing the implementational complexity are still ongoing [14-16]. A reasonable way to think of the process that produces APP values for each bit is to imagine computing a maximum likelihood sequence estimation or Viterbi algorithm (VA) in two directions over the block of coded bits. Having metrics associated with states in the forward and backward directions allows us to compute the APP of a bit if there is a transition between two given states. We can proceed with this bidirectional VA and, in a sliding-window fashion, compute the APP for each code bit in the block. Thus, the complexity of decoding a turbo code is estimated to be at least two times greater than that of decoding one of its component codes using the VA.


ERROR-PERFORMANCE EXAMPLE
Monte Carlo simulations have been presented for a rate 1/2, K = 5 encoder with generators G1 = {11111} and G2 = {10001}, and parallel concatenation. The interleaver was a 256 x 256 array. The modified Bahl algorithm has been used with a data block length of 65,536 bits [3]. For 18 iterations, the bit-error probability PB is lower than 10^-5 at Eb/N0 = 0.7 dB. Note that, as we approach the Shannon limit of -1.6 dB, the required bandwidth approaches infinity, and the capacity (code rate) approaches zero. For binary modulation, several authors use PB = 10^-5 and Eb/N0 = 0 dB as the Shannon limit reference for a rate 1/2 code. Thus, with parallel concatenation of RSC convolutional codes and feedback decoding, the error performance of a turbo code is at 0.7 dB from the Shannon limit. Recently, a class of codes that uses serial instead of parallel concatenation of the interleaved building blocks has been proposed. It has been suggested that these codes may have superior performance to turbo codes [15].


SUMMARY In this paper, we used the measures of a posteriori probability and likelihood for evaluating the error performance of a soft input/soft output decoder. A numerical example helped to illustrate how performance is improved when soft outputs from concatenated decoders are used in an iterative decoding process. We next proceeded to apply these concepts to the parallel concatenation of recursive systematic convolutional (RSC) codes, and explained why such codes are the preferred building blocks in turbo codes. A feedback decoder was described in general ways, and its remarkable performance was presented.

REFERENCES
[1] G. D. Forney, Jr., Concatenated Codes, Cambridge, MA: M.I.T. Press, 1966.

[2] J. H. Yuen et al., "Modulation and Coding for Satellite and Space Communications," Proceedings of the IEEE, vol. 78, no. 7, July 1990, pp. 1250-1265.
[3] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," IEEE Proc. of Int. Conf. on Communications, Geneva, May 1993, pp. 1064-1070.
[4] C. Berrou and A. Glavieux, "Near Optimum Error Correcting Coding and Decoding: Turbo-Codes," IEEE Trans. on Communications, vol. 44, no. 10, October 1996, pp. 1261-1271.
[5] B. Sklar, Digital Communications: Fundamentals and Applications, Appendix B, Englewood Cliffs, NJ: Prentice Hall, 1988.
[6] J. Hagenauer, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. on Information Theory, vol. 42, no. 2, March 1996, pp. 429-445.
[7] D. Divsalar and F. Pollara, "On the Design of Turbo Codes," TDA Progress Report 42-123, Jet Propulsion Laboratory, Pasadena, California, November 15, 1995, pp. 99-121.
[8] D. Divsalar and R. J. McEliece, "Effective Free Distance of Turbo Codes," Electronics Letters, vol. 32, no. 5, Feb. 29, 1996, pp. 445-446.
[9] S. Dolinar and D. Divsalar, "Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations," TDA Progress Report 42-122, Jet Propulsion Laboratory, Pasadena, California, Aug. 15, 1995, pp. 56-65.
[10] D. Divsalar and F. Pollara, "Turbo Codes for Deep-Space Communications," TDA Progress Report 42-120, Jet Propulsion Laboratory, Pasadena, California, Feb. 15, 1995, pp. 29-39.
[11] D. Divsalar and F. Pollara, "Multiple Turbo Codes for Deep-Space Communications," TDA Progress Report 42-121, Jet Propulsion Laboratory, Pasadena, California, May 15, 1995, pp. 66-77.
[12] D. Divsalar and F. Pollara, "Turbo Codes for PCS Applications," Proc. ICC '95, Seattle, Washington, June 18-22, 1995.
[13] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. on Information Theory, vol. IT-20, March 1974, pp. 284-287.
[14] S. Benedetto et al., "Soft Output Decoding Algorithm in Iterative Decoding of Turbo Codes," TDA Progress Report 42-124, Jet Propulsion Laboratory, Pasadena, California, Feb. 15, 1996, pp. 63-87.
[15] S. Benedetto et al., "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes," TDA Progress Report 42-127, Jet Propulsion Laboratory, Pasadena, California, November 15, 1996, pp. 63-87.
[16] S. Benedetto et al., "A Soft-Input Soft-Output APP Module for Iterative Decoding of Concatenated Codes," IEEE Communications Letters, vol. 1, no. 1, January 1997, pp. 22-24.

