H_sys = [ I | P^T ] =
    [ 1 0 0 0 | 0 1 1 1 ]
    [ 0 1 0 0 | 1 1 1 0 ]
    [ 0 0 1 0 | 1 1 0 1 ]
    [ 0 0 0 1 | 1 0 1 1 ]          (1)

G_sys = [ P | I ] =
    [ 0 1 1 1 | 1 0 0 0 ]
    [ 1 1 1 0 | 0 1 0 0 ]
    [ 1 1 0 1 | 0 0 1 0 ]
    [ 1 0 1 1 | 0 0 0 1 ]          (2)
To find the minimum distance analytically, we use the property that the minimum distance of a binary linear code equals the smallest number of columns of the parity-check matrix H that sum to zero (see Corollary 3.2.2 in the book). Since H_sys contains no all-zero column and no two identical columns, no set of one or two columns sums to zero, and hence d_min > 2.
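As a numerical cross-check (not part of the original solution), the minimum distance can also be found by brute force: for a linear code, d_min equals the minimum weight of a non-zero codeword, and all 2^k codewords are generated from G_sys in equation (2).

```python
from itertools import product

# Generator matrix G_sys = [P | I] of the (8, 4) code, equation (2).
G = [
    [0, 1, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 0, 1],
]

def codewords(G):
    """Yield all 2^k codewords v = uG over GF(2)."""
    k, n = len(G), len(G[0])
    for u in product([0, 1], repeat=k):
        yield tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))

# Minimum distance = minimum weight of a non-zero codeword (linearity).
d_min = min(sum(v) for v in codewords(G) if any(v))
print(d_min)  # 4
```

The brute-force result, d_min = 4, is consistent with the column argument above (which only establishes d_min > 2).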
3.2 The general structure of a systematic encoder for a linear (n, k) block code is shown in Figure 3.2 in the book. Based on the parity check equations from the previous problem, the systematic encoder for the (8, 4) code has the structure shown in Figure 1.
3.3 The syndrome vector is obtained from the received sequence r = [r0 r1 ... r7] using the parity check matrix of the code:

s = [s0 s1 s2 s3] = rH^T.
Error Control Coding week 1: Binary field algebra and linear block codes
[Figure 1: Systematic encoder for the (8, 4) code; information inputs u0, u1, u2, u3 and parity outputs v0, v1, v2, v3.]

The syndrome components are computed as

s0 = r0 + r5 + r6 + r7
s1 = r1 + r4 + r5 + r6
s2 = r2 + r4 + r5 + r7
s3 = r3 + r4 + r6 + r7 .
Using these equations, the syndrome circuit is easily constructed following the general structure, shown in Figure 3.4 in the book, for an arbitrary linear (n, k) block code.
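The syndrome equations can be verified numerically. A minimal sketch, assuming the parity check matrix H_sys from equation (1): s_j is the inner product (mod 2) of r with row j of H.

```python
# Parity check matrix H_sys = [I | P^T] of the (8, 4) code, equation (1).
H = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0, 1, 1],
]

def syndrome(r):
    """s = r H^T over GF(2): s_j = inner product of r with row j of H."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

# A codeword (here a row of G_sys) yields the all-zero syndrome ...
assert syndrome([0, 1, 1, 1, 1, 0, 0, 0]) == [0, 0, 0, 0]

# ... while a single-bit error in position 4 yields column 4 of H.
r = [0, 1, 1, 1, 0, 0, 0, 0]  # the same codeword with bit r4 flipped
s = syndrome(r)
print(s)  # [0, 1, 1, 1]
```

Each s_j computed this way matches the corresponding equation above (e.g. s0 = r0 + r5 + r6 + r7).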
3.4 (a) The parity check matrix H of a linear (n, k) code C has dimensions (n - k) × n and its rank is n - k (all rows are linearly independent). The matrix H1 has dimensions (n - k + 1) × (n + 1). Since the n - k rows of H are linearly independent, then, after adding a leading 0, these rows (the first n - k rows of H1) are still independent. The last row of H1 begins with a 1, while the others begin with a 0. Hence, we conclude that all rows of H1 are independent, and the rank of H1 is exactly n - k + 1. Therefore, H1 is a parity check matrix of a code C1 whose dimension is the dimension of the null space of H1, that is, dim(C1) = (n + 1) - (n - k + 1) = k.
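The rank argument can be checked numerically for the (8, 4) code of equation (1). The sketch below (the helper names gf2_rank and bits are mine) builds H1 by prepending a zero column to H and appending the all-one row, then computes GF(2) ranks with a greedy XOR basis over integer bitmasks.

```python
def gf2_rank(rows):
    """Rank over GF(2): greedy XOR basis on rows encoded as integer bitmasks."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)  # reduce by existing basis vectors
        if row:
            basis.append(row)
    return len(basis)

def bits(row):
    """Encode a 0/1 list as an integer bitmask (leftmost bit = highest)."""
    return int("".join(map(str, row)), 2)

# H of the (8, 4) code, equation (1); here n = 8, k = 4.
H = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0, 1, 1],
]
n, k = 8, 4

# H1: a leading 0 on each row of H, plus the all-one row of length n + 1.
H1 = [bits([0] + row) for row in H] + [bits([1] * (n + 1))]

rank_H = gf2_rank(bits(r) for r in H)
rank_H1 = gf2_rank(H1)
print(rank_H, rank_H1)  # 4 5, i.e. n - k and n - k + 1
```

As the proof predicts, rank(H1) = rank(H) + 1 = n - k + 1.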
(b)+(c) Extending the parity-check matrix H with a zero column on the left and adding an all-one row at the bottom is equivalent to adding the digit v to the left of each codeword of the original code C; this digit is involved only in the last parity check equation, specified by the all-one row, that is,

v + v0 + v1 + ... + v_{n-1} = 0.

This parity check equation implies that the weight, that is, the total number of ones in each codeword [v v0 v1 ... v_{n-1}] of the extended code C1, must be even. From the above parity check equation, it follows that the added digit v equals

v = Σ_{i=0}^{n-1} v_i .
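A small sketch of this extension step (the helper name extend is mine): prepending the overall parity digit forces every extended word to have even weight, regardless of the original word.

```python
def extend(codeword):
    """Prepend the overall parity digit v = v0 + ... + v_{n-1} (mod 2)."""
    v = sum(codeword) % 2
    return [v] + list(codeword)

# Every extended word has even weight, whatever the input word is.
samples = [[1, 0, 1], [1, 1, 1, 0, 1], [0, 0, 0, 0]]
for w in samples:
    assert sum(extend(w)) % 2 == 0

print(extend([1, 1, 1, 0, 1]))  # [0, 1, 1, 1, 0, 1]
```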
3.6 (a) The given condition on G ensures that, for any symbol position i, 0 ≤ i < n, there is a row in G with a non-zero symbol in that position. Since the rows of the generator matrix are codewords of the code C, we conclude that in the code array each column must contain at least one non-zero entry. Therefore, there are no all-zero columns in the code array.
(b) Consider an arbitrary ith column of the code array, 0 ≤ i < n, and let S0 and S1 be the sets of codewords that have a 0 and a 1 in the ith position, respectively. From part (a) it follows that there is at least one codeword with a 1 in the ith position, that is, |S1| ≥ 1. Now we follow an approach similar to the solution of Problem 3.5.

Let v be a codeword from S1. Then, if we add v to each codeword from S1, we obtain a set S0' of codewords with a 0 in the ith position, and |S0'| = |S1|. Also, it holds that S0' ⊆ S0, and thus

|S1| = |S0'| ≤ |S0|.          (5)

By the same argument, adding v to each codeword from S0 yields codewords with a 1 in the ith position, so

|S0| ≤ |S1|.          (6)

Combining (5) and (6) gives |S0| = |S1| = 2^{k-1}, that is, each column of the code array contains exactly 2^{k-1} zeros and 2^{k-1} ones.
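The column balance can be confirmed numerically for the (8, 4) code with the generator matrix of equation (2): each of the n columns of the code array should contain exactly 2^{k-1} = 8 ones.

```python
from itertools import product

# Generator matrix G_sys of the (8, 4) code, equation (2).
G = [
    [0, 1, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 0, 1],
]
k, n = 4, 8

# Count the ones in each column of the 2^k x n code array.
counts = [0] * n
for u in product([0, 1], repeat=k):
    v = [sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
    for j in range(n):
        counts[j] += v[j]

print(counts)  # [8, 8, 8, 8, 8, 8, 8, 8], i.e. 2^(k-1) ones per column
```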
The probability of undetected error is

P_u(E) = Σ_{i=1}^{n} A_i p^i (1 - p)^{n-i},

where p is the crossover probability of the BSC. For the (8, 4) code we have

P_u(E) = A_4 p^4 (1 - p)^{8-4} + A_8 p^8 (1 - p)^{8-8} = 14 p^4 (1 - p)^4 + p^8,

which, for p = 10^{-2} = 0.01, yields

P_u(E) = 14 · 10^{-8} · 0.99^4 + 10^{-16} ≈ 1.3448 · 10^{-7}.
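The numbers above can be reproduced programmatically. A minimal sketch, assuming the generator matrix G_sys from equation (2): enumerate all 2^k codewords to obtain the weight enumerator A_i, then evaluate P_u(E).

```python
from itertools import product

# Generator matrix G_sys of the (8, 4) code, equation (2).
G = [
    [0, 1, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 0, 1],
]
n, k = 8, 4

# Weight enumerator: A[i] = number of codewords of weight i.
A = [0] * (n + 1)
for u in product([0, 1], repeat=k):
    v = [sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
    A[sum(v)] += 1

# P_u(E) = sum_{i=1}^{n} A_i p^i (1-p)^(n-i) on a BSC with crossover p.
p = 0.01
Pu = sum(A[i] * p**i * (1 - p)**(n - i) for i in range(1, n + 1))
print(A)   # [1, 0, 0, 0, 14, 0, 0, 0, 1]
print(Pu)  # ~1.3448e-07
```

The enumeration confirms A_4 = 14 and A_8 = 1, and the computed P_u(E) agrees with the hand calculation.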
3.10 The optimum (maximum likelihood) decoder for the BSC is the minimum-distance decoder, which can be realized as a standard array decoder or, more efficiently, as a syndrome decoder. For the (8, 4) code, there are 2^{n-k} = 2^4 = 16
such sequences. On the other hand, there are 2^{n-k} different cosets, and this number cannot be smaller than the number of coset leaders (correctable error patterns). Thus we have
2^{n-k} ≥ C(n, 0) + C(n, 1) + ... + C(n, t),

where C(n, i) denotes the binomial coefficient "n choose i".
Taking the base-2 logarithm of both sides of the above inequality yields the Hamming bound, which completes the proof.
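As a quick illustration of the inequality before taking logarithms (the helper name hamming_bound_ok is mine): the (8, 4) code with d_min = 4 corrects t = 1 error, and indeed 2^{n-k} = 16 ≥ C(8, 0) + C(8, 1) = 9, whereas t = 2 would violate the bound.

```python
from math import comb

def hamming_bound_ok(n, k, t):
    """Check the Hamming bound 2^(n-k) >= sum_{i=0}^{t} C(n, i)."""
    return 2 ** (n - k) >= sum(comb(n, i) for i in range(t + 1))

print(hamming_bound_ok(8, 4, 1))  # True:  16 >= 1 + 8
print(hamming_bound_ok(8, 4, 2))  # False: 16 <  1 + 8 + 28
```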
3.16 Plotkin bound: The minimum distance of an (n, k) linear code satisfies

d_min ≤ n · 2^{k-1} / (2^k - 1).

Proof: Consider the 2^k × n code array whose rows are the codewords of the (n, k) code C. Each column of this array contains exactly 2^{k-1} zeros and 2^{k-1} ones (see Problem 3.6). Hence, the total number of ones in the code array is n · 2^{k-1}. On the other hand, each of the 2^k - 1 non-zero codewords has weight of at least d_min. Hence, the total number of ones in the code array is at least (2^k - 1) · d_min. By combining these two results we obtain

n · 2^{k-1} ≥ (2^k - 1) · d_min,

which yields the Plotkin bound

d_min ≤ n · 2^{k-1} / (2^k - 1).
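Evaluating the bound for the (8, 4) code of this problem set (the helper name plotkin_bound is mine): the bound gives d_min ≤ 64/15 ≈ 4.27, so d_min ≤ 4, which the code actually attains.

```python
def plotkin_bound(n, k):
    """Plotkin bound: d_min <= n * 2^(k-1) / (2^k - 1)."""
    return n * 2 ** (k - 1) / (2 ** k - 1)

b = plotkin_bound(8, 4)
print(b)       # 64/15 = 4.266...
print(int(b))  # 4, matching the actual d_min of the (8, 4) code
```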
3.17 Theorem: There exists an (n, k) linear code with a minimum distance of at least d if

Σ_{i=1}^{d-1} C(n, i) < 2^{n-k}.

Proof: The number of non-zero vectors of length n and weight d - 1 or less is

Σ_{i=1}^{d-1} C(n, i).

From the result of Problem 3.11, each of these vectors is contained in 2^{(k-1)(n-k)} linear codes. Therefore, there are at most

M = 2^{(k-1)(n-k)} Σ_{i=1}^{d-1} C(n, i)

linear codes that contain nonzero codewords of weight d - 1 or less.
On the other hand, there are N = 2^{k(n-k)} binary linear codes. If M < N, there exists at least one code with minimum codeword weight at least d. The condition M < N is equivalent to

2^{(k-1)(n-k)} Σ_{i=1}^{d-1} C(n, i) < 2^{k(n-k)},

which yields

Σ_{i=1}^{d-1} C(n, i) < 2^{n-k}.          (7)
Σ_{i=1}^{d_min - 1} C(n, i) < 2^{n-k} ≤ Σ_{i=1}^{d_min} C(n, i).
According to Problem 3.17, if the left part of the above inequality is fulfilled, there exists a linear code of minimum distance at least d_min.
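The largest d_min guaranteed by this condition can be computed directly. A sketch (the helper name guaranteed_dmin is mine): increase d until the partial binomial sum reaches 2^{n-k}.

```python
from math import comb

def guaranteed_dmin(n, k):
    """Largest d with sum_{i=1}^{d-1} C(n,i) < 2^(n-k) <= sum_{i=1}^{d} C(n,i)."""
    d = 1
    while sum(comb(n, i) for i in range(1, d + 1)) < 2 ** (n - k):
        d += 1
    return d

# For n = 8, k = 4: C(8,1) = 8 < 16 <= C(8,1) + C(8,2) = 36, so d_min >= 2
# is guaranteed. The bound is not tight: the actual (8, 4) code has d_min = 4.
print(guaranteed_dmin(8, 4))  # 2
```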
3.20 A codeword of an (n, n - 1) single parity check code consists of n - 1 = k information symbols followed by an overall parity bit v_{n-1} = p = Σ_{i=0}^{k-1} u_i. The encoder can be realized using a single memory element, as shown in Figure 2. The memory element is initially in the zero state. During encoding it contains the current parity of the information sequence. After k clocks, the overall parity bit p appears at the output of the memory element and is sent after the information block.
[Figure 2: Encoder for the (n, n - 1) single parity check (SPC) code; input u = u0 u1 ... u_{k-1}, output parity bit p.]
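The serial encoder of Figure 2 can be sketched in a few lines; the single state variable plays the role of the memory element, accumulating the running parity while the information bits pass straight through.

```python
def spc_encode(u):
    """Serial SPC encoder: one memory element accumulates the parity."""
    state = 0             # memory element, initially in the zero state
    out = []
    for bit in u:
        out.append(bit)   # information bits are sent unchanged
        state ^= bit      # memory holds the current parity of the sequence
    out.append(state)     # after k clocks, the overall parity bit p is sent
    return out

print(spc_encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 1]
```

Every output block has even overall parity, as required of an SPC codeword.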