
PARITY CHECK MATRICES OF
CONVOLUTIONAL CODES OVER RINGS

HERBERT SALUDES PALINES

SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF THE PHILIPPINES LOS BAÑOS
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR
THE DEGREE OF MASTER OF SCIENCE
(Mathematics)

June 2011
BIOGRAPHICAL SKETCH
Herbert S. Palines was born on March 8, 1984, the second eldest child of spouses Thelma DA. Saludes and Peter D. Palines. He attended his primary education at Paaralang Elementarya ng Lucban in Quezon. He got his secondary education at Southern Luzon Polytechnic College Laboratory High School, also located in Lucban, Quezon, and graduated as salutatorian. With his love for learning mathematics, he continued his college education at the University of the Philippines Los Baños with the degree of Bachelor of Science in Mathematics. After his graduation, he became an instructor of mathematics. It is his hope that teaching and learning mathematics can help solve problems in our lives.
HERBERT SALUDES PALINES
ACKNOWLEDGEMENTS
I would like to extend my gratitude to the following:
Math Division, for supporting me to go on study leave.
Graduate School of UPLB and DOST, for giving the thesis grant.
Coding Theory Cluster of IMSP, especially to Prof. Cerezo, for the helpful con-
versations on MAGMA routines.
Members of my guidance committee, for their time, especially to Dr. Cuaresma,
for her valuable inputs in this manuscript.
My adviser, Dr. Sison, for his guidance and patience that enabled me to develop
an understanding of my thesis.
My family and friends, for their immeasurable support.
My wife, Kareen, for her love and understanding.
Above all, to Almighty God, who made all things possible.
TABLE OF CONTENTS

Title Page
Approval Page
Biographical Sketch
Acknowledgements
List of Tables
List of Figures
Abstract

1 INTRODUCTION
  1.1 Coding Theory
  1.2 Applications
  1.3 Objectives of the study
  1.4 Synopsis

2 THEORETICAL BACKGROUND
  2.1 Groups, Rings and Fields
  2.2 Modules and Vector Spaces
  2.3 Matrices
  2.4 Linear Block Codes
  2.5 Summary

3 CONVOLUTIONAL CODES OVER RINGS
  3.1 Definition of a Convolutional Code
  3.2 Structural Properties of Convolutional Encoders
  3.3 Estimating Free Distance
  3.4 Summary

4 THE PARITY CHECK MATRIX OF A CONVOLUTIONAL CODE
  4.1 Definition of a Parity Check Matrix
  4.2 Deriving a Parity Check Matrix
    4.2.1 A parity check matrix from a systematic encoder
    4.2.2 A parity check matrix from a square polynomial matrix with polynomial inverse (PMPI)
    4.2.3 A parity check matrix from the subdeterminants of a generator matrix
  4.3 Summary

5 NEW EXAMPLES OF ENCODERS FOR SELF-DUAL CONVOLUTIONAL CODES
  5.1 Self-Dual Convolutional Codes
  5.2 The Algorithm for Constructing the Examples
    5.2.1 A 4 × 8 minimal-basic PGM over Z_2(D)
    5.2.2 A 4 × 8 systematic PGM over Z_4(D)
  5.3 Summary

6 STRUCTURAL PROPERTIES OF PARITY CHECK MATRICES
  6.1 Connections Between Encoders
    6.1.1 On subdeterminants
    6.1.2 On constraint lengths
  6.2 Structural Properties of a Parity Check Matrix From a Generator Matrix over Z_{p^r}(D)
  6.3 Summary

7 LOW-DENSITY CONVOLUTIONAL (LDC) CODES
  7.1 Definition of LDC codes
  7.2 Construction of an LDC code
  7.3 Encoder of an LDC code
  7.4 Summary

SUMMARY AND RECOMMENDATIONS

REFERENCES

A MAGMA PROGRAMS
  A.1 Computing for the subdeterminants of a generator matrix
  A.2 Checking basicity of a PGM
  A.3 Checking reducedness of a PGM
  A.4 Checking predictable degree property of a PGM
  A.5 An estimation of the free distance of a code
  A.6 For the new examples of encoders of self-dual convolutional codes

B LIST OF SYMBOLS
LIST OF TABLES

2.1 The Lee weight function on Z_4
LIST OF FIGURES

7.1 Illustration of the Jimenez-Zigangirov method for constructing the syndrome former of a semi-homogeneous LDC (2, 2.5, 5)-code
ABSTRACT

PALINES, HERBERT S. University of the Philippines Los Baños, June 2011. Parity Check Matrices of Convolutional Codes Over Rings

Major Professor: Virgilio P. Sison

We extend the block code case of deriving a parity check matrix from a systematic generator matrix to convolutional codes over rings. We show that if G(D) = (I, A) is a k × n generator matrix of a convolutional code C over a ring R where n = 2k (i.e. I, A ∈ R(D)^{k×k}), A is invertible over R(D) and A^{-1} = −A^t, then the parity check matrix H(D) = (−A^t, I) for C is equivalent to G(D) and C is self-dual. New examples of encoders of self-dual convolutional codes over Z_2 and Z_4 are constructed. In the Z_{p^r} case, we show that the systematicity of the generator matrix implies the systematicity, right invertibility, basicity, noncatastrophicity, and minimality of the parity check matrix. We consider specific conditions under which a parity check matrix and a generator matrix are both minimal-basic.

By completing a basic encoder G(D) into a square polynomial matrix with polynomial inverse (PMPI) B, a basic parity check matrix H(D) is taken from B^{-1}. This construction is obtained by proving that an encoder G(D) over Z_{p^r}[D] is basic if and only if it is a submatrix of a square PMPI matrix over Z_{p^r}[D]. We show that the minors of G(D) and H(D) are equal up to units in Z_{p^r}[D]. It is also proved that each i-th constraint length of H(D) is bounded above by the sum of the row degrees of B.

From a given (n−1) × n encoder G(D), we derive a 1 × n parity check matrix H(D). We prove that if G(D) is basic, then H(D) is basic. Given a polynomial generator matrix G(D), we show that the overall constraint length of H(D) is equal to the maximum degree of the minors of G(D).

Finally, a thorough discussion of the classical theory of low-density convolutional (LDC) codes is given. For time-invariant LDC codes, it has been observed that an encoder of a code can be obtained from the syndrome former H(D)^t of the code if the parity check matrix H(D) is systematic and basic.
Chapter 1
INTRODUCTION
In this chapter, we introduce a brief background of coding theory and its applications.
Then, we state the objectives of the study and discuss the synopsis of this material.
1.1 Coding Theory
Coding, in its broadest sense, is the transformation of information from one form to
another. On the other hand, the term coding theory pertains to a special kind of
coding that allows error detection and correction. According to the paper by Massey
[14], coding theory was initially a special tool in telegraphy and its development was
slow. It became a compact theory as a consequence of the Channel Coding Theorem
by Claude E. Shannon (1916-2001) as discussed in his 1948 paper entitled, A Math-
ematical Theory of Communication [20]. Because of this breakthrough, information
theory and its branch coding theory started to grow rapidly. It was further developed
by the work of R. W. Hamming who was stimulated by his colleagues (Shannon)
discovery. Hamming then discovered the rst general class of error correcting codes
(ECC). Originally, ECCs came about in response to practical problems in reliable
communication. The main idea in information theory is that every information is in
its digital form. Now, the given problem is that when the information is produced
2
from a source and it passed through a noisy channel, it must be possible that the
original information can be recovered.
Coding theorists found out that in the construction of ECCs, sophisticated math-
ematics is required. At present, coding theory in the form of algebraic coding is also
a mathematical subject.
One of the oldest forms of coding for error control is the addition of parity check
bits to an information string. In a digital communication system, it is in source
coding where the original message is divided into sequences and then transformed in
its digital form using the symbols from a suitable alphabet, usually a ring. What
follows is channel coding or simply encoding where adding the parity check bits or
redundancy digits occurs. In this process, the message becomes a code. Then the
code is ready to combat noise in the channel.
The fundamental challenge in coding theory is to nd good codes with both rea-
sonable information content and error handling ability. It is being done since it is
known that a message can be corrupted when it passes through a noisy channel or
it can be distorted when stored on a device with unreliable memory. To put it sim-
ply, there exists no perfect channel. By Shannons channel coding theorem, nding
good codes became possible. In other words, the theorem implies that an arbitrarily
reliable communication is possible.
1.2 Applications
It has been said that the main motivation of coding theory comes from its practical
engineering applications, particularly in digital telecommunications. Coding is used
in compact discs and in Wi-Fi and Wi-MAX technologies, to mention some.

Philips invented the Compact Disc Digital Audio system, which uses the so-called Reed-Solomon codes that keep CDs playable, to some extent, even after scratches, cracks or other similar damage.

The International Telecommunication Union (ITU) adopted Wi-Fi and Wi-MAX as International Mobile Telecommunications-2000 (IMT-2000) technologies due to the increasing demand for low-complexity, higher-performance systems. Wi-Fi and Wi-MAX technologies are major global cellular wireless standards. IEEE 802.16e standards use low-density parity check (LDPC) codes based on single frequency networks (SFN). LDPC codes, with the help of powerful high-frequency processors, are error-correcting codes that allow transmission at rates in close proximity to Shannon's limit for large block lengths. Gupta and Virmani [8] described various LDPC codes for Wi-Fi and Wi-MAX technologies.
1.3 Objectives of the study
This study aims to do the following:
1. Prove additional duality properties for convolutional codes over rings.
2. Construct new examples of convolutional codes over rings with desirable parity
check matrices.
3. Derive the induced structural properties of parity check matrices of convolu-
tional codes over rings from the structural properties of the generator matrices.
4. Observe the effect of using a low-density parity check matrix on the encoder of the code.
1.4 Synopsis
In Chapter 2, we discuss the theoretical concepts needed in this thesis. We introduce
convolutional codes over rings in Chapter 3. Particularly, in Section 3.1, we describe
convolutional encoding and highlight its connection to the ring of Laurent series.
What follows is a definition of a convolutional code C over a ring R and a definition of a generator matrix over the ring of rational functions R(D). In Section 3.2, we
focus on the structural properties of convolutional encoders and consider polynomial
generator matrices (PGMs). The free distance is an important parameter for the goodness of a code. Hence, a discussion on estimating the free distance, based on the truncation method by Sison [22], is given in Section 3.3.
The main results in this thesis are found in Chapters 4 to 7. To guide the reader through these chapters, the definitions, examples, theorems, lemmas or corollaries without names after them are formulated by the authors. Moreover, some theorems/lemmas/corollaries in these chapters are classic results; however, we provide alternative proofs for them. In particular, we give our own proofs of Lemma 4.2, Corollary 6.1, Corollary 6.2, Corollary 6.3 and Theorem 6.3. In the case of Theorem 6.5, we provide the details of its proof.
In Chapter 4, we discuss parity check matrices of a convolutional code. In Section 4.1, we give the definition of a parity check matrix and a sufficient condition for its existence. The main method in this thesis, which is on deriving a parity check matrix from a generator matrix of a code, is discussed in Section 4.2. Specifically, in Section 4.2.1, a parity check matrix is derived from a systematic generator matrix (CI); in Section 4.2.2, we complete a k × n basic encoder over Z_{p^r}[D] into an n × n polynomial matrix with polynomial inverse (PMPI) matrix B over Z_{p^r}[D], then obtain a (n−k) × n parity check matrix from the last (n−k) columns of B^{-1} (CII); and in Section 4.2.3, a parity check matrix H(D) is taken from the (n−1) × (n−1) subdeterminants of a (n−1) × n encoder G(D) (CIII). Note that CII is attained by showing that a k × n encoder G(D) over Z_{p^r}[D] is basic if and only if G(D) is a submatrix of an n × n PMPI matrix over Z_{p^r}[D]. The given constructions CI, CII and CIII are used repeatedly in the succeeding chapters. CI is utilized in Sections 5.2, 5.2.1, 5.2.2, 6.1.2, 6.1.2 and 7.3, while CII and CIII are used in Sections 6, 6.1.2, 6.1.2 and 7.3.
In Chapter 5, we consider the dual of convolutional codes. Note that there are two notions of duality for convolutional codes (see [23]). In our case, we define the dual of a convolutional code in Section 5.1 as an analog of the dual of a linear block code. Based on this definition and using CI, we show that if G(D) = (I, A) is a k × n generator matrix of a convolutional code C over a ring R where n = 2k (i.e. I, A ∈ R(D)^{k×k}), A is invertible over R(D) and A^{-1} = −A^t, then the parity check matrix H(D) = (−A^t, I) for C is equivalent to G(D) and C is self-dual. Moreover, we construct new examples of 4 × 8 encoders for self-dual convolutional codes over Z_2 and Z_4 (see Sections 5.2.1 and 5.2.2, respectively). The said encoder over Z_2(D) is minimal-basic, while the encoder over Z_4(D) is a systematic PGM. In Section 5.2, the algorithm for constructing these examples is given. We created a MAGMA program to construct these examples (see Appendix A.6). Since MAGMA does not have an intrinsic function that generates the ring of rational functions Z_4(D), the constructed encoder over Z_4(D) is limited to polynomial entries.
A parity check matrix H(D) of a code C is an encoder of the dual code C^⊥. So, in Chapter 6, we study the structural properties of H(D) and consider polynomial parity check matrices (PPCMs). Initially, in Section 6, we discuss connections between equivalent encoders of a code in terms of their subdeterminants and constraint lengths. In Section 6, we show in the ring case that the k × k subdeterminants of two equivalent generator matrices are equal up to units in R(D). Consequently, we prove that the k × k minors of two equivalent basic generator matrices are equal up to units in R[D]. Thus, in the field case, it is clear that the k × k minors of two equivalent basic generator matrices are equal up to nonzero elements in the field, verifying that μ, the maximum degree among the k × k minors of a PGM, is invariant over all equivalent basic encoders of convolutional codes over fields. Interestingly, we can extend this idea to basic PGMs and PPCMs of a code. We show that if G(D) and H(D) are both basic encoders over Z_{p^r}[D], then the (n−k) × (n−k) minors of H(D) are equal to the k × k minors of G(D), up to units in Z_{p^r}[D]. In addition, we show that if G(D) and H(D) are submatrices of unimodular matrices B and B^{-1} over Z_{p^r}[D], that is, det(B) and det(B^{-1}) are units in Z_{p^r}, then μ_G = μ_H, where μ_G and μ_H are the maximum degrees among the k × k minors of G(D) and H(D), respectively. In Section 6.1.2, we determine the connections of encoders in terms of their constraint lengths. In the field case, the constraint lengths of equivalent minimal-basic encoders are equal, except possibly for their order. As a result, we show that the overall constraint length is invariant among minimal-basic PGMs and PPCMs of a code. In the Z_{p^r} case, we show that if a k × n PGM G(D) and a (n−k) × n PPCM H(D) are both basic such that they are submatrices of n × n PMPI matrices B and B^{-1} (see CII), respectively, then the i-th constraint lengths of H(D) are bounded above by the sum of the row degrees of B. In the context of CIII, it is immediate that the overall constraint length ν_H of the 1 × n parity check matrix H(D) is exactly μ_G. In Section 6.1.2, we focus on parity check matrices over Z_{p^r}(D) and study their structural properties. We show that the systematicity of the encoder G(D) over Z_{p^r}[D] causes the parity check matrix H(D) (derived from G(D) using CI) to be systematic, right invertible, basic, noncatastrophic and minimal. Also, we consider a k × n (n = 2k) PGM G(D) = (I_k, A) and a PPCM H(D) = (−A^t, I_k) of a code over Z_{p^r}(D) and give conditions where G(D) and H(D) are both minimal-basic.
Chapter 7 is a thorough exposition of the theory of low-density parity check convolutional codes (LDPC-CCs), or low-density convolutional (LDC) codes. In Section 7.1, the definition of an LDC code is introduced. A specific construction of an LDC code using the Jimenez-Zigangirov method is given in Section 7.2. In Section 7.3, we focus on time-invariant binary LDC codes, which are defined by an n × (n−k) polynomial syndrome former H(D)^t, the transpose of a PPCM H(D). Note that an LDC code, say C̃, is completely determined by its sparse syndrome former H^t (in semi-infinite matrix form). Nevertheless, an encoder G of C̃ can be described by H^t through

v_j H_0^t + v_{j-1} H_1^t + \cdots + v_{j-m_s} H_{m_s}^t = 0_{1 \times (n-k)},    (1.1)

where v = v_0 v_1 v_2 . . . v_j . . . is a causal codeword of C̃ and H_0^t, H_1^t, . . . , H_{m_s}^t are n × (n−k) submatrices of H^t over Z_2. In particular, we observe the connections between the syndrome former H(D)^t and an encoder G(D) of C̃ where H(D) is systematic and basic. As a consequence of CI, if H(D) is systematic, then we can derive a systematic encoder G(D). We illustrate this through an example. Moreover, we verify that the first k components of every code block v_j, encoded by G at time j, coincide with the information block u_j, and the last (n−k) components of v_j are defined by (1.1). In this situation, the memory of the syndrome former H(D)^t is equal to the memory of G(D). If H(D) is minimal-basic with overall constraint length ν_H, then C̃ can be encoded by a minimal-basic encoder with overall constraint length ν_H. Finally, we show that if the n × (n−k) syndrome former H(D)^t is basic, then using CII, a basic k × n encoder G(D) can be obtained from an n × n PMPI matrix. It is shown that if H(D)^t is a submatrix of an n × n PMPI matrix B′, then the i-th constraint lengths of G(D) are bounded above by the column degrees of B′ (or the row degrees of (B′)^t).
We constructed MAGMA programs that helped us in the analysis of certain problems in this thesis; they are found in Appendix A. The subroutine given in Appendix A.1 is for computing the k × k subdeterminants of a k × n matrix, 1 ≤ k ≤ 4. The subroutines in Appendix A.2 are used to check for the basicity of a k × n PGM over F[D] via the minors, 1 ≤ k ≤ 4. In the field case, reducedness is equivalent to μ_G = ν_G, where G(D) is a PGM, μ_G is the maximum degree among the k × k minors of G(D) and ν_G is the overall constraint length of G(D). Please note that we will not use the notion of reducedness in this thesis, so the reader is referred to [15]. However, we created subroutines, found in Appendix A.3, to test for the reducedness of a given k × n PGM G(D) through μ_G and ν_G. In Appendix A.4, the subroutines are intended to verify the predictable degree property (PDP) of a k × n PGM over the ring of polynomials, where 1 ≤ k ≤ 4. Technically, the program on PDP works over any ring defined in MAGMA. The subroutines given in Appendix A.5 are meant to estimate the free distance of convolutional codes over Z_2 and Z_4. In this program, we use the truncation method introduced by Sison [22]. Finally, the subroutines in Appendix A.6 are used to construct examples of 4 × 8 encoders of self-dual convolutional codes over Z_2 and Z_4.
Chapter 2
THEORETICAL BACKGROUND
In this chapter we introduce groups, rings, fields, modules, vector spaces, matrices, and linear block codes. We look closely at a special kind of ring, the ring of Laurent series, and its subrings, which are of great use in the discussion of the succeeding chapters. These mathematical concepts are necessary for the understanding of this thesis. The references for this chapter are the works by Hungerford [10] and Sison [21]. Proofs and some details are excluded, so the reader is referred to [10] and [21], accordingly.
2.1 Groups, Rings and Fields
Definition 2.1. A group is a non-empty set G together with a binary operation ∗ on G such that the following three properties hold:

(i) ∗ is associative, that is, for any a, b, c ∈ G, a ∗ (b ∗ c) = (a ∗ b) ∗ c;

(ii) there is an identity element e_G in G such that for all a ∈ G, a ∗ e_G = e_G ∗ a = a;

(iii) for each a ∈ G, there exists an inverse element a^{-1} ∈ G such that a ∗ a^{-1} = a^{-1} ∗ a = e_G.

If the group also satisfies

(iv) for all a, b ∈ G, a ∗ b = b ∗ a,

then the group is called abelian.
Example 2.1.

1. The set Z_2 = {0, 1} is an abelian group under addition modulo 2. We can also use the notation F_2 for Z_2.

2. Similarly, the set Z_4 = {0, 1, 2, 3} is an abelian group under addition modulo 4.

3. The set of all n-tuples over F_2 [resp. Z_4], denoted by F_2^n [resp. Z_4^n], is an abelian group under componentwise addition modulo 2 [resp. addition modulo 4].
A non-empty subset H of a group G is said to be a subgroup of G if properties (i)-(iii) hold for arbitrary elements a, b, c ∈ H.

Theorem 2.1. A non-empty subset H of a group G is a subgroup if and only if a ∗ b^{-1} ∈ H for all a, b ∈ H.
Definition 2.2. A ring is a non-empty set R together with two binary operations, usually denoted as addition (+) and multiplication (·), such that:

(i) (R, +) is an abelian group;

(ii) (ab)c = a(bc) for all a, b, c ∈ R;

(iii) a(b + c) = ab + ac and (a + b)c = ac + bc for all a, b, c ∈ R.

If in addition:

(iv) ab = ba for all a, b ∈ R,

then R is said to be a commutative ring. If R contains an element 1_R such that

(v) 1_R a = a 1_R = a for all a ∈ R,

then R is said to be a ring with identity (unity), and the identity is the element 1_R. The additive identity of R is called the zero element and is denoted by 0.
From this point up to the last part of this section, the ring R under consideration is assumed to be a commutative ring with identity, unless otherwise specified. A zero divisor is a nonzero element a ∈ R such that ab = 0 for some nonzero b ∈ R. An element c ∈ R is called nilpotent if there is a positive integer n such that c^n = 0. A ring R has no zero divisor if and only if the cancellation laws hold in R. If a commutative ring R with identity 1_R ≠ 0 has no zero divisor, then R is said to be an integral domain.

If there is a smallest positive integer n such that na = 0 for all a in a ring R, then R is said to have characteristic n, denoted by char R = n. If no such n exists, then R is said to have characteristic zero, char R = 0. If R has identity 1_R and char R = n > 0, then n is the order of 1_R in the additive group of R. Furthermore, if R is an integral domain, then n is prime.
An element a ∈ R is said to be left [resp. right] invertible if there exists c ∈ R [resp. b ∈ R] such that ca = 1_R [resp. ab = 1_R]. The element c [resp. b] is called a left [resp. right] inverse of a. If an element a of R is both left and right invertible, then a is said to be invertible or a unit in R. The set of units in R forms a multiplicative group. From this point, we denote that group by R^u.

A nonzero element q of R is said to divide an element p ∈ R, or to be a divisor of p, denoted q|p, if there exists x ∈ R such that qx = p. In this case, we say that p is a multiple of q, or that p is divisible by q. Further, if q|p and p|q, then we say that q and p are associates. A nonzero nonunit element c of R is said to be irreducible if whenever c = ab, then a or b is a unit in R.

A ring D with identity 1_D ≠ 0 in which every nonzero element is a unit is called a division ring. A field is a commutative division ring. Every field is an integral domain, and every finite integral domain is a field. A finite field of order q is usually called a Galois field, denoted by GF(q). Henceforth, we use F or GF(q) to denote an arbitrary field.
A nonempty subset I of a commutative ring R is called an ideal if and only if, for all a, b ∈ I and r ∈ R, the following conditions hold:

a, b ∈ I ⟹ a − b ∈ I,
r ∈ R and a ∈ I ⟹ ra ∈ I.

Let a ∈ R; then the set Ra = {ra | r ∈ R} is an ideal, called the principal ideal generated by a, which can be denoted by (a). If every ideal of a ring is principal, that ring is called a principal ideal ring. The ring Z_M is a principal ideal ring. A principal ideal ring which is an integral domain is called a principal ideal domain (PID).
Example 2.2.

1. The set of integers Z is an infinite ring under the usual addition and multiplication.

2. The set of integers Z_M = {0, 1, 2, . . . , M − 1} together with addition modulo M and multiplication modulo M is a finite commutative ring with identity. This ring is usually called the ring of integers modulo M. If M = p^r, where p is a prime and r ∈ Z, r > 0, then we have Z_{p^r}.
3. In the set Z_4 = {0, 1, 2, 3} together with addition and multiplication modulo 4, 2 is a zero divisor while 1 and 3 are both units. Hence, Z_4 is not a field since 2 is not a unit.

4. Z_2 = {0, 1} under addition and multiplication modulo 2 is a field, commonly known as the binary field.
5. Consider a commutative ring R with unity and the indeterminate D. We introduce the ring of Laurent series and its subrings.

(a) A Laurent series is an infinite sum in D with a finite number of negative powers of D. For instance,

x(D) = \sum_{j=-\infty}^{+\infty} x_j D^j,

where the coefficients x_j are in R, is a Laurent series over R in the indeterminate D. We denote the ring of Laurent series over R in the indeterminate D by R((D)), where an element of this ring is given by x(D) above. If R is a field, then R((D)) is also a field. A Laurent series is said to be causal if it contains no negative powers of D.

(b) The ring of formal power series, denoted by R[[D]], contains elements of the form

x(D) = \sum_{j=0}^{+\infty} x_j D^j,

where x_j ∈ R. A formal power series is nothing more than a causal Laurent series.
(c) The ring of polynomials is denoted by R[D]; each of its elements contains no negative powers and only a finite number of positive powers of D. That is, every element p(D) in R[D] is of the form

p(D) = a_0 + a_1 D + a_2 D^2 + \ldots + a_n D^n = \sum_{i=0}^{n} a_i D^i,

where a_i ∈ R and n is a nonnegative integer.

Consider f(D), g(D) ∈ R[D] given by f(D) = \sum_{i=0}^{n} a_i D^i and g(D) = \sum_{i=0}^{m} b_i D^i, respectively. Then addition is defined as

f(D) + g(D) = \sum_{i=0}^{\max(n,m)} (a_i + b_i) D^i,

and multiplication is given by

f(D)g(D) = \sum_{k=0}^{n+m} c_k D^k, where c_k = \sum_{i=0}^{k} a_{k-i} b_i.

(A small computational sketch of these operations is given after this list.)
The leading coefficient of a nonzero polynomial is the nonzero coefficient of the term with the largest power of D. On the other hand, the trailing coefficient of a nonzero polynomial is the nonzero coefficient of the term with the smallest power of D. For p(D) ∈ R[D], if a_n ≠ 0 and a_i = 0 for all i > n, then n is the degree of the polynomial p(D), denoted by deg(p(D)). The leading coefficient of the polynomial p(D) shown above is a_n if a_n ≠ 0.

A polynomial p(D) is said to be a monic polynomial if the leading coefficient of p(D) is 1_R. The additive identity of R[D] is the zero polynomial, the polynomial whose coefficients are all zero. A root of a nonzero polynomial p(D) ∈ R[D] is an element w in a ring S ⊇ R such that p(w) = 0. Given that p(D) ∈ R[D] is not a unit, p(D) is said to be irreducible over R[D] if for every factorization p(D) = r(D)s(D), either r(D) or s(D) is a unit in R[D].
(d) We let the ring of rational functions R(D) be the set

{ p(D)/q(D) | p(D), q(D) ∈ R[D], q(D) ≠ 0 and the trailing coefficient of q(D) is a unit in R }.

The condition that the trailing coefficient of q(D) is a unit in R allows us to treat a rational function as an equivalence class under the relation

p_1(D)/q_1(D) ∼ p_2(D)/q_2(D) if and only if p_1(D) q_2(D) = p_2(D) q_1(D).

Note that we can expand p(D)/q(D) by performing long division; thus, every rational function is uniquely expressible as a Laurent series with at most finitely many negative powers of D.

(e) F(D) is a field and F[D] is a PID.

(f) Z_4(D) is a principal ideal ring [5], while Z_4[D] is not a field since it contains zero divisors.
(g) A considerable subring of R(D) is the ring of realizable functions R_r(D). This ring consists of the rational functions p(D)/q(D) where q(0) is a unit in R. A realizable function, when expanded into a Laurent series, is a causal rational function. That is, the ring R_r(D) can be seen as the intersection of R(D) and R[[D]].
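The following Python sketch (ours, not part of the thesis, whose programs are written in MAGMA) implements the addition and multiplication of item (c) above for R = Z_M, with a polynomial stored as its coefficient list [a_0, a_1, . . . , a_n].

def poly_add(f, g, M):
    """f(D) + g(D) in Z_M[D], coefficients reduced modulo M."""
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [(a + b) % M for a, b in zip(f, g)]

def poly_mul(f, g, M):
    """f(D)g(D) via the convolution c_k = sum_i a_{k-i} b_i, modulo M."""
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] = (c[i + j] + a * b) % M
    return c

# Example over Z_4: (1 + 3D) + (1 + D) = 2 and (1 + 3D)(1 + D) = 1 + 3D^2.
print(poly_add([1, 3], [1, 1], 4))   # [2, 0]
print(poly_mul([1, 3], [1, 1], 4))   # [1, 0, 3]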
If R and S are rings, a function f : R → S is a ring homomorphism provided that for all a, b ∈ R, we have

f(a + b) = f(a) + f(b) and f(ab) = f(a)f(b).

We focus our attention on a specific ring homomorphism on Z_{p^r}, called the mod-p reduction map, which sends x ∈ Z_{p^r} to x̄ = x mod p ∈ Z_p. We also consider its natural extension to Z_{p^r}[D], defined by

\overline{a_0 + a_1 D + \cdots + a_n D^n} = ā_0 + ā_1 D + \cdots + ā_n D^n.    (2.1)

The extended map is a ring homomorphism from Z_{p^r}[D] onto Z_p[D].
Consider p_1(D), p_2(D) ∈ Z_{p^r}[D]. Then p_1(D) and p_2(D) are said to be coprime in Z_{p^r}[D] if there are polynomials c_1(D), c_2(D) ∈ Z_{p^r}[D] such that

c_1(D) p_1(D) + c_2(D) p_2(D) = 1.

Two non-invertible polynomials p_1(D) and p_2(D) are coprime in Z_{p^r}[D] if and only if p̄_1(D) and p̄_2(D) are coprime in Z_p[D] (see [21]).
2.2 Modules and Vector Spaces
Definition 2.3. Consider a commutative ring R. An R-module is an additive abelian group A together with a function R × A → A, where the image of the ordered pair (r, a) ∈ R × A is denoted by ra, such that for all r, s ∈ R and a, b ∈ A, the following properties are satisfied:

(i) r(a + b) = ra + rb;

(ii) (r + s)a = ra + sa;

(iii) (rs)a = r(sa).

If R has an identity element 1_R and

(iv) 1_R a = a for all a ∈ A,

then A is said to be a unitary R-module. If R is a division ring, then a unitary R-module is called a vector space.

Moreover, if R is a field, that R-module is also regarded as a vector space or an R-vector space.
A non-empty subset B of an R-module A is called a submodule of A if B is an additive subgroup of A and rb ∈ B for all r ∈ R and all b ∈ B. A submodule of a vector space is called a subspace.

Theorem 2.2 (Compact Criterion, [10]). A non-empty subset W of an F-vector space V is a subspace if and only if a + rb ∈ W for all a, b ∈ W and r ∈ F.
Example 2.3.

1. The set of all n-tuples over F, denoted by F^n, is a vector space.

2. F_2(D)^n is a vector space over F_2(D), while F_2[D]^n is an F_2[D]-module.

3. Z_4(D)^n and Z_4[D]^n are a Z_4(D)-module and a Z_4[D]-module, respectively.
Definition 2.4. Let u = (u_1, . . . , u_n) and v = (v_1, . . . , v_n) be vectors in F^n. Then the inner product of u and v is

u · v = \sum_{i=1}^{n} u_i v_i.

If u · v = 0, then u and v are orthogonal to each other.
A subset X of an R-module A is said to be linearly independent provided that for distinct x_1, x_2, . . . , x_n ∈ X and r_i ∈ R,

r_1 x_1 + r_2 x_2 + \ldots + r_n x_n = 0 ⟹ r_i = 0 for every i.
We say that a set Y spans A if A is generated by Y as an R-module. If R has an identity and A is unitary, every element of A may be written as a linear combination

r_1 y_1 + r_2 y_2 + \ldots + r_n y_n,  r_i ∈ R, y_i ∈ Y,

if and only if Y spans A. A linearly independent subset of A that spans A is called a basis of A. An R-module F with a nonempty basis X is also called a free R-module on the set X, and X is called a free basis.

Every vector space V over a division ring D has a basis and is therefore a free D-module. In general, every linearly independent subset of V is contained in a basis of V. R is said to have the invariant dimension property if any two bases of a free R-module F have the same cardinality. The cardinal number of any basis of F is called the dimension (or rank) of F over R. Furthermore, if F has a basis of finite cardinality containing k elements, then F is said to be finitely generated with k generators, and F is said to have dimension (or rank) k. The dimension (or rank) of F is uniquely determined by F.
A module A is said to satisfy the ascending chain condition (ACC) on submodules (or to be Noetherian) if for every chain

A_1 ⊆ A_2 ⊆ A_3 ⊆ . . .

of submodules of A, there is an integer s such that A_i = A_s for all i ≥ s.

A module B is said to satisfy the descending chain condition (DCC) on submodules (or to be Artinian) if for every chain

B_1 ⊇ B_2 ⊇ B_3 ⊇ . . .

of submodules of B, there is an integer t such that B_i = B_t for all i ≥ t.

A commutative ring R is considered as a module over itself, and the submodules of R are precisely the ideals of R. The ring R is said to be Noetherian or Artinian if R satisfies the ACC or DCC on its ideals, respectively. The integer ring Z_M is both Noetherian and Artinian. An R-module A has finite length if and only if A is Noetherian and Artinian.
2.3 Matrices
In this section, we discuss some properties and operations on matrices and a special type of matrices, the unimodular matrices. For convenience, we denote by R^{k×n} the set of all k × n matrices with entries coming from R. Similarly, when we want to say that a matrix A is a k × n matrix over R, we write A^{k×n} or A ∈ R^{k×n}. We denote the a × a identity matrix by I_a, or simply I when the size is not important.

In our discussion, a k × n matrix A over R can also be written as A = (a_{ij}), where a_{ij} is the (i, j)-th entry (the entry in the i-th row and j-th column of A) or the (i, j)-th element of A. Clearly, a_{ij} ∈ R for i = 1 to k and j = 1 to n. Further, we write A^t to indicate the transpose of A, which is given by the n × k matrix A^t = (b_{ij}) such that b_{ij} = a_{ji} for all 1 ≤ i ≤ k and 1 ≤ j ≤ n.
Definition 2.5 (Hungerford, [10]). Let R be a commutative ring with identity and A = (a_{ij}) ∈ R^{n×n}. The determinant function, denoted by det, is defined by

det(A) = \sum_{\sigma} (\mathrm{sgn}\,\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},

where the summation is over all permutations σ of the set S = {1, 2, . . . , n} and sgn σ is taken to be + or − according to whether σ is even or odd, respectively.
It is well known that the determinant of a matrix, say A, can be expressed as a cofactor expansion along a selected row or column of A. The general case of this is Laplace's expansion of det(A), where det(A) is expanded along a selected set of rows or columns of A. We now discuss how to obtain such an expansion and consider the expansion of det(A) along a selected set of rows of A. The same arguments and notations carry through if we expand det(A) along a set of selected columns of A. Let A = (a_{ij}), i, j ∈ I = {1, 2, . . . , n}. Select any k rows of A indexed by

K = {i_1, i_2, . . . , i_k} ⊆ I,

where i_1 < i_2 < . . . < i_k. Then form the k × k matrix A_k^{(s)} = (a_{rc}), r ∈ K, by choosing column indices c ∈ C_s = {j_1, j_2, . . . , j_k} ⊆ I, j_1 < j_2 < . . . < j_k. Consequently, the corresponding (n−k) × (n−k) matrix A_{n−k}^{(s)} can be formed by removing the columns and rows indexed by C_s and K, respectively. That is, we can think of A_k^{(s)} and A_{n−k}^{(s)} as complements with respect to A. Since we are choosing k columns out of n columns, the number of matrices A_k^{(s)} (similarly A_{n−k}^{(s)}) that can be formed is given by

N = \binom{n}{k} = \frac{n!}{k!(n-k)!}.

Also, the number of distinct index sets C_s that can be formed is N. Moreover, for a given column index set C_s and row index set K that determine A_k^{(s)}, we let

K_s = j_1 + j_2 + \ldots + j_k + i_1 + i_2 + \ldots + i_k.

We are now ready to state Laplace's Expansion Theorem.
Theorem 2.3 (Laplace's Expansion). If A is an n × n matrix over a commutative ring with unity, then for any choice of k rows (or k columns) of A,

det(A) = \sum_{s=1}^{N} (-1)^{K_s} \det(A_k^{(s)}) \det(A_{n-k}^{(s)}).    (2.2)

The factor (−1)^{K_s} det(A_{n−k}^{(s)}) in (2.2) is also known as the cofactor of det(A_k^{(s)}). When k = 1, (2.2) is the cofactor expansion of det(A) along a row or column.
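The following Python sketch (ours, not from the thesis) checks (2.2) numerically: the determinant of an arbitrary 4 × 4 integer matrix is expanded along its first two rows and compared with the value computed directly from Definition 2.5.

from itertools import combinations, permutations

def det(M):
    """Definition 2.5: sum over permutations of sgn(s) * a_{1,s(1)} ... a_{n,s(n)}."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def laplace(M, rows):
    """Expansion (2.2) of det(M) along the given rows."""
    n = len(M)
    idx = range(n)
    other_rows = [r for r in idx if r not in rows]
    total = 0
    for cols in combinations(idx, len(rows)):
        other_cols = [c for c in idx if c not in cols]
        sub  = [[M[r][c] for c in cols] for r in rows]               # A_k^(s)
        comp = [[M[r][c] for c in other_cols] for r in other_rows]   # A_{n-k}^(s)
        Ks = sum(r + 1 for r in rows) + sum(c + 1 for c in cols)     # 1-based K_s
        total += (-1) ** Ks * det(sub) * det(comp)
    return total

A = [[2, 1, 0, 3],
     [1, 4, 2, 1],
     [0, 2, 5, 1],
     [3, 1, 1, 2]]
print(det(A), laplace(A, rows=(0, 1)))   # the two values agree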
The following lemmas are standard results in linear algebra.
Lemma 2.1. If A, B ∈ R^{n×n}, then det(AB) = det(A) det(B).
Lemma 2.2 (Hoffman and Kunze, [9]). Let R be a commutative ring with identity. Suppose M ∈ R^{n×n} is a block matrix, say

M = \begin{pmatrix} A & B \\ C & D \end{pmatrix},

where A ∈ R^{k×k}, B ∈ R^{k×(n−k)}, C ∈ R^{(n−k)×k}, D ∈ R^{(n−k)×(n−k)}, k ≤ n, and either B or C is a zero matrix, that is, M is either

\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} \quad or \quad \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} \quad or \quad \begin{pmatrix} A & 0 \\ 0 & D \end{pmatrix}.

Then det(M) = det(A) det(D).
We say that a square matrix S is invertible over R if there exists a square matrix S′ such that SS′ = S′S = I or, equivalently, if det(S) is a unit in R.

Definition 2.6 (McEliece, [15]). A unimodular matrix over F[D] is a square polynomial matrix whose determinant is a nonzero scalar in F.
We adopt the given definition for the ring case.

Definition 2.7. A unimodular matrix over R[D] is a square polynomial matrix whose determinant is a unit in R.

Consider the set U(n, R[D]) of all n × n unimodular matrices over R[D]. Note that, in general, all matrices in U(n, R[D]) are invertible over R(D). The following theorem says more about the invertibility of matrices in U(n, R[D]).
Theorem 2.4. U(n, R[D]) is a group.

Proof: U(n, R[D]) is not empty since I_n ∈ U(n, R[D]); clearly I_n is a polynomial matrix and det(I_n) ∈ R^u. Let A, B ∈ U(n, R[D]); then det(A), det(B) ∈ R^u; A and B are both polynomial matrices, hence the product AB is also a polynomial matrix; and det(AB) = det(A) det(B) ∈ R^u. Thus, U(n, R[D]) is closed under matrix multiplication. Moreover, matrix multiplication is associative in U(n, R[D]).

It remains to show that if A ∈ U(n, R[D]), then A^{-1} ∈ U(n, R[D]). Specifically, we want to show that A^{-1} is also a polynomial matrix and det(A^{-1}) ∈ R^u.

Since A is invertible, we can compute A^{-1} using a standard formula in linear algebra. Suppose A = (a_{ij}); then A^{-1} is given by

A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix},

where A_{ij} = (−1)^{i+j} det(M_{ij}) is the cofactor of a_{ij} and M_{ij} is the (n−1) × (n−1) submatrix of A obtained by deleting the i-th row and j-th column of A. It is clear that A_{ij} ∈ R[D] since det(M_{ij}) ∈ R[D]. Also, since det(A) ∈ R^u, we have A_{ij}/det(A) ∈ R[D]; therefore A^{-1} ∈ R[D]^{n×n}.

Expanding det(A^{-1}) using Definition 2.5, we have

det(A^{-1}) = \sum_{\sigma \in S_n} (\mathrm{sgn}\,\sigma) \frac{A_{\sigma(1)1}}{\det(A)} \frac{A_{\sigma(2)2}}{\det(A)} \cdots \frac{A_{\sigma(n)n}}{\det(A)}.    (2.3)

As another standard result in linear algebra, we have

A\,\mathrm{adj}(A) = \mathrm{adj}(A)\,A = \det(A) I_n = \mathrm{diag}(\det(A), \det(A), \ldots, \det(A)),    (2.4)

where adj(A) is the so-called adjoint of A. Taking determinants of both sides of (2.4) gives

det(A) det(adj(A)) = [det(A)]^n, or det(adj(A)) = [det(A)]^{n−1}.    (2.5)

From (2.3) and (2.5), it follows that

det(A^{-1}) = \frac{1}{[\det(A)]^n} \sum_{\sigma \in S_n} (\mathrm{sgn}\,\sigma)\, A_{\sigma(1)1} A_{\sigma(2)2} \cdots A_{\sigma(n)n} = \frac{\det(\mathrm{adj}(A))}{[\det(A)]^n} = \frac{[\det(A)]^{n-1}}{[\det(A)]^n} = \frac{1}{\det(A)} ∈ R^u,

since det(A) ∈ R^u. Thus, A^{-1} ∈ U(n, R[D]).
The theorem above simply says that a unimodular matrix is a square polynomial matrix over R[D] with determinant in R^u, and thus its inverse is also a polynomial matrix. Since the units in R[D], where R is a commutative ring with unity, are not only the units in R, there is apparently a larger set of square polynomial matrices which are invertible over R[D]. It is also immediate from the proof of Theorem 2.4 that this set of square polynomial matrices over R[D] forms a group. We call such a matrix a square polynomial matrix with polynomial inverse (PMPI).
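To illustrate the last remark, the following Python sketch (ours; the 2 × 2 matrix is an arbitrary example, not one from the thesis) exhibits a PMPI matrix B over Z_4[D] with det(B) = 1, a unit in Z_4, together with its polynomial inverse, and verifies that BB^{-1} = I.

def pmul(f, g, M=4):
    c = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            c[i + j] = (c[i + j] + a * b) % M
    return c

def padd(f, g, M=4):
    n = max(len(f), len(g))
    return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % M
            for i in range(n)]

def matmul(X, Y, M=4):
    """Product of 2x2 matrices whose entries are polynomials over Z_M."""
    return [[padd(pmul(X[i][0], Y[0][j], M), pmul(X[i][1], Y[1][j], M), M)
             for j in range(2)] for i in range(2)]

# B = [[1, D], [3D, 1 + 3D^2]]; det(B) = (1 + 3D^2) - 3D^2 = 1, a unit in Z_4.
B     = [[[1], [0, 1]], [[0, 3], [1, 0, 3]]]
# Its polynomial inverse (the adjugate, since det(B) = 1): [[1 + 3D^2, 3D], [D, 1]].
B_inv = [[[1, 0, 3], [0, 3]], [[0, 1], [1]]]

print(matmul(B, B_inv))   # the 2x2 identity over Z_4[D], up to trailing zeros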
2.4 Linear Block Codes
A block encoder is described by a linear map that is completely determined by a k × n matrix G over R, wherein a k-tuple u of symbols in R, called an information word, is sent to an n-tuple v of symbols over R, called a codeword, via the relationship v = uG.

A rate-k/n linear block code B over R generated by G is the set of codewords given by

B = { v ∈ R^n | v = uG, u ∈ R^k }.

The matrix G is called a generator matrix for B if the rows of G span B and if no proper subset of the rows of G generates B. In general, a block code B over R is a subset of R^n; but if B is linear, then B can be regarded as an R-submodule of R^n, which is not necessarily free.

One of the indicators of the goodness of a code is its minimum distance. The distance measures the capability of the code to detect and correct errors. A linear block code can be equipped with a suitable distance metric through a weight function wt defined on B, given by

wt : R^n → R.

We discuss the Hamming metric and the Lee metric.
The Hamming weight of an element x ∈ R is given by

wt_H(x) = \begin{cases} 1 & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}.    (2.6)

Let y = (y_1, y_2, . . . , y_n) ∈ R^n; then the Hamming weight of y is given by

wt_H(y) = wt_H(y_1) + wt_H(y_2) + \ldots + wt_H(y_n).
We consider a quaternary linear block code to be a linear block code with alphabet Z_4. The Lee weight of an element x ∈ Z_4, denoted by wt_L(x), is given below:

Table 2.1: The Lee weight function on Z_4

  x    wt_L(x)
  0    0
  1    1
  2    2
  3    1
Suppose y = (y_1, y_2, . . . , y_n) ∈ Z_4^n; the Lee weight of y is extended as follows:

wt_L(y) = wt_L(y_1) + wt_L(y_2) + \ldots + wt_L(y_n).
The minimum (Hamming, Lee, etc.) distance d of a linear block code B is given by

d = min{ wt(v − v′) | v, v′ ∈ B, v ≠ v′ },

where wt is a weight function (Hamming, Lee, etc.) defined on B and wt(v − v′) is the (Hamming, Lee, etc.) distance between v and v′.

Consider a linear block code over a field and codewords v and v′. The number of coordinates where v and v′ differ is precisely given by wt_H(v − v′). So if v is a sent codeword and v′ is the received codeword after transmission, then wt_H(v − v′) is the number of errors that occurred during the transmission. A linear block code B with minimum distance d can correct up to ⌊(d−1)/2⌋ errors. Thus, the higher the minimum distance, the better the code is. For most practical applications, the minimum distance of a code is therefore one of the most important parameters. However, determining the minimum distance of a code is not an easy problem.
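As a small worked example, the following Python sketch (ours; the generator matrix G below is an arbitrary toy example, not a code from the thesis) computes the minimum Lee distance of a quaternary linear block code by enumerating all codewords uG over Z_4.

from itertools import product

LEE = {0: 0, 1: 1, 2: 2, 3: 1}            # Table 2.1

def lee_weight(v):
    return sum(LEE[x] for x in v)

def codewords(G, q=4):
    k, n = len(G), len(G[0])
    for u in product(range(q), repeat=k):
        yield tuple(sum(u[i] * G[i][j] for i in range(k)) % q for j in range(n))

G = [[1, 0, 1, 2],
     [0, 1, 3, 1]]

# For a linear code, min wt(v - v') over distinct codewords equals the minimum
# weight of a nonzero codeword, since v - v' is itself a codeword.
print(min(lee_weight(v) for v in codewords(G) if any(v)))   # 3 for this G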
2.5 Summary
The basic algebraic structures such as groups, rings, fields, modules and vector spaces have been discussed. A special kind of ring, the ring of Laurent series, and its subrings, especially the ring of rational functions and the ring of polynomials, were considered since they will be of great use in the succeeding chapters. Some properties of and notations for matrices were established. A general background on linear block codes has been presented.
Chapter 3
CONVOLUTIONAL CODES OVER RINGS
In this chapter, we discuss convolutional encoders. Then we adopt a specific definition of a convolutional code. We also give the structural properties of generator matrices and the free distance of a convolutional code. The references used in this chapter are the works by McEliece [15], Mittelholzer [16, 17], Sison [21, 22] and Wittenmark [23].
3.1 Definition of a Convolutional Code

A convolutional encoder over R is a linear mapping where the input sequence u is a possibly infinite sequence of information blocks u_j, denoted by

u = . . . u_{−2} u_{−1} u_0 u_1 u_2 . . . ,

where each block u_j has k symbols, that is,

u_j = (u_j^{(1)}, u_j^{(2)}, . . . , u_j^{(k)}),

where u_j^{(i)} ∈ R, i = 1, . . . , k.

At a given time-instant j, a k-ary block u_j is fed into the convolutional encoder and an n-ary block v_j, called the code block, is produced. Consequently, after each information block has passed through the convolutional encoder, the output sequence v is obtained, which is given by

v = . . . v_{−2} v_{−1} v_0 v_1 v_2 . . . ,

where

v_j = (v_j^{(1)}, v_j^{(2)}, . . . , v_j^{(n)})

and v_j^{(i)} ∈ R, i = 1, . . . , n. It should be noted that the information sequence u and the code sequence v should start at some finite time j (conveniently at j = 0) but may or may not end.
In convolutional encoding, the code block v_j uses not only the current information block u_j, but also a fixed number, say m, of earlier information blocks u_{j−1}, u_{j−2}, . . . , u_{j−m}. Specifically, at each time-instant j, the code block v_j is generated as

v_j = u_j G_0 + u_{j−1} G_1 + \ldots + u_{j−m} G_m,    (3.1)

where each generator submatrix G_i is a k × n matrix over R, i = 0, 1, . . . , m. Unlike the block encoder, the convolutional encoder has the so-called memory, which is given by the integer m. We can think of a linear block code as a degenerate special case of a convolutional code. That is, we can say that a linear block code is a memoryless (m = 0) convolutional code.
Consider an information sequence u. The code sequence v is determined through

v = uG,

or

. . . v_{−2} v_{−1} v_0 v_1 v_2 . . . = (. . . u_{−2} u_{−1} u_0 u_1 u_2 . . .) G,

where G is the semi-infinite scalar matrix over R given by

G = \begin{pmatrix}
G_0 & G_1 & G_2 & \cdots & G_{m-1} & G_m & & \\
 & G_0 & G_1 & G_2 & \cdots & G_{m-1} & G_m & \\
 & & G_0 & G_1 & G_2 & \cdots & G_{m-1} & G_m \\
 & & & \ddots & & & & \ddots
\end{pmatrix},

where each empty cell is assumed to be filled with zeros. The matrix G is called the generator matrix for the code. In the convolutional encoding process, we see that for every k-tuple of information block, a corresponding n-tuple of code block is produced. Then we say that the convolutional code is of rate-k/n.
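The following Python sketch (ours, not from the thesis) carries out the time-domain encoding rule (3.1) for the standard binary rate-1/2 encoder with generator submatrices G_0 = (1 1), G_1 = (0 1) and G_2 = (1 1), i.e. the encoder G(D) = (1 + D^2, 1 + D + D^2), used here purely as an illustration.

G_SUB = [[1, 1], [0, 1], [1, 1]]          # G_0, G_1, G_2 as 1x2 matrices over Z_2

def encode(u, g=G_SUB, q=2):
    """Code blocks v_j = sum_i u_{j-i} G_i (mod q) for j = 0, 1, ..."""
    m = len(g) - 1
    v = []
    for j in range(len(u) + m):           # run until the memory is flushed
        block = [0, 0]
        for i in range(m + 1):
            if 0 <= j - i < len(u):
                block = [(block[t] + u[j - i] * g[i][t]) % q for t in range(2)]
        v.append(tuple(block))
    return v

print(encode([1, 0, 1, 1]))   # code blocks for the information sequence 1 0 1 1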
We can represent the input sequence u and output sequence v in another way through the delay operator D, via the so-called D-transform. The D-transforms of the sequences u and v are given by

u(D) = \ldots + u_{−1} D^{−1} + u_0 + u_1 D + u_2 D^2 + \ldots

and

v(D) = \ldots + v_{−1} D^{−1} + v_0 + v_1 D + v_2 D^2 + \ldots ,

respectively. The exponent j of D gives the time-instant when u_j and v_j appeared in the sequences. Since u_j ∈ R^k and v_j ∈ R^n, we have

u(D) = [u_1(D), . . . , u_k(D)]

and

v(D) = [v_1(D), . . . , v_n(D)],

where

u_j(D) = \sum_{i=-\infty}^{+\infty} u_i^{(j)} D^i,  j = 1, . . . , k,

and

v_j(D) = \sum_{i=-\infty}^{+\infty} v_i^{(j)} D^i,  j = 1, . . . , n.

They are related through the equation

v(D) = u(D) G(D),

where

G(D) = G_0 + G_1 D + G_2 D^2 + \ldots + G_m D^m

is a generator matrix in D-transform. As a result, we can view convolutional codes as linear block codes over the ring of Laurent series R((D)) or over the ring of rational functions R(D). In this case, convolutional codes can be studied algebraically.

Thus, we have the following definition, due to Massey.
Definition 3.1. A rate-k/n convolutional code C over R is an R(D)-submodule of R(D)^n given by

C = { u(D)G(D) | u(D) ∈ R(D)^k },

where G(D) is a k × n matrix over R(D) whose rows are free over R(D), that is, the kernel of G(D) is trivial. The free module C is the R(D)-row span of G(D). The rational matrix G(D) is called a transfer function matrix. If the entries of G(D) are realizable, then G(D) is called a generator matrix or encoding matrix, or simply an encoder, for C. In this sense, we can view the convolutional code C as a rate-k/n linear block code over R(D) with block generator matrix G(D).

Another definition of a convolutional code is due to G. D. Forney, wherein C is regarded as an R((D))-submodule of R((D))^n given by

C = { u(D)G(D) | u(D) ∈ R((D))^k },

where G(D) is a k × n matrix over R(D) whose rows are free over R((D)). For convolutional codes over fields or over Noetherian rings, the two definitions lead to equivalent theories, but in general this is not true. T. Mittelholzer, in [17], found out that there exist generator matrices over R(D) whose rows are free over R(D) but not free over R((D)).

In this thesis, we use Definition 3.1, by Massey. Moreover, since Z_M is Noetherian, it is sufficient to use Definition 3.1 for convolutional codes over Z_M.
3.2 Structural Properties of Convolutional Encoders
Encoding and decoding are important parts of the communication process. We can see that the quality of encoding depends mainly on the structural properties of the encoders, or generator matrices. We can also look at the encoding process as a linear mapping from the input space R(D)^k to the output space R(D)^n, also known as the convolutional transducer over R(D), given by

R(D)^k → R(D)^n,  u(D) ↦ v(D).

The codeword v(D) is obtained via

v(D) = u(D)G(D).

We require the rows of G(D) to be free over R(D) or, equivalently, we want this mapping to be injective, so that the reconstruction of the information sequence u(D) from v(D) after transmission is possible, even when there is no noise in the channel.

For the entire section, we consider C to be a rate-k/n convolutional code over the ring R.

The study of encoding naturally leads to the study of the encoders or generator matrices. The generator matrix of a code is not unique. We say that two generator matrices are equivalent if they generate the same code. Further, two generator matrices G(D) and G′(D) of C are equivalent if and only if there exists a k × k invertible matrix T(D) over R(D) such that G′(D) = T(D)G(D) [23].
We say that the code C is right invertible if it has a generator matrix G(D) which has a right inverse over R(D). If the code C has a generator matrix G(D) which has a right inverse over R(D), then so does every generator matrix for this code. It is well known that every generator matrix of a convolutional code over a field is right invertible. This is not so in the ring case, where there exist convolutional codes over R which are not right invertible. However, if R satisfies the DCC, then every convolutional code over R is right invertible [17]. Hence, for codes over the finite commutative rings Z_M, every generator matrix has a right inverse.

A generator matrix G(D) of C is said to be systematic if it causes the information symbols to appear unchanged among the code symbols, i.e., if some k of its columns form I_k. Convolutional codes over fields have both systematic and nonsystematic generator matrices, so we say that systematicity in the field case is an encoder property. In the ring case, it is a code property. A convolutional code C is systematic if it has a systematic generator matrix. C is systematic if and only if it has a generator matrix G(D) that has a k × k submatrix whose determinant is a unit in R_r(D), and a convolutional code over Z_{p^r} is systematic if and only if G(0) mod p has full rank over GF(p) (see [23]). It is worth noting that if C is systematic, then C is right invertible, but the converse is not true.
A generator matrix is said to be non-catastrophic if there does not exist an infinite-weight information sequence u(D) that gives a codeword v(D) of finite weight. On the other hand, a decoding catastrophe is said to have occurred if there is a codeword of finite weight but, after decoding, the corresponding information word has infinite weight, resulting in an infinite number of errors [15]. Obviously, in all cases, we do not want our generator matrices to be catastrophic.

In [16], it was reported that every realizable generator matrix over R(D) can be realized with a finite number of memory cells, capable of storing a finite number of scalars, and adders that perform multiplication by constants and additions, respectively, within the ring R. The multiplication and addition happen during the encoding process. To see this, recall that at time-instant j, the code block v_j needs the information blocks u_j, u_{j−1}, . . . , u_{j−m}. We call this set the encoder states at time j. G(D) is minimal if there exists a realization of G(D) that uses the least number of encoder states required to generate the code. Furthermore, every realizable systematic generator matrix is minimal [16].
A code can have rational and polynomial generator matrices. We give much attention to the latter. A generator matrix G(D) is called a polynomial generator matrix (PGM) if all its entries are polynomial. Let v(D) = (v_1(D), v_2(D), . . . , v_n(D)) be a polynomial vector (i.e., a vector all of whose components v_i(D) are polynomial), and define the degree of v(D), denoted by deg v(D), to be deg v(D) = max_i {deg v_i(D)}. We let g_i(D) be the i-th row of the k × n generator matrix G(D) = (g_{ij}(D)). The i-th constraint length, ν_i, is given by ν_i = deg g_i(D), i = 1, . . . , k. The overall constraint length, ν, is the sum of all the constraint lengths. The memory m is given by m = max_{1≤i≤k} {ν_i}.
If u(D) = (u_1(D), u_2(D), . . . , u_k(D)) is a polynomial input and v(D) is a polynomial output given by v(D) = u(D)G(D), then in general we have

deg v(D) ≤ max_i {deg u_i(D) g_i(D)} = max_i {deg u_i(D) + ν_i}.

If deg v(D) = max_i {deg u_i(D) + ν_i} for all polynomial inputs u(D), then we say that the generator matrix G(D) has the predictable degree property (PDP). Define the indicator matrix [G(D)]_h to be the k × n matrix over R with the row-wise highest-degree coefficients of the entries of G(D) in the corresponding positions of [G(D)]_h, and zeros elsewhere. A polynomial generator matrix G(D) has the predictable degree property if and only if the rows of [G(D)]_h are free over R. Moreover, if [G(D)]_h has a k × k submatrix whose determinant is a unit in R, then the rows of [G(D)]_h are free, hence G(D) has the PDP (see [23]).

Let μ be the highest degree among the determinants of the k × k submatrices of a PGM G(D). If G(D) has the PDP, then μ = ν [23].
In some literature (see for instance [6, 5, 15]), the term minor is used to indicate the determinant of a k × k submatrix of a PGM G(D). Henceforth, we use the term minor for the determinant of a k × k submatrix of a k × n PGM G(D).

We say that a generator matrix G(D) is basic if it is polynomial and has a polynomial right inverse G′(D) such that G(D)G′(D) = I. Basicity is equivalent to the so-called polynomial output implies polynomial input (POPI) property [3]. In the field case, a k × n PGM G(D) is basic if and only if the gcd of the k × k minors of G(D) is 1 and, equivalently, if the invariant factors of G(D) are all 1 (see Theorem 4.1). Also, G(D) is non-catastrophic if and only if the gcd of the k × k minors of G(D) is a power of D [15]. So, in the field case we have:

systematicity ⟹ basicity ⟹ non-catastrophicity (see [15]).
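For rate-1/n PGMs (k = 1) the minors are simply the entries of G(D), so the criteria above reduce to a gcd computation in GF(2)[D]. The following Python sketch (ours, not from the thesis) checks two classical binary encoders; polynomials over GF(2) are stored as bitmasks (bit i is the coefficient of D^i).

def gf2_mod(a, b):
    """Remainder of a divided by b in GF(2)[D]."""
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

# G1(D) = (1 + D^2, 1 + D + D^2): the gcd of the minors is 1, so G1 is basic
# (and hence non-catastrophic).
print(bin(gf2_gcd(0b101, 0b111)))   # 0b1

# G2(D) = (1 + D, 1 + D^2): the gcd is 1 + D, neither 1 nor a power of D,
# so G2 is not basic and is catastrophic.
print(bin(gf2_gcd(0b011, 0b101)))   # 0b11, i.e. 1 + D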
A generator matrix G(D) is minimal-basic if it is basic and its overall constraint length ν is minimal over all equivalent basic generator matrices. A basic generator matrix G(D) with the PDP is minimal-basic [23]. In the field case, the following four statements are equivalent for a basic G(D): G(D) has the PDP, [G(D)]_h has full rank, G(D) is minimal-basic, and μ = ν. Moreover, in the field case, if G(D) is minimal-basic then it is minimal. In the ring case, minimal-basicity does not imply minimality [23].
3.3 Estimating Free Distance
Consider a codeword v(D) = (v_1(D), v_2(D), . . . , v_n(D)) ∈ C. The weight of v_i(D) is the sum of the weights (Hamming, Lee, etc.) of the Laurent series coefficients of v_i(D). The weight of v(D) is the sum of the weights of its components v_i(D), which is possibly infinite. The free distance of C, denoted by d_free(C), is given by

d_free(C) = min{ wt(v(D)) | v(D) ∈ C, v(D) ≠ 0 },

where wt is a weight function (Hamming, Lee, etc.) defined on R.

It is known that, for practical purposes, the free distance is the main parameter that determines the goodness of a code. Computing the free distance of a code is not an easy job; that is why bounds for the free distance are very important. In [22], V. P. Sison found Heller-type bounds for the homogeneous free distance of convolutional codes over finite Frobenius rings, which are generalized upper bounds on the free distances of convolutional codes over rings. In the proof of Theorem 1 in [22], Sison suggested how to estimate the said bound, and this is reflected in the MAGMA program found in Appendix A.5. We discuss the algorithm of the said estimation.
Consider a k × n PGM G(D) of a rate-k/n convolutional code C over a finite
commutative ring with unity, say R. Let m and ν_i be the memory and the i-th
constraint length of G(D), respectively. Moreover, let u(D) = [u_1(D), u_2(D), ..., u_k(D)]
be a polynomial information word of C. We truncate C by requiring
deg(u_i(D)) \le L - 1,   (3.2)
where 1 ≤ i ≤ k and L is a non-negative integer. Consequently, the corresponding
codeword v(D), given by v(D) = u(D)G(D), satisfies
deg(v(D)) \le m + L - 1
since ν_i ≤ m for all 1 ≤ i ≤ k. Since R is finite, the set of polynomial inputs
u(D), say P_{L-1}, is also finite. Specifically, the number of polynomials u_i(D) that
satisfy (3.2) is |R|^L. Thus, P_{L-1} has |R|^{kL} elements. Since G(D) is a
PGM, the set of all possible polynomial codewords v(D) = u(D)G(D),
say C_L, has |R|^{kL} elements. We focus on C_L and call it the truncated code of
C at parameter L. We now consider a linear block code, say B_L, corresponding
to C_L. Let v(D) = [v_1(D), v_2(D), ..., v_n(D)] be a codeword in C_L. Recall that
we can write each v_i(D) as
v_i(D) = v_0^{(i)} + v_1^{(i)} D + v_2^{(i)} D^2 + \cdots + v_{m+L-1}^{(i)} D^{m+L-1}, \quad v_j^{(i)} \in R.
Then the causal finite sequence corresponding to v(D) is
v = v_0^{(1)} v_0^{(2)} \ldots v_0^{(n)}\; v_1^{(1)} v_1^{(2)} \ldots v_1^{(n)}\; \ldots\; v_{m+L-1}^{(1)} v_{m+L-1}^{(2)} \ldots v_{m+L-1}^{(n)}.
Note that the length of v is n(m+L). The set of all causal finite sequences v obtained from C_L forms the
linear block code B_L of length n(m+L). The minimum distance of B_L approximates
the free distance of C. We now give the algorithm of the estimation process for
MAGMA implementation (a small illustrative sketch in Python follows the list).
1. Construct P_{L-1}.
2. Generate C_L via v(D) = u(D)G(D), where u(D) ∈ P_{L-1}.
3. Transform each v(D) in C_L to a causal finite sequence v.
4. Take B_L to be the set of all causal finite sequences v.
5. Get the minimum distance of B_L.
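The following is a minimal Python sketch of this truncation procedure. It is only an
illustration: the thesis's actual routine is the MAGMA program in Appendix A.5, and the
rate-1/2 encoder G(D) = (1 + D + D^2, 1 + D^2) over Z_2 used here is a standard small
example rather than one taken from this chapter.

from itertools import product

M = 2                                    # the ring Z_M
G = [[[1, 1, 1], [1, 0, 1]]]             # k x n matrix of coefficient lists (constant term first)
k, n = len(G), len(G[0])

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % M
    return out

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = (out[i] + c) % M
    for i, c in enumerate(b): out[i] = (out[i] + c) % M
    return out

def estimate_dfree(L):
    best = None
    for u in product(product(range(M), repeat=L), repeat=k):   # the input set P_{L-1}
        if all(all(c == 0 for c in ui) for ui in u):
            continue                                            # skip the zero information word
        wt = 0                       # weight of the causal sequence = sum of component weights
        for j in range(n):
            vj = [0]
            for i in range(k):
                vj = padd(vj, pmul(list(u[i]), G[i][j]))        # v_j(D) of v(D) = u(D)G(D)
            wt += sum(1 for c in vj if c != 0)                  # Hamming weight
        best = wt if best is None else min(best, wt)
    return best

print(estimate_dfree(L=4))   # prints 5, which is the free distance of this encoder

For this small encoder the estimate already equals d_free at L = 4; in general the estimate
is an upper bound that improves as L grows.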
3.4 Summary
We introduced convolutional codes over rings and adopted the definition by Massey.
The connections between convolutional encoding and the ring of Laurent series have
been emphasized. We defined a generator matrix of a convolutional code over the
ring of rational functions R(D) and considered PGMs. The structural properties of
convolutional encoders were given. Estimation of the free distance of a convolutional code
based on the truncation method proposed by Sison [22] was discussed.
Chapter 4
THE PARITY CHECK MATRIX OF A CONVOLUTIONAL CODE
In this chapter, we define a parity check matrix of a convolutional code and introduce
three ways of deriving a parity check matrix from a given generator matrix. These
constructions serve as the main tool in this thesis.
4.1 Definition of a Parity Check Matrix
A code is completely determined by its generator matrix. In some cases, a code can
also be described by its parity check matrix. In the field case, it is known that a parity
check matrix of a convolutional code always exists, while in the ring case this
happens only if the ring satisfies the descending chain condition (DCC). This is stated in
the following lemma.
Lemma 4.1 (Mittelholzer, [17]). Let R be a commutative ring satisfying the descending
chain condition and let G(D) be a k × n matrix over R(D). Then the following
statements are equivalent:
(i) The kernel of G(D) is trivial;
(ii) G(D) has a right inverse G(D)^{-1} ∈ R(D)^{n×k};
(iii) The code C generated by G(D) can be characterized by a parity check matrix
H(D) ∈ R(D)^{(n-k)×n}, i.e.,
C = \{ x(D) \in R(D)^n \mid x(D)H(D)^t = 0_{1\times(n-k)} \}.
Since the kernel of G(D) is trivial, i.e. the rows of G(D) are free over R(D),
the code C generated by G(D) is a free R(D)-submodule of R(D)^n. In other words,
every codeword in C can be expressed as a linear combination of the rows of G(D).
Consequently, the rows of G(D) are also codewords in C. Thus, to verify whether
H(D) is a parity check matrix of C, it is enough to check that G(D)H(D)^t = 0_{k×(n-k)}.
It will suffice that each codeword in C is orthogonal to the rows of H(D).
For the entire paper, we assume that a ring R satisfies DCC when we are talking
about the parity check matrix of a convolutional code over the ring R.
Example 4.1 (McEliece, [15]).
G(D) =
\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & D^2+1 \end{pmatrix}
is a generator matrix of a rate-3/4 convolutional code C over Z_2. A parity check
matrix of C is given by
H(D) = \begin{pmatrix} 1 & 1 & D^2+1 & 1 \end{pmatrix}.
We can verify that G(D)H(D)^t = 0_{3×1}. This is given by
G(D)H(D)^t =
\begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & D^2+1 \end{pmatrix}
\begin{pmatrix} 1 \\ 1 \\ D^2+1 \\ 1 \end{pmatrix}
=
\begin{pmatrix} (1)(1) + (0)(1) + (0)(D^2+1) + (1)(1) \\ (0)(1) + (1)(1) + (0)(D^2+1) + (1)(1) \\ (0)(1) + (0)(1) + (1)(D^2+1) + (D^2+1)(1) \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
Example 4.2 (Wittenmark, [23]). Consider the convolutional code C over Z_4, of
rate-1/2, generated by a PGM G(D) given by
G(D) = \begin{pmatrix} D^2+3D+3 & 2D^2+D+2 \end{pmatrix}.
A parity check matrix H(D) of C is given by
H(D) = \begin{pmatrix} 2D^2+3D+2 & D^2+3D+3 \end{pmatrix}.
Similarly, we verify that G(D)H(D)^t = 0:
G(D)H(D)^t = \begin{pmatrix} D^2+3D+3 & 2D^2+D+2 \end{pmatrix}
\begin{pmatrix} 2D^2+3D+2 \\ D^2+3D+3 \end{pmatrix}
= \big( (D^2+3D+3)(2D^2+3D+2) + (2D^2+D+2)(D^2+3D+3) \big)
= \big( (2D^4+D^3+D^2+3D+2) + (2D^4+3D^3+3D^2+D+2) \big)
= \big( 0 \big).
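The computation above can be rechecked mechanically. The following small Python sketch
multiplies G(D)H(D)^t over Z_4[D] using coefficient-list polynomials; it is only an
illustration and is not the MAGMA code referred to elsewhere in this thesis.

M = 4

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % M
    return out

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = (out[i] + c) % M
    for i, c in enumerate(b): out[i] = (out[i] + c) % M
    return out

G = [[[3, 3, 1], [2, 1, 2]]]          # G(D) = ( D^2+3D+3   2D^2+D+2 )
H = [[[2, 3, 2], [3, 3, 1]]]          # H(D) = ( 2D^2+3D+2  D^2+3D+3 )

# entry (i, j) of G(D)H(D)^t is the sum over s of G[i][s] * H[j][s]
for i in range(len(G)):
    for j in range(len(H)):
        acc = [0]
        for s in range(len(G[0])):
            acc = padd(acc, pmul(G[i][s], H[j][s]))
        print(i, j, acc)               # prints an all-zero coefficient list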
Now, the problem at hand is deriving a parity check matrix of a code. We consider
it in the following section.
4.2 Deriving a Parity Check Matrix
Since one of our objectives is to study the connections between encoders and parity
check matrices of a code, we devise ways of deriving a parity check matrix
from a given encoder of a code. Specifically, we derive a parity check matrix from a
systematic encoder, from a basic encoder, and from the subdeterminants of an encoder.
4.2.1 A parity check matrix from a systematic encoder
The lemma below is a natural extension from the block code case.
Lemma 4.2 (McEliece, [15]). Let G(D) be a k × n matrix over R(D). If G(D) =
(I_k, A) (i.e. G(D) is a systematic encoder in standard form) is a generator matrix of
a convolutional code C over R, then an (n-k) × n parity check matrix H(D) of C is
given by
H(D) = \begin{pmatrix} -A^t & I_{n-k} \end{pmatrix}.
The proof is motivated by [13]. Note that we can extend this proof to the case
where the columns of I_k are not necessarily the first k columns of G(D). However,
without loss of generality, we consider the following.
Proof:
Suppose
G(D) = (I_k, A) =
\begin{pmatrix}
1 & 0 & \cdots & 0 & a_{1(k+1)} & a_{1(k+2)} & \cdots & a_{1n} \\
0 & 1 & \cdots & 0 & a_{2(k+1)} & a_{2(k+2)} & \cdots & a_{2n} \\
\vdots & & & & & & & \vdots \\
0 & 0 & \cdots & 1 & a_{k(k+1)} & a_{k(k+2)} & \cdots & a_{kn}
\end{pmatrix}.
Consider the homogeneous system
G(D)\,(x_1, x_2, \ldots, x_k, x_{k+1}, \ldots, x_n)^t = 0_{k\times 1}   (4.1)
where (x_1, x_2, ..., x_k, x_{k+1}, ..., x_n) is an arbitrary n-tuple in R(D)^n and
0 is the k × 1 zero matrix.
Our aim is to find a basis for the solution set of (4.1) that will constitute
the parity check matrix H(D) of C. From (4.1), we have
x_1 + x_{k+1}a_{1(k+1)} + x_{k+2}a_{1(k+2)} + \cdots + x_n a_{1n} = 0
x_2 + x_{k+1}a_{2(k+1)} + x_{k+2}a_{2(k+2)} + \cdots + x_n a_{2n} = 0
\vdots
x_k + x_{k+1}a_{k(k+1)} + x_{k+2}a_{k(k+2)} + \cdots + x_n a_{kn} = 0.   (4.2)
By a change of variable, we let x_{k+1}, x_{k+2}, ..., x_n be s_1, s_2, ..., s_{n-k},
respectively. Then, solving for the x_i in (4.2) in terms of the others, we obtain
x_1 = -s_1 a_{1(k+1)} - s_2 a_{1(k+2)} - \cdots - s_{n-k} a_{1n}
x_2 = -s_1 a_{2(k+1)} - s_2 a_{2(k+2)} - \cdots - s_{n-k} a_{2n}
\vdots
x_k = -s_1 a_{k(k+1)} - s_2 a_{k(k+2)} - \cdots - s_{n-k} a_{kn}.
So we can express the solution as
(x_1, \ldots, x_k, x_{k+1}, x_{k+2}, \ldots, x_n)^t = s_1 h_1 + s_2 h_2 + \cdots + s_{n-k} h_{n-k},   (4.3)
where, corresponding to the assignments s_1 = 1 and all other s_i = 0, then s_2 = 1 and all
other s_i = 0, and so on up to s_{n-k} = 1 and all other s_i = 0, the solutions are
h_1 = (-a_{1(k+1)}, -a_{2(k+1)}, \ldots, -a_{k(k+1)}, 1, 0, \ldots, 0)^t,
h_2 = (-a_{1(k+2)}, -a_{2(k+2)}, \ldots, -a_{k(k+2)}, 0, 1, \ldots, 0)^t,
\vdots
h_{n-k} = (-a_{1n}, -a_{2n}, \ldots, -a_{kn}, 0, 0, \ldots, 1)^t.
Since s_1, s_2, ..., s_{n-k} can be assigned arbitrary values in R(D), the set
{h_1, h_2, ..., h_{n-k}} spans the solution set of (4.1). Moreover, if
\alpha_1 h_1 + \alpha_2 h_2 + \cdots + \alpha_{n-k} h_{n-k} = 0_{n\times 1}   (4.4)
for some α_i ∈ R(D), then, looking at the last n-k rows of the solutions
h_1, h_2, ..., h_{n-k}, it is easily seen from (4.4) that α_{n-k} = ... = α_2 = α_1 = 0.
Hence h_1, h_2, ..., h_{n-k} are linearly independent, and the set
{h_1, h_2, ..., h_{n-k}} is a basis of the solution set of (4.1).
We can now construct the matrix H(D) from the basis {h_1, h_2, ..., h_{n-k}},
given by
H(D) = \begin{pmatrix} h_1^t \\ h_2^t \\ \vdots \\ h_{n-k}^t \end{pmatrix} =
\begin{pmatrix}
-a_{1(k+1)} & -a_{2(k+1)} & \cdots & -a_{k(k+1)} & 1 & 0 & \cdots & 0 \\
-a_{1(k+2)} & -a_{2(k+2)} & \cdots & -a_{k(k+2)} & 0 & 1 & \cdots & 0 \\
\vdots & & & & & & & \vdots \\
-a_{1n} & -a_{2n} & \cdots & -a_{kn} & 0 & 0 & \cdots & 1
\end{pmatrix}
= (-A^t, I_{n-k}).
By direct computation, we can verify that the rows of G(D) and H(D)
are orthogonal. That is,
G(D)H(D)^t =
\begin{pmatrix}
-a_{1(k+1)}+a_{1(k+1)} & -a_{1(k+2)}+a_{1(k+2)} & \cdots & -a_{1n}+a_{1n} \\
-a_{2(k+1)}+a_{2(k+1)} & -a_{2(k+2)}+a_{2(k+2)} & \cdots & -a_{2n}+a_{2n} \\
\vdots & & & \vdots \\
-a_{k(k+1)}+a_{k(k+1)} & -a_{k(k+2)}+a_{k(k+2)} & \cdots & -a_{kn}+a_{kn}
\end{pmatrix}
= 0_{k\times(n-k)}.
Therefore, H(D) = (-A^t, I_{n-k}) is a parity check matrix for C.
Example 4.3 (McEliece, [15]). Let
G(D) = \begin{pmatrix} 1 & 0 & \frac{1}{1+D} & \frac{D}{1+D} \\ 0 & 1 & \frac{D}{1+D} & \frac{1}{1+D} \end{pmatrix}
be a generator matrix over Z_2(D). By Lemma 4.2, a parity check matrix H(D) is given by
H(D) = \begin{pmatrix} \frac{1}{1+D} & \frac{D}{1+D} & 1 & 0 \\ \frac{D}{1+D} & \frac{1}{1+D} & 0 & 1 \end{pmatrix}.
Let G(D) = (I_2, A), where
A = \begin{pmatrix} \frac{1}{1+D} & \frac{D}{1+D} \\ \frac{D}{1+D} & \frac{1}{1+D} \end{pmatrix}.
Notice that A = A^t. In addition, A^{-1} = -A^t. Note that G(D) satisfies the conditions given in Theorem 5.1. In
this case, G(D) is an encoder of a rate-2/4 binary self-dual convolutional code (see
Chapter 5).
Example 4.4 (Wittenmark, [23]). Consider the generator matrix
G(D) = \begin{pmatrix} 1 & 0 & 1+3D+D^2 \\ 0 & 1 & 1+D+D^2 \end{pmatrix}
over Z_4(D). Note that Z_4 satisfies DCC. By Lemma 4.2, a parity check matrix H(D)
is given by
H(D) = \begin{pmatrix} 3+D+3D^2 & 3+3D+3D^2 & 1 \end{pmatrix}.
We call the construction of a parity check matrix specified by Lemma 4.2
CI. If a parity check matrix has entries which are all polynomial, we call it a
polynomial parity check matrix or PPCM.
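The following short Python sketch applies construction CI to the polynomial systematic
encoder of Example 4.4: given G(D) = (I_k, A) over Z_M[D], it forms H(D) = (-A^t, I_{n-k}).
The coefficient-list representation is an assumption made for illustration; this is not the
MAGMA code used in the thesis.

M = 4
k, n = 2, 3
A = [[[1, 3, 1]],             # row 1 of A:  1 + 3D + D^2
     [[1, 1, 1]]]             # row 2 of A:  1 + D + D^2

def pneg(a):
    return [(-c) % M for c in a]

# H(D) has n-k rows; row j is the negated j-th column of A followed by the j-th unit row
H = []
for j in range(n - k):
    row = [pneg(A[i][j]) for i in range(k)]
    row += [[1] if t == j else [0] for t in range(n - k)]
    H.append(row)

print(H)   # [[[3, 1, 3], [3, 3, 3], [1]]]  ->  ( 3+D+3D^2   3+3D+3D^2   1 )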
4.2.2 A parity check matrix from a square polynomial matrix with polynomial inverse (PMPI)
The first part of this section focuses on parity check matrices over F[D], while the
latter part is on parity check matrices over Z_{p^r}[D].
Consider an arbitrary k × n PGM G(D) for a convolutional code C over the field F.
The invariant factor decomposition of G(D), also known as the Smith normal form
of G(D), can be used to find a basic (hence non-catastrophic) and a minimal-basic
generator matrix equivalent to G(D). The said decomposition can also be used to
find a polynomial right inverse for a basic generator matrix for C, and a parity check
matrix for C (see [15]).
Theorem 4.1 (Forney, [6]). Let R be a principal ideal domain (PID) and let G be a
k × n matrix over R. Then G has an invariant factor decomposition
G = A\Gamma B,
where A and B are k × k and n × n invertible matrices over R, respectively, that is,
both det(A) and det(B) are units in R; and Γ is a k × n matrix given by
\Gamma = \begin{pmatrix} P_k & 0_{k\times(n-k)} \end{pmatrix}
where 0_{k×(n-k)} is the zero matrix and P_k is a diagonal matrix whose diagonal entries
γ_i, i = 1, ..., k, are called the invariant factors of G with respect to R. The
invariant factors are unique and are computable as follows: let Δ_i be the greatest
common divisor of the i × i subdeterminants (minors) of G, with Δ_0 = 1 by convention;
then γ_i = Δ_i/Δ_{i-1}. We have that γ_i divides γ_{i+1} if γ_{i+1} is not zero, for i = 1, ..., k-1.
The matrices A and B, which are not unique in general, can be obtained by a
computational algorithm. Finally, if there is any decomposition G = AΓB such that
A and B are invertible matrices over R and Γ is a diagonal matrix with γ_i | γ_{i+1} or
γ_{i+1} = 0, then the γ_i are the invariant factors of G with respect to R.
The key tool in finding such a decomposition of a matrix is the extended Smith
algorithm that gives the matrices A, Γ and B. The reader is referred to [15] for some
examples and for a detailed discussion of the said algorithm.
Theorem 4.1 states that every matrix over a PID R has a unique set of invariant
factors, except for the order, with respect to R. That is, Γ is unique with respect to
R. Suppose G(D) is a PGM of C over F and suppose G(D) = AΓB is the invariant
factor decomposition of G(D) with respect to F[D]. Via block matrix multiplication,
we can write the product AΓ as
A\Gamma = A \begin{pmatrix} P_k & 0_{k\times(n-k)} \end{pmatrix}
= \begin{pmatrix} AP_k & A0_{k\times(n-k)} \end{pmatrix}
= \begin{pmatrix} T(D) & 0_{k\times(n-k)} \end{pmatrix},
where T(D) = AP_k ∈ F[D]^{k×k}. Write
B = \begin{pmatrix} G'(D) \\ L(D) \end{pmatrix},
where G'(D) ∈ F[D]^{k×n} and L(D) ∈ F[D]^{(n-k)×n}. Through block matrix multiplication,
we now have
G(D) = A\Gamma B = \begin{pmatrix} T(D) & 0_{k\times(n-k)} \end{pmatrix}
\begin{pmatrix} G'(D) \\ L(D) \end{pmatrix}
= T(D)G'(D) + 0_{k\times n} = T(D)G'(D).
Therefore, the first k rows of the matrix B, given by G'(D), can be taken to constitute
an encoder equivalent to G(D), since T(D) is invertible over F(D). We compute the
inverse of B and write it as
B^{-1} = \begin{pmatrix} P(D) & K(D) \end{pmatrix},
where P(D) ∈ F[D]^{n×k} and K(D) ∈ F[D]^{n×(n-k)}. Again, via block matrix multiplication,
we have
BB^{-1} = \begin{pmatrix} G'(D) \\ L(D) \end{pmatrix}\begin{pmatrix} P(D) & K(D) \end{pmatrix}
= \begin{pmatrix} G'(D)P(D) & G'(D)K(D) \\ L(D)P(D) & L(D)K(D) \end{pmatrix}
= \begin{pmatrix} I_k & 0_{k\times(n-k)} \\ 0_{(n-k)\times k} & I_{n-k} \end{pmatrix}.
Thus, G'(D) is basic since it has a polynomial right inverse given by P(D). Furthermore,
the last (n-k) columns of B^{-1}, given by K(D), yield an (n-k) × n parity
check matrix of C, given by K(D)^t. Note that the matrices A and B are unimodular
matrices over F[D].
Example 4.5 (McEliece, [15]). Consider the matrix G(D) over Z_2(D) given by
G(D) = \begin{pmatrix} 1 & D^2+D+1 & D^2+1 & D+1 \\ D & D^2+D+1 & D^2 & 1 \end{pmatrix}.
The invariant factor decomposition of G(D), with respect to Z_2[D], is given by
G(D) = AΓB, where
A = \begin{pmatrix} 1 & 0 \\ D & 1 \end{pmatrix},\qquad
\Gamma = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & D^2+D+1 & 0 & 0 \end{pmatrix},
and
B = \begin{pmatrix}
1 & D^2+D+1 & D^2+1 & D+1 \\
0 & D+1 & D & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0
\end{pmatrix}.
We can take the first 2 rows of B to be a PGM equivalent to G(D), say G'(D),
given by
G'(D) = \begin{pmatrix} 1 & D^2+D+1 & D^2+1 & D+1 \\ 0 & D+1 & D & 1 \end{pmatrix},
which appears in [15] as G_3, where it is basic and non-catastrophic.
To find T(D) such that G(D) = T(D)G'(D), get the product
A\Gamma = \begin{pmatrix} 1 & 0 & 0 & 0 \\ D & D^2+D+1 & 0 & 0 \end{pmatrix}
and take
T(D) = \begin{pmatrix} 1 & 0 \\ D & D^2+D+1 \end{pmatrix}.
B^{-1} is given by
B^{-1} = \begin{pmatrix}
1 & D+1 & D+1 & D \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
0 & 1 & D & D+1
\end{pmatrix}.
Taking the last 2 columns of B^{-1} as H(D)^t, we have
H(D) = \begin{pmatrix} D & 1 & 0 & D+1 \\ D+1 & 0 & 1 & D \end{pmatrix}
as a parity check matrix for C. One can immediately verify that
G(D)H(D)^t = 0_{k×(n-k)} = G'(D)H(D)^t. Moreover, the first 2 columns of B^{-1}
constitute a polynomial right inverse of G'(D).
Example 4.6. Consider the minimal-basic generator matrix G(D) given by
G(D) = \begin{pmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
D & 0 & 0 & 1 & 0 & 1 & 1 & D+1
\end{pmatrix}
of a convolutional code C over Z_2. Its invariant factor decomposition is given by
G(D) = I_4 \cdot \begin{pmatrix} I_4 & 0_{4\times 4} \end{pmatrix} \cdot
\begin{pmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
D & 0 & 0 & 1 & 0 & 1 & 1 & D+1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.   (4.5)
Notice that the first four rows (from the top) of the rightmost matrix in (4.5) are
equivalent, in fact equal, to G(D). Similarly to the previous examples, a parity check
matrix H(D) is taken to be the transpose of the last four columns of the inverse of
the rightmost matrix in (4.5), which is given by
H(D) = \begin{pmatrix}
1 & 1 & 1 & D & D+1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & D+1 & D & 0 & 0 & 1
\end{pmatrix}.
The discussion above is a good motivation to consider unimodular matrices in
getting a parity check matrix from a given encoder. We saw in the field case that
every polynomial encoder G(D) has an invariant factor decomposition AΓB with respect
to F[D]. So, we can always obtain a parity check matrix from B^{-1}. However, in the ring
case, R[D] is not a PID in general. That is, the invariant
factor decomposition of a PGM G(D) with respect to R[D] is not possible. One
main reason is the existence of zero divisors in R[D]. That is, R[D] is not
an integral domain. Hence, R[D] cannot be a Euclidean domain. This makes the
algorithm for invariant factor decomposition impossible to work over R[D] (see [15]).
Nevertheless, Fagnani and Zampieri [5] noted that there is the so-called adapted
form (which we will not consider here) of a PGM G(D) over Z_{p^r}[D, D^{-1}] which can
be considered as the counterpart of the invariant factor decomposition with respect
to F[D]. Moreover, they used the Smith normal form of a generator matrix over
Z_{p^r}(D) (given the fact that Z_{p^r}(D) is a principal ideal ring) to analyze the connections
between the generator matrix and the generated code.
In this thesis, we introduce an alternative method that uses unimodular matrices
or, in general, a square polynomial matrix with polynomial inverse (PMPI), for
deriving a parity check matrix from a given basic generator matrix. Hence, we offer
Lemma 4.3, Corollary 4.1, and Theorem 4.2.
Lemma 4.3. If G(D) is a basic generator matrix over Z_{p^r}[D], then there exists a
set of minors of G(D), say M, such that the elements of M are pairwise coprime in
Z_{p^r}[D].
Proof:
Consider the minors of G(D), say Δ_i, i = 1, 2, ..., N = \binom{n}{k}. Since the
map μ given in (2.1) is a ring homomorphism, it follows that the minors
of G(D) mod p are given by μ(Δ_i), i = 1, 2, ..., N. Since G(D) is basic,
G(D) mod p is also basic [23]. So, the gcd of the μ(Δ_i) ∈ Z_p[D],
i ∈ {1, 2, ..., N}, is 1 [15]. Now, consider the set of all relatively prime
μ(Δ_i), and let its elements be indexed by 1 ≤ i_1 < i_2 < ... < i_s ≤ N.
Recall that two polynomials in Z_p[D] are coprime if and only if they
have no common divisor of degree greater than or equal to 1. That is,
two relatively prime polynomials in Z_p[D] are coprime in Z_p[D]. Thus,
{μ(Δ_{i_1}), μ(Δ_{i_2}), ..., μ(Δ_{i_s})} is a set of pairwise coprime polynomials
in Z_p[D]. Since two non-invertible polynomials are coprime in Z_{p^r}[D] if
and only if they are coprime in Z_p[D], hence
M = \{ \Delta_{i_1}, \Delta_{i_2}, \ldots, \Delta_{i_s} \}
is a set of pairwise coprime polynomials in Z_{p^r}[D].
The following corollary is immediate from Lemma 4.3.
Corollary 4.1. Any non-empty subset S ⊆ M (with cardinality of S ≥ 2) spans 1.
Now, we consider the main theorem of this section. This theorem is a generalization
of the result in the field case by McEliece [15].
Theorem 4.2. A k × n PGM G(D) over Z_{p^r}[D] is basic if and only if G(D) is a
submatrix of an n × n PMPI matrix B over Z_{p^r}[D].
Proof:
Consider an n × n PMPI matrix B'. Since det(B') is a unit in Z_{p^r}[D], it
follows that the rows of B' are linearly independent. Hence, we can take
any k rows of B' to be the PGM G(D). For simplicity, we rearrange the
rows of B' such that the first k rows form G(D). Suppose B = AB',
where A is the permutation (elementary) matrix that corresponds to such a
rearrangement of rows. Let
B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}
\quad\text{and}\quad
B^{-1} = \begin{pmatrix} P(D) & K(D) \end{pmatrix}
where G(D) ∈ Z_{p^r}[D]^{k×n}, L(D) ∈ Z_{p^r}[D]^{(n-k)×n}, P(D) ∈ Z_{p^r}[D]^{n×k} and
K(D) ∈ Z_{p^r}[D]^{n×(n-k)}. Via block matrix multiplication,
BB^{-1} = \begin{pmatrix} G(D)P(D) & G(D)K(D) \\ L(D)P(D) & L(D)K(D) \end{pmatrix}
= \begin{pmatrix} I_k & 0_{k\times(n-k)} \\ 0_{(n-k)\times k} & I_{n-k} \end{pmatrix}.
It is clear that P(D) is a polynomial right inverse of G(D); hence G(D)
is basic.
Now, suppose G(D) is basic. We want to show that it is possible to
complete G(D) to an n × n PMPI matrix B. Specifically, by letting
B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix},
we want to show that it is possible to find an (n-k) × n matrix
L(D) such that det(B) is a unit in Z_{p^r}[D].
Case 1. G(D) is a basic (n-1) × n PGM (i.e. n - k = 1).
Let
B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix} =
\begin{pmatrix}
g_{11} & g_{12} & \cdots & g_{1n} \\
g_{21} & g_{22} & \cdots & g_{2n} \\
\vdots & & & \vdots \\
g_{k1} & g_{k2} & \cdots & g_{kn} \\
l_{k+1,1} & l_{k+1,2} & \cdots & l_{k+1,n}
\end{pmatrix}.   (4.6)
Expand the determinant of B with respect to its n-th row:
\det(B) = \sum_{j=1}^{n} l_{k+1,j}(-1)^{k+1+j}\det(G_k^{(j)}),   (4.7)
where det(B) is set to be a unit in Z_{p^r}[D]; G_k^{(j)} is the (n-1) × (n-1) submatrix
of B obtained by removing the n-th row and j-th column of B, and
det(G_k^{(j)}) is a minor of G(D). Let
M' = \{ \mu(\det(G_k^{(1)})), \mu(\det(G_k^{(2)})), \ldots, \mu(\det(G_k^{(n)})) \},
where μ is given in (2.1). Consider the subset of M' that contains all relatively
prime elements of M'. Since G(D) is basic, following the proof of Lemma
4.3, there is a non-empty set M of polynomials in Z_{p^r}[D], corresponding
to that subset, such that the elements of M are pairwise coprime in Z_{p^r}[D]. Write
M = \{ \det(G_k^{(s_1)}), \det(G_k^{(s_2)}), \ldots, \det(G_k^{(s_{m'})}) \},
for some 1 ≤ s_1 < s_2 < ... < s_{m'} ≤ n, where m' is the cardinality of M.
By Corollary 4.1, M spans 1. That is, there exist c_{s_1}(D), c_{s_2}(D), ..., c_{s_{m'}}(D)
∈ Z_{p^r}[D] such that
\sum_{h=1}^{m'} c_{s_h}(D)\det(G_k^{(s_h)}) = 1.   (4.8)
We arrange the minors of G(D) and write them in this manner:
\det(G_k^{(s_1)}), \det(G_k^{(s_2)}), \ldots, \det(G_k^{(s_{m'})}), \det(G_k^{(s_{m'+1})}), \ldots, \det(G_k^{(s_n)}),
where 1 ≤ s_{m'+1} < s_{m'+2} < ... < s_n ≤ n. We can also think of the
arrangement s_1, s_2, ..., s_n as corresponding to a permutation of 1, 2, ..., n.
Thus, we can rewrite (4.7) as
\det(B) = \sum_{h=1}^{m'} l_{k+1,s_h}(-1)^{k+1+s_h}\det(G_k^{(s_h)})
+ \sum_{h=m'+1}^{n} l_{k+1,s_h}(-1)^{k+1+s_h}\det(G_k^{(s_h)}).   (4.9)
From (4.9),
\sum_{h=1}^{m'} l_{k+1,s_h}(-1)^{k+1+s_h}\det(G_k^{(s_h)})
= \det(B) - \sum_{h=m'+1}^{n} l_{k+1,s_h}(-1)^{k+1+s_h}\det(G_k^{(s_h)}).   (4.10)
We can give the corresponding polynomials for l_{k+1,s_h}, h = m'+1, ..., n,
of our own choice. Then simplify the right-hand side of (4.10) to a single
polynomial in Z_{p^r}[D], say r(D). So we have
\sum_{h=1}^{m'} l_{k+1,s_h}(-1)^{k+1+s_h}\det(G_k^{(s_h)}) = r(D).   (4.11)
Multiply both sides of (4.8) by r(D):
\sum_{h=1}^{m'} r(D)c_{s_h}(D)\det(G_k^{(s_h)}) = r(D).   (4.12)
Combine (4.11) and (4.12):
\sum_{h=1}^{m'} l_{k+1,s_h}(-1)^{k+1+s_h}\det(G_k^{(s_h)})
= \sum_{h=1}^{m'} r(D)c_{s_h}(D)\det(G_k^{(s_h)}).   (4.13)
Now, we can supply the corresponding polynomials for the remaining entries
l_{k+1,s_h}, for h = 1, ..., m', satisfying the following:
l_{k+1,s_h}(-1)^{k+1+s_h} = r(D)c_{s_h}(D)
or
l_{k+1,s_h} = \big((-1)^{k+1+s_h}\big)^{-1} r(D)c_{s_h}(D).
Case 2. G(D) is a basic k × n PGM, k ≠ n-1.
We begin this construction through Laplace's expansion of det(B) (see
Theorem 2.3) with respect to the first k rows of B, given by
\det(B) = \sum_{i=1}^{N} (-1)^{K_i}\det(B_k^{(i)})\det(B_{n-k}^{(i)}),   (4.14)
where det(B_k^{(i)}) is a minor of G(D) corresponding to column
indices 1 ≤ j_1 < j_2 < ... < j_k ≤ n labeled as i. The value of N is given
by N = \binom{n}{k} and K_i = 1 + 2 + \cdots + k + j_1 + j_2 + \cdots + j_k. To be guided
properly along the construction, we let G_k^{(i)} = B_k^{(i)} and L_{n-k}^{(i)} = B_{n-k}^{(i)} so
that we can easily identify G_k^{(i)} and L_{n-k}^{(i)} as the k × k and (n-k) × (n-k)
submatrices of G(D) and L(D), respectively. Rewrite (4.14) as
\det(B) = \sum_{i=1}^{N} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)}).   (4.15)
We set the value of det(B) to be a unit in Z_{p^r}[D]. We can immediately
notice that this construction focuses on supplying the appropriate polynomial
entries for L(D) such that (4.14), or (4.15), is satisfied. For some notational
requirements, let
B = \begin{pmatrix}
g_{11} & g_{12} & \cdots & g_{1n} \\
\vdots & & & \vdots \\
g_{k1} & g_{k2} & \cdots & g_{kn} \\
l_{k+1,1} & l_{k+1,2} & \cdots & l_{k+1,n} \\
\vdots & & & \vdots \\
l_{n1} & l_{n2} & \cdots & l_{nn}
\end{pmatrix}
\quad\text{and}\quad
L_{n-k}^{(i)} = \begin{pmatrix}
l_{11}^{(i)} & \cdots & l_{1r}^{(i)} \\
\vdots & & \vdots \\
l_{r1}^{(i)} & \cdots & l_{rr}^{(i)}
\end{pmatrix}, \quad r = n-k.
Rearrange the terms in (4.15) such that the first N_n terms contain the factors
det(L_{n-k}^{(i)}) wherein the submatrix L_{n-k}^{(i)} contains the n-th column of L(D).
Consequently, consider an arrangement of the indices 1, 2, ..., N given by
i_1, i_2, ..., i_{N_n}, i_{N_n+1}, ..., i_N,
where i_1 < i_2 < ... < i_{N_n} and i_{N_n+1} < ... < i_N. Thus,
\det(B) = \sum_{i=i_1}^{i_{N_n}} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)})
+ \sum_{i=i_{N_n+1}}^{i_N} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)})
or
\sum_{i=i_1}^{i_{N_n}} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)})
= \det(B) - \sum_{i=i_{N_n+1}}^{i_N} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)}).   (4.16)
Using Definition 2.5,
\det(L_{n-k}^{(i)}) = \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i)} l_{2\sigma(2)}^{(i)} \cdots l_{r\sigma(r)}^{(i)}.   (4.17)
Combining (4.17) with the left-hand side of (4.16), we have
\sum_{i=i_1}^{i_{N_n}} (-1)^{K_i} \sum_{\sigma\in S_r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i)} \cdots l_{r\sigma(r)}^{(i)} \det(G_k^{(i)})
= \det(B) - \sum_{i=i_{N_n+1}}^{i_N} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)}).   (4.18)
Let
p(D) = \det(B) - \sum_{i=i_{N_n+1}}^{i_N} (-1)^{K_i}\det(L_{n-k}^{(i)})\det(G_k^{(i)}).
We focus on the left-hand side of (4.18). Expanding it term by term over
i = i_1, i_2, ..., i_{N_n} gives (4.19). Since each submatrix L_{n-k}^{(i)} on the left-hand
side of (4.18) contains the n-th column (and the n-th row) of L(D), we can split each
det(L_{n-k}^{(i)}), i = i_1, i_2, ..., i_{N_n}, as
\det(L_{n-k}^{(i)}) = \sum_{\sigma:\ \sigma(r)=r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i)} \cdots l_{r-1,\sigma(r-1)}^{(i)}\, l_{nn}
+ \sum_{\sigma:\ \sigma(r)\neq r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i)} \cdots l_{r\sigma(r)}^{(i)},   (4.20)
where the permutations in the first sum are those for which l_{r\sigma(r)}^{(i)} = l_{nn}.
Incorporating (4.20) into each term of (4.19) and distributing (-1)^{K_i} and det(G_k^{(i)})
gives (4.21). Substituting (4.21) into the left-hand side of (4.18) and placing the terms
without the factor l_{nn} on the right-hand side of (4.18) results in
l_{nn}\big( a_{i_1}(D) + a_{i_2}(D) + \cdots + a_{i_{N_n}}(D) \big)
= p(D) - \big( b_{i_1}(D) + b_{i_2}(D) + \cdots + b_{i_{N_n}}(D) \big)   (4.22)
where, for h = 1, 2, ..., N_n,
a_{i_h}(D) = (-1)^{K_{i_h}} \sum_{\sigma:\ \sigma(r)=r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i_h)} \cdots l_{r-1,\sigma(r-1)}^{(i_h)} \det(G_k^{(i_h)}),
b_{i_h}(D) = (-1)^{K_{i_h}} \sum_{\sigma:\ \sigma(r)\neq r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i_h)} \cdots l_{r\sigma(r)}^{(i_h)} \det(G_k^{(i_h)}).
Letting the right-hand side of (4.22) be q(D) and a(D) = a_{i_1}(D) + a_{i_2}(D) + \cdots + a_{i_{N_n}}(D),
we have
l_{nn}\, a(D) = q(D).   (4.23)
It is clear that l_{nn} is a polynomial in Z_{p^r}[D] if a(D) is a unit in Z_{p^r}[D].
Note that we can express a(D) as
a(D) = \sum_{h=1}^{N_n} f_{i_h}(D)\det(G_k^{(i_h)})   (4.24)
where
f_{i_h}(D) = (-1)^{K_{i_h}} \sum_{\sigma:\ \sigma(r)=r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i_h)} \cdots l_{r-1,\sigma(r-1)}^{(i_h)}, \quad h = 1, 2, \ldots, N_n.
Let
M' = \{ \mu(\det(G_k^{(i_1)})), \mu(\det(G_k^{(i_2)})), \ldots, \mu(\det(G_k^{(i_{N_n})})) \}.
Consider the subset of M' that contains all relatively prime elements of M'.
Again, following the proof of Lemma 4.3, there is a non-empty set M of
polynomials in Z_{p^r}[D], corresponding to that subset, such that the elements of M
are pairwise coprime in Z_{p^r}[D]. Write
M = \{ \det(G_k^{(s_1)}), \det(G_k^{(s_2)}), \ldots, \det(G_k^{(s_{m'})}) \},
with i_1 ≤ s_1 < s_2 < ... < s_{m'} ≤ i_{N_n}, where m' is the cardinality of M.
By Corollary 4.1, M spans 1. That is, there exist c_{s_1}(D), c_{s_2}(D), ..., c_{s_{m'}}(D)
∈ Z_{p^r}[D] such that
\sum_{h=1}^{m'} c_{s_h}(D)\det(G_k^{(s_h)}) = 1.   (4.25)
Multiply both sides of (4.25) by a(D):
\sum_{h=1}^{m'} a(D)c_{s_h}(D)\det(G_k^{(s_h)}) = a(D).   (4.26)
From (4.24) and (4.26):
\sum_{h=1}^{m'} a(D)c_{s_h}(D)\det(G_k^{(s_h)}) = \sum_{h=1}^{N_n} f_{i_h}(D)\det(G_k^{(i_h)}).   (4.27)
Let a(D) be a unit in Z_{p^r}[D]. In (4.27), if m' = N_n (i.e. s_h = i_h for
h = 1, 2, ..., N_n), then the value of f_{i_h}(D) is subject to f_{i_h}(D) = a(D)c_{s_h}(D).
Thus, supplying the corresponding polynomials for l_{1\sigma(1)}^{(i_h)}, l_{2\sigma(2)}^{(i_h)}, \ldots, l_{r-1,\sigma(r-1)}^{(i_h)}
is restricted to the following:
\sum_{\sigma:\ \sigma(r)=r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(i_h)} \cdots l_{r-1,\sigma(r-1)}^{(i_h)}
= \big((-1)^{K_{i_h}}\big)^{-1} a(D)c_{s_h}(D),   (4.28)
for h = 1, 2, ..., N_n. If m' < N_n, we rearrange the terms on the right-hand
side of (4.27) such that its first m' terms contain the factors det(G_k^{(i_h)}) = det(G_k^{(s_h)}).
By a change of indices we can rewrite (4.27) as
\sum_{h=1}^{m'} a(D)c_{s_h}(D)\det(G_k^{(s_h)})
= \sum_{h=1}^{m'} f_{s_h}(D)\det(G_k^{(s_h)}) + \sum_{h=m'+1}^{N_n} f_{s_h}(D)\det(G_k^{(s_h)}).   (4.29)
Similarly to (4.28), supplying the corresponding polynomials for
l_{1\sigma(1)}^{(s_h)}, \ldots, l_{r-1,\sigma(r-1)}^{(s_h)} is restricted to the following:
\sum_{\sigma:\ \sigma(r)=r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(s_h)} \cdots l_{r-1,\sigma(r-1)}^{(s_h)}
= \big((-1)^{K_{s_h}}\big)^{-1} a(D)c_{s_h}(D), \quad h = 1, 2, \ldots, m',
and
\sum_{\sigma:\ \sigma(r)=r} \mathrm{sgn}(\sigma)\, l_{1\sigma(1)}^{(s_h)} \cdots l_{r-1,\sigma(r-1)}^{(s_h)} = 0, \quad h = m'+1, \ldots, N_n.
We have partially constructed B after giving the corresponding polynomials for
l_{1\sigma(1)}^{(i_h)}, l_{2\sigma(2)}^{(i_h)}, \ldots, l_{r-1,\sigma(r-1)}^{(i_h)} such that (4.27), or
(4.29), is satisfied. We complete the construction of B by supplying
the polynomials for the other entries of L(D) and then solving for l_{nn}. By
doing so, we can simplify q(D) in (4.23) to a single polynomial, and it
immediately follows that l_{nn} = a(D)^{-1} q(D) ∈ Z_{p^r}[D].
Thus, considering Cases 1 and 2, we have completed G(D) into an n × n PMPI
matrix B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}.
Remarkably, we can complete a k × n basic encoder G(D) over Z_{p^r}[D] into an n × n
PMPI matrix B over Z_{p^r}[D], given by B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}. Moreover, a corresponding
PPCM H(D) can be obtained from the inverse of the matrix B, given by
B^{-1} = \begin{pmatrix} P(D) & H(D)^t \end{pmatrix}. We call this construction CII. We illustrate it in Example
4.7 and in Example 4.8.
Example 4.7. Consider the 2 × 3 basic generator matrix G(D) given by
G(D) = \begin{pmatrix} D+1 & 3D & 1 \\ D^2 & D^2+D+1 & 1 \end{pmatrix}.
Let
B = \begin{pmatrix} D+1 & 3D & 1 \\ D^2 & D^2+D+1 & 1 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}.
Expanding the determinant of B along its third row, we have
\det(B) = (-1)^{3+1} l_{31}\det(G_2^{(1)}) + (-1)^{3+2} l_{32}\det(G_2^{(2)}) + (-1)^{3+3} l_{33}\det(G_2^{(3)})
= l_{31}\det(G_2^{(1)}) - l_{32}\det(G_2^{(2)}) + l_{33}\det(G_2^{(3)}),   (4.30)
where we let G_2^{(j)} be the 2 × 2 submatrix of G(D) obtained by deleting its j-th column.
Thus,
\det(G_2^{(1)}) = 3D^2 + 2D + 3,\quad
\det(G_2^{(2)}) = 3D^2 + D + 1,\quad
\det(G_2^{(3)}) = 2D^3 + 2D^2 + 2D + 1.
Take det(B) = 3 and let
\{ \mu(\det(G_2^{(1)})), \mu(\det(G_2^{(2)})), \mu(\det(G_2^{(3)})) \} = \{ D^2+1,\ D^2+D+1,\ 1 \}.
Since these elements are relatively prime in Z_2[D], they are pairwise coprime
in Z_2[D]. Hence,
M = \{ 3D^2+2D+3,\ 3D^2+D+1,\ 2D^3+2D^2+2D+1 \}
is a set of pairwise coprime polynomials in Z_4[D]. In particular, we have
(D+1)(3D^2+2D+3) + 3D(3D^2+D+1) + 2(2D^3+2D^2+2D+1) = 1.   (4.31)
Multiply both sides of (4.31) by det(B) = 3:
(3D+3)(3D^2+2D+3) + D(3D^2+D+1) + 2(2D^3+2D^2+2D+1) = 3.   (4.32)
From (4.30), (4.32), and since det(B) = 3, it follows that
(3D+3)(3D^2+2D+3) + D(3D^2+D+1) + 2(2D^3+2D^2+2D+1)
= l_{31}\det(G_2^{(1)}) - l_{32}\det(G_2^{(2)}) + l_{33}\det(G_2^{(3)}).   (4.33)
Thus, we can take
l_{31} = 3D+3,\quad l_{32} = -D = 3D,\quad l_{33} = 2.
Therefore,
B = \begin{pmatrix} D+1 & 3D & 1 \\ D^2 & D^2+D+1 & 1 \\ 3D+3 & 3D & 2 \end{pmatrix}.
Moreover, the inverse of B is given by
B^{-1} = \begin{pmatrix}
2D^2+D+2 & 3D & D^2+2D+1 \\
2D^2+D+1 & D+1 & 3D^2+D+1 \\
2D^2+2D+3 & 2D^2+2D & 2D^3+2D^2+2D+3
\end{pmatrix}.
A parity check matrix H(D), obtained from the last column of B^{-1}, is given by
H(D) = \begin{pmatrix} D^2+2D+1 & 3D^2+D+1 & 2D^3+2D^2+2D+3 \end{pmatrix}.
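The unit-determinant requirement on the completed matrix B can be rechecked directly from
the cofactor expansion (4.30). The Python sketch below does so for this example over Z_4[D];
it is an illustration only, with polynomials written as coefficient lists.

M = 4

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % M
    return out

def padd(*ps):
    out = [0] * max(len(p) for p in ps)
    for p in ps:
        for i, c in enumerate(p): out[i] = (out[i] + c) % M
    return out

d1, d2, d3 = [3, 2, 3], [1, 1, 3], [1, 2, 2, 2]     # the 2x2 minors of G(D)
l31, l32, l33 = [3, 3], [0, 3], [2]                  # the chosen third row of B

detB = padd(pmul(l31, d1), pmul([(-c) % M for c in l32], d2), pmul(l33, d3))
print(detB)   # [3, 0, 0, 0]: det(B) = 3, a unit in Z_4, so B is a PMPI matrix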
Example 4.8. Consider the 2 × 4 basic generator matrix G(D) given by
G(D) = \begin{pmatrix} 3 & D^2+D+1 & D^2+2D+1 & D+1 \\ 0 & D+1 & 1 & D \end{pmatrix}.
Let
B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix} =
\begin{pmatrix}
3 & D^2+D+1 & D^2+2D+1 & D+1 \\
0 & D+1 & 1 & D \\
l_{31} & l_{32} & l_{33} & l_{34} \\
l_{41} & l_{42} & l_{43} & l_{44}
\end{pmatrix}.
We expand det(B) via Laplace's expansion with respect to the first two rows of
B. That is,
\det(B) = \sum_{i=1}^{N} (-1)^{K_i}\det(G_2^{(i)})\det(L_2^{(i)}).   (4.34)
Note that we are going to take two columns out of four and there are N = 6 ways to
do that. For convenience, we label each choice of column indices as follows:
1: 1,2;   2: 1,3;   3: 1,4;   4: 2,3;   5: 2,4;   6: 3,4.
Thus, we have
G_2^{(1)} = \begin{pmatrix} 3 & D^2+D+1 \\ 0 & D+1 \end{pmatrix},\quad
L_2^{(1)} = \begin{pmatrix} l_{33} & l_{34} \\ l_{43} & l_{44} \end{pmatrix},
G_2^{(2)} = \begin{pmatrix} 3 & D^2+2D+1 \\ 0 & 1 \end{pmatrix},\quad
L_2^{(2)} = \begin{pmatrix} l_{32} & l_{34} \\ l_{42} & l_{44} \end{pmatrix},
G_2^{(3)} = \begin{pmatrix} 3 & D+1 \\ 0 & D \end{pmatrix},\quad
L_2^{(3)} = \begin{pmatrix} l_{32} & l_{33} \\ l_{42} & l_{43} \end{pmatrix},
G_2^{(4)} = \begin{pmatrix} D^2+D+1 & D^2+2D+1 \\ D+1 & 1 \end{pmatrix},\quad
L_2^{(4)} = \begin{pmatrix} l_{31} & l_{34} \\ l_{41} & l_{44} \end{pmatrix},
G_2^{(5)} = \begin{pmatrix} D^2+D+1 & D+1 \\ D+1 & D \end{pmatrix},\quad
L_2^{(5)} = \begin{pmatrix} l_{31} & l_{33} \\ l_{41} & l_{43} \end{pmatrix},
G_2^{(6)} = \begin{pmatrix} D^2+2D+1 & D+1 \\ 1 & D \end{pmatrix},\quad
L_2^{(6)} = \begin{pmatrix} l_{31} & l_{32} \\ l_{41} & l_{42} \end{pmatrix}.
For simplicity, we let δ_i = det(G_2^{(i)}). Therefore,
δ_1 = 3D+3,\ δ_2 = 3,\ δ_3 = 3D,\ δ_4 = 3D^3+2D^2+2D,\ δ_5 = D^3+3D+3,\ δ_6 = D^3+2D^2+3.
We expand (4.34) and rewrite it as
δ_1\det(L_2^{(1)}) - δ_2\det(L_2^{(2)}) + δ_4\det(L_2^{(4)})
= \det(B) - δ_3\det(L_2^{(3)}) + δ_5\det(L_2^{(5)}) - δ_6\det(L_2^{(6)}),   (4.35)
where each submatrix L_2^{(i)} on the left-hand side of (4.35) contains the 4th column of
L(D). Expand each det(L_2^{(i)}) in (4.35) using Definition 2.5. So,
δ_1(l_{33}l_{44} - l_{43}l_{34}) - δ_2(l_{32}l_{44} - l_{42}l_{34}) + δ_4(l_{31}l_{44} - l_{41}l_{34}) = p(D)   (4.36)
where
p(D) = \det(B) - δ_3(l_{32}l_{43} - l_{42}l_{33}) + δ_5(l_{31}l_{43} - l_{41}l_{33}) - δ_6(l_{31}l_{42} - l_{41}l_{32}).
Let det(B) = 1. We factor out l_{44} on the left-hand side of (4.36) and rewrite it as
l_{44}(δ_1 l_{33} - δ_2 l_{32} + δ_4 l_{31}) = p(D) + δ_1 l_{43}l_{34} - δ_2 l_{42}l_{34} + δ_4 l_{41}l_{34}.   (4.37)
Let the right-hand side of (4.37) be q(D) and a(D) = δ_1 l_{33} - δ_2 l_{32} + δ_4 l_{31}. Thus,
l_{44}\, a(D) = q(D).   (4.38)
Let a(D) = 1, so
1 = δ_1 l_{33} - δ_2 l_{32} + δ_4 l_{31}.   (4.39)
By Lemma 4.3, there is a set of pairwise coprime minors of G(D), say M. In this
case, we can take
M = \{ δ_1, δ_2, δ_4 \}.
By Corollary 4.1, M spans 1. For instance, we have
3δ_1 + Dδ_2 + 0\cdot δ_4 = 1.   (4.40)
From (4.39) and (4.40), we can take
l_{33} = 3,\quad l_{32} = -D = 3D,\quad l_{31} = 0.
For simplicity, let
l_{34} = l_{41} = l_{42} = l_{43} = 0.
Hence, l_{44} = q(D) = 1. Therefore,
B = \begin{pmatrix}
3 & D^2+D+1 & D^2+2D+1 & D+1 \\
0 & D+1 & 1 & D \\
0 & 3D & 3 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}.
Then, from the last two columns of the inverse of B, given by
B^{-1} = \begin{pmatrix}
3 & 3D^3+3D^2+1 & 3D^3+2D^2+2D & D^4+D^3+1 \\
0 & 1 & 1 & 3D \\
0 & 3D & 3D+3 & D^2 \\
0 & 0 & 0 & 1
\end{pmatrix},
we can obtain a parity check matrix, given by
H(D) = \begin{pmatrix} 3D^3+2D^2+2D & 1 & 3D+3 & 0 \\ D^4+D^3+1 & 3D & D^2 & 1 \end{pmatrix}.
4.2.3 A parity check matrix from the subdeterminants of a generator matrix
It is known that, in the field case, if G(D) is an (n-1) × n basic generator matrix
of a convolutional code C and if the (n-1) × (n-1) subdeterminants of G(D) are
δ_1, δ_2, ..., δ_n, where δ_j is the determinant of the submatrix of G(D) obtained by
removing the j-th column of G(D), then the 1 × n matrix
H(D) = \begin{pmatrix} -δ_1 & +δ_2 & \cdots & (-1)^n δ_n \end{pmatrix}
is a minimal-basic parity check matrix of C (see [15]).
We extend this idea to convolutional codes over rings by giving the following
theorem. Henceforth, we call this type of construction CIII.
Theorem 4.3. If G(D) is an (n-1) × n generator matrix of a convolutional
code C over R and if the (n-1) × (n-1) subdeterminants of G(D) are det(G_1(D)),
det(G_2(D)), ..., det(G_n(D)), where G_j(D) is the submatrix of G(D) obtained by
removing the j-th column of G(D), then the 1 × n matrix
H(D) = \begin{pmatrix} \Delta_1 & \Delta_2 & \cdots & \Delta_n \end{pmatrix}
is a parity check matrix of C, where Δ_j = (-1)^{n+j} det(G_j(D)), j = 1, 2, ..., n. Further,
if C is over Z_{p^r} and G(D) is basic, then H(D) is basic.
Proof:
Let
G(D) = \begin{pmatrix}
g_{11} & g_{12} & \cdots & g_{1n} \\
g_{21} & g_{22} & \cdots & g_{2n} \\
\vdots & & & \vdots \\
g_{n-1,1} & g_{n-1,2} & \cdots & g_{n-1,n}
\end{pmatrix}
= \begin{pmatrix} g_1(D) \\ g_2(D) \\ \vdots \\ g_{n-1}(D) \end{pmatrix}.
For each i, define the n × n matrix obtained by appending the i-th row of G(D) below G(D):
\begin{pmatrix} G(D) \\ g_i(D) \end{pmatrix}
= \begin{pmatrix} g_1(D) \\ g_2(D) \\ \vdots \\ g_{n-1}(D) \\ g_i(D) \end{pmatrix}.
It is clear that its determinant is 0, since it has two identical rows, i.e. its
rows are not linearly independent.
Getting this determinant via cofactor expansion along the n-th row, we have
0 = g_{i1}A_{n1} + g_{i2}A_{n2} + \cdots + g_{in}A_{nn},
where the cofactor A_{nj} is given by A_{nj} = (-1)^{n+j}\det(M_{nj}), and M_{nj} is the
submatrix obtained by deleting the n-th row and j-th column of the appended matrix.
Clearly, A_{nj} = Δ_j, j = 1, 2, ..., n. For i = 1, 2, ..., n-1,
g_{11}\Delta_1 + g_{12}\Delta_2 + \cdots + g_{1n}\Delta_n = 0;
g_{21}\Delta_1 + g_{22}\Delta_2 + \cdots + g_{2n}\Delta_n = 0;
\vdots
g_{n-1,1}\Delta_1 + g_{n-1,2}\Delta_2 + \cdots + g_{n-1,n}\Delta_n = 0.
In matrix form,
\begin{pmatrix}
g_{11} & g_{12} & \cdots & g_{1n} \\
g_{21} & g_{22} & \cdots & g_{2n} \\
\vdots & & & \vdots \\
g_{n-1,1} & g_{n-1,2} & \cdots & g_{n-1,n}
\end{pmatrix}
\begin{pmatrix} \Delta_1 \\ \Delta_2 \\ \vdots \\ \Delta_n \end{pmatrix}
= G(D)H(D)^t = 0_{(n-1)\times 1}.
Therefore, H(D) is a parity check matrix of C. Now, suppose G(D) is a
basic encoder over Z_{p^r}[D]. By Lemma 4.3 and Corollary 4.1, there exists
a set of minors of G(D) that spans 1. Since the entries of H(D) are exactly
the minors of G(D), except possibly for a factor of (-1), it follows immediately that
H(D) is basic.
We end this section by illustrating Theorem 4.3.
Example 4.9 (Schneider, [19]). Consider the 2 × 3 basic encoder over Z_2[D] given by
G(D) = \begin{pmatrix} 1+D & 1 & D \\ 1 & 1 & 1 \end{pmatrix}.
Using CIII,
H(D) = \begin{pmatrix} 1+D & 1 & D \end{pmatrix}
is a parity check matrix of C. Take note that the negative signs do not matter since
-1 = 1 in Z_2. One can immediately check that H(D) is basic.
Example 4.10 (Wittenmark, [23]). From Example 4.4,
H(D) = \begin{pmatrix} 3+D+3D^2 & 3+3D+3D^2 & 1 \end{pmatrix}
can also be derived from the 2 × 3 basic encoder over Z_4[D] given by
G(D) = \begin{pmatrix} 1 & 0 & 1+3D+D^2 \\ 0 & 1 & 1+D+D^2 \end{pmatrix}
using CIII. Consider the following:
\Delta_1 = (-1)^{3+1}\big( (0)(1+D+D^2) - (1)(1+3D+D^2) \big) = 3+D+3D^2;
\Delta_2 = (-1)^{3+2}\big( (1)(1+D+D^2) - (0)(1+3D+D^2) \big) = 3+3D+3D^2;
\Delta_3 = (-1)^{3+3}\big( (1)(1) - (0)(0) \big) = 1.
Thus,
H(D) = \begin{pmatrix} \Delta_1 & \Delta_2 & \Delta_3 \end{pmatrix}
= \begin{pmatrix} 3+D+3D^2 & 3+3D+3D^2 & 1 \end{pmatrix}.
It is easy to check that H(D) is basic.
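Construction CIII is also easy to mechanize. The Python sketch below recomputes the signed
(n-1) × (n-1) minors for the encoder of Example 4.10 over Z_4[D]; it is an illustration only,
with polynomials written as coefficient lists.

M = 4

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % M
    return out

def psub(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = (out[i] + c) % M
    for i, c in enumerate(b): out[i] = (out[i] - c) % M
    return out

def pscale(a, s):
    return [(s * c) % M for c in a]

G = [[[1], [0], [1, 3, 1]],        # (1, 0, 1+3D+D^2)
     [[0], [1], [1, 1, 1]]]        # (0, 1, 1+D+D^2)
n = 3

H = []
for j in range(n):
    cols = [t for t in range(n) if t != j]            # delete column j (0-based)
    det2 = psub(pmul(G[0][cols[0]], G[1][cols[1]]),    # 2x2 determinant of the submatrix
                pmul(G[0][cols[1]], G[1][cols[0]]))
    sign = 1 if (n + j + 1) % 2 == 0 else -1           # (-1)^(n + (j+1)) with 1-based column index
    H.append(pscale(det2, sign))

print(H)   # [[3, 1, 3], [3, 3, 3], [1]]  ->  ( 3+D+3D^2   3+3D+3D^2   1 )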
4.3 Summary
In this chapter, the parity check matrix has been defined. Three methods of deriving
a parity check matrix from an encoder were given. In particular, CI is used to obtain
a parity check matrix from a systematic encoder, CII completes a basic encoder
into a PMPI matrix B and takes a PPCM from the last columns of B^{-1}, and CIII is
specific to a 1 × n parity check matrix that is taken from the (n-1) × (n-1)
subdeterminants of an (n-1) × n encoder. It was shown that a k × n encoder is basic
if and only if it is a submatrix of an n × n PMPI matrix. The reader is reminded that
CI, CII and CIII are the main tools in the succeeding chapters.
Chapter 5
NEW EXAMPLES OF ENCODERS FOR SELF-DUAL CONVOLUTIONAL CODES
In this chapter, we give the definition of the dual of a convolutional code that is
considered in this thesis. Based on this, we offer a sufficient condition for a systematic
encoder to be an encoder of a self-dual convolutional code. We construct new
examples of encoders for self-dual convolutional codes over Z_2 and Z_4, which are given
in Sections 5.2.1 and 5.2.2, respectively.
5.1 Self-Dual Convolutional Codes
In the literature (see for example [23] and [19]), there are two definitions of the dual of a
convolutional code over a field. In this paper, we adopt the following definition.
Definition 5.1 (McEliece, [15]). If C is a rate-k/n convolutional code over the field
F, its dual code, denoted by C^⊥, is defined as follows:
C^{\perp} = \{ x(D) \in F(D)^n \mid x(D) \cdot v(D) = 0, \text{ for all } v(D) \in C \}.
For a linear block code B, its dual B^⊥ is the set of n-tuples which are orthogonal
to every codeword in B. So the definition above is a natural extension from the
block code case. In this case, C is considered as a linear block code over F(D).
Moreover, if x(D) = [x_1(D), x_2(D), ..., x_n(D)] and v(D) = [v_1(D), v_2(D), ..., v_n(D)],
then x(D) · v(D), or simply x(D)v(D), is given by
\sum_{i=1}^{n} x_i(D)v_i(D),
where the product x_i(D)v_i(D) is taken over F(D).
It is a standard result in linear algebra that C^⊥ is a subspace of F(D)^n of dimension
n-k. That is, C^⊥ is a rate-(n-k)/n convolutional code over F. Recall that the
rows of an (n-k) × n parity check matrix H(D) for C are linearly independent and
each row of H(D) is orthogonal to every row of a generator matrix for C, hence to
every codeword in C. That is, the rows of H(D) are members of C^⊥ and, since they
are linearly independent, H(D) can be seen as a generator matrix of C^⊥.
We adopt the same definition of the dual code of a convolutional code over a ring
R as in Definition 5.1.
Recall that Lemma 4.1 assures the existence of a parity check matrix H(D) describing
a code C over a ring R that satisfies DCC. The parity check matrix H(D)
will generate the dual of a code C over R. In other words, similar to the field case,
we can treat C^⊥ as a free R(D)-submodule (of rank n-k) of R(D)^n. Thus, C^⊥ is a
rate-(n-k)/n convolutional code over R with a generator matrix given by H(D).
If C = C^⊥ (respectively, C ⊆ C^⊥), then we say C is self-dual (respectively, self-orthogonal). We can see
that a convolutional code C is self-dual if and only if the generator matrices of C and
C^⊥ are equivalent.
Self-dual and self-orthogonal block codes have gathered much attention for their usefulness
in practical and theoretical purposes. The problem at hand is to find or classify all
self-dual block codes with good parameters, such as distance. In the case of convolutional
codes, recently, R. Johannesson, P. Stahl and E. Wittenmark [12] reported the
world's second Type II binary convolutional code. The first was reported by A. R.
Calderbank, G. D. Forney, Jr. and A. Vardy [2]. We say that a code is of Type II
if each codeword in the code has a weight divisible by four (doubly-even) and the
code is self-dual.
We can see that the reported self-dual convolutional codes are classified in terms of
their weight properties (i.e. of being Type II). Moreover, the duality of the two codes
is defined with respect to the sequence space duality (see [23]), which we will not
consider here. We will focus on self-duality or self-orthogonality of convolutional codes
in the sense of Definition 5.1. In this case, little is known about this type of self-dual
convolutional codes (see [19]). Nevertheless, Hans-Gert Schneider, in his dissertation
[19], tried to generalize concepts involved in self-duality (self-orthogonality) of linear
block codes to self-duality (self-orthogonality) of convolutional codes over fields.
5.2 The Algorithm for Constructing the Examples
In this section, we give the algorithm that is used in constructing examples of encoders
for self-dual convolutional codes over Z_2 and Z_4. We begin by proving the following.
Theorem 5.1. If G(D) = (I, A) is a k × n generator matrix of a convolutional
code C over R where n = 2k (i.e. I, A ∈ R(D)^{k×k}), A is invertible over R(D) and
A^{-1} = -A^t, then the parity check matrix H(D) = (-A^t, I) for C is equivalent to
G(D) and C is self-dual.
Proof:
H(D) = (-A^t, I) = (A^{-1}, I) = A^{-1}(I, A) = A^{-1}G(D).
Take note further that A and A^{-1} are invertible over R(D). Indeed,
H(D) and G(D) are equivalent. Since G(D) and H(D) generate C and
C^⊥, respectively, therefore C = C^⊥, that is, C is self-dual.
Theorem 5.1 also tells us that G(D) is both a generator matrix and a parity
check matrix of C. We can immediately verify by block matrix multiplication that
G(D)G(D)^t = 0_{k×k}. That is,
(I_k, A)\begin{pmatrix} I_k \\ A^t \end{pmatrix} = I_k + AA^t = I_k + A(-A^{-1}) = I_k - I_k = 0_{k\times k}.
Example 5.1 (Rains and Sloane, [18]). The octacode O_8 is a linear block code over
Z_4 generated by the generator matrix
G = \begin{pmatrix}
1 & 0 & 0 & 0 & 2 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 3 & 2 & 1 & 3 \\
0 & 0 & 1 & 0 & 3 & 3 & 2 & 1 \\
0 & 0 & 0 & 1 & 3 & 1 & 3 & 2
\end{pmatrix}.
Notice that if we let
A = \begin{pmatrix}
2 & 1 & 1 & 1 \\
3 & 2 & 1 & 3 \\
3 & 3 & 2 & 1 \\
3 & 1 & 3 & 2
\end{pmatrix},
then A^{-1} = A = -A^t and
AA^t = -I_4 = \begin{pmatrix}
3 & 0 & 0 & 0 \\
0 & 3 & 0 & 0 \\
0 & 0 & 3 & 0 \\
0 & 0 & 0 & 3
\end{pmatrix}.
Therefore, a parity check matrix H of O_8, given by H = (-A^t, I), is equivalent to
G. That is, H = A^{-1}G.
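The condition A^{-1} = -A^t of Theorem 5.1 is equivalent to AA^t = -I, so it can be tested
mechanically without computing an inverse; this is essentially the test used in step 4 of the
search algorithm given later in this section. The following small Python check applies it to
the octacode matrix A above; it is an illustration only, not the MAGMA search of Appendix A.6,
and for polynomial entries the same check applies with polynomial arithmetic mod p^r.

M = 4
A = [[2, 1, 1, 1],
     [3, 2, 1, 3],
     [3, 3, 2, 1],
     [3, 1, 3, 2]]
k = len(A)

def satisfies_theorem_5_1(A):
    for i in range(k):
        for j in range(k):
            s = sum(A[i][t] * A[j][t] for t in range(k)) % M   # entry (i, j) of A A^t
            want = (M - 1) if i == j else 0                    # -1 = 3 on the diagonal, 0 elsewhere
            if s != want:
                return False
    return True

print(satisfies_theorem_5_1(A))   # True: G = (I, A) generates a self-dual code over Z_4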
Example 5.2 (McEliece, [15]).
G(D) = \begin{pmatrix} 1 & 0 & \frac{1}{1+D} & \frac{D}{1+D} \\ 0 & 1 & \frac{D}{1+D} & \frac{1}{1+D} \end{pmatrix}
in Example 4.3 is a generator matrix of a rate-2/4 self-dual convolutional code over
Z_2 (see [15]) and it satisfies the conditions in Theorem 5.1. To see this, let
A = \begin{pmatrix} \frac{1}{1+D} & \frac{D}{1+D} \\ \frac{D}{1+D} & \frac{1}{1+D} \end{pmatrix}.
It is apparent that A^{-1} = -A^t = A^t = A. Verify that AA^t = -I_2 = I_2. Thus,
G(D) = A^{-1}H(D), where H(D) = (-A^t, I).
Example 5.3 (Schneider, [19]). A 2 × 4 PGM of a self-dual convolutional code over
Z_3 is given by
G(D) = \begin{pmatrix} D^2+D+1 & D^2+2D+1 & 1 & D^2 \\ D & D+2 & 1 & D+1 \end{pmatrix}
(see [19]). Since G(D) admits a row-reduced echelon form over Z_3(D), G(D) is
equivalent to, say, G'(D), which is given by
G'(D) = \begin{pmatrix}
1 & 0 & \frac{2D^2+2D+1}{D^2+2D+2} & \frac{2D^2+2}{D^2+2D+2} \\
0 & 1 & \frac{D^2+1}{D^2+2D+2} & \frac{2D^2+2D+1}{D^2+2D+2}
\end{pmatrix}.
The equivalence of G(D) and G'(D) can be seen via
G'(D) = T(D)G(D),
where T(D) is given by
T(D) = \begin{pmatrix}
\frac{D+2}{D^2+2D+2} & \frac{2D^2+D+2}{D^2+2D+2} \\
\frac{2D}{D^2+2D+2} & \frac{D^2+D+1}{D^2+2D+2}
\end{pmatrix}.
In G'(D), let
A = \begin{pmatrix}
\frac{2D^2+2D+1}{D^2+2D+2} & \frac{2D^2+2}{D^2+2D+2} \\
\frac{D^2+1}{D^2+2D+2} & \frac{2D^2+2D+1}{D^2+2D+2}
\end{pmatrix}.
The algorithm for finding G(D) = (I, A) that satisfies the conditions of Theorem
5.1 focuses on finding the matrix A such that A^{-1} = -A^t. Examples 5.1, 5.2 and
5.3, among others, motivated this construction. The algorithm is given below.
1. Construct the set P of polynomials of degree less than or equal to (say) L.
2. Construct the set Q of all possible rational functions from the elements of P.
3. Construct square matrices, say A_i, with entries coming from Q.
4. For each i, test whether the matrix A_i is invertible and satisfies A_i^{-1} = -A_i^t.
5. Obtain the matrix G(D) by augmenting each matrix A_i that passed step 4 to the identity
matrix I such that G(D) = (I, A_i).
The following examples were obtained using the MAGMA program in Appendix
A.6. In the field case, Q in step 2 is possible in MAGMA, but not in the Z_M case.
So, we are limited to polynomial entries for the encoder over Z_M(D). Nevertheless,
theoretically the algorithm works.
5.2.1 A 4 × 8 minimal-basic PGM over Z_2(D)
Consider the matrix G(D) over Z_2(D) given by
G(D) = \begin{pmatrix}
1 & 0 & 0 & 0 & \frac{1}{D+1} & \frac{1}{D+1} & \frac{1}{D+1} & \frac{D}{D+1} \\
0 & 1 & 0 & 0 & \frac{1}{D+1} & \frac{1}{D+1} & \frac{D}{D+1} & \frac{1}{D+1} \\
0 & 0 & 1 & 0 & \frac{1}{D+1} & \frac{D}{D+1} & \frac{1}{D+1} & \frac{1}{D+1} \\
0 & 0 & 0 & 1 & \frac{D}{D+1} & \frac{1}{D+1} & \frac{1}{D+1} & \frac{1}{D+1}
\end{pmatrix}.
Let A be the right 4 × 4 block, that is,
A = \frac{1}{D+1}\begin{pmatrix}
1 & 1 & 1 & D \\
1 & 1 & D & 1 \\
1 & D & 1 & 1 \\
D & 1 & 1 & 1
\end{pmatrix}.
We can verify that A^{-1} = -A^t. But since -1 = 1 in Z_2, we have A^{-1} = A^t = A and
AA^t = I_4. Furthermore, we can derive a minimal-basic PGM equivalent to G(D) by
doing the following:
1. Multiply G(D) by the 4 × 4 invertible matrix
\begin{pmatrix}
D+1 & 0 & 0 & 0 \\
0 & D+1 & 0 & 0 \\
0 & 0 & D+1 & 0 \\
0 & 0 & 0 & D+1
\end{pmatrix}
over Z_2(D), resulting in an equivalent PGM, say G_1(D), given by
G_1(D) = \begin{pmatrix}
D+1 & 0 & 0 & 0 & 1 & 1 & 1 & D \\
0 & D+1 & 0 & 0 & 1 & 1 & D & 1 \\
0 & 0 & D+1 & 0 & 1 & D & 1 & 1 \\
0 & 0 & 0 & D+1 & D & 1 & 1 & 1
\end{pmatrix}.
2. Get the invariant factor decomposition of G_1(D) with respect to Z_2[D], from which
we can obtain an equivalent basic PGM (see Examples 4.5 and 4.6), say G_2(D),
given by
G_2(D) = \begin{pmatrix}
D+1 & 0 & 0 & 0 & 1 & 1 & 1 & D \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
D & 0 & 0 & 1 & 0 & 1 & 1 & D+1
\end{pmatrix}.
3. Consider the indicator matrix of G_2(D) given by
[G_2(D)]_h = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}.
The matrix [G_2(D)]_h is not of full rank. We make the rank of [G_2(D)]_h over Z_2
equal to four by replacing the first row of G_2(D) by the sum of the first row and the
fourth row of G_2(D). Consequently, we obtain an equivalent minimal-basic PGM,
say G_3(D), given by
G_3(D) = \begin{pmatrix}
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
D & 0 & 0 & 1 & 0 & 1 & 1 & D+1
\end{pmatrix}.
The process above can be summarized by
G_3(D) = \begin{pmatrix}
1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 \\
D & 0 & 0 & 1
\end{pmatrix} G(D).
The encoder G_3(D) also appears in Example 4.6. As a consequence of Theorem
5.1, we expect that G(D), or any generator matrix equivalent to G(D), is also a
parity check matrix of the code they generate. For instance, we can verify that
G_3(D)G_3(D)^t = 0_{4×4}. In other words, G_3(D) is a minimal-basic PGM of a rate-4/8
self-dual convolutional code over Z_2.
5.2.2 A 4 × 8 systematic PGM over Z_4(D)
An example of a PGM for a self-dual convolutional code over Z_4 is given by
G(D) = \begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 1 & 2 & 1 \\
0 & 1 & 0 & 0 & 3 & 1 & 2D+1 & 2 \\
0 & 0 & 1 & 0 & 2 & 2D+3 & 1 & 2D+1 \\
0 & 0 & 0 & 1 & 3 & 2 & 2D+3 & 1
\end{pmatrix}.
One can verify that G(D)G(D)^t = 0_{4×4} and, since the first four columns of G(D)
form the identity matrix, it is immediate that the rows of G(D) are linearly independent,
that is, the rows of G(D) are free over Z_4(D). Similarly, if we let
A = \begin{pmatrix}
1 & 1 & 2 & 1 \\
3 & 1 & 2D+1 & 2 \\
2 & 2D+3 & 1 & 2D+1 \\
3 & 2 & 2D+3 & 1
\end{pmatrix},
then A^{-1} = -A^t, where
A^{-1} = \begin{pmatrix}
3 & 1 & 2 & 1 \\
3 & 3 & 2D+1 & 2 \\
2 & 2D+3 & 3 & 2D+1 \\
3 & 2 & 2D+3 & 3
\end{pmatrix}.
5.3 Summary
We defined the dual of a convolutional code as an exact analog of the dual of a linear
block code. Conditions for a systematic encoder to be equivalent to a parity check
matrix of a code C have been given. As a result, new examples of convolutional
encoders over Z_2(D) and Z_4(D) were constructed.
Chapter 6
STRUCTURAL PROPERTIES OF PARITY CHECK MATRICES
In this chapter, we study the structural properties of a parity check matrix as an encoder
of the dual of the code. First, we determine the connections between equivalent
encoders in terms of their subdeterminants and constraint lengths. In a similar manner,
we study the connections between a parity check matrix and an encoder. Specifically,
we study the structural properties of a parity check matrix that is constructed using
CI, CII and CIII.
6.1 Connections Between Encoders
It is known that most algorithmic checking of the structural properties of encoders uses
subdeterminants or minors. Therefore, we establish relations between a parity check matrix
and an encoder with respect to their subdeterminants and constraint lengths. With
this, the maximum degree among the minors of a polynomial encoder, given by μ,
can be studied over PGMs and PPCMs of a code.
6.1.1 On subdeterminants
Recall that two k × n generator matrices G(D) and G'(D) for a convolutional code
C over a ring R are equivalent if and only if there exists a k × k invertible matrix
T(D) over R(D) such that G(D) = T(D)G'(D) [23]. Moreover, if G(D) and G'(D)
are both basic, we have the following.
Theorem 6.1 (Wittenmark, [23]). Let G(D) and G'(D) be two basic generator matrices.
Then G(D) and G'(D) are equivalent if and only if there exists a k × k invertible
matrix T(D) over R(D) such that G'(D) = T(D)G(D) and the determinant of T(D) is a
unit in R[D]. Since the determinant of T(D) is a unit in R[D], the inverse of T(D)
is also polynomial.
In other words, T(D) in Theorem 6.1 is a PMPI matrix. As a consequence,
we can tell something about the k × k subdeterminants of two equivalent generator
matrices G(D) and G'(D).
Corollary 6.1 (Wittenmark, [23]). The k × k subdeterminants of two equivalent
generator matrices G(D) and G'(D) are equal up to units in R(D).
Proof:
Since G(D) and G'(D) are equivalent, there exists a k × k matrix T(D),
which is invertible over R(D), such that G'(D) = T(D)G(D). We write
G'(D) = (g'_{ij}), T(D) = (t_{ij}) and G(D) = (g_{ij}), with G'(D), G(D) of size
k × n and T(D) of size k × k.
Since G'(D) = T(D)G(D), each entry g'_{ij} of G'(D) can be written as
g'_{ij} = \sum_{h=1}^{k} t_{ih} g_{hj}.   (6.1)
For instance,
g'_{22} = \sum_{h=1}^{k} t_{2h} g_{h2} = t_{21}g_{12} + t_{22}g_{22} + t_{23}g_{32} + \cdots + t_{2k}g_{k2},
so g'_{22} is the product of the second row of T(D) and the second column of G(D).
In general, following (6.1), g'_{ij} is the product of the i-th row of T(D) and the
j-th column of G(D).
Now, we consider k × k submatrices g'(D) and g(D) of G'(D) and G(D),
respectively, where g'(D) and g(D) are obtained by taking the same arbitrary k
columns from G'(D) and G(D), respectively. We denote the column numbers of
g'(D) and g(D) by j_1 < j_2 < ... < j_k, where j_1, j_2, ..., j_k ∈ {1, 2, ..., n}. We can
now write
g'(D) = \begin{pmatrix}
g'_{1j_1} & g'_{1j_2} & \cdots & g'_{1j_k} \\
g'_{2j_1} & g'_{2j_2} & \cdots & g'_{2j_k} \\
\vdots & & & \vdots \\
g'_{kj_1} & g'_{kj_2} & \cdots & g'_{kj_k}
\end{pmatrix}
\quad\text{and}\quad
g(D) = \begin{pmatrix}
g_{1j_1} & g_{1j_2} & \cdots & g_{1j_k} \\
g_{2j_1} & g_{2j_2} & \cdots & g_{2j_k} \\
\vdots & & & \vdots \\
g_{kj_1} & g_{kj_2} & \cdots & g_{kj_k}
\end{pmatrix}.
Using (6.1), we can further write g'(D) as
g'(D) = \left( \sum_{h=1}^{k} t_{ih} g_{hj_t} \right)_{1\le i,t\le k}
= \begin{pmatrix}
t_{11} & t_{12} & \cdots & t_{1k} \\
t_{21} & t_{22} & \cdots & t_{2k} \\
\vdots & & & \vdots \\
t_{k1} & t_{k2} & \cdots & t_{kk}
\end{pmatrix}
\begin{pmatrix}
g_{1j_1} & g_{1j_2} & \cdots & g_{1j_k} \\
g_{2j_1} & g_{2j_2} & \cdots & g_{2j_k} \\
\vdots & & & \vdots \\
g_{kj_1} & g_{kj_2} & \cdots & g_{kj_k}
\end{pmatrix}
= T(D)g(D).   (6.2)
Taking the determinants in (6.2), we have
\det(g'(D)) = \det(T(D)g(D)) = \det(T(D))\det(g(D)).
Thus, the k × k subdeterminant det(g'(D)) of G'(D) is equal to the product of
det(T(D)) and the corresponding k × k subdeterminant det(g(D)) of G(D).
Since T(D) is invertible over R(D), det(T(D)) is a unit in R(D).
Corollary 6.2 (Wittenmark, [23]). The k × k minors of two equivalent basic generator
matrices G(D) and G'(D) are equal up to units in R[D]. In the field case, the k × k
minors of two equivalent basic generator matrices G(D) and G'(D) are equal up to
nonzero elements of F.
Proof:
The first part of the corollary is immediate from Theorem 6.1 and Corollary
6.1. For the second part, just add the fact that the units in F[D] are
precisely the units in F, which are the nonzero elements of F (see [10]).
The following corollary is evident from Theorem 6.1 and Corollary 6.2.
Corollary 6.3 (Wittenmark, [23]). For convolutional codes over fields, μ (the maximum
degree among the k × k minors of a PGM) is invariant over all equivalent basic
generator matrices.
However, it is also clear that Corollary 6.3 is not true in the ring case. Interestingly,
in the field case, there is a strong connection between the minors of a PPCM H(D)
and a PGM G(D) of a convolutional code.
Theorem 6.2 (Forney, [6]). The (n-k) × (n-k) minors of H(D) are equal to the
k × k minors of G(D), up to units in F[D].
The proof is given in [6].
Proof:
In Section 4.2.2, recall that we can take the first $k$ rows of a unimodular matrix $B$ as $G(D)$ while $H(D)^t$ is from the last $(n-k)$ columns of $B^{-1}$. We write $B$ and $B^{-1}$ as block matrices as follows:
$$B = \begin{pmatrix} P & Q \\ R & S \end{pmatrix} \quad\text{and}\quad B^{-1} = \begin{pmatrix} P' & Q' \\ R' & S' \end{pmatrix}$$
where $P, P' \in F[D]^{k \times k}$; $Q, Q' \in F[D]^{k \times (n-k)}$; $R, R' \in F[D]^{(n-k) \times k}$ and $S, S' \in F[D]^{(n-k) \times (n-k)}$.

Note that $G(D) = \begin{pmatrix} P & Q \end{pmatrix}$ and $H(D)^t = \begin{pmatrix} Q' \\ S' \end{pmatrix}$. We want to show that the upper left $k \times k$ subdeterminant of $B$, i.e. $\det(P)$, is equal to the lower right $(n-k) \times (n-k)$ subdeterminant of $B^{-1}$, i.e. $\det(S')$, up to units in $F[D]$.

Now,
$$BB^{-1} = \begin{pmatrix} P & Q \\ R & S \end{pmatrix}\begin{pmatrix} P' & Q' \\ R' & S' \end{pmatrix} = I_n = \begin{pmatrix} I_k & 0_{k \times (n-k)} \\ 0_{(n-k) \times k} & I_{n-k} \end{pmatrix}. \qquad (6.3)$$
Then consider the matrix product
$$\begin{pmatrix} P & Q \\ 0_{(n-k) \times k} & I_{n-k} \end{pmatrix}\begin{pmatrix} P' & Q' \\ R' & S' \end{pmatrix} = \begin{pmatrix} I_k & 0_{k \times (n-k)} \\ R' & S' \end{pmatrix}. \qquad (6.4)$$
Taking the determinants of (6.4) and by Lemma 2.1 and Lemma 2.2, we have
$$\bigl(\det(P)\det(I_{n-k})\bigr)\det(B^{-1}) = \det(I_k)\det(S')$$
or
$$\det(P)\det(B^{-1}) = \det(S').$$
But $\det(B^{-1})$ is a unit in $F[D]$, hence $\det(P)$ is equal to $\det(S')$, up to a unit in $F[D]$. By transposition, the proof will carry through for any other choice of columns in $B$ and the corresponding rows in $B^{-1}$ (see [6]).
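As a quick illustration (a small example of ours, not taken from [6]): over $F = \mathbb{Z}_2$, take $G(D) = \begin{pmatrix} 1 & D^2 \end{pmatrix}$ and $H(D) = \begin{pmatrix} D^2 & 1 \end{pmatrix}$, so that $G(D)H(D)^t = D^2 + D^2 = 0$ and both matrices are basic. The $1 \times 1$ minors of $G(D)$ are $1$ and $D^2$, and the $1 \times 1$ minors of $H(D)$ are $D^2$ and $1$; as Theorem 6.2 asserts, corresponding minors agree up to units in $\mathbb{Z}_2[D]$ (here, exactly), and the maximum minor degree is $2$ for both matrices.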
It is implied that the proof of Theorem 6.2 is in the context where $G(D)$ and $H(D)$ are both basic polynomial matrices, and the reason is via Theorem 4.2. So, we restate Theorem 6.2 in Theorem 6.3 and extend it to $\mathbb{Z}_{p^r}[D]$. First, define $\mu_G$ and $\mu_H$ to be the maximum degree among the $k \times k$ minors of a PGM $G(D)$ and a PPCM $H(D)$, respectively. Similarly, let $\nu_G$ and $\nu_H$ be the respective overall constraint lengths of $G(D)$ and $H(D)$.

Theorem 6.3. If $G(D)$ and $H(D)$ are both basic encoders over $\mathbb{Z}_{p^r}[D]$, then the $(n-k) \times (n-k)$ minors of $H(D)$ are equal to the $k \times k$ minors of $G(D)$, up to units in $\mathbb{Z}_{p^r}[D]$. If $G(D)$ and $H(D)$ are submatrices of $n \times n$ unimodular matrices $B$ and $B^{-1}$, respectively, then their minors are equal up to units in $\mathbb{Z}_{p^r}$ and $\mu_G = \mu_H$.

Proof:
By Theorem 4.2, there exist PMPI matrices $B$ and $B^{-1}$ where $G(D)$ is a submatrix of $B$ and $H(D)$ is a submatrix of $B^{-1}$ (see CII). Following the proof of Theorem 6.2, the minors of $G(D)$ and $H(D)$ differ only by $\det(B^{-1})$, which is a unit in $\mathbb{Z}_{p^r}[D]$. In addition, if $B$ and $B^{-1}$ are unimodular matrices, then $\det(B^{-1})$ is a unit in $\mathbb{Z}_{p^r}$. Thus, the minors of $G(D)$ and $H(D)$ are equal up to units in $\mathbb{Z}_{p^r}$. Hence, $\mu_G = \mu_H$.
The following result shows that the minors of an arbitrary PGM and PPCM over $F(D)$ are equal up to nonzero elements in $F(D)$.

Theorem 6.4. Consider a generator matrix $G(D)$ and a parity check matrix $H(D)$ of a convolutional code $C$ over $F$. The $(n-k) \times (n-k)$ subdeterminants of $H(D)$ are equal to the $k \times k$ subdeterminants of $G(D)$, up to nonzero elements in $F(D)$.

Proof:
Let $G'(D)$ be a $k \times n$ basic PGM equivalent to $G(D)$ and let $H'(D)$ be an $(n-k) \times n$ basic PPCM equivalent to $H(D)$. $G'(D)$ and $H'(D)$ exist based on Theorem 4.1 and the discussion following it. Thus, by Theorem 6.1 there exist invertible matrices $T(D)$ and $S(D)$ over $F(D)$, of sizes $k \times k$ and $(n-k) \times (n-k)$, respectively, such that
$$G'(D) = T(D)G(D) \quad\text{and}\quad H'(D) = S(D)H(D). \qquad (6.5)$$
Now, consider arbitrary $k \times k$ subdeterminants $g'(D)$ and $g(D)$ of $G'(D)$ and $G(D)$, respectively, and the corresponding $(n-k) \times (n-k)$ subdeterminants $h'(D)$ and $h(D)$ of $H'(D)$ and $H(D)$, respectively. By Theorem 6.1,
$$g'(D) = \det(T(D))\,g(D) \qquad (6.6)$$
and
$$h'(D) = \det(S(D))\,h(D). \qquad (6.7)$$
By Theorem 6.3, we have
$$g'(D) = \alpha\, h'(D) \quad\text{for some nonzero } \alpha \in F. \qquad (6.8)$$
Combining (6.6), (6.7), and (6.8), we have
$$\det(T(D))\,g(D) = \alpha\,\det(S(D))\,h(D)$$
or
$$g(D) = \bigl[\alpha\,\det(T(D))^{-1}\det(S(D))\bigr]\,h(D).$$
It is clear that $\alpha\,\det(T(D))^{-1}\det(S(D)) \in F(D)$.
6.1.2 On constraint lengths

The theorem below gives the relation of a minimal-basic encoder to any equivalent encoder in terms of their $i$-th constraint lengths.

Theorem 6.5 (McEliece, [15]). If $e_1 \le e_2 \le \cdots \le e_k$ are the constraint lengths of a minimal-basic generator matrix $G(D)$ for a convolutional code $C$ over a field $F$, and if $f_1 \le f_2 \le \cdots \le f_k$ are the constraint lengths of any other (equivalent) PGM, say $G'(D)$, for $C$, then $e_i \le f_i$ for all $i = 1, \ldots, k$.
Proof:
We prove this theorem by contradiction. Suppose there exists $j$, $1 \le j \le k$, such that $e_1 \le f_1,\ e_2 \le f_2,\ \ldots,\ e_j \le f_j$ but $e_{j+1} > f_{j+1}$. We let
$$G'(D) = T(D)G(D) \qquad (6.9)$$
where
$$G'(D) = \begin{pmatrix} g'_1(D) \\ g'_2(D) \\ \vdots \\ g'_k(D) \end{pmatrix},\qquad
G(D) = \begin{pmatrix} g_1(D) \\ g_2(D) \\ \vdots \\ g_k(D) \end{pmatrix},$$
and
$$T(D) = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1k} \\ u_{21} & u_{22} & \cdots & u_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ u_{k1} & u_{k2} & \cdots & u_{kk} \end{pmatrix},$$
where $g'_i(D), g_i(D) \in F[D]^n$ and $u_{ij} \in F(D)$. Then we have
$$\begin{pmatrix} g'_1(D) \\ g'_2(D) \\ \vdots \\ g'_k(D) \end{pmatrix}
= \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1k} \\ u_{21} & u_{22} & \cdots & u_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ u_{k1} & u_{k2} & \cdots & u_{kk} \end{pmatrix}
\begin{pmatrix} g_1(D) \\ g_2(D) \\ \vdots \\ g_k(D) \end{pmatrix},$$
specifically,
$$\begin{aligned}
g'_1(D) &= u_{11} g_1(D) + u_{12} g_2(D) + \cdots + u_{1k} g_k(D) \\
g'_2(D) &= u_{21} g_1(D) + u_{22} g_2(D) + \cdots + u_{2k} g_k(D) \\
&\ \ \vdots \\
g'_k(D) &= u_{k1} g_1(D) + u_{k2} g_2(D) + \cdots + u_{kk} g_k(D).
\end{aligned} \qquad (6.10)$$
That is, each row of $G'(D)$ is a linear combination of the rows of $G(D)$ with scalars from $F(D)$.
Since $G(D)$ is basic, there exists a polynomial right inverse, say $P(D)$, of $G(D)$ such that
$$G(D)P(D) = I_k. \qquad (6.11)$$
From (6.9) and (6.11), we have
$$G'(D)P(D) = T(D)G(D)P(D) = T(D)I_k = T(D).$$
Since $G'(D)$ and $P(D)$ are both polynomial matrices, $T(D)$ is also a polynomial matrix, i.e. $u_{ij} \in F[D]$ for all $i, j = 1, \ldots, k$.

From the minimal-basicity of $G(D)$, $G(D)$ has PDP. Thus, together with (6.10), we have the $f_i$'s given below:
$$\begin{aligned}
\deg(g'_1(D)) = f_1 &= \max\{\deg(u_{11})+e_1,\ \ldots,\ \deg(u_{1j})+e_j,\ \deg(u_{1,j+1})+e_{j+1},\ \ldots,\ \deg(u_{1k})+e_k\} \\
\deg(g'_2(D)) = f_2 &= \max\{\deg(u_{21})+e_1,\ \ldots,\ \deg(u_{2j})+e_j,\ \deg(u_{2,j+1})+e_{j+1},\ \ldots,\ \deg(u_{2k})+e_k\} \\
&\ \ \vdots \\
\deg(g'_j(D)) = f_j &= \max\{\deg(u_{j1})+e_1,\ \ldots,\ \deg(u_{jj})+e_j,\ \deg(u_{j,j+1})+e_{j+1},\ \ldots,\ \deg(u_{jk})+e_k\} \\
\deg(g'_{j+1}(D)) = f_{j+1} &= \max\{\deg(u_{j+1,1})+e_1,\ \ldots,\ \deg(u_{j+1,j})+e_j,\ \deg(u_{j+1,j+1})+e_{j+1},\ \ldots,\ \deg(u_{j+1,k})+e_k\} \\
&\ \ \vdots \\
\deg(g'_k(D)) = f_k &= \max\{\deg(u_{k1})+e_1,\ \ldots,\ \deg(u_{kj})+e_j,\ \deg(u_{k,j+1})+e_{j+1},\ \ldots,\ \deg(u_{kk})+e_k\}.
\end{aligned}$$
By the definition of the value of $f_i$, by our assumption that $e_{j+1} > f_{j+1}$, and since each $u_{ij}$ is a polynomial, we must have
$$\begin{aligned}
u_{j+1,j+1} &= \cdots = u_{j+1,k} = 0 \\
u_{j,j+1} &= \cdots = u_{j,k} = 0 \\
&\ \ \vdots \\
u_{2,j+1} &= \cdots = u_{2,k} = 0 \\
u_{1,j+1} &= \cdots = u_{1,k} = 0.
\end{aligned} \qquad (6.12)$$
In other words, the first $j+1$ rows of $G'(D)$ are polynomial linear combinations of the first $j$ rows of $G(D)$, which contradicts the fact that the rows of $G'(D)$ are linearly independent. To show this, suppose $\alpha_1, \alpha_2, \ldots, \alpha_k \in F(D)$ and
$$\alpha_1 g'_1(D) + \alpha_2 g'_2(D) + \cdots + \alpha_k g'_k(D) = 0_{1 \times n}. \qquad (6.13)$$
From (6.10) and (6.13), we can write
$$\begin{aligned}
\alpha_1 g'_1(D) &= \alpha_1 u_{11} g_1(D) + \cdots + \alpha_1 u_{1j} g_j(D) + \alpha_1 u_{1,j+1} g_{j+1}(D) + \cdots + \alpha_1 u_{1k} g_k(D); \\
\alpha_2 g'_2(D) &= \alpha_2 u_{21} g_1(D) + \cdots + \alpha_2 u_{2j} g_j(D) + \alpha_2 u_{2,j+1} g_{j+1}(D) + \cdots + \alpha_2 u_{2k} g_k(D); \\
&\ \ \vdots \\
\alpha_j g'_j(D) &= \alpha_j u_{j1} g_1(D) + \cdots + \alpha_j u_{jj} g_j(D) + \alpha_j u_{j,j+1} g_{j+1}(D) + \cdots + \alpha_j u_{jk} g_k(D); \\
\alpha_{j+1} g'_{j+1}(D) &= \alpha_{j+1} u_{j+1,1} g_1(D) + \cdots + \alpha_{j+1} u_{j+1,j} g_j(D) + \alpha_{j+1} u_{j+1,j+1} g_{j+1}(D) + \cdots + \alpha_{j+1} u_{j+1,k} g_k(D); \\
&\ \ \vdots \\
\alpha_k g'_k(D) &= \alpha_k u_{k1} g_1(D) + \cdots + \alpha_k u_{kj} g_j(D) + \alpha_k u_{k,j+1} g_{j+1}(D) + \cdots + \alpha_k u_{kk} g_k(D).
\end{aligned}$$
Further, we have
$$\begin{aligned}
&(\alpha_1 u_{11} + \alpha_2 u_{21} + \cdots + \alpha_j u_{j1} + \alpha_{j+1} u_{j+1,1} + \cdots + \alpha_k u_{k1})\, g_1(D) \\
+\ &(\alpha_1 u_{12} + \alpha_2 u_{22} + \cdots + \alpha_j u_{j2} + \alpha_{j+1} u_{j+1,2} + \cdots + \alpha_k u_{k2})\, g_2(D) + \cdots \\
+\ &(\alpha_1 u_{1j} + \alpha_2 u_{2j} + \cdots + \alpha_j u_{jj} + \alpha_{j+1} u_{j+1,j} + \cdots + \alpha_k u_{kj})\, g_j(D) \\
+\ &(\alpha_1 u_{1,j+1} + \alpha_2 u_{2,j+1} + \cdots + \alpha_j u_{j,j+1} + \alpha_{j+1} u_{j+1,j+1} + \cdots + \alpha_k u_{k,j+1})\, g_{j+1}(D) + \cdots \\
+\ &(\alpha_1 u_{1k} + \alpha_2 u_{2k} + \cdots + \alpha_j u_{jk} + \alpha_{j+1} u_{j+1,k} + \cdots + \alpha_k u_{kk})\, g_k(D) = 0_{1 \times n}.
\end{aligned}$$
By linear independence of the rows of $G(D)$, it will follow that
$$\begin{aligned}
\alpha_1 u_{11} + \alpha_2 u_{21} + \cdots + \alpha_j u_{j1} + \alpha_{j+1} u_{j+1,1} + \cdots + \alpha_k u_{k1} &= 0 \\
\alpha_1 u_{12} + \alpha_2 u_{22} + \cdots + \alpha_j u_{j2} + \alpha_{j+1} u_{j+1,2} + \cdots + \alpha_k u_{k2} &= 0 \\
&\ \ \vdots \\
\alpha_1 u_{1j} + \alpha_2 u_{2j} + \cdots + \alpha_j u_{jj} + \alpha_{j+1} u_{j+1,j} + \cdots + \alpha_k u_{kj} &= 0 \\
\alpha_1 u_{1,j+1} + \alpha_2 u_{2,j+1} + \cdots + \alpha_j u_{j,j+1} + \alpha_{j+1} u_{j+1,j+1} + \cdots + \alpha_k u_{k,j+1} &= 0 \\
&\ \ \vdots \\
\alpha_1 u_{1k} + \alpha_2 u_{2k} + \cdots + \alpha_j u_{jk} + \alpha_{j+1} u_{j+1,k} + \cdots + \alpha_k u_{kk} &= 0
\end{aligned}$$
or in matrix form,
$$\begin{pmatrix}
u_{11} & u_{21} & \cdots & u_{j1} & u_{j+1,1} & \cdots & u_{k1} \\
u_{12} & u_{22} & \cdots & u_{j2} & u_{j+1,2} & \cdots & u_{k2} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
u_{1j} & u_{2j} & \cdots & u_{jj} & u_{j+1,j} & \cdots & u_{kj} \\
u_{1,j+1} & u_{2,j+1} & \cdots & u_{j,j+1} & u_{j+1,j+1} & \cdots & u_{k,j+1} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
u_{1k} & u_{2k} & \cdots & u_{jk} & u_{j+1,k} & \cdots & u_{kk}
\end{pmatrix}
\begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_{j+1} \\ \vdots \\ \alpha_k \end{pmatrix}
= 0_{k \times 1}. \qquad (6.14)$$
We denote the leftmost matrix in (6.14) by $A$. Since $A$ satisfies (6.12), we have
$$A = \begin{pmatrix}
u_{11} & u_{21} & \cdots & u_{j1} & u_{j+1,1} & u_{j+2,1} & \cdots & u_{k1} \\
u_{12} & u_{22} & \cdots & u_{j2} & u_{j+1,2} & u_{j+2,2} & \cdots & u_{k2} \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
u_{1j} & u_{2j} & \cdots & u_{jj} & u_{j+1,j} & u_{j+2,j} & \cdots & u_{kj} \\
0 & 0 & \cdots & 0 & 0 & u_{j+2,j+1} & \cdots & u_{k,j+1} \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 0 & 0 & u_{j+2,k} & \cdots & u_{kk}
\end{pmatrix}. \qquad (6.15)$$
Since $\det(A) = 0$, (6.14) has nontrivial solutions. That is, $\alpha_i \neq 0$ for some $i \in \{1, \ldots, k\}$. Thus, the rows of $G'(D)$ are not linearly independent, a contradiction.
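For instance (our own small binary illustration, using the matrix that also appears in the sample runs of Appendix A): the PGM
$$G'(D) = \begin{pmatrix} D+1 & D & 1 \\ D^2 & D^2+D+1 & 1 \end{pmatrix}$$
has constraint lengths $1$ and $2$. Adding $D$ times its first row to its second row gives the equivalent PGM
$$G(D) = \begin{pmatrix} D+1 & D & 1 \\ D & D+1 & D+1 \end{pmatrix},$$
which is basic and reduced, hence minimal-basic, with constraint lengths $1$ and $1$, and with the same $2 \times 2$ minors $1$, $D^2+D+1$, $D^2+1$ (as Corollary 6.2 predicts). Ordering the constraint lengths as in Theorem 6.5, $e_1 = e_2 = 1$ and $f_1 = 1$, $f_2 = 2$, so indeed $e_i \le f_i$ for each $i$.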
Theorem 6.5 implies that the constraint lengths of equivalent minimal-basic PGMs are equal, up to permutations. The following theorem gives the connection between the overall constraint length of a minimal-basic encoder of the code and that of a minimal-basic encoder of the dual code.
Theorem 6.6 (Forney, [6]). If $G(D)$ is equivalent to a minimal-basic encoder with overall constraint length $\nu$, then the dual code can also be generated by a minimal-basic encoder with overall constraint length $\nu$.

McEliece [15] showed that $\mu_G \le \nu_G$ in the field case. We claim that it is also true in the ring case. Let $e_i$ be the $i$-th constraint length of $G(D)$. In the expansion of any $k \times k$ minor of $G(D)$, using Definition 2.5, each term is the product of $k$ entries of $G(D)$, one from each row (and column). Therefore, it is immediate that $\mu_G \le e_1 + e_2 + \cdots + e_k = \nu_G$, since each entry from the $i$-th row has degree at most $e_i$.
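In the small example given after the proof of Theorem 6.5 (the matrix also used in the sample run of Appendix A.3), this bound reads $\mu_{G'} = 2 \le 1 + 2 = 3 = \nu_{G'}$ for the non-reduced PGM, while its minimal-basic equivalent attains equality, $\mu_G = \nu_G = 2$.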
Thus, given a PGM $G(D)$ and a PPCM $H(D)$, we have
$$\mu_G + \mu_H \le \nu_G + \nu_H, \qquad (6.16)$$
since $\mu_G \le \nu_G$ and $\mu_H \le \nu_H$.

In the field case, if $G(D)$ and $H(D)$ are both basic, then $\mu_G = \mu_H$ (see Theorem 6.2). Hence,
$$2\mu_G \le \nu_G + \nu_H \quad\text{or}\quad 2\mu_G - \nu_G \le \nu_H. \qquad (6.17)$$
Moreover, if $G(D)$ is minimal-basic, then $\mu_G = \nu_G$ (see [23]). Thus, we have
$$\nu_G \le \nu_H. \qquad (6.18)$$
Hence, the overall constraint lengths of basic parity check matrices of the code are bounded below by the overall constraint length of the minimal-basic encoders of the code. In the sense of Theorem 6.6, we can find a minimal-basic encoder equivalent to $H(D)$, say $H'(D)$, such that
$$\nu_G = \nu_{H'}. \qquad (6.19)$$
Theorem 6.5 implies that the overall constraint length is invariant among equivalent minimal-basic encoders. Therefore, from (6.19), we can also say that the overall constraint length is invariant among minimal-basic PGMs and PPCMs of a code.

For the ring case, we focus on basic PPCMs that are obtained using CII. Recall that we get a parity check matrix $H(D)$ from $B^{-1}$ and compute $B^{-1}$ via the adjoints of
$$B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}$$
(see the proof of Theorem 2.4). It follows that
$$\nu_{H_i} \le \nu_G + \nu_L - \nu_{L_i} \qquad (6.20)$$
where $\nu_{H_i}$, $\nu_G$, $\nu_L$, and $\nu_{L_i}$ are the $i$-th constraint length of $H(D)$, the overall constraint length of $G(D)$, the sum of the row degrees of $L(D)$, and the $i$-th row degree of $L(D)$, respectively. Thus, each $i$-th constraint length of $H(D)$ is bounded above by the sum of the row degrees of $B$. In particular, if we use CIII to obtain a $1 \times n$ PPCM from an $(n-1) \times n$ PGM $G(D)$, we have
$$\nu_H = \mu_H = \mu_G \le \nu_G.$$
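As a small illustration of this (our own example; the entries of $H(D)$ are ordered so that $G(D)H(D)^t = 0$, the precise sign and ordering convention being that of CIII in Section 4.2.3): for the basic binary PGM
$$G(D) = \begin{pmatrix} D+1 & D & 1 \\ D^2 & D^2+D+1 & 1 \end{pmatrix},$$
whose $2 \times 2$ minors are $1$, $D^2+D+1$ and $D^2+1$, the resulting $1 \times 3$ PPCM is
$$H(D) = \begin{pmatrix} D^2+1 & D^2+D+1 & 1 \end{pmatrix},$$
and one checks directly that $G(D)H(D)^t = 0$ over $\mathbb{Z}_2$. Here $\nu_H = \mu_H = 2 = \mu_G \le \nu_G = 3$.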
6.2 Structural Properties of a Parity Check Matrix From a Generator Matrix over $\mathbb{Z}_{p^r}(D)$

We consider convolutional codes over the ring $\mathbb{Z}_{p^r}$. Wittenmark [23] studied convolutional codes over $\mathbb{Z}_M$ and pointed out that they can be partly studied using the theory of convolutional codes over fields. To do this, one can use the fact that $\mathbb{Z}_M \cong \mathbb{Z}_{p_1^{r_1}} \times \mathbb{Z}_{p_2^{r_2}} \times \cdots \times \mathbb{Z}_{p_s^{r_s}}$ where $M = p_1^{r_1} p_2^{r_2} \cdots p_s^{r_s}$. Moreover, Wittenmark noted that the study of convolutional codes over $\mathbb{Z}_M$ can be separated into the study of convolutional codes over $\mathbb{Z}_{p^r}$, where the properties of a generator matrix $G(D)$ of a convolutional code $C$ over $\mathbb{Z}_{p^r}$ can be connected with the properties of $G(D) \bmod p$, a generator matrix of a convolutional code $C \bmod p$ over the field $\mathbb{Z}_p$. Fagnani and Zampieri [5] gave a complete characterization of the structural properties of generator matrices and convolutional codes over $\mathbb{Z}_{p^r}$. Since $\mathbb{Z}_{p^r}$ satisfies DCC (see Section 2.1 on rings), we are assured of the existence of a parity check matrix for a convolutional code $C$ over $\mathbb{Z}_{p^r}$ (see Lemma 4.1). Given this, we can use CI to obtain a parity check matrix $H(D)$ from a generator matrix $G(D) = (I, A)$ over $\mathbb{Z}_{p^r}(D)$, and we can study the properties of $H(D)$ from the properties of $G(D)$.
Lemma 6.1. Suppose $G(D) = (I, A)$ is a $k \times n$ PGM for a convolutional code $C$ over $\mathbb{Z}_{p^r}$. Then
(i) $G(D)$ is right invertible,
(ii) $G(D)$ is basic,
(iii) $G(D)$ is non-catastrophic, and
(iv) $G(D)$ is minimal.

Proof:
Since $G(D)$ is systematic, $G(D)$ is right invertible (see Section 3.2). Clearly, $G(D) \bmod p$ is systematic since $G(D)$ is systematic. So, $I_k$ is a submatrix of $G(D) \bmod p$ where $\det(I_k) = 1$. Thus, the gcd of the $k \times k$ minors of $G(D) \bmod p$ is 1. Hence, from our discussion in Section 3.2 on basicity and non-catastrophicity, $G(D) \bmod p$ is basic and non-catastrophic (see also [15]). A PGM $G(D)$ is basic and non-catastrophic if and only if $G(D) \bmod p$ is basic and non-catastrophic, respectively [23]. Therefore, $G(D)$ is also basic and non-catastrophic. Part (iv) is due to Mittelholzer [16].
Remarkably, the systematicity of a PGM $G(D)$ of a convolutional code over $\mathbb{Z}_{p^r}$ implies the right invertibility, basicity, non-catastrophicity and minimality of $G(D)$. Recall that using CI, we can derive a parity check matrix $H(D)$, given by $H(D) = (-A^t,\ I_{n-k})$, from a systematic encoder given by $G(D) = (I, A)$. Given the fact that $H(D)$ is also systematic in this particular construction, we have the following corollary, whose proof is clear from the proof of Lemma 6.1.

Corollary 6.4. Suppose $H(D) = (-A^t,\ I)$ is a PPCM for a convolutional code $C$ over $\mathbb{Z}_{p^r}$ where $G(D) = (I, A)$ is a $k \times n$ PGM of $C$. Then
(i) $H(D)$ is right invertible,
(ii) $H(D)$ is basic,
(iii) $H(D)$ is non-catastrophic, and
(iv) $H(D)$ is minimal.
As a remark, given a systematic PGM $G(D)$ over $\mathbb{Z}_{p^r}(D)$, we can derive a systematic, right invertible, basic, non-catastrophic and minimal PPCM $H(D)$ over $\mathbb{Z}_{p^r}(D)$. A specific situation is given in Corollary 6.5, where both the PGM $G(D)$ and the PPCM $H(D)$ (obtained from $G(D)$ using CI) satisfy PDP.

Corollary 6.5. Consider a $k \times n$ ($n = 2k$) PGM $G(D) = (I_k, A)$ of a convolutional code over $\mathbb{Z}_{p^r}$ and a parity check matrix $H(D) = (-A^t,\ I_k)$ (note that $G(D)$ and $H(D)$ are of the same size). Let $A = (a_{ij})$, $1 \le i \le k$, $1 \le j \le k$. If $\deg(a_{ij}) = \deg(a_{ji})$ for all $i \neq j$ and the determinant of the indicator matrix of $A$, $\det([A]_h)$, is a unit in $\mathbb{Z}_{p^r}$, then $G(D)$ and $H(D)$ are minimal-basic.
Proof:
Consider the square matrix $A = (a_{ij})$. Since $\deg(a_{ij}) = \deg(a_{ji})$ for all $i \neq j$, $A$ and $-A^t$ have exactly the same row degrees. Hence, the $i$-th constraint lengths of $G(D)$ and $H(D)$ are equal. Moreover, since $\det([A]_h)$ is a unit in $\mathbb{Z}_{p^r}$, it follows that $\det([-A^t]_h)$ is also a unit in $\mathbb{Z}_{p^r}$. Hence, $[G(D)]_h$ and $[H(D)]_h$ both have submatrices, given by $[A]_h$ and $[-A^t]_h$, respectively, whose determinants are units in $\mathbb{Z}_{p^r}$. Thus, $G(D)$ and $H(D)$ both satisfy PDP. $G(D)$ and $H(D)$ are basic by Lemma 6.1 and Corollary 6.4, respectively. Therefore, $G(D)$ and $H(D)$ are minimal-basic.
Recall that using CII, given a $k \times n$ basic encoder $G(D)$ over $\mathbb{Z}_{p^r}[D]$, we derive an $(n-k) \times n$ PPCM $H(D)$ from $B^{-1}$, where
$$B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix} \quad\text{and}\quad B^{-1} = \begin{pmatrix} P(D) & H(D)^t \end{pmatrix}, \qquad P(D) \in \mathbb{Z}_{p^r}[D]^{n \times k}.$$
Since $H(D)^t$ is a submatrix of the square PMPI matrix $B^{-1}$, by Theorem 4.2, $H(D)$ is basic. Furthermore, we show how to obtain a polynomial right inverse of $H(D)$. Consider the matrix product given by
$$BB^{-1} = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}\begin{pmatrix} P(D) & H(D)^t \end{pmatrix} = I_n. \qquad (6.21)$$
Transposing each expression in (6.21), we have
$$(B^{-1})^t B^t = \begin{pmatrix} P(D)^t \\ H(D) \end{pmatrix}\begin{pmatrix} G(D)^t & L(D)^t \end{pmatrix} = I_n. \qquad (6.22)$$
Further, via block matrix multiplication we have
$$\begin{pmatrix} P(D)^t G(D)^t & P(D)^t L(D)^t \\ H(D)G(D)^t & H(D)L(D)^t \end{pmatrix} = \begin{pmatrix} I_k & 0_{k \times (n-k)} \\ 0_{(n-k) \times k} & I_{n-k} \end{pmatrix}. \qquad (6.23)$$
Therefore, $L(D)^t$ is a polynomial right inverse of $H(D)$; indeed, $H(D)$ is basic. Hence, using CII, a basic PGM will yield a basic PPCM. In the case of CIII, where an $(n-1) \times n$ matrix $G(D)$ is a basic PGM over $\mathbb{Z}_{p^r}[D]$, the $1 \times n$ PPCM $H(D)$ taken from the $(n-1) \times (n-1)$ minors of $G(D)$ is also basic.
We end this section by illustrating Corollary 6.5.

Example 6.1. Consider a $2 \times 4$ PGM $G(D)$ over $\mathbb{Z}_4[D]$, given by
$$G(D) = \begin{pmatrix} 1 & 0 & D & D^2+3 \\ 0 & 1 & 3D^2+1 & 2 \end{pmatrix}.$$
Let
$$A = \begin{pmatrix} D & D^2+3 \\ 3D^2+1 & 2 \end{pmatrix}.$$
So, $-A^t$ is given by
$$-A^t = \begin{pmatrix} 3D & D^2+3 \\ 3D^2+1 & 2 \end{pmatrix}$$
and, using CI, a PPCM $H(D)$ is given by
$$H(D) = \begin{pmatrix} 3D & D^2+3 & 1 & 0 \\ 3D^2+1 & 2 & 0 & 1 \end{pmatrix}.$$
Hence the respective indicator matrices of $A$ and $-A^t$ are given by
$$[A]_h = \begin{pmatrix} 0 & 1 \\ 3 & 0 \end{pmatrix} \quad\text{and}\quad [-A^t]_h = \begin{pmatrix} 0 & 1 \\ 3 & 0 \end{pmatrix}.$$
Moreover, we have
$$[G(D)]_h = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 3 & 0 \end{pmatrix} \quad\text{and}\quad [H(D)]_h = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 3 & 0 & 0 & 0 \end{pmatrix}.$$
It is clear that $[G(D)]_h$ and $[H(D)]_h$ have submatrices whose determinants are units in $\mathbb{Z}_{p^r}$, given by $\det([A]_h)$ and $\det([-A^t]_h)$, respectively. Therefore, $G(D)$ and $H(D)$ satisfy PDP. Since $G(D)$ and $H(D)$ are both basic, they are minimal-basic.
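The computations in Example 6.1 are easy to verify in MAGMA, in the same style as the programs listed in Appendix A. The following short session is our own sketch (not one of the thesis programs) and checks that $G(D)H(D)^t = 0$ over $\mathbb{Z}_4$:

// Verification of Example 6.1 over Z_4
Z4 := IntegerRing(4);
P<D> := PolynomialRing(Z4);
G := Matrix(P, 2, 4, [1, 0, D, D^2 + 3,
                      0, 1, 3*D^2 + 1, 2]);
H := Matrix(P, 2, 4, [3*D, D^2 + 3, 1, 0,
                      3*D^2 + 1, 2, 0, 1]);
G*Transpose(H); // expect the 2 x 2 zero matrix

Loading the routines of Appendix A.4 and calling HasPDP(G) and HasPDP(H) should likewise return true for both matrices, in agreement with the PDP claim above.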
6.3 Summary

Connections between encoders and parity check matrices, with respect to their subdeterminants and constraint lengths, were discussed. We looked closely at parity check matrices derived from encoders using CI, CII and CIII. In the $\mathbb{Z}_{p^r}$ case, it was shown that the minors of a basic encoder $G(D)$ and a basic PPCM $H(D)$ (derived from $G(D)$ using CII) are equal up to units in $\mathbb{Z}_{p^r}[D]$. The constraint lengths of $H(D)$ are bounded above by the row degrees of the square PMPI matrix that contains $G(D)$. We also studied structural properties of a parity check matrix derived from a given encoder. It was proven that systematicity of a PGM $G(D)$ over $\mathbb{Z}_{p^r}[D]$ implies the systematicity, right invertibility, basicity, non-catastrophicity and minimality of a PPCM $H(D)$ derived from $G(D)$ using CI. In the context of CII and CIII, it was demonstrated that given a basic PGM, we can always obtain a basic PPCM.
Chapter 7
LOW-DENSITY CONVOLUTIONAL (LDC) CODES

Low-density parity check block codes (LDPC-BCs) were first discovered by Gallager in the early 60s [7]. Low-density parity check convolutional codes (LDPC-CCs), or simply low-density convolutional (LDC) codes, which can be considered as the convolutional analog of LDPC-BCs, were first introduced by Felstrom and Zigangirov in the late 90s [1]. LDC codes caught the attention of many researchers mainly because, unlike LDPC-BCs, they are not limited to a single block length [1]. The known constructions of LDC codes are few compared to those of the well-explored LDPC-BCs. In the literature, the focus is on the efficient and effective construction and analysis of LDC codes. In this chapter, however, we discuss the formal theory of LDC codes. Specifically, we introduce the definition of LDC codes; we also give a specific construction of an LDC code found in [4]; and lastly, we observe the connections between the sparse syndrome former and the encoder of an LDC code. The main reference for this chapter is the paper by Engdahl and Zigangirov [4].
7.1 Definition of LDC codes

We begin with the definition of a binary LDPC-BC.

Definition 7.1 (Engdahl and Zigangirov, [4]). A rate-$k/n$ binary block code defined by its transposed parity check matrix $H^t$ is called a low-density parity check block code (LDPC-BC) if the rows $h_i$, $i = 0, 1, \ldots, n-1$, of $H^t$ are sparse, i.e. if
$$\mathrm{wt}_H(h_i) \ll n - k, \qquad (7.1)$$
where $\mathrm{wt}_H(\cdot)$ is given in (2.6). If all rows of $H^t$ have the same weight and the same applies for all columns, then the code is called a homogeneous (or regular) LDPC-BC.
In [4], Engdahl and Zigangirov introduced the definition of binary LDC codes in the sense of the general, time-varying rate-$k/n$ binary convolutional code. That is, the code sequence, say $v$, depends not only on the information sequence $u$ but also varies with the time index $j$. Consequently, let the information and code sequences of a time-varying rate-$k/n$ convolutional encoder be given by
$$u = \ldots u_{-2} u_{-1} u_0 u_1 u_2 \ldots u_j \ldots \quad\text{where } u_j = u_{jk}, \ldots, u_{(j+1)k-1} \qquad (7.2)$$
and
$$v = \ldots v_{-2} v_{-1} v_0 v_1 v_2 \ldots v_j \ldots \quad\text{where } v_j = v_{jn}, \ldots, v_{(j+1)n-1}, \qquad (7.3)$$
respectively, where $u_i, v_i \in \mathbb{F}_2$, $i \in \mathbb{Z}$. Let
$$H^t = \begin{pmatrix}
\ddots & \ddots & & \ddots & \\
& H_0(-1)^t & H_1(0)^t & \cdots & H_{m_s}(m_s-1)^t & \\
& & H_0(0)^t & H_1(1)^t & \cdots & H_{m_s}(m_s)^t \\
& & & \ddots & \ddots & & \ddots
\end{pmatrix} \qquad (7.4)$$
be the transposed infinite parity check matrix of the convolutional code, also known as the syndrome former, where the $H_i(j)^t$, $i = 0, 1, \ldots, m_s$, are binary $n \times (n-k)$ submatrices and the value $m_s$ is the syndrome former memory. Moreover, assume that the following two conditions hold:
(i) the rank of $H_0(j)^t$ is $n - k$ for all $j \in \mathbb{Z}$, (7.5)
which is fulfilled by the last $(n-k)$ rows of $H_0(j)^t$ being linearly independent, and
(ii) $H_{m_s}(j)^t \neq 0$ for all $j \in \mathbb{Z}$. (7.6)
We are now ready to give the definition of a binary LDC code.

Definition 7.2 (Engdahl and Zigangirov, [4]). A rate-$k/n$ binary convolutional code defined by its infinite syndrome former in (7.4) of memory $m_s$ is called a low-density convolutional (LDC) code if the row vectors $h_i$ of $H^t$ are sparse for all $i \in \mathbb{Z}$, that is, if
$$\mathrm{wt}_H(h_i) \ll (n - k)\,m_s, \quad \forall i \in \mathbb{Z}. \qquad (7.7)$$
If all rows of the syndrome former have the same Hamming weight and the same applies for all columns, then the LDC code is called homogeneous.

It is possible to omit condition (7.6), but then the syndrome former memory would not be defined and we would have to change Definition 7.2 to some extent. As with LDPC-BCs, we can also consider semi-homogeneous LDC codes. A rate-$k/n$ homogeneous LDC code with syndrome former memory $m_s$, for which each row in $H^t$ has weight $\alpha$ and each column has weight $\alpha/(1 - k/n)$, where $\alpha/(1 - k/n)$ is an integer, will be called an $(m_s,\ \alpha,\ \alpha/(1-k/n))$-code. For semi-homogeneous LDC codes we take $\alpha$ to be the average weight of the rows in $H^t$. If $H_i(j) = H_i(j + T)$, $i = 0, 1, \ldots, m_s$, for all $j \in \mathbb{Z}$, then the LDC code is periodic with period $T$.
7.2 Construction of an LDC code

We give a specific construction (from [4]) of the syndrome former of a semi-homogeneous LDC code using the Jimenez-Zigangirov method [1], starting from the transposed parity check matrix of an LDPC-BC. We begin by cutting the $8 \times 4$ transposed parity check matrix of a block code, having one 1 in each row and two 1s in each column, as shown in Figure 7.1(a).

In Figure 7.1(b), the lower portion is moved and the resulting matrix is repeated periodically, where the blank spaces are assumed to have zero entries. Thus, we get an infinite matrix that has one 1 in each row and two 1s in each column. However, this matrix does not satisfy the syndrome former conditions (7.5) and (7.6).

As shown in Figure 7.1(c), we obtain the syndrome former of a homogeneous LDC code, having two 1s in each row and four 1s in each column, by appending the submatrix $(11)^t$ at the left end of each pair of rows of the matrix in Figure 7.1(b).

The matrix in Figure 7.1(c) satisfies (7.5) but not (7.6). To satisfy the latter, we append the submatrix $(01)^t$ at the right end of each pair of rows, as shown in Figure 7.1(d). Therefore, we obtain the syndrome former of a periodic (period $T = 4$) semi-homogeneous LDC $(2,\ 2.5,\ 5)$-code, having two 1s in each even row, three 1s in each odd row, and five 1s in each column.

Figure 7.1: Illustration of the Jimenez-Zigangirov method for constructing the syndrome former of a semi-homogeneous LDC $(2,\ 2.5,\ 5)$-code
7.3 Encoder of an LDC code

We focus on a time-invariant rate-$k/n$ binary convolutional code $\widetilde{C}$ described by an $(n-k) \times n$ PPCM $H(D)$. The matrix $H(D)^t$ is a syndrome former of $\widetilde{C}$ in D-transform (see [11]). The syndrome former $H(D)^t$ can be expanded as
$$H(D)^t = H_0^t + H_1^t D + \cdots + H_{m_s}^t D^{m_s} \qquad (7.8)$$
where $H_i^t \in \mathbb{Z}_2^{n \times (n-k)}$, $0 \le i \le m_s$. Since $H(D)$ is a parity check matrix of $\widetilde{C}$,
$$v(D)H(D)^t = 0_{1 \times (n-k)} \qquad (7.9)$$
for all codewords $v(D) \in \widetilde{C}$. Using (7.8), we can write (7.9) recursively as
$$v_j H_0^t + v_{j-1} H_1^t + \cdots + v_{j-m_s} H_{m_s}^t = 0_{1 \times (n-k)} \qquad (7.10)$$
(see [11]). Equivalently, for causal codewords $v = v_0 v_1 v_2 \ldots$ in $\widetilde{C}$, we have
$$vH^t = 0$$
where
$$H^t = \begin{pmatrix}
H_0^t & H_1^t & H_2^t & \cdots & H_{m_s}^t & & \\
& H_0^t & H_1^t & H_2^t & \cdots & H_{m_s}^t & \\
& & \ddots & \ddots & & \ddots &
\end{pmatrix}$$
is a semi-infinite syndrome former matrix of $\widetilde{C}$. In a similar manner, $\widetilde{C}$ is an LDC code if $H^t$ satisfies (7.5), (7.6) and the condition in Definition 7.2.
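For concreteness, the band structure of $H^t$ can be assembled explicitly. The following MAGMA sketch is our own (the helper name SyndromeFormerWindow is ours and it is not among the thesis programs in Appendix A); it builds a finite window of $N$ block rows of $H^t$ from the submatrices $H_0^t, \ldots, H_{m_s}^t$, supplied as a sequence of $n \times (n-k)$ matrices over GF(2):

// blocks is a sequence [H0t, H1t, ..., Hmst] of n x (n-k) matrices over GF(2)
SyndromeFormerWindow := function(blocks, N)
    n  := Nrows(blocks[1]);
    nk := Ncols(blocks[1]);
    ms := #blocks - 1;
    Ht := ZeroMatrix(GF(2), N*n, (N + ms)*nk);
    for j := 1 to N do           // block row j of the window
        for i := 0 to ms do      // H_i^t sits i block columns to the right
            Ht := InsertBlock(Ht, blocks[i+1], (j-1)*n + 1, (j-1+i)*nk + 1);
        end for;
    end for;
    return Ht;
end function;

For instance, with $n = 2$, $n-k = 1$ and the three $2 \times 1$ blocks of the rate-$1/2$ code treated in Example 7.1 below, this sketch produces an $8 \times 6$ window of the corresponding syndrome former, on which the rank condition (7.5) and the sparsity condition (7.7) can be inspected directly.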
As we have seen in Section 7.2, the construction of an LDC code $\widetilde{C}$ starts with the syndrome former $H^t$. In other words, $\widetilde{C}$ is completely determined by its syndrome former $H^t$ or $H(D)^t$. That is,
$$\widetilde{C} = \{v \mid vH^t = 0\} \quad\text{or}\quad \widetilde{C} = \{v(D) \mid v(D)H(D)^t = 0_{1 \times (n-k)}\}.$$
In general, deriving the encoder of $\widetilde{C}$ is not straightforward. Nevertheless, (7.10) can be used to describe the encoder of $\widetilde{C}$. In particular, if the first $k$ components of a code block $v_j$ coincide with the information block $u_j$ at time $j$, and the other $n-k$ components of $v_j$ are defined by (7.10), then an encoder of such a code is systematic. Moreover, if $H(D)^t$ contains the $(n-k) \times (n-k)$ identity matrix, then we can derive an encoder of $\widetilde{C}$ from $H(D)^t$ using CI. This is worth illustrating.
Example 7.1. Let $H^t$ be a syndrome former of the rate-$1/2$ LDC $(2, 1, 1)$-code $\widetilde{C}$, given by
$$H^t = \begin{pmatrix}
0 & 0 & 1 & & & \\
1 & 0 & 0 & & & \\
& 0 & 0 & 1 & & \\
& 1 & 0 & 0 & & \\
& & \ddots & \ddots & \ddots &
\end{pmatrix},$$
where the blank entries are zero and the submatrices of $H^t$ are given by
$$H_0^t = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad H_1^t = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad\text{and}\qquad H_2^t = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
Note that $H_0^t$ has rank 1 and $H_2^t \neq 0$. Thus, $H^t$ satisfies (7.5) and (7.6). The syndrome former memory is $m_s = 2$. Consider a causal code sequence $v = v_0 v_1 v_2 \ldots v_j \ldots = v_0^{(1)} v_0^{(2)} v_1^{(1)} v_1^{(2)} v_2^{(1)} v_2^{(2)} \ldots v_j^{(1)} v_j^{(2)} \ldots$, where $v_j^{(1)}, v_j^{(2)} \in \mathbb{Z}_2$. Following (7.10), at time instant $j$,
$$v_j H_0^t + v_{j-1} H_1^t + v_{j-2} H_2^t
= \begin{pmatrix} v_j^{(1)} & v_j^{(2)} \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix}
+ \begin{pmatrix} v_{j-1}^{(1)} & v_{j-1}^{(2)} \end{pmatrix}\begin{pmatrix} 0 \\ 0 \end{pmatrix}
+ \begin{pmatrix} v_{j-2}^{(1)} & v_{j-2}^{(2)} \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = 0. \qquad (7.11)$$
Then, solving for $v_j^{(2)}$ in (7.11), we have
$$v_j^{(2)} = v_{j-2}^{(1)}. \qquad (7.12)$$
Therefore, each code block at time $j$ is given by
$$\begin{pmatrix} v_j^{(1)} & v_{j-2}^{(1)} \end{pmatrix}. \qquad (7.13)$$
Note that we can write $H^t$ in D-transform as
$$H(D)^t = \begin{pmatrix} D^2 \\ 1 \end{pmatrix}.$$
Using Lemma 4.2, we get an encoder given by
$$G(D) = \begin{pmatrix} 1 & D^2 \end{pmatrix}.$$
It is clear that $G(D)$ is systematic. We can further verify that $G(D)$ is a systematic encoder of $\widetilde{C}$. Consider the submatrices of $G(D)$ given by $G_0 = \begin{pmatrix} 1 & 0 \end{pmatrix}$, $G_1 = \begin{pmatrix} 0 & 0 \end{pmatrix}$, and $G_2 = \begin{pmatrix} 0 & 1 \end{pmatrix}$. Then the semi-infinite generator matrix $G$ is given by
$$G = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 1 & & & & \\
& & 1 & 0 & 0 & 0 & 0 & 1 & & \\
& & & & 1 & 0 & 0 & 0 & 0 & 1 \\
& & & & & & \ddots & & & & \ddots
\end{pmatrix}.$$
Consider the information word $u = u_0 u_1 u_2 \ldots u_j \ldots$. From (3.1), the code block $v_j$ corresponding to the information block $u_j \in \mathbb{Z}_2$ is given by
$$v_j = u_j G_0 + u_{j-1} G_1 + u_{j-2} G_2
= u_j \begin{pmatrix} 1 & 0 \end{pmatrix} + u_{j-1} \begin{pmatrix} 0 & 0 \end{pmatrix} + u_{j-2} \begin{pmatrix} 0 & 1 \end{pmatrix}
= \begin{pmatrix} u_j & u_{j-2} \end{pmatrix}
= \begin{pmatrix} v_j^{(1)} & v_j^{(2)} \end{pmatrix}.$$
Since $u_j = v_j^{(1)}$ and $u_{j-2} = v_{j-2}^{(1)}$, the code block $v_j = \begin{pmatrix} v_j^{(1)} & v_{j-2}^{(1)} \end{pmatrix}$ satisfies (7.12) and (7.13). Hence, $v = v_0 v_1 v_2 \ldots v_j \ldots$ where $v_j = \begin{pmatrix} u_j & u_{j-2} \end{pmatrix}$ for all non-negative $j \in \mathbb{Z}$ is a codeword of $\widetilde{C}$. Since the first component of each code block $v_j$ coincides with the information block $u_j$, $G(D)$ is a systematic encoder of $\widetilde{C}$.
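The D-transform computations in Example 7.1 can be replayed with a few MAGMA lines (again our own sketch, in the spirit of the appendix programs; the information word chosen is arbitrary):

// Rate-1/2 code of Example 7.1 over Z_2
P<D> := PolynomialRing(GF(2));
G := Matrix(P, 1, 2, [1, D^2]);      // systematic encoder obtained via CI
H := Matrix(P, 1, 2, [D^2, 1]);      // H(D); H(D)^t is the syndrome former
u := Matrix(P, 1, 1, [1 + D + D^3]); // an arbitrary information word
v := u*G;                            // v(D) = (u(D), u(D)*D^2)
v*Transpose(H);                      // expect the 1 x 1 zero matrix

Every codeword produced by $G(D)$ is annihilated by the syndrome former, and the first coordinate of $v(D)$ is $u(D)$ itself, which again shows that the encoder is systematic.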
In general, the memory of a syndrome former is not equal to the memory of an encoder of a code. But in Example 7.1, we have seen that the memory of the encoder $G(D)$ is equal to the memory of the syndrome former $H(D)^t$. One reason is that the entries of $G(D)$ are taken from the entries of $H(D)^t$. Moreover, it is easy to verify that $G(D)$ and $H(D)$ are both minimal-basic and that their respective memories are exactly their overall constraint lengths. Recall that in the field case, the overall constraint length is invariant among minimal-basic PGMs and PPCMs of a code (see Theorem 6.6). As a remark, an encoder of an LDC code $\widetilde{C}$ can easily be derived from a systematic syndrome former of $\widetilde{C}$ using CI. Consequently, the memory of the encoder is exactly the memory of the syndrome former. In addition, if a PPCM $H(D)$ of an LDC code $\widetilde{C}$ is minimal-basic, with overall constraint length $\nu_H$, then $\widetilde{C}$ can be generated by a minimal-basic encoder with overall constraint length $\nu_H$.

If the $n \times (n-k)$ syndrome former $H(D)^t$ is basic, using CII, we can complete $H(D)^t$ into an $n \times n$ PMPI matrix $B' = \begin{pmatrix} P(D) & H(D)^t \end{pmatrix}$ over $\mathbb{Z}_2[D]$. Then, a $k \times n$ encoder $G(D)$ can be obtained from the inverse of $B'$, given by
$$(B')^{-1} = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}.$$
Since we compute $(B')^{-1}$ via the adjoints of $B'$, the $i$-th constraint lengths $\nu_{G_i}$, $1 \le i \le k$, of $G(D)$ are bounded above by the row degrees of $(B')^t$. Specifically, we have
$$\nu_{G_i} \le \nu_H + \nu_{P^t} - \nu_{P^t_i}$$
where $\nu_H$, $\nu_{P^t}$ and $\nu_{P^t_i}$ are the overall constraint length of $H(D)$, the sum of the row degrees of $P(D)^t$, and the $i$-th row degree of $P(D)^t$, respectively.
7.4 Summary

The definition of a binary low-density convolutional (LDC) code was given. A specific construction of a semi-homogeneous LDC code using the Jimenez-Zigangirov method was considered. It has been observed that an encoder of an LDC code can be obtained from the syndrome former $H(D)^t$ of the code if $H(D)$ is systematic and basic. Consequently, the connections between the syndrome former and an encoder were also investigated.
SUMMARY AND RECOMMENDATIONS

We have devised constructions of parity check matrices for convolutional codes over rings. In particular, it was shown that, as in the block code case, a parity check matrix can be obtained from a systematic encoder. Also, a basic encoder was completed into a square PMPI matrix $B$ and a parity check matrix was derived from the columns of $B^{-1}$. The entries of a $1 \times n$ parity check matrix were taken from the $(n-1) \times (n-1)$ subdeterminants of an $(n-1) \times n$ encoder. The given constructions can be extended to a more general ring. One can devise new constructions of parity check matrices that will lead to the analysis of the code and its dual.

A specific definition of the dual of a convolutional code $C$ was treated. This definition is an exact analog of the block code case, wherein $C$ is thought of as a linear block code over $R(D)$. Sufficient conditions for a systematic encoder to be an encoder for a self-dual convolutional code have been given. Hence, new examples of encoders of self-dual convolutional codes over $\mathbb{Z}_2$ and $\mathbb{Z}_4$ were constructed. Distance properties of these examples can be studied.

It has been shown that there are direct connections between encoders and parity check matrices in terms of their subdeterminants and constraint lengths. In the $\mathbb{Z}_{p^r}$ case and in the context of CII, the minors of basic encoders and basic parity check matrices are equal up to units in $\mathbb{Z}_{p^r}[D]$. The $i$-th constraint lengths of a parity check matrix $H(D)$, taken from the columns of a square PMPI matrix given by $B^{-1} = \begin{pmatrix} P(D) & H(D)^t \end{pmatrix}$, are bounded above by the sum of the row degrees of the square PMPI matrix $B$. A specific case of this was demonstrated using CIII: the overall constraint length of a $1 \times n$ PPCM, taken from the $(n-1) \times (n-1)$ minors of an $(n-1) \times n$ PGM $G(D)$, is equal to $\mu_G$, which is less than or equal to the sum of the $i$-th constraint lengths of $G(D)$. Parity check matrices were treated as encoders of the dual code. Consequently, the structural properties of parity check matrices have been studied. Using CI, it was proven that the systematicity of a PGM over $\mathbb{Z}_{p^r}[D]$ causes a PPCM to be systematic, right invertible, basic, non-catastrophic and minimal. We considered a specific case where the encoder and the parity check matrix are both minimal-basic. Using CII and CIII, it was proven that a basic PGM over $\mathbb{Z}_{p^r}[D]$ will always yield a basic PPCM over $\mathbb{Z}_{p^r}[D]$. It is suggested to analyze the structural properties of a PPCM $H(D)$ via the PMPI matrices $B = \begin{pmatrix} G(D) \\ L(D) \end{pmatrix}$ and $B^{-1} = \begin{pmatrix} P(D) & H(D)^t \end{pmatrix}$. In general, $B$ and $B^{-1}$ are not unique. Thus, it is recommended to study the equivalence of parity check matrices $H(D)$ via the structures of $B$ and $B^{-1}$ and to determine its relation to the group structure of the set of all square PMPI matrices over $R[D]$.

A thorough exposition of the classical theory of low-density convolutional (LDC) codes was presented. A binary LDC code was defined and the time-invariant case was adopted. A specific construction of a semi-homogeneous LDC code using the Jimenez-Zigangirov method has been given. An encoder of an LDC code can be obtained (using CI and CII) from the syndrome former $H(D)^t$ of the code if $H(D)$ is systematic and basic. The connections between the syndrome former and an encoder were investigated.

Finally, MAGMA programs were created to: compute for the $k \times k$ subdeterminants of a $k \times n$ encoder; check the basicity, reducedness (see [15]), and PDP of a PGM; estimate the free distance of a convolutional code using its PGM; and construct encoders for self-dual convolutional codes. An implementation of these programs in other computer algebra software is recommended to achieve higher computing efficiency.
REFERENCES

[1] A. J. Felstrom and K. S. Zigangirov, Time-varying periodic convolutional codes with low-density parity-check matrix, IEEE Trans. Inform. Theory, 45 (1999), pp. 2181-2191.

[2] A. R. Calderbank, G. D. Forney, Jr., and A. Vardy, Minimal tail-biting trellises: The Golay code and more, IEEE Trans. Inform. Theory, 45 (1999), pp. 1435-1455.

[3] R. B. Dela Cruz, Hilbert series of quaternary convolutional codes, master's thesis, University of the Philippines, November 2006.

[4] K. Engdahl and K. S. Zigangirov, On the theory of low-density convolutional codes I, in AAECC, 1999, pp. 77-86.

[5] F. Fagnani and S. Zampieri, System-theoretic properties of convolutional codes over rings, IEEE Trans. Inform. Theory, 47 (2001), pp. 2256-2274.

[6] G. D. Forney, Jr., Convolutional codes I: Algebraic structure, IEEE Trans. Inform. Theory, IT-16 (1970), pp. 720-738.

[7] R. Gallager, Low-density parity-check codes, IRE Trans. Inform. Theory, 8 (1962), pp. 21-28.

[8] S. H. Gupta and B. Virmani, LDPC for Wi-Fi and WiMAX technologies, ELECTRO-2009, (2009), pp. 262-265.

[9] K. Hoffman and R. Kunze, Linear Algebra, New Jersey: Prentice-Hall, Inc., 2nd ed., 1971.

[10] T. W. Hungerford, Algebra (Graduate Texts in Mathematics), vol. 73, New York: Springer-Verlag, 1974.

[11] R. Johannesson and Kamil Sh. Zigangirov, Fundamentals of Convolutional Coding, New York: IEEE Press, 1999.

[12] R. Johannesson, P. Stahl, and E. Wittenmark, A note on type II convolutional codes, IEEE Trans. Inform. Theory, 46 (2000), pp. 1510-1514.

[13] B. Kolman and D. R. Hill, Elementary Linear Algebra, New Jersey: Prentice-Hall, Inc., 7th ed., 2000.

[14] J. L. Massey, Coding theory, in Handbook in Applied Mathematics, W. Ledermann, ed., vol. 5, pt. B, Combinatorics and Geometry, W. Ledermann and S. Vajda, eds., Chichester and New York: Wiley, 1985, ch. 16.

[15] R. J. McEliece, The algebraic theory of convolutional codes, in Handbook of Coding Theory, V. S. Pless and W. C. Huffman, eds., Amsterdam, The Netherlands: North-Holland, Elsevier, 1998.

[16] T. Mittelholzer, Minimal encoders for convolutional codes over rings, Communications Theory and Applications: Systems, Signal Processing and Error Control Coding, (1993), pp. 30-36.

[17] ----, Convolutional codes over rings and the two chain condition, Proc. IEEE Int. Symp. Information Theory, (1997), p. 285.

[18] E. M. Rains and N. J. A. Sloane, Self-dual codes, May 19, 1998.

[19] H.-G. Schneider, On the weight adjacency matrix of convolutional codes, PhD thesis, Rijksuniversiteit Groningen, September 2008.

[20] C. E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J., 27 (1948), pp. 379-423.

[21] V. P. Sison, Convolutional codes from linear block codes over Galois rings, PhD thesis, University of the Philippines Diliman, October 2005.

[22] ----, Heller-type bounds for the homogeneous free distance of convolutional codes over finite Frobenius rings, MATIMYAS MATEMATIKA: Journal of the Mathematical Society of the Philippines, 30 (2007), pp. 23-30.

[23] E. Wittenmark, An encounter with convolutional codes over rings, PhD thesis, Lund University, Sweden, 1998.
Appendix A
MAGMA PROGRAMS

Below are the MAGMA programs that we used to aid us in analyzing problems or concepts in this thesis. Note that the subroutine in A.1 for computing the subdeterminants or minors of a generator matrix is also used in A.2, A.3 and A.4. Consequently, the size of the input generator matrix G(D) for these programs is limited to k x n where 1 <= k <= 4. All lines with // preceding them are comments on specific syntaxes. We give a sample run of each program.

A.1 Computing for the subdeterminants of a generator matrix

This subroutine will compute for the k x k subdeterminants or minors of a k x n generator matrix where k is limited to 1 <= k <= 4.
forward minors, gd;
Minors:=function(g1)
k:=Nrows(g1);
n:=Ncols(g1);
GT:=Transpose(g1);
DT:=[];
d:=0;
for i:=1 to n do
g:=RowSubmatrix(GT,i);
end for;
case k:
when 1:
DT:=Eltseq(g1);
when 2:
for f:=1 to (n-1) do
for s:=f+1 to n do
m:=Matrix([g[f],g[s]]); // a 2x2 submatrix
d:=Determinant(m); // a 2x2 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
when 3:
for f:=1 to (n-2) do
for s:=f+1 to (n-1) do
for t:= s+1 to n do
m:=Matrix([g[f],g[s],g[t]]); // a 3x3 submatrix
d:=Determinant(m); // a 3x3 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
when 4:
for f:=1 to (n-3) do
for s:=f+1 to (n-2) do
for t:= s+1 to n-1 do
for q:=t+1 to n do
m:=Matrix([g[f],g[s],g[t],g[q]]); // a 4x4 submatrix
d:=Determinant(m); // a 4x4 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
end for;
end case;
return DT;
end function;
Example A.1
> g1;
[ D + 1 3*D 1]
[ D^2 D^2 + D + 1 1]
> Minors(g1);
[
2*D^3 + 2*D^2 + 2*D + 1,
3*D^2 + D + 1,
3*D^2 + 2*D + 3
]
A.2 Checking basicity of a PGM

These subroutines will test for the basicity of a k x n PGM G(D) over F(D). Recall that G(D) is basic if and only if the gcd of the k x k minors of G(D) is 1 (see [15]). Basically, the algorithm for this program is based on that theorem.
forward Minors, gd;
Minors:=function(g1)
k:=Nrows(g1);
n:=Ncols(g1);
GT:=Transpose(g1);
DT:=[];
d:=0;
for i:=1 to n do
g:=RowSubmatrix(GT,i); // the ith row of a PGM as a sequence
end for;
case k:
when 1:
DT:=Eltseq(g1);
when 2:
for f:=1 to (n-1) do
for s:=f+1 to n do
m:=Matrix([g[f],g[s]]); // a 2x2 submatrix
d:=Determinant(m); // a 2x2 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
when 3:
for f:=1 to (n-2) do
for s:=f+1 to (n-1) do
for t:= s+1 to n do
m:=Matrix([g[f],g[s],g[t]]); // a 3x3 submatrix
d:=Determinant(m); // a 3x3 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
when 4:
for f:=1 to (n-3) do
for s:=f+1 to (n-2) do
for t:= s+1 to n-1 do
for q:=t+1 to n do
m:=Matrix([g[f],g[s],g[t],g[q]]); // a 4x4 submatrix
d:=Determinant(m); // a 4x4 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
end for;
end case;
return DT;
end function;
// This subroutine will compute for the gcd of the kxk minors
forward gd, IsBasic;
gd:=function(g1)
GTCD:=[];
gcd:=0;
m:=Minors(g1); // m is a sequence of kxk minors of g1 (m := DT)
for i:=1 to #m do // #m is the number of elements in the sequence
gcd:=GCD(m[i],gcd); // get the GCD of the minors
if gcd ne 1 then
gcd:=gcd;
end if;
end for;
return gcd,m;
end function;
// This subroutine will test for the basicity of a given PGM
// using the subroutines Minors and gd.
IsBasic:=function(g1);
k:=Nrows(g1);
n:=Ncols(g1);
gcd,m:=gd(g1);
print g1,"is a",k,"x",n,"matrix";
print "there are",#m, "(",k,"x",k,")-","minors given below";
print m;
if gcd eq 1 then
return "The given PGM is BASIC since the gcd of the
minors is", gcd;
else
return "The given PGM is NOT BASIC. The gcd of the
minors is", gcd,".";
end if;
end function;
Example A.2
> g2;
[ D + 1 D 1]
[ D^2 D^2 + D + 1 1]
> IsBasic(g2);
[ D + 1 D 1]
[ D^2 D^2 + D + 1 1]
is a 2 x 3 matrix
there are 3 ( 2 x 2 )- minors given below
[
1,
D^2 + D + 1,
D^2 + 1
]
The given PGM is BASIC since the gcd of the minors is 1
> g3;
[ 1 D^2 + D + 1 D^2 + 1 D + 1]
[ D D^2 + D + 1 D^2 1]
> IsBasic(g3);
[ 1 D^2 + D + 1 D^2 + 1 D + 1]
[ D D^2 + D + 1 D^2 1]
is a 2 x 4 matrix
there are 6 ( 2 x 2 )- minors given below
[
D^3 + 1,
D^3 + D^2 + D,
D^2 + D + 1,
D^2 + D + 1,
D^3 + D^2 + D,
D^3 + 1
]
The given PGM is NOT BASIC. The gcd of the minors is D^2 + D + 1
A.3 Checking reducedness of a PGM

This program will check the reducedness of a k x n PGM G(D) over F(D). The property of G(D) being reduced is equivalent to mu_G = nu_G, where mu_G is the maximum degree among the k x k minors of G(D) and nu_G is the overall constraint length of G(D) (see [15]).
forward Minors, intdeg;
Minors:=function(g1)
k:=Nrows(g1);
n:=Ncols(g1);
GT:=Transpose(g1);
DT:=[];
d:=0;
for i:=1 to n do
g:=RowSubmatrix(GT,i); // the ith row of a PGM as a sequence
end for;
case k:
when 1:
DT:=Eltseq(g1);
when 2:
for f:=1 to (n-1) do
for s:=f+1 to n do
m:=Matrix([g[f],g[s]]); // a 2x2 submatrix
d:=Determinant(m); // a 2x2 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
when 3:
for f:=1 to (n-2) do
for s:=f+1 to (n-1) do
for t:= s+1 to n do
m:=Matrix([g[f],g[s],g[t]]); // a 3x3 submatrix
d:=Determinant(m); // a 3x3 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
when 4:
for f:=1 to (n-3) do
for s:=f+1 to (n-2) do
for t:= s+1 to n-1 do
for q:=t+1 to n do
m:=Matrix([g[f],g[s],g[t],g[q]]); // a 4x4 submatrix
d:=Determinant(m); // a 4x4 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
end for;
end case;
return DT;
end function;
// This subroutine will compute for the internal degree
// (:= maximum degree among the kxk minors) of a PGM
forward intdeg, IsReduced;
intdeg:=function(g1)
k:=Nrows(g1);
m:=Minors(g1);
intd:=0;
for i:=1 to #m do
if Degree(m[i]) gt intd then
intd:=Degree(m[i]); // gets the maximum degree among
end if; // the kxk minors
end for;
return intd; // the maximum degree as the internal degree of the PGM
end function;
// This subroutine will compute for the external degree
// (:=overall constraint length) of a PGM
forward extdeg, IsReduced;
extdeg:=function(g1)
k:=Nrows(g1);
rd:=0;
ed:=0;
for i:=1 to k do
g:=RowSubmatrix(g1,i); // the ith row of a PGM as a sequence
rs:=Eltseq(g[i]); // for each row, access each entry by expressing
for j:=1 to #rs do // each row as a sequence
if Degree(rs[j]) gt rd then
rd:=Degree(rs[j]); // the maximum degree among the
end if; // entries in each row
end for;
ed:=ed+rd;
rd:=0;
end for;
return ed; // return the sum of the maximum degrees
end function;
// This subroutine will test for the reducedness of a given PGM
// using the subroutines Minors,intdeg and extdeg.
IsReduced:=function(g1);
k:=Nrows(g1);
n:=Ncols(g1);
intd:=intdeg(g1);
extd:=extdeg(g1);
m:=Minors(g1);
print g1,"is a",k,"x",n,"matrix.";
print "There are",#m, "(",k,"x",k,")-","minors given below.";
print m;
print "The EXTERNAL degree of G(D) is", extd,".";
print "and the INTERNAL degree of G(D) is", intd,".";
if extd eq intd then
return "Hence the given PGM is REDUCED.";
else
return "Hence the given PGM is NOT REDUCED.";
end if;
end function;
Example A.3
> g4;
[D^2 + D + 1 D^2 + 1]
> IsReduced(g4);
[D^2 + D + 1 D^2 + 1]
is a 1 x 2 matrix.
There are 2 ( 1 x 1 )- minors given below.
[
D^2 + D + 1,
D^2 + 1
]
The EXTERNAL degree of G(D) is 2 .
and the INTERNAL degree of G(D) is 2 .
Hence the given PGM is REDUCED.
> g2;
[ D + 1 D 1]
[ D^2 D^2 + D + 1 1]
> IsReduced(g2);
[ D + 1 D 1]
[ D^2 D^2 + D + 1 1]
is a 2 x 3 matrix.
There are 3 ( 2 x 2 )- minors given below.
[
1,
D^2 + D + 1,
D^2 + 1
]
The EXTERNAL degree of G(D) is 3 .
and the INTERNAL degree of G(D) is 2 .
Hence the given PGM is NOT REDUCED.
A.4 Checking predictable degree property of a PGM

This program will check whether a PGM satisfies PDP or not. The algorithm in this program is based on the result of Wittenmark [23]: a k x n PGM G(D) over R(D) satisfies PDP if [G(D)]_h contains a submatrix whose determinant is a unit in R.
forward rowd, indicatormatrix;
rowd:=function(g)
k:=Nrows(g);
rd:=0;
rowdeg:=[];
for i:=1 to k do
g1:=RowSubmatrix(g,i); // the ith row of a PGM as a sequence
rs:=Eltseq(g1[i]); // for each row, access each entry by
for j:=1 to #rs do // expressing each row as a sequence
if Degree(rs[j]) gt rd then
rd:=Degree(rs[j]); // the maximum degree among the
end if; // entries in each row
end for;
rowdeg:=Append(rowdeg,rd);
rd:=0;
end for;
return rowdeg; // return row degrees
end function;
forward indicatormatrix, HasPDP;
indicatormatrix:=function(g)
c:=CoefficientRing(BaseRing(g));
k:=Nrows(g);
n:=Ncols(g);
seqpdp:=[];
rowdegrees:=rowd(g);
blockrow:=[];
blockg:=[];
for i:=1 to k do
g1:=RowSubmatrix(g,i); // the ith row of a PGM as a sequence
rs1:=Eltseq(g1[i]);
for j:=1 to n do
if Degree(rs1[j]) ne rowdegrees[i] then
blockrow:=Append(blockrow,0);
else
blockrow:=Append(blockrow,Coefficient(rs1[j],rowdegrees[i]));
end if;
end for;
blockg:=Append(blockg,blockrow);
blockrow:=[];
end for;
indmat:=Matrix(c,k,n,blockg);
return indmat;
end function;
forward minors, HasPDP;
minors:=function(g)
k:=Nrows(g);
n:=Ncols(g);
GT:=Transpose(g);
DT:=[];
for i:=1 to n do
g1:=RowSubmatrix(GT,i);
end for;
case k:
when 1:
DT:=Eltseq(g);
when 2:
for f:=1 to (n-1) do
for s:=f+1 to n do
m:=Matrix([g1[f],g1[s]]); // a 2x2 submatrix
d:=Determinant(m); // a 2x2 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
when 3:
for f:=1 to (n-2) do
for s:=f+1 to (n-1) do
for t:= s+1 to n do
m:=Matrix([g1[f],g1[s],g1[t]]); // a 3x3 submatrix
d:=Determinant(m); // a 3x3 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
when 4:
for f:=1 to (n-3) do
for s:=f+1 to (n-2) do
for t:= s+1 to n-1 do
for q:=t+1 to n do
m:=Matrix([g1[f],g1[s],g1[t],g1[q]]); // a 4x4 submatrix
d:=Determinant(m); // a 4x4 minor
DT:=Append(DT,d); // collects the minors as a sequence
end for;
end for;
end for;
end for;
end case;
return DT;
end function;
HasPDP:=function(g);
im:=indicatormatrix(g);
seqminors:=minors(im);
sequnits:=[];
for i:=1 to #seqminors do
if (seqminors[i])*(seqminors[i]) eq 1 then // is there a
sequnits:=Append(sequnits,seqminors[i]); // unit in seqminors?
end if;
end for;
if sequnits ne [] then
return true;
else
return false;
end if;
end function;
Example A.4
> g1;
[ D + 1 3*D 1]
[ D^2 D^2 + D + 1 1]
> g2;
[ D + 1 D 1]
[ D^2 D^2 + D + 1 1]
> g3;
[ 1 D^2 + D + 1 D^2 + 1 D + 1]
[ D D^2 + D + 1 D^2 1]
> g4;
[D^2 + D + 1 D^2 + 1]
> HasPDP(g1);
false
> HasPDP(g2);
false
> HasPDP(g3);
false
> HasPDP(g4);
true
A.5 An estimation of the free distance of a code

The subroutines in this program are designed to estimate the free distance of convolutional codes over Z_2 and Z_4. The main algorithm in this program follows the truncation method introduced by Sison [22] (see the discussion in Section 3.3).
EstimateFreeDistance:=function(R,G,L)
P<D>:=PolynomialRing(R);
k:=Nrows(G);
n:=Ncols(G);
mu:=0;
seqrow:=Eltseq(G);
mindistances:=[];
// this part gets the maximum degree (mu) among the entries of a PGM G
for i:=1 to #seqrow do
if seqrow[i] ne 0 then
if mu lt Degree(seqrow[i]) then
mu:=Degree(seqrow[i]);
end if;
end if;
end for;
// Coeffset is the sequence of all coefficients for the truncated
// polynomial inputs of degree <= L-1
if IsField(R) then
coeffset:=[Eltseq(a):a in VectorSpace(R,L)];
else
coeffset:=[Eltseq(a):a in RSpace(R,L)];
end if;
// seqpoly collects all the possible truncated polynomial inputs of
// degree <= L-1
seqpoly:=[];
for i:=1 to (#R)^L do
p:=P!coeffset[i];
seqpoly:=Append(seqpoly,p);
end for;
// this part will construct the input space (input) over truncated
// polynomial inputs of degree <= L-1
case k:
when 1:
input:=[Matrix(P,1,k,[x]):x in seqpoly];
when 2:
input:=[Matrix(P,1,k,[x1,x2]):x1,x2 in seqpoly];
when 3:
input:=[Matrix(P,1,k,[x1,x2,x3]):x1,x2,x3 in seqpoly];
when 4:
input:=[Matrix(P,1,k,[x1,x2,x3,x4]):x1,x2,x3,x4 in seqpoly];
when 5:
input:=[Matrix(P,1,k,[x1,x2,x3,x4,x5]):x1,x2,x3,x4,x5 in seqpoly];
end case;
// this part will construct the output space (output) using input and G
output:=[];
for i:=1 to #input do
output:=Append(output,input[i]*G);
end for;
// seqbcw collects the block codewords corresponding to the
// ploynomial codewords of degree <= mu+L-1
bcw:=[];
seqbcw:=[];
for h:=1 to #output do
for i:= 0 to (mu+L-1) do
for j:=1 to n do
bcw:=Append(bcw,Coefficient(output[h][1,j],i));
end for;
end for;
seqbcw:=Append(seqbcw,bcw);
bcw:=[];
end for;
// LBC is a linear block code corresponding to the output space
// of truncated codewords
// LBC is of block length := n*(mu+L) and generated by the
// block codewords in seqbcw
blocklength:=n*mu+n*L;
LBC:=LinearCode<R,blocklength|seqbcw>;
// mindistance is the minimum distance of the linear block code LBC
// the test "if #LBC eq #seqbcw" is for whether C is a
// linear block code or not
if #LBC eq #seqbcw then
if IsField(R) eq true then
mindistance:=MinimumDistance(LBC);
else
mindistance:=MinimumLeeDistance(LBC);
end if;
else
print LBC, "is NOT equal to the code generated by the",
#seqbcw, "block codewords";
end if;
print "A PGM of C is:", G;
print "";
if IsField(R) eq true then
print "The minimum Hamming distance of the truncated code C_L is" ,
mindistance, ",where L is", L,".";
else
print "The minimum Lee distances of the truncated code C_L is" ,
mindistance, ",where L is", L,".";
end if;
print "A block generator matrix of a linear block code corresponding
to C_L is:";
print GeneratorMatrix(LBC); //prints the generator matrix of LBC
print "Therefore, the estimated free distance of C is";
return mindistance;
end function;
Example A.5
> g4;
[D^2 + D + 1 D^2 + 1]
> EstimateFreeDistance(GF(2),g4,13);
A PGM of C is:
[D^2 + D + 1 D^2 + 1]
The minimum Hamming distance of the truncated code C_L is 5 ,
where L is 13 .
A block generator matrix of a linear block code corresponding
to C_L is:
[1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 0 0 1 0 1 1]
[0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 1 1 0 0]
[0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 1 1]
[0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 0 0 1 0 1 1]
[0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 1 1 0 0]
[0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 0 1 0 1 1 1]
[0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 0 0 1 0 1 1]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 1 1]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 0 0 1 0 1 1]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 1 1 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 1 1 1]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 1]
Therefore, the estimated free distance of C is
5
A.6 For the new examples of encoders of self-dual convolutional codes

These subroutines will generate a set of 4 x 4 matrices A over F(D) or Z_M[D] such that A^-1 = -A^t. These matrices are used to construct a 4 x 8 encoder given by G(D) = (I_4, A). Consequently, G(D) satisfies the conditions of Theorem 5.1. Thus, G(D) is an encoder of a self-dual convolutional code.
// This function will generate 4x4 matrices A over R[D] s.t.
// A is invertible and A^-1=-A^t.
// We can modify this to an nxn matrices by modifying the subroutine SM.
// Note: For Z_M case, this program is limited for polynomial matrices
// A over Z_M[D].
// For the field case, it can be done up to matrices A with rational entries.
// ***********************************************************//
// This is for the Z_M case.
// This subroutine will compute for the polynomials of degree <= deg.
poly:=function(R,deg)
P<D>:=PolynomialRing(R);
coeffset:=[Eltseq(a):a in RSpace(R,deg+1)];
seqpoly:=[]; //collect the polynomials of degree <= deg
for i:=1 to (#R)^(deg+1) do
p:=P!coeffset[i];
seqpoly:=Append(seqpoly,p);
end for;
return P,seqpoly;
end function;
// ***********************************************************//
// This is for the field case.
rationals:=function(R,deg)
P:=PolynomialRing(R);
PP<D>:=FieldOfFractions(P);
coeffset:=[Eltseq(a):a in RSpace(R,deg+1)];
seqpoly:=[]; //collect the polynomials of degree <= deg
for i:=1 to (#R)^(deg+1) do
p:=P!coeffset[i];
seqpoly:=Append(seqpoly,p);
end for;
seqrat:= [PP![p,q]:p,q in seqpoly|q ne 0];
u:=[];
for i:=1 to #seqrat do
if seqrat[i] notin u then
u:=Append(u,seqrat[i]);
end if;
end for;
return PP, u; // u is a sequence of unique rational functions over F[D]
end function;
// ***********************************************************//
// This subroutine will form the 4x4 matrices A and save it as
// a sequence in sm.
// As has been observed from the examples of self-dual encoders
// of the form (I,A), the matrix A, in this case, is given by
// a b c d
// -b a e f
// -c e a g
// -d -f -g a
// If R=Z_M, let p,u:=poly(R,L); where p is the polynomial ring R[D]
// and L is the maximum degree of the polynomials in u.
// If R is a field, let p,u=rationals(R,L); where p is the field of
// rational functions over R[D] and L is the maximum degree of the
// polynomials f1(D) and f2(D) such that f1(D)/f2(D) is in u
// For efficiency of computation, we consider not all the elements in u.
// Instead, we consider a subset of u indexed by the sequence I.
// e.g.
I:=[1,2,3];
SM:=function(p,u)
Om:=[];
for a in I do
for b in I do
for c in I do
for d in I do
for e in I do
for f in I do
for g in I do
a4:=Matrix(p,4,4, [u[a],u[b],u[c],u[d],
-u[b],u[a],u[e],u[f],
-u[c],-u[e],u[a],u[g],
-u[d],-u[f],-u[g],u[a]]);
if Determinant(a4) eq 1 then
Om:=Append(Om,a4); // Om collects the matrices (a4)
end if; // whith determinants equal to 1.
end for;
end for;
end for;
end for;
end for;
end for;
end for;
// sm collects the matrices A s.t. A is invertible and A^-1=-A^t
sm:=[x:x in Om|x^-1 eq -Transpose(x)];
return sm;
end function;
// Then we chose from sm a matrix A and augment it to the
// 4x4 identity matrix I_4 such that a systematic encoder G
// is given by G=(I_4,A).
Example A.6
> p,u:=poly(IntegerRing(4),1);
> p;
Univariate Polynomial Ring in D over IntegerRing(4)
> u;
[
0,
1,
2,
3,
D,
D + 1,
D + 2,
D + 3,
2*D,
2*D + 1,
2*D + 2,
2*D + 3,
3*D,
3*D + 1,
3*D + 2,
3*D + 3
]
> I;
[ 1, 2, 3 ]
> SM(p,u);
[
[1 1 0 1]
[3 1 1 0]
[0 3 1 1]
[3 0 3 1],
[1 1 2 1]
[3 1 1 2]
[2 3 1 1]
[3 2 3 1]
]
Appendix B
LIST OF SYMBOLS

SYMBOL           MEANING                                                        PAGE REFERENCE
R                ring                                                           13
F                field                                                          15
R^u              set of all units in R                                          14
Z                ring of integers                                               15
R (blackboard)   field of real numbers                                          30
Z_M              ring of integers modulo M                                      15
Z_{p^r}          ring of integers modulo p^r, p prime, r in Z, r > 0            15
R((D))           ring of Laurent series                                         16
R[[D]]           ring of formal power series                                    16
R[D]             ring of polynomials                                            17
R(D)             ring of rational functions                                     18
R_r(D)           ring of realizable functions                                   19
                 mod-p reduction map                                            19
                 extended mod-p reduction map                                   19
F^n              set of all n-tuples over F                                     21
A^t              transpose of matrix A                                          24
det(.)           determinant function                                           24
R^{k x n}        set of all k x n matrices over R                               23
U(n, R[D])       set of all n x n unimodular matrices over R[D]                 26
wt_H(.)          Hamming weight function                                        30
wt_L(.)          Lee weight function                                            30
                 convolutional transducer / encoding map                        37
[G(D)]_h         indicator matrix of G(D)                                       40
nu_i             i-th constraint length of a PGM                                40
nu               overall constraint length of a PGM                             40
mu               the highest degree among the minors of a PGM                   41
mu_G             maximum degree among the k x k minors of a PGM G(D)            107
mu_H             maximum degree among the k x k minors of a PPCM H(D)           107
nu_G             overall constraint length of a PGM G(D)                        108
nu_H             overall constraint length of a PPCM H(D)                       108
CI               construction of a parity check matrix based on Lemma 4.2       56
CII              construction of a parity check matrix based on Theorem 4.2     76
CIII             construction of a parity check matrix based on Theorem 4.3     83
G(D) mod p       generator matrix with entries reduced modulo p                 118
H(D)^t           syndrome former of a convolutional code                        129