
INTRODUCTION

1.1 Digital Television (DTV) Broadcasting:


Digital broadcasting will replace analog systems in the coming years: the TV
production environment has become digital, and communications are mostly digital. DTV
means digitization of TV transmission. It offers improved picture quality (although the
improvement is marginal with good analog reception) and, more importantly, saves frequency
space: four digital channels fit in the 8 MHz slot of one analog channel. This is the real
advantage. The digital video signal can be distributed via the link network or over an ATM
or SDH trunk network. Digital delivery makes it possible to transfer the signal globally
without loss of quality. Delivery to users is via:

 Broadcasting antennas (DVB-T, Digital Video Broadcasting – Terrestrial)
 Satellite (DVB-S, Digital Video Broadcasting – Satellite)
 Cable (DVB-C, Digital Video Broadcasting – Cable)

Set-top boxes for the variants differ in the front-end part, but the MPEG
decoding is the same. The DVB system is essentially a bit pipe offering around 22 Mb/s of
transmission bandwidth (parameters in Finland). In DTV, typically four MPEG-2
television signals are transported using OFDM modulation in the 8 MHz grid of the
current TV system. MPEG-2 is the standard for high-quality video, also used in DVD
(Digital Versatile Disc). The bit rate is approximately 5 Mbit/s for a good-quality TV
picture. In DVB-T, one HDTV signal (bit rate approximately 20 Mb/s) can be transmitted
in an 8 MHz frequency slot using the most efficient coding (64-QAM).
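The capacity figures above can be checked with a little arithmetic. The sketch below uses the illustrative rates quoted in the text (around 22 Mb/s per multiplex, about 5 Mbit/s per SD program, about 20 Mb/s for HDTV); real payload rates depend on the chosen modulation, code rate, and guard interval.

```python
# Rough capacity arithmetic for a DVB-T multiplex. The rates are the
# illustrative figures from the text, not guaranteed payload values.
def programs_per_multiplex(mux_rate_mbps, program_rate_mbps):
    """Number of fixed-rate programs that fit into one multiplex."""
    return int(mux_rate_mbps // program_rate_mbps)

sd_programs = programs_per_multiplex(22, 5)    # SDTV at ~5 Mbit/s each
hd_programs = programs_per_multiplex(22, 20)   # HDTV at ~20 Mb/s
```

With these figures, four SD programs, or a single HDTV signal, fit in one 8 MHz slot, matching the text.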

Digital television broadcasting is in the process of migrating from
analog to digital systems, with regions around the world at different stages of adoption.
With some governments looking to auction off the remaining analog spectrum,
deadlines have been set for regions to switch off analog TV transmission, resulting in
growing demand for flexible technologies to enable a smooth transition in the timescales
available. Various delivery systems are available for digital TV, but the main ones are

satellite, cable, terrestrial, and mobile. Each has a variety of standards and derivatives
that are either mature or emerging. These standards aim to ensure interoperability
between different vendors' equipment, including set-top boxes, cell phones, and other
means of DTV reception.

Each region has its own requirements for a DTV delivery system,
dependent on many factors: the bandwidth available in the digital spectrum; the
terrain, whether flat, mountainous, or a dense city center; or the need to add interactivity,
perhaps for education or medical services in a developing country. All of this means that
one size does not necessarily fit all when it comes to the delivery mechanism. However,
there remain similarities in the methods used to prepare video, audio, and data for wired
or wireless communication, with all of these systems relying on Forward Error
Correction (FEC) techniques to ensure that data lost during transmission can be
reconstructed at the receiver.

1.2 Overview of the Digital TV System:

1.3 Digital TV Signal Multiplexing:
1. For one video program (e.g., MTV3), the video, audio, and service information packets
(containing, e.g., program information, descrambling key codes, and user and control
data) are multiplexed in the packet multiplexer to form the packetized elementary
stream (PES).
2. A number of program streams are then further multiplexed together to form the
transport stream (TS).

1.4 DTV Transmitter Section:

INTRODUCTION:

2.1 Standards for DTV Broadcasting:

The major worldwide standards for DTV broadcasting are:

 Digital Video Broadcasting (DVB)

 Advanced Television Systems Committee (ATSC)

 Integrated Services Digital Broadcasting (ISDB)

 Data Over Cable Service Interface Specification (DOCSIS) J.83 A/B/C

 Digital Multimedia Broadcasting (DMB)

Even though these standards all have differing modulation schemes and data rates, they
all share a requirement for error correction. Forward Error Correction (FEC) is widely
used in digital television systems for the reliable transmission of audio, video, and data.
FEC has a number of objectives that enable a receiver to detect and correct errors
automatically without requesting retransmission:

 To add redundant parity information to the data at the transmitter
 To manipulate data to reduce susceptibility to different noise types
 To optimize the use of available bandwidth

2.2 Standards Overview:

Digital Video Broadcasting (DVB):

The DVB Project is an alliance of around 260 worldwide companies
comprising broadcasters, manufacturers, network operators, and regulatory bodies.

Their objective is to agree on flexible and interoperable specifications for digital
broadcasting systems. The main baseline DVB standards are listed below.
There are over 100 standards and recommended practices related to DVB,
available from ETSI, with new standards constantly being released for new and improved
delivery methods (for instance, the latest to be ratified is the DVB-SH standard for
satellite distribution to handhelds, based on a turbo-code FEC and working with terrestrial
repeaters for indoor coverage). This paper concentrates on the mainstream broadcast
methods to outline the variety of forward error correction techniques used, the
commonality between many of them, which often allows fundamental FEC elements to be
modified through parameterization, and the use of FPGAs to support the evolving needs
of these systems.

2.3 DVB-T (Terrestrial) System:

The DVB-T system is the terrestrial transmission system of the DVB
standards. The block diagram for a DVB-T transmitter is shown in Figure 1. Source data,
consisting of video, audio, and data, is multiplexed into MPEG transport stream (TS)
packets. Each packet is 188 bytes long, with 184 bytes for data and 4 bytes for header
information such as the sync and packet ID bytes.

For error protection, a concatenated forward error correction (FEC)
scheme is used. The randomized TS packets are encoded by the Reed-Solomon (R-S)
encoder using an (n, k) code, where n is the block size and k is the number of information
symbols. In this case, n is 255 symbols and k is 239 symbols, and a symbol represents 8
bits. This means that 16 check symbols are appended to the information bytes. This code
allows up to 8 symbols of data to be corrupted and still be corrected, a total of 64
bits. As the transport stream packets are only 188 bytes long, the first 51 bytes are set to
zero but are not transmitted. Therefore, it is actually a shortened (204,188) code.
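The relationship between the (255,239) mother code and the shortened (204,188) code can be summarized in a small parameter sketch (parameters only; a real encoder would perform the Galois-field arithmetic):

```python
# Parameters of the shortened Reed-Solomon code used by DVB.
# This is a parameter sketch, not an encoder implementation.
N, K = 255, 239               # mother code: block size and information symbols
SHORTEN = 51                  # leading zero symbols, never transmitted
n, k = N - SHORTEN, K - SHORTEN   # shortened code: (204, 188)
parity = n - k                # check symbols appended per packet
t = parity // 2               # correctable symbol errors per block
```

Shortening does not change the number of parity symbols (16) or the correction capability (8 symbols); it only reduces the block to fit the 188-byte transport packet.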
The Reed-Solomon scheme corrects random errors where only a few bytes
of data are lost during transmission. The data is then applied to an outer interleaver.

The purpose of the interleaver is to spread long burst errors across several
data packets, improving BER performance by making it easier for the R-S decoder
to correct errors. The interleaver uses a convolutional approach known as the Forney
algorithm.

The interleaver has 12 branches of shift registers to perform the interleaving, and a delay
of 17 as shown in Figure 2.

Figure 2: Interleaver
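The branch-and-delay structure of the Forney interleaver in Figure 2 can be modeled as below. This is an illustrative model (12 branches, delay increment 17, FIFOs pre-filled with zeros), not the exact byte ordering mandated by the DVB-T specification.

```python
from collections import deque

def convolutional_interleaver(data, branches=12, depth=17):
    """Forney convolutional interleaver sketch: branch j delays its
    symbols by j*depth positions within that branch's stream. The
    FIFOs are pre-filled with zeros so the output has the same
    length as the input (the zeros flush through at start-up)."""
    fifos = [deque([0] * (j * depth)) for j in range(branches)]
    out = []
    for i, sym in enumerate(data):
        fifo = fifos[i % branches]   # symbols cycle through the branches
        fifo.append(sym)
        out.append(fifo.popleft())
    return out
```

Branch 0 has no delay, so every 12th symbol passes straight through; deeper branches hold their symbols progressively longer, which is what spreads a burst of channel errors over many R-S blocks after de-interleaving.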

Convolutional encoding provides inner error protection. A convolutional
encoder with code rate 1/2 is used. This means that for every input bit, two output bits are
generated. The implementation consists of XOR gates and shift registers and is simple
logic. To reduce the number of bits transmitted, various puncturing schemes are used,
which discard selected bits from the encoder output. Typical puncture rates are 2/3, 3/4,
5/6, and 7/8. For instance, a 3/4 puncture rate means that for every 3 input bits, 4 output
bits are transmitted from the encoder output rather than the 6 bits that are actually
generated. Puncturing can be implemented using logic external to the convolutional
encoder. This allows the freedom to change between the various puncture rates. The next
section is the inner interleaver. It consists of a bit and symbol interleaver. The purpose is

to change the sequence of the symbols to distribute any errors that might be introduced
in the transmission channel, which is achieved by another pseudorandom generator.
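The puncturing described above can be sketched as a simple bit-deletion step. The rate-3/4 pattern shown (transmit X1 Y1 Y2 X3 out of each group of six coded bits) follows the commonly cited DVB pattern, but should be treated as illustrative:

```python
def puncture(coded_bits, pattern):
    """Discard bits from a rate-1/2 encoder output according to a
    puncture pattern (1 = transmit, 0 = delete). The pattern length
    defines the puncture period."""
    periods = (len(coded_bits) + len(pattern) - 1) // len(pattern)
    return [b for b, keep in zip(coded_bits, pattern * periods) if keep]

# Rate 3/4 over one period of 6 coded bits (X1 Y1 X2 Y2 X3 Y3):
# transmit X1 Y1 Y2 X3, i.e. 4 of every 6 bits survive.
PATTERN_3_4 = [1, 1, 0, 1, 1, 0]
```

At the receiver the deleted positions are re-inserted as nulls with an erasure flag, which is exactly the de-puncturing arrangement described later for the Viterbi decoder.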

MODULATION:
The mapping/modulation process is next, and a choice can be made
between Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation
(16-QAM or 64-QAM), allowing a tradeoff between transmission data rate and signal
robustness. QAM conveys data by changing, dependent on the input data to be
transmitted, the amplitude of two carrier signals which are 90 degrees out of phase with
each other. Phase-shift keying is similar to QAM, but the amplitude of the modulating
signal is constant; only the phase varies with the input data. To help in the reception
of the signal transmitted on the terrestrial radio channel, additional signals, known
as Pilot and Transmission Parameter Signalling (TPS) signals, are inserted in each block.
Pilot signals are used during the synchronization and equalization phase, while TPS
signals are used to send the parameters of the transmitted signal and to identify the
transmission cell. It should be noted that the receiver must be able to synchronize,
equalize, and decode the signal to gain access to the information held by the TPS pilots.
Thus, the receiver must know this information beforehand, and the TPS data is only used
in special cases, such as changes in the parameters, resynchronizations, etc.
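As a minimal illustration of the mapping step, the sketch below Gray-maps bit pairs onto a normalized QPSK constellation. The exact constellations and bit orderings are defined by the DVB-T specification; this is a generic textbook mapping, not the standard's tables.

```python
def qpsk_map(bits):
    """Map bit pairs to Gray-coded QPSK symbols normalized to unit
    energy. Adjacent constellation points differ in only one bit,
    which minimizes bit errors for the most likely symbol errors."""
    table = {(0, 0): 1 + 1j, (0, 1): 1 - 1j,
             (1, 0): -1 + 1j, (1, 1): -1 - 1j}
    scale = 2 ** -0.5   # normalize |symbol| to 1
    return [table[(bits[i], bits[i + 1])] * scale
            for i in range(0, len(bits), 2)]
```

16-QAM and 64-QAM mappers follow the same idea with 4 and 6 bits per symbol, trading noise margin for data rate exactly as described above.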

The DVB-T standard also allows the use of hierarchical modulation. Two
independent data streams can be transmitted using different paths through the early part
of the concatenated FEC (as shown by the dotted blocks in Figure 1) and different
modulation schemes. An example is QPSK for high priority and 16-QAM for low
priority. The high-priority stream, with the lower data rate, can be received in an
environment with a low carrier-to-noise ratio. The low-priority stream, with the higher
data rate, would be used in environments with a higher carrier-to-noise ratio. In fact, one
of the original intentions of the hierarchical mode was to transmit high-definition (HD)
channels with lower priority alongside standard-definition (SD) channels with higher
priority, allowing receivers to fall back to an SD transmission should the HD signal be
lost. Adoption of this scheme seems to be very limited currently, but may well be taken
advantage of in the future.

The DVB-T receiver is shown in Figure 3. After the conversion from the RF
to the digital domain, the guard interval is removed. A forward Fast Fourier Transform (FFT)
performs the Orthogonal Frequency Division Multiplex (OFDM) demodulation, and the
Transmission Parameter Signalling (TPS) is removed. After de-mapping, the information
data is decoded by the inner decoder. A Viterbi decoder performs the inner decoding
function. The Viterbi decoder has a constraint length of 7 and uses polynomials 171
(octal) and 133 (octal). To handle the puncture rates specified by the standard,
de-puncturing is performed external to the Viterbi decoder: in place of the missing
symbols, null symbols can be inserted along with an erase input to indicate their
position. The input data to the Viterbi decoder can be in either hard- or soft-decision
format.

The soft-decision format tends to give better bit error rate (BER)
performance, because it gives a confidence value for each bit, ranging from maximum
confidence in a zero to maximum confidence in a one. The computation of the soft data
is done using the Log-Likelihood Ratio (LLR). The I and Q data from the demodulator
are used to create the confidence value.

The LLR is obtained by finding the minimum distance from the
received symbol to all the possible transmitted symbols and then calculating the
probability that a particular bit is either a zero or a one. The data from the Viterbi decoder
is de-interleaved before the block data is sent to the R-S decoder.
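For QPSK with Gray mapping, this minimum-distance computation collapses to a particularly simple form: each axis carries one bit, and under additive white Gaussian noise the LLR is just a scaled projection of the received sample. The sketch below shows this textbook form (a real demodulator would also have to estimate the noise variance):

```python
def qpsk_llr(i_sample, q_sample, noise_var):
    """Per-bit log-likelihood ratios for Gray-mapped QPSK in AWGN.
    Each axis carries one bit, so the LLR reduces to 2*y/sigma^2
    for the received projection y. Sign gives the hard decision,
    magnitude gives the confidence."""
    scale = 2.0 / noise_var
    return scale * i_sample, scale * q_sample
```

A sample far from the decision boundary yields a large-magnitude LLR (high confidence); a sample near zero yields an LLR near zero, telling the Viterbi decoder that the bit is unreliable.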

The R-S decoder has the same (204,188) code as the transmitter and can
give indications of the number of errors and how many bits have changed from ones to
zeroes and vice versa. The transport packets are then recovered.

Figure 3: DVB-T Receiver

2.4 DVB-S (Satellite) System:


The DVB-S system uses FEC components similar to those used for DVB-T error
protection. It has less overall complexity, as it does not require the use of OFDM to
overcome reflections and interference due to terrain.
The transmitter is shown in Figure 4.

Figure 4: DVB-S Transmitter

As in the DVB-T system, the video, audio, and data input streams are
multiplexed into an MPEG-2 transport stream. The outer code is a shortened R-S
(204,188) code, allowing the correction of up to a maximum of 8 erroneous bytes in
each 188-byte packet. The interleaver rearranges the transmitted data sequence in such a
way that it becomes more robust against long sequences of errors. Inner error protection
is provided by convolutional encoding, as before.

The DVB-S system has a number of puncture schemes based on the 1/2 code rate:
2/3, 3/4, 5/6, and 7/8. The mapping is done into QPSK, which is then processed
through baseband shaping with a Finite Impulse Response (FIR) raised-cosine
filter to remove intersymbol interference (ISI) at the receive side. ISI is typically
the result of reflections of the signal, which cause either constructive or destructive
interference between the currently received signal and a previously transmitted one,
depending on phase. The I and Q values of the QPSK signal are finally modulated to
radio frequency by the RF front-end for a 36 MHz satellite transponder, giving
approximately 45 Mbps data rate.

Figure 5 shows the satellite receiver. The QPSK demodulator
performs the down-conversion function to produce the real and imaginary data. The
matched filter uses FIR filters to perform the receive pulse shaping. The I and Q data are
converted to soft decisions for the Viterbi decoder, also known as the inner decoder.
Even though the Viterbi decoder can perform hard-decision decoding, better BER
performance is achieved with soft decisions. De-interleaving is followed by the R-S
decoder, or outer decoder, which is the second layer of protection. The data is recovered
from the random pattern introduced in the transmitter to re-create the original video/data
packets.

Figure 5: DVB-S Receiver

2.5 DVB-C (Cable) System:

The DVB-C transmitter is exactly the same as the DVB-S system up to the
interleaver output and is shown in Figure 6. For byte to m-tuple conversion, the data bytes
are encoded into m-bit tuples, where m is 4, 5, 6, 7, or 8. The differential coding block
takes the two most significant bits of each m-tuple and encodes them differentially in
order to give some ruggedness to the signal. The mapping is then done into 16-QAM,
32-QAM, 64-QAM, 128-QAM, or 256-QAM. The QAM (Quadrature Amplitude
Modulation) signal is filtered with a raised-cosine filter to remove ISI. The I and Q
values are finally modulated to radio frequency by the RF front-end for an 8 MHz cable
channel to give approximately 38-40 Mbps data rate.

Figure 6: DVB-C Transmitter System
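The quoted 38-40 Mbps figure can be reproduced with simple arithmetic. The sketch below assumes a symbol rate of about 6.875 MBaud in an 8 MHz channel (a typical but not universal value) and accounts only for the Reed-Solomon overhead:

```python
# Net payload of a DVB-C channel: illustrative arithmetic only.
# Actual symbol rates are network-dependent; ~6.9 MBaud is common
# for an 8 MHz channel.
def net_rate_mbps(symbol_rate_mbaud, bits_per_symbol):
    gross = symbol_rate_mbaud * bits_per_symbol   # modulated bit rate
    return gross * 188 / 204                      # strip R-S (204,188) overhead

rate_64qam = net_rate_mbps(6.875, 6)   # 64-QAM carries 6 bits/symbol
```

64-QAM at this symbol rate yields roughly 38 Mbps of transport-stream payload, consistent with the range given in the text; 256-QAM (8 bits/symbol) raises the rate at the cost of noise margin.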

The receiver is shown in Figure 7. After demodulation, the I and Q signals
go through a matched filter and on to the FEC decoding sections, consisting of the
de-interleaver and the R-S decoder.

Figure 7: DVB-C Receiver System

2.6 DVB-H (Handheld) System:

DVB-H is a technical specification for bringing broadcast services to
handheld receivers. The standard is based on the DVB-T terrestrial specification and uses
the OFDM transmission system. The additional features are:

 Multi-Protocol Encapsulation (MPE). The input data for DVB-H is Internet Protocol
(IP) packets, and these are encapsulated into the TS streams for DVB-T transmission.
 Time-slicing. Since handheld receivers have a limited battery life, time-slicing is used
to reduce power consumption.
 An extra 4k network mode, providing a 4k carrier option for OFDM.
 MPE-FEC. Protection using multi-protocol encapsulated FEC.

Figure 8: DVB-H System

MPE-FEC uses an FEC frame made up of 255 columns and up
to a maximum of 1024 rows. The frame is split into two sections: an application data
table, using 191 columns, and an RS data table, using the other 64 columns for parity
data.
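The MPE-FEC frame geometry just described amounts to the following byte budget (a layout sketch using the maximum row count):

```python
# MPE-FEC frame geometry from the text: up to 1024 rows, 255 columns
# split into a 191-column application data table and a 64-column
# Reed-Solomon data table. One cell = one byte.
ROWS, APP_COLS, RS_COLS = 1024, 191, 64
frame = [[0] * (APP_COLS + RS_COLS) for _ in range(ROWS)]

app_bytes = ROWS * APP_COLS     # capacity for IP datagrams
parity_bytes = ROWS * RS_COLS   # capacity for R-S parity data
```

Each row of the frame is one (255,191) Reed-Solomon codeword, so erasing a whole burst-damaged column still leaves every row independently correctable.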

2.7 Advanced Television Systems Committee (ATSC):


Established in 1982, the Advanced Television Systems Committee is the
group that developed the ATSC digital television standard for the United States, also
adopted by Canada, Mexico, and South Korea. An ATSC transmitter is shown in Figure
9. The transmitter receives 188 bytes of packetized data/sync (i.e., video/data/audio) and
randomizes the data to achieve a flat, noise-like spectrum. This random data allows the
clock recovery loops in the receiver to retrieve the clock signal.

Figure 9: ATSC Transmitter System

The R-S encoder provides burst noise protection by appending 20 bytes to
the end of each 187-byte data packet for the (207,187) code. The data is interleaved
before being applied to a trellis encoder. For trellis coding, each incoming byte is split up
into a stream of four 2-bit words. For every 2-bit word entering the encoder, 3 bits are
output, based on the history of previous incoming 2-bit words.

Figure 10: ATSC Receiver System

2.8 Integrated Services Digital Broadcasting (ISDB):
ISDB is the set of Japanese standards that covers terrestrial (ISDB-T),
satellite (ISDB-S), and cable (ISDB-C) communication. Multiple transport streams are
remultiplexed into a single transport stream. The TS is first processed through an R-S
encoder with a (204,188) code. A key feature of ISDB is hierarchical transmission. The
TS packets are divided into sets of packets according to program information, into a
maximum of three parallel processing sections, known as hierarchical separation (A, B,
C). Each section performs energy dispersal, byte interleaving, and convolutional
encoding. The convolutional encoders can have different coding rates, and different
modulation schemes can be used (D, E, F). The transmission system is shown in Figure 11.

Figure 11: ISDB Transmission System

3.1 MPEG INTRODUCTION:

In 1936, the year of the Berlin Olympic Games, spectators crowded into
specially built viewing rooms called Fernsehstuben (literally, television rooms) to catch a
glimpse of one of the first-ever television broadcasts. In black and white, at 180 lines per
frame and 25 frames per second, it would hardly compare to today's standards for
television quality; however, it was the progenitor of modern-day broadcasting, one of the
most powerful tools of the Information Age. In the formative years of the broadcast
industry, a handful of vertically integrated companies controlled everything from content
creation to broadcast delivery.

These firms reached millions of viewers with over-the-air analog signals
delivered by their local affiliate stations. Over the years, the advent of new broadcast
delivery technologies (namely cable and satellite), coupled with deregulation and the rise
of new providers, has made video broadcasting a far more interesting and competitive
business. Driven by technical, financial, and regulatory demands, the current transition
from analog to digital video services has spawned a market for the creation, manipulation,
and delivery of MPEG-2 transport streams.

In fact, MPEG-2-based protocols have become the worldwide standards
for carrying broadcast-quality compressed digital video, audio, and data over terrestrial,
satellite, and cable broadband networks. In short, MPEG-2 has become to digital
broadcast what IP is to the Internet.

3.2 MPEG HISTORY:

In 1987 the International Electrotechnical Commission (IEC) and the
International Organization for Standardization (ISO) created a working group of experts
tasked with standardizing the compression of digital video and audio. This group became
known as the Moving Picture Experts Group, or MPEG. When the first official MPEG
meeting was held in May of 1988, digital television broadcasting was no more than a
vision. The development of audio CDs had proven that analog signals could be digitized
to produce high-quality sound, and the implications of digitization combined with
compression stretched as far as television, where decreased bandwidth requirements
would make room for more programs, Internet services, and interactive applications. But
developing a method to successfully compress and then transmit digital programs would
require extensive research. And making the transition from analog to digital television
would impose on the industry an entirely new approach to broadcasting, including new
technology, new equipment, and new international standards. The MPEG series of
protocols answered the need for digital broadcast standardization.

MPEG consists of a family of standards that specify the coding of video,
associated audio, and hypermedia. These currently include MPEG-1, MPEG-2, and
MPEG-4, and will soon be joined by MPEG-7. Though this guide deals mainly with
MPEG-2, the digital broadcasting standard, we will discuss MPEG-1, MPEG-4, and
MPEG-7 briefly. Note that, while all the MPEG standards deal with compression, only
MPEG-2 addresses the transmission, or movement, of compressed digital content across
a network.

The MPEG standards define the syntax, or structure, and semantics of a
compressed bit stream and the procedure for decoding the stream back into the original
video and audio content. Since neither specific algorithms nor encoding methods are
defined by MPEG, these can be improved over time without any risk of violating the
standards. This flexibility affords manufacturers the opportunity to gain a proprietary
advantage from new technical developments.
3.3 MPEG-1:
MPEG-1 is the original MPEG standard for audio and video coding. First
published in 1993, this standard defines digital audio and video coding at bit rates up to
approximately 1.5 Mbps. It is a frame-based standard for delivering a single program on a
CD-ROM, and its quality is comparable to that of VHS cassettes. Common applications
include the storage of sound and pictures for interactive CDs such as video games and
movie CDs.

MPEG-1 has also been used for digital radio broadcasts. Soon after work
on MPEG-1 began, champions of the "digital television" concept realized that MPEG-1's
syntax and structure would not support the complexity and versatility required by digital
TV transmission. For this reason, in 1990, work began on MPEG-2, the standard that
would make digital television broadcasting a reality. MPEG-2 was developed as a frame-
or field-based standard that allows digital broadcasting applications to deliver
multiplexed programs efficiently. MPEG-2 is backward compatible with MPEG-1,
meaning that MPEG-2 decoders can process MPEG-1 video streams.

3.4 MPEG-4:

MPEG-4 represents the latest breakthrough in audiovisual coding. It
allows for simultaneous coding of synthetic and natural objects and sound, giving service
providers more options for creating games and other multimedia applications. It extends
interactive possibilities by allowing the user to manipulate things like views and the
viewing perspective.

MPEG-4 supports the application of different compression routines to
different parts of a frame, resulting in considerable processing efficiency and allowing for
the coding of arbitrarily shaped objects instead of the standard rectangular video frame.
Because of this, it provides even greater compression than MPEG-1 or MPEG-2 and will
be used for applications with especially limited transmission capacity.
Though digital broadcast will continue to use the MPEG-2 standard,
MPEG-4 will serve a variety of applications including networked video applications,
computer games and wireless services. In addition, programs compressed using MPEG-4
techniques can be encapsulated into MPEG-2 transport streams.

3.5 MPEG-7:

Formally called ‘Multimedia Content Description Interface,’ the MPEG-7


specification will provide standardized descriptions for searching, filtering, selecting and
handling audiovisual content. These descriptions, called metadata, will allow users in
various applications to search and manage volumes of audio and video files. Applications
include digital libraries, multimedia directory services, broadcast media selection and
multimedia editing. The MPEG-7 standard is currently under development and will be
released in late 2001.

3.6 MPEG-2:

MPEG-2 is a set of standards for building a single digital transport stream, or
multiplex, which can carry a dozen programs or more, depending upon the level of
compression used and the communications bandwidth available. In the following
sections, we will discuss the fundamentals of MPEG-2 compression and transport. This
standard covers rules for:

(1) Compressing audio and video content
(2) Transporting the multiplex across a network
(3) Encapsulating data into the multiplex
What the MPEG-2 standard does not regulate is the handling of multiple
transport streams simultaneously. Because a set-top box, or Integrated Receiver
Decoder (IRD), operating in a live network environment must be able to manage several
transport streams simultaneously, extensions to the MPEG-2 system layer were
developed by Digital Video Broadcasting (DVB) and Advanced Television Systems
Committee (ATSC).

3.7 MPEG-2 Transport: The System Layer:

We have talked about compressing and decompressing a single video or
audio stream, but MPEG-2 transport streams simultaneously carry many programs or
services, with audio, video, and data all interleaved together. A decoder must be able to
sort through the transport stream, organizing the video, audio, and data streams by
program or service. It must also know when to present each part of the program or
service to the viewer. This is where the MPEG-2 System Layer comes into play. The
System Layer specifies the structure of the transport stream, the transmission mechanism
for MPEG-2 compressed data. Among other things, this structure provides for rapid
synchronization and error correction at the decoder. The System Layer also defines
Program Specific Information (PSI) tables. These act as a table of contents, allowing the
decoder to quickly sort and access information in the transport stream.

3.8 Creating a Transport Stream:

The MPEG-2 transport mechanism is similar to IP transport in that


MPEG-2 streams carry data that has been divided into transport packets, each with a
header and a payload.

3.9 MPEG VIDEO STREAM INFORMATION:

Entire MPEG video picture encoding process

Structure of the MPEG video data packet

The following process transforms several analog video, audio, and data
streams into a single transport stream.

Once a video or audio stream is compressed, it becomes an Elementary
Stream (ES). From there, it is divided into a Packetized Elementary Stream (PES)
with variable-length packets, each containing a header and a payload. The payload
contains a single frame of video or audio. The header includes timing information that
tells the decoder when to decode and present the frame.

Creation of packetized elementary stream (PES)

Next, during the encoding process, PESs are further divided into fixed-length transport
packets of 188 bytes each. This packet size was initially chosen to simplify the mapping of
MPEG-2 packets onto ATM: an ATM cell carries a 48-byte payload, of which 47 bytes
remain after one byte of adaptation-layer overhead, so one transport packet fits exactly
into four cells (47 x 4 = 188). Like the PES packet, each transport packet contains a
header and a payload.

Once the audio or video stream has been divided into transport packets, it
is multiplexed, or merged, with similarly packetized content for other services. A
multiplex composed of one or more services is called a transport stream.

Each packet in the transport stream, whether it contains audio, video, tables, or data, is
identified by a number called a PID, or Packet Identifier. PIDs enable the decoder to sort
through the packets in a transport stream.
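Extracting the PID from a transport packet is straightforward, since the 13-bit field sits at fixed header positions: the low five bits of the second byte and all of the third. A minimal parser sketch:

```python
def parse_pid(packet):
    """Extract the 13-bit PID from a 188-byte MPEG-2 transport packet.
    Byte 0 is the 0x47 sync byte; the PID spans the low 5 bits of
    byte 1 and all 8 bits of byte 2."""
    if packet[0] != 0x47:
        raise ValueError("missing sync byte")
    return ((packet[1] & 0x1F) << 8) | packet[2]

# Example: a header carrying PID 0x0100, padded to a full 188 bytes.
pkt = bytes([0x47, 0x41, 0x00]) + bytes(185)
```

A demultiplexer does exactly this for every packet, routing each PID to the appropriate audio, video, or table decoder.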

3.10 Timing: PCR, PTS and DTS:

Timing in the transport stream is based on the 27 MHz System Time
Clock (STC) of the encoder. To ensure proper synchronization during the decoding
process, the decoder's clock must be locked to the encoder's STC. In order to achieve
this lock, the encoder inserts into the transport stream a 27 MHz time stamp for each
program.
This time stamp is called the Program Clock Reference, or PCR. Using
the PCR, the decoder generates a local 27 MHz clock that is locked to the encoder's STC.
As we mentioned earlier, compressed video frames are often transmitted out of order.
This means that an I-frame used to regenerate preceding B-frames must be available in
the decoder well before its presentation time arrives. To manage this critical timing
process, there are two time stamps in the header of each PES packet: the Decoding Time
Stamp (DTS) and the Presentation Time Stamp (PTS). These tell the decoder when a
frame must be decoded (DTS) and displayed (PTS).

If the DTS for a frame precedes its PTS considerably, the frame is
decoded and held in a buffer until its presentation time arrives. The following figure
shows the timing sequence in the transport stream. Before the transport stream is created,
the encoder adds PTSs and DTSs to each frame in the PES. It also places the PCR for
each program into the transport stream. Inside the decoder, the PCR goes through a
Phase Lock Loop (PLL) algorithm, which locks the decoder’s clock to the STC of the
encoder. This synchronizes the decoder with the encoder so that data buffers in the
decoder do not overflow or underflow. Once the decoder’s clock is synchronized, the
decoder begins decoding and presenting programs as specified by the PTS and DTS for
each audio or video frame.
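The PCR arithmetic can be illustrated directly. In the bit stream the 27 MHz count is carried as a 33-bit base running at 90 kHz plus a 9-bit extension counting 27 MHz cycles, so the reconstructed clock value is base x 300 + extension:

```python
# The PCR counts ticks of the encoder's 27 MHz System Time Clock.
# It is coded as a 33-bit base at 90 kHz plus a 9-bit extension
# (0..299) counting 27 MHz cycles within one base tick.
def pcr_to_seconds(base, extension):
    """Convert a coded PCR value to seconds of STC time."""
    ticks_27mhz = base * 300 + extension
    return ticks_27mhz / 27_000_000

one_second = pcr_to_seconds(90_000, 0)   # 90,000 base ticks = 1 s
```

The decoder compares successive reconstructed PCR values against its own clock and steers the local 27 MHz oscillator through the PLL described above.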

Transport stream timing

3.11 MPEG-2 PSI Tables:

Because viewers may choose from multiple programs on a single transport
stream, a decoder must be able to quickly sort and access video, audio, and data for the
various programs. Program Specific Information (PSI) tables act as a table of contents for
the transport stream, providing the decoder with the data it needs to find each program
and present it to the viewer. PSI tables help the decoder locate the audio and video for
each program in the transport stream and verify Conditional Access (CA) rights. The
tables are repeated frequently (for example, 10 times per second) in the stream to support
the random access required by a decoder turning on or switching channels. The following
table gives a basic overview of the PSI tables.

While MPEG-2 PSI tables enable the decoder to decipher the programs on
a single transport stream, they do not provide enough information to support the
numerous programs and services available on an entire network of transport streams. The
Digital Video Broadcasting (DVB) standard defines a set of tables, called Service
Information (SI) tables, that extend the capabilities of the MPEG-2 system layer such
that a decoder can receive and decode any number of programs and services across a
network of transport streams.

The MPEG-2 standards for compression and transmission have made
digital television a reality. Coupled with DVB or ATSC, MPEG-2 is being used to
transmit entire networks of digital programming and services to customers all over the
world. As it expands and converges with other new technologies, it will likely reshape the
way we see and use television.

REED SOLOMON ERROR CORRECTION CODE

4.1 INTRODUCTION:

The integrity of received data is a critical consideration in the design of digital
communications and storage systems. Many applications require the absolute validity of
the received message, allowing no room for errors encountered during transmission
and/or storage. Reliability considerations frequently require that Forward Error Correction
(FEC) techniques be used when Error Detection And Correction (EDAC) strategies are
required. The power of FEC is that the system can, without retransmissions, find and
correct limited errors caused by a transport or storage system.

While there are several approaches to FEC, this note will concentrate on
the Reed-Solomon codes. These codes provide powerful correction, have high channel
efficiency, and are very versatile. With the advent of VLSI implementations, such as the
AHA PerFEC 4000 series, RS codes can be easily and economically applied to both high
and low data rate systems. In some new circuits, the FEC function is integrated with data
formatters and buffer managers.

A typical system is shown here:

The Reed-Solomon encoder takes a block of digital data and adds extra "redundant" bits.
Errors occur during transmission or storage for a number of reasons (for example, noise,
interference, or scratches on a CD).
The Reed-Solomon decoder processes each block and attempts to correct errors and
recover the original data. The number and type of errors that can be corrected depend
on the characteristics of the Reed-Solomon code.

4.2 REED-SOLOMON BLOCK CODES CHARACTERISTICS:

Block codes differ from other EDAC codes because they process data in
batches or blocks rather than continuously. The data is partitioned into blocks, and each
block is processed as a single unit by both the encoder and the decoder.

There are two classifications of block codes: systematic and non-systematic.
Non-systematic codes add redundancy and transform the coded message such
that no part of the original message is recognizable from the un-decoded message. Non-
systematic codes must be decoded properly before any message information is available
at the receiver. With systematic codes the message data is not disturbed in any way in the
encoder and the redundant symbols are added separately to each block.

The AHA RS codecs implement a systematic block code. Because of the internal
architecture of the PerFEC codecs, all of these actions appear to take place continuously
in real time, regardless of the error patterns encountered. For an RS code, each symbol
may be represented as a binary m-tuple. RS codes may be considered a special case of
Bose-Chaudhuri-Hocquenghem (BCH) codes.

4.3 CHANNEL NOISE AND ERROR TYPES:

A system’s noise environment can cause errors in the received message.
Properties of these errors depend upon the noise characteristics of the channel. Errors,
which are usually encountered, fall into three broad categories:

1) Random errors - the bit error probabilities are independent or nearly independent of
each other. Additive noise typically causes random errors.

2) Burst errors - the bit errors occur sequentially in time and as groups. Media defects
in digital storage systems typically cause burst errors.

3) Impulse errors - large blocks of the data are full of errors. Lightning strikes and
major system failures typically cause impulse errors.

Random errors occur in the channel when individual bits in the transmitted
message are corrupted by noise. Random errors are generally caused by thermal noise in
communications channels. We will show that block codes, and specifically the Reed-
Solomon codes, can be a good choice for correcting random channel errors. Burst errors
happen in the channel when errors occur continuously in time.

Burst errors can be caused by fading in a communications channel or by
large media and mechanical defects in a storage system. For some codes, burst errors are
difficult to correct; however, block codes (including Reed-Solomon codes) handle burst
errors very efficiently.

Impulse errors can cause catastrophic failures in communications
systems that are so severe they may be unrecoverable by FEC using present-day coding
schemes. In general, all coding systems fail to reconstruct the message in the presence of
catastrophic errors. However, certain codes like the Reed-Solomon codes can detect the
presence of a catastrophic error by examining the received message.

This is very useful in system design because the unrecoverable message
can at least be flagged at the decoder. The following sections describe RS codes and
focus on their performance in each of these noise environments.

ENCODING AND DECODING OF REED SOLOMON CODES:

4.4 ENCODING:

Reed-Solomon codes are a subset of BCH codes and are linear block
codes. A Reed-Solomon code is specified as RS(n,k) with s-bit symbols. This means that
the encoder takes k data symbols of s bits each and adds parity symbols to make an n-
symbol codeword. There are n-k parity symbols of s bits each.

A Reed-Solomon decoder can correct up to t symbols that contain errors in
a codeword, where 2t = n-k.

The following diagram shows a typical Reed-Solomon codeword (this is
known as a systematic code because the data is left unchanged and the parity symbols are
appended).

Example: A popular Reed-Solomon code is RS(255,223) with 8-bit symbols. Each
codeword contains 255 bytes, of which 223 bytes are data and 32 bytes are
parity.
For this code:

n = 255, k = 223, s = 8
2t = 32, t = 16
The decoder can correct errors in up to 16 symbols in the codeword: i.e., errors in up to
16 bytes anywhere in the codeword can be automatically corrected.

Given a symbol size s, the maximum codeword length (n) for a Reed-Solomon code is
n = 2^s - 1.
For example, the maximum length of a code with 8-bit symbols (s=8) is
255 bytes. Reed-Solomon codes may be shortened by (conceptually) making a number of
data symbols zero at the encoder, not transmitting them, and then re-inserting them at the
decoder. Example: The (255,223) code described above can be shortened to (200,168).

The encoder takes a block of 168 data bytes, (conceptually) adds 55 zero
bytes, creates a (255,223) codeword and transmits only the 168 data bytes and 32 parity
bytes. The amount of processing "power" required to encode and decode Reed-Solomon
codes is related to the number of parity symbols per codeword. A large value of t means
that a large number of errors can be corrected but requires more computational power
than a small value of t.

Symbol errors: One symbol error occurs when 1 bit in a symbol is
wrong or when all the bits in a symbol are wrong.

Example: RS (255,223) can correct 16 symbol errors. In the worst case, 16 bit errors
may occur, each in a separate symbol (byte) so that the decoder corrects 16 bit errors. In
the best case, 16 complete byte errors occur so that the decoder corrects 16 x 8 bit errors.
Reed-Solomon codes are particularly well suited to correcting burst errors (where a
series of bits in the codeword are received in error).
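The relationships among n, k, s and t above can be sketched in a few lines of C++ (an illustration added for clarity; the helper names are our own):

```cpp
// Helper sketches for RS(n, k) parameter arithmetic (illustrative only).

// Number of parity symbols added by the encoder.
int rs_parity(int n, int k) { return n - k; }

// Symbol-error correction capability: 2t = n - k.
int rs_t(int n, int k) { return (n - k) / 2; }

// Maximum codeword length for s-bit symbols: n = 2^s - 1.
int rs_max_n(int s) { return (1 << s) - 1; }
```

For RS(255,223) these give 32 parity symbols and t = 16; the shortened RS(200,168) keeps the same 32 parity symbols and therefore the same t.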

4.5 DECODING:

Reed-Solomon algebraic decoding procedures can correct errors and
erasures. An erasure occurs when the position of an erred symbol is known. A decoder
can correct up to t errors or up to 2t erasures. The demodulator can often supply erasure
information in a digital communication system, i.e. the demodulator "flags" received
symbols that are likely to contain errors.

When a codeword is decoded, there are three possible outcomes:

1. If 2s + r < 2t (s errors, r erasures) then the original transmitted code word will always
be recovered,
OTHERWISE
2. The decoder will detect that it cannot recover the original code word and indicate this
fact.
OR
3. The decoder will mis-decode and recover an incorrect code word without any
indication.

The probability of each of the three possibilities depends on the particular Reed-Solomon
code and on the number and distribution of errors.

Coding gain: The advantage of using Reed-Solomon codes is that the probability of an
error remaining in the decoded data is (usually) much lower than the probability of an
error if Reed-Solomon is not used. This is often described as coding gain.

Example: A digital communication system is designed to operate at a Bit Error Ratio
(BER) of 10^-9, i.e. no more than 1 in 10^9 bits are received in error. This can be
achieved by boosting the power of the transmitter or by adding Reed-Solomon (or another
type of Forward Error Correction).

Reed-Solomon allows the system to achieve this target BER with a lower transmitter
output power. The power "saving" given by Reed-Solomon (in decibels) is the coding
gain.

4.6 PARAMETERS:

The parameters of a Reed-Solomon code are:


m = the number of bits per symbol
n = the block length in symbols
k = the uncoded message length in symbols
(n-k) = the parity check symbols (check bytes)
t = the number of correctable symbol errors
(n-k) = 2t (for n-k even)
(n-k)-1 = 2t (for n-k odd)

Therefore, an RS code may be described as an (n,k) code for any RS code where
n ≤ 2^m - 1, and n - k ≥ 2t.

RS codes operate on multi-bit symbols rather than on individual bits like binary codes.
The AHA PerFEC codecs are typical of RS codes and use 8-bit symbols. This allows
symbols to correspond to digital bytes.

Consider the RS(255,235) code. The encoder groups the message into 235 8-bit symbols
and adds 20 8-bit symbols of redundancy to give a total block length of 255 8-bit
symbols. In this case, 8% of the transmitted message is redundant data. In general, due to
decoder constraints, the block length cannot be arbitrarily large. The block length for the
PerFEC codecs is bounded by the following equation: The number of correctable symbol
errors (t), and the block length (n), are set by the user.

4.7 Architectures for encoding and decoding Reed-Solomon codes:

Reed-Solomon encoding and decoding can be carried out in software or in special-
purpose hardware.

4.7.1 Finite (Galois) Field Arithmetic:

Reed-Solomon codes are based on a specialist area of mathematics known
as Galois fields or finite fields. A finite field has the property that arithmetic operations
(+, -, x, / etc.) on field elements always have a result in the field.

A Reed-Solomon encoder or decoder needs to carry out these arithmetic operations,
which require special hardware or software functions to implement.
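As an illustrative sketch of such functions (not taken from the AHA parts), GF(2^8) addition and multiplication can be implemented as follows. The primitive polynomial 0x11D used here is an assumption; it is a common choice for 8-bit Reed-Solomon codes, but other systems use different field polynomials:

```cpp
#include <cstdint>

// Addition and subtraction in GF(2^8) are both bitwise XOR.
uint8_t gf_add(uint8_t a, uint8_t b) { return a ^ b; }

// Multiplication in GF(2^8), reduced modulo the primitive polynomial
// x^8 + x^4 + x^3 + x^2 + 1 (0x11D), a common but assumed choice.
uint8_t gf_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;        // add a into the running product
        bool carry = a & 0x80;    // would the shift overflow 8 bits?
        a <<= 1;
        if (carry) a ^= 0x1D;     // reduce by the low byte of 0x11D
        b >>= 1;
    }
    return p;
}

// Multiplicative inverse of a nonzero element: a^254 = a^(-1) in GF(2^8),
// computed by square-and-multiply. Division is gf_mul(a, gf_inv(b)).
uint8_t gf_inv(uint8_t a) {
    uint8_t r = a;                // exponent 1
    for (int i = 0; i < 6; i++) {
        r = gf_mul(r, r);         // double the exponent
        r = gf_mul(r, a);         // then add one: 3, 7, 15, 31, 63, 127
    }
    return gf_mul(r, r);          // final squaring gives exponent 254
}
```

Every result stays within one byte, which is what makes these operations so convenient for 8-bit symbol codecs.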

4.7.2 Generator Polynomial:

A Reed-Solomon codeword is generated using a special polynomial. All
valid codewords are exactly divisible by the generator polynomial. The general form of
the generator polynomial is:

g(x) = (x - a^i)(x - a^(i+1)) ... (x - a^(i+2t-1))

and the codeword is constructed using:

c(x) = g(x).i(x)

where g(x) is the generator polynomial, i(x) is the information block, c(x)
is a valid codeword and a is referred to as a primitive element of the field.

Example: Generator for RS (255,249)

4.8 Encoder architecture:

The 2t parity symbols in a systematic Reed-Solomon codeword are given by:

p(x) = i(x).x^(n-k) mod g(x)

The following diagram shows an architecture for a systematic RS(255,249) encoder.
Each of the 6 registers holds a symbol (8 bits). The arithmetic operators carry out finite
field addition or multiplication on a complete symbol.

4.9 Decoder architecture:

A general architecture for decoding Reed-Solomon codes is shown in the
following diagram.

Key:
r(x) :: Received codeword
Si :: Syndromes
L(x) :: Error locator polynomial
Xi :: Error locations
Yi :: Error magnitudes
c(x) :: Recovered codeword
v :: Number of errors

The received codeword r(x) is the original (transmitted) codeword c(x) plus errors:
r(x) = c(x) + e(x)

A Reed-Solomon decoder attempts to identify the position and magnitude


of up to t errors (or 2t erasures) and to correct the errors or erasures.

Syndrome calculation: This is a similar calculation to a parity calculation. A
Reed-Solomon codeword has 2t syndromes that depend only on errors (not on the
transmitted codeword). The syndromes can be calculated by substituting the 2t roots of
the generator polynomial g(x) into r(x).
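The substitution can be sketched with Horner's rule. This illustration again assumes the 0x11D field and roots a^1 ... a^2t; for a codeword built with matching conventions, all 2t syndromes of an error-free word come out zero:

```cpp
#include <cstdint>
#include <vector>

// GF(2^8) multiply over the (assumed) primitive polynomial 0x11D.
uint8_t gf_mul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    while (b) {
        if (b & 1) p ^= a;
        bool carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1D;
        b >>= 1;
    }
    return p;
}

// Syndrome S_i = r(a^i) for i = 1..2t: evaluate the received polynomial
// (coefficients given highest degree first) at each root of g(x) using
// Horner's rule.
std::vector<uint8_t> syndromes(const std::vector<uint8_t>& r, int two_t) {
    std::vector<uint8_t> s(two_t);
    uint8_t root = 1;
    for (int i = 0; i < two_t; i++) {
        root = gf_mul(root, 2);               // a^(i+1)
        uint8_t acc = 0;
        for (uint8_t coef : r)
            acc = gf_mul(acc, root) ^ coef;   // Horner step
        s[i] = acc;
    }
    return s;
}
```

Because the syndromes depend only on the error pattern, a decoder can stop immediately when all of them are zero.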

4.10 Error Detection and Correction:

Error detection and correction codes use redundancy to perform their
function. This means that extra code symbols are added to the transmitted message to
provide the necessary detection and correction information. The simplest form of
redundancy for detecting errors in digital binary messages is the parity-check scheme. In
the even parity scheme, an extra bit is attached to each message block so the total number
of 1's in the data block is even.

A transmission error is detected when the number of 1’s in the received message is odd.
This simple scheme will detect only an odd number of errors in the transmitted message
and cannot correct erroneous messages.
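The even-parity scheme can be sketched in a few lines (an illustration; an 8-bit data block is assumed):

```cpp
#include <cstdint>

// Even-parity bit for an 8-bit data block: chosen so that the total
// number of 1s in data plus parity is even.
int parity_bit(uint8_t data) {
    int ones = 0;
    for (int i = 0; i < 8; i++) ones += (data >> i) & 1;
    return ones % 2;              // 1 exactly when data has an odd count of 1s
}

// Receiver-side check: true when the received data and parity bit no
// longer have an even total of 1s, i.e. an odd number of bits flipped.
bool parity_error(uint8_t data, int p) {
    return (parity_bit(data) ^ p) != 0;
}
```

As the text says, an even number of bit errors passes this check undetected, and the scheme cannot say which bit is wrong.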

Redundancy is used by all Forward Error Correction (FEC) codes to
perform EDAC. FEC codes allow a receiver in the system to perform EDAC without
requesting a retransmission. In 1948, C.E. Shannon published a classic technical paper on
using redundancy to perform EDAC. In it, he proved that impressive performance gains
can be realized with the proper use of redundancy, but the paper gave no indication as to
which codes might be used to achieve these gains. In the following years rapid
advancements were made in both EDAC theory and EDAC practice. In 1960, I. Reed and
G. Solomon developed the "block code" coding technique called Reed-Solomon (RS)
coding. Today, RS codes remain popular due to standards compliance and economic
implementations.

The entire range of possible code words (or "dictionary"), therefore,
consists of only 16 seven-bit words (out of a possible 128 combinations). If a code word
is received that does not match one of these 16, then it is, by definition, in error. To
locate the error, the XOR equations are performed on the incoming data.
If no error has occurred, then the three equations will all return correct (logical true). If
any one of the 7 bits is in error, then a certain subset of the three XOR equations will fail,
because each of the 7 bits occurs in a different subset of the three equations. The table
below shows which equations will be false for each bit in error. Once the incorrect bit is
located, it is corrected by inverting it.

Thus, by definition, if Equations 1 and 2 are false, and Equation 3 is true,
then bit B is in error. When we invert it (correct it) we get 1 1 0 0 0 0 1, which is a
correct code word. Thus, we see that if any one of the 7 bits is in error, we can find and
correct the error by interpreting the pattern of failed XOR equations. Reed-Solomon
codes work essentially the same as Hamming codes, except RS codes deal with multi-bit
symbols rather than individual bits. For example, a (255,235) Reed-Solomon code
specifies a total block length of 255 bytes (or symbols): 235 bytes used for information
and 20 check bytes. The check bytes are calculated in a similar manner to the 3 check bits
in the Hamming code example above and are appended to the end of the data block.
Reed-Solomon codes are much more complex, however, and require a significant amount
of arithmetic and logical processing.
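The exact codeword layout of the 16-word example above is not reproduced here, so the following sketch uses the textbook Hamming (7,4) arrangement, in which the pattern of failed checks directly spells out the position of the bad bit (the same idea as the table described above, though the bit labels may differ):

```cpp
#include <cstdint>

// Textbook Hamming (7,4): bit positions 1..7, with positions 1, 2 and 4
// holding parity and positions 3, 5, 6, 7 holding the four data bits.
uint8_t hamming74_encode(uint8_t d) {
    int b3 = (d >> 0) & 1, b5 = (d >> 1) & 1;
    int b6 = (d >> 2) & 1, b7 = (d >> 3) & 1;
    int p1 = b3 ^ b5 ^ b7;            // checks positions 1,3,5,7
    int p2 = b3 ^ b6 ^ b7;            // checks positions 2,3,6,7
    int p4 = b5 ^ b6 ^ b7;            // checks positions 4,5,6,7
    return p1 | (p2 << 1) | (b3 << 2) | (p4 << 3)
              | (b5 << 4) | (b6 << 5) | (b7 << 6);
}

// Re-run the three checks; the failed ones, read as a binary number,
// give the position of the single erroneous bit, which is then inverted.
uint8_t hamming74_correct(uint8_t cw) {
    int s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    int s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    int s4 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    int pos = s1 | (s2 << 1) | (s4 << 2);   // 0 means no error detected
    if (pos) cw ^= 1 << (pos - 1);          // invert the located bit
    return cw;
}
```

Any single-bit error in any of the 16 codewords is located and corrected this way, exactly as the text describes.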

4.11 Channel Capacity and Coding Limits:

System capacity, C, in bits per second gives an upper limit to the number
of bits per second that can be reliably transmitted across a given communications
channel. In a paper published in 1948, Shannon showed that the system capacity for
channels perturbed by additive white Gaussian noise is a function of three system
parameters:
W - Channel bandwidth in Hz
S - Received signal power
N - Additive noise power
The capacity relationship among these parameters, known as the Shannon-Hartley
Theorem, can be stated as:

C = W log2 ( 1 + S/N ) = W log2 ( 1 + (Eb/No).(R/W) )

where Eb is the signal energy per bit and No is the noise power spectral density in
Watts/Hz. Shannon proved that on an infinite bandwidth channel, with a sufficiently
complicated coding scheme, it is possible to transmit information with an arbitrarily small
error rate. This can be accomplished at a transmission rate of (R) bits/sec, where R < C.
For a rate R > C, it is not possible to achieve an arbitrarily small error rate no matter what
code is used and no matter how much redundancy is added.

It may be shown from the Shannon-Hartley Theorem that the required
Eb/No approaches the Shannon limit of -1.6 dB as W increases without bound.
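The theorem and the limit can be checked numerically (an illustration; the 8 MHz bandwidth is borrowed from the DVB channel grid mentioned in the introduction):

```cpp
#include <cmath>

// Shannon-Hartley capacity: C = W log2(1 + S/N) bits per second.
double capacity(double W_hz, double snr) {
    return W_hz * std::log2(1.0 + snr);
}

// The limit on Eb/No for reliable transmission as W grows without
// bound: ln 2 as a power ratio, or about -1.6 dB.
double shannon_limit_db() {
    return 10.0 * std::log10(std::log(2.0));
}
```

For example, an 8 MHz channel at a signal-to-noise power ratio of 15 (about 11.8 dB) has a capacity of 8e6 * log2(16) = 32 Mb/s.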

An excellent measure of a code's performance is how well it performs in relation
to the Shannon bound. Shannon's initial work proved that good codes do exist, but he
never showed how to generate the codes.

Today using modern codes, including the Reed-Solomon codes, coding systems
have been designed to operate within a few dB of the Shannon bound.

4.12 Code Rate (R):

The code rate (efficiency) of a code is given by:

code rate = R = k/n

where k is the number of information (message) symbols per block, and n is the total number
of code symbols per block. This definition holds for all codes whether block codes or not.
Codes with high code rates are generally desirable because they efficiently use the
available channel for information transmission. RS codes typically have rates greater than
80% and can be configured with rates greater than 99% depending on the error correction
capability needed. The RS codes used in the AHA PerFEC codecs have rates which can
be as high as 99.2%.
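The rate figures quoted above follow directly from the definition (a small illustration):

```cpp
// Code rate R = k / n.
double code_rate(int n, int k) {
    return static_cast<double>(k) / n;
}
```

RS(255,223) has R = 223/255, about 87.5%, while a lightly protected code such as RS(255,253) reaches 253/255, about 99.2%.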

4.13 Interleaving:

Interleaving is another tool used by the code designer to match the error
correcting capabilities of the code to the error characteristics of the channel. Interleaving
in digital communications systems enhances the random-error correcting capabilities of a
code to the point that it can also become useful in a burst-noise environment.
The interleaver subsystem rearranges the encoded bits over a span of several block
lengths. The amount of error protection, based on the length of bursts encountered on the
channel, determines the span length of the interleaver. The receiver must be given the
details of the bit arrangement so the bit stream can be de-interleaved before it is decoded.
The overall effect of interleaving is to spread out the effects of long bursts so they appear
to the decoder as independent random bit errors or shorter more manageable burst errors.
The AHA RS codecs require external circuitry to accomplish interleaving.
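A simple block interleaver illustrates the idea: symbols are written into a rows-by-columns array one way and read out the other way, so a channel burst of up to `rows` consecutive symbols lands in `rows` different codewords after de-interleaving. (This sketch is our own illustration; the external interleaving circuitry used with the AHA parts is not specified here.)

```cpp
#include <cstddef>
#include <vector>

// Write the stream row by row, read it out column by column.
std::vector<int> interleave(const std::vector<int>& in,
                            std::size_t rows, std::size_t cols) {
    std::vector<int> out(rows * cols);
    for (std::size_t r = 0; r < rows; r++)
        for (std::size_t c = 0; c < cols; c++)
            out[c * rows + r] = in[r * cols + c];
    return out;
}

// The receiver reverses the mapping before decoding.
std::vector<int> deinterleave(const std::vector<int>& in,
                              std::size_t rows, std::size_t cols) {
    std::vector<int> out(rows * cols);
    for (std::size_t r = 0; r < rows; r++)
        for (std::size_t c = 0; c < cols; c++)
            out[r * cols + c] = in[c * rows + r];
    return out;
}
```

A burst hitting three adjacent symbols of the interleaved stream ends up as three isolated single-symbol errors in three different rows, which a Reed-Solomon decoder handles easily.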

4.14 Correction Power Of RS Codes:

In general, an RS decoder can detect and correct up to (t = r/2) incorrect symbols
if there are (r = n - k) redundant symbols in the encoded message. If the code is being
used only to detect errors and not to correct them, (r) errors can be detected.

One redundant symbol is used in detecting and locating each error, and one more
redundant symbol is used in identifying the precise value of that error. This concept of
using redundant symbols to either locate or correct errors is useful in the understanding of
erasures.

The term “erasures” is used for errors whose position is identified at the decoder
by external circuitry. If an RS decoder has been instructed that a specific message symbol
is in error, it only has to use one redundant symbol to correct that error and does not have
to use an additional redundant symbol to determine the location of the error.

If the locations of all the errors are given to the RS codec by the control logic of
the system, 2t erasures can be corrected. In general, if (E) symbols are known to be in
error (i.e. erasures) and if there are (e) errors with unknown locations, the block can be
correctly decoded provided that (2e + E) < r.
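This budget can be written down directly. Note that sources differ on whether the bound is strict: the text above uses (2e + E) < r, while the usual statement is 2e + E <= r = n - k, which is what makes exactly t = r/2 plain errors correctable. The sketch uses the non-strict form:

```cpp
// True when e errors of unknown location plus E erasures fit within the
// redundancy r = n - k (non-strict form: 2e + E <= r).
bool decodable(int e_unknown, int erasures, int n, int k) {
    return 2 * e_unknown + erasures <= (n - k);
}
```

For RS(255,223) this admits 16 plain errors, or 32 erasures, or any mix in between, but nothing beyond the r = 32 redundant symbols.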

4.15 Advantages of Error Detection And Correction:

EDAC has a number of advantages for the design of high reliability digital
systems:

1) Forward Error Correction (FEC) enables a system to achieve a high degree of data
reliability, even with the presence of noise in the communications channel. Data
integrity is an important issue in most digital communications systems and in all
mass storage systems.

2) In systems where improvement using any other means (such as increased transmit
power or components that generate less noise) is very costly or impractical, FEC
can offer significant error control and performance gains.

3) In systems with satisfactory data integrity, designers may be able to implement FEC
to reduce the costs of the system without affecting the existing performance.

This is accomplished by degrading the performance of the most costly or
sensitive element in the system, and then regaining the lost performance with the
application of FEC. In general, for digital communication and storage systems where data
integrity is a design criterion, FEC needs to be an important element in the trade-off study
for the system design.

The introduction of the PerFEC line of FEC encoders and decoders makes
powerful FEC implementation a realistic goal for most digital communication and
storage systems. More than ever before, FEC is available for a wide range of
applications.

4.16 Applications of Reed Solomon Codes:

Reed-Solomon codes are block-based error correcting codes with a wide range of
applications in digital communications and storage. Reed-Solomon codes are used to
correct errors in many systems including:

 Storage devices (including tape, Compact Disk, DVD, barcodes, etc)

 Wireless or mobile communications (including cellular telephones)

 Satellite communications

 Digital television / DVB

 High-speed modems

It is also used in microwave links.

4.17 Implementation:

For implementation, we have to consider the following steps:

 Taking the original frames into consideration.

 Appending noise to the original frames.

 Correcting the original frames.

 Comparing the original frame and the noise frame.

 Comparing the original frame and the corrected frame.

 Calculating the PSNR between the original frames, noise frames and corrected

frames.

5. APPENDIX:

Appending noise:

#include <io.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

int main ( )
{
    char fname [80];

    for ( int ix = 1; ix < 11; ix++ )
    {
        sprintf ( fname, "frm%02d.bmp", ix );
        int hin = open ( fname, O_RDWR | O_BINARY );
        int fl = filelength ( hin );
        lseek ( hin, 54, SEEK_SET );       // skip the 54-byte BMP header
        fl -= 54;
        char * buffer = new char [ fl ];
        read ( hin, buffer, fl );

        for ( int iy = 0; iy < fl; iy++ )
            buffer [iy] ^= 0x80;           // NOISE: invert the top bit of every pixel byte

        lseek ( hin, 54, SEEK_SET );
        write ( hin, buffer, fl );
        close ( hin );
        delete [ ] buffer;
    }
    return 0;
}

Correcting the Original Frames:

#include <io.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <mem.h>
#include <stdio.h>
#include <stdlib.h>

int main ( )
{
    char fname [80];
    char oname [80];

    srand ( 100 );

    for ( int ix = 1; ix < 11; ix++ )
    {
        sprintf ( fname, "frm%02d.bmp", ix );
        sprintf ( oname, "ok%02d.bmp", ix );

        int hin = open ( fname, O_RDONLY | O_BINARY );
        int fl = filelength ( hin );

        int rn = rand ( );
        int byteno = rn % 16;
        // GENERATOR POLYNOMIAL constant; masked so the shift below stays
        // within one byte.
        int bitno = ( byteno ^ 0xFA6E0000 ) & 7;

        char * buffer = new char [ fl ];
        char * buf = new char [ fl ];

        read ( hin, buffer, fl );
        memcpy ( buf, buffer, fl );

        int offset = 54;                   // skip the BMP header
        int limit = fl - offset;

        for ( int iy = 0; iy < limit; iy++ )
            buffer [iy + offset] ^= 0x80;  // NOISE

        buf [ byteno ] ^= ( 1 << bitno );  // flip one bit of the reference copy

        int jj = open ( oname, O_CREAT | O_RDWR | O_BINARY, S_IREAD | S_IWRITE );
        write ( jj, buffer, fl );
        close ( jj );
        close ( hin );

        delete [ ] buffer;
        delete [ ] buf;
    }
    return 0;
}

Calculating the PSNR:

// =============================================================================
// Source File: PSNR.CPP ( compute SNR and Peak SNR between 2 images )
// =============================================================================

#include <math.h>
#include <stdio.h>

const double Lower_Threshold = 1.0e-80;
const double Upper_Threshold = 1.0e+80;

// =============================================================================
// Function: int Compute_PSNR ( char *, char *, int, int, double & )
// =============================================================================

int cdecl Compute_PSNR
(
    char * Pix_Buf_1,     // Buffer 1 containing pixel data
    char * Pix_Buf_2,     // Buffer 2 containing pixel data
    int Buf_Width,        // Width of the buffers (should be same)
    int Buf_Height,       // Height of the buffers (should be same)
    double & Peak_SNR
)
{
    double DiffSqr = 0.0;

    for ( int rr = 0; rr < Buf_Height; rr++ )
    {
        char * LinePtr_1 = Pix_Buf_1 + rr * Buf_Width;
        char * LinePtr_2 = Pix_Buf_2 + rr * Buf_Width;

        for ( int cc = 0; cc < Buf_Width; cc++ )
        {
            // Cast through unsigned char so pixel values above 127 are
            // not sign-extended.
            short s_Pix_1 = ( unsigned char ) LinePtr_1 [ cc ];
            short s_Pix_2 = ( unsigned char ) LinePtr_2 [ cc ];

            short Diff = s_Pix_1 - s_Pix_2;
            DiffSqr += ( double ) Diff * Diff;
        }
    }

    // =========================================================================
    // Check point for divide by 0! ( Happens, if both input buffers are same )
    // =========================================================================

    double PixelCount = double ( Buf_Width ) * Buf_Height;
    double rmse = sqrt ( DiffSqr / PixelCount );   // root of the MEAN squared error

    if ( rmse < Lower_Threshold || rmse > Upper_Threshold )
    {
        Peak_SNR = 0.0;
        return -1;
    }

    Peak_SNR = 20.0 * log10 ( 255.0 / rmse );
    return 0;
}
// End of Source File: PSNR.CPP

Result:

To implement the Reed-Solomon error correction code we will consider a stream of 10
frames, and the results are shown as follows.

Frame 1: Original image

Noisy image:

Corrected image:

Frame 2:

Original image:

Noisy image:

Corrected image

Frame 3:

Original image:

Noisy image:

Corrected image:

Frame 4:

Original image:

Noisy image:

Corrected image:

Frame 5:

Original image:

Noisy image:

Corrected image:

Frame 6:

Original image

Noisy image:

Corrected image:

Frame 7:

Original image

Noisy image:

Corrected image:

Frame 8:

Original image:

Noisy image:

Corrected image:

Frame 9:

Original image

Noisy image:

Corrected image:

Frame 10:

Original image:

Noisy image:

Corrected image:

6. CONCLUSION:

Reed-Solomon error correction is an error-correcting code that works by oversampling
a polynomial constructed from the data. The polynomial is evaluated at several points,
and these values are sent or recorded. Sampling the polynomial more often than is
necessary makes the polynomial over-determined. As long as it receives "many" of the
points correctly, the receiver can recover the original polynomial even in the presence of
a "few" bad points.

Reed-Solomon codes are used in a wide variety of commercial applications, most
prominently in CDs and DVDs, in data transmission technologies such as DSL and
WiMAX, and in broadcast systems such as DVB and ATSC.

Reed-Solomon codes are among the best codes for the purpose of Forward Error
Correction, and we can conclude that their efficiency in correcting errors is high, with
accuracy between 90% and 99%. For higher accuracy we turn to further enhancements,
which will yield good results.

6.1 FUTURE ENHANCEMENTS:

The codes used for enhancing Reed-Solomon are as follows:

 LDPC CODE
 TURBO CODE
 SPACE–TIME CODE

6.2 LDPC Code:

In information theory, a low-density parity-check code (LDPC code) is an error
correcting code, a method of transmitting a message over a noisy transmission channel.
While LDPC and other error correcting codes cannot guarantee perfect transmission, the
probability of lost information can be made as small as desired. LDPC was the first code
to allow data transmission rates close to the theoretical maximum, the Shannon Limit.
Impractical to implement when developed in 1963, LDPC codes were forgotten. The next
30 or so years of information theory failed to produce anything as effective, and LDPC
remains, in theory, the most effective code developed to date (2007).

The explosive growth in information technology has
produced a corresponding increase of commercial interest in the development of highly
efficient data transmission codes, as such codes impact everything from signal quality to
battery life. Although implementation of LDPC codes has lagged that of other codes,
notably the turbo code, the absence of encumbering software patents has made LDPC
attractive to some, and LDPC codes are positioned to become a standard in the developing
market for highly efficient data transmission methods. In 2003, an LDPC code beat six
turbo codes to become the error correcting code in the new DVB standard for the satellite
transmission of digital television.

LDPC codes are also known as Gallager codes, in honor of Robert G. Gallager, who
developed the LDPC concept in his doctoral dissertation at MIT in 1960.

LDPC advantage:

The key advantage of LDPC over codes such as Reed-Solomon is that LDPC codes
approach the theoretical (Shannon) limit as the block size increases; in terms of error
recovery they are among the best error correcting codes available.

Applications of LDPC:

Besides its adoption for the satellite broadcasting of digital television noted above,
LDPC coding has since been applied in other high-speed data transmission standards.

6.3 Turbo Code:

In electrical engineering and digital communications, turbo codes are a class of
high-performance error correction codes developed in 1993 which are finding use in
deep space satellite communications and other applications where designers seek to
achieve maximal information transfer over a limited-bandwidth communication link in
the presence of data-corrupting noise.

Advantages of Turbo Code:

Of all practical error correction methods known to date, turbo codes and low-
density parity-check codes (LDPCs) come closest to approaching the Shannon limit, the
theoretical limit of maximum information transfer rate over a noisy channel.

Turbo codes make it possible to increase data rate without increasing the power of
a transmission, or they can be used to decrease the amount of power used to transmit at a
certain data rate. Their main drawback is the relatively high decoding complexity, which
makes them unsuitable for some applications. For satellite use this is not of great concern,
since the transmission distance itself introduces latency due to the limited speed of light.

Disadvantage of Turbo Code:

The complexity of these algorithms, and the fact that they are covered by encumbering
software patents, are a deterrent to implementing them in a system.

6.4 Space–time code:

A space–time code (STC) is a method employed to improve the reliability of data
transmission in wireless communication systems using multiple transmit antennas. STCs
rely on transmitting multiple, redundant copies of a data stream to the receiver in the
hope that at least some of them may survive the physical path between transmission and
reception in a good enough state to allow reliable decoding.

Bibliography:

Websites:

 http://www.xilinx.com/support/documentation/white_papers/wp270.pdf

 www2.sims.berkely.edu/courses/is224/399/report1.html

 www.4i2i.com

 http://en.wikipedia.org/wiki/Error_detection_and_correction

 www.ieeestandards.com

 www.google.com

Publications:

 ATSC Standard: Digital Television Standard, Revision B (Doc. A/53B)

 Wicker, "Error Control Systems for Digital Communication and Storage",
Prentice-Hall, 1995

 Lin and Costello, "Error Control Coding: Fundamentals and Applications",
Prentice-Hall, 1983

 Clark and Cain, "Error Correction Coding for Digital Communications",
Plenum, 1988

 Wilson, "Digital Modulation and Coding", Prentice-Hall, 1996

Project Work Submitted by:

B.V.DURGAPRASAD
E-Mail - venkat14333@gmail.com
Phone Number: +919490188903

P.V.S.SOWJANYA
E-Mail - sowra33@gmail.com
Phone Number: +919440308287

G.RAMA NAIDU
E-Mail – ramanaidu_412@yahoo.co.in
Phone Number: +919848941651

V.MADHURI
E-Mail - madhu_vepa@yahoo.com
Phone Number: +919490186460

M.V.SWAMI NAIDU
E-Mail - swamimalla@gmail.com
Phone Number: +919989797018

