
TEACHING NOTES

LCE/7.5.1/RC 01

Department: ELECTRONICS & COMMUNICATION ENGINEERING

Unit: VIII   Topic name: Image Compression

Date:

No. of marks allotted by JNTUK:

Books referred: 01. Digital Image Processing by R C Gonzalez and R E Woods
                02. www.wikipedia.org
                03. www.google.com

Image Compression:
It is the process of reducing the amount of data required to represent a given quantity of
information. The size of the image data file is reduced while the necessary image information
is retained. Various amounts of data can be used to represent the same amount of information.
Representations with irrelevant or repeated information contain redundant data. Compression is
measured by the compression ratio, denoted by

        C_R = n1 / n2

where n1 is the size of the original representation and n2 is the size of the compressed one.
For example, if the original image is 256 × 256 pixels at 8 bits per pixel grayscale, the file
is 65,536 bytes in size. If after compression the image file is 6,554 bytes, the compression
ratio is

        C_R = 65,536 / 6,554 ≈ 10

This can also be written as 10:1.
Relative Data Redundancy:
The relative data redundancy R can be determined as

        R = 1 - 1/C_R

For the 10:1 example above, R = 1 - 1/10 = 0.9, which indicates that 90% of the data is
redundant. The higher the value of R, the more redundant the data and the more it can be
compressed.
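These two formulas can be checked directly; a minimal Python sketch using the example sizes
above:

    def compression_ratio(n1, n2):
        """Compression ratio C_R = n1 / n2 (original size over compressed size)."""
        return n1 / n2

    def relative_redundancy(c_r):
        """Relative data redundancy R = 1 - 1/C_R."""
        return 1.0 - 1.0 / c_r

    c_r = compression_ratio(65536, 6554)       # the 256 x 256, 8-bit example above
    print(round(c_r, 1))                       # ~10.0, i.e. a 10:1 ratio
    print(round(relative_redundancy(c_r), 2))  # ~0.9, i.e. 90% of the data is redundant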
Compression Methods:
There are two types of compression methods: Lossless Compression and Lossy Compression.
The key in image compression algorithm development is to determine the minimal data required
to retain the necessary information.
Types of Redundancy:
Compression algorithms are developed by taking advantage of the redundancy that is inherent
in image data. There are three primary types of redundancy: Coding Redundancy, Inter-pixel
Redundancy and Psycho-visual Redundancy.
Coding Redundancy: A code is a system of symbols (letters, numbers and bits) used to represent
a body of information. Each piece of information or event is assigned a sequence of code
symbols, called a code word. The number of symbols in each code word is its length. Coding
redundancy arises when the code words are longer than they need to be, for example when every
symbol is given a fixed-length code regardless of how often it occurs, as shown in the sketch
below.
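To illustrate (the symbol probabilities below are made-up values, not from the notes), a
fixed-length code spends the same number of bits on rare and frequent symbols, while a
variable-length code gives frequent symbols shorter code words:

    # Toy source: four symbols with unequal (assumed) probabilities.
    probs  = {"a1": 0.6, "a2": 0.2, "a3": 0.1, "a4": 0.1}
    fixed  = {s: 2 for s in probs}                  # 2-bit fixed-length code
    varlen = {"a1": 1, "a2": 2, "a3": 3, "a4": 3}   # e.g. 0, 10, 110, 111

    def avg_length(lengths, probs):
        """Average code-word length L = sum of P(s) * l(s) over all symbols."""
        return sum(probs[s] * lengths[s] for s in probs)

    print(avg_length(fixed, probs))    # 2.0 bits/symbol
    print(avg_length(varlen, probs))   # 1.6 bits/symbol: coding redundancy removed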
Inter-pixel Redundancy: This is also called spatial, geometric or inter-frame redundancy. It
results from structural or geometric relationships between the objects in the image. Adjacent
pixels are usually highly correlated (a pixel is similar or very close to its neighboring
pixels), thus information is unnecessarily replicated in the representation.
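A minimal sketch of exploiting inter-pixel redundancy with difference (predictive) mapping,
using made-up pixel values:

    import numpy as np

    # Made-up scanline whose neighboring pixels are highly correlated.
    row = np.array([100, 101, 101, 102, 104, 104, 105, 107], dtype=np.int16)

    # Store the first pixel plus pixel-to-pixel differences: the differences
    # are small, low-entropy values, much cheaper to encode than raw pixels.
    diff = np.diff(row)
    print(diff)                                   # [1 0 1 2 0 1 2]

    # The mapping is fully reversible, so no information is lost.
    recon = np.concatenate(([row[0]], row[0] + np.cumsum(diff)))
    assert (recon == row).all()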


Psycho-visual Redundancy: Most intensity arrays contain information that is ignored by the
human visual system (which has its limitations) and/or extraneous to the intended use of the
image. Such information is therefore redundant, and it can be eliminated without significant
quality loss; this is the basis of lossy (irreversible) compression and of quantization.
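A minimal sketch of the quantization step just mentioned, using a random array as a stand-in
for a real image:

    import numpy as np

    # Stand-in for a real image: random 8-bit gray levels.
    img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)

    # Requantize to 16 gray levels by keeping only the 4 most significant bits.
    # The discarded detail cannot be recovered, so the operation is irreversible.
    quantized = (img >> 4) << 4
    print(img)
    print(quantized)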
General Image Compression System:
This consists of two parts: the Compressor and the Decompressor.
Fidelity Criteria:
To determine exactly what information is important, and to be able to measure image
quality, we need to define image fidelity criteria. The information required is application
specific, so the imaging specialist needs to be knowledgeable of the various types of, and
approaches to, measuring image quality.
Fidelity criteria can be divided into two classes: objective fidelity criteria and
subjective fidelity criteria.
The objective fidelity criteria are borrowed from digital signal processing and information
theory, and provide us with equations that can be used to measure the amount of error in a
processed image by comparison to a known image. We will refer to the processed image as a
reconstructed image – typically, one that can be created from a compressed data file or by
using a restoration method. Thus, these measures are only applicable if an original or
standard image is available for comparison.
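One such objective criterion is the root-mean-square error between the original and the
reconstructed image; a minimal sketch with made-up pixel values:

    import numpy as np

    def rms_error(original, reconstructed):
        """Root-mean-square error between an original and a reconstructed image."""
        diff = original.astype(np.float64) - reconstructed.astype(np.float64)
        return np.sqrt(np.mean(diff ** 2))

    orig  = np.array([[100, 102], [104, 106]], dtype=np.uint8)
    recon = np.array([[101, 102], [103, 107]], dtype=np.uint8)
    print(rms_error(orig, recon))   # ~0.87: small error, high fidelity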
Subjective fidelity criteria require the definition of a qualitative scale to assess image
quality. This scale can then be used by human test subjects to determine image fidelity. In
order to provide unbiased results, evaluation with subjective measures requires careful
selection of the test subjects and carefully designed evaluation experiments.


Image Compression Models:
A general image compression model consists of a source encoder, a channel encoder, the
storage or transmission media (also referred to as channel), a channel decoder, and a source
decoder. The source encoder reduces or eliminates any redundancies in the input image, which
usually leads to bit savings. Source encoding techniques are the primary focus of this discussion. The
channel encoder increases the noise immunity of the source encoder’s output, usually adding extra bits to
achieve its goals. If the channel is noise-free, the channel encoder and decoder may be omitted. At
the receiver’s side, the channel and source decoders perform the opposite functions and ultimately
recover (an approximation of) the original image.
The main components are:
Mapper:
It transforms the input data into a (usually non-visual) format designed to reduce inter-pixel
redundancies in the input image. This operation is generally reversible and may or may not directly
reduce the amount of data required to represent the image.
Quantizer:
It reduces the accuracy of the mapper’s output in accordance with some pre-established
fidelity criterion, and in doing so reduces the psycho-visual redundancies of the input image. This operation is not
reversible and must be omitted if lossless compression is desired.
Symbol (entropy) encoder:
Creates a fixed or variable length code to represent the quantizer’s output and maps the
output in accordance with the code. In most cases, a variable-length code is used. This operation is
reversible.
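A toy end-to-end sketch of the three source-encoder stages (the difference mapping, the
quantizer step size and the code assignment below are all illustrative choices, not any
particular standard):

    import numpy as np

    def mapper(row):
        """Reversible mapping: first pixel followed by pixel-to-pixel differences."""
        return np.concatenate(([row[0]], np.diff(row)))

    def quantizer(mapped, step=4):
        """Irreversible: round values down to multiples of `step` (omit for lossless)."""
        return (mapped // step) * step

    def symbol_encoder(values):
        """Toy variable-length code: 1 bit for the frequent value 0, 8 bits otherwise."""
        return ["0" if v == 0 else format(int(v) % 256, "08b") for v in values]

    row = np.array([100, 101, 101, 103, 107], dtype=np.int16)
    print(symbol_encoder(quantizer(mapper(row))))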
Error Free Compression:
Error-free compression techniques usually rely on entropy-based encoding algorithms. The
concept of entropy is mathematically described in the following equation,
        H(z) = - Σ_{j=1}^{J} P(a_j) log_2 P(a_j)

where a_j is a symbol produced by the information source,
P(a_j) is the probability of that symbol,
J is the total number of different symbols, and
H(z) is the entropy of the source (in bits per symbol when the base-2 logarithm is used).
The concept of entropy provides an upper bound on how much compression can be
achieved, given the probability distribution of the source. In other words, it establishes a theoretical
limit on the amount of lossless compression that can be achieved using entropy encoding techniques
alone.
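A minimal sketch of estimating this entropy from an image’s gray-level histogram (random data
stands in for a real image):

    import numpy as np

    def entropy(image):
        """First-order estimate of H(z) = -sum P(a_j) log2 P(a_j), in bits/pixel."""
        counts = np.bincount(image.ravel(), minlength=256)
        p = counts[counts > 0] / image.size
        return -np.sum(p * np.log2(p))

    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    print(entropy(img))   # near 8 bits/pixel: uniform noise leaves little to compress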


Lossy Compression:

A lossy compression method is one where compressing data and then decompressing it retrieves
data that is different from the original, but close enough to the original to be useful in some
way. Lossy compression is most commonly used to compress multimedia data (audio, video, still
images), especially in applications such as streaming media and internet telephony. By
contrast, lossless compression is required for text and data files, such as bank records and
text articles. In many cases it is advantageous to make a master lossless file which can then
be used to produce compressed files for different purposes; for example, a multi-megabyte file
can be used at full size to produce a full-page advertisement in a glossy magazine, while a
10-kilobyte lossy copy can be made for a small image on a web page.
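As a sketch of that master-file workflow (assuming the Pillow library is available;
master.png is a placeholder filename):

    from PIL import Image

    master = Image.open("master.png")   # lossless master copy (placeholder path)

    master.save("print_copy.png")       # full-quality lossless copy for print use
    master.convert("RGB").save("web_copy.jpg", quality=30)  # small lossy copy for the web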
