
LOSSLESS IMAGE COMPRESSION

A Lab Project Report

Submitted in Partial Fulfillment of the Requirements


For the award of the Degree of
Bachelor of Technology in
Electronics & Computer Engineering (ECM)

By

G.MEGHANA REDDY
15311A1960

Department of Electronics & Computer Engineering


Sreenidhi Institute of Science & Technology (Autonomous)

OCTOBER 2017

CERTIFICATE

This is to certify that the Lab Project work entitled LOSSLESS IMAGE
COMPRESSION, submitted by G. Meghana Reddy, bearing Roll No. 15311A1960,
towards partial fulfillment of the requirements for the award of the Bachelor's Degree in
Electronics & Computer Engineering from Sreenidhi Institute of Science & Technology,
Ghatkesar, Hyderabad, is a record of bonafide work done by her. The results embodied in
this work have not been submitted to any other University or Institute for the award of any
degree or diploma.

Mr. Kasi Bandla / M. Shailaja Dr. K. Sashidar

Associate Professor HOD, ECM

ACKNOWLEDGMENT

I convey my sincere thanks to Dr. P. Narsimha Reddy, Director, and Dr. K. Sumanth,
Principal, Sree Nidhi Institute of Science and Technology, Ghatkesar, for providing the
resources to complete this project.

I am very thankful to Prof. K. Sashidar, Head of the ECM Department, Sree Nidhi Institute
of Science and Technology, Ghatkesar, for initiating this project, for his valuable and timely
suggestions, and for his kind co-operation in its completion.

I convey my sincere thanks to M. Shailaja, Associate Professor, and Kasi Bandla,
Associate Professor of the ECM Department, Sree Nidhi Institute of Science and
Technology, for their continuous help, co-operation, and support in completing this project.

Finally, I extend my gratitude to the almighty, my parents, all my friends, and the teaching
and non-teaching staff, who directly or indirectly helped me in this endeavor.

G.MEGHANA REDDY (15311A1960)

ABSTRACT

In this project we implemented the Baseline JPEG standard using MATLAB, covering both
the encoding and decoding of grayscale images. The project also compares the compression
ratios and encoding times of two approaches, the classic DCT and the fast DCT, and shows
the effect of the number of retained coefficients on the restored image. Encoding starts by
dividing the original image into 8×8 blocks of sub-images. The DCT is then performed on
each sub-image separately, and the resulting matrices are divided element-wise by a
quantization matrix. The last step of the algorithm makes the data one-dimensional by
zigzag scanning, after which it is compressed by Huffman coding, run-length coding, or
arithmetic coding. Decoding reverses this process: the received bit-stream is converted back
into two-dimensional matrices and multiplied by the quantization matrix; then the inverse
DCT is performed and the sub-images are joined together to restore the image.
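The zigzag scan mentioned above can be sketched in a few lines. The following is an illustrative Python sketch, not part of the submitted code (the report's own implementation is in MATLAB):

```python
# Traverse an N x N block in JPEG zigzag order, turning the 2-D
# coefficient matrix into a 1-D sequence with the low-frequency
# coefficients first.

def zigzag(block):
    """Return the elements of a square matrix in zigzag scan order."""
    n = len(block)
    order = []
    # Group coordinates by anti-diagonal (i + j constant) and alternate
    # the traversal direction on every diagonal, as the JPEG scan does.
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()
        order.extend(diag)
    return [block[i][j] for i, j in order]

# 4 x 4 example: each entry equals its zigzag rank, so the scan
# returns 0, 1, 2, ..., 15 in order.
demo = [[0, 1,  5,  6],
        [2, 4,  7, 12],
        [3, 8, 11, 13],
        [9, 10, 14, 15]]
print(zigzag(demo))
```

The scan visits anti-diagonals of the block, which is why, after quantization, the many high-frequency zeros end up grouped at the tail of the sequence where run-length coding is most effective.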

INDEX

1. INTRODUCTION
2. LITERATURE SURVEY
3. REQUIREMENTS
   3.1 MATLAB R2016B
   3.2 WINDOWS 7 OS
4. SOURCE CODE/PROGRAM
5. TESTING AND RESULT
6. CONCLUSION
7. REFERENCES
I. INTRODUCTION

A common characteristic of most images is that the neighboring pixels are correlated
and therefore contain redundant information. The foremost task then is to find less
correlated representation of the image.

Image compression addresses the problem of reducing the amount of data required to
represent a digital image. The underlying basis of the reduction process is the removal
of redundant data. From a mathematical viewpoint, this amounts to transforming a 2-D
pixel array into a statistically uncorrelated data set. The transformation is applied
prior to storage and transmission of the image. The compressed image is
decompressed at some later time, to reconstruct the original image or an
approximation to it.

Two fundamental components of compression are redundancy and irrelevancy


reduction. Redundancy reduction aims at removing duplication from the signal
source (image/video). Irrelevancy reduction omits parts of the signal that will not be
noticed by the signal receiver, namely the Human Visual System (HVS). In general,
three types of redundancy can be identified:
Spatial Redundancy or correlation between neighbouring pixel values.
Spectral Redundancy or correlation between different color planes or spectral
bands.
Temporal Redundancy or correlation between adjacent frames in a sequence of
images (in video applications).

Image compression research aims at reducing the number of bits needed to represent
an image by removing the spatial and spectral redundancies as much as possible.

In lossless compression schemes, the reconstructed image, after compression, is
numerically identical to the original image. However, lossless compression can only
achieve a modest amount of compression. An image reconstructed following lossy
compression contains degradation relative to the original, because the compression
scheme discards information that is not strictly redundant. However, lossy
schemes are capable of achieving much higher compression, and under normal viewing
conditions no visible loss is perceived (visually lossless).

The information loss in lossy coding comes from quantization of the data.
Quantization can be described as the process of sorting the data into different bins and
representing each bin with a single value. The value selected to represent a bin is called
the reconstruction value. Every item in a bin has the same reconstruction value, which
leads to information loss (unless the quantization is so fine that every item gets its
own bin).
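As a sketch of this idea, here is a uniform scalar quantizer in Python (illustrative only; the step size q is an arbitrary choice, not a value used by the project):

```python
# Uniform scalar quantization: each value is sorted into a bin of
# width q, and every value in a bin is replaced by that bin's
# reconstruction value -- this substitution is the information loss.

def quantize(x, q):
    """Map x to the index of its bin (this index is what gets coded)."""
    return round(x / q)

def dequantize(k, q):
    """Map a bin index back to the bin's reconstruction value."""
    return k * q

q = 10  # illustrative step size
for x in [3.2, 7.9, 12.4]:
    k = quantize(x, q)
    print(x, '->', dequantize(k, q))
# 7.9 and 12.4 fall into the same bin, so both come back as 10:
# the original distinction between them is irreversibly lost.
```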

A typical lossy image compression system is shown in Fig. 1. It consists of three
closely connected components, namely (a) the Source Encoder, (b) the Quantizer, and
(c) the Entropy Encoder. Compression is accomplished by applying a linear transform to
decorrelate the image data, quantizing the resulting transform coefficients, and
entropy coding the quantized values.

Fig. 1: Block diagram of a typical lossy image compression system
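The transform-and-quantize stages of Fig. 1 can be sketched for a single 8×8 block as follows. This Python sketch mirrors the dctmtx-based approach used in the MATLAB code later in this report; the step size of 16 is an illustrative choice, not a real JPEG quantization table:

```python
import math

# First two stages of Fig. 1 for one 8x8 block: a 2-D DCT computed as
# T * X * T' (the same T that MATLAB's dctmtx(8) returns), followed by
# coarse uniform quantization of the transform coefficients.

N = 8

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k, column j = c_k*cos((2j+1)k*pi/2n)."""
    t = [[0.0] * n for _ in range(n)]
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        for j in range(n):
            t[k][j] = scale * math.cos((2 * j + 1) * k * math.pi / (2 * n))
    return t

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

T = dct_matrix(N)
block = [[(i + j) % 8 for j in range(N)] for i in range(N)]  # toy pixel block

coeffs = matmul(matmul(T, block), transpose(T))               # source encoder
quantized = [[round(c / 16) for c in row] for row in coeffs]  # quantizer
print(quantized[0][0])  # the DC coefficient dominates for smooth blocks
```

The decorrelation step concentrates the block's energy into a few low-frequency coefficients, so most quantized entries are zero and the entropy encoder has very little left to code.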

II. LITERATURE SURVEY

Face recognition has been considered an important subject of research over the last
fifteen years. It has gained as much importance as image analysis, pattern recognition
and, more precisely, biometrics, because it has become one of the identification methods
used in e-passports and in identifying candidates appearing in various national and
international academic examinations. The resolution, or size, of the image plays an
important role in face recognition: the higher the resolution, the better. However, the
effects of image compression on face recognition systems have not been given the
importance they deserve in recent years.

Images are compressed for different reasons: to store them on low-capacity devices such
as mobile phones, to transmit large amounts of data over a network, or to store large
numbers of images in databases for experimentation or research. This is essential because
compressed images occupy less memory and can be transmitted faster due to their smaller
size. For these reasons, the effects of image compression on recognition have started
gaining importance and have become an important area of research in other biometric
approaches as well, such as iris recognition and fingerprint recognition, where the most
recent contributions have been made. In addition to studying standard compression
methods in recognition, researchers have focused on developing special-purpose
compression algorithms, e.g. a recent low bit-rate compression of face images [3].

One of the major drawbacks of face recognition using compressed images is that the
image has to be decompressed first. The task of decompressing a compressed image for
the purpose of face recognition is computationally expensive, and face recognition
systems would benefit if full decompression could somehow be eliminated: if recognition
could be carried out while the images are still in compressed form, it would increase the
computation speed and overall performance of a face recognition system. The most
popular compression technique is JPEG [1,2], and the related transforms are the Discrete
Cosine Transform and the Discrete Wavelet Transform. It is generally accepted that
common image compression standards such as JPEG and JPEG2000 have the highest
number of real-life applications, since the image will always have to be decompressed and
presented to a human at some point. In this review, progress made with the DCT and
related algorithms on a single image and on a series of images from a video, namely the
2D DCT and 3D DCT respectively, as applied to face recognition, is discussed in detail.

One of the important aspects of image storage is its efficient compression. To make
this clear, consider an example. An image of 1024 pixels × 1024 pixels × 24 bits,
without compression, would require 3 MB of storage and about 7 minutes for
transmission over a high-speed 64 Kbit/s ISDN line. If the image is compressed at a 10:1
compression ratio, the storage requirement is reduced to about 300 KB and the
transmission time drops to about 40 seconds. Seven 1 MB images can be compressed and
transferred to a floppy disk in less time than it takes to send one of the original files,
uncompressed, over an AppleTalk network.
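These figures follow directly from the raw numbers, as this small Python check shows:

```python
# Back-of-the-envelope check of the storage and transmission figures
# quoted above for a 1024 x 1024 x 24-bit image on a 64 kbit/s line.
bits = 1024 * 1024 * 24           # uncompressed image size in bits
mb = bits / 8 / (1024 * 1024)     # storage in MB -> 3.0
seconds = bits / 64_000           # transmission time at 64 kbit/s
print(mb, 'MB,', round(seconds / 60, 1), 'minutes')   # ~6.6 minutes

ratio = 10                        # 10:1 compression
print(bits / ratio / 8 / 1024, 'KB,',
      round(seconds / ratio), 'seconds')              # ~307 KB, ~39 s
```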

III. REQUIREMENTS

3.1 MATLAB R2016B

MATLAB is widely used in all areas of applied mathematics, in education and research at
universities, and in industry. MATLAB stands for MATrix LABoratory, and the software
is built around vectors and matrices. This makes it particularly useful for linear algebra,
but MATLAB is also a great tool for solving algebraic and differential equations and for
numerical integration. MATLAB has powerful graphics tools and can produce nice plots
in both 2D and 3D. It is also a programming language, and one of the easiest
programming languages for writing mathematical programs. MATLAB also has
toolboxes useful for signal processing, image processing, optimization, etc.

3.2 WINDOWS 7 OS

Windows is the most widely used operating system for desktop and laptop computers.
Developed by Microsoft, Windows primarily runs on x86-based computers (the ubiquitous
PC), although versions have run on Intel's Itanium CPUs. Windows provides a graphical
user interface and a desktop environment in which applications are displayed in resizable,
movable windows on screen.

Windows comes in both client and server versions, all of which support networking; the
difference is that the server architecture is designed for dedicated server hardware.
Although they can easily share their data with other users on the network, the client
versions of Windows are geared toward running user applications.

Windows 7 is the version of Microsoft Windows used for this project.

IV. SOURCE CODE

function varargout = ImageCompression1(varargin)

gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @ImageCompression1_OpeningFcn, ...
'gui_OutputFcn', @ImageCompression1_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end

function ImageCompression1_OpeningFcn(hObject, eventdata, handles, varargin)

% Store the default command-line output and hide both axes until an
% image has been loaded.
handles.output = hObject;
guidata(hObject, handles);
set(handles.axes1, 'visible', 'off')
set(handles.axes2, 'visible', 'off')
axis off

function varargout = ImageCompression1_OutputFcn(hObject, eventdata, handles)

varargout{1} = handles.output;

function pushbutton1_Callback(hObject, eventdata, handles)

% Let the user pick an image, then display its size (in KB) and the
% image itself on the first axes.
global file_name;

file_name = uigetfile({'*.bmp;*.jpg;*.png;*.tiff;';'*.*'},'Select an Image File');
if ~ischar(file_name)          % user pressed Cancel
    return
end

fileinfo = dir(file_name);
SIZE = fileinfo.bytes;
Size = SIZE/1024;              % original file size in KB
set(handles.text7,'string',Size);
imshow(file_name,'Parent', handles.axes1)

function pushbutton2_Callback(hObject, eventdata, handles)

global file_name;
if(~ischar(file_name))
    errordlg('Please select Images first');
else
    I1 = imread(file_name);

    % Keep only the 10 lowest-frequency DCT coefficients of each 8x8 block.
    mask = [1 1 1 1 0 0 0 0
            1 1 1 0 0 0 0 0
            1 1 0 0 0 0 0 0
            1 0 0 0 0 0 0 0
            0 0 0 0 0 0 0 0
            0 0 0 0 0 0 0 0
            0 0 0 0 0 0 0 0
            0 0 0 0 0 0 0 0];
    T = dctmtx(8);             % 8x8 DCT transform matrix

    % Process the R, G and B planes identically.
    % (blkproc is deprecated in newer MATLAB releases; blockproc is the
    % modern replacement.)
    L = zeros(size(I1));
    for c = 1:3
        I = im2double(I1(:,:,c));
        B  = blkproc(I,  [8 8], 'P1*x*P2', T, T');   % blockwise 2-D DCT
        B2 = blkproc(B,  [8 8], 'P1.*x',   mask);    % discard coefficients
        L(:,:,c) = blkproc(B2, [8 8], 'P1*x*P2', T', T);  % inverse DCT
    end

    imwrite(L,'CompressedColourImage.jpg');

    fileinfo = dir('CompressedColourImage.jpg');
    SIZE = fileinfo.bytes;
    Size = SIZE/1024;          % compressed file size in KB
    set(handles.text8,'string',Size);
    imshow(L,'Parent', handles.axes2)
end

V. TESTING AND RESULT

VI. CONCLUSION

We pointed out that this transform can be assigned to the encoder or the decoder and
that it can hold compressed data. We provided an analysis for the case where both
encoder and decoder are symmetric in terms of memory needs and complexity. We
described a highly scalable SPIHT coding algorithm that can work with very low memory
in combination with the wavelet transform, and showed that its performance can be
competitive with state-of-the-art image coders at a fraction of their memory
utilization. To the best of our knowledge, our work is the first to propose a detailed
implementation of a low-memory wavelet image coder. It offers a significant
advantage by making a wavelet coder attractive both in terms of speed and memory
needs. Further improvements of our system, especially in terms of speed, can be
achieved by introducing a lattice factorization of the wavelet kernel or by using
lifting steps. This will reduce the computational complexity and complement the
memory reductions mentioned in this work.

VII. REFERENCES

1) Zhao W., Chellappa R., Rosenfeld A., Phillips P.J., Face Recognition: A Literature
Survey, ACM Computing Surveys, Vol. 35, Issue 4, December 2003, pp. 399-458.

2) Delac K., Grgic M., A Survey of Biometric Recognition Methods, Proc. of the 46th
International Symposium Electronics in Marine, ELMAR-2004, Zadar, Croatia, 16-18
June 2004, pp. 184-193.

3) Li S.Z., Jain A.K., ed., Handbook of Face Recognition, Springer, New York, USA,
2005.

4) Delac, K., Grgic, M. (eds.), Face Recognition, I-Tech Education and Publishing, ISBN
978-3-902613-03-5, Vienna, July 2007, 558 pages.

5) Rakshit, S., Monro, D.M., An Evaluation of Image Sampling and Compression for
Human Iris Recognition, IEEE Trans. on Information Forensics and Security, Vol. 2,
No. 3, 2007, pp. 605-612.

6) Matschitsch, S., Tschinder, M., Uhl, A., Comparison of Compression Algorithms'
Impact on Iris Recognition Accuracy, Lecture Notes in Computer Science - Advances
in Biometrics, Vol. 4642, 2007, pp. 232-241.
