
MAEER’s Maharashtra Institute of Technology, Pune

Department of E&TC
_______________________________________________________________
Class: TE (E&TC) Assignment No. 2 Subject: SCET

AIM: Implementation of algorithms for


a) Generation and evaluation of variable-length source codes using
(i) the Shannon-Fano algorithm,
(ii) the Huffman algorithm.
b) Decoding of the source code generated above.

Develop a MATLAB script file for the same.


Hint: Read the MATLAB help for the functions huffmanenco, huffmandict, and huffmandeco.

THEORY:
The Huffman algorithm is more efficient than the Shannon-Fano algorithm: it generates an optimum prefix code, i.e. one whose average codeword length (and hence efficiency) cannot be improved by any other symbol-by-symbol code.
Both algorithms are described below:

• Procedure for implementation of the Shannon-Fano algorithm:


1. List the source symbols in order of decreasing probability.
2. Partition the set into two sets that are as close to equiprobable as possible, and
assign 0 to the upper set and 1 to the lower set.
3. Continue this process, each time partitioning the sets into subsets with probabilities as
nearly equal as possible, until further partitioning is not possible; a minimal MATLAB sketch of this recursive partitioning is given below.
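
A minimal sketch of the above procedure, assuming the probabilities are already sorted in decreasing order (step 1); the function names shannon_fano and sf_split are illustrative, not toolbox functions:

% shannon_fano.m -- minimal sketch, not a toolbox function.
% p must already be sorted in decreasing order (step 1).
function codes = shannon_fano(p)
    codes = sf_split(p, repmat({''}, 1, numel(p)));
end

function codes = sf_split(p, codes)
    if numel(p) <= 1
        return;                              % single symbol: nothing to split
    end
    % Step 2: choose the split point that makes the two groups as close
    % to equiprobable as possible.
    c = cumsum(p);
    [~, k] = min(abs(2*c(1:end-1) - c(end)));
    % Append 0 to the upper group and 1 to the lower group.
    for i = 1:k
        codes{i} = [codes{i} '0'];
    end
    for i = k+1:numel(p)
        codes{i} = [codes{i} '1'];
    end
    % Step 3: recurse on each group until no further partitioning is possible.
    codes(1:k)     = sf_split(p(1:k),     codes(1:k));
    codes(k+1:end) = sf_split(p(k+1:end), codes(k+1:end));
end

For example, shannon_fano([0.4 0.3 0.2 0.1]) returns the codewords {'0','10','110','111'}.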

• Procedure for implementation of Huffman algorithm:


1. List the source probabilities in decreasing order.
2. Combine the probabilities of the two symbols having the lowest probabilities, and
record the resultant probability; this step is called reduction 1.
Repeat this procedure until only two ordered probabilities remain.
3. Start encoding with the last reduction, which consists of exactly two ordered
probabilities. Assign 0 as the first digit in the code words for all the source symbols
associated with the first probability; assign 1 to the second probability.
4. Now go back and assign 0 and 1 to the second digit for the two probabilities that
were combined in the previous reduction step, retaining all assignments made in
step 3.
5. Continue working backward in this way until the first column (the original source symbols) is reached, which yields the complete codeword for each symbol; a sketch using the toolbox functions from the hint follows below.
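
The reduction procedure above can be cross-checked against the Communications Toolbox functions mentioned in the hint; the symbols and probabilities below are illustrative only:

% Sketch using huffmandict from the hint; example probabilities are assumed.
symbols = 1:5;                                  % source symbols s1..s5
p       = [0.4 0.2 0.2 0.1 0.1];                % illustrative probabilities
[dict, avglen] = huffmandict(symbols, p);       % build the Huffman dictionary

% Print the codeword assigned to each symbol and the average length.
for i = 1:numel(symbols)
    fprintf('Symbol %d -> %s\n', dict{i,1}, num2str(dict{i,2}));
end
fprintf('Average codeword length = %.3f bits/symbol\n', avglen);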

User inputs for part a):
1. Probabilities of the discrete memoryless source symbols.
2. Choice of algorithm (Shannon-Fano or Huffman); see the input-reading sketch below.
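
One possible way to read these inputs (a sketch only; the variable names and prompt strings are assumptions):

% Read the source probabilities and the algorithm choice from the user.
p      = input('Enter source symbol probabilities, e.g. [0.4 0.3 0.2 0.1]: ');
choice = input('Choose the algorithm (1 = Shannon-Fano, 2 = Huffman): ');
assert(abs(sum(p) - 1) < 1e-6, 'Probabilities must sum to 1.');
p = sort(p, 'descend');            % both algorithms require decreasing order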

Program outputs for part a):


1. Source code, i.e. the list of codewords corresponding to all source symbols.
2. Average codeword length.
3. Efficiency of the source code.
4. Verification that the generated code is uniquely and instantaneously decodable,
i.e. that it satisfies the Kraft inequality.

User inputs for part b):


Binary sequence to be decoded.

Program outputs for part b):


Decoded symbol sequence (see the decoding sketch below).
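
A minimal sketch of part b) using huffmanenco/huffmandeco; it assumes the dictionary dict built in the sketch above and an example symbol sequence:

% Encode an example symbol sequence, then decode it back with huffmandeco.
sig     = [1 3 2 5 1];                 % example symbol sequence (assumed)
encoded = huffmanenco(sig, dict);      % binary sequence produced by part a)
decoded = huffmandeco(encoded, dict);  % recovered symbol sequence
isequal(sig(:), decoded(:))            % should display logical 1 (true)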

Use the following equations:

Efficiency η = H(X) / N̄, where H(X) is the source entropy and N̄ is the average codeword length.

Average codeword length N̄ = ∑ (Pi * Ni), where Ni is the length of the ith codeword.

Also calculate the Kraft inequality parameter K = ∑ 2^(−Ni) and verify that K ≤ 1, and the

Code variance σ² = ∑ Pi * (N̄ − Ni)².
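
A short sketch of these computations, assuming the probability vector p and the vector of codeword lengths Ni are available from part a):

% Evaluate the generated source code.
H      = -sum(p .* log2(p));            % source entropy H(X) in bits/symbol
Nbar   = sum(p .* Ni);                  % average codeword length
eta    = H / Nbar;                      % coding efficiency
K      = sum(2 .^ (-Ni));               % Kraft inequality parameter
sigma2 = sum(p .* (Nbar - Ni).^2);      % code variance
fprintf('H = %.3f, Nbar = %.3f, efficiency = %.3f\n', H, Nbar, eta);
fprintf('Kraft K = %.3f (satisfies K <= 1: %d), variance = %.3f\n', K, K <= 1, sigma2);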

Study questions:
1. What is the disadvantage of the Huffman algorithm?
2. What is an extended Huffman code?
3. Compare the LZW algorithm with the Huffman algorithm.
4. What is an optimum source code?
5. What is the prefix-free condition?
