Quiz I
Instructor: B. Sainath, 2210−A, Electrical and Electronics Eng. Dept., BITS Pilani.
Course: Coding theory and practice (EEE G612)
DATE: Aug. 27th , 2019 TIME: 40 Mins. Max. points: 10

Note: There is no negative marking. There is no partial marking for merely writing down a correct formula.
Answer all the subparts of a question in one place. Provide the critical steps in each solution. Overwritten answers
will not be rechecked. Highlight your final answer in a rectangular box. Use the same notation as given in the data.

Q. 1. Determine the probability density function pY(y) that maximizes the differential entropy for a given mean
µ. Hint: Use the method of Lagrange multipliers. Note that there are two constraints. [2.5 points]
Determine the differential entropy of a random variable with the probability density derived above. [1 point]

Q. 2. Let the joint probability distribution p(x, y) be given by

              Y = 0    Y = 1
    X = 0      1/3      1/3
    X = 1       0       1/3

Fig. 1. Pertaining to Q. 2.

Compute the following: i). H(Y). ii). H(Y|X). iii). I(X;Y). [2.5 points]

Q. 3. Consider the code {0, 01}.

Answer the following:
i). Is it instantaneous? Justify your answer.
ii). Is it uniquely decodable? Justify your answer. [1 + 1 points]

Q. 4. True/False. Justify. [2 points]


i). In an ergodic Markov information source, the states in any long output sequence will occur with a unique
probability distribution.
ii). The joint entropy H(Y, X) of a pair of discrete random variables (Y, X) with a joint probability distribution
p(y, x) can be expressed as
H(Y, X) = E[log p(Y, X)],
where E[·] denotes expectation.
ALL THE BEST!

I. SOLUTIONS
Q. 1. After solving the constrained optimization problem using the method of Lagrange multipliers, we get the
required probability density function (pdf). The pdf is given by
pY(y) = (1/µ) exp(−y/µ), y ≥ 0.
Note that Y is an exponential random variable. [2.5 points]
The differential entropy of the exponential random variable Y is equal to log (µe). [1 point]
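
For reference, a minimal sketch of the Lagrangian steps the solution omits (assuming support y ≥ 0 and the two constraints of normalization and mean, as in the statement):

```latex
% Maximize h(p) = -\int_0^{\infty} p(y)\ln p(y)\,dy subject to
% \int_0^{\infty} p(y)\,dy = 1 and \int_0^{\infty} y\,p(y)\,dy = \mu.
\begin{align*}
J[p] &= -\int_0^{\infty}\! p(y)\ln p(y)\,dy
        + \lambda_0\Big(\int_0^{\infty}\! p(y)\,dy - 1\Big)
        + \lambda_1\Big(\int_0^{\infty}\! y\,p(y)\,dy - \mu\Big),\\
\frac{\partial J}{\partial p(y)} &= -\ln p(y) - 1 + \lambda_0 + \lambda_1 y = 0
        \;\Rightarrow\; p_Y(y) = e^{\lambda_0 - 1}\,e^{\lambda_1 y},\\
\text{constraints} &\Rightarrow \lambda_1 = -\tfrac{1}{\mu},\quad
        e^{\lambda_0 - 1} = \tfrac{1}{\mu}
        \;\Rightarrow\; p_Y(y) = \tfrac{1}{\mu}\,e^{-y/\mu},\quad y \ge 0,\\
h(Y) &= -\mathbb{E}\left[\ln p_Y(Y)\right]
        = \ln\mu + \tfrac{\mathbb{E}[Y]}{\mu} = \ln(\mu e).
\end{align*}
```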

Q. 2. i). H(Y) = 0.92 bits. ii). H(Y|X) = 0.67 bits. iii). I(X;Y) = H(Y) − H(Y|X) = 0.25 bits. [2.5 points]
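
A quick numerical check of these values, as a sketch (NumPy-based; not part of the original solution):

```python
import numpy as np

# Joint distribution p(x, y) from Fig. 1 (rows: X = 0, 1; columns: Y = 0, 1).
p = np.array([[1/3, 1/3],
              [0.0, 1/3]])

def H(dist):
    """Entropy in bits, ignoring zero-probability entries."""
    d = dist[dist > 0]
    return -np.sum(d * np.log2(d))

p_x = p.sum(axis=1)                    # marginal of X
p_y = p.sum(axis=0)                    # marginal of Y

H_Y = H(p_y)                           # ~0.918 bits
H_Y_given_X = H(p.ravel()) - H(p_x)    # chain rule: H(Y|X) = H(X,Y) - H(X), ~0.667 bits
I_XY = H_Y - H_Y_given_X               # ~0.251 bits

print(round(H_Y, 2), round(H_Y_given_X, 2), round(I_XY, 2))   # 0.92 0.67 0.25
```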

Q. 3. i). No, the code is not instantaneous, because the first code word 0 is a prefix of the second code word 01.
ii). Yes, the code is uniquely decodable. Given a sequence of code words, first isolate the occurrences of 01 and
then parse the remaining bits as 0's. [1 + 1 points]
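
A minimal decoder sketch illustrating this argument (the function name and test string are illustrative only): a 1 can only be the second symbol of 01, so a one-symbol lookahead resolves every code word, which is also why the code is uniquely decodable but not instantaneous.

```python
def decode(bits: str) -> list:
    """Decode a concatenation of code words from {0, 01} left to right:
    a 0 followed by a 1 must be the code word 01; otherwise it is 0."""
    out, i = [], 0
    while i < len(bits):
        if bits[i] != '0':
            raise ValueError("not a valid code sequence")  # every code word starts with 0
        if i + 1 < len(bits) and bits[i + 1] == '1':
            out.append('01')
            i += 2
        else:
            out.append('0')
            i += 1
    return out

print(decode("0010100"))   # ['0', '01', '01', '0', '0']
```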

Q. 4. i). True. This unique probability distribution is called the stationary distribution of the ergodic Markov
information source/process. It does not depend on the initial distribution with which the states are chosen and
can be determined from the conditional symbol probabilities alone.
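
An illustrative sketch of this point (the 2-state transition matrix below is a made-up example, not taken from the quiz): the stationary distribution is the left eigenvector of the transition matrix for eigenvalue 1, obtained without reference to any initial distribution.

```python
import numpy as np

# Hypothetical 2-state ergodic Markov source; P[i, j] = Pr(next state j | current state i).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# The stationary distribution pi satisfies pi = pi P, i.e. it is the left
# eigenvector of P for eigenvalue 1; no initial distribution is involved.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# State frequencies in any long output sequence converge to pi.
print(pi)   # -> [0.8 0.2]
```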
ii). False. The joint entropy is defined as
H(Y, X) = − Σ_{y ∈ Y} Σ_{x ∈ X} p(y, x) log p(y, x) = −E[log p(Y, X)],
so the stated expression is missing the minus sign.
[1 + 1 points]
