
Multirate Systems, Filter Banks

B. Sainath
sainath.bitragunta@pilani.bits-pilani.ac.in

Department of Electrical and Electronics Engineering


Birla Institute of Technology and Science, Pilani

October 1, 2018

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 1 / 29


Outline

1 Introduction & Motivation

2 Fundamentals of Multirate Systems

3 Applications

4 Filter Banks

5 Conclusions

6 Textbooks & References

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 2 / 29


Introduction & Motivation

Figure: Source: PPV’s textbook.

Single rate DSP system


multipliers, adders, delay elements
e.g., digital filters

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 3 / 29


Multirate DSP System

Figure: Source: Mathworks-MATLAB.

Multirate DSP system


multipliers, adders, delay elements plus downsampler, upsampler

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 4 / 29


Multirate DSP System: Building Blocks

Figure: Basic building blocks of multirate system

M−fold decimator or downsampler

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 5 / 29


Multirate DSP System: Building Blocks

Figure: Basic building blocks of multirate system

M−fold decimator or downsampler


reduce sampling rate by M
before downsampler, use anti-aliasing filter
L−fold expander or upsampler

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 5 / 29


Multirate DSP System: Building Blocks

Figure: Basic building blocks of multirate system

M−fold decimator or downsampler


reduce sampling rate by M
before downsampler, use anti-aliasing filter
L−fold expander or upsampler
increase sampling rate by L
after upsampler, use anti-imaging filter
Depending on application
perform sampling rate alteration at i/p or at o/p or internally
Advantages (depends on application)
lower computational complexity for a given task
reduced rate of transmission (and/or)
reduced storage requirement, power consumption
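
A minimal MATLAB sketch of the two blocks above (filter lengths, cutoffs and rates are illustrative; fir1 from the Signal Processing Toolbox is assumed for the lowpass designs):

    % M-fold decimator: anti-aliasing lowpass (cutoff pi/M), then keep every M-th sample
    M  = 4;
    x  = randn(1, 4096);              % arbitrary full-rate input
    hd = fir1(63, 1/M);               % anti-aliasing (decimation) filter
    v  = filter(hd, 1, x);
    y  = v(1:M:end);                  % y[n] = v[Mn]

    % L-fold expander: insert L-1 zeros, then anti-imaging lowpass (cutoff pi/L, gain L)
    L  = 3;
    u  = zeros(1, L*length(y));
    u(1:L:end) = y;                   % zero insertion
    hi = L * fir1(63, 1/L);           % anti-imaging (interpolation) filter
    z  = filter(hi, 1, u);            % interpolated output at L/M times the input rate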

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 5 / 29


Applications

Figure: Source: Mathworks-MATLAB.

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 6 / 29


Need for Sampling Rate Alteration

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 7 / 29


Need for Sampling Rate Alteration

Clock rates are different for various subsystems


music players
audio broadcasting
cellular communication
Enhanced flexibility & reduced computational complexity ⇒ efficient &
robust DSPs

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 7 / 29


Downsampling Illustration (M = 2)

Figure: Decimation for M = 2

Anti-aliasing requirement: x[n] band-limited


Use antialiasing filter (Decimation filter) before downsampler block

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 8 / 29


Transform Domain Analysis

O/p of downsampler in time-domain

yD [n] = x[Mn]

Exercise: Prove that


    Y_D(e^{jω}) = (1/M) Σ_{k=0}^{M−1} X(e^{j(ω−2πk)/M})

Y_D(e^{jω}): sum of M-fold stretched & shifted copies of X(e^{jω}), scaled by 1/M
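
A quick numerical sanity check of this relation for M = 2 (a sketch; freqz evaluates the DTFT of a finite-length sequence on a frequency grid):

    M  = 2;
    x  = randn(1, 64);                       % finite-length test signal, x(1) = x[0]
    y  = x(1:M:end);                         % y[n] = x[Mn]
    w  = linspace(-pi, pi, 1024);
    Yd = freqz(y, 1, w);                     % left-hand side
    Yf = (1/M)*( freqz(x, 1, w/M) + freqz(x, 1, (w - 2*pi)/M) );   % right-hand side
    max(abs(Yd - Yf))                        % ~1e-13, i.e., the identity holds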

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 9 / 29


Graphical Interpretation

    Y_D(e^{jω}) = (1/M) Σ_{k=0}^{M−1} X(e^{j(ω−2πk)/M})

Stretch X (ejω ) by M ⇒

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 10 / 29


Graphical Interpretation

    Y_D(e^{jω}) = (1/M) Σ_{k=0}^{M−1} X(e^{j(ω−2πk)/M})

Stretch X(e^{jω}) by M ⇒ X(e^{jω/M})
Create (M − 1) copies of X(e^{jω/M}), shifted uniformly by 2πk, k = 1, . . . , M − 1
Sum all these shifted 'stretched versions' with X(e^{jω/M}) & divide by M

Q: Verify that YD (ejω ) has period 2π

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 10 / 29


Example

x[n] ↔ X(e^{jω}),  y[n] = x[2n]

Q. Sketch Y(e^{jω})

    Y(e^{jω}) = X′(e^{jω/2}),

where

    X′(e^{jω}) = (1/2) [ X(e^{jω}) + X(e^{j(ω−π)}) ]
B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 11 / 29


Sketch of X′(e^{jω})


B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 12 / 29


Sketch of Y (ejω )

Clearly, we see the aliasing problem


Solution: use anti-aliasing filter (lowpass filter) before downsampling
More details in class

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 13 / 29


Exercise

Suppose that x[n] is passed through an ideal LPF with ω_c = π/2 and then
applied to a downsampler with M = 2. Sketch Y(e^{jω}).
Sketch of X_LP(e^{jω})

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 14 / 29


Filtering & Downsampling

Figure: Anti-aliasing filter before downsampler.

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 15 / 29


Upsampling Illustration (L = 2)

Figure: Upsampling for L = 2

Upsampler (or expander) does not cause loss of information


Upsampling results in imaging effect
Anti-imaging requirement
Use anti-imaging filter (interpolation) after upsampler block
zero-valued samples converted into interpolated samples by using a LPF

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 16 / 29


Transform Domain Analysis

Let k & L be integers. O/p of upsampler in time-domain:

    y_E[n] = x[n/L],  n = kL
             0,       elsewhere

Q: Verify that

    Y_E(e^{jω}) = X(e^{jωL})

Y_E(e^{jω}): L-fold compressed version of X(e^{jω})


Math details in class
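
A similar numerical check of Y_E(e^{jω}) = X(e^{jωL}) for L = 3 (sketch):

    L  = 3;
    x  = randn(1, 50);
    ye = zeros(1, L*length(x));
    ye(1:L:end) = x;                         % expander output
    w  = linspace(-pi, pi, 1024);
    max(abs( freqz(ye, 1, w) - freqz(x, 1, L*w) ))    % ~1e-13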

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 17 / 29


An Illustration (time-domain)

Figure: Upsampling & Downsampling ( L = M = 2 )

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 18 / 29


An Illustration (frequency domain)

Figure: Upsampling contracts frequency axis.

x[n] = xa [nTs ], Ωs = 2ΩN

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 19 / 29


An Illustration (frequency domain)

Figure: Upsampling contracts frequency axis.

x[n] = xa [nTs ], Ωs = 2ΩN


y [n] = U2 (x[n])

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 19 / 29


Combined Upsampling & Downsampling

Figure: Upsampling & Downsampling

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 20 / 29


Combined Upsampling & Downsampling

Figure: Upsampling & Downsampling. O/p sampling rate F_s L/M

Figure: Simplified by combining filters

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 21 / 29


Equivalent Filters: Decimation-based

Figure: Illustrating equivalent filters using decimation

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 22 / 29


Equivalent Filters: Decimation-based

Figure: Illustrating equivalent filters using decimation

Q. Prove that Y_a(e^{jω}) = Y_b(e^{jω})

Generalization to M−fold decimation


called noble identity
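
A numerical check of the decimation noble identity, i.e., filtering with H(z^M) followed by ↓M equals ↓M followed by H(z) (sketch; the identity holds for any FIR h, so arbitrary taps are used):

    M  = 3;
    h  = randn(1, 8);                        % arbitrary FIR H(z)
    x  = randn(1, 900);
    hM = zeros(1, M*(length(h)-1) + 1);
    hM(1:M:end) = h;                         % H(z^M): insert M-1 zeros between taps
    ya = filter(hM, 1, x);  ya = ya(1:M:end);    % H(z^M), then downsample by M
    yb = filter(h, 1, x(1:M:end));               % downsample by M, then H(z)
    max(abs(ya - yb))                        % 0 (up to round-off)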

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 22 / 29


Equivalent Filters: upsampling-based

Figure: Illustrating equivalent filters using upsampling

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 23 / 29


Equivalent Filters: upsampling-based

Figure: Illustrating equivalent filters using upsampling

Q. Prove that Y_a(e^{jω}) = Y_b(e^{jω})

Generalization to L−fold interpolation


called noble identity

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 23 / 29


Polyphase Representation (PPR)

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 24 / 29


Polyphase Representation (PPR)

Q.

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 24 / 29


Polyphase Representation (PPR)

Q. Prove that they are equivalent

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 24 / 29


M-Component PPR

Polyphase representation
Valid for FIR/IIR; causal/non-causal
Applicable to any sequence (not just impulse response)
Type-1 & Type-2 PPR (in class)
PPR of interpolation filter
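
A sketch of the Type-1 decomposition H(z) = Σ_{l=0}^{M−1} z^{−l} E_l(z^M), with e_l[n] = h[nM + l], used as a decimator: it reproduces "filter then downsample" while running every branch filter at the low rate (arbitrary taps; boundaries handled by comparing the first N outputs):

    M  = 2;
    h  = randn(1, 11);                       % prototype filter (any taps work for the check)
    x  = randn(1, 1000);
    N  = floor(length(x)/M);                 % number of low-rate samples compared

    yd = filter(h, 1, x);                    % direct form: filter at the high rate ...
    yd = yd(1:M:end);  yd = yd(1:N);         % ... then downsample

    hp = [h, zeros(1, M*ceil(length(h)/M) - length(h))];   % pad h to a multiple of M
    E  = reshape(hp, M, []);                 % row l+1 holds e_l[n] = h[nM + l]
    yp = zeros(1, N);
    for l = 0:M-1
        xl = [zeros(1, l), x];               % z^{-l} delay
        ul = xl(1:M:end);                    % downsample by M
        yl = filter(E(l+1, :), 1, ul);       % branch filter E_l at the low rate
        yp = yp + yl(1:N);
    end
    max(abs(yd - yp))                        % 0 (up to round-off)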

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 25 / 29


Polyphase Implementations of
Digital Filters

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 26 / 29


Digital Filter Banks
Collection of digital filters with a common input & a common output
xk [n], k = 0, 1, . . . , M − 1 called subband signals
Hk [n], k = 0, 1, . . . , M − 1 called analysis filters
Fk [n], k = 0, 1, . . . , M − 1 called synthesis filters
combine M subband signals into x̂[n]

Figure: Analysis (left) & Synthesis (right) filter banks

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 27 / 29


Filter Responses

Figure: Illustration of typical filter responses: i). Marginally overlapping, ii). Non-overlapping & iii). Overlapping

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 28 / 29


Uniform DFT Filter Bank

Filter bank based on DFT matrix
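
A sketch of the idea: the M analysis filters are modulated copies of one lowpass prototype, h_k[n] = h_0[n] e^{j2πkn/M}, so H_k(e^{jω}) = H_0(e^{j(ω − 2πk/M)}) (prototype design is illustrative):

    M  = 4;
    h0 = fir1(47, 1/M);                          % prototype lowpass, cutoff pi/M (assumed)
    n  = 0:length(h0)-1;
    H  = zeros(M, length(h0));
    for k = 0:M-1
        H(k+1, :) = h0 .* exp(1j*2*pi*k*n/M);    % k-th analysis filter
    end
    % check: each filter is the prototype shifted to centre frequency 2*pi*k/M
    w = linspace(0, 2*pi, 512);  k = 1;
    max(abs( freqz(H(k+1,:), 1, w) - freqz(h0, 1, w - 2*pi*k/M) ))   % ~1e-13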

B. Sainath (BITS, PILANI) Multirate signal processing October 1, 2018 29 / 29


Wavelets & Applications

B. Sainath
sainath.bitragunta@pilani.bits-pilani.ac.in

Department of Electrical and Electronics Engineering


Birla Institute of Technology and Science, Pilani

November 16, 2018

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 1 / 56


Outline

1 Introduction & Motivation

2 Short-Time Fourier Transform

3 Continuous Wavelet Transform

4 Discrete Wavelet Transform

5 Applications

6 Conclusions

7 References & Further Reading

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 2 / 56


Quote By Mallat

Quote
“Wavelet theory” is the result of a multidisciplinary effort that brought
together mathematicians, physicists and engineers...this connection has
created a flow of ideas that goes well beyond the construction of new
bases or transforms
—-Stephane Mallat, Author of Wavelet tour of signal processing

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 3 / 56


Introduction & Motivation

Figure: Fourier & wavelet analysis of two signals.

Limitation of Fourier spectrum:

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 4 / 56


Introduction & Motivation

Figure: Fourier & wavelet analysis of two signals.

Limitation of Fourier spectrum:

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 4 / 56


Introduction & Motivation

Figure: Fourier & wavelet analysis of two signals.

Limitation of Fourier spectrum:


could not distinguish the two signals (top left & right)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 4 / 56


Introduction & Motivation
Examples of signals having time-varying frequencies

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 5 / 56


Introduction & Motivation
Examples of signals having time-varying frequencies
speech, music, biomedical, seismic, so on
Fourier analysis is not a useful tool for analyzing such signals
Need for transforms from which frequency content can be obtained
locally in time

Figure: Application of wavelet to real-world signals having transients.

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 5 / 56


Introduction & Motivation
Wavelets
class of functions localized in time & frequency
short wave(-like) oscillations
exist for finite duration & have zero mean

Figure: Wavelet Illustration.

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 6 / 56


Examples of Wavelets

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 7 / 56


Short-Time Fourier Transform (STFT)

Definition: (continuous-time)
    V_f(ω, t) = ∫_{−∞}^{∞} f(u) v(u − t) e^{−jωu} du
              = ∫_{−∞}^{∞} f(u) v_{ω,t}(u) du

Also called windowed Fourier transform/short-term FT


Consider FT framework
    achieve time localization by windowing the data at various times
STFT is an energy preserving transformation (called isometry)

    ∫_{−∞}^{∞} |f(t)|² dt = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} |V_f(ω, t)|² dω dt

Gabor transform: Gaussian function is used as a window


More details in class

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 8 / 56


Discrete-time STFT

    V_f(ω, n) = Σ_{m=−∞}^{∞} f[m] v[m − n] e^{−jωm}
Signal f [m], window v [m]
n is discrete & ω is continuous
However, the STFT is performed on a computer using the FFT ⇒ both
variables are discrete & quantized

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 9 / 56


Discrete-time STFT

    V_f(ω, n) = Σ_{m=−∞}^{∞} f[m] v[m − n] e^{−jωm}
Signal f [m], window v [m]
n is discrete & ω is continuous
However, the STFT is performed on a computer using the FFT ⇒ both
variables are discrete & quantized

DFT implementation principle & Lowpass filter interpretation of STFT (in


class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 9 / 56


STFT Implementation (MATLAB)

STFT
can be used for the t − f information in signals of interest (for e.g., audio
signal)
consists of the DFTs of portions of the time-domain signal

STEPS
Read the input signal to be analyzed
For e.g., audioread to read audio signal of known sampling frequency fs
[x,fs] = audioread('file.wav'); 'x' contains samples & fs sampling frequency
Plot the discrete-time (DT) signal: duration of a DT signal with N samples = N/fs sec.
t = (0:length(x)-1)/fs; plot(t,x);
Plot the f − domain signal with FFT or freqz:
[H,W] = freqz(x); plot(W,abs(H)); (f in rad/sample)

f = (fs/2)*W/pi; plot(f/1000,abs(H)); (f in Hz)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 10 / 56


STFT Implementation (MATLAB):
Waveforms

Figure: Audio signal in the time domain (x vs. t, seconds).

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 11 / 56


STFT Implementation (MATLAB):
Waveforms

Figure: Magnitude of the frequency response vs. angular frequency (radians/sample).

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 12 / 56


STFT Implementation (MATLAB):
Waveforms

Figure: Magnitude of the frequency response vs. frequency (kHz).

We observe some peaks, which correspond to notes of the audio signal


However, it is difficult to tell the time instants at which the peaks occur
STFT can address this problem

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 13 / 56


STFT Implementation using FFT

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 14 / 56


STFT Implementation (MATLAB):
Steps (Contd.,)

Basic idea & step:


Consider small portion of samples using a window
Compute its FFT & place it as the column of a matrix
N = 40*fs/1000; win = hamming(N); F = fft( x(1:N) .* win ); Z = F;
Obtain the next column by sliding the window by 'hop' samples (called the
'hop-size')
hop = round(length(win)/4); F = fft( x( hop + (1:N) ) .* win ); Z(:,2) = F;
Similarly,
F = fft( x( 2*hop + (1:N) ) .* win ); Z(:,3) = F;
Continue until you reach the end of the data sequence
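
The per-hop commands above can be collected in a loop (a sketch consistent with those steps; x and fs come from audioread, x assumed mono):

    N   = round(40*fs/1000);                 % 40 ms window
    win = hamming(N);
    hop = round(N/4);
    nF  = floor((length(x) - N)/hop) + 1;    % number of frames that fit
    Z   = zeros(N, nF);
    for m = 1:nF
        seg     = x((m-1)*hop + (1:N));      % current portion of the signal
        Z(:, m) = fft(seg(:) .* win);        % one DFT per column of the STFT matrix
    end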

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 15 / 56


STFT Spectrogram (Magnitude)

Magnitude of the STFT yields the magnitude spectrogram


Squared magnitude of STFT gives PSD

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 16 / 56


Inverse STFT

STFT is (in general) invertible


Methods: Filter bank summation (FBS), overlap-add (OLA)
FBS method
uses bank of filters
STFT viewed as set of outputs from analysis filters
OLA method
Take IFFT for each fixed time in the discrete STFT

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 17 / 56


Analysis & Synthesis in STFT

Exercise: Assume that v[n] is finite & v[n] ≠ 0, 0 ≤ n ≤ N − 1. Let
ω_k = 2πk/N. Show the following:

    v[n − m] f[m] = (1/√N) Σ_{k=0}^{N−1} V_f[n, k] e^{jω_k m},   for v[0] ≠ 0

    f[n] = (1/(√N v[0])) Σ_{k=0}^{N−1} V_f[n, k] e^{jω_k n},   for n = m

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 18 / 56


OLA Method

Synthesis equation:

    f[n] = (M/v[0]) Σ_{p=−∞}^{∞} (1/√N) Σ_{k=0}^{N−1} V_f[pM, k] e^{jω_k n}

where M denotes the decimation factor


Exact synthesis, i.e., g[n] = f[n], is possible when either
    the analysis window has finite bandwidth with ω_c < 2π/M, or
    the sum of the analysis windows obtained by shifting v[n] in M-point increments
    adds to a constant
Exercise: Show the following condition for exact synthesis:

    Σ_{p=−∞}^{∞} v[pM − n] = V(0)/M,   where V(0) is the DTFT of v[n] at ω = 0

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 19 / 56


Applications of STFT

In those applications where combining t & f domains in one framework is


useful
Signal processing of
speech, music, audio
SONAR signal processing, geographical exploration
Image processing

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 20 / 56


Limitations of STFT

Figure: Source: Wiki

Fixed resolution (therefore)
    suited to analyzing processes where all the features appear approximately at
    the same scale
Wider window gives higher frequency resolution (but poor t−resolution)
Narrower window gives good time resolution (but poor f −resolution)
Time-frequency tradeoff!

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 21 / 56


Features & Parameters of Wavelets

Provide good t−resolution for high-frequency events & good f −resolution


for low-frequency events ⇒ suitable for many real-world signals!
Wavelets are two parameter family of functions
dilation parameter (scaling)
translation parameter (shifting)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 22 / 56


Features & Parameters of Wavelets

Stretched wavelet is useful for capturing

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 23 / 56


Features & Parameters of Wavelets

Stretched wavelet is useful for capturing slowly varying changes


Compressed wavelet is useful for capturing abrupt changes
Shifting: (e.g., Ψ(t − k ))
delaying (or) advancing onset of a wavelet along the length of the signal
required to align and extract features of the signal

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 23 / 56


Wavelet Transforms

Continuous wavelet transform (CWT)


Discrete wavelet transform (DWT)
Wavelet packet transform (generalized DWT)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 24 / 56


Practice Questions

Based on the concepts covered in class, read the book chapter (by
Nawab & Quatieri) on STFT (check Nalanda).
Questions:
Let f[n] = exp(j (2π/N) f_d n). Let v[n] denote the analysis window. Determine the
discrete-STFT of f[n]. What is the D-STFT when a rectangular window is used?
Let f[n] = cos((2π/N) f_d n). Let v[n] denote the analysis window. Determine the
discrete-STFT of f[n]. What is the D-STFT when a rectangular window is used?
What is the D-STFT if v[n] = δ[n]?

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 25 / 56


Wavelet Concept: An illustration

Continuous wavelet transform (CWT)


computation of WT in smooth continuous manner
Discrete wavelet transform (DWT)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 26 / 56


Wavelet Concept: An illustration

Continuous wavelet transform (CWT)


computation of WT in smooth continuous manner
Discrete wavelet transform (DWT)
computation of WT in discrete steps

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 26 / 56


Wavelet Concept: Location & Scale

Two parameters: translation, scale


Imp. Note: change in scale does not correspond to shift in frequency
frequency ∝ 1/scale
Change in scale compresses or dilates ψ(t) ⇒ changing temporal
concentration
Definitions, concepts & math (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 27 / 56


Examples: Haar Wavelet

Figure: https://en.wikipedia.org/wiki/Haar_wavelet

Sequence of rescaled square-shaped functions


form a wavelet family or basis
Mother wavelet ψ(t) satisfies two conditions: Integrates to zero, unit norm
Check the two conditions (in class)
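
A quick numerical check of the two conditions for the Haar mother wavelet, ψ(t) = 1 on [0, 1/2), −1 on [1/2, 1), 0 elsewhere (sketch):

    t   = linspace(0, 1, 1e5);  dt = t(2) - t(1);
    psi = (t < 0.5) - (t >= 0.5);            % Haar mother wavelet on its support
    sum(psi)*dt                              % ~0 : integrates to zero
    sum(psi.^2)*dt                           % ~1 : unit norm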

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 28 / 56


Examples: Ricker Wavelet
(Mexican Hat wavelet)

Negative normalized second derivative of a Gaussian function


Application: used to model seismic data
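
A short MATLAB sketch of this wavelet (one common normalization, with width parameter σ):

    sigma = 1;
    t   = linspace(-5, 5, 1001);
    psi = (2/(sqrt(3*sigma)*pi^0.25)) * (1 - (t/sigma).^2) .* exp(-t.^2/(2*sigma^2));
    plot(t, psi); grid on                    % the characteristic 'Mexican hat' shape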

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 29 / 56


Remarks on Energy Spread

Let ψ(t) be centered at t = 0 =⇒ ψ_{a,b}(t) is centered at ?

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 30 / 56


Remarks on Energy Spread

Let ψ(t) be centered at t = 0 =⇒ ψ_{a,b}(t) is centered at ? (t = b)


Write down the expression for energy spread σt2 (a, b) (in class)
Let f0 be the center frequency of Ψ(f ). Then
center frequency of ψa,b (f ) is ?

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 30 / 56


Remarks on Energy Spread

Let ψ(t) be centered at t = 0 =⇒ ψ_{a,b}(t) is centered at ? (t = b)


Write down the expression for energy spread σ_t²(a, b) (in class)
Let f_0 be the center frequency of Ψ(f). Then
    center frequency of ψ_{a,b}(f) is ? (f_0/a)
    What is the energy spread about f_0/a? (in class)

        σ_f²(a, b) = σ_f²/a²
Time position depends on b alone i.e. on the translation parameter
Frequency position depends on f0 and a, spread depends on the scale
parameter a

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 30 / 56


Heisenberg boxes

Figure: Heisenberg boxes representing the energy spread of two wavelets

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 31 / 56


Continuous Wavelet Transform &
Scalogram

Definitions in class
Scalogram is analogous to spectrogram in STFT
Energy computation from scalogram (in class)
Matlab command for continuous wavelet transform (CWT): cwt
http://in.mathworks.com/help/wavelet/ref/cwt.html
wt = cwt(x,wname) uses the analytic wavelet specified by wname to
compute the CWT
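
For example (a sketch assuming the current Wavelet Toolbox cwt; 'amor' selects the analytic Morlet wavelet):

    fs = 1e3;  t = 0:1/fs:1;
    x  = cos(2*pi*25*t).*(t < 0.5) + cos(2*pi*100*t).*(t >= 0.5);   % frequency change at t = 0.5 s
    wt = cwt(x, 'amor');                     % CWT coefficients
    imagesc(abs(wt)); axis xy                % scalogram magnitude: the change is localized in time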

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 32 / 56


Wavelets & Expansions

Orthogonality
Wavelets (or wavelet basis functions) are localized waveforms whose
scaled and translated versions are all orthogonal to each other

Let x(t) be a finite energy signal


Expansion of x(t) in terms of basis functions (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 33 / 56


Inverse Wavelet Transform

Recovery of original signal x(t) from its wavelet transform by integrating


over all scales & locations a and b
Cg denotes admissibility constant
depends on chosen wavelet
math in class
Exercise: For the Mexican hat wavelet, show that C_g = π

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 34 / 56


Complex Wavelets

Requirement: Fourier transform is zero for negative frequencies


E.g., Morlet wavelet or Gabor wavelet (details in class)
Application

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 35 / 56


Complex Wavelets

Requirement: Fourier transform is zero for negative frequencies


E.g., Morlet wavelet or Gabor wavelet (details in class)
Application
Using complex wavelets, we can separate magnitude & phase components
within a signal
Magnitude & phase of CWT using complex wavelet (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 35 / 56


Expansion using Wavelet Basis

By choosing orthonormal wavelet basis & using wavelet coefficients we


can reconstruct the original signal
Express f (t) in terms of basis & coefficients (in class)
Computation of energy & Parseval’s theorem (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 36 / 56


Multiresolution Analysis (MRA)

Any finite energy signal can be decomposed in an orthonormal wavelet


basis
Note that scale ∝ 1/resolution & v.v
Finer (smaller) scale =⇒ higher (larger) resolution
Need for multiresolution
allows to process important & relevant details for a specific task
Application: Multiresolution image analysis
facilitates advanced tasks such as image restoration, segmentation, object
recognition
Math details (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 37 / 56


Analysis at Scale 2−m

Consider scale 2−m


Compute local averages of f (t) at positions {k × 2−m }, k ∈ Z over
intervals of width ∝ 2−m
Illustration of f (t) (in class)
MRA: Analysis of f (t) over embedded grids of approximation

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 38 / 56


Scaling Function & Coefficients

Discrete dyadic grid wavelet lends itself to fast computer algorithm


Scaling function
Used in discrete wavelet transform
Notations/definitions in class
φ(t) is called father scaling function
Integrates to one

Orthogonality
Scaling function is orthogonal to translations of itself but not dilations of itself

From Wavelet function, get wavelet coefficients


From Scaling function, get scaling coefficients
Together, we get DWT coefficients

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 39 / 56


Discretized CWT versus DWT

Discretized approximate CWT

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 40 / 56


Discretized CWT versus DWT

Discretized approximate CWT


required for practical implementation
Involve a discrete approximation of the transform integral (summation)
computed on discrete grid of a scales & b locations
Accuracy of approximation depends on resolution of discretization

Discrete wavelet transform


Transform integral remains continuous but determined only on a dyadic grid
of scales & locations

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 40 / 56


f (t) Representation

Represent f (t) using combined series approximation coefficients &


wavelet (detail) coefficients (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 41 / 56


Haar Wavelet Scaling Function

φ(t) = c0 φ(2t) + c1 φ(2t − 1)


Scaling coefficients c0 = c1 = 1 (proof in class)
Consider wavelets of finite support
Wavelet function in terms of scaling function (in class)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 42 / 56


Wavelet Equation

Wavelet equation (in terms of scaling function)


    ψ(t) = Σ_k (−1)^k c_{1−k} φ(2t − k)

For a finite number of scaling coefficients c_0, . . . , c_{N_k−1}, N_k ∈ N:

    ψ(t) = Σ_k (−1)^k c_{N_k−1−k} φ(2t − k)

Recall Haar scaling function: φ(t) = φ(2t) + φ(2t − 1) =⇒ Haar wavelet


function
ψ(t) = φ(2t) − φ(2t − 1)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 43 / 56


Use of φ(t) & ψ(t)

Using scaling function φ(t) & wavelet function ψ(t), we have


    f(t) = Σ_{n=−∞}^{∞} c(n) φ(t − n) + Σ_{m≥0} Σ_{n=−∞}^{∞} d_m(n) 2^{m/2} ψ(2^m t − n)

φ(t − n) denote the set of scaling functions (orthonormal basis)
2^{m/2} ψ(2^m t − n) denote the set of wavelet functions (orthonormal basis)
c(n) are the scaling coefficients

    c(n) = < f, φ(t − n) >

d_m(n) are the wavelet coefficients

    d_m(n) = < f, 2^{m/2} ψ(2^m t − n) >

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 44 / 56


Discrete Wavelet Transform (DWT)

Generally, we can start at any scale 2^{m_0} =⇒

    f(t) = Σ_{n=−∞}^{∞} c_{m_0}(n) 2^{m_0/2} φ(2^{m_0} t − n) + Σ_{n=−∞}^{∞} Σ_{m=m_0}^{∞} d_m(n) 2^{m/2} ψ(2^m t − n)

c_{m_0}(n) ⇐ low-resolution (coarse scale) approximation coefficients
d_m(n) ⇐ high-resolution (detail) coefficients
{c_{m_0}(n)}_n & {d_m(n)}_{m≥m_0, n} form the DWT of f(t)
Haar scaling function approximation (in class)
    c_m(n) ≈ f(n 2^{−m})
    Scaling coefficients are approximately equal to signal samples taken with sampling period 2^{−m}
Q What is the sampling frequency?

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 45 / 56


Discrete Wavelet Transform (DWT)

Generally, we can start at any scale 2^{m_0} =⇒

    f(t) = Σ_{n=−∞}^{∞} c_{m_0}(n) 2^{m_0/2} φ(2^{m_0} t − n) + Σ_{n=−∞}^{∞} Σ_{m=m_0}^{∞} d_m(n) 2^{m/2} ψ(2^m t − n)

c_{m_0}(n) ⇐ low-resolution (coarse scale) approximation coefficients
d_m(n) ⇐ high-resolution (detail) coefficients
{c_{m_0}(n)}_n & {d_m(n)}_{m≥m_0, n} form the DWT of f(t)
Haar scaling function approximation (in class)
    c_m(n) ≈ f(n 2^{−m})
    Scaling coefficients are approximately equal to signal samples taken with sampling period 2^{−m}
Q: What is the sampling frequency? Ans. f_s = 2^m, ω_s = 2π · 2^m

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 45 / 56


DWT Computation
Mallat’s fast wavelet transform
Assume an initial set of scaling coefficients {cm (n)}n representing an
approximation to a signal
Let {h[n]}_n denote the impulse response of the lowpass (scaling) filter
Let {h_1[n] = (−1)^n h[1 − n]}_n denote the impulse response of the highpass
(wavelet) filter
Compute recursively the wavelet coefficients & the scaling coefficients at the
coarser scale using the scaling filter (lowpass) & the wavelet filter (highpass)

    c_m(k) = < f, 2^{m/2} φ(2^m t − k) > = Σ_n h[n − 2k] c_{m+1}(n)

    d_m(k) = < f, 2^{m/2} ψ(2^m t − k) > = Σ_n h_1[n − 2k] c_{m+1}(n)

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 46 / 56


(FWT Contd.,) Decimation in FWT

The filters are shifted by 2k (rather than k) so that only even indexed
terms (at filter o/ps) are retained
Eliminates redundant information
With these coefficients (computed using simple digital filters), we can
recover PVm f (finite sum approximation to finite-time f (t)!) =⇒ New
world of DSP!
Instead of processing signal samples, we can analyze & process a signal
using its DWT
Haar analysis example (in class)
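
A one-stage Haar illustration of this analysis/synthesis step (a sketch; for Haar the filters reduce to pairwise sums and differences, and reconstruction is exact):

    c1 = randn(1, 16);                       % scaling coefficients at the finer scale
    ce = c1(1:2:end);  co = c1(2:2:end);     % even / odd indexed samples
    a  = (ce + co)/sqrt(2);                  % coarser-scale scaling (approximation) coefficients
    d  = (ce - co)/sqrt(2);                  % wavelet (detail) coefficients
    r  = zeros(size(c1));                    % synthesis: interleave to recover c1
    r(1:2:end) = (a + d)/sqrt(2);
    r(2:2:end) = (a - d)/sqrt(2);
    max(abs(r - c1))                         % ~1e-16: perfect reconstruction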

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 47 / 56


Signal Filtering

Let Vfm,n denote approximation (scaling) coefficients


Let Wfm,n denote detailed (wavelet) coefficients
Scheme for filtering of approximation coefficients to produce approximate
& detailed coefficients at successive scales (in class)
Scheme for filtering of approximation & detailed coefficients to produce
approximate coefficients at successive scales (in class) =⇒ subband
coding scheme!
LPF & HPF together known as QMF

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 48 / 56


Key Theorem

Theorem
Let {φ(t − n), n ∈ Z} denote an orthonormal basis and φ(t) denotes
orthonormal scaling function. Then, to ensure a valid multiresolution
analysis, the sequence

    h[n] = < φ(t), √2 φ(2t − n) >

must satisfy

    |H(ω)|² + |H(ω + π)|² = 2,   ω ∈ [0, 2π)

    H(0) = √2

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 49 / 56


Daubechies Wavelets

Based on the work of Ingrid Daubechies


Family of orthogonal wavelets defining a discrete wavelet transform
Characterized by a maximal number of vanishing moments for some
compact support
Discrete wavelets of which Haar wavelet (D2) is the simplest
Scaling functions associated with these wavelets satisfy the following
conditions:

    Σ_k c_k = 2

    Σ_k c_k c_{k+2k′} = 2,  if k′ = 0
                      = 0,  otherwise

Nk denotes finite number of scaling coefficients =⇒ compact support

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 50 / 56


Vanishing Moments

Smoothness of the wavelet is associated with a moment condition:


    Σ_{k=0}^{N_k−1} (−1)^k c_k k^m = 0,

where m = 0, 1, . . . , N_k/2 − 1 =⇒ N_k/2 vanishing moments
    suppressing parts of the signal which are polynomial up to degree N_k/2 − 1
Examples: DB2, DB4, DB6 so on
Determine the 4 scaling coefficients c0 , c1 , c2 , c3 of DB4 (in class)
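
For reference, the commonly quoted DB4 values under the Σ c_k = 2 normalization are c = ( (1+√3)/4, (3+√3)/4, (3−√3)/4, (1−√3)/4 ) (one of the two mirror-image solutions); a short numerical check of the stated conditions:

    c = [1+sqrt(3), 3+sqrt(3), 3-sqrt(3), 1-sqrt(3)]/4;
    sum(c)                                   % = 2
    sum(c.^2)                                % = 2  (k' = 0 case)
    c(1)*c(3) + c(2)*c(4)                    % = 0  (k' = 1 case)
    sum((-1).^(0:3).*c.*(0:3).^0)            % = 0  (m = 0 vanishing moment)
    sum((-1).^(0:3).*c.*(0:3).^1)            % = 0  (m = 1 vanishing moment)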

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 51 / 56


Wavelet Packets (Adaptive Transforms)
Generalization of DWT
Wavelet packets involve particular linear combinations of wavelets
Wavelet packet signal decomposition
Both approximate & detailed coefficients further decomposed at each level
WPD: Wavelet transform where DT signal is passed through more filters
than the DWT

Figure: WPD over 3 levels. g[n]: LP approximation coefficients & h[n]: HP detailed
coefficients

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 52 / 56


References & Further Reading

Wavelets and subband coding by Vetterli & Kovacevic


Wavelet tour of signal processing by Mallat
The Illustrated wavelet transform handbook by Paul Addison
https://en.m.wikipedia.org/wiki/Wavelet
https://in.mathworks.com/help/wavelet/gs/
continuous-and-discrete-wavelet-transforms.html

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 53 / 56


Practice Questions (PQs)/Problems

Refer to class notes for PQs/problems. Find additional PQs below


Q.1. Determine scaling coefficients of DB6 wavelet. You may use
MATLAB. Check the conditions. Determine the scaling and the wavelet
functions.
Q.2. Consider a continuous-time signal f(t), which is sampled at a rate of
T = 1/2^M seconds/sample (M > 0), for 1 second. Assume that M is very
large. Prove the following:

    f(n 2^{−M}) ≈ c_M(n),

    P_{V_M} f ≈ Σ_n f(n 2^{−M}) φ(2^M t − n)

Note: Most of the part proved in class.

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 54 / 56


Practice Questions (PQs)/Problems

Q.3. Operational complexity of DWT:Assume that we give Nk coefficients


as input to filter of finite length L. Suppose that single stage filter bank is
used to obtain approximate and detail coefficients of successive scale,
answer the following:
Number of floating point operations required for the single stage. Hint:
Convolution requires approximately L N_k / 2 operations
How many operations will be performed by a multi-stage (or multi-level) filter bank?
Derive upper bound for the number of operations
Comment on the computational complexity
Q.4. Let x(t) = Σ_n x[n] sinc(t − n). Let h[n] = sinc(t) ∗ φ(−t)|_{t=n}.
Determine < x(t), φ(t − k) > & its DTFT. Comment on < x(t), φ(t − k) >

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 55 / 56


Practice Questions (PQs)/Problems

Q.5. Let φ(t) denote the father scaling function & ψ(t) denote the mother
wavelet of DB2. Determine
Z ∞
φ(t)ψ(t) dt
−∞

Q.6. Express DB4 scaling coefficients in terms of angle ψ. Does any


value of ψ produces useful wavelet? Justify your answer
Q.7. Draw a schematic of input-system-output view point of continuous
wavelet transform & compare with STFT (in t & f domains)
Identify the wavelet defined by the following equation & determine its
spectrum
    ψ(t) = ℜ{ e^{(jt)²/2} e^{j5t} }
Q.8. Determine the magnitude spectrum of Haar Mother wavelet &
sketch in MATLAB

B. Sainath (BITS, PILANI) Wavelets November 16, 2018 56 / 56


Detection & Estimation: Fundamentals &
Applications

B. Sainath
sainath.bitragunta@pilani.bits-pilani.ac.in

Department of Electrical and Electronics Engineering


Birla Institute of Technology and Science, Pilani

December 2, 2018

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 1 / 30


Outline

1 Introduction & Motivation

2 Detection Theory

3 Rules, Problems & Solutions

4 Estimation Theory

5 Types of Estimators

6 Textbooks & References

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 2 / 30


Introduction & Motivation

Need for Detection theory


Decision making & information extraction
Fundamental to the design of electronic signal processing systems, including
RADAR, SONAR, communication, speech, image, control, and so on
Detection versus Estimation
Commonalities:
Random observation & model
Unknown parameter to be determined
Optimality criterion
Differences:
Detection problem involves discrete unknown parameter: countable number of
choices
Estimation problem involves continuous unknown parameter: uncountable
number of choices
Bayesian problem ⇐ random unknown
Non-Bayesian problem ⇐ non-random unknown
Observations: scalar, vector, sequence, random process
(discrete/continuous)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 3 / 30


Application: RADAR System for
Target Detection

Detection problem: Presence


or absence of target
Two possible hypotheses: i)
signal plus noise present ii)
only noise present
Called binary hypothesis
testing problem
Goal is to efficiently use
received data for decision
making

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 4 / 30


Application: Digital communication

Figure: Coherent receiver of BPSK.

Detection in Gaussian noise (receiver side)


Maximum a posteriori probability (MAP), maximum likelihood (ML)
Detector to decide between 0 or 1 (e.g., binary phase shift keying) or
decode received symbol
Design challenge: Optimum receiver

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 5 / 30


Hypothesis Testing

Neyman-Pearson (NP) Rule & Likelihood ratio test (LRT)


Probability of detection under PFA constraint
Receiver operating characteristics (ROC)
Bayesian Rule
Minimum probability of error
Notations & Examples in class

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 6 / 30


Example: DC level Detection
Performance Curves

    P_D = Q( Q^{−1}(P_FA) − √(N A²/σ²) )
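
A sketch that plots this expression (base-MATLAB erfc/erfcinv are used for Q(·) and its inverse; N, A, σ² are illustrative):

    Q    = @(x) 0.5*erfc(x/sqrt(2));
    Qinv = @(p) sqrt(2)*erfcinv(2*p);
    N = 10;  A = 0.5;  sigma2 = 1;
    Pfa = logspace(-6, 0, 200);
    Pd  = Q( Qinv(Pfa) - sqrt(N*A^2/sigma2) );
    semilogx(Pfa, Pd); xlabel('P_{FA}'); ylabel('P_D');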

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 7 / 30


Receiver Operating Characteristics (ROC)
for DC Level in WGN

    d ≜ √(N A²/σ²) ⇐ deflection coefficient
B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 8 / 30
MAP Detector Example

Figure: Effect of prior probability on decision regions: i). (left) MAP detector with
P(H_0) = P(H_1) = 1/2, ii). (right) MAP detector with P(H_0) = 1/4 & P(H_1) = 3/4

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 9 / 30


MAP Detection & ML Detection

Bayes rule:

    p(Y|H_1) / p(Y|H_0) > p(H_1) / p(H_0)

For equal a priori probabilities, i.e., p(H_1) = p(H_0) = 1/2, we get

    p(Y|H_1) / p(Y|H_0) > 1,

=⇒ ML rule
Example (in class)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 10 / 30


N-P Versus Bayesian

Maximize P_D such that P_FA ≤ p ⇐ N-P


Bayesian: minimize average probability of error
    depends on P_FA, P_M & the a priori probabilities
Matched filter receiver uses N-P detection approach

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 11 / 30


Matched Filter (MF)

Employs N-P detector/detection


Problem: To detect know deterministic signal corrupted by noise
Discussion with example (in class)

Figure: N-P detector in: a). Correlator structure b). Matched filter structure

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 12 / 30


Example: MF Impulse Response

s[n], n = 0, 1, . . . , N − 1, denotes a known deterministic sequence with finite


energy
h[n] = s[N − 1 − n] ⇐ Impulse response of Matched filter (MF)
s[n] = 1, n = 0, 1, 2, 3, 4. Determine & sketch the MF impulse response &
o/p
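
A sketch of this example (the MF output peaks at n = N − 1 with value equal to the signal energy):

    N = 5;  s = ones(1, N);                  % known sequence s[n] = 1, n = 0..4
    h = s(N:-1:1);                           % matched filter h[n] = s[N-1-n]
    y = conv(s, h);                          % MF output: triangle, peak value sum(s.^2) = 5
    stem(0:2*N-2, y)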

Figure: O/p of MF for dc sequence i/p

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 13 / 30


Detection Performance

Figure: MF detection performance

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 14 / 30


Generalized Matched Filters

Model: Correlated noise W ∼ N (0, K), where K denotes covariance


matrix
Detector performance

    P_D = Q( Q^{−1}(P_FA) − √(s^t K^{−1} s) )

Q. Let I denote the identity matrix. For K = σ 2 I, what happens?


Reading exercise: Sec. 4.4, Fundamentals of statistical signal
processing: Detection theory by Steven M. Kay

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 15 / 30


Estimation Theory: Introduction

Key historical developments


1925: Maximum Likelihood Estimation (MLE) by Fisher
1930s, 40s: Minimum Mean Square Estimation (MMSE) by Kolmogorov,
Wiener
Applications
RADAR, SONAR, speech, Image processing so on
RADAR: Range (parameter) estimation problem
SONAR: Frequency estimation
Wireless communication: fading channel estimation
Parameter estimation problem
N−point data set: X [0], X [1], . . . , X [N − 1]
θ̂ = g(X [0], X [1], . . . , X [N − 1]) ⇐ Some function of samples

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 16 / 30


Bayesian Estimation

Estimation based on PDFs


parameter not known, but deterministic
P(X , θ) = P(X |θ)P(θ), e.g., carrier phase estimation in communication
systems
Common assumption: White Gaussian Noise
Enables mathematically tractable model
Closed form estimates can be obtained
Estimator: Rule that assigns a value to θ for each realization of X (e.g., in
P(X , θ))

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 17 / 30


MLE Example

Model: X (t) = s(t, θ) + W (t), 0 ≤ t ≤ T


W(t) denotes white Gaussian noise with PSD N_0/2
θ: unknown parameter to be estimated
MLE is quite popular
Well suited for estimating a real non-random parameter when prior
knowledge is unavailable
Analysis in class
Estimators: unbiased or biased/optimal or suboptimal
Unbiased estimate

    E[θ̂] = θ,   a < θ < b,

where (a, b) denotes the range of possible values of θ


Example (in class)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 18 / 30


Estimator Performance

Estimator is a random variable


Performance is completely described statistically by its PDF
Simulations on computer are useful, but, may give errors for insufficient
number of runs (experiments)
Common aspect: tradeoff between performance & computational
complexity

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 19 / 30


Unbiased Estimators: Example

DC level in WGN
Model: X [n] = A + W [n], n = 0, 1, . . . , N − 1
Goal: To estimate the parameter A
Average value of X [n] ⇐ unbiased estimate
    Â = (1/N) Σ_{n=0}^{N−1} X[n] ≜ g(X[n])

called sample mean estimator (SME)

Determine mean & variance of SME (in class)
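
A Monte-Carlo sketch of these two quantities (A, σ², N illustrative): the sample mean comes out unbiased with variance σ²/N.

    A = 2;  sigma2 = 4;  N = 50;  trials = 1e5;
    X    = A + sqrt(sigma2)*randn(trials, N);   % each row: one realization of X[0..N-1]
    Ahat = mean(X, 2);                          % sample-mean estimate per realization
    mean(Ahat)                                  % ~ A         (unbiased)
    var(Ahat)                                   % ~ sigma2/N  (= 0.08 here)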

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 20 / 30


Unbiased vs. Biased Estimators

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 21 / 30


Minimum Variance Criterion

Optimality criterion based on mean square error (MSE)


MSE

    MSE(θ̂) = E[ (θ̂ − θ)² ] ⇐ variance (for an unbiased estimator)

Minimum variance unbiased estimator (MVU estimate)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 22 / 30


Minimum Variance Criterion

Optimality criterion based on mean square error (MSE)


MSE

    MSE(θ̂) = E[ (θ̂ − θ)² ] ⇐ variance (for an unbiased estimator)

Minimum variance unbiased estimator (MVU estimate)


Unbiased estimate that minimizes variance
Several approaches to find MVU estimator
We discuss only Cramer Rao lower bound (CRLB)-based approach

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 22 / 30


CRLB

Due to Harald Cramér & Radhakrishna Rao


Likelihood function (LF): PDF viewed as a function of unknown parameter
Let P(X ; θ) denote the LF
Log-likelihood function (LLF): ln P(X ; θ)

Regularity condition & CRLB


The PDF P(X; θ) satisfies the "regularity" condition

    E[ ∂ ln P(X; θ) / ∂θ ] = 0   for all θ

Variance of any unbiased estimator θ̂ must satisfy

    var(θ̂) ≥ 1 / ( −E[ ∂² ln P(X; θ) / ∂θ² ] )

Example (in class)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 23 / 30


Fisher Information

Let I(θ) denote the Fisher information for data X

    I(θ) = −E[ ∂² ln P(X; θ) / ∂θ² ]

Q: How is I(θ) related to the CRLB?

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 24 / 30


Fisher Information

Let I(θ) denote the Fisher information for data X

    I(θ) = −E[ ∂² ln P(X; θ) / ∂θ² ]

Q: How is I(θ) related to the CRLB?

    var(θ̂) ≥ 1 / I(θ)

Unbiased estimator which achieves CRLB is said to be (fully) efficient

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 24 / 30


Reading Exercise

Book: Fundamentals of Statistical signal processing: Estimation theory


by Steven M. Kay
Section 3.9

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 25 / 30


Example: Range Estimation

Theory of CRLB is applicable to several statistical signal processing


problems
E.g., Range (R) estimation in RADAR
Let s(t) denote transmitted signal, c is the speed of wave propagation
Assumption: Signal s(t) is bandlimited to B Hz
Model: Received waveform

X (t) = s(t − τ0 ) + W (t), 0 ≤ t ≤ T,


where τ_0 = 2R/c
Analysis (in class)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 26 / 30


Practice Questions

Q. [Hypothesis Testing, MAP & ML Decision Rule]: Consider the binary


Hypothesis testing problem:

H0 : N,
H1 : B exp (jΨ) + N ,

where Ψ ∼ U[0, 2π), N is a complex Gaussian random variable with PDF

    p_N(n) = (1/(2πσ²)) exp( −|n|²/(2σ²) )

Assume that Ψ & N are statistically independent
Answer the following:
    Determine p(y|H_0) & p(y|H_1). Hint: The zeroth-order modified Bessel function is
    defined by

        I_0( B|y|/σ² ) = (1/2π) ∫_0^{2π} exp( (B|y|/σ²) cos ψ ) dψ
Contd..,

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 27 / 30


Practice Questions

Determine MAP decision rule, ML decision rule. Simplify the rules to the
extent possible.
Derive the minimum average probability of error. Express in terms of
Marcum Q−function (Refer to Wiki page for definition)

Q. [Hypothesis Testing, N-P Detector]: Suppose that an observation Y


has the following conditional PMFs defined by

    p(y|H_0) = (λ_0^y / y!) exp(−λ_0),
    p(y|H_1) = (λ_1^y / y!) exp(−λ_1),

where λ_1 > λ_0 > 0, and y ∈ {0, 1, 2, . . .}


Derive N-P detector. Simplify the decision rule
Derive the N-P detector when λ0 = 3, λ1 = 9. Plot ROC curves (in MATLAB)

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 28 / 30


Practice Questions

Q. [Unbiased Estimate]: Let X1 , X2 , and X3 denote uniformly distributed


i.i.d. samples defined in the interval [0, θ], where 0 < θ < ∞.
    Determine Y ≜ max(X_1, X_2, X_3)
    Determine E[max(X_1, X_2, X_3)]
    Let θ̂(Y) = (4/3) Y. Is this an unbiased estimate? Justify your response.

Q. [CRLB]: Suppose that Y ∼ N(0, σ² I_N), where I_N denotes the N × N
identity matrix, σ > 0 is not known. Let θ ≜ σ². Answer the following:
    Write down the expression for P(y; θ)
    Let θ̂(Y) = (1/N) ||Y||². Determine E[θ̂]. Comment on the result.
    Derive the CRLB. Comment on the result.
    Hint: E[Y⁴] = 3σ⁴

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 29 / 30


Textbooks & References

Fundamentals of Statistical signal processing: Detection theory by


Steven M. Kay
Communication systems by Simon Haykin
Fundamentals of Statistical signal processing: Estimation theory by
Steven M. Kay
Wikipedia article on CRLB

B. Sainath (BITS, PILANI) Detection & Estimation December 2, 2018 30 / 30


Adaptive Filters: Fundamentals & Applications

B. Sainath
sainath.bitragunta@pilani.bits-pilani.ac.in

Department of Electrical and Electronics Engineering


Birla Institute of Technology and Science, Pilani

December 2, 2018

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 1 / 19


Outline

1 Introduction & Preliminaries

2 Wiener Filter

3 LMS Algorithm

4 Configurations

5 RLS Algorithm

6 Conclusions

7 References & Further Reading

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 2 / 19


Introduction & Preliminaries

What is an adaptive filter?


time-variant, non-linear, stochastic systems (or)
(ref. wiki page) a system with a linear filter whose transfer function is controlled
by variable parameters & a means to adjust those parameters according to an
optimization algorithm
adaptive filters are digital filters (FIR/IIR)! (mostly)
Adaptive filters
required for some applications (e.g., tracking) since a few parameters of the
desired signal processing operation are i). not known in advance or ii).
time-varying
used for filtering, smoothing, prediction & estimation
Specifying adaptive filter
Signals being processed by filter
Model that defines how filter’s o/p can be computed from i/p signal
Parameters within the model that can be varied in an iterative manner
algorithm that provides method of altering the parameters

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 3 / 19


System Model & Problem Formulation

Let p denote the number of sensors


Let x1 , x2 , . . . , xp denote different signals from p sensors
Let w[n] = [w1 [n], w2 [n], . . . , wp [n]]t denote time-varying parameters
called weights
Let y [n] denote the o/p:
    y[n] = Σ_{j=1}^{p} w_j[n] x_j[n],   n = 0, 1, 2, . . .

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 4 / 19


Adaptive System & Objective

Objective: To determine optimum set of weights


wopt = [w01 , w02 , . . . , w0p ]t
minimize error signal e[n] = difference between desired response d[n] &
system o/p y [n]
Cost function or Objective function: Mean square error (MSE)
    Φ_e ≜ E[ (d[n] − y[n])² ]

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 5 / 19


Optimum Filtering Problem & Solution

Problem
Given signal {x[n], n = 0, 1, . . . , n}
Determine optimum set of weights that minimize Φe

Solution to the problem is known as Wiener Filter


Details in class
Optimum solution: (compact form)

    w_opt = R_x^{−1} r_dx,

Called Wiener-Hopf equations


Rx−1 is the inverse of input autocorrelation matrix, rdx is the crosscorrelation
vector
Wiener Filter: Filter that satisfies Wiener-Hopf equations
Block filter that operates on the complete set of data
Drawback: Computational complexity of inverse ACF
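
A small numerical sketch of the Wiener-Hopf solution with sample statistics (a length-3 FIR; the generating weights are assumed only to check the answer, and the backslash solve avoids forming the inverse explicitly):

    wtrue = [0.9; 0.6; 0.2];                         % assumed system generating d[n]
    x = randn(1e5, 1);
    X = [x, [0; x(1:end-1)], [0; 0; x(1:end-2)]];    % regressors [x[n] x[n-1] x[n-2]]
    d = X*wtrue + 0.1*randn(size(x));                % desired signal plus noise
    Rx   = (X'*X)/length(x);                         % sample autocorrelation matrix
    rdx  = (X'*d)/length(x);                         % sample cross-correlation vector
    wopt = Rx \ rdx                                  % ~ wtrue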

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 6 / 19


Method of Steepest Descent

An iterative solution
avoids computation of ACF inverse ⇒ less computational complexity
Let wk [n] denote current weight at nth iteration
Let wk [n + 1] denote weight of next iteration i.e. updated filter weight
    w_k[n + 1] = w_k[n] + µ ( R_dx[k] − Σ_{j=1}^{p} w_j[n] R_x[j, k] ),   k = 1, . . . , p

Solution in vector form:

    w[n + 1] = w[n] + µ ( R_dx − R_x w[n] )

µ ∈ R+ is called learning rate or step-size

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 7 / 19


Remarks on µ

Choose µ that guarantees stability of the algorithm


Smaller the µ, less update you do ⇒ more time to converge!
Let µc denote critical value of learning rate
µ < µc ⇒ convergent or stable system
µ > µc ⇒ divergent/unstable
µ = µc ⇒ stability bound

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 8 / 19


Least Mean Squares (LMS) Algorithm
Based on use of instantaneous estimates of ACF Rx [j, k] & Rdx [k ]
Details in class
LMS in compact form:

    w[n + 1] = w[n] + µ e[n] x[n]

LMS Summary:
    Initialization: w_k[0] = 0, k = 1, 2, . . . , p ≡ w[0] = 0
    Filtering: For n = 1, 2, . . . compute

        y[n] = Σ_{j=1}^{p} w_j[n] x_j[n]

        e[n] = d[n] − y[n]

        w[n + 1] = w[n] + µ e[n] x[n]
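
A minimal LMS sketch in the system-identification setting (p, µ and the unknown weights are illustrative):

    p  = 3;  mu = 0.01;  Ns = 5000;
    wtrue = [0.9; 0.6; 0.2];                     % unknown system (used only to generate d[n])
    x  = randn(Ns, 1);
    d  = filter(wtrue, 1, x) + 0.01*randn(Ns, 1);
    w  = zeros(p, 1);                            % initialization w[0] = 0
    for n = p:Ns
        xn = x(n:-1:n-p+1);                      % temporal regressor [x[n] ... x[n-p+1]]
        e  = d(n) - w'*xn;                       % error e[n] = d[n] - y[n]
        w  = w + mu*e*xn;                        % LMS weight update
    end
    w                                            % ~ wtrue after convergence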

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 9 / 19


Least Mean Squares (LMS):
Temporal Filter Perspective

Q: What is the o/p of the temporal filter of memory p?

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 10 / 19


Configurations of
Adaptive Filters: Linear Prediction

Figure: Source: http://zone.ni.com/reference/en-XX/help/371988G-01/


lvaftconcepts/aft_prediction/

Linear Prediction
I/p vector: set of past values
Desired signal: current i/p samples
Objective: To estimate the future values of a signal based on past values
of the signal
Application: Linear predictive coding (LPC) (e.g., speech compression)
B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 11 / 19
Configurations of
Adaptive Filters: Noise Cancellation

Figure: Source: Google

Noise cancellation or denoising


Requirement: the noise in the primary i/p & the reference noise need to be
correlated

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 12 / 19


Recursive Least Squares

Reading Exercise: Section 12.2, Adaptive Filters by Ali H. Sayed

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 13 / 19


Recursive Least Squares (RLS)

Reading Exercise: Section 12.2 Fundamentals of Adaptive filtering by Ali


H. Sayed

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 14 / 19


Conclusions

Adaptive filters: Optimum filter coefficients minimize MSE


Applications of adaptive filters include RADAR, SONAR, Audio, Mobile,
Biomedical so on
Many complex models such as neural networks & deep learning are
based on adaptive filters

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 15 / 19


Practice Questions

Q. [Minimum Error Computation]: Refer to the optimum filtering problem


& its solution given by the Wiener-Hopf equations
Prove that the minimum MSE is given by

    Φ_e(w_opt) = σ_d² − r_dx^t w_opt

Q. [Orthogonality]: Prove that:

    E[ e_opt x[n − k] ] = 0,   k = 0, 1, 2, . . . , M − 1

where eopt denotes estimation error obtained using optimum wk

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 16 / 19


Practice Questions

Q. [Filtering of Noisy Signals]: Let y [n] = d[n] + v [n] denote the received
signal. In it, d[n] denotes the desired signal & v [n] denotes noise with
zero mean, variance σv2 . Assume that d[n] & v [n] uncorrelated.
Derive Wiener-Hopf equations
Q. [MSE Function]: Suppose that the input autocorrelation matrix of the given
data is the 2 × 2 identity matrix & the crosscorrelation vector is [2 4.5]^t. Assume
that σ_d² = 9; determine Φ_e, that is, the mean square error function in
terms of the coefficients w_0 & w_1.

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 17 / 19


Practice Questions

Q. [Optimum Filtering MSE (Numerical Example)]: Let input vector


x ∼ N (0, σx2 I), where I denotes identity matrix. Suppose

d[n] = [0.9 0.6 0.2][x[n] x[n − 1] x[n − 2]]t

Determine the crosscorrelation E [d[n]x[n − j]]


Determine the optimum solution
Compute: i). σ_d² ii). Minimum MSE Φ_e,min

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 18 / 19


References & Further Reading

http://www.commsp.ee.ic.ac.uk/˜mandic/ASP_Slides/
Fundamentals of Adaptive filtering by Ali H. Sayed
https://in.mathworks.com/help/dsp/ug/
overview-of-adaptive-filters-and-applications.html
https:
//en.wikipedia.org/wiki/Least_mean_squares_filter

B. Sainath (BITS, PILANI) Adaptive Filters December 2, 2018 19 / 19


Introduction to Compressive Sensing: Fundamentals
& Applications

B. Sainath
sainath.bitragunta@pilani.bits-pilani.ac.in

Department of Electrical and Electronics Engineering


Birla Institute of Technology and Science, Pilani

December 2, 2018

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 1 / 17


Outline

1 Introduction & Preliminaries

2 CS Problem & Objectives

3 Signal Reconstruction

4 Applications

5 References & Further Reading

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 2 / 17


Introduction & Preliminaries

Nyquist/Shannon Sampling Theorem:

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 3 / 17


Introduction & Preliminaries

Nyquist/Shannon Sampling Theorem:


Sampling rate ≥ Nyquist rate fN (2 × signal bandwidth) Why?

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 3 / 17


Introduction & Preliminaries

Nyquist/Shannon Sampling Theorem:


Sampling rate ≥ Nyquist rate fN (2 × signal bandwidth) Why?
Ans. For successful signal recovery without losing information
Several applications use oversampling
Example: digital cameras
Oversampling ⇒ increased burden (w.r.t. cost) on imaging systems,
high-speed ADCs
Compressive sensing or sparse sampling
method to capture & represent compressible signals at a rate << fN
uses nonadaptive linear projections which preserve structure of the signal
signal recovery from the projections using optimization process

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 3 / 17


Notation & Definitions

Notation:
x[n]: real-valued, 1D, DT signal of length N ⇐ viewed as column vector
ψj , j = 1, 2, . . . , N: an orthonormal basis
Ψ: [ψ1 | ψ2 | . . . |ψN ], ψj ’s are column vectors
{sj }: vector of weight coefficients
Representation of x:
    x = Σ_{j=1}^{N} s_j ψ_j,   s_j = < x, ψ_j > = ψ_j^t x

x (in time or space domain) & s (in Ψ domain) are equivalent representations

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 4 / 17


Notation & Definitions

    x = Σ_{j=1}^{N} s_j ψ_j,   s_j = < x, ψ_j > = ψ_j^t x

K −sparse signal
Signal x is K −sparse if it is linear combination of K basis vectors
That is, only K of sj coefficients are non-zero & N − K are zero

Interesting scenario: K << N


Compressibility of x
In the representation of x, a few large coefficients & many small coefficients
are present

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 5 / 17


Transform Coding

Compressible signals are well approximated by K − sparse


representation ⇐ foundation for transform coding (TC)
Example: Digital acquisition systems such as digital cameras where TC
plays an important role
TC Process & Sample-and-Compress (SAC) Framework
Let x denote the acquired signal (via measurements) with N samples
{sj , j = 1, 2, . . . , N} are transform coefficients
We have
s = Ψt x
Encode K values & locations of largest coefficients & discard N − K smallest
coefficients
Inefficiencies of SAC Framework
Even if K small, N is large in general
Computation burden of {sj , j = 1, 2, . . . , N}
Encoding locations of large coefficients =⇒ increased overhead

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 6 / 17


Compressive Sensing (CS) Problem

Figure: a). CS measurement process with random Gaussian measurement matrix Φ &
Discrete cosine transform (DCT) matrix Ψ. b). Measurement process with Θ = ΦΨ.

CS Problem
Directly acquire compressed signal (via measurements)
avoid intermediate stage of acquiring N sample
Let {φj , j = 1, 2, . . . , M} denote collection of vectors
{yj , j = 1, 2, . . . M} denote set of measurements
yj = < x, φj >
We have
y = Φx = ΦΨs = Θs
B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 7 / 17
Sensing Matrix (SM) & UUP

Θ = ΦΨ is called the sensing matrix


Gaussian measurement matrix (GMM) Φ is of order M × N
Entries of the MM can be taken from Gaussian distribution
DCT matrix Ψ is the known matrix
Remarks
Sensing matrix affects signal recovery
Selection of sensing matrix depends on specific application
Uniform Uncertainty Principle (UUP) states that
If every set of Θ columns with cardinality (i.e. number of elements) less than
the sparsity of the signal of interest is approximately orthogonal, then the
sparse signal can be exactly recovered with high probability

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 8 / 17


Compressive Sensing (CS) Objectives

CS Objectives
To design a stable measurement matrix Φ such that key information in any
K − sparse signal is not lost due to the reduction of dimensionality
To develop reconstruction algorithm for signal recovery from only K
measurements

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 9 / 17


RIP & Incoherence

Restricted Isometry property


Matrix Θ = ΦΨ must preserve the lengths of K − sparse vectors

Incoherence
The rows {φj } of Φ cannot sparsely represent the columns {ψi } of Ψ (&
vice versa)

RIP & incoherence can be achieved with high probability by selecting Φ


as random matrix
φ_{j,i} ∼ N(0, 1/N)
Measurements y are M different randomly weighted linear combinations
of elements of x

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 10 / 17


Useful Properties

Properties of Φ
An M × N i.i.d. Gaussian matrix Θ = ΦI = Φ satisfies RIP with high
probability if M ≥ cK log(N/K) << N, where c is a small constant
Matrix Φ is universal
Θ = ΦΨ will be i.i.d. Gaussian =⇒ satisfies RIP with high probability
regardless of orthonormal basis Ψ

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 11 / 17


Designing Signal Reconstruction
Algorithm (SRA)

SRA must take


M measurements in y, random measurement matrix Φ, orthonormal basis Ψ
& reconstruct signal x of length N (or) equivalently its sparse coefficients
vector s

Problem & Solution


Optimization problem
2
ŝ = arg min (||s||2 ) ⇐ L2 norm
s.t. Θs0 = y

Solution −1
ŝ = Θt ΘΘt y
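
A dimension-level sketch of this minimum-energy (L2) solution (sizes illustrative; note that the minimum-energy solution is generally not sparse):

    N = 256;  M = 64;  K = 5;
    Theta = randn(M, N)/sqrt(N);                 % random sensing matrix, entries ~ N(0, 1/N)
    s = zeros(N, 1);  s(randperm(N, K)) = randn(K, 1);   % K-sparse coefficient vector
    y = Theta*s;                                 % M compressive measurements
    shat = Theta' * ((Theta*Theta') \ y);        % s_hat = Theta^t (Theta Theta^t)^{-1} y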

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 12 / 17


Practical Example

Figure: Single-pixel CS camera.

DigCam acquires M random linear measurements


Digital micromirror device (DMD): array of N tiny mirrors
Desired image x via first lens ⇒ reflected back by DMD ⇒ collected by
Photo diode via second lens
Random number generator (RNG) sets mirror orientations in a
pseudorandom 1/0 patterns to create φj
Voltage at photo diode yj =< x, φj >
Repeat the process M times to obtain all entries of y
B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 13 / 17
Applications of CS

CS in Cameras to ⇓ power consumption, computational complexity,


storage space
Medical imaging (e.g. MRI) & Seismic imaging
CS in RADAR, SONAR
CS in communications & networks
Sparse channel estimation
Spectrum sensing in cognitive radio (CR)
Ultra-wideband systems
Wireless sensor networks

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 14 / 17


Practice Questions

Q. Refer to RIP (property). Come up with at least one example of Θ and


v so that inequality becomes equality
Q. Suppose that a periodic signal contains only 5 sinusoids (K = 10
frequencies in DFT domain). Let the fundamental period T0 = 1 second
and bandwidth 0.499 KHz. Discuss traditional sampling (at Nyquist rate)
versus compressive sampling for this K − sparse signal.
Q. Let ψ & φ together constitute a pair of orthonormal bases. The mutual
coherence µ(ψ, φ) is defined by

    µ(ψ, φ) = max_{1≤i,j≤n} | < ψ_i, φ_j > |

Prove that µ(ψ, φ) ≤ 1


Let Ψ = I, that is, an Identity matrix. Let Φ = F denote the DFT matrix.
Determine µ(Ψ, Φ) & comment on the result

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 15 / 17


Practice Questions

Q. CS in CR Formulate a binary hypothesis problem in compressive


sensing (CS) scenario for the following model:

    y[n] = w[n],          under H_0,
           s[n] + w[n],   under H_1,

y [n] denotes received signal at the cognitive radio (CR) at the nth sampling
instant
s[n] denotes the primary signal
w[n] is the additive white Gaussian noise
Key objective: the CR user (secondary) has to decide if primary user’s
signal is present (H1 ) or not (H0 )

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 16 / 17


References & Further Reading

Compressive sensing lecture notes by Richard G. Baraniuk


IEEE Survey paper: ”Compressive sensing: from theory to applications, a
survey”

B. Sainath (BITS, PILANI) Compressive Sensing December 2, 2018 17 / 17
