
Linear Predictor

Linear prediction is a method of signal-source modelling that is dominant in speech signal processing and has wide application in other areas. Starting from the relationship between linear prediction and the general difference equation for linear systems, this unit shows how the linear prediction equations are formulated and solved.

Nature of linear prediction


The object of linear prediction is to form a model of a linear time-invariant (LTI) digital system through observation of its input and output sequences; that is, to estimate a set of coefficients that describe the behaviour of an LTI system when its design is not available to us and we cannot choose what input to present.

LP analysis exploits the redundancy in the speech signal. The basis of linear prediction analysis is the prediction of the current sample as a linear combination of the past p samples, where p is the order of prediction. The predicted sample s^(n) can be represented as

s^(n) = a_1 s(n-1) + a_2 s(n-2) + ... + a_p s(n-p)

where the a_k are the linear prediction coefficients and s(n) is the windowed speech sequence, obtained by multiplying a short-time speech frame x(n) by a Hamming or similar window, which is given by

s(n) = x(n) w(n)
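As a concrete illustration, the windowing step can be sketched in NumPy (the frame length of 200 samples, about 25 ms at 8 kHz, is an assumption, and the stand-in data below would be real speech in practice):

```python
import numpy as np

# Illustrative windowing of one short-time frame; the frame content here is
# synthetic stand-in data, not real speech.
rng = np.random.default_rng(1)
x_frame = rng.standard_normal(200)   # stand-in for a short-time speech frame x(n)
w = np.hamming(len(x_frame))         # Hamming window w(n)
s = x_frame * w                      # windowed sequence s(n) = x(n) w(n)
# The window tapers the frame ends toward small values, reducing edge effects
```
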

where w(n) is the windowing sequence. The prediction error e(n) is the difference between the actual sample s(n) and the predicted sample s^(n), which is given by

e(n) = s(n) - s^(n) = s(n) - a_1 s(n-1) - a_2 s(n-2) - ... - a_p s(n-p)

In the frequency (z) domain, this equation can be represented as

E(z) = S(z) A(z)

i.e.

A(z) = 1 - a_1 z^(-1) - a_2 z^(-2) - ... - a_p z^(-p)

So the LP residual can be obtained by filtering the speech signal with A(z) as indicated. Similarly, it can be shown that the LP spectrum is the all-pole filter

H(z) = 1/A(z)

The prediction error, e(n), can be viewed as the output of the prediction-error filter A(z), where H(z) is the optimal linear predictor, x(n) is the input signal, and x^(n) is the predicted signal.

As A(z) is the reciprocal of H(z), the LP residual is obtained by inverse filtering of the speech. In a similar manner, speech can be reconstructed from the LP residual by filtering it with H(z).
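The analysis/inverse-filtering/reconstruction chain just described can be sketched in NumPy (an illustrative translation; the document's own examples use MATLAB's lpc and filter, and the synthetic AR coefficients and order p = 2 below are assumptions):

```python
import numpy as np

# Synthesize a test signal from an all-pole (AR) model:
# s(n) = w(n) + 0.5 s(n-1) - 0.3 s(n-2)   (coefficients chosen for illustration)
rng = np.random.default_rng(0)
w = rng.standard_normal(5000)
s = np.zeros_like(w)
for n in range(len(w)):
    s[n] = w[n] + 0.5 * (s[n - 1] if n >= 1 else 0.0) - 0.3 * (s[n - 2] if n >= 2 else 0.0)

# Autocorrelation method: estimate a_k from the Toeplitz normal equations R a = r
p = 2
R = np.array([np.dot(s[: len(s) - k], s[k:]) for k in range(p + 1)])
Rm = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(Rm, R[1 : p + 1])      # predictor coefficients a_1..a_p

# LP residual by inverse filtering with A(z): e(n) = s(n) - sum_k a_k s(n-k)
e = s.copy()
for k in range(1, p + 1):
    e[k:] -= a[k - 1] * s[:-k]

# Reconstruction with H(z) = 1/A(z): feeding e through the all-pole filter recovers s
s_rec = np.zeros_like(e)
for n in range(len(e)):
    s_rec[n] = e[n] + sum(a[k - 1] * s_rec[n - k] for k in range(1, p + 1) if n - k >= 0)
```

The residual has lower variance than the speech-like signal (the redundancy has been removed), and the reconstruction is exact because the same coefficients are used in analysis and synthesis.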
Matlab Code:

% Generate a test signal by shaping white noise with an all-pole filter
noise = randn(50000,1);                 % (not randn(50000:1), which is empty)
x = filter(1,[1 1/2 1/3 1/4],noise);
x = x(45904:50000);                     % keep the last 4097 samples

a = lpc(x,3);                           % 3rd-order LP coefficients
est_x = filter([0 -a(2:end)],1,x);      % one-step-ahead prediction
e = x - est_x;                          % prediction error (LP residual)
[acs,lags] = xcorr(e,'coeff');          % normalized autocorrelation of the error

subplot(1,2,1)
plot(1:97,x(4001:4097),1:97,est_x(4001:4097),'--'),grid
title 'Original Signal vs. LPC Estimate'
xlabel 'Sample number', ylabel 'Amplitude'
legend('Original signal','LPC estimate')

subplot(1,2,2)
plot(lags,acs), grid
title 'Autocorrelation of the Prediction Error'
xlabel 'Lags', ylabel 'Normalized value'

Output: a plot of the original signal against its LPC estimate, and the normalized autocorrelation of the prediction error.
Description
LPC function:
[a,g] = lpc(x,p)

lpc determines the coefficients of a forward linear predictor by minimizing the prediction error in the least squares sense. It has applications in filter design and speech coding.

[a,g] = lpc(x,p) finds the coefficients of a pth-order linear predictor (an FIR filter) that predicts the current value of the real-valued time series x based on past samples:

x^(n) = -a(2)x(n-1) - a(3)x(n-2) - ... - a(p+1)x(n-p)

p is the order of the prediction filter polynomial, a = [1 a(2) ... a(p+1)]. If p is unspecified, lpc uses a default of p = length(x)-1. If x is a matrix containing a separate signal in each column, lpc returns a model estimate for each column in the rows of matrix a, together with a column vector of prediction error variances g. p must be less than or equal to the length of x.

Algorithm:

lpc uses the autocorrelation method of autoregressive (AR) modeling to find the filter coefficients. The generated filter might not model the process exactly, even if the data sequence is truly an AR process of the correct order. This is because the autocorrelation method implicitly windows the data, that is, it assumes that signal samples beyond the length of x are 0.

lpc computes the least squares solution to

Xa = b

where X is a Toeplitz matrix formed from the (implicitly zero-padded) data in x, a is the coefficient vector, and b has a 1 as its first element and zeros elsewhere; the solution is scaled so that a(1) = 1.
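The Toeplitz normal equations of the autocorrelation method are classically solved with the Levinson-Durbin recursion in O(p^2) operations. The sketch below is a generic illustration of that recursion in NumPy, not MATLAB's actual implementation:

```python
import numpy as np

def levinson_durbin(r, p):
    """Given autocorrelations r[0..p], return a = [1 a(2) ... a(p+1)] in
    MATLAB's sign convention (prediction uses -a(2:end)) and the error power."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, p + 1):
        # Reflection coefficient from the current prediction error
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update lower-order coefficients
        a[i] = k
        err *= (1.0 - k * k)                   # error power shrinks each order
    return a, err

# AR(1)-like autocorrelation sequence r[k] = 0.5**k
a, g = levinson_durbin(np.array([1.0, 0.5, 0.25, 0.125]), 3)
# a -> [1, -0.5, 0, 0]: the predictor is x^(n) = 0.5 x(n-1), with error power 0.75
```

The returned vector matches the a = [1 a(2) ... a(p+1)] convention described above, and err corresponds to the prediction error variance g.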
Filter function:

y = filter(b,a,x)

creates filtered data y by processing the data in vector x with the filter described by vectors a and b.

The filter function is a general tapped delay-line filter, described by the difference equation

a(1)y(n) = b(1)x(n) + b(2)x(n-1) + ... + b(Nb)x(n-Nb+1) - a(2)y(n-1) - ... - a(Na)y(n-Na+1)

Here, n is the index of the current sample, Na is the order of the polynomial described by vector a, and Nb is the order of the polynomial described by vector b. The output y(n) is a linear combination of the current and previous inputs, x(n), x(n-1), ..., and the previous outputs, y(n-1), y(n-2), ... .

For example, if

a = 1;
b = [1 1/2 1/3 1/4];

then y(n) = x(n) + (1/2)x(n-1) + (1/3)x(n-2) + (1/4)x(n-3).
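This difference equation can be transcribed directly into code (an illustrative NumPy sketch, not MATLAB's implementation):

```python
import numpy as np

# Direct transcription of the tapped delay-line difference equation
# a(1)y(n) = sum_k b(k)x(n-k+1) - sum_k a(k)y(n-k+1), with zero initial conditions.
def df_filter(b, a, x):
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# The FIR example above: y(n) = x(n) + (1/2)x(n-1) + (1/3)x(n-2) + (1/4)x(n-3)
impulse = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = df_filter([1, 1/2, 1/3, 1/4], [1], impulse)
# The impulse response of this FIR filter is just its b coefficients
```

With a nontrivial a vector the feedback terms kick in; for instance df_filter([1], [1, -0.5], impulse) produces the geometric sequence 1, 0.5, 0.25, ... .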
Xcorr function:

Cross-correlation is a standard method of estimating the degree to which two series are correlated. The cross-correlation of two complex functions f(t) and g(t) of a real variable t, denoted f ⋆ g, is defined by

f ⋆ g = conj(f(-t)) * g(t)    (1)

where * denotes convolution and conj(f) is the complex conjugate of f. Since convolution is defined by

(f * g)(t) = ∫ f(tau) g(t - tau) dtau    (2)

it follows that

(f ⋆ g)(t) = ∫ conj(f(-tau)) g(t - tau) dtau    (3)

Letting tau' = -tau, dtau' = -dtau, so (3) is equivalent to

(f ⋆ g)(t) = ∫ conj(f(tau')) g(t + tau') dtau'    (4)

           = conj(f(-t)) * g(t)    (5)

The cross-correlation satisfies the identity

(g ⋆ f)(t) = conj((f ⋆ g)(-t))    (6)

If f or g is real and even, then

f ⋆ g = f * g    (7)

where again * denotes convolution.
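For discrete sequences, the normalized cross-correlation used by xcorr(e,'coeff') can be sketched in NumPy as follows (an illustrative equivalent for real, equal-length sequences; not MATLAB's implementation):

```python
import numpy as np

def xcorr_coeff(u, v):
    """Cross-correlation at all lags, scaled so that a perfect match gives 1."""
    c = np.correlate(u, v, mode='full')              # lags -(N-1) .. (N-1)
    c = c / np.sqrt(np.dot(u, u) * np.dot(v, v))     # 'coeff'-style normalization
    lags = np.arange(-(len(v) - 1), len(u))
    return c, lags

e = np.array([1.0, -1.0, 1.0, -1.0])
c, lags = xcorr_coeff(e, e)
# For an autocorrelation, the zero-lag value is exactly 1 after normalization,
# and the sequence is symmetric about zero lag
```
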


Example: Linear estimation for a discrete signal

x = [1 2 3 4 5 6 7 8]
x = filter(1,[1 1/2 1/3 1/4],x)     % shape the test sequence with an all-pole filter

a = lpc(x,7)                        % 7th-order LP coefficients
est_x = filter([0 -a(2:end)],1,x)   % one-step-ahead prediction
e = x - est_x                       % prediction error (LP residual)
[acs,lags] = xcorr(e,'coeff')       % normalized autocorrelation of the error

subplot(1,2,1)
plot(1:7,x(1:7),1:7,est_x(1:7),'--'),grid
title 'Original Signal vs. LPC Estimate'
xlabel 'Sample number', ylabel 'Amplitude'
legend('Original signal','LPC estimate')

subplot(1,2,2)
plot(lags,acs), grid
title 'Autocorrelation of the Prediction Error'
xlabel 'Lags', ylabel 'Normalized value'

Output: a plot of the original discrete signal against its LPC estimate, and the normalized autocorrelation of the prediction error.
Application: Differential pulse-code modulation (DPCM)

Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline of pulse-code modulation (PCM) but adds some functionality based on prediction of the samples of the signal. The input can be an analog signal or a digital signal.

If the input is a continuous-time analog signal, it needs to be sampled first, so that a discrete-time signal is the input to the DPCM encoder.

Let x(t) be the signal to be sampled and x(nTs) be its samples. In this scheme the input to the quantizer is the signal

e(nTs) = x(nTs) - x^(nTs)

where x^(nTs) is the prediction for the unquantized sample x(nTs). This predicted value is produced by a predictor whose input consists of quantized versions of the input signal x(nTs). The signal e(nTs) is called the prediction error. By encoding the quantizer output, we obtain a modified version of PCM called differential pulse-code modulation (DPCM).
The receiver consists of a decoder to reconstruct the quantized error signal. The quantized version of the original input is reconstructed from the decoder output using the same predictor as used in the transmitter. In the absence of noise, the encoded signal at the receiver input is identical to the encoded signal at the transmitter output. Correspondingly, the receiver output is equal to u(nTs), which differs from the input x(nTs) only by the quantizing error q(nTs).
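The DPCM loop described above can be sketched in NumPy. This is an illustrative sketch only: the first-order predictor x^(n) = u(n-1) and the uniform quantizer step delta are assumptions for demonstration, not parameters of any standard.

```python
import numpy as np

def dpcm_encode(x, delta=0.1):
    """Encode x as quantized prediction errors, predicting from the quantized
    reconstruction u (as the matching decoder will)."""
    encoded = []
    u_prev = 0.0                       # quantized reconstruction u(nTs)
    for sample in x:
        e = sample - u_prev            # prediction error e(nTs) = x(nTs) - x^(nTs)
        q = delta * round(e / delta)   # uniform quantization of the error
        encoded.append(q)
        u_prev = u_prev + q            # u(nTs) = x^(nTs) + quantized error
    return np.array(encoded)

def dpcm_decode(encoded):
    """Rebuild u(nTs) with the same predictor as the transmitter."""
    u = np.zeros(len(encoded))
    u_prev = 0.0
    for n, q in enumerate(encoded):
        u_prev = u_prev + q
        u[n] = u_prev
    return u

x = np.sin(2 * np.pi * np.arange(64) / 16)   # stand-in input samples x(nTs)
u = dpcm_decode(dpcm_encode(x))
# Because the predictor runs on the quantized reconstruction, the error never
# accumulates: |u(nTs) - x(nTs)| stays within half a quantizer step
```

This mirrors the statement above: the receiver output u(nTs) differs from the input only by the quantizing error.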
