
INTRODUCTION

Introduction
The increasing demand for digital cellular telephony and other new services, including multimedia communications, has prompted numerous studies not only on implementing half-rate speech coding algorithms on the DSP processors available on the market, but also on enhancing speech quality degraded by both ambient acoustical noise and echo in the environment. The speech quality of the emerging all-digital cellular phones will depend greatly upon the speech quality available at the near-end transmitter. In order to suppress the disturbing environmental noise in a hands-free speech transmission system, several noise reduction (NR) algorithms have been introduced. Among the disturbing signals that we do not want to transmit is the reverberated speech signal of the person we talk to, the far-end speaker. As the incoming far-end speaker's speech signal is known to our terminal equipment, we can exploit it to cancel the reverberated signal; this is what the acoustic echo canceller (AEC) does. Acoustic echo is a major problem in telecommunications, and in the GSM network the echo delay is especially annoying for speakers. There is an evident need for an acoustic echo canceller to overcome this problem, particularly when poor-quality mobile phones are used and in the case of hands-free communication.

1.1 Background
During transmission and reception, signals are often corrupted by noise and echo, which can cause severe problems for downstream processing and user perception. It is well known that cancelling the noise component present in a received signal with adaptive signal processing techniques requires a reference signal that is highly correlated with the noise. Since the noise is added in the channel and is totally random, there is no means of creating such a correlated reference at the receiving end.

Telecommunications is about transferring information from one location to another. This includes many forms of information: telephone conversations, television signals, computer files, and other types of data. To transfer the information, you need a channel between the two locations. This may be a wire pair, radio signal, optical fiber, etc. Telecommunications companies receive payment for transferring their customers' information, while they must pay to establish and maintain the channel. The financial bottom line is simple: the more information they can pass through a single channel, the more money they make. DSP has revolutionized the telecommunications industry in many areas; here we focus on the basics of echo cancellation and the acoustic echo canceller. First, an overview of echo: echo is a phenomenon in which a delayed and distorted version of an original sound or electrical signal is reflected back to the source. Acoustic echo cancellation is important for audio teleconferencing when simultaneous communication (or full-duplex transmission) of speech is necessary. In acoustic echo cancellation, a measured microphone signal d(n) contains two components: the near-end speech signal v(n) and the far-end speech signal echoed through the room. The goal is to remove the far-end echoed component from the microphone signal so that only the near-end speech signal is transmitted. Echo cancellation is a recurring problem in telecommunication and wireless networks, and for years equipment designers have turned to digital signal processing (DSP) and standalone board solutions as a means of curbing echo in carrier-class equipment designs. For successful full-duplex communication in a hands-free speech transmission system, dealing with echo is a must.

The task of the acoustic echo canceller is to adapt its coefficients during periods when only the far-end signal is present; when the near-end speaker speaks, adaptation must be suspended, since the near-end speech would otherwise disturb the coefficient estimate. In order to achieve that, near-end speaker activity detection is needed. Messerschmitt et al. developed a coefficient adaptation scheme based on the LMS algorithm.

1.2. Scope of the Project


Echo cancellation is a recurring problem in telecommunication and wireless networks. The purpose of this project is to look into the problem of acoustic echoes that appear in hands-free communication systems and to develop a simulation model for the removal of the echo effect, which is prevalent in hands-free communication equipment, using adaptive filters based on the LMS algorithm.

SMART ANTENNAS

A smart antenna is a phased or adaptive array that adjusts to the environment. That is, for the adaptive array, the beam pattern changes as the desired user and the interference move, and for the phased array, the beam is steered or different beams are selected as the desired user moves. A phased-array or multibeam antenna consists of either a number of fixed beams with one beam switched on towards the desired signal, or a single beam (formed by phase adjustment only) that is steered towards the desired signal. An adaptive antenna array is an array of multiple antenna elements whose received signals are weighted and combined to maximize the signal-to-interference-plus-noise ratio (SINR). This means that the main beam is put in the direction of the desired signal while nulls are placed in the directions of the interference.

Figure 1: Smart antenna systems definition.

Adaptive antenna array systems represent the most advanced smart antenna approach to date. Using a variety of new signal-processing algorithms, the adaptive system takes advantage of its ability to effectively locate and track various types of signals to dynamically minimize interference and maximize intended signal reception.

Figure 1.7: Switched beam system coverage patterns (a) and Adaptive array coverage (b).

Smart antennas are arrays of antenna elements that change their antenna pattern dynamically to adjust to the noise and interference in the channel and to mitigate multipath fading effects on the signal of interest. The difference between a smart (adaptive) antenna and a dumb (fixed) antenna is the property of having an adaptive or a fixed lobe pattern, respectively. The secret to the smart antenna's ability to transmit and receive signals in an adaptive, spatially sensitive manner is its digital signal processing capability. An antenna element is not smart by itself; it is the combination of antenna elements into an array and the signal processing software used that makes smart antennas effective. This shows that smart antennas are more than just the antenna, but rather a complete transceiver concept. Smart antenna systems are classified, on the basis of their transmit strategy, into the following three types (levels of intelligence): switched beam antennas, dynamically phased arrays, and adaptive antenna arrays.

Adaptive Antenna Arrays

Adaptive antenna arrays can be considered the smartest of the lot. An adaptive antenna array is a set of antenna elements that can adapt their antenna pattern to changes in their environment. Each antenna of the array is associated with a weight that is adaptively updated so that its gain in a particular look direction is maximized, while that in a direction corresponding to interfering signals is minimized. In other words, they change their radiation or reception pattern dynamically to adjust to variations in channel noise and interference, in order to improve the SNR (signal-to-noise ratio) of a desired signal. This procedure is also known as adaptive beamforming or digital beamforming. Conventional mobile systems usually employ some sort of antenna diversity (e.g. space, polarization or angle diversity). Adaptive antennas can be regarded as an extended diversity scheme, having more than two diversity branches. In this context, phased arrays will have a greater gain potential than switched-lobe antennas because all elements can be used for diversity combining.
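To make the phase-adjustment idea concrete, the sketch below computes conventional (delay-and-sum) steering weights for a uniform linear array in MATLAB; the element count, spacing, and look direction are illustrative assumptions, not values from this report.

```matlab
% Conventional (phased-array) beamforming for a uniform linear array:
% phase each element so signals arriving from angle theta0 add coherently.
M      = 8;                     % number of array elements (assumed)
d      = 0.5;                   % element spacing in wavelengths (assumed)
theta0 = 30;                    % desired look direction in degrees (assumed)
m      = (0:M-1).';             % element index
a      = @(th) exp(1j*2*pi*d*m*sind(th));    % array response (steering) vector
w      = a(theta0)/M;           % delay-and-sum weights steered to theta0

theta = -90:0.5:90;                           % scan angles
G     = arrayfun(@(th) abs(w'*a(th)), theta); % beam pattern magnitude
plot(theta, 20*log10(G)); xlabel('angle (deg)'); ylabel('gain (dB)');
```

The pattern peaks (0 dB) at theta0; an adaptive array would instead update w from the received data to place nulls on interferers as well.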

Relative Benefits and Tradeoffs of Switched Beam and Adaptive Array Systems

In the previous section, the three different definitions of smart antenna systems most commonly found in the literature are listed. However, the second definition, in which smart antenna systems are divided into switched beam and adaptive array antenna systems, will be taken as a reference throughout this report. In this definition, the adaptive array antennas are subdivided into two classes: the first is the phased array antennas, where only the phase of the currents is changed by the weights, and the second is adaptive array antennas in the strict sense, where both the amplitude and the phase of the currents are changed to produce a desired beam.

Figure 1.8: Different smart antenna concepts.

ADAPTIVE EQUALIZERS


3.1 What is a Filter?


The term filter is commonly used to refer to any device or system that takes a mixture of particles/elements (frequency components) from its input and processes them according to some specific rules to generate a corresponding set of particles/elements at its output. OR A filter can be defined as a piece of software or hardware that takes an input signal and processes it so as to extract and output certain desired elements of that signal. Filters can be linear or nonlinear, but here we consider only linear filters. However, we do not usually think of something as a filter unless it can modify the sound in some way. For example, speaker wire is not considered a filter, but the speaker is (unfortunately). The different vowel sounds in speech are produced primarily by changing the shape of the mouth cavity, which changes the resonances and hence the filtering characteristics of the vocal tract. The tone control circuit in an ordinary car radio is a filter, as are the bass, midrange, and treble boosts in a stereo preamplifier. Graphic equalizers, reverberators, echo devices, phase shifters, and speaker crossover networks are further examples of useful filters in audio. There are also examples of undesirable filtering, such as the uneven reinforcement of certain frequencies in a room with bad acoustics. A well-known signal processing wizard is said to have remarked, "when you think about it, everything is a filter."

A digital filter is just a filter that operates on digital signals, such as sound represented inside a computer. It is a computation which takes one sequence of numbers (the input signal) and produces a new sequence of numbers (the filtered output signal). The filters mentioned in the previous paragraph are not digital only because they operate on signals that are not digital. It is important to realize that a digital filter can do anything that a real-world filter can do. That is, all the filters alluded to above can be simulated to an arbitrary degree of precision digitally. Thus, a digital filter is only a formula for going from one digital signal to another. It may exist as an equation on paper, as a small loop in a computer subroutine, or as a handful of integrated circuit chips properly interconnected. Linear time-invariant systems that change the shape of the spectrum are often referred to as frequency shaping filters. Systems that are designed to pass some frequencies essentially undistorted and significantly attenuate or eliminate others are referred to as frequency selective filters.

3.2 FIR Filter


A finite impulse response (FIR) filter operates on discrete-time signals and can be implemented with a digital signal processor. The convolution sum is central to the design of FIR filters, since it can be realized with a finite number of terms:

y(n) = \sum_{k=0}^{N-1} h(k)\, x(n-k)

Different techniques are available for the design of FIR filters, such as a commonly used technique that utilizes the Fourier series. A very useful feature of an FIR filter is that it can guarantee linear phase. The linear phase feature can be very useful in applications such as speech analysis, where phase distortion can be very critical. With linear phase, all input sinusoidal components are delayed by the same amount; otherwise, phase distortion occurs.

Figure X.1 FIR Filter Showing Delay
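As a minimal MATLAB sketch of the convolution sum above (the signal and coefficients are invented for illustration):

```matlab
% FIR filtering y(n) = sum_k h(k) x(n-k); a 5-tap moving average is used
% purely as an example impulse response.
h = ones(1,5)/5;          % example FIR coefficients (symmetric, linear phase)
x = randn(1,100);         % example input sequence
y = filter(h, 1, x);      % direct-form FIR: denominator = 1 (no feedback)
```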

3.3 IIR Filter


The infinite impulse response (IIR) filter makes use of the vast body of knowledge already acquired with analog filters. The design procedure involves the conversion of an analog filter to an equivalent discrete filter using the bilinear transformation (BLT) technique. As such, the BLT procedure converts the transfer function of an analog filter in the s-domain into an equivalent discrete-time transfer function in the z-domain.

y(n) = \sum_{k=0}^{N} a_k\, x(n-k) - \sum_{j=1}^{M} b_j\, y(n-j)

= a_0 x(n) + a_1 x(n-1) + a_2 x(n-2) + \dots + a_N x(n-N) - b_1 y(n-1) - b_2 y(n-2) - \dots - b_M y(n-M)

This recursive type of equation represents an infinite impulse response (IIR) filter. The output depends on the inputs as well as past outputs (with feedback).
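A hedged MATLAB sketch of the BLT design route described above; the Butterworth prototype, order, cutoff, and sampling rate are all assumptions made for illustration, not values from this report.

```matlab
% Analog prototype -> discrete IIR filter via the bilinear transformation.
fs = 8000;                                % sampling rate in Hz (assumed)
[bs, as] = butter(4, 2*pi*1000, 's');     % 4th-order analog LPF, 1 kHz cutoff
[bz, az] = bilinear(bs, as, fs);          % map s-domain to z-domain (BLT)
y = filter(bz, az, randn(1,1000));        % run the recursive difference equation
```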

3.4 Frequency Selective Filters


Frequency selective filters are a class of filters specifically intended to accurately or approximately select some band of frequencies and reject others. Frequency selective filters arise in a variety of situations. For example, if the noise in an audio recording lies in a higher frequency band than the music or voice on the recording, it can be removed by frequency selective filtering. Another important application of frequency selective filters is in communication systems. For example, the basis of amplitude modulation systems is the transmission of information from many different sources simultaneously by putting the information from each channel into a separate frequency band and extracting the individual channels or bands at the receiver using frequency selective filters. Frequency selective filters for separating the individual channels, and frequency shaping filters for adjusting the quality of the tone, form a major part of any home radio and television receiver. While frequency selectivity is not the only issue of concern in applications, its broad importance has led to a widely accepted set of terms describing the characteristics of frequency selective filters. In particular, while the nature of the frequencies to be passed by a frequency selective filter varies considerably from application to application, several basic filters are widely used and have been given names indicative of their function.

Types of frequency selective filters:


There are three basic types:

1. Low Pass Filter: a filter that passes low frequencies, i.e. frequencies around ω = 0, and attenuates or rejects higher frequencies.
2. High Pass Filter: a filter that passes high frequencies and attenuates or rejects low ones.
3. Band Pass Filter: a filter that passes a band of frequencies and attenuates frequencies both higher and lower than those in the band that is passed.

In each case the cutoff frequencies are the frequencies defining the boundaries between frequencies that are passed and frequencies that are rejected, i.e. between the pass band and the stop band. Specifically, an ideal frequency selective filter exactly passes complex exponentials at one set of frequencies without any distortion and completely rejects signals at all other frequencies. For example, an ideal low pass filter with cutoff frequency ω_c is an LTI system that passes complex exponentials e^{jωt} for frequencies in the range -ω_c < ω < ω_c and rejects signals at all other frequencies. The figures illustrate the frequency response of an ideal continuous-time high pass filter with cutoff frequency ω_c, and an ideal continuous-time band pass filter with lower cutoff frequency ω_{c1} and upper cutoff frequency ω_{c2}. Note that each of these filters is symmetric about ω = 0, and thus there appear to be two pass bands for the high pass and band pass filters; this is a consequence of our having adopted the use of complex exponential signals. Note also that the characteristics of continuous-time and discrete-time ideal filters differ by virtue of the fact that, for discrete-time filters, the frequency response must be periodic with period 2π, with low frequencies near even multiples of π and high frequencies near odd multiples of π.

Fig 2.2 (a) Low pass filter

(b) High pass

(c) Band pass
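The three ideal responses can be approximated with standard MATLAB window-method designs; the orders and normalized cutoffs below are arbitrary illustrative choices.

```matlab
% FIR approximations of the three basic frequency selective filters.
% Cutoffs are normalized to the Nyquist frequency (1.0).
hl = fir1(64, 0.25);                  % low pass,  cutoff 0.25
hh = fir1(64, 0.25, 'high');          % high pass, cutoff 0.25 (even order)
hb = fir1(64, [0.2 0.4], 'bandpass'); % band pass, band 0.2-0.4
freqz(hb, 1, 512);                    % inspect magnitude and phase response
```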

3.5 ADAPTIVE FILTERS


As we begin our study of adaptive filters, it may be important to understand the meaning of the terms adaptive and filter in a very general sense. The adjective adaptive can be understood by considering a system which is trying to adjust itself so as to respond to some phenomenon that is taking place in its surroundings. In other words, the system tries to adjust its parameters with the aim of meeting some well-defined goal or target which depends upon the state of the system as well as its surroundings; this is what adaptation means. Moreover, there is a need for a set of steps or a certain procedure by which this process of adaptation is carried out. Finally, the system that carries out or undergoes the process of adaptation is called by the more technical name filter. Depending upon the time required to meet the final target of the adaptation process, which we will call the convergence time, we can have a variety of adaptation algorithms and filter structures.

Adaptive filter
So, after the above explanation, we can define an adaptive filter as follows: the purpose of the general adaptive system is to filter the input signal so that it resembles (in some sense) the desired signal input. OR An adaptive FIR or IIR filter designs itself based on the characteristics of the input signal to the filter and a signal which represents the desired behavior of the filter on its input. Adaptive filters are a class of filters that iteratively alter their parameters. The filter minimizes the error between some desired signal and some reference signal. A result of the process is a set of N tap values which define the nature of the input signal being filtered. Now, let's say we want to determine the nature of some channel, say a room in which a signal is being created and received by a microphone. To do this, we would have to supply a known signal and compare it to the same signal as received by the microphone. The LMS filter should provide a set of taps that defines the inverse of the room. After filtering out noise alone (which is another task altogether), the tap weights could then be implemented as the corresponding filter and applied to the received signal, thus producing the original signal in its original form; the taps are already inverted. When we feed an adaptive filter a training sequence, we allow it to adapt so that the filtered signal is as close as possible to the reference signal (i.e., the training signal). In order to do that, the filter should define the inverse of whatever the channel is, because

Y(f) = X(f)\, H(f)\, F(f)
where H(f) is the channel, F(f) is the adaptive filter, X(f) is the input to the channel, and Y(f) is the output from the adaptive filter. So in order to make

Y(f) = X(f)

you must have

H(f)\, F(f) = 1

and therefore H(f) = 1/F(f), i.e. the adaptive filter converges to the inverse of the channel.
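The inverse relation above can be checked numerically. The toy channel coefficients below are invented; the channel is chosen minimum-phase so that its exact inverse is a stable IIR filter.

```matlab
% If the channel is H(z) = 1 + 0.5 z^-1 + 0.25 z^-2 (minimum phase), the
% ideal equalizer F(z) = 1/H(z) is realized by swapping filter arguments.
h     = [1 0.5 0.25];            % toy channel (assumed)
x     = randn(1,200);            % test input
y_ch  = filter(h, 1, x);         % pass through the channel
x_hat = filter(1, h, y_ch);      % inverse filter recovers the input
max(abs(x_hat - x))              % ~0 up to round-off
```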

3.5.1 Adaptive Filter Structure:


The most commonly used structure in the implementation of adaptive filters is the transversal structure. The adaptive filter has a single input, x(n), and an output, y(n). The sequence d(n) is the desired signal. The output y(n) is generated as a linear combination of the delayed samples of the input sequence x(n), according to the equation

y(n) = \sum_{i=0}^{N-1} w_i(n)\, x(n-i)

where the w_i(n) are the filter tap weights (coefficients) and N is the filter length. We refer to the input samples x(n-i), for i = 0, 1, 2, ..., N-1, as the filter tap inputs. The tap weights, which may vary in time, are controlled by the adaptive algorithm.

Figure 3.1 Adaptive Filter Block Diagram


Figure 3.2 Basic Adaptive Filter structure

Designing the filter does not require any other frequency response information or specification. To define the self-learning process the filter uses, you select the adaptive algorithm used to reduce the error between the output signal y(k) and the desired signal d(k). When the LMS performance criterion for e(k) has achieved its minimum value through the iterations of the adaptive algorithm, the adaptive filter has converged and its coefficients have settled to a solution. Now the output from the adaptive filter closely matches the desired signal d(k). When you change the input data characteristics, sometimes called the filter environment, the filter adapts to the new environment by generating a new set of coefficients for the new data. Notice that when e(k) goes to zero and remains there, you achieve perfect adaptation: the ideal result, but not likely in the real world. So the system has six main components to be defined: the input signal, the desired signal, the output signal, the error signal, the filter (the filtering process), and the adaptive process (some kind of algorithm). The coefficients of an adaptive filter are adjusted to compensate for changes in the input signal, output signal, or system parameters. Instead of being rigid, an adaptive system can learn the signal characteristics and track slow changes. An adaptive filter can be very useful when there is uncertainty about the characteristics of a signal or when these characteristics change.

3.5.2 Adaptive Filtering System Configurations


There are four major types of adaptive filtering configurations: 1. adaptive system identification; 2. adaptive echo cancellation; 3. adaptive linear prediction; 4. adaptive inverse system. Digital signal processing (DSP) has been a major player in current technical advancements such as noise filtering, system identification, and voice prediction. Standard DSP techniques, however, are not always enough to solve these problems quickly and obtain acceptable results; adaptive filtering techniques must be implemented to promote accurate solutions and timely convergence. A number of adaptive structures have been used for different applications in adaptive filtering. All of the above systems are similar in the implementation of the algorithm, but differ in system configuration. All four systems have the same general parts: an input x(n), a desired result d(n), an output y(n), an adaptive transfer function w(n), and an error signal e(n) which is the difference between the desired output d(n) and the actual output y(n). In addition to these parts, the system identification and inverse system configurations have an unknown linear system u(n) that can receive an input and give a linear output to the given input.

Adaptive System Identification

The adaptive system identification configuration is primarily responsible for determining a discrete estimation of the transfer function for an unknown digital or analog system. The same input x(n) is applied to both the adaptive filter and the unknown system, from which the outputs are compared (see figure). The output of the unknown system serves as the desired signal d(n); subtracting the adaptive filter output y(n) from it yields an error signal e(n) used to manipulate the filter coefficients of the adaptive system, trending towards an error signal of zero. After a number of iterations of this process, and if the system is designed correctly, the adaptive filter's transfer function will converge to, or near to, the unknown system's transfer function. For this configuration, the error signal does not have to go to zero to closely approximate the given system, although convergence to zero is the ideal situation. There will, however, be a difference between the adaptive filter transfer function and the unknown system transfer function if the error is nonzero, and the magnitude of that difference will be directly related to the magnitude of the error signal. Additionally, the order of the adaptive system will affect the smallest error that the system can obtain. If there are insufficient coefficients in the adaptive system to model the unknown system, it is said to be under-specified. This condition may cause the error to converge to a nonzero constant instead of zero. In contrast, if the adaptive filter is over-specified, meaning that there are more coefficients than needed to model the unknown system, the error will converge to zero, but the time it takes for the filter to converge will increase.

Figure 3.3 Adaptive System Configurations

Adaptive Noise/Echo Cancellation

The second configuration is the adaptive noise cancellation configuration, as shown in the figure.


Figure 3.4 Adaptive Noise or Echo Cancellation

In this configuration the input x(n) is compared with a desired signal d(n), which consists of a signal s(n) corrupted by another noise N_0(n). The adaptive filter coefficients adapt to cause the error signal to be a noiseless version of the signal s(n). Both of the noise signals for this configuration need to be uncorrelated with the signal s(n). In addition, the noise sources must be correlated to each other in some way, preferably equal, to get the best results. Due to the nature of the error signal, the error signal will never become exactly zero. The error signal should converge to the signal s(n), but not to the exact signal; in other words, the difference between the signal s(n) and the error signal e(n) will always be greater than zero. The only option is to minimize that difference. The basic principle is cancelling a sound wave by generating another sound wave exactly out of phase with the first one; the superposition of the two waves results in silence.

Adaptive Linear Prediction

Adaptive linear prediction is the third type of adaptive configuration (see figure). This configuration essentially performs two operations. The first operation, if the output is taken from the error signal e(n), is linear prediction: the adaptive filter coefficients are being trained to predict, from the statistics of the input signal x(n), what the next input sample will be. The second operation, if the output is taken from y(n), is noise filtering, similar to the adaptive noise cancellation outlined in the previous section. As in the previous section, neither the linear prediction output nor the noise cancellation output will converge to an error of zero. This is true for the linear prediction output because if the error signal did converge to zero, this would mean that the input signal x(n) is entirely deterministic, in which case we would not need to transmit any information at all. In the case of the noise filtering output, as outlined in the previous section, y(n) will converge to the noiseless version of the input signal.

Figure 3.5 Adaptive Linear Prediction

Adaptive Inverse System Configuration

The adaptive inverse system configuration is shown in the figure. The goal of the adaptive filter here is to model the inverse of the unknown system u(n). This is particularly useful in adaptive equalization, where the goal of the filter is to eliminate any spectral changes caused by a prior system or transmission line. The filter works as follows. The input x(n) is sent through the unknown filter u(n) and then through the adaptive filter, resulting in an output y(n). The input is also sent through a delay to attain d(n). As the error signal converges to zero, the adaptive filter coefficients w(n) converge to the inverse of the unknown system u(n). For this configuration, as for the system identification configuration, the error can theoretically go to zero. This will only be true, however, if the unknown system consists only of a finite number of poles or the adaptive filter is an IIR filter. If neither of these conditions is true, the error will converge only to a nonzero constant due to the limited number of zeroes available in an FIR system.


Figure 3.6 Adaptive Inverse System

3.6 Additional Structures


(a) Notch filter with two weights, which can be used to notch out or cancel/reduce a sinusoidal noise signal. This structure has only two weights or coefficients.

(b) Adaptive channel equalization, used in a modem to reduce channel distortion resulting from the high speed of data transmission over telephone channels.

Adaptive Channel Equalization

3.7 Classes of Adaptive Filter


Mean Square Adaptive Filter


The aim is to minimize a cost function equal to the expectation of the square of the difference between the desired signal d(n) and the actual output of the adaptive filter y(n):

E[e^2(n)] = E[(d(n) - y(n))^2]


Least Mean Square Algorithm
Normalized Least Mean Square Algorithm
Variable Step Size Least Mean Square Algorithm
Variable Step Size Normalized Least Mean Square Algorithm

Recursive Least Square Adaptive Filter Algorithm

The aim is to minimize a cost function equal to the weighted sum of the squares of the differences between the desired and the actual output of the adaptive filter at different time instances. The cost function is recursive in the sense that, unlike the MSE cost function, weighted previous values of the estimation error are also considered. The parameter λ is in the range 0 < λ < 1. It is known as the forgetting factor since, for λ < 1, it causes the previous values to have an increasingly negligible effect on the updating of the filter tap weights. The value of 1/(1-λ) is a measure of the memory of the algorithm.
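For reference, the weighted least-squares cost sketched above is commonly written in the standard textbook form (a standard formulation, not quoted from this report):

\xi(n) = \sum_{i=1}^{n} \lambda^{n-i}\, e^2(i), \qquad 0 < \lambda \le 1

so that errors from the distant past are discounted by powers of the forgetting factor λ.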

3.8 Performance Measures in Adaptive Systems


There are six major performance measures of adaptive systems.

Step Size
The step size parameter μ determines the convergence or divergence speed and the precision of the adaptive filter coefficients. If μ is large, the filter will converge fast, but could diverge if μ is too large. When μ is large, the adaptation is quick, but there will be an increase in the average excess MSE, which may be an undesirable result. If μ is small, the filter will converge slowly, which is equivalent to the algorithm having a long memory, an undesirable quality. Every application will have a different step size that needs to be adjusted. When choosing a value, there needs to be a balance between convergence speed and the excess MSE. The value is decided through trial and error so that the speed at which the adaptive filter learns and the excess MSE fall within application requirements. The value differs from simulation to real time because of the inherent differences between the systems.

Convergence Rate
The convergence rate determines the rate at which the filter converges to its resultant state. Usually a faster convergence rate is a desired characteristic of an adaptive system. Convergence rate is not, however, independent of all of the other performance characteristics. There will be a tradeoff, in other performance criteria, for an improved convergence rate, and there will be decreased convergence performance for an increase in other performance measures. For example, if the convergence rate is increased, the stability characteristics will decrease, making the system more likely to diverge instead of converging to the proper solution. Likewise, a decrease in convergence rate can cause the system to become more stable. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.

Minimum Mean Square Error
The minimum mean square error (MSE) is a metric indicating how well a system can adapt to a given solution. A small minimum MSE is an indication that the adaptive system has accurately modeled, predicted, adapted and/or converged to a solution for the system. A very large MSE usually indicates that the adaptive filter cannot accurately model the given system, or that the initial state of the adaptive filter is an inadequate starting point for convergence. There are a number of factors which help to determine the minimum MSE, including, but not limited to: quantization noise, order of the adaptive system, measurement noise, and error of the gradient due to the finite step size.

Computational Complexity
Computational complexity is particularly important in real-time adaptive filter applications. When a real-time system is being implemented, there are hardware limitations that may affect the performance of the system. A highly complex algorithm will require much greater hardware resources than a simple algorithm.

Stability
Stability is probably the most important performance measure for the adaptive system. By the nature of the adaptive system, there are very few completely asymptotically stable systems that can be realized. In most cases the systems that are implemented are marginally stable, with the stability determined by the initial conditions, the transfer function of the system, and the step size of the input.

Robustness
The robustness of a system is directly related to its stability. Robustness is a measure of how well the system can resist both input and quantization noise.

Filter Length
The filter length of the adaptive system is inherently tied to many of the other performance measures. The length of the filter specifies how accurately a given system can be modeled by the adaptive filter. In addition, the filter length affects the convergence rate, by increasing or decreasing computation time; it can affect the stability of the system, at certain step sizes; and it affects the minimum MSE. If the filter length of the system is increased, the number of computations will increase, decreasing the maximum convergence rate. Conversely, if the filter length is decreased, the number of computations will decrease, increasing the maximum convergence rate. For stability, an increase in the length of the filter for a given system may add additional poles or zeroes that may be smaller than those that already exist; in this case the maximum step size, or maximum convergence rate, will have to be decreased to maintain stability. Finally, if the system is under-specified, meaning there are not enough poles and/or zeroes to model the system, the mean square error will converge to a nonzero constant. If the system is over-specified, meaning it has too many poles and/or zeroes for the system model, it will have the potential to converge to zero, but the increased calculations will reduce the maximum convergence rate possible.

4.4 Adaptive Echo Cancellers


A technique to remove or cancel echoes is shown in the figure. The echo canceller mimics the transfer function of the echo path (the room acoustics) to synthesize a replica of the echo, and then subtracts that replica from the combined echo and near-end speech (or disturbance) signal to obtain the near-end signal alone. However, the transfer function is unknown in practice, and so it must be identified. The solution to this problem is to use an adaptive filter; the method used to cancel the echo signal is known as adaptive filtering. Adaptive filters are dynamic filters which iteratively alter their characteristics in order to achieve an optimal desired output. An adaptive filter algorithmically alters its parameters in order to minimize a function of the difference between the desired output d(n) and its actual output y(n). This function is known as the cost function of the adaptive algorithm. The figure shows a block diagram of the adaptive echo cancellation system implemented throughout this thesis. Here the filter H(n) represents the impulse response of the acoustic environment, and W(n) represents the adaptive filter used to cancel the echo signal. The adaptive filter aims to equate its output y(n) to the desired output d(n) (the signal reverberated within the acoustic environment). At each iteration the error signal, e(n) = d(n) - y(n), is fed back into the filter, and the filter characteristics are altered accordingly.


Block diagram of Adaptive Echo Canceller
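A hedged end-to-end MATLAB sketch of this block diagram: the "room" impulse response, signal lengths, filter order, and step size below are invented for illustration, and the update is the LMS rule used throughout this report.

```matlab
% Adaptive echo cancellation: d(n) = far-end signal filtered by the room
% (the echo), y(n) = adaptive replica, e(n) = d(n) - y(n) is what is sent
% back to the far end.
rng(0);
x  = randn(1,4000);                 % far-end signal driving the loudspeaker
h  = [0.6 0 0 0.3 0 0.1];           % toy room impulse response (assumed)
d  = filter(h, 1, x);               % echo picked up by the microphone
N  = 16;  mu = 0.005;               % filter length and step size (assumed)
w  = zeros(N,1);                    % adaptive filter taps W(n)
xb = zeros(N,1);                    % tap-input buffer [x(n) ... x(n-N+1)]
e  = zeros(size(x));
for n = 1:length(x)
    xb   = [x(n); xb(1:N-1)];       % shift in the newest far-end sample
    y    = w.' * xb;                % echo estimate y(n)
    e(n) = d(n) - y;                % residual echo (transmitted signal)
    w    = w + 2*mu*e(n)*xb;        % LMS coefficient update
end
% After convergence e(n) -> 0: the synthesized replica cancels the echo.
```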

4.5 Choice of Algorithm


A wide variety of recursive algorithms have been developed in the literature for the operation of linear adaptive filters. In the final analysis, the choice of one algorithm over another is determined by one or more of the following factors:

Rate of Convergence
This is defined as the number of iterations required for the algorithm, in response to stationary inputs, to converge close enough to the optimum Wiener solution in the mean-square-error sense. A fast rate of convergence allows the algorithm to adapt rapidly to a stationary environment of unknown statistics.

Misadjustment
For an algorithm of interest, this parameter provides a quantitative measure of the amount by which the final value of the mean-square error, averaged over an ensemble of adaptive filters, deviates from the minimum mean-square error produced by the Wiener filter.

Tracking
When an adaptive filtering algorithm operates in a non-stationary environment, it is required to track statistical variations in that environment. The tracking performance of the algorithm, however, is influenced by two contradictory features: (1) the rate of convergence, and (2) the steady-state fluctuation due to algorithm noise.

Robustness
For an adaptive filter to be robust, small disturbances (i.e., disturbances with small energy) should result only in small estimation errors. The disturbances may arise from a variety of factors, internal or external to the filter.

Computational Requirements
Here the issues of concern include: (a) the number of operations (i.e., multiplications, divisions, and additions/subtractions) required to make one complete iteration of the algorithm; (b) the size of the memory locations required to store the data and the program; and (c) the investment required to program the algorithm on a computer.

4.6 Approach to Develop Linear Adaptive Filter


4.6.1 Stochastic Gradient Approach

The stochastic gradient approach uses a tapped-delay line, or transversal filter, as the structural basis for implementing the linear adaptive filter. For the case of stationary inputs, the cost function, also referred to as the index of performance, is defined as the mean square error (i.e., the mean square value of the difference between the desired response and the transversal filter output). This cost function is precisely a second-order function of the tap weights in the transversal filter. To develop a recursive algorithm for updating the tap weights of the adaptive transversal filter, we proceed in two stages. First, we use an iterative procedure to solve the Wiener-Hopf equations (i.e., the matrix equation defining the optimum Wiener solution); the iterative procedure is based on the method of steepest descent, which is a well-known technique in optimization theory. This method requires the use of a gradient vector, the value of which depends on two parameters: the correlation matrix of the tap inputs in the transversal filter and the cross-correlation vector between the desired response and the same tap inputs. Next, we use instantaneous values for these correlations, so as to derive an estimate for the gradient vector, making it assume a stochastic character in general. The resulting algorithm is widely known as the least mean square (LMS) algorithm, the essence of which, for the case of a transversal filter operating on real-valued data, may be described as

w(n+1) = w(n) + 2\mu\, e(n)\, x(n)

where the error signal e(n) is defined as the difference between some desired response and the actual response of the transversal filter produced by the tap-input vector.

4.7 Least Mean Square (LMS) Algorithm


The Least Mean Square (LMS) algorithm was first developed by Widrow and Hoff in 1959 through their studies of pattern recognition. From there it has become one of the most widely used algorithms in adaptive filtering. The LMS algorithm is an important member of the family of stochastic gradient-based algorithms, as it utilizes the gradient vector of the filter tap weights to converge on the optimal Wiener solution. It is well known and widely used due to its computational simplicity. It is this simplicity that has made it the benchmark against which all other adaptive filtering algorithms are judged. The LMS algorithm is a linear adaptive filter algorithm which, in general, consists of two basic processes: 1. A filtering process, which involves (a) computing the output of a linear filter in response to an input signal, and (b) generating an estimation error by comparing this output with a desired response. 2. An adaptive process, which involves the automatic adjustment of the parameters of the filter in accordance with the estimation error. The combination of these two processes working together constitutes a feedback loop. First, we have a transversal filter, around which the LMS algorithm is built; this component is responsible for performing the filtering process. Second, we have a mechanism for performing the adaptive control process on the tap weights of the transversal filter. With each iteration of the LMS algorithm, the filter tap weights of the adaptive filter are updated according to the following formula (Farhang-Boroujeny 1999):

w(n+1) = w(n) + 2\mu\, e(n)\, x(n)

Here x(n) = [x(n)\; x(n-1)\; x(n-2)\; \dots\; x(n-N+1)]^T is the input vector of time-delayed input values, and w(n) = [w_0(n)\; w_1(n)\; w_2(n)\; \dots\; w_{N-1}(n)]^T represents the coefficients of the adaptive FIR filter tap-weight vector at time n. The parameter μ is known as the step size parameter and is a small positive constant. This step size parameter controls the influence of the updating factor. Selection of a suitable value for μ is imperative to the performance of the LMS algorithm: if the value is too small, the time the adaptive filter takes to converge on the optimal solution will be too long; if μ is too large, the adaptive filter becomes unstable and its output diverges. The LMS algorithm is the simplest to implement and is stable when the step size parameter is selected appropriately, but this selection requires prior knowledge of the input signal statistics, which is not feasible for the echo cancellation system.

4.7.1 Derivation of the LMS algorithm

The derivation of the LMS algorithm builds upon the theory of the Wiener solution w_0 for the optimal filter tap weights; it also depends on the steepest-descent algorithm. This is a formula which updates the filter coefficients using the current tap-weight vector and the current gradient of the cost function with respect to the filter tap-weight coefficient vector:

w(n+1) = w(n) - \mu\, \nabla\xi(n), \qquad \xi(n) = E[e^2(n)]

As the negative gradient vector points in the direction of steepest descent for the N-dimensional quadratic cost function, each recursion shifts the value of the filter coefficients closer toward their optimum value, which corresponds to the minimum achievable value of the cost function \xi(n). The LMS algorithm is a random-process implementation of the steepest descent algorithm. Here the expectation of the squared error is not known, so its instantaneous value is used as an estimate. The steepest descent algorithm then becomes

w(n+1) = w(n) - \mu\, \nabla\hat{\xi}(n), \qquad \hat{\xi}(n) = e^2(n)

The gradient of this cost function can alternatively be expressed in the following form:

\nabla\hat{\xi}(n) = \frac{\partial e^2(n)}{\partial w(n)} = 2e(n)\, \frac{\partial e(n)}{\partial w(n)} = 2e(n)\, \frac{\partial (d(n) - w^T(n)\, x(n))}{\partial w(n)} = -2e(n)\, x(n)

Substituting this into the steepest descent recursion, we arrive at the recursion for the LMS adaptive algorithm:

w(n+1) = w(n) + 2\mu\, e(n)\, x(n)


4.7.2 Implementation of the LMS algorithm


Each iteration of the LMS algorithm requires three distinct steps, in this order:

1. The output of the FIR filter is calculated:

y(n) = \sum_{i=0}^{N-1} w_i(n)\, x(n-i) = w^T(n)\, x(n)

2. The value of the estimation error is calculated:

e(n) = d(n) - y(n)

3. The tap weights of the FIR vector are updated in preparation for the next iteration:

w(n+1) = w(n) + 2\mu\, e(n)\, x(n)


The main reason for the LMS algorithm's popularity in adaptive filtering is its computational simplicity, which makes it easier to implement than all other commonly used adaptive algorithms. For each iteration the LMS algorithm requires 2N additions and 2N+1 multiplications (N for calculating the output y(n), one for calculating 2μe(n), and an additional N for the scalar-by-vector multiplication).
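The three steps map directly onto a few lines of MATLAB per iteration. This is a hedged sketch: the function and variable names are mine, not taken from the appendices.

```matlab
function [e, w] = lms_filter(x, d, N, mu)
% LMS adaptive filter: per sample, (1) compute the output, (2) compute the
% error, (3) update the N tap weights. Returns the error sequence and taps.
    w  = zeros(N,1);              % tap weights w(n)
    xb = zeros(N,1);              % tap-input buffer
    e  = zeros(size(x));
    for n = 1:length(x)
        xb   = [x(n); xb(1:N-1)]; % x(n) = [x(n) ... x(n-N+1)]^T
        y    = w.' * xb;          % step 1: y(n) = w^T(n) x(n)
        e(n) = d(n) - y;          % step 2: e(n) = d(n) - y(n)
        w    = w + 2*mu*e(n)*xb;  % step 3: w(n+1) = w(n) + 2 mu e(n) x(n)
    end
end
% Usage (with the toy room from the earlier echo-cancellation sketch):
%   [e, w] = lms_filter(x, d, 16, 0.005);
```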

4.8 Normalized Least Mean Square (NLMS) Algorithm


One of the primary disadvantages of the LMS algorithm is its fixed step size parameter for every iteration. This requires an understanding of the statistics of the input signal prior to commencing the adaptive filtering operation, which in practice is rarely achievable. Even if we assume the only signal input to the adaptive echo cancellation system is speech, there are still many factors, such as signal input power and amplitude, which will affect its performance. The normalized least mean square (NLMS) algorithm is an extension of the LMS algorithm which bypasses this issue by selecting a different step size value, μ(n), for each iteration of the algorithm. This step size is proportional to the inverse of the total expected energy of the instantaneous values of the coefficients of the input vector x(n). This sum of the expected energies of the input samples is also equivalent to the dot product of the input vector with itself, and to the trace of the input vector's autocorrelation matrix R:

\mathrm{tr}[R] = \sum_{i=0}^{N-1} E[x^2(n-i)] = E[x^T(n)\, x(n)]

The recursion formula for the NLMS algorithm is

w(n+1) = w(n) + \frac{1}{x^T(n)\, x(n)}\, e(n)\, x(n)

The NLMS algorithm is simple to implement and computationally efficient. It shows very good attenuation, and the variable step size allows stable performance with non-stationary signals; this made it the obvious choice for the real-time implementation.

4.8.1 Derivation of the NLMS algorithm

To derive the NLMS algorithm we consider the standard LMS recursion, for which we select a variable step size parameter μ(n). This parameter is selected so that the error value, e^+(n), will be minimized using the updated filter tap weights, w(n+1), and the current input vector, x(n):

w(n+1) = w(n) + 2\mu(n)\, e(n)\, x(n)

e^+(n) = d(n) - w^T(n+1)\, x(n) = (1 - 2\mu(n)\, x^T(n)\, x(n))\, e(n)

Next we minimize (e^+(n))^2 with respect to μ(n). Using this we can then find the value of μ(n) which forces e^+(n) to zero:

\mu(n) = \frac{1}{2\, x^T(n)\, x(n)}

This μ(n) is then substituted into the standard LMS recursion in place of μ, resulting in the following:

w(n+1) = w(n) + \frac{1}{x^T(n)\, x(n)}\, e(n)\, x(n)

In practice a small positive constant ε is added to the denominator in order to avoid division by zero when the values of the input vector are zero, giving

w(n+1) = w(n) + \mu(n)\, e(n)\, x(n), \qquad \mu(n) = \frac{\mu}{x^T(n)\, x(n) + \epsilon}

This was not needed in the real-time implementation, as in practice the input signal is never allowed to reach zero due to noise from the microphone and from the A/D codec on the Texas Instruments DSK. The parameter μ is a constant step size value used to alter the convergence rate of the NLMS algorithm; it is within the range 0 < μ < 2, usually being set equal to 1.

4.8.2 Implementation of the NLMS algorithm

The NLMS algorithm has been implemented in MATLAB and in a real-time application using the Texas Instruments TMS320C6711 Development Kit. As the step size parameter is chosen based on the current input values, the NLMS algorithm shows far greater stability with unknown signals. This, combined with good convergence speed and relative computational simplicity, makes the NLMS algorithm ideal for the real-time adaptive echo cancellation system. The code for both the MATLAB and TI Code Composer Studio applications can be found in appendices A and B respectively. As the NLMS is an extension of the standard LMS algorithm, its practical implementation is very similar. Each iteration of the NLMS algorithm requires the following steps, in order:

1. The output of the adaptive filter is calculated:

y(n) = \sum_{i=0}^{N-1} w_i(n)\, x(n-i) = w^T(n)\, x(n)

2. An error signal is calculated as the difference between the desired signal and the filter output:

e(n) = d(n) - y(n)

3. The step size value for the input vector is calculated:

\mu(n) = \frac{1}{x^T(n)\, x(n)}



4. The filter tap weights are updated in preparation for the next iteration:

w(n+1) = w(n) + \mu(n)\, e(n)\, x(n)

Each iteration of the NLMS algorithm requires 3N+1 multiplications, only N more than the standard LMS algorithm. This is an acceptable increase considering the gains in stability and echo attenuation achieved.
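The four NLMS steps differ from the LMS sketch given earlier only in the per-sample step size; eps0 below is the small regularization constant discussed in Section 4.8.1, and its value here is an assumption.

```matlab
function [e, w] = nlms_filter(x, d, N, mu, eps0)
% NLMS adaptive filter: like LMS, but the step size is normalized by the
% instantaneous input energy x^T(n) x(n), per the recursion above.
    w  = zeros(N,1);  xb = zeros(N,1);  e = zeros(size(x));
    for n = 1:length(x)
        xb   = [x(n); xb(1:N-1)];     % tap-input vector
        y    = w.' * xb;              % step 1: filter output
        e(n) = d(n) - y;              % step 2: error
        mun  = mu / (xb.'*xb + eps0); % step 3: normalized step size
        w    = w + mun*e(n)*xb;       % step 4: tap-weight update
    end
end
% Usage, e.g. [e, w] = nlms_filter(x, d, 16, 1, 1e-6), reproduces the
% mu = 1 choice mentioned above.
```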


CONCLUSION


Conclusions:
Proposed SA algorithms are becoming more complex and involve combinations with processing in the time domain, multiuser detection, ST coding, and multiple antennas at the MS. There are a number of parameters, such as the level of CCI reduction, diversity gain, and SNR, which can be improved with an SA. Some of these parameters can be interdependent and even conflicting; their importance and tradeoffs need to be decided on a cell-by-cell basis. The following parameters should be taken into consideration: propagation, the interference environment, users' mobility, and requirements for link quality. Network coverage and capacity in urban macrocells can at least be doubled with existing SA receivers. To achieve sensible capacity improvements in an urban microcell, more complex SA algorithms, discussed in this work, are required.



APPENDIX

Appendix A - MATLAB

A.1. Features of MATLAB used in this project:
MATLAB, which stands for matrix laboratory, is a high-performance language for technical computing. It integrates computation, visualization and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or FORTRAN. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

A.1.1. Toolboxes in MATLAB
1. Signal processing
2. Image processing
3. Control systems
4. Neural networks
5. Communications
6. Robust control
7. Statistics

A.1.2. Typical uses of MATLAB
1. Math and computation
2. Algorithm development
3. Data acquisition
4. Data analysis, exploration and visualization
5. Scientific and engineering graphics
6. Modeling, simulation, and prototyping
7. Application development, including graphical user interface building

A.1.3. Main features of MATLAB
1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra.
2. A large collection of predefined mathematical functions and the ability to define one's own functions.
3. Two- and three-dimensional graphics for plotting and displaying data.
4. A complete online help system.
5. A powerful, matrix- or vector-oriented high-level programming language for individual applications.
6. Toolboxes available for solving advanced problems in several application areas.


A.2. The MATLAB System:


The MATLAB system consists of five main parts:

1. Development Environment: This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

2. The MATLAB Mathematical Function Library: This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

3. The MATLAB Language: This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create large and complex application programs.

4. Graphics: MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces for your MATLAB applications.

5. The MATLAB Application Program Interface (API): This is a library that allows you to write C and FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.

A.3. MATLAB Working Environment:


MATLAB Desktop: The MATLAB Desktop is the main MATLAB application window. The desktop contains five sub-windows: the Command Window, the Workspace Browser, the Current Directory window, the Command History window, and one or more Figure windows, which are shown only when the user displays a graphic. The Command Window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The Workspace Browser shows these variables and some information about them. The Current Directory tab above the Workspace tab shows the contents of the current directory, whose path is shown in the Current Directory window. The Command History window contains a record of the commands a user has entered in the Command Window, including both current and previous MATLAB sessions.


A.4. Features and capabilities of MATLAB


[Diagram: Features and capabilities of MATLAB — the MATLAB programming language with user-written and built-in functions, supported by graphics (2-D and 3-D graphics, color and lighting, animation), computation (linear algebra, signal processing, quadrature, etc.), external interfaces (to C and FORTRAN programs), and toolboxes (signal processing, image processing, control systems, neural networks, communications, robust control, statistics).]

Appendix B - SIMULINK

B.1. Features of Simulink used in this project:


Simulink is a graphical extension to MATLAB for modeling and simulation of systems. In Simulink, systems are drawn on screen as block diagrams. Many elements of block diagrams are available, such as transfer functions, summing junctions, etc., as well as virtual input and output devices such as function generators and oscilloscopes. Simulink is integrated with MATLAB, and data can be easily transferred between the programs. Simulink is supported on UNIX, Macintosh, and Windows environments, and is included in the student version of MATLAB for personal computers. Simulink is a platform for multidomain simulation and Model-Based Design for dynamic systems. It provides an interactive graphical environment and a customizable set of block libraries, and can be extended for specialized applications. Simulink is started from the MATLAB command prompt by entering the following command: simulink. Alternatively, you can hit the New Simulink Model button at the top of the MATLAB command window, as shown below:


When it starts, Simulink brings up two windows. The first is the main Simulink window, which appears as:

The second window is a blank, untitled, model window. This is the window into which a new model can be drawn.


B.1.1. Basic Elements:


There are two major classes of items in Simulink: blocks and lines. Blocks are used to generate, modify, combine, output, and display signals. Lines are used to transfer signals from one block to another.

Blocks:
There are several general classes of blocks:

Sources: used to generate various signals
Sinks: used to output or display signals
Discrete: linear, discrete-time system elements (transfer functions, state-space models, etc.)
Linear: linear, continuous-time system elements and connections (summing junctions, gains, etc.)
Nonlinear: nonlinear operators (arbitrary functions, saturation, delay, etc.)
Connections: multiplex, demultiplex, system macros, etc.

Blocks have zero to several input terminals and zero to several output terminals. Unused input terminals are indicated by a small open triangle. Unused output terminals are indicated by a small triangular point. The block shown below has an unused input terminal on the left and an unused output terminal on the right.


Lines:
Lines transmit signals in the direction indicated by the arrow. Lines must always transmit signals from the output terminal of one block to the input terminal of another block. One exception to this is that a line can tap off another line, splitting the signal to each of two destination blocks. Lines can never inject a signal into another line; lines must be combined through the use of a block such as a summing junction. A signal can be either a scalar signal or a vector signal. For single-input, single-output systems, scalar signals are generally used. For multi-input, multi-output systems, vector signals are often used, consisting of two or more scalar signals. The lines used to transmit scalar and vector signals are identical. The type of signal carried by a line is determined by the blocks on either end of the line.

Modifying Blocks:
A block can be modified by double-clicking on it. Double-clicking the block you want to modify opens a dialog box containing fields for the parameters of that block. By entering the desired values, the block can be modified as required. After entering the values, hit the close button, and the model window will change according to the values entered.

Running Simulations:
Before running a simulation of a system, first open the sink block (e.g. the scope window) by double-clicking on the block. Then, to start the simulation, either select Start from the Simulation menu or hit Ctrl-T in the model window. The simulation runs very quickly, and the sink display shows the result of the simulation. If the simulation output is at a very low level relative to the axes of the scope display, hit the autoscale button (binoculars), which will rescale the axes.


Building Systems:
Systems are built in Simulink using the building blocks in Simulink's Block Libraries. First gather all the necessary blocks from the block libraries. Then modify the blocks so they correspond to the blocks in the desired model. Finally, connect the blocks with lines to form the complete system. After this, simulate the complete system to verify that it works.

B.2. Simulink Basics Tutorial - Interaction with MATLAB:


There are three ways in which Simulink can interact with MATLAB:

Block parameters can be defined from MATLAB variables.
Signals can be exchanged between Simulink and MATLAB.
Entire systems can be extracted from Simulink into MATLAB.

Taking Variables from MATLAB:


In some cases, parameters such as gain may be calculated in MATLAB to be used in a Simulink model. If this is the case, it is not necessary to enter the result of the MATLAB calculation directly into Simulink. For example, suppose we calculate the gain in MATLAB and store it in the variable K. Emulate this by entering the following command at the MATLAB command prompt: K=2.5. This variable can now be used in the Simulink Gain block. In the Simulink model, double-click on the Gain block and enter K in the Gain field.


Close this dialog box. Notice now that the Gain block in the Simulink model shows the variable K rather than a number.

Besides variables, signals and even entire systems can be exchanged between MATLAB and Simulink.

