Introduction
The increasing demand for digital cellular telephony and other new services, including multimedia communications, has prompted numerous studies not only on implementing half-rate speech coding algorithms with the DSP processors available on the market, but also on enhancing speech quality degraded by both ambient acoustic noise and echo in the environment. The speech quality of the emerging all-digital cellular phones will depend greatly on the speech quality available at the near-end transmitter. To suppress disturbing environmental noise in a hands-free speech transmission system, several noise reduction (NR) algorithms have been introduced. Among the disturbing signals we do not want to transmit is the reverberated speech signal of the person we talk to, the far-end speaker. Since the incoming far-end speaker's speech signal is known to our terminal equipment, we can exploit it to cancel the reverberated signal; this is what the acoustic echo canceller (AEC) does. Acoustic echo is a major problem in telecommunications, and in the GSM network the echo delay is especially annoying for speakers. There is an evident need for an acoustic echo canceller to overcome this problem, particularly when poor-quality mobile phones are used and in hands-free communication.
1.1 Background
During transmission and reception, signals are often corrupted by noise and echo, which can cause severe problems for downstream processing and user perception. It is well known that to cancel the noise component present in a received signal using adaptive signal processing techniques, a reference signal highly correlated with the noise is needed. Since the noise is added in the channel and is totally random, there is no means of creating such a correlated reference at the receiving end.
Telecommunications is about transferring information from one location to another. This includes many forms of information: telephone conversations, television signals, computer files, and other types of data. To transfer the information, you need a channel between the two locations. This may be a wire pair, a radio signal, an optical fiber, etc. Telecommunications companies receive payment for transferring their customers' information, while they must pay to establish and maintain the channel. The financial bottom line is simple: the more information they can pass through a single channel, the more money they make. DSP has revolutionized the telecommunications industry in many areas. Here we will focus on the basics of echo cancellation and the acoustic echo canceller; first, let us provide an overview of echo. Echo is a phenomenon in which a delayed and distorted version of an original sound or electrical signal is reflected back to the source. Acoustic echo cancellation is important for audio teleconferencing when simultaneous communication (or full-duplex transmission) of speech is necessary. In acoustic echo cancellation, a measured microphone signal d(n) contains two signals:
- the near-end speech signal v(n)
- the far-end echoed speech signal
The goal is to remove the far-end echoed speech signal from the microphone signal so that only the near-end speech signal is transmitted. Echo cancellation is a recurring problem in telecommunication and wireless networks, and for years equipment designers have turned to digital signal processing (DSP) and standalone board solutions as a means of curbing echo in carrier-class equipment designs. For successful communication in a hands-free speech transmission system, dealing with echo is a must.
The task of the acoustic echo canceller is to adaptively cancel the echo, but it must adapt its coefficients only during periods when the near-end speaker is silent; in other words, no adaptation is to be performed while the near-end speaker speaks. To achieve that, near-end speaker activity detection is needed. Messerschmitt et al. have developed a coefficient adaptation algorithm based on the LMS algorithm.
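As a rough sketch of this idea (not taken from the original text: the signals, the 64-tap synthetic room response, and the `near_end_active` flag standing in for a real double-talk detector are all made-up assumptions), an NLMS-based echo canceller can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64           # adaptive filter length (taps)
mu = 0.5         # NLMS step size
eps = 1e-8       # regularization against division by zero

# Hypothetical far-end speech (white-noise stand-in) and a synthetic echo path.
far = rng.standard_normal(4000)
room = np.zeros(N)
room[[0, 5, 12]] = [0.6, 0.3, 0.1]
mic = np.convolve(far, room)[:len(far)]   # microphone: echoed far-end signal

w = np.zeros(N)                            # echo-path estimate
xb = np.zeros(N)                           # far-end tap-input buffer
err = np.zeros(len(far))
for n in range(len(far)):
    xb = np.roll(xb, 1)
    xb[0] = far[n]
    y = w @ xb                             # echo estimate
    err[n] = mic[n] - y                    # residual sent back to the far end
    # Freeze adaptation during near-end activity; this flag stands in for a
    # real double-talk detector and stays False in this simulation.
    near_end_active = False
    if not near_end_active:
        w += (mu / (eps + xb @ xb)) * err[n] * xb

print(np.mean(err[-500:] ** 2) < 1e-3 * np.mean(mic ** 2))
```

After convergence the residual echo power falls well below the microphone signal power; in a real terminal the `near_end_active` decision is what protects the filter from diverging during double talk.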
SMART ANTENNAS
A smart antenna is a phased or adaptive array that adjusts to the environment: for the adaptive array, the beam pattern changes as the desired user and the interference move, while for the phased array, the beam is steered or different beams are selected as the desired user moves. A phased-array or multibeam antenna consists of either a number of fixed beams, with one beam switched on towards the desired signal, or a single beam (formed by phase adjustment only) that is steered towards the desired signal. An adaptive antenna array is an array of multiple antenna elements whose received signals are weighted and combined to maximize the signal-to-interference-plus-noise ratio (SINR). This means that the main beam is pointed in the direction of the desired signal while nulls are placed in the directions of the interference.
Figure 1: Smart antenna systems definition.
Adaptive antenna array systems represent the most advanced smart antenna approach to date. Using a variety of new signal-processing algorithms, the adaptive system takes advantage of its ability to effectively locate and track various types of signals to dynamically minimize interference and maximize intended signal reception.
Figure 1.7: Switched beam system coverage patterns (a) and Adaptive array coverage (b).
Smart antennas are arrays of antenna elements that change their antenna pattern dynamically to adjust to the noise and interference in the channel and to mitigate multipath fading effects on the signal of interest. The difference between a smart (adaptive) antenna and a dumb (fixed) antenna is the property of having an adaptive or a fixed lobe pattern, respectively. The secret to the smart antenna's ability to transmit and receive signals in an adaptive, spatially sensitive manner is its digital signal processing capability. An antenna element is not smart by itself; it is the combination of antenna elements into an array, together with the signal processing software used, that makes smart antennas effective. This shows that smart antennas are more than just the antenna: they are a complete transceiver concept. Smart antenna systems are classified, on the basis of their transmit strategy, into the following three types (levels of intelligence):
- Switched Beam Antennas
- Dynamically-Phased Arrays
- Adaptive Antenna Arrays
Adaptive antenna arrays can be considered the smartest of the lot. An adaptive antenna array is a set of antenna elements that can adapt their antenna pattern to changes in their environment. Each antenna of the array is associated with a weight that is adaptively updated so that its gain in a particular look direction is maximized, while that in the directions of interfering signals is minimized. In other words, they change their radiation or reception pattern dynamically to adjust to variations in channel noise and interference, in order to improve the SNR (signal-to-noise ratio) of a desired signal. This procedure is also known as adaptive beamforming or digital beamforming. Conventional mobile systems usually employ some sort of antenna diversity (e.g. space, polarization or angle diversity). Adaptive antennas can be regarded as an extended diversity scheme, having more than two diversity branches.
In this context, phased arrays will have a greater gain potential than switched-lobe antennas because all elements can be used for diversity combining.
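As an illustrative sketch of adaptive weight computation (not from the original text: the array geometry, arrival angles, and noise levels below are made-up assumptions), the MVDR (Capon) beamformer keeps unit gain towards the desired user while minimizing interference-plus-noise power, which maximizes the SINR and drives a null towards the interferer:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 8  # elements in a uniform linear array with half-wavelength spacing

def steer(theta_deg):
    """Steering vector for a ULA with element spacing d = lambda/2."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a_sig = steer(0)    # desired user at broadside
a_int = steer(40)   # interferer at 40 degrees

# Sample covariance matrix from snapshots of interference plus noise.
K = 2000
snap = (a_int[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
        + 0.1 * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))))
R = snap @ snap.conj().T / K

# MVDR weights: w = R^-1 a / (a^H R^-1 a), unit gain towards the desired user.
w = np.linalg.solve(R, a_sig)
w /= (a_sig.conj() @ w)

gain_sig = abs(w.conj() @ a_sig)   # unity by construction
gain_int = abs(w.conj() @ a_int)   # driven towards a null
print(gain_sig, gain_int)
```

The weights place the main beam on the desired direction (gain 1) and a deep null on the interferer, which is exactly the behaviour the text describes for an adaptive array.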
Relative Benefits/Tradeoffs of Switched Beam and Adaptive Array Systems
The previous section listed three different definitions of smart antenna systems most commonly found in the literature. However, the second definition, in which smart antenna systems are divided into switched beam and adaptive array antenna systems, will be taken as the reference throughout this report. In this definition, the adaptive array antennas are subdivided into two classes: the first is the phased array antennas, where only the phase of the currents is changed by the weights; the second is adaptive array antennas in the strict sense, where both the amplitude and the phase of the currents are changed to produce a desired beam.
ADAPTIVE EQUALIZERS
on signals that are not digital. It is important to realize that a digital filter can do anything that a real-world filter can do; that is, all the filters alluded to above can be simulated to an arbitrary degree of precision digitally. Thus, a digital filter is only a formula for going from one digital signal to another. It may exist as an equation on paper, as a small loop in a computer subroutine, or as a handful of integrated circuit chips properly interconnected. Linear time-invariant (LTI) systems that change the shape of the spectrum are often referred to as frequency-shaping filters. Systems that are designed to pass some frequencies essentially undistorted and significantly attenuate or eliminate others are referred to as frequency-selective filters.
y(n) = Σ_{k=0}^{N-1} h(k) x(n-k)
Different techniques are available for the design of FIR filters, such as a commonly used technique that utilizes the Fourier series. A very useful feature of an FIR filter is that it can guarantee linear phase. The linear phase feature can be very useful in applications such as speech analysis, where phase distortion can be very critical. For example, with linear phase, all input sinusoidal components are delayed by the same amount. Otherwise, harmonic distortion can occur.
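The equal-delay property of a linear-phase FIR filter can be checked numerically. The sketch below (a made-up symmetric 9-tap lowpass, not from the original text) verifies that H(w)·e^{jw(N-1)/2} is purely real, i.e. the phase is exactly -w(N-1)/2, so every passed sinusoid is delayed by the same (N-1)/2 samples:

```python
import numpy as np

# A symmetric (linear-phase) FIR filter: hypothetical 9-tap lowpass example.
h = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1], dtype=float)
h /= h.sum()
N = len(h)

# Frequency response H(w) = sum_k h[k] e^{-jwk} on a grid of frequencies.
freqs = np.linspace(0.0, np.pi, 200)
H = np.array([np.sum(h * np.exp(-1j * w * np.arange(N))) for w in freqs])

# For symmetric taps, H(w) = A(w) e^{-jw(N-1)/2} with A(w) real: every
# sinusoid the filter passes is delayed by the same (N-1)/2 = 4 samples.
A = H * np.exp(1j * freqs * (N - 1) / 2)
print(np.max(np.abs(A.imag)))   # essentially zero: exactly linear phase
```

An asymmetric tap vector would leave a nonzero imaginary part here, i.e. frequency-dependent delay and hence phase distortion.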
y(n) = Σ_{k=0}^{M} a_k x(n-k) - Σ_{j=1}^{N} b_j y(n-j)
and television receivers. While frequency selectivity is by no means the only issue of concern in applications, its broad importance has led to a widely accepted set of terms describing the characteristics of frequency-selective filters. In particular, while the nature of the frequencies to be passed by a frequency-selective filter varies considerably from application to application, several basic filters are widely used and have been given names indicative of their function
and reject signals at all other frequencies. For example, an ideal low-pass filter with cutoff frequency Wc is an LTI system that passes complex exponentials e^(jwt) for values of w in the range -Wc < w < Wc and rejects signals at all other frequencies. There is a corresponding frequency response of an ideal continuous-time high-pass filter with cutoff frequency Wc, and the figure illustrates an ideal continuous-time band-pass filter with lower cutoff frequency Wc1 and upper cutoff frequency Wc2. Note that each of these filters is symmetric about w = 0, and thus there appear to be two passbands for the high-pass and band-pass filters. This is a consequence of our having adopted the use of complex exponential signals. Note also that the characteristics of continuous-time and discrete-time ideal filters differ by virtue of the fact that, for discrete-time filters, the frequency response must be periodic with period 2π, with low frequencies near even multiples of π and high frequencies near odd multiples of π.
carries out or undergoes the process of adaptation is called, by the more technical name, a filter. Depending upon the time required to meet the final target of the adaptation process, which we will call the convergence time, we can have a variety of adaptation algorithms and filter structures.
Adaptive filter
After the above explanation we can define an adaptive filter as follows: the purpose of the general adaptive system is to filter the input signal so that it resembles (in some sense) the desired signal input; equivalently, an adaptive FIR or IIR filter designs itself based on the characteristics of the input signal and of a signal which represents the desired behavior of the filter on that input. Adaptive filters are a class of filters that iteratively alter their parameters. The filter minimizes the error between some desired signal and some reference signal. A result of the process is a set of N tap values which characterize the system being modeled. Now, let us say we want to determine the nature of some channel, say a room in which a signal is created and received by a microphone. To do this, we need some way of supplying a known signal and comparing it to the same signal as received at the microphone. The LMS filter should then provide a set of taps that define the inverse of the room. After filtering out noise (which is another task altogether), these tap weights could be applied directly to the received signal, since they already represent the inverse, thus producing the original signal in its original form. When we feed an adaptive filter a training sequence, we allow it to adapt so that the filtered signal is as close as possible to the reference signal (i.e., the training signal). In order to do that, the filter should implement the inverse of whatever the channel is, because
Y(f) = X(f) · H(f) · F(f)
Where H (f) is the "channel," F (f) is the adaptive filter, X (f) is the input to the channel, and Y (f) is the output from the adaptive filter. So in order to make
Y(f) = X(f)
you must have H(f) · F(f) = 1
And therefore F(f) = 1/H(f); that is, the adaptive filter converges to the inverse of the channel.
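A tiny numerical check of this relation (a sketch with made-up numbers, not from the original text; circular convolution via the FFT stands in for the channel so that the frequency-domain product is exact):

```python
import numpy as np

rng = np.random.default_rng(2)

# Y(f) = X(f) * H(f) * F(f): choosing F(f) = 1/H(f) undoes the channel.
x = rng.standard_normal(256)
h = np.array([1.0, 0.5, 0.25])       # a short, invertible channel (no zeros
                                     # of H(f) on the unit circle)
X = np.fft.fft(x)
H = np.fft.fft(h, 256)
F = 1.0 / H                          # the ideal inverse filter

y = np.fft.ifft(X * H * F).real      # channel followed by its inverse
print(np.max(np.abs(y - x)))         # x is recovered up to roundoff
```

An adaptive filter reaches the same F(f) iteratively rather than by direct division, which is essential when H(f) is unknown or time-varying.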
y(n) = Σ_{i=0}^{N-1} w_i(n) x(n-i)
where the w_i(n) are the filter tap weights (coefficients) and N is the filter length. We refer to the input samples x(n-i), for i = 0, 1, 2, ..., N-1, as the filter tap inputs. The tap weights, which may vary in time, are controlled by the adaptive algorithm.
Figure 3.2 Basic Adaptive Filter structure
Designing the filter does not require any other frequency response information or specification. To define the self-learning process the filter uses, you select the adaptive algorithm used to reduce the error between the output signal y(k) and the desired signal d(k). When the LMS performance criterion for e(k) has reached its minimum value through the iterations of the adaptive algorithm, the adaptive filter has finished and its coefficients have converged to a solution. The output from the adaptive filter now closely matches the desired signal d(k). When you change the input data characteristics, sometimes called the filter environment, the filter adapts to the new environment by generating a new set of coefficients for the new data. Notice that when e(k) goes to zero and remains there, you achieve perfect adaptation: the ideal result, but not likely in the real world. So the system has six main components to be defined:
- input signal
- desired signal
- output signal
- error signal
- filter (the filtering process)
- adaptive process (some kind of algorithm)
The coefficients of an adaptive filter are adjusted to compensate for changes in the input signal, the output signal, or the system parameters. Instead of being rigid, an adaptive system can learn the signal characteristics and track slow changes. An adaptive filter can
be very useful when there is uncertainty about the characteristics of a signal or when these characteristics change.
adaptive system trending towards an error signal of zero. After a number of iterations of this process, and if the system is designed correctly, the adaptive filter's transfer function will converge to, or near to, the unknown system's transfer function. For this configuration the error signal does not have to go to zero to closely approximate the given system, although convergence to zero is the ideal situation. There will, however, be a difference between the adaptive filter transfer function and the unknown system transfer function if the error is nonzero, and the magnitude of that difference will be directly related to the magnitude of the error signal. Additionally, the order of the adaptive system will affect the smallest error that the system can obtain. If there are insufficient coefficients in the adaptive system to model the unknown system, it is said to be underspecified. This condition may cause the error to converge to a nonzero constant instead of zero. In contrast, if the adaptive filter is overspecified, meaning that there are more coefficients than needed to model the unknown system, the error will converge to zero, but the extra coefficients will increase the time it takes for the filter to converge.
Figure 3.3 Adaptive System Configurations
Adaptive noise/echo cancellation
The second configuration is the adaptive noise cancellation configuration, as shown in the figure.
Figure 3.3 Adaptive Noise or Echo Cancellation
In this configuration the input x(n) is compared with a desired signal d(n), which consists of a signal s(n) corrupted by another noise N0(n). The adaptive filter coefficients adapt to cause the error signal to be a noiseless version of the signal s(n). Both of the noise signals in this configuration need to be uncorrelated with the signal s(n). In addition, the noise sources must be correlated to each other in some way, preferably equal, to get the best results. Due to the nature of the error signal, the error signal will never become exactly zero: it should converge to the signal s(n), but not to the exact signal. In other words, the difference between the signal s(n) and the error signal e(n) will always be greater than zero; the only option is to minimize that difference. The basic principle is cancelling a sound wave by generating another sound wave exactly out of phase with the first one; the superposition of the two waves results in silence.

Adaptive Linear Prediction
Adaptive linear prediction is the third type of adaptive configuration (figure 4.1.2). This configuration essentially performs two operations. The first operation, if the output is taken from the error signal e(n), is linear prediction: the adaptive filter coefficients are trained to predict, from the statistics of the input signal x(n), what the next input sample will be. The second operation, if the output is taken from y(n), is noise filtering similar to the adaptive noise cancellation outlined in the previous section. As in the previous section, neither the linear prediction output nor the noise cancellation
output will converge to an error of zero. This is true for the linear prediction output because, if the error signal did converge to zero, the input signal x(n) would be entirely deterministic, in which case we would not need to transmit any information at all. In the case of the noise filtering output, as outlined in the previous section, y(n) will converge to the noiseless version of the input signal.
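The adaptive noise cancellation configuration described above can be sketched in a few lines (all signals and the 3-tap noise path are made-up assumptions for illustration; the NLMS update is used as the adaptive algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

L = 8000
s = np.sin(2 * np.pi * 0.01 * np.arange(L))   # clean signal s(n)
noise_ref = rng.standard_normal(L)             # reference noise input x(n)
path = np.array([0.8, 0.4, -0.2])              # hypothetical noise path to the mic
d = s + np.convolve(noise_ref, path)[:L]       # primary: s(n) + N0(n)

N, mu, eps = 8, 0.05, 1e-8
w = np.zeros(N)
xb = np.zeros(N)
e = np.zeros(L)
for n in range(L):
    xb = np.roll(xb, 1)
    xb[0] = noise_ref[n]
    y = w @ xb                                 # estimate of the noise in d(n)
    e[n] = d[n] - y                            # error signal = cleaned output
    w += (mu / (eps + xb @ xb)) * e[n] * xb    # NLMS update

# The error converges towards s(n), not towards zero.
tail = slice(-2000, None)
print(np.mean((e[tail] - s[tail]) ** 2))       # small residual, but nonzero
```

Note that the filter can only cancel what is correlated with the reference input: s(n) passes through untouched, which is exactly why the error signal converges to s(n) rather than to zero.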
Adaptive Inverse System Configuration
The adaptive inverse system configuration is shown in the figure. The goal of the adaptive filter here is to model the inverse of the unknown system u(n). This is particularly useful in adaptive equalization, where the goal of the filter is to eliminate any spectral changes caused by a prior system or transmission line. The filter works as follows: the input x(n) is sent through the unknown system u(n) and then through the adaptive filter, resulting in an output y(n). The input is also sent through a delay to obtain d(n). As the error signal converges to zero, the adaptive filter coefficients w(n) converge to the inverse of the unknown system u(n). For this configuration, as for the system identification configuration, the error can theoretically go to zero. This will be true, however, only if the unknown system consists of a finite number of poles or the adaptive filter is an IIR filter. If neither of these conditions is true, the system will converge only to a nonzero constant, due to the limited number of zeros available in an FIR system.
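A minimal sketch of this configuration (hypothetical 2-tap channel, delay, and step size; not from the original text) shows the adaptive filter learning the delayed inverse of an unknown channel:

```python
import numpy as np

rng = np.random.default_rng(4)

L = 20000
x = rng.choice([-1.0, 1.0], size=L)       # transmitted symbols
chan = np.array([1.0, 0.6])               # unknown dispersive channel u(n)
r = np.convolve(x, chan)[:L]              # received, distorted signal

N, mu, eps, delay = 16, 0.2, 1e-8, 8      # delay lets the FIR inverse be causal
w = np.zeros(N)
xb = np.zeros(N)
e = np.zeros(L)
for n in range(L):
    xb = np.roll(xb, 1)
    xb[0] = r[n]
    y = w @ xb                            # equalizer output
    d = x[n - delay] if n >= delay else 0.0   # delayed reference d(n)
    e[n] = d - y
    w += (mu / (eps + xb @ xb)) * e[n] * xb   # NLMS update

# Equalized output matches the delayed input closely after convergence.
print(np.mean(e[-2000:] ** 2))
```

Because the chosen channel is minimum-phase, its inverse is a decaying impulse response that 16 FIR taps approximate well; a channel with zeros on the unit circle would leave the residual error stuck at a nonzero constant, as the text notes.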
(b) Adaptive channel equalization: used in a modem to reduce the channel distortion resulting from high-speed data transmission over telephone channels.
The aim is to minimize a cost function equal to the expectation of the square of the difference between the desired signal d(n) and the actual output of the adaptive filter y(n):

ξ(n) = E[e²(n)] = E[(d(n) - y(n))²]
and the MSE. The value is decided through trial and error so that the speed at which the adaptive filter learns and the excess MSE stay within application requirements. The values differ from simulation to real time because of the inherent differences between the systems.

Convergence Rate
The convergence rate determines the rate at which the filter converges to its resultant state. Usually a faster convergence rate is a desired characteristic of an adaptive system. Convergence rate is not, however, independent of the other performance characteristics: there will be a tradeoff in other performance criteria for an improved convergence rate, and a decrease in convergence performance for an improvement elsewhere. For example, if the convergence rate is increased, the stability characteristics will decrease, making the system more likely to diverge instead of converging to the proper solution. Likewise, a decrease in convergence rate can make the system more stable. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.

Minimum Mean Square Error
The minimum mean square error (MSE) is a metric indicating how well a system can adapt to a given solution. A small minimum MSE is an indication that the adaptive system has accurately modeled, predicted, adapted and/or converged to a solution for the system. A very large MSE usually indicates that the adaptive filter cannot accurately model the given system, or that the initial state of the adaptive filter is an inadequate starting point for convergence. A number of factors help determine the minimum MSE, including, but not limited to: quantization noise, order of the adaptive system, measurement noise, and error of the gradient due to the finite step size.

Computational Complexity
Computational complexity is particularly important in real-time adaptive filter applications. When a real-time system is being implemented, there are hardware limitations that may affect the performance of the system: a highly complex algorithm will require much greater hardware resources than a simple one.

Stability
Stability is probably the most important performance measure for the adaptive system. By the nature of the adaptive system, there are very few completely asymptotically stable systems that can be realized. In most cases the systems that are implemented are marginally stable, with the stability determined by the initial conditions, the transfer function of the system and the step size of the input.

Robustness
The robustness of a system is directly related to its stability. Robustness is a measure of how well the system can resist both input and quantization noise.

Filter Length
The filter length of the adaptive system is inherently tied to many of the other performance measures. The length of the filter specifies how accurately a given system can be modeled by the adaptive filter. In addition, the filter length affects the convergence rate, by increasing or decreasing computation time; it can affect the stability of the system, at certain step sizes; and it affects the minimum MSE. If the filter length is increased, the number of computations will increase, decreasing the maximum convergence rate. Conversely, if the filter length is decreased, the number of computations will decrease, increasing the maximum convergence rate. For stability, increasing the length of the filter for a given system may add additional poles or zeros that are smaller than those that already exist; in this case the maximum step size, or maximum convergence rate, will have to be decreased to maintain stability.
Finally, if the system is underspecified, meaning there are not enough poles and/or zeros to model the system, the mean square error will converge to a nonzero constant. If the system is overspecified, meaning it has too many poles and/or zeros for the system model, it will have the potential to converge to zero, but the increased calculations will affect the maximum convergence rate possible.
Tracking
When an adaptive filtering algorithm operates in a non-stationary environment, it is required to track statistical variations in that environment. The tracking performance of the algorithm, however, is influenced by two contradictory features: (1) rate of convergence, and (2) steady-state fluctuation due to algorithm noise.

Robustness
For an adaptive filter to be robust, small disturbances (i.e., disturbances with small energy) can only result in small estimation errors. The disturbances may arise from a variety of factors, internal or external to the filter.

Computational requirements
Here the issues of concern include: (a) the number of operations (i.e., multiplications, divisions, and additions/subtractions) required to make one complete iteration of the algorithm; (b) the size of the memory locations required to store the data and the program; and (c) the investment required to program the algorithm on a computer.
to solve the Wiener-Hopf equations (i.e., the matrix equation defining the optimum Wiener solution); the iterative procedure is based on the method of steepest descent, which is a well-known technique in optimization theory. This method requires the use of a gradient vector, the value of which depends on two parameters: the correlation matrix of the tap inputs of the transversal filter and the cross-correlation vector between the desired response and the same tap inputs. Next, we use instantaneous values for these correlations to derive an estimate of the gradient vector, making it assume a stochastic character in general. The resulting algorithm is widely known as the least mean square (LMS) algorithm, the essence of which, for the case of a transversal filter operating on real-valued data, may be described as

W(n+1) = W(n) + 2μ e(n) X(n)
Where the error signal is defined as the difference between some desired response and the actual response of the transversal filter produced by the tap input vector.
b. Generating an estimation error by comparing this output with a desired response.
2. An adaptive process, which involves the automatic adjustment of the parameters of the filter in accordance with the estimation error.
The combination of these two processes working together constitutes a feedback loop. First, we have a transversal filter, around which the LMS algorithm is built; this component is responsible for performing the filtering process. Second, we have a mechanism for performing the adaptive control process on the tap weights of the transversal filter. With each iteration of the LMS algorithm, the tap weights of the adaptive filter are updated according to the following formula (Farhang-Boroujeny 1999):

W(n+1) = W(n) + 2μ e(n) X(n)
weight vector and the current gradient of the cost function with respect to the filter tap weight coefficient vector,
W(n+1) = W(n) - μ ∇ξ(n)

where ξ(n) = E[e²(n)]
As the negative gradient vector points in the direction of steepest descent for the N-dimensional quadratic cost function, each recursion shifts the value of the filter coefficients closer toward their optimum value, which corresponds to the minimum achievable value of the cost function, ξ(n). The LMS algorithm is a random-process implementation of the steepest descent algorithm: here the expectation of the error signal is not known, so the instantaneous value is used as an estimate. The steepest descent algorithm then becomes
W(n+1) = W(n) - μ ∇ξ̂(n), where ∇ξ̂(n) = ∇e²(n)

The gradient of the cost function can alternatively be expressed as

∇e²(n) = 2 e(n) ∂e(n)/∂W(n) = -2 e(n) X(n)

since e(n) = d(n) - W^T(n) X(n). Substituting this into the steepest descent recursion above, we arrive at the recursion for the LMS adaptive algorithm:

W(n+1) = W(n) + 2μ e(n) X(n)
Each iteration of the LMS algorithm requires 3 distinct steps in this order:

1. The output of the FIR filter, y(n), is calculated using the equation

y(n) = W^T(n) X(n) = Σ_{i=0}^{N-1} w_i(n) x(n-i)

2. The value of the error estimate is calculated using the equation

e(n) = d(n) - y(n)

3. The tap weights of the FIR vector are updated in preparation for the next iteration, by the equation

W(n+1) = W(n) + 2μ e(n) X(n)
Its computational simplicity makes it easier to implement than all other commonly used adaptive algorithms. For each iteration the LMS algorithm requires 2N additions and 2N+1 multiplications (N for calculating the output y(n), one for 2μe(n), and an additional N for the scalar-by-vector multiplication).
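The three steps above map directly onto code. The sketch below (a hypothetical system-identification run with a made-up 4-tap unknown system; not from the original text) implements the textbook LMS recursion:

```python
import numpy as np

rng = np.random.default_rng(5)

def lms(x, d, N, mu):
    """Standard LMS: per iteration, N mults for y(n), 1 for 2*mu*e(n),
    and N for the scalar-times-vector update, i.e. 2N+1 in total."""
    w = np.zeros(N)
    xb = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xb = np.roll(xb, 1)
        xb[0] = x[n]
        y = w @ xb                    # step 1: filter output
        e[n] = d[n] - y               # step 2: error estimate
        w += 2 * mu * e[n] * xb       # step 3: tap-weight update
    return w, e

# System identification: the filter should learn the unknown FIR system.
unknown = np.array([0.5, -0.3, 0.1, 0.05])
x = rng.standard_normal(5000)
d = np.convolve(x, unknown)[:len(x)]
w, e = lms(x, d, N=4, mu=0.01)
print(np.max(np.abs(w - unknown)))   # converges close to the true taps
```

The small fixed step size (here 0.01, an assumption) is what keeps the recursion stable; too large a value makes the weights diverge, as the stability discussion above anticipates.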
tr[R] = Σ_{i=0}^{N-1} E[x²(n-i)]

where tr[R] denotes the trace of the autocorrelation matrix R of the filter tap inputs; for a stationary input this equals N·E[x²(n)].
W(n+1) = W(n) + (μ / (ε + X^T(n) X(n))) e(n) X(n)
The NLMS algorithm is simple to implement and computationally efficient. It shows very good attenuation, and its variable step size allows stable performance with non-stationary signals. This made it the obvious choice for real-time implementation.

4.8.1 Derivation of the NLMS algorithm
To derive the NLMS algorithm we consider the standard LMS recursion, for which we select a variable step size parameter, μ(n). This parameter is selected so that the a-posteriori error value, e+(n), will be minimized using the updated filter tap weights, W(n+1), and the current input vector, X(n).
W(n+1) = W(n) + 2 μ(n) e(n) X(n)
μ(n) = 1 / (2 X^T(n) X(n))
This μ(n) is then substituted into the standard LMS recursion in place of μ, resulting in the following.
W(n+1) = W(n) + (μ / (ε + X^T(n) X(n))) e(n) X(n)
Here ε is a small positive constant included to avoid division by zero when the values of the input vector are zero. It was not needed in the real-time implementation, as in practice the input signal never reaches zero due to noise from the microphone and from the A/D codec on the Texas Instruments DSK. The parameter μ is a constant step size used to alter the convergence rate of the NLMS algorithm; it lies in the range 0 < μ < 2 and is usually equal to 1.
4.8.2 Implementation of the NLMS algorithm
The NLMS algorithm has been implemented in MATLAB and in a real-time application using the Texas Instruments TMS320C6711 Development Kit. As the step size parameter is chosen based on the current input values, the NLMS algorithm shows far greater stability with unknown signals. This, combined with good convergence speed and relative computational simplicity, makes the NLMS algorithm ideal for the real-time adaptive echo cancellation system. The code for both the MATLAB and TI Code Composer Studio applications can be found in Appendices A and B respectively. As the NLMS is an extension of the standard LMS algorithm, the NLMS algorithm's practical implementation is very similar to that of the LMS algorithm. Each iteration of the NLMS algorithm requires these steps in the following order:
1. The output of the adaptive filter is calculated.
y(n) = W^T(n) X(n) = Σ_{i=0}^{N-1} w_i(n) x(n-i)
2. An error signal is calculated as the difference between the desired signal and the filter output.
e(n) = d(n) - y(n)
3. The step size value for the input vector is calculated:

μ(n) = μ / (ε + X^T(n) X(n))
4. The filter tap weights are updated in preparation for the next iteration:

W(n+1) = W(n) + μ(n) e(n) X(n)

Each iteration of the NLMS algorithm requires 3N+1 multiplications; this is only N more than the standard LMS algorithm, which is an acceptable increase considering the gains in stability and echo attenuation achieved.
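A sketch of these four steps in Python (hypothetical system-identification setup with a made-up 3-tap unknown system; the scaling experiment illustrates why the normalization helps):

```python
import numpy as np

rng = np.random.default_rng(6)

def nlms(x, d, N, mu=1.0, eps=1e-8):
    w = np.zeros(N)
    xb = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(len(x)):
        xb = np.roll(xb, 1)
        xb[0] = x[n]
        y = w @ xb                           # 1. filter output
        e[n] = d[n] - y                      # 2. error signal
        step = mu / (eps + xb @ xb)          # 3. normalized step size
        w += step * e[n] * xb                # 4. tap-weight update
    return w, e

# The normalization makes convergence insensitive to the input power:
# scaling x by 10 barely changes the convergence behaviour.
unknown = np.array([0.4, 0.25, -0.1])
for scale in (1.0, 10.0):
    x = scale * rng.standard_normal(3000)
    d = np.convolve(x, unknown)[:len(x)]
    w, e = nlms(x, d, N=3)
    print(scale, np.max(np.abs(w - unknown)))
```

With plain LMS, scaling the input by 10 would multiply the effective step size by 100 and typically cause divergence; dividing by X^T(n)X(n) removes that sensitivity, which is the stability property the text attributes to NLMS.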
CONCLUSION
Proposed SA algorithms are becoming more complex and involve combinations with processing in the time domain, multiuser detection, ST coding, and multiple antennas at the MS. There are a number of parameters, such as the level of CCI reduction, diversity gain, and SNR, which can be improved with an SA. Some of these parameters can be interdependent and even conflicting; their importance and tradeoffs need to be decided on a cell-by-cell basis. The following parameters should be taken into consideration: propagation, interference environment, user mobility, and requirements for link quality. Network coverage and capacity in urban macrocells can at least be doubled with existing SA receivers. To achieve sensible capacity improvements in an urban microcell, the more complex SA algorithms discussed in this work are required.
APPENDIX
Appendix A- MATLAB
A.1. Features of MATLAB used in this project:
MATLAB (short for "matrix laboratory") is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or FORTRAN. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
A.1.1. Toolboxes in MATLAB
1. Signal processing
2. Image processing
3. Control systems
4. Neural networks
5. Communications
6. Robust control
7. Statistics

A.1.2. Typical uses of MATLAB
1. Math and computation
2. Algorithm development
3. Data acquisition
4. Data analysis, exploration, and visualization
5. Scientific and engineering graphics
6. Modeling, simulation, and prototyping
7. Application development, including graphical user interface building

A.1.3. Main features of MATLAB
1. Advanced algorithms for high-performance numerical computation, especially in the field of matrix algebra.
2. A large collection of predefined mathematical functions and the ability to define one's own functions.
3. Two- and three-dimensional graphics for plotting and displaying data.
4. A complete online help system.
5. A powerful, matrix- and vector-oriented high-level programming language for individual applications.
6. Toolboxes available for solving advanced problems in several application areas.
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces for your MATLAB applications.

5. The MATLAB Application Program Interface (API): a library that allows you to write C and FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.
Appendix B- SIMULINK
When it starts, Simulink brings up two windows. The first is the main Simulink window.
The second window is a blank, untitled, model window. This is the window into which a new model can be drawn.
Blocks:
There are several general classes of blocks:
Sources: used to generate various signals
Sinks: used to output or display signals
Discrete: linear, discrete-time system elements (transfer functions, state-space models, etc.)
Nonlinear: nonlinear operators (arbitrary functions, saturation, delay, etc.)
Connections: multiplex, demultiplex, system macros, etc.
Blocks have zero to several input terminals and zero to several output
terminals. Unused input terminals are indicated by a small open triangle. Unused output terminals are indicated by a small triangular point. The block shown below has an unused input terminal on the left and an unused output terminal on the right.
Lines:
Lines transmit signals in the direction indicated by the arrow. Lines must always transmit signals from the output terminal of one block to the input terminal of another block. One exception to this is that a line can tap off another line, splitting the signal to each of two destination blocks. Lines can never inject a signal into another line; signals must be combined through the use of a block such as a summing junction. A signal can be either a scalar signal or a vector signal. For single-input, single-output systems, scalar signals are generally used. For multi-input, multi-output systems, vector signals are often used, consisting of two or more scalar signals. The lines used to transmit scalar and vector signals are identical; the type of signal carried by a line is determined by the blocks on either end of the line.
Modifying Blocks:
A block can be modified by double-clicking on it. Double-clicking the block you want to modify opens a dialog box containing fields for the parameters of that block. Enter the desired values into these fields to modify the block. After entering the values, hit the close button; the model window will change according to the values entered.
Running Simulations:
Before running a simulation of a system, first open the sink block (for example, a Scope window) by double-clicking on it. Then, to start the simulation, either select Start from the Simulation menu or hit Ctrl-T in the model window. The simulation runs very quickly and the sink display shows the result. If the simulation output is at a very low level relative to the axes of the scope or display, hit the autoscale button (binoculars), which will rescale the axes.
Building Systems:
Systems are built in Simulink using the building blocks in Simulink's Block Libraries. First gather all the necessary blocks from the block libraries. Then modify the blocks so they correspond to the blocks in the desired model. Finally, connect the blocks with lines to form the complete system. After this, simulate the complete system to verify that it works.
Block parameters can be defined from MATLAB variable. Signals can be exchanged between Simulink and MATLAB. Entire systems can be extracted from Simulink into MATLAB.
Close this dialog box. Notice now that the Gain block in the Simulink model shows the variable K rather than a number.
Besides variables, signals and even entire systems can be exchanged between MATLAB and Simulink. Simulink is a platform for multidomain simulation and Model-Based Design for dynamic systems. It provides an interactive graphical environment and a customizable set of block libraries, and can be extended for specialized applications.