
LINEAR CONVOLUTION

clc
clear all
close all
x = input('Enter the input sequence x(n) = ');
lx = input('Starting time index of x(n) = ');
h = input('Enter the impulse response h(n) = ');
lh = input('Starting time index of h(n) = ');
y = conv(x, h)
n = lx+lh : length(y)+lx+lh-1;
subplot(311)
stem(lx:lx+length(x)-1, x)
xlabel('time')
ylabel('amp')
title('input')
subplot(312)
stem(lh:lh+length(h)-1, h)
xlabel('time')
ylabel('amp')
title('impulse')
subplot(313)
stem(n, y)
xlabel('time')
ylabel('amp')
title('output')
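As a cross-check of the listing above, the direct convolution sum y(n) = Σ x(k) h(n-k) can be sketched in plain Python (an illustration only; the function name is ours, not a library routine):

```python
# Direct O(N*M) linear convolution of two finite sequences.
def linear_conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj   # accumulate x(k) * h(n - k)
    return y

print(linear_conv([1, 2, 3], [1, 1]))   # [1.0, 3.0, 5.0, 3.0]
```

As in the MATLAB code, the output sequence starts at time index lx + lh and has length length(x) + length(h) - 1.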

DECONVOLUTION
clc
clear all
close all
y = input('Enter the output sequence y(n) = ');
h = input('Enter the impulse response h(n) = ');
x = deconv(y, h)
subplot(311)
stem(y)
title('output y(n)')
subplot(312)
stem(h)
title('impulse response h(n)')
subplot(313)
N = length(x);
n = 0:N-1;
stem(n, x)
title('recovered input x(n)')
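MATLAB's deconv performs polynomial long division. A rough pure-Python equivalent (illustrative only, and exact only when y really is a convolution of h with some finite sequence):

```python
# Polynomial long division: recover x such that conv(x, h) = y.
def deconvolve(y, h):
    n = len(y) - len(h) + 1
    r = list(y)                  # running remainder
    x = []
    for i in range(n):
        q = r[i] / h[0]          # next quotient coefficient
        x.append(q)
        for j, hj in enumerate(h):
            r[i + j] -= q * hj   # subtract q * h shifted to position i
    return x

print(deconvolve([1, 3, 5, 3], [1, 1]))   # [1.0, 2.0, 3.0]
```

Note this inverts the convolution example exactly: convolving [1, 2, 3] with [1, 1] gives [1, 3, 5, 3].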

VIVA 3dB
Generally speaking, a filter's cutoff frequency is not necessarily defined at -3 dB. Such is the case for Butterworth filters: as a direct result of Butterworth's original formulation, the gain at the cutoff frequency is 0.707 regardless of order (see http://en.wikipedia.org/wiki/Butterworth_filter). In contrast, a Chebyshev filter is defined differently, allowing the designer to specify the desired amount of ripple within a given passband or stopband. Unlike both the Butterworth and Chebyshev filters, a Bessel filter is defined with respect to phase response, with the design objective of approximating a delay line: a maximally flat phase response in a given passband, i.e. near-constant delay time for all frequencies within that passband. Whereas a Butterworth filter's order can be increased for greater stopband attenuation while keeping a constant -3 dB cutoff frequency, a Bessel filter's order can also be increased for greater stopband attenuation, but while maintaining a constant group delay within some passband (see www.mathworks.com/help/toolbox/signal/ref/besself.html), resulting in a varying -3 dB cutoff frequency. Since different filter designs aim at different objectives, it can be misleading to compare them on the basis of just a 3 dB point. It is also worth noting, for example, that for a first-order low-pass filter there is a useful symmetry in the Bode phase response relative to the -3 dB frequency, which is clearer to see in the Nyquist plot, and even clearer when compared to a first-order high-pass filter with the same -3 dB frequency.
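The order-independent -3 dB point follows directly from the standard Butterworth magnitude formula |H(jω)| = 1/√(1 + (ω/ωc)^(2n)). A quick numerical check (function name and frequency values are ours, for illustration):

```python
import math

def butterworth_gain(w, wc, n):
    """Magnitude response of an n-th order Butterworth low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (w / wc) ** (2 * n))

# At the cutoff frequency the gain is 1/sqrt(2) ~= 0.707 for every order:
for n in (1, 2, 4, 8):
    print(n, butterworth_gain(1000.0, 1000.0, n))
# Above cutoff, higher order gives steeper (greater) stopband attenuation:
print(butterworth_gain(2000.0, 1000.0, 2), butterworth_gain(2000.0, 1000.0, 8))
```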

Sampling theorem
The Nyquist–Shannon sampling theorem states that perfect reconstruction of a signal is possible when the sampling frequency is greater than twice the maximum frequency of the signal being sampled, or equivalently, when the Nyquist frequency (half the sample rate) exceeds the highest frequency of the signal being sampled. If lower sampling rates are used, the original signal's information may not be completely recoverable from the sampled signal.[2] For example, if a signal has an upper band limit of 100 Hz, a sampling frequency greater than 200 Hz will avoid aliasing and would theoretically allow perfect reconstruction. The full range of human hearing is between 20 Hz and 20 kHz.[3] The minimum sampling rate that satisfies the sampling theorem for this full bandwidth is 40 kHz. The 44.1 kHz sampling rate used for Compact Disc was chosen for this and other technical reasons.
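When the theorem is violated, a tone folds down to an apparent frequency in [0, fs/2]. This folding can be computed directly (the helper name is ours, not a library function):

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency after sampling, folded into [0, f_sample/2]."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 100 Hz tone sampled at only 150 Hz (< 2 * 100) aliases down to 50 Hz:
print(alias_frequency(100, 150))   # 50
# Sampling above the Nyquist rate preserves the frequency:
print(alias_frequency(100, 250))   # 100
```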

Baud (/bɔːd/, unit symbol "Bd") is synonymous with symbols per second or pulses per second. It is the unit of symbol rate, also known as baud rate or modulation rate: the number of distinct symbol changes (signaling events) made to the transmission medium per second in a digitally modulated signal or a line code. Baud is related to, but should not be confused with, gross bit rate expressed in bit/s. However, though technically incorrect, modem manufacturers commonly use baud to refer to bits per second, making a distinction by also using the term characters per second (CPS). In these anomalous cases, refer to the modem manufacturer's documentation to ensure an understanding of their use of the term "baud". An example would be the 1996 User's Guide for the U.S. Robotics Sportster modem, which includes such definitions.
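The relation between the two rates is simple: each symbol carries log2(M) bits when M distinct symbol levels are used. A small sketch (function name is ours):

```python
import math

def gross_bit_rate(baud, levels):
    """bit/s = symbols per second * bits per symbol (log2 of symbol levels)."""
    return baud * math.log2(levels)

# 2400 baud with 16-point QAM (4 bits per symbol) carries 9600 bit/s:
print(gross_bit_rate(2400, 16))   # 9600.0
# With binary signaling (2 levels), baud and bit/s coincide:
print(gross_bit_rate(2400, 2))    # 2400.0
```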

The Barkhausen stability criterion is a mathematical condition to determine when a linear electronic circuit will oscillate.[1][2][3] It was put forth in 1921 by German physicist Heinrich Georg Barkhausen (1881–1956).[4] It is widely used in the design of electronic oscillators, and also in the design of general negative-feedback circuits such as op amps, to prevent them from oscillating.
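The criterion requires loop gain |βA| = 1 with zero net phase shift around the loop. As a concrete illustration, the classic Wien-bridge oscillator meets it at ω₀ = 1/RC when the amplifier gain is 3 (the component values below are arbitrary examples, not from this manual):

```python
import cmath

def wien_loop_gain(w, R, C, A):
    """Loop gain A*beta(jw) of a Wien-bridge oscillator; the Wien network
    has the standard response beta(jw) = 1 / (3 + j(wRC - 1/(wRC)))."""
    x = w * R * C
    beta = 1.0 / complex(3.0, x - 1.0 / x)
    return A * beta

R, C = 10e3, 16e-9               # hypothetical example component values
w0 = 1.0 / (R * C)               # zero-phase frequency of the Wien network
T = wien_loop_gain(w0, R, C, 3.0)
print(abs(T), cmath.phase(T))    # magnitude ~1, phase ~0: Barkhausen met
```

With amplifier gain below 3 the loop gain magnitude drops below 1 and the oscillation dies out, which is why practical Wien-bridge designs stabilize the gain just above 3.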

Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a (countable) smaller set, such as rounding values to some unit of precision. A device or algorithmic function that performs quantization is called a quantizer. The round-off error introduced by quantization is referred to as quantization error. In analog-to-digital conversion, the difference between the actual analog value and the quantized digital value is called quantization error or quantization distortion. This error is due either to rounding or to truncation. The error signal is sometimes modeled as an additional random signal called quantization noise because of its stochastic behaviour. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
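A minimal sketch of a uniform rounding quantizer (illustrative; the step size is an arbitrary example), showing that the error is bounded by half a step:

```python
def quantize(x, step):
    """Map x to the nearest multiple of the quantization step (rounding)."""
    return step * round(x / step)

step = 0.1
for s in [0.04, 0.37, -0.262, 0.449]:
    q = quantize(s, step)
    # rounding keeps the error within half a step either side
    assert abs(s - q) <= step / 2
    print(s, '->', q)
```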

Quantization Error
By Sweetwater, Dec 18, 1997

Error resulting from trying to represent a continuous analog signal with discrete, stepped digital data. The problem arises when the analog value being sampled falls between two digital steps. When this happens, the analog value must be represented by the nearest digital value, resulting in a very slight error. In other words, the difference between the continuous analog waveform and the stair-stepped digital representation is quantization error. For a sine wave, quantization error will appear as extra harmonics in the signal. For music or program material, the signal is constantly changing and quantization error appears as wideband noise, cleverly referred to as quantization noise. It is extremely difficult to measure or spec quantization noise, since it only exists when a signal is present. Quantization error is one reason higher digital resolutions (longer word lengths) and higher sample rates sound better to our ears; the steps become finer, reducing quantization errors.
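The "finer steps" point can be made quantitative with the standard rule of thumb for the signal-to-quantization-noise ratio of a full-scale sine wave, SQNR ≈ 6.02·N + 1.76 dB for an N-bit quantizer:

```python
def sqnr_db(bits):
    """Theoretical SQNR (dB) of an N-bit quantizer for a full-scale sine,
    from the standard rule of thumb 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

print(sqnr_db(16))   # ~98.1 dB (CD audio word length)
print(sqnr_db(24))   # ~146.2 dB
```

Each extra bit halves the step size and buys roughly 6 dB of noise floor.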

Nyquist stability criterion




In control theory and stability theory, the Nyquist stability criterion, discovered by Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932,[1] is a graphical technique for determining the stability of a system. Because it only looks at the Nyquist plot of the open-loop system, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right-half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes. While Nyquist is one of the most general stability tests, it is still restricted to linear, time-invariant systems. Nonlinear systems must use more complex stability criteria, such as Lyapunov or the circle criterion. While Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool.
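The encirclement count at the heart of the criterion (Z = N + P: closed-loop right-half-plane poles equal clockwise encirclements of -1 plus open-loop right-half-plane poles) can be estimated numerically. The sketch below uses a made-up plant, L(s) = k/(s+1)^3, which has no open-loop right-half-plane poles (P = 0) and a well-known gain margin of k = 8:

```python
import cmath, math

def encirclements(k, W=200.0, num=80001):
    """Clockwise encirclements of -1 by L(jw) = k/(jw+1)^3 for w in [-W, W].
    With P = 0, this equals the number of closed-loop RHP poles Z."""
    total = 0.0
    prev = None
    for i in range(num):
        w = -W + 2.0 * W * i / (num - 1)
        L = k / (1j * w + 1.0) ** 3
        ang = cmath.phase(L + 1.0)     # angle of the vector from -1 to L(jw)
        if prev is not None:
            d = ang - prev
            if d > math.pi:            # unwrap +/-pi phase jumps
                d -= 2.0 * math.pi
            elif d < -math.pi:
                d += 2.0 * math.pi
            total += d
        prev = ang
    return round(-total / (2.0 * math.pi))   # clockwise counted positive

print(encirclements(2.0))    # 0 -> no encirclements, closed loop stable
print(encirclements(10.0))   # 2 -> two closed-loop RHP poles, unstable
```

Above the critical gain, the Nyquist curve crosses the negative real axis left of -1 and wraps it twice, matching the two unstable closed-loop poles found by solving (s+1)^3 = -k directly.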

The primary function of an amplifier is to convert an input signal into an output signal, amplifying it in the process. For example, an electric guitar plugged into the input of an amplifier transmits a signal that is converted to an output louder than it initially was. Amplifiers can also be used to shape the sound of an input signal through the adjustment of bass, treble and other variables. Unlike amplifiers, oscillators have a built-in autonomous circuit that creates an overlap between the input and output signals. The result is an ever-repeating oscillation, which can be a square wave, chaotic wave or other signal.

Oscillators and amplifiers are similar in that amplifiers can be made to oscillate through an increase in gain, and oscillators can be made to amplify simply by the nature of how they work. The major difference between the two is the autonomous circuit that is characteristic of the oscillator; while amplifiers are capable of replicating such a circuit, it is not what they are most commonly used for.
