
Vibration Data Analysis

Daniel R. Kiracofe

Revision March 16th, 2007

1 Preface

There are a good number of books which deal with vibration, but I have not
found a single book which focuses on the analysis of experimental vibration
data, which is what I find myself doing quite a bit of. I have put this together
to serve both as a reference for myself (e.g. collecting various bits of knowledge
together for easy access) and hopefully as a reference for others.¹ I have done
similar writeups of other topics in the past and found them to be very well
received by the Internet community. This is going to be a work in progress for
some time. So if there is a particular area that is blank which you have a specific
question about, please e-mail me (kiracofe.8@osu.edu or drk@current.net) and
I will try to answer your questions. If you have more extensive questions or
problems, I may be able to provide consulting services for a nominal fee. Also,
if you wish to make a contribution by providing text, examples, or references
from your area of expertise, please let me know.
Occasionally I have made notes to myself on areas I wish to come back to.
These are marked as FIXME. You can ignore those.

2 Filtering

I remember that when I was first learning about signal processing, I found a
book from the library and was intrigued that more than half the book dealt
with filtering. "But filtering is such a simple thing, surely there is more to
signal processing than just filtering," I said to myself. As I have since learned,
although the basic concept of a filter is fairly straightforward, the correct,
efficient implementation of a filter can be very subtle.

¹ Further, the act of writing this down has helped me find where the gaps are in my
knowledge, and I have subsequently learned quite a bit filling those in. In my opinion, one of
the best ways to learn something is to try to teach it to someone else. If you can't teach it,
then you don't know it.

2.1 Basic filter concepts
2.1.1 Bandpass filters
One misconception that I had early on with this: since a lowpass filter keeps
low frequencies, a highpass filter keeps high frequencies, and a bandpass
filter keeps a range of frequencies, a bandpass filter must be equivalent to simply
a highpass and a lowpass filter in series. Further, I thought, since a bandpass
filter always has an even number of poles, that must be because there is one
pole for the highpass part and one pole for the lowpass part. Although one can
create such a filter, a true bandpass filter is different from just a highpass and
lowpass filter in series, and the even number of poles has nothing to do with one for
each part. Practically, it is important to understand this distinction, since a two
pole pair bandpass filter has a much sharper cutoff than a one pole lowpass and
one pole highpass filter combined in series.
A simple lowpass filter is a 1st order device. To be specific, the differential
equation that describes it has only first derivatives. Electrically, this would be
resistors and capacitors, but no inductors. Mechanically, this would be springs
and dashpots, but no masses. To get a multi-pole filter, one just stacks in more
springs and dashpots, but never adds any mass. A simple bandpass filter, on
the other hand, is a 2nd order device, so the differential equation has second
derivatives. Electrically, it involves an inductor, and mechanically it involves
a mass. The addition of the 2nd derivative means that the system now has a
natural frequency and can have a resonant response. The on-resonance
condition is the pass band of the filter and the off-resonance condition is
the stop band.
FIXME: elaborate on this point, maybe add some examples.
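As one quick example (a sketch in Python/scipy rather than Matlab, with an arbitrary sample rate and band of my choosing), we can compare the frequency response of a true two-pole-pair Butterworth bandpass filter against a one pole lowpass and one pole highpass connected in series:

```python
import numpy as np
from scipy import signal

fs = 1000.0               # sample rate, Hz (arbitrary for this sketch)
f_lo, f_hi = 40.0, 60.0   # passband edges, Hz

# True bandpass: order-2 prototype -> two pole pairs (4 poles total)
b_bp, a_bp = signal.butter(2, [f_lo, f_hi], btype="bandpass", fs=fs)

# Naive alternative: one-pole highpass followed by one-pole lowpass in series
b_hp, a_hp = signal.butter(1, f_lo, btype="highpass", fs=fs)
b_lp, a_lp = signal.butter(1, f_hi, btype="lowpass", fs=fs)

# Evaluate both responses below, inside, and above the band
f_eval = np.array([20.0, 50.0, 120.0])
_, h_bp = signal.freqz(b_bp, a_bp, worN=f_eval, fs=fs)
_, h_hp = signal.freqz(b_hp, a_hp, worN=f_eval, fs=fs)
_, h_lp = signal.freqz(b_lp, a_lp, worN=f_eval, fs=fs)
h_cascade = h_hp * h_lp   # series connection: responses multiply

for f, bp, ca in zip(f_eval, np.abs(h_bp), np.abs(h_cascade)):
    print(f"{f:6.1f} Hz: bandpass {20*np.log10(bp):6.1f} dB, "
          f"cascade {20*np.log10(ca):6.1f} dB")
```

The true bandpass is far more attenuating outside the band, and the cascade also shows a noticeable insertion loss inside the band because its two corner frequencies are so close together.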

2.2 Standard analog filter topologies

There are many different tradeoffs to be made in filter design. There are several
different standard analog filter topologies, each of which maximizes some
properties at the expense of others. Here is a concise summary of their
characteristics.

2.2.1 Butterworth
The Butterworth filter is maximally flat in the passband, but does not have
a very sharp rolloff. The Butterworth filter is an all-pole filter and it has a
monotonic frequency response (i.e. it keeps attenuating more and more as you
go out in frequency, in contrast to, say, an elliptic filter, which reaches some
maximum attenuation and then stops).
Personally, I use Butterworth filters frequently in vibration analysis because
they are the easiest to design. You specify a filter order and a cutoff frequency
and Matlab can do the rest. In contrast, an elliptic filter design needs to
specify the tradeoffs between attenuation and ripple. A lot of times I don't
want to think about that; I just want to get rid of some high frequency noise in
my signal.

2.2.2 Chebyshev
A Chebyshev filter has a sharper rolloff but more ripple than a Butterworth filter.

2.2.3 Elliptic (Cauer)

The elliptic filter is equiripple and has a very sharp rolloff.

2.2.4 Bessel
The Bessel filter has a maximally flat group delay. That is, its phase response
is maximally linear. This is important if one does not want to distort the shape
of signals with content at different frequencies (e.g. square waves).

2.3 Digital Filters

There are two types of digital filters, commonly called IIR and FIR, which stand
for Infinite Impulse Response and Finite Impulse Response respectively. What
this definition means is that if you ping an IIR filter, as the response decays
it will asymptotically approach zero but never quite reach it (e.g. 0.1, 0.01,
0.001, 0.0001, etc., but never exactly 0). The FIR filter, on the other hand,
will at some point reach zero exactly. Practically, what this means is that the
IIR filter uses feedback: the output at time t depends on the input at time t as
well as the output at time t-1, while the FIR filter depends only on the inputs.
We care about this because digital math can never have infinite precision; there
is always some rounding error. Feedback can tend to compound rounding errors,
so IIR filters can sometimes be unstable.
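The difference-equation view can be made concrete with a pair of toy filters (coefficients here are hypothetical, chosen only for illustration):

```python
import numpy as np

n = 20
x = np.zeros(n)
x[0] = 1.0  # unit impulse ("ping")

# FIR: output depends only on inputs -> y[t] = 0.5*x[t] + 0.5*x[t-1]
y_fir = np.zeros(n)
for t in range(n):
    y_fir[t] = 0.5 * x[t] + 0.5 * (x[t - 1] if t > 0 else 0.0)

# IIR: output feeds back -> y[t] = 0.1*x[t] + 0.9*y[t-1]
y_iir = np.zeros(n)
for t in range(n):
    y_iir[t] = 0.1 * x[t] + 0.9 * (y_iir[t - 1] if t > 0 else 0.0)

print(y_fir[:5])  # finite: exactly zero from t=2 onward
print(y_iir[:5])  # infinite: 0.1, 0.09, 0.081, ... never exactly zero
```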

2.3.1 IIR Filters

IIR filters are typically designed by designing an analog filter and then
converting it to an equivalent digital filter.

2.3.2 FIR Filters


Windowed Design
Remez Exchange (Parks-McClellan)
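Both design methods are available off the shelf; here is a hedged sketch in Python/scipy (the tap count, cutoff, and band edges are arbitrary choices of mine):

```python
import numpy as np
from scipy import signal

fs = 1000.0  # sample rate, Hz

# Windowed design: truncate the ideal sinc response with a window
taps_win = signal.firwin(numtaps=51, cutoff=100.0, window="hamming", fs=fs)

# Remez exchange (Parks-McClellan): equiripple over specified bands
taps_rem = signal.remez(numtaps=51, bands=[0, 100, 150, fs / 2],
                        desired=[1, 0], fs=fs)

# Both give linear-phase lowpass filters with (near) unity gain at DC
print(np.sum(taps_win), np.sum(taps_rem))
```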

3 The analog to digital conversion process

3.1 Aliasing and the Nyquist frequency


3.2 The delta-sigma ADC
http://www.beis.de/Elektronik/DeltaSigma/DeltaSigma.html

3.3 Oversampling [FIXME, need to think about this one more. How does it relate to zero-padding FFTs?]
http://www.bksv.com/pdf/Bv0047.pdf

4 Instrumentation

As far as I am concerned, the definitive book on instrumentation is Measurement
Systems by Ernest Doebelin [1]. I had the pleasure of taking a class from him
in graduate school and he is a god among men (or, at least, among engineers).
This book has information on every kind of measuring device that a mechanical
engineer would want to use. I will go over the highlights of common vibration
instruments here.

4.1 Accelerometers
4.1.1 High impedance (charge type) versus low impedance (ICP) piezoelectric accelerometers
The plain piezoelectric accelerometer has an output of charge proportional to
acceleration. Most analog to digital converters are

4.2 Strain gages


4.3 Tachometers and encoders
4.3.1 Introduction to tachometers
The tachometer is a very important sensor in vibration analysis of rotating
machines, since many of the important excitations and natural frequencies
depend on speed. A tachometer will output either a sine wave or a square wave
where the frequency (and sometimes the amplitude) depends on the
shaft speed. For steady-state conditions, one can get away with fairly simple
tachometer processing. But for transients where the speed changes very quickly,
some fairly complicated processing may be needed to get the data you want.
First, let's introduce some of the important parameters and tradeoffs in
designing a tachometer algorithm.

1. Update rate. How often does the tachometer algorithm give you a new
speed? Once a second, a hundred times a second?

2. Speed precision / number of significant digits. Can the tachometer
algorithm distinguish between 60 RPM and 61 RPM? Between 60 RPM and
60.0001 RPM?

3. Lag time. When there is a sudden change in the speed, say from 100 RPM
to 200 RPM, how long is it before the tachometer algorithm reaches the
new speed? (E.g. successive samples may say 100, 150, 175, 200. We want
to know how long before we get 200.) This is different from update rate.
You may be getting a new speed a hundred times a second, but that speed
may be half a second behind the actual shaft speed.

4. Accuracy. If the speed is a constant 100 RPM, does the tachometer report
that, or does it report 101 or 99?

5. Computational cost. For post-processing, the computational cost is
usually not important, but for online applications it may be a limiting factor.

Various tachometer algorithms will be developed, and the tradeoffs in these
parameters will be discussed. I will focus on algorithms for digital signal
processing, but analog processing is also available (frequency to voltage
converters).
For the various examples, assume a tachometer which gives 60 pulses per
revolution. The performance of the algorithms will be examined on a simulated
signal which starts at 10.5 RPM and then increases to 20.5 RPM in 1 second.

4.3.2 Some simple algorithms

The most basic algorithm is to simply count the number of cycles (zero crossings)
of the sine wave over some time period and then divide to get cycles per second.
So if we count for 1 second, and see 600 pulses, then we have 10 revolutions per
second, or 600 RPM (note the immediate attraction of the 60 tooth wheel: 600
RPM gives 600 Hz, so no conversion is necessary, which makes this a popular
number). See Figure 1 for the performance of this algorithm on the example.
The true speed is blue and this algorithm is red. Note the red dots show the
update rate is once per second. The speed bounces back and forth from 10 to 11:
the precision is 1 RPM, so the true speed cannot be displayed. On the transient,
the lag time is 2 seconds. Update rate and lag time could be decreased by
counting over, say, 0.5 s, but then the precision is only 2 RPM.
The precision could be increased by increasing the number of pulses per
revolution. This will be true of all the algorithms and so will not be mentioned
again.
This approach is adequate only for slowly varying speeds or where the
transient behavior is not of interest.
The simplest extension is to add overlap. For example, count how many
pulses are seen in a block of 0.1 second. Then, report the average over the past
10 blocks. We are still averaging over 1 second, but now the update rate is 10
times/s. See the green line in Figure 1. The precision is still 1 RPM, but the
lag time is now only 1.7 seconds.
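A sketch of the basic counting algorithm (in Python rather than Matlab; the constant-speed pulse train below is my own stand-in for the example signal):

```python
import numpy as np

def count_rpm(crossings, t_start, t_len=1.0, ppr=60):
    """Count tach zero crossings in a window; convert the count to RPM."""
    n = np.sum((crossings >= t_start) & (crossings < t_start + t_len))
    return (n / ppr) / t_len * 60.0

# Simulated tach: constant 10.5 RPM with a 60 pulse/rev wheel
speed_rpm = 10.5
pulses_per_s = speed_rpm * 60 / 60.0          # = 10.5 pulses/s
crossings = np.arange(0.0, 10.0, 1.0 / pulses_per_s)

# 1 s counting windows can only resolve whole RPM: readings bounce 10/11
readings = [count_rpm(crossings, t0) for t0 in range(9)]
print(readings)
```

With a 60 pulse/rev wheel and a 1 second window, the count in pulses per second is numerically equal to RPM, so the quantization of the count directly sets the 1 RPM precision.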

4.3.3 Tooth-to-tooth counting

In this method, instead of recording the number of zero crossings over a time
period, we instead record the times at which each zero crossing happens. Then,
we subtract subsequent crossing times to find a velocity. For example, if one
tooth crosses at t=0 and the next at t=0.001, then we have 1000 pulses/second,

[Figure: RPM versus time, comparing the true speed with the averaged and overlapped-average algorithm outputs]

Figure 1: Comparison of simple tachometer algorithms
or 16.66 revolutions per second (1000 RPM). See Figure 2 for the performance
of this algorithm. Compared to the previous one it looks almost perfect. Lag
time is a mere 0.050 s and the precision is approximately 0.1 RPM.
The speed precision in this case is driven by the time resolution of the signal
processor. At 20.5 RPM with 60 teeth, a zero crossing happens every 0.04875 s.
In this example, I assumed an A/D with a 4 kHz sample rate and assigned the
time of the zero crossing as the time of the sample nearest where the signal
crossed. So, the time resolution is 0.00025 s, and the algorithm cannot tell the
difference between an interval of 0.04875 s and an interval of 0.04900 s. The
difference between these two intervals works out to be approximately 0.1 RPM.
As shaft speed increases, the interval decreases, so an error of 0.00025 s becomes
a larger percentage of the interval. At 200 RPM, the precision of this example
would be only 10 RPM; at 2000 RPM, the precision is a horrible 700 RPM. This
will be illustrated in an example in the next section.
To get acceptable precision at high speeds, increasing the sampling rate
becomes prohibitive. There are two solutions which are normally employed. The
first is to use interpolation to determine the zero crossing time more accurately
instead of rounding to the nearest sample. The second is to use a hardware
timer device instead of an A/D converter. For example, the NI-6602 card can
return the time of zero crossings based on a very accurate time base (something
like 10 MHz, if I remember correctly).
Since the algorithm returns a speed point at every zero crossing, the update
rate increases with shaft speed and the lag time decreases with shaft speed.
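The precision limit from time quantization can be reproduced directly (a sketch assuming the same 4 kHz A/D and 60 tooth wheel as the text):

```python
import numpy as np

fs = 4000.0    # A/D sample rate, Hz
ppr = 60       # pulses per revolution
speed = 20.5   # true shaft speed, RPM (held constant)

# Ideal zero-crossing times, then quantized to the nearest A/D sample
tooth_dt = 60.0 / (speed * ppr)              # ~0.04878 s between teeth
true_times = np.arange(0, 5, tooth_dt)
meas_times = np.round(true_times * fs) / fs  # time resolution 0.00025 s

intervals = np.diff(meas_times)
rpm = 60.0 / (ppr * intervals)               # one interval = 1/60 revolution
print(rpm.min(), rpm.max())  # readings split between ~20.41 and ~20.51 RPM
```

The measured intervals can only take the values 0.04875 s and 0.04900 s, so the output toggles between two readings about 0.1 RPM apart even though the true speed is constant.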

4.3.4 Real-world problems with the tooth-to-tooth algorithm and the tooth-to-next-revolution algorithm
The tooth-to-tooth timing algorithm of the previous section looks good, but only
because we are using it on simulated sine waves. On real data, it does not fare
as well. To understand this, consider the fact that when we see a tachometer
tooth come by, that does not tell us the shaft speed, but actually the shaft
position. So the tachometer pulses give us a record of theta, the shaft angular
position versus time. Then we differentiate that signal to get velocity. As any
good numerical methods book will tell you, differentiation is an ill-conditioned
operation. That is, a very small change in the input (from noise) will cause a
very large change in the output.
There are many different sources of noise that can affect this algorithm, but
one of the biggest problems is tooth spacing errors on the tachometer wheel.
Consider an example in which 1 tooth out of 60 is 0.07% ahead of where it
should be and the tooth two places down is 0.07% behind where it should be.
This is the case in Figure 3 for the red trace. The reader can see that there is a
+/- 1 RPM spike that happens periodically. For a nominal shaft speed of 45 RPM,
this is a 2.2% error in the output for a 0.07% error in the input, an amplification
of 31 times. Shaft runout and rotating unbalance can also cause problems (green
trace).
One often used and straightforward way to correct these problems is to low

[Figure: RPM versus time, true speed versus tooth-to-tooth output]

Figure 2: Tooth-to-tooth timing algorithm
pass filter the output. Of course, this will introduce some lag time. Another way
is to determine the repetitive errors ahead of time (presumably at low speeds
where there are no dynamic effects) and then subtract the repetitive errors out.
This requires somewhat more computational effort. (I personally have never done
this, nor found it to be necessary. If anyone has done this, I would like to
hear about your application.) The way I use most frequently is to take the
time difference not between the zero crossings of successive teeth but between
a tooth's zero crossing and the zero crossing of that same tooth the next time
it comes around. Thus the repetitive errors are automatically cancelled out, at
the expense of adding one revolution's worth of lag time. At low shaft speeds,
one revolution is a long time compared to the lag introduced by the delay of a
low pass filter. But at high shaft speeds, it can be much shorter. For example,
in Figure 4 at 30 - 40 RPM, a 3 Hz low pass filter gives the better response, but
at 500 RPM (Figure 5), the tooth-to-next-revolution algorithm is better. Notice
that the speed precision of the tooth-to-next-rev algorithm is worse at higher
speeds, as was mentioned in the previous section. This example used a 10 kHz
sample rate.
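The cancellation can be demonstrated with a simulated wheel that has one misplaced tooth (the parameters below are hypothetical; the speed is held constant so any ripple in the output is pure error):

```python
import numpy as np

ppr = 60
speed = 45.0                  # constant shaft speed, RPM
rev_period = 60.0 / speed     # seconds per revolution

# Tooth angles in fractions of a revolution, one tooth 0.07% of a rev off
angles = np.arange(ppr) / ppr
angles[10] += 0.0007
times = np.concatenate([(k + angles) * rev_period for k in range(20)])

# Tooth-to-tooth: differentiate adjacent crossings (amplifies the error)
rpm_t2t = 60.0 / (ppr * np.diff(times))

# Tooth-to-next-revolution: same tooth one revolution later (error cancels)
rpm_t2r = 60.0 / (times[ppr:] - times[:-ppr])

print(np.ptp(rpm_t2t), np.ptp(rpm_t2r))  # multi-RPM spikes vs. essentially flat
```

The tooth-to-tooth output shows periodic spikes of a couple of RPM, while the tooth-to-next-revolution output is flat to floating-point precision, since the same spacing error appears in both crossing times and subtracts out.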

4.3.5 Using frequency demodulation to detect speed oscillations?

4.3.6 Using proximity-type tachometers to detect shaft whirl

5 Time domain analysis

5.1 Time domain differentiation and integration

6 Frequency domain analysis

6.1 The Discrete Fourier Transform and Fast Fourier Transform
Let us examine the four different things that we attach Fourier's name to (see
Table 1). The first one you probably learned about in your classes was the
Fourier series. This takes a periodic signal with a continuous time input and
gives an infinite number of frequency terms (technically, a countable infinity).
And then you learned about the Fourier transform, which takes a non-periodic
(i.e. infinite extent) signal with continuous time input and gives an infinite
number of frequency terms (technically, an uncountable infinity). But neither
of these is very useful for digital signal processing. We never have a continuous
time input; all we have is a discretely sampled time input. So, we might wish to
do the Discrete time-Fourier transform (DTFT), which takes a non-periodic signal
with discrete time input to an infinite number of frequency terms (a countable
infinity). But that is no good either, since we never have an infinite length data
sample. The best DSP money can buy only has a finite length buffer. So we must
resort to the DFT. One can think of the DFT as sampling the DTFT. The DTFT

[Figure: RPM versus time at a nominal 45 RPM, true speed versus tooth-to-tooth output with 0.2% runout and with a tooth spacing error]

Figure 3: Real world effects on the tooth-to-tooth algorithm

[Figure: RPM versus time at 34 - 48 RPM, true speed, tooth-to-next-rev, and tooth-to-tooth with a 3 Hz low pass filter]

Figure 4: The tooth to next revolution algorithm versus a low pass filter on the
tooth to tooth algorithm for a low speed.

[Figure: RPM versus time near 500 RPM, true speed, tooth-to-next-rev, and tooth-to-tooth with a 3 Hz low pass filter]

Figure 5: The tooth to next revolution algorithm versus a low pass filter on the
tooth to tooth algorithm for a high speed.

Table 1: The four Fourier-named transforms

                          Continuous time input    Discretely sampled time input
Periodic input signal     Fourier Series           Discrete Fourier Transform (DFT)
Non-periodic signal       Fourier Transform        Discrete time-Fourier Transform (DTFT)

computes an infinite number of frequency bins, and the DFT picks out a finite
number of those.
The Fast Fourier Transform (FFT) is simply an efficient method for
computing the DFT. The two terms are often used interchangeably, and often the
terminology is abused (which is par for the course). The FFT is a recursive
algorithm: it uses the divide-and-conquer strategy. For a problem of size N,
the FFT splits the problem into several smaller problems, solves the smaller
problems (recursively), and then assembles the results into the solution of the
original problem. For this reason, the FFT is most efficient for problems where
N is a power of two (or, at least, the product of several small factors such as 3
or 5). The FFT is not fast for problems where N is a large prime number, since
that cannot be split up into smaller problems. For example, in Matlab v6.5,
an FFT of length 121,000 (having factors 2, 5, 11) is 2 to 3 times faster than an
FFT of length 121,001 (prime).

6.2 Zero-Padding of FFTs

Zero-padding means adding additional zeros to a sample of data (after the
data has been windowed, if applicable). For example, you may have 1023 data
points, but you might want to run a 1024 point FFT or even a 2048 point FFT.
There are two reasons why you might do this. First, from section 6.1 we recall
that the FFT is slow for prime numbers, but much faster for powers of two.
We can add an extra zero to the end of the sample and thus get much better
performance. On a modern PC, one need not be too concerned with this for
moderate sample sizes: 1023 is not noticeably slower than 1024. But if one
is running very big FFTs (100,000 points or more) or using lower powered
embedded devices, one might be very concerned about this.
The other reason that zero-padding is used is to get better frequency
resolution. There is something to be gained here, but it is very subtle. National
Instruments has a good writeup on this [4]. Here is a summary. Recall from
section 6.1 that the DFT is a sampling of the DTFT. Zero padding allows us to
take more samples of the DTFT. For example, if we have 1000 points of data,
sampled at 1000 Hz, and perform the standard FFT, we get a frequency bin every
1 Hz. But if we pad with 1000 zeros and then run a 2000 point FFT, now we get
frequency bins every 0.5 Hz. This allows us to get around some of the
disadvantages of the DFT (e.g. it may allow us to read amplitudes more
accurately: reducing the spacing between bins may put a bin closer to the true
frequency of a signal and thus avoid the picket fence effect). But, since all this
does is sample the DTFT more finely, we cannot get around any inherent
limitations of the DTFT itself. Most notably, if your choice of window type and
length does not allow you to resolve two closely spaced frequency components,
then zero padding is not going to help. See Figure 6.
Note that increasing the sampling rate does not confer the same type of
benefits that zero padding does.
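The amplitude-reading benefit is easy to reproduce (a sketch; I use a 10.4 Hz tone so that the true frequency falls between the 1 Hz bins of the unpadded FFT):

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10.4 * t)   # true amplitude 1.0, between bin centers

# Unpadded: 1 Hz bins. Zero-padded to 8000 points: 0.125 Hz bins.
spec_1 = 2.0 * np.abs(np.fft.rfft(x)) / n
spec_8 = 2.0 * np.abs(np.fft.rfft(x, n=8000)) / n  # still normalize by n

print(spec_1.max())  # ~0.76: picket fence loss, peak falls between bins
print(spec_8.max())  # ~1.00: a bin now lands almost on the true frequency
```

Note the normalization still uses the original record length n, not the padded length, since the zeros contribute no signal energy.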

6.3 The windowed DFT

6.3.1 Motivation and theoretical background
The concept of a window was one that I found very difficult to learn. At the
beginning, all I really wanted to know was 'in what situations should I use which
window?' Only later did I take the time to learn all of the details. If you are in
that situation, skip this section and go to the next. Or, Wikipedia [2] has some
good information on windows and shows the formulas and frequency response
of many of them. National Instruments also has a good page [3].
First, the motivation for DFT windows: the DFT is like a set of passband
filters... but not a very good one. For example, at a sampling frequency of 1000
Hz with a 1000 point DFT, the first frequency bin is like a bandpass filter from
0 to 1 Hz, the second bin is a bandpass filter from 1 to 2 Hz, the third 2 to 3,
etc. So if we have a signal at 1.1 Hz, we would expect a response in the 2nd
bin, but in reality we may also see a response in the 1st and the 3rd bins. This
is called spectral leakage.
To see this, remember from section 6.1 that the DFT, like the Fourier series,
assumes that the input signal is periodic even if it is not. But that is no problem,
you say; we are doing vibration work here, of course our signal is periodic. Yes,
but there is a subtle catch. If the length of our data sample is not matched to
an integer number of periods of the signal, then we get leakage. Consider
Figure 7. On the far left we have a 1 second long sample of a pure sine wave
signal. The top is 4 Hz and the bottom is 4.25 Hz. Note that an integer number
of cycles fit into the top sample, but not the bottom sample. In the next column
is a periodic extension, which is what the DFT assumes. Note the discontinuity.
In the next column is the result of a DFT. The top spectrum is what we would
expect. But the bottom spectrum shows non-zero values at many different
frequencies. This is caused by the discontinuity. Now, in the next column we
introduce a windowing function. Prior to the FFT the raw signal is multiplied by
a Hann window (dashed lines) such that the function is zero valued at the
beginning and end of the interval. Thus, the periodic extension will have no
discontinuities. The far right column shows the windowed DFT. Note the
following things: in the bottom row, the Hann window has reduced the leakage
in the bins far away from the signal frequency, but it has introduced a
significant error in the amplitude of the main bin. You can think of this as the
window throwing away some of the energy that is at the beginning and end of
the interval. Also, on the top, the Hann window has actually introduced some
leakage that was not present before. Whether the Hann windowed DFT is better
or worse than the unwindowed DFT depends on your application. If you care about

[Figure: two FFT magnitude plots over 8 - 14 Hz; the legends show zero-padding factors 1, 2, 4, and 8]

Figure 6: Example of the benefits and limitations of zero padding FFTs. Top:
Using a rectangular window, a two second sample at 1000 S/s consisting of a
10.3 Hz tone and a 10.8 Hz tone is computed using a 2000 sample FFT, as
well as zero padded FFTs for a total length of 4000, 8000, and 16000. With the
standard FFT, the two signals are distinguishable, but the picket fence effect has
distorted their magnitudes. The zero-padded FFTs allow a better estimation
of the amplitudes and frequencies. Bottom: the same procedure is used, but
with tones at 10.4 Hz and 10.7 Hz. These tones are not distinguishable and zero
padding the FFT does not help the situation. To resolve these, one must take
a longer data sample.
accurately resolving amplitude, then it is not good. But for other applications,
it is very desirable. The following sections describe different applications that
arise commonly and the best windows to use for those tasks.
FIXME: need to talk about normalizing the results based on the norm of
the window function.
FIXME: need to talk about correction factors for the picket fence effect.
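The tradeoff shown in Figure 7 can also be checked numerically (a sketch; the 10.5 Hz tone is a worst case, halfway between bins, and the division by the window sum is the kind of normalization the first FIXME refers to):

```python
import numpy as np

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10.5 * t)   # worst case: halfway between 1 Hz bins

w = np.hanning(n)
spec_rect = 2.0 * np.abs(np.fft.rfft(x)) / n              # rectangular window
spec_hann = 2.0 * np.abs(np.fft.rfft(x * w)) / np.sum(w)  # Hann, normalized

# Far from the tone (e.g. the 50 Hz bin) the Hann window kills the leakage
print(spec_rect[50], spec_hann[50])
```

The rectangular window leaks visibly into bins tens of Hz away from the tone, while the Hann result is orders of magnitude lower there; near the tone, both windows misread the peak amplitude, which is the scalloping (picket fence) effect of the second FIXME.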

6.3.2 Application: general purpose signal processing

The Hann window is the most commonly used window. It has a good balance
between resolution and dynamic range. If you don't know what to use, start
with this one. You will often see it referred to as the Hanning window, but
that is technically not correct, since it was named for Julius von Hann [2].

6.3.3 Application: distinguish very small signals from white noise - high dynamic range windows

6.3.4 Application: distinguish two signals that are very close in frequency - high resolution windows
According to Randall [2], the most selective window is the Kaiser-Bessel
window. He also notes, however, that resolution of close frequencies can also
be performed with zoom FFTs (section 6.4). The zoom FFT, however, is most
useful when large samples of data are available. For transient analysis, sometimes
the sample length available is limited and high resolution windows are necessary.
The rectangular window (which is basically no window at all) also has high
resolution.

6.3.5 Application: analyze short transients

6.3.6 Application: accurately determine amplitude / order tracking / avoid picket fence effect - the flat top window
The flat top window is explicitly designed to avoid the picket fence effect. One
situation where this is useful is if you are trying to build a tracking filter out of
an FFT. For example, if you have a signal that changes in frequency (say the first
order of a rotating machine), and you want to track that signal's amplitude as
a function of time, the flat top window will give you the most accurate results.
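A quick comparison of amplitude accuracy (a sketch; the tone amplitude of 2.0 is placed halfway between bins, the worst case for the picket fence effect):

```python
import numpy as np
from scipy.signal import get_window

fs, n = 1000.0, 1000
t = np.arange(n) / fs
x = 2.0 * np.sin(2 * np.pi * 10.5 * t)   # true amplitude 2.0

amps = {}
for name in ("boxcar", "hann", "flattop"):
    w = get_window(name, n)
    # normalize by the window sum so an on-bin tone would read true amplitude
    amps[name] = 2.0 * np.max(np.abs(np.fft.rfft(x * w))) / np.sum(w)
    print(f"{name:8s} peak reads {amps[name]:.3f}")
```

The rectangular window underestimates the amplitude by roughly a third, the Hann window by about 15%, and the flat top window reads the amplitude essentially exactly, which is why it is the choice for order tracking.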

6.4 Zoom FFTs

The zoom FFT is a way of speeding up some computations. Despite the name, it
doesn't allow you to zoom in on a signal any closer than you could with a regular
FFT. But it does allow you to zoom in on a signal with a lot less computation
than you might otherwise need. Let us take as an example a waveform sampled at
10,000 samples/s which we suspect (for some reason) has two components, one
at 4000 Hz and one at 3999 Hz. In order to distinguish these two components,
we will need a frequency resolution of at least 0.5 Hz, which calls for a 2 second

[Figure: two rows of five panels each, showing time signals and spectra]

Figure 7: Example of spectral leakage. Columns from left to right: original
signals, periodic extension, DFT, windowed signals, DFT with Hann window.
Top: 4 Hz. Bottom: 4.25 Hz.
window. That requires a 20,000 point FFT. Actually, with a modern PC, that
really doesn't take that long. But maybe we want to do a short time Fourier
transform (section 8.1) over a few minutes worth of data (this happens quite
frequently), or maybe we have an embedded device that has limited
computational capabilities (this also happens quite frequently). Let us assume
that we need to be able to use a 2000 point FFT. Then we employ the zoom FFT.
There are three steps: frequency shift, low pass filter, and decimate. First, the
frequency shift: we multiply the signal by the complex phasor exp(−iωt), where
ω is chosen to correspond to 3900 Hz. It should be easy to see that this will shift
the signal at 4000 Hz down to 100 Hz (remember, when two exponential
expressions are multiplied, their exponents add). Of course it has also made the
signal complex, but that's okay. Then, we low pass filter with a cutoff frequency
of, say, 300 Hz (this is 3600 - 4200 Hz in the original signal, remember). Now,
since we have no frequency content above 300 Hz, a sampling rate of 10,000 Hz
is unnecessary, so we decimate by a factor of 10. Now the signal is sampled at
1000 Hz, and a 2 second window requires only a 2000 point FFT. After the FFT,
we simply shift the frequencies back up by adding 3900 Hz to the result. Now
we have 0.5 Hz resolution between 3600 Hz and 4200 Hz with 10 times less
computation.
Remember, we still need 2 seconds worth of data to get a 0.5 Hz resolution.
This is no better than we could do without a zoom. All we did was save
computation time.
FIXME: add some graphs or charts showing this example
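The three steps translate almost line for line into code (a sketch in Python/scipy; the Butterworth low pass and zero-phase filtfilt are stand-ins of mine, with the tones given in the text):

```python
import numpy as np
from scipy import signal

fs = 10000.0
t = np.arange(int(2 * fs)) / fs                   # 2 seconds of data
x = np.sin(2*np.pi*4000.0*t) + np.sin(2*np.pi*3999.0*t)

f_shift = 3900.0
xc = x * np.exp(-2j * np.pi * f_shift * t)        # 1) frequency shift
b, a = signal.butter(4, 300.0, fs=fs)             # 2) low pass at 300 Hz
xf = signal.filtfilt(b, a, xc)
xd = xf[::10]                                     # 3) decimate to 1000 S/s

spec = np.abs(np.fft.fft(xd)) / len(xd)           # only a 2000 point FFT
freqs = np.fft.fftfreq(len(xd), d=10.0/fs) + f_shift  # shift bins back up

top = np.sort(freqs[np.argsort(spec)[-2:]])
print(top)  # the two tones, resolved 1 Hz apart on a 0.5 Hz grid
```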

6.5 Frequency domain integration and differentiation

6.6 Enveloped FFT
The so-called enveloped FFT (sometimes called the acceleration enveloped FFT
because it is most commonly used on accelerometer data) is a technique for
analyzing repetitive impacts. The canonical example is a defect on a rolling
element bearing, but it might also be applicable to other situations - gear tooth
defects, perhaps. Bently Nevada has a good application note on this [5]. I will
give a very short summary here.
Take as an example a ball bearing where there is a defect on one race such
that a ball passes this defect at a rate of 5 Hz. Further, assume that the bearing
housing has a natural frequency of 100 Hz. Every time a ball passes the
defect, the impact will ring the natural frequency of the housing - a series of
small pings. This is shown in Figure 8. The time history is fairly easy to
interpret, since this one simulated defect is the only signal present. In a real
situation, there will be many other effects (shaft unbalance, other bearing
frequencies, etc.). When the situation gets complicated, we might like to turn
to a frequency domain representation and pull out the component corresponding
to the defect frequency. We could then trend this component over time to see if
the defect is getting worse. But look at the spectrum in this figure. It is not
going to be easy to interpret. There is energy at 5 Hz and 100 Hz, but also at
many other frequencies. The dominant component is at 100 Hz, not 5 Hz. This
will not be easy to trend on.
The enveloping process is as follows: first we band pass filter around 100
Hz to remove noise. Then we rectify the signal (take the absolute value) and
find the envelope (e.g. find the peaks, then fit a cubic spline to them). Then
the spectrum of the enveloped signal is found. Now the strongest component
is at the defect frequency (and a few harmonics). Effectively, we have taken
energy which was spread out over the spectrum and moved it down to the
defect frequency. Now it will be easier to trend this component and see if
it increases with time.
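A sketch of the whole chain on a synthetic defect signal (parameters are hypothetical, and I substitute a Hilbert-transform envelope for the peak-plus-spline fit described above, which is a common alternative):

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(int(4 * fs)) / fs
rng = np.random.default_rng(0)

# Simulated defect: a decaying 100 Hz ring excited every 0.2 s (5 Hz rate)
x = 0.05 * rng.standard_normal(len(t))
for t0 in np.arange(0.0, 4.0, 0.2):
    m = t >= t0
    x[m] += np.exp(-50.0 * (t[m] - t0)) * np.sin(2*np.pi*100.0*(t[m] - t0))

# 1) band pass around the 100 Hz housing resonance
b, a = signal.butter(4, [80.0, 120.0], btype="bandpass", fs=fs)
xb = signal.filtfilt(b, a, x)

# 2) envelope (Hilbert magnitude instead of rectify + spline)
env = np.abs(signal.hilbert(xb))
env -= env.mean()          # drop DC before taking the spectrum

# 3) spectrum of the envelope: strongest line at the defect frequency
spec = np.abs(np.fft.rfft(env)) / len(env)
freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
print(freqs[np.argmax(spec)])  # -> 5.0
```

In the raw spectrum the energy sits around 100 Hz; after enveloping, the dominant line is at the 5 Hz defect rate, which is the quantity worth trending.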

7 Random vibrations

The whole area of random vibrations is not one that I am very familiar with.
Most of this section is just me trying to get my thoughts on paper to see if I
understand the concepts. I may have gotten things wrong. If so, please correct
me.

7.0.1 Power spectrums


In some practical problems, the quantity of interest is not the measured signal,
but the power of the measured signal. In electrical engineering, the measured
quantity may be voltage, and for resistive loads power is proportional to voltage
squared. For mechanical vibrations, we recall energy = force * displacement, so
power = force * velocity. For a viscous damper, force = c * velocity, and thus the
power dissipated = c * velocity^2. By similar means we can get a signal proportional
to power by squaring the signal of interest. Thus, the power spectrum of a
signal is defined to be the square of the magnitude of its Fourier transform.
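As a rough numerical illustration (the scaling convention here is just one of several in common use, and the factor of 2 would not apply to the DC and Nyquist bins):

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = 3.0 * np.sin(2 * np.pi * 50 * t)      # 50 Hz sine, amplitude 3

# Amplitude spectrum, scaled so a pure sine shows its peak amplitude A
X = np.fft.rfft(x)
amp = 2 * np.abs(X) / len(x)

# Power spectrum: square the amplitude spectrum. A sine of amplitude A has
# mean-square (power) A^2 / 2, which is what the 50 Hz bin should carry.
power = amp ** 2 / 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)
bin50 = np.argmin(np.abs(freqs - 50))     # power[bin50] should be 3^2/2 = 4.5
```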

7.0.2 Power spectral density


A good introduction to the need for the concept of power spectral density is
presented by Tustin [1].
http://www.stat.unc.edu/faculty/hurd/papers/period.ps

7.0.3 Bartlett's method / Welch's method
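Both methods estimate a PSD by averaging periodograms of blocks of the signal; Bartlett's method uses non-overlapping rectangular-windowed blocks, while Welch's adds overlap and a taper. A quick sketch using SciPy's `signal.welch` (the tone frequency and block sizes here are arbitrary choices for illustration):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0
N = 16384

# White noise plus a 100 Hz tone buried in it
x = rng.standard_normal(N) + np.sin(2 * np.pi * 100 * np.arange(N) / fs)

# Welch's method: overlapping windowed blocks, periodograms averaged.
# Bartlett's method is the special case of zero overlap and a boxcar window.
freqs, psd = signal.welch(x, fs=fs, nperseg=1024, noverlap=512)

peak = freqs[np.argmax(psd)]   # the averaging makes the 100 Hz tone stand out
```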


8 Time-Spectral Analysis

For a steady-state periodic vibration, say a motor running at a constant speed,


the methods presented thus far are sufficient. But for transient (i.e. non-stationary)
vibration, one is concerned with how the vibration changes with time. Taking
an FFT of the entire time range will not be sufficient.

Figure 8: Simulated bearing defect signal and spectrum. (Left: time history, g vs. time (s); right: spectrum vs. frequency (Hz).)
Figure 9: (left) Simulated bearing defect signal after low pass filtering, rectifying, and enveloping (g vs. time (s)). (right) FFT of enveloped signal (amplitude vs. frequency (Hz)).
8.1 Short time Fourier transform
The simplest method is to split the sample of interest up into multiple
blocks and perform an FFT on each block. These blocks are often overlapped.
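A minimal sketch of this block splitting follows. The block size, overlap, and Hann window are arbitrary choices, and library routines (e.g. SciPy's `signal.stft`) do the same thing with more care:

```python
import numpy as np

def stft(x, fs, block=256, overlap=128):
    """Split x into overlapping blocks, window each, and FFT it.
    Returns (times, freqs, magnitudes) with one spectrum per block."""
    hop = block - overlap
    win = np.hanning(block)
    starts = range(0, len(x) - block + 1, hop)
    mags = np.array([np.abs(np.fft.rfft(win * x[s:s + block])) for s in starts])
    times = np.array([(s + block / 2) / fs for s in starts])
    freqs = np.fft.rfftfreq(block, 1 / fs)
    return times, freqs, mags

# A chirp sweeping 50 -> 200 Hz: a single FFT of the whole record would
# smear this out, but the STFT shows the frequency rising block by block
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + 37.5 * t ** 2))   # inst. freq = 50 + 75 t

times, freqs, mags = stft(x, fs)
f_start = freqs[np.argmax(mags[0])]    # peak frequency of the first block
f_end = freqs[np.argmax(mags[-1])]     # peak frequency of the last block
```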

8.2 Visualization methods for the STFT


8.3 Campbell diagrams
8.4 The reassignment method
A good summary is given in [7].

8.5 Order tracking


Order tracking is one thing for which I have not found a good all-in-one reference.
There are various resources on specific methods, but no single resource that covers
them all. If you know of such a resource, please let me know.
Order tracking is a method of analysis used for analyzing rotating machinery.
Typically such a machine will have one or more shafts and the frequency of
vibration will depend in some way on the speed of the shaft. For example, if a
turbine shaft has 20 blades and runs at 10 Hz, we may find a vibration signal
at 200 Hz. But if the speed increases to 20 Hz, we will not be surprised to find
that the vibration is now at 400 Hz.
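One common approach (sometimes called computed order tracking, the resampling method of section 8.5.4) is to resample the signal from uniform time increments to uniform shaft-angle increments, so that each order lands at a fixed position on an "orders" axis no matter how the speed varies. A rough sketch, with all numbers invented for illustration:

```python
import numpy as np

fs = 5000.0
t = np.arange(0, 4.0, 1 / fs)

# Shaft sweeps from 10 Hz to 20 Hz; the 20-blade turbine of the example
# above produces a 20th-order vibration sweeping from 200 Hz toward 400 Hz
shaft_hz = 10 + 2.5 * t                              # instantaneous shaft speed
shaft_angle = 2 * np.pi * (10 * t + 1.25 * t ** 2)   # integral of speed (rad)
x = np.sin(20 * shaft_angle)                         # 20th-order component

# Resample from uniform time to uniform shaft angle. In the angle domain
# the 20th order becomes a fixed-frequency sine even though its frequency
# in time changes with shaft speed.
samples_per_rev = 64
n_revs = int(shaft_angle[-1] // (2 * np.pi))         # whole revolutions captured
uniform_angle = np.arange(n_revs * samples_per_rev) * (2 * np.pi / samples_per_rev)
x_angle = np.interp(uniform_angle, shaft_angle, x)

# FFT in the angle domain: the frequency axis is now "orders" (cycles/rev)
spec = np.abs(np.fft.rfft(x_angle))
orders = np.fft.rfftfreq(len(x_angle), d=1 / samples_per_rev)
peak_order = orders[np.argmax(spec)]                 # should be order 20
```

In practice the shaft angle comes from a tachometer or encoder rather than being known analytically, and a better interpolator than linear is usually used.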

8.5.1 Naive STFT methods


8.5.2 Naive Adaptive Filtering
8.5.3 Vold-Kalman filtering
http://www.bksv.com/pdf/Bv0052.pdf
www.vold.com/VoldKalman.htm

8.5.4 Resample
8.5.5 Gabor Transform
8.5.6 Time variant discrete Fourier transform order tracking
http://www.modalshop.com/techlibrary/Milut-Tachometer Order Tracking.pdf

8.6 Wavelet analysis
9 Analysis of Impacts

10 Analysis of Non-linear vibration

11 AC units Conversion

We are often concerned with the average value or level of a signal over some
time period. Vibrations, of course, tend to produce signals that look like sine
waves. But the average value of any sine wave over one cycle is zero, so that is
not very useful. If we dealt only with pure sine waves, we could simply report
the peak value; e.g. for x = A sin(ωt) we could just report A (sometimes called
single amplitude, and abbreviated as p, pk, or SA). Or we could report the
peak-to-peak value, which would be 2A in this case (sometimes called double
amplitude, and abbreviated as pp, pk-pk, or DA). Many times, of course, we do
not have pure sine waves, and it is useful to distinguish between signals which
spend a lot of time at the peak value and only a little time near zero and those
which spend a lot of time near zero and only a little time near the peak. A
simple method is to take the average of the absolute value over a period T,
namely (1/T) ∫ |f(t)| dt. This
has the advantage that it is easy to do in hardware: absolute value is a rectifier
(a few diodes) and averaging is a low pass filter (a few resistors and capacitors).
Some companies refer to this value as the average value of the signal,
although it is not the same as an arithmetic mean. The most common method
of reporting levels is the RMS level, which stands for root-mean-square. This is
defined as sqrt( (1/T) ∫ f(t)^2 dt ). For a pure sine wave, there is a one-to-one
relationship between these four different measures, given as multipliers in the
table below.
From \ To->     RMS        Peak    Peak-to-Peak   Average
RMS             1          √2      2√2            2√2/π
Peak            1/√2       1       2              2/π
Peak-to-Peak    1/(2√2)    1/2     1              1/π
Average         π/(2√2)    π/2     π              1
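The table is easy to verify numerically. A quick sketch (the sampled averages are only approximations of the integrals, but very close over whole cycles of a pure sine):

```python
import numpy as np

fs = 10000
t = np.arange(fs) / fs                 # exactly one second of samples
A = 3.0
x = A * np.sin(2 * np.pi * 5 * t)      # 5 complete cycles

peak = np.max(np.abs(x))               # single amplitude: A
pk_pk = np.max(x) - np.min(x)          # double amplitude: 2A
rms = np.sqrt(np.mean(x ** 2))         # root-mean-square: A / sqrt(2)
avg = np.mean(np.abs(x))               # rectified average: 2A / pi
```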

12 Misc

12.0.1 Noise
Here is an interesting article on the colors of noise. You've probably used the
term white noise hundreds of times without realizing why it is called white.
http://en.wikipedia.org/wiki/Colors_of_noise

References

[1] Doebelin, Ernest. Measurement Systems: Application and Design. 5th ed. McGraw-Hill, 2003.

[2] http://en.wikipedia.org/wiki/Window_function

[3] http://zone.ni.com/devzone/cda/tut/p/id/4844#3

[4] http://zone.ni.com/devzone/cda/tut/p/id/4880 Zero Padding Does Not Buy Spectral Resolution

[5] http://www.bently.com/articles/articlepdf/2Q04AccelEnvel.pdf and http://www.bently.com/articles/articlepdf/2Q04WindTurbCondMon.pdf

[7] Jan E. Odegard, Richard G. Baraniuk and Kurt L. Oehler. Proceedings of the 68th SEG Meeting, New Orleans, Louisiana, USA, 1998. http://www-dsp.rice.edu/publicationsold/pub/odegard-seg97.pdf

[1] Tustin, Wayne. What is the meaning of PSD in g^2/Hz units? http://www.vmebus-systems.com/pdf/EquipmentReliability.Dec05.pdf

[2] Randall, Robert B. Vibration Analyzers and Their Use. Chapter 14 in Harris's Shock and Vibration Handbook, edited by Harris, Cyril M. and Piersol, Allan G. 5th ed. McGraw-Hill, New York, 2002.
