Daniel R. Kiracofe
1 Preface
There are a good number of books which deal with vibration. But I have not
found a single book which focuses on the analysis of experimental vibration
data, which is what I find myself doing quite a bit of. I have put this together
to serve both as a reference for myself (e.g. collecting various bits of knowledge
together for easy access) and hopefully as a reference for others.¹ I have done
similar writeups of other topics in the past and found them to be very well
received by the Internet community. This is going to be a work in progress for
some time. So if there is a particular area that is blank which you have a specific
question about, please e-mail me (kiracofe.8@osu.edu or drk@current.net) and
I will try to answer your questions. If you have more extensive questions or
problems, I may be able to provide consulting services for a nominal fee. Also,
if you wish to make a contribution by providing text, examples, or references
from your area of expertise, please let me know.
Occasionally I have made notes to myself on areas I wish to come back to.
These are marked as fixme. You can ignore those.
2 Filtering
I remember that when I was first learning about signal processing, I found a
book from the library and was intrigued that more than half the book dealt
with filtering. "But filtering is such a simple thing, surely there is more to
signal processing than just filtering," I said to myself. As I have since learned,
although the basic concept of a filter is fairly straightforward, the correct, efficient
implementation of a filter can be very subtle.
¹ Further, the act of writing this down has helped me find where the gaps are in my
knowledge, and I have subsequently learned quite a bit filling those in. In my opinion, one of
the best ways to learn something is to try to teach it to someone else. If you can't teach it,
then you don't know it.
2.1 Basic filter concepts
2.1.1 Bandpass filters
One misconception that I had early on with this: since a lowpass filter keeps
low frequencies, and a highpass filter keeps high frequencies, and a bandpass
filter keeps a range of frequencies, then a bandpass filter is equivalent to simply
a highpass and a lowpass filter in series. Further, I thought, since a bandpass
filter always has an even number of poles, that must be because there is one
pole for the highpass part and one pole for the lowpass part. Although one can
create such a filter, a true bandpass filter is different from just a highpass and
lowpass filter, and the even number of poles has nothing to do with one for
each part. Practically, it is important to understand this distinction, since a two
pole pair bandpass filter has a much sharper cutoff than a one pole lowpass and
one pole highpass filter combined in series.
A simple lowpass filter is a 1st order device. To be specific, the differential
equation that describes it has only first derivatives. Electrically, this would be
resistors and capacitors, but no inductors. Mechanically, this would be springs
and dashpots, but no masses. To get a multi-pole filter, one just stacks in more
springs and dashpots, but never adds any mass. A simple bandpass filter, on
the other hand, is a 2nd order device. So the differential equation has second
derivatives. Electrically, it involves an inductor, and mechanically it involves
a mass. The addition of the 2nd derivative means that the system now has a
natural frequency and can have a resonant response. It is the on-resonance
condition that is the pass band of the filter, and the off-resonance condition is
the stop band.
FIXME: elaborate on this point, maybe add some examples.
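As a sketch of the distinction above (my own example, not from the text, using Python/SciPy rather than Matlab; the sample rate, band edges, and comparison frequency are all invented), compare a two pole pair Butterworth bandpass against a one pole highpass and a one pole lowpass in series:

```python
import numpy as np
from scipy import signal

fs = 1000.0   # sample rate; all numbers here are invented for illustration

# A true bandpass: two pole pairs (order 2 -> 4 poles), 50-150 Hz
b_bp, a_bp = signal.butter(2, [50, 150], btype="bandpass", fs=fs)

# The "fake" bandpass: one pole highpass at 50 Hz in series with
# one pole lowpass at 150 Hz
b_hp, a_hp = signal.butter(1, 50, btype="highpass", fs=fs)
b_lp, a_lp = signal.butter(1, 150, btype="lowpass", fs=fs)

# Compare the responses well outside the passband, at 400 Hz
w = np.array([400.0])
_, h_bp = signal.freqz(b_bp, a_bp, worN=w, fs=fs)
_, h_hp = signal.freqz(b_hp, a_hp, worN=w, fs=fs)
_, h_lp = signal.freqz(b_lp, a_lp, worN=w, fs=fs)
h_series = h_hp * h_lp   # series connection multiplies frequency responses

print(abs(h_bp[0]), abs(h_series[0]))
```

Both designs "keep" 50-150 Hz, but at 400 Hz the true bandpass should be attenuated far more than the series pair.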
2.2.1 Butterworth
The Butterworth filter is maximally flat in the passband, but does not have
a very sharp rolloff. The Butterworth filter is an all-pole filter and it has a
monotonic frequency response (i.e. it keeps attenuating more and more as you
go out in frequency, in contrast to, say, an elliptical filter, which reaches some
maximum attenuation and then stops).
Personally, I use Butterworth filters frequently in vibration analysis because
they are the easiest to design. You specify a filter order and a cutoff frequency
and Matlab can do the rest. In contrast, an elliptic filter design needs to
specify the tradeoffs between attenuation and ripple. A lot of times I don't
want to think about that; I just want to get rid of some high frequency noise in
my signal.
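A minimal sketch of that "just get rid of the noise" workflow (my own illustration in Python/SciPy; the signal, noise frequency, filter order, and cutoff are all invented):

```python
import numpy as np
from scipy import signal

# Invented example: a 10 Hz signal buried in 300 Hz noise, sampled at 1 kHz,
# cleaned with a 4th order Butterworth lowpass at 50 Hz.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 300 * t)

b, a = signal.butter(4, 50, btype="lowpass", fs=fs)  # just order + cutoff
filtered = signal.filtfilt(b, a, noisy)              # zero-phase filtering

err_before = np.sqrt(np.mean((noisy - clean) ** 2))
err_after = np.sqrt(np.mean((filtered - clean) ** 2))
print(err_before, err_after)
```

Note the use of `filtfilt` (forward-backward filtering) so the filter itself adds no phase distortion to the recovered signal.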
2.2.2 Chebyshev
A Chebyshev filter has a sharper rolloff but more ripple than a Butterworth filter.
2.2.4 Bessel
The Bessel filter has a maximally flat group delay. That is, its phase response
is maximally linear. This is important if one does not want to distort the shape
of signals with content at different frequencies (e.g. square waves).
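A quick numerical check of that property (my own example, not the text's; the order and cutoff are arbitrary) compares how much the group delay of same-order Bessel and Butterworth designs varies across the passband:

```python
import numpy as np
from scipy import signal

fs = 1000.0
# Same order and cutoff (invented values) for a fair comparison
b_bes, a_bes = signal.bessel(4, 100, fs=fs)
b_but, a_but = signal.butter(4, 100, fs=fs)

w = np.linspace(1, 80, 200)   # frequencies well inside the passband
_, gd_bes = signal.group_delay((b_bes, a_bes), w=w, fs=fs)
_, gd_but = signal.group_delay((b_but, a_but), w=w, fs=fs)

# The Bessel design's group delay should vary much less across the passband
spread_bes = gd_bes.max() - gd_bes.min()
spread_but = gd_but.max() - gd_but.min()
print(spread_bes, spread_but)
```

A nearly constant group delay means every frequency component is delayed by the same amount, which is why a square wave keeps its shape through a Bessel filter.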
3.3 Oversampling [fixme: need to think about this one
more. How does it relate to zero-padding FFTs?]
http://www.bksv.com/pdf/Bv0047.pdf
4 Instrumentation
4.1 Accelerometers
4.1.1 High impedance (charge type) versus low impedance (ICP)
piezoelectric accelerometers
The plain piezoelectric accelerometer outputs a charge proportional to
acceleration. Most analog to digital converters are
1. Update rate. How often does the tachometer algorithm give you a new
speed? Once a second, a hundred times a second?
3. Lag time. When there is a sudden change in the speed, say from 100 RPM
to 200 RPM, how long is it before the tachometer algorithm reaches the
new speed? (E.g. successive samples may say 100, 150, 175, 200. We want
to know how long before we get 200.) This is different from update rate.
You may be getting a new speed a hundred times a second, but that speed
may be half a second behind the actual shaft speed.
4. Accuracy. If the speed is a constant 100 RPM, does the tachometer report
that, or does it report 101 or 99?
[Figure: actual speed versus the average and overlapped-average algorithms; RPM vs. time (s).]
or 16.66 RPM. See Figure 2 for the performance of this algorithm. Compared to the
previous one it looks almost perfect. Lag time is a mere 0.050 s and the precision
is approx 0.1 RPM.
The speed precision in this case is driven by the time resolution of the signal
processor. At 20.5 RPM with 60 teeth, a zero-crossing happens every 0.04875 s.
In this example, I assumed an A/D with a 4 kHz sample rate and assigned the
time of the zero crossing as the time of the sample nearest where the signal
crossed. So, the time resolution is 0.00025 s. So the algorithm cannot tell the
difference between an interval of 0.04875 s and an interval of 0.04900 s. The
difference between these two intervals works out to be approx 0.1 RPM. As
shaft speed increases, the interval decreases, so an error of 0.00025 s becomes a
larger percentage of the interval. At 200 RPM, the precision of this example
would be only 10 RPM; at 2000 RPM, the precision is a horrible 700 RPM. This
will be illustrated in an example in the next section.
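The arithmetic above can be sketched as a small calculation (my own restatement, taking the one-sample error on the long side of the interval; it lands close to the approximate figures quoted in the text):

```python
teeth = 60
dt = 0.00025   # time resolution: one sample at the assumed 4 kHz rate

def interval_s(rpm, teeth=60):
    """Time between successive tooth zero-crossings at a given shaft speed."""
    return 60.0 / (rpm * teeth)

def precision_rpm(rpm, dt, teeth=60):
    """Speed error when a crossing is assigned one time step late."""
    t = interval_s(rpm, teeth)
    return rpm - 60.0 / (teeth * (t + dt))

for rpm in (20.5, 200.0, 2000.0):
    print(rpm, round(precision_rpm(rpm, dt), 1))
```

The error grows roughly with the square of shaft speed, since the crossing interval shrinks while the time-resolution error stays fixed.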
To get acceptable precision at high speeds, simply increasing the sampling rate
becomes prohibitive. There are two solutions which are normally employed. The
first is to use interpolation to determine the zero crossing time more accurately
instead of rounding to the nearest sample. The second is to use a hardware
timer device instead of an A/D converter. For example, the NI-6602 card can
return the time of zero-crossings based on a very accurate time base (something
like 10 MHz if I remember correctly).
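A sketch of the first solution, linear interpolation of the zero-crossing time (my own illustration, not the text's implementation; the 20.5 Hz test tone and 4 kHz rate echo the earlier numbers):

```python
import numpy as np

def zero_crossing_times(t, x):
    """Rising zero-crossing times, linearly interpolated between the two
    samples that bracket each crossing."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    frac = -x[idx] / (x[idx + 1] - x[idx])   # fraction of a step past idx
    return t[idx] + frac * (t[idx + 1] - t[idx])

# A 20.5 Hz test tone sampled at 4 kHz: rising crossings at k / 20.5 s
fs = 4000.0
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 20.5 * t)
tc = zero_crossing_times(t, x)
print(tc[:3])
```

The interpolated times are far more accurate than the 0.00025 s sample grid, so the speed precision improves accordingly without raising the sample rate.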
Since the algorithm returns a speed point at every zero crossing, the update
rate increases with shaft speed and the lag time will decrease with shaft speed.
[Figure: the tooth to tooth algorithm versus actual speed; RPM vs. time (s).]
pass filter the output. Of course, this will introduce some lag time. Another way
is to determine the repetitive errors ahead of time (presumably at low speeds
where there are no dynamic effects) and then subtract the repetitive errors out.
This requires somewhat more computational effort. (I personally have never done
this, nor found it to be necessary. If anyone has done this, I would like to
hear about your application.) The way I use most frequently is to take the
time difference not between the zero crossings of successive teeth but between
a tooth's zero crossing and the zero crossing of that same tooth the next time
it comes around. Thus the repetitive errors are automatically cancelled out, at
the expense of adding one revolution's worth of lag time. At low shaft speeds,
one revolution is a long time compared to the lag introduced by the delay of a
low pass filter. But at high shaft speeds, it can be much shorter. For example,
in Figure 4 at 30 - 40 RPM, a 3 Hz low pass filter gives the better response, but at
500 RPM (Figure 5), the tooth-to-next-revolution algorithm is better. Notice that the
speed precision of the tooth-to-next-rev algorithm is worse at higher speeds, as
was mentioned in the previous section. This example was using a 10 kHz sample
rate.
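A sketch of the tooth-to-next-revolution idea (my own simulation with invented numbers: 60 teeth, a constant 100 RPM shaft, and fixed per-tooth position errors):

```python
import numpy as np

def speeds_tooth_to_next_rev(crossing_times, teeth=60):
    """RPM estimates from the interval between a tooth's crossing and the
    same tooth's crossing one revolution later. Per-tooth geometry errors
    cancel because both crossings come from the same tooth."""
    ct = np.asarray(crossing_times)
    rev_time = ct[teeth:] - ct[:-teeth]   # one full revolution per tooth
    return 60.0 / rev_time

# Constant 100 RPM with repeatable per-tooth position errors (made up)
teeth = 60
rng = np.random.default_rng(0)
tooth_err = rng.normal(0, 0.002, teeth)              # rad, fixed per tooth
angles = np.arange(5 * teeth) * (2 * np.pi / teeth)  # 5 revolutions
angles += np.tile(tooth_err, 5)                      # same error each rev
times = angles / (2 * np.pi * 100 / 60)              # 100 RPM in rad/s

tt = 60.0 / (teeth * np.diff(times))                 # naive tooth-to-tooth
ttr = speeds_tooth_to_next_rev(times, teeth)
print(tt.std(), ttr.std())
```

The tooth-to-tooth estimate scatters by several RPM from the tooth errors alone, while the tooth-to-next-rev estimate is essentially exact, at the cost of one revolution of lag.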
[Figure: true speed versus measured speed with 0.2% runout and tooth error; RPM vs. time (s).]
Figure 4: The tooth to next revolution algorithm versus a low pass filter on the
tooth to tooth algorithm for a low speed.
Figure 5: The tooth to next revolution algorithm versus a low pass filter on the
tooth to tooth algorithm for a high speed.
Table 1: Continuous time input versus discretely sampled time input
computes an infinite number of frequency bins, and the DFT picks out a finite
number of those.
The Fast Fourier Transform (FFT) is simply an efficient method for com-
puting the DFT. The two terms are often used interchangeably, and often the
terminology is abused (which is par for the course). The FFT is a recursive
algorithm - it uses the divide-and-conquer strategy. For a problem of size N,
the FFT splits the problem into several smaller problems, solves the smaller
problems (recursively), and then assembles the results into the solution of the
original problem. For this reason, the FFT is most efficient for problems where
N is a power of two (or, at least, the product of several small factors such as 3
or 5). The FFT is not fast for problems where N is a large prime number, since
that cannot be split up into smaller problems. For example, in Matlab v6.5,
an FFT of length 121,000 (having factors 2, 5, 11) is 2 to 3 times faster than an
FFT of length 121,001 (prime).
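The divide-and-conquer structure can be seen in a toy radix-2 implementation (my illustration only; real FFT libraries are far more sophisticated):

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT: splits a size-N problem into two size-N/2
    problems, solves them recursively, and combines the halves.
    Illustration only; requires N to be a power of two."""
    N = len(x)
    if N == 1:
        return x.astype(complex)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(1).standard_normal(64)
print(np.allclose(fft_recursive(x), np.fft.fft(x)))   # should be True
```

The recursion only works because N keeps splitting in half, which is exactly why a large prime N defeats the approach.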
Most notably, if your choice of window type and length does not allow you to
resolve two closely spaced frequency components, then zero padding is not going
to help. See Figure 6 on the following page.
Note that increasing the sampling rate does not confer the same type of
benefits that zero padding does.
Figure 6: Example of the benefits and limitations of zero padding FFTs. Top:
Using a rectangular window, a two second sample at 1000 S/s consisting of a
10.3 Hz tone and a 10.8 Hz tone is computed using a 2000 sample FFT, as
well as zero padded FFTs for a total length of 4000, 8000, and 16000. With the
standard FFT, the two signals are distinguishable, but the picket fence effect has
distorted their magnitudes. The zero-padded FFTs allow a better estimation
of the amplitudes and frequencies. Bottom: the same procedure is used, but
with tones at 10.4 Hz and 10.7 Hz. These tones are not distinguishable and zero
padding the FFT does not help the situation. To resolve these, one must take
a longer data sample.
accurately resolving amplitude, then it is not good. But for other applications,
it is very desirable. The following sections describe different applications that
arise commonly and describe the best windows to use for those tasks.
FIXME: need to talk about normalizing the results based on the norm of
the window function.
FIXME: need to talk about correction factors for the picket fence effect.
[Figure: raw signal, its periodic extension, the resulting DFT, the windowed signal, and the DFT with a Hann window, shown for two example signals.]
window. That requires a 20,000 point FFT. Actually, with a modern PC, that
really doesn't take that long. But maybe we want to do a short time Fourier
transform (section 8.1) over a few minutes worth of data (this happens quite frequently), or
maybe we have an embedded device that has limited computational capabilities
(this also happens quite frequently). Let us assume that we need to be able to
use a 2000 point FFT. Then we employ the zoom FFT. There are three steps:
frequency shift, low pass filter, and decimate. First, a frequency shift. We
multiply the signal by the complex phasor exp(−iωt), where ω is chosen to
correspond to 3900 Hz. It should be easy to see that this will shift the signal at 4000 Hz down
to 100 Hz (remember, when two exponential expressions are multiplied, their
exponents add). Of course it has also made the signal complex, but that's okay.
Then, we low pass filter with a cutoff frequency of, say, 300 Hz (this is 3600 - 4200
Hz in the original signal, remember). Now, since we have no frequency content
above 300 Hz, a sampling rate of 10,000 Hz is unnecessary. So we decimate by
a factor of 10. Now the signal is sampled at 1000 Hz. So a 2 second window
requires only a 2000 point FFT. After the FFT, we simply shift the frequencies
back up by adding 3900 Hz to the result. Now we have 0.5 Hz resolution between
3600 Hz and 4200 Hz with 10 times less computation.
Remember, we still need 2 seconds worth of data to get a 0.5 Hz resolu-
tion. This is no better than we could do without a zoom. All we did was save
computation time.
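The three steps can be sketched as follows (my own illustration; the 4000.3 Hz test tone and the filter order are invented, while the shift, cutoff, and decimation factor follow the example above):

```python
import numpy as np
from scipy import signal

fs = 10_000.0
t = np.arange(0, 2.0, 1 / fs)          # 2 s of data for 0.5 Hz resolution
x = np.sin(2 * np.pi * 4000.3 * t)     # a test tone near 4 kHz (invented)

# Step 1: frequency shift by 3900 Hz (multiply by a complex phasor)
shifted = x * np.exp(-2j * np.pi * 3900 * t)
# Step 2: low pass filter the (now complex) signal at 300 Hz
b, a = signal.butter(8, 300, fs=fs)
filtered = signal.filtfilt(b, a, shifted)
# Step 3: decimate by 10, giving an effective sample rate of 1000 Hz
decimated = filtered[::10]

X = np.abs(np.fft.fft(decimated))      # only a 2000 point FFT
freqs = np.fft.fftfreq(len(decimated), 10 / fs) + 3900  # shift back up
peak = freqs[np.argmax(X)]
print(peak)
```

The peak lands on the 0.5 Hz grid point nearest the test tone, using a tenth of the FFT length the direct approach would need.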
FIXME: add some graphs or charts showing this example
easy to trend on.
The enveloping process is as follows: first we band pass filter around 100
Hz to remove noise. Then we rectify the signal (take absolute value), and
find the envelope (e.g. find the peaks, then fit a cubic spline to them). Then
the spectrum of the enveloped signal is found. Now the strongest component
is at the defect frequency (and a few harmonics). Effectively, we have taken
energy which was spread out over the spectrum and moved it down to the
defect frequency. Now it will be easier to trend upon this component and see if
it increases with time.
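A sketch of the enveloping idea (my own synthetic example: the 1 kHz resonance, decay rate, and 7 Hz defect rate are all invented, and a Hilbert-transform envelope stands in for the rectify-and-spline step described above):

```python
import numpy as np
from scipy import signal

# Synthetic "bearing defect": bursts of a 1 kHz resonance repeating at a
# 7 Hz defect rate (all numbers invented for illustration).
fs = 10_000.0
t = np.arange(0, 2.0, 1 / fs)
impulses = ((np.arange(len(t)) - 300) % int(fs / 7.0) == 0).astype(float)
ring = np.exp(-50 * t[:1000]) * np.sin(2 * np.pi * 1000 * t[:1000])
x = np.convolve(impulses, ring)[: len(t)]

# Band pass around the resonance, then take the envelope
b, a = signal.butter(4, [800, 1200], btype="bandpass", fs=fs)
xf = signal.filtfilt(b, a, x)
env = np.abs(signal.hilbert(xf))      # analytic-signal envelope

# Spectrum of the envelope: the defect rate should now dominate
E = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
mask = freqs >= 3.0                   # skip near-DC leakage
peak = freqs[mask][np.argmax(E[mask])]
print(peak)
```

The raw spectrum spreads the impact energy around the 1 kHz resonance, but the envelope spectrum concentrates it at the 7 Hz repetition rate, which is the component worth trending.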
7 Random vibrations
The whole area of random vibrations is not one that I am very familiar with.
Most of this section is just me trying to get my thoughts on paper to see if I
understand the concepts. I may have gotten things wrong. If so, please correct
me.
[Figure: time history (g vs. time (s), left) and spectrum (vs. frequency (Hz), right).]
Figure 9: (left) Simulated bearing defect signal after low pass filtering, rectify-
ing, and enveloping. (right) FFT of enveloped signal.
8.1 Short time Fourier transform
The simplest method is to simply split the sample of interest up into multiple
blocks and perform an FFT on each block. These blocks are often overlapped.
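A sketch of this block-splitting scheme (my own illustration; the per-block Hann window is my addition as a common refinement, and the jumping tone is invented):

```python
import numpy as np

def stft_blocks(x, block, hop):
    """Split x into overlapping blocks and FFT each one (the simple
    scheme described above, with a Hann window applied per block)."""
    starts = range(0, len(x) - block + 1, hop)
    win = np.hanning(block)
    return np.array([np.fft.rfft(win * x[s:s + block]) for s in starts])

# A tone that jumps from 50 Hz to 120 Hz halfway through (invented)
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t),
             np.sin(2 * np.pi * 120 * t))

S = stft_blocks(x, block=256, hop=128)        # 50% overlap
freqs = np.fft.rfftfreq(256, 1 / fs)
first, last = np.abs(S[0]), np.abs(S[-1])
print(freqs[np.argmax(first)], freqs[np.argmax(last)])
```

Each block gives a spectrum local in time, so the early blocks peak near 50 Hz and the late blocks near 120 Hz, which a single whole-record FFT would smear together.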
8.5.4 Resample
8.5.5 Gabor Transform
8.5.6 Time variant discrete Fourier transform order tracking
http://www.modalshop.com/techlibrary/Milut-Tachometer Order Tracking.pdf
8.6 Wavelet analysis
9 Analysis of Impacts
11 AC unit conversions
We are often concerned with the average value or level of a signal over some
time period. Vibrations, of course, tend to produce signals that look like sine
waves. But the average value of any sine wave over one cycle is zero. So that is
not very useful. If we just dealt with pure sine waves, we could simply report
the peak value. E.g. for x = A sin(ωt) we could just report A (sometimes called
single amplitude, and abbreviated as p, pk, or SA). Or we could report the peak-
to-peak, which would be 2A in this case (sometimes called double amplitude, and
abbreviated as pp, pk-pk, or DA). Many times, of course, we do not have pure
sine waves, and it is useful to distinguish between signals which spend a lot of
time at the peak value and only a little time near zero and those which spend a
lot of time near zero and only a little time near the peak. A simple method is
to take the average of the absolute value, namely (1/T) ∫ |f(t)| dt. This
has the advantage that it is easy to do in hardware - absolute value is a rectifier
(a few diodes) and averaging is a low pass filter (a few resistors and capacitors).
Some companies refer to this value as the average value of the signal,
although it is not the same as an arithmetic mean. The most common method
of reporting levels is the RMS level, which stands for root-mean-square. This is
defined as sqrt( (1/T) ∫ f(t)² dt ). For a pure sine wave, there is a one-to-one relationship
between these four different measures, as defined in the table below. FIXME:
check the table for correctness.
To ->           RMS       Peak    Peak-to-Peak   Average
from
RMS             1         √2      2√2            2√2/π
Peak            1/√2      1       2              2/π
Peak-to-Peak    1/(2√2)   0.5     1              1/π
Average         π/(2√2)   π/2     π              1
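The sine-wave conversion factors can be checked numerically (a simple sketch over one finely sampled cycle of a unit-amplitude sine):

```python
import numpy as np

# One cycle of a unit-amplitude sine, finely sampled
t = np.linspace(0, 1, 1_000_000, endpoint=False)
x = np.sin(2 * np.pi * t)

rms = np.sqrt(np.mean(x ** 2))     # root-mean-square
peak = np.max(np.abs(x))           # single amplitude
pp = np.max(x) - np.min(x)         # double amplitude
avg = np.mean(np.abs(x))           # rectified average

print(peak / rms, pp / rms, avg / rms)
# Expect sqrt(2) ≈ 1.414, 2*sqrt(2) ≈ 2.828, and 2*sqrt(2)/pi ≈ 0.900
```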
12 Misc
12.0.1 Noise
Here is an interesting article on the colors of noise. You've probably used the
term white noise hundreds of times without realizing why it is called white.
http://en.wikipedia.org/wiki/Colors_of_noise