
JOURNAL OF MAGNETIC RESONANCE x, 223–228 (1977)

Algebraic Analysis of Noisy Exponential Decays

L. A. MCLACHLAN
Institute of Nuclear Sciences, D.S.I.R., Private Bag, Lower Hutt, New Zealand

Received August 29, 1976

A simple algebraic method of determining the relaxation time of an exponential decay with an unknown baseline is presented. This method is suitable for on-line computer analysis, is very stable, and is comparable in accuracy to other simplified ways of analyzing exponential decays which have been described in the last few years.

INTRODUCTION
A common problem in pulsed nuclear magnetic resonance experiments is the analysis of a noisy exponential decay to determine the time constant τ. Such a signal is described by

y(t) = A exp(−t/τ) + B + v(t),   [1]

where A and B are constants and v(t) is a Gaussian random variable of rms amplitude σ. At equispaced time intervals Δ, a digital signal averager makes N sequential measurements of this signal, so it may be written

y(nΔ) = A exp(−nΔ/τ) + B + v(nΔ),   [2]

where 0 ≤ n < N. Linear or nonlinear least-squares-fitting techniques are then usually used to extract τ from this equation. Recently, however, two papers have appeared describing much simpler methods of analysis which need only modest computing facilities.
In the first of these (1), τ is found from the difference between the logarithms of pairs of points, with the baseline obtained by varying B until the variance of τ is a minimum. A subsequent paper (2) showed that by first grouping the points in threes and solving the resulting simultaneous equations for B, the accuracy of the method proposed in the first paper can be improved. It is shown in this paper that a further simplification is obtained by grouping the data in four blocks and solving the resulting simultaneous equations for τ. The expression obtained is in a form suitable for use with an on-line computer, or even an ordinary scientific pocket calculator with limited memory capacity.

DATA ANALYSIS
The N measurements are split into four blocks and each block is summed to give

S1 = Σ_{n=0}^{N/4−1} y(nΔ) = A(1 − Q)(1 − R)^{−1} + ¼NB + (N/4)^{1/2} V1,

S2 = Σ_{n=N/4}^{N/2−1} y(nΔ) = AQ(1 − Q)(1 − R)^{−1} + ¼NB + (N/4)^{1/2} V2,
S3 = Σ_{n=N/2}^{3N/4−1} y(nΔ) = AQ²(1 − Q)(1 − R)^{−1} + ¼NB + (N/4)^{1/2} V3,

S4 = Σ_{n=3N/4}^{N−1} y(nΔ) = AQ³(1 − Q)(1 − R)^{−1} + ¼NB + (N/4)^{1/2} V4,   [3]

where R = exp(−Δ/τ), Q = exp(−¼NΔ/τ), and V1, V2, V3, and V4 are random variables with rms values of σ. Solving these four equations for τ gives

τ^{−1} = 4(NΔ)^{−1} ln[(S1 − S3)(S2 − S4)^{−1}].   [4]
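As a concrete illustration (not part of the original paper), Eq. [4] takes only a few lines to implement. The following Python sketch, with illustrative parameter values, assumes N is a multiple of 4 and that the data arrive as an array y sampled at spacing Δ:

    import numpy as np

    def tau_four_block(y, delta):
        # Eq. [4]: sum four consecutive blocks, then take the log of the
        # ratio of the two uncorrelated differences (S1 - S3) and (S2 - S4).
        n = len(y) - len(y) % 4                  # truncate to a multiple of 4
        s1, s2, s3, s4 = y[:n].reshape(4, n // 4).sum(axis=1)
        return n * delta / (4.0 * np.log((s1 - s3) / (s2 - s4)))

    # Synthetic decay with tau = 1, unknown baseline B, rms noise sigma:
    rng = np.random.default_rng(0)
    N, tau, A, B, sigma = 400, 1.0, 1.0, 0.2, 0.01
    delta = 5.2 * tau / N                        # optimum N*delta = 5.2*tau (see below)
    t = delta * np.arange(N)
    y = A * np.exp(-t / tau) + B + sigma * rng.standard_normal(N)
    print(tau_four_block(y, delta))              # close to 1.0

Only the four running sums need be kept, which is what makes the method suitable for a small on-line computer or a calculator.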
Using the approximation ln(1 + x) ≈ x gives the fractional error E in τ as

E = 2σA^{−1} F(Δ/τ) G(NΔ/τ),   [5]

where

F(x) = x^{−1/2}[1 − exp(−x)],   [6]

G(x) = x^{−1/2}[2{1 + exp(½x)}]^{1/2}[1 − exp(−¼x)]^{−1}[1 − exp(−½x)]^{−1}.   [7]
The error in τ depends upon two parameters, the normalized total measuring time NΔ/τ, and the normalized time between two sequential measurements Δ/τ. Equations [5], [6], and [7] show that, since F(Δ/τ) is a monotonically decreasing function, NΔ should be chosen so as to minimize G(NΔ/τ). A minimum occurs in G when NΔ = 5.2τ, which fortunately is broad, so NΔ need not be set accurately. For instance, E is within 20% of its minimum value for NΔ in the range of 3.3τ to 6.6τ. If at all possible, short sweep times should be avoided since the error increases rapidly for small NΔ/τ, being double its minimum value for NΔ = 2.1τ. Conversely, long sweep times are insensitive to errors in the choice of NΔ, requiring NΔ = 11τ for a doubling in error.
At the minimum of G, the value of E is

E_m = 3.07 N^{1/2} σA^{−1}[1 − exp(−5.2/N)].   [8]

This monotonically decreasing expression reaches its familiar asymptotic form of E_m ∝ N^{−1/2} for N greater than about 30.

For N less than about 30, E_m is smaller than the value given by its asymptotic form. For some types of experiments, such as T1 measurements, a modest gain in accuracy can thus be obtained by doing repeated measurements with a comparatively small N and adding them, rather than doing one measurement with a very large N. Repeated measurements, rather than a single measurement occupying the same time, also often reduce the effects of slow drifts in the equipment.
The expression for E assumed ln(1 + x) ≈ x in its derivation. This approximation is valid provided S2 − S4 ≫ (N/2)^{1/2}σ, which, at the E minimum, can be shown to be equivalent to N^{1/2} ≫ 40σ/A by substituting in Eq. [3]. The latter inequality is easily satisfied for A/σ greater than about 40 and can be satisfied for much smaller values by using an N of some hundreds. Unfortunately, this contradicts the desirability of a small value of N, which, as mentioned in the preceding paragraph, sometimes occurs.
If an estimate of the accuracy of τ is also required, then both σ and A must be found and substituted into Eq. [5] or Eq. [8]. If σ is already known, then simply subtracting the last value of y measured from the first will give a sufficiently accurate value of A for calculating E. More commonly, σ must be found from the root mean square difference
between the actual values of y and those calculated from an exponential decay using the measured values of A, B, and τ. Solving the simultaneous equations [3] gives

A = (S1 − S2)(S1 − S3)(1 − R)/[(S1 + S4 − S2 − S3)(1 − Q)],   [9]

and

B = 4(S1S4 − S2S3)/[N(S1 + S4 − S2 − S3)].   [10]

Along with the calculated value of τ and the measured values of y, these give a value of σ with a fractional error of order N^{−1/2} which may then be used to obtain E. Unfortunately, estimating the accuracy of a measurement of τ requires better computing facilities than those required for calculating τ alone, since the N data points must now be stored while A, B, and τ are being calculated.
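A sketch of this bookkeeping (again in Python, with Eqs. [9] and [10] in the reconstructed form given above; tau is the value already obtained from Eq. [4]) might look as follows:

    import numpy as np

    def amplitude_baseline_noise(y, delta, tau):
        n = len(y) - len(y) % 4
        s1, s2, s3, s4 = y[:n].reshape(4, n // 4).sum(axis=1)
        R = np.exp(-delta / tau)
        Q = np.exp(-n * delta / (4.0 * tau))
        d = s1 + s4 - s2 - s3
        A = (s1 - s2) * (s1 - s3) * (1.0 - R) / (d * (1.0 - Q))   # Eq. [9]
        B = 4.0 * (s1 * s4 - s2 * s3) / (n * d)                   # Eq. [10]
        t = delta * np.arange(n)
        sigma = np.std(y[:n] - A * np.exp(-t / tau) - B)          # rms residual
        return A, B, sigma

Note that, unlike the calculation of τ itself, the residual calculation requires all N data points to be retained.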
At first sight, since there are only three unknowns in Eq. [1], it would seem best to divide the data into three blocks and solve the resulting three equations for τ. Indeed, if this is done, the expression obtained,

τ^{−1} = 3(NΔ)^{−1} ln[(S1 − S2)(S2 − S3)^{−1}],   [11]

is similar to Eq. [4]. There is, however, one significant difference between the two expressions. In Eq. [11], noise associated with S2 appears in the numerator and denominator in such a way that its effect is maximized, but in Eq. [4] there is no correlation between noise in the numerator and that in the denominator. Because of this difference in noise correlation, when the detailed expressions for E are examined, it is found that in all circumstances the error for Eq. [11] is at least 30% larger than that for Eq. [4].
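The correlation penalty of the three-block formula is easy to see in simulation. The following Python sketch (illustrative parameters, not from the paper, with N chosen divisible by both 3 and 4) compares the scatter of Eq. [11] with that of Eq. [4] on the same synthetic decays:

    import numpy as np

    rng = np.random.default_rng(1)
    N, tau, A, B, sigma = 396, 1.0, 1.0, 0.2, 0.02
    delta = 5.2 * tau / N
    t = delta * np.arange(N)

    est4, est3 = [], []
    for _ in range(2000):
        y = A * np.exp(-t / tau) + B + sigma * rng.standard_normal(N)
        s1, s2, s3, s4 = y.reshape(4, N // 4).sum(axis=1)
        est4.append(N * delta / (4.0 * np.log((s1 - s3) / (s2 - s4))))  # Eq. [4]
        u1, u2, u3 = y.reshape(3, N // 3).sum(axis=1)
        est3.append(N * delta / (3.0 * np.log((u1 - u2) / (u2 - u3))))  # Eq. [11]

    print(np.std(est4), np.std(est3))   # Eq. [11] scatter should be larger

The standard deviation of the three-block estimates should come out noticeably larger than that of the four-block estimates, consistent with the 30% figure quoted above.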
Although there is a distinct advantage in dividing the data into four blocks for calculating τ, this does not apply to calculating A or B, since in these cases correlations exist between noise in the numerator and noise in the denominator in both [9] and [10]. Detailed calculations show that for NΔ ≤ 2.5τ, Eq. [10] is slightly more accurate than the corresponding equation for three data blocks, but in the asymptotic limit of NΔ = ∞, it is about 13% poorer in accuracy.
These equations have all been derived under the assumption that the noise voltages at consecutive samplings are uncorrelated. This is true for T1 measurements, and is usually true for spin-echo measurements, but it is not true for T2 measurements from the free induction decay, since the sampling theorem requires that the noise bandwidth be narrow enough for at least two consecutive sampled noise voltages to be partially correlated. To avoid signal distortion, however, the bandwidth must be wide enough for the number of partially correlated sequential noise voltages sampled to be much less than ¼N. Periodic noise, such as mains frequency ripple, may also introduce correlation between the noise voltages.
The effect of correlated noise voltages is to increase the rms value of the cumulative noise voltage V of Eq. [3]. Since the correlation extends over less than about N/40 consecutive samplings, the increase in V is small, usually less than 10%. Furthermore, the form of Eq. [4] is such that the net effect of correlations between V1 and V2, V2 and V3, and V3 and V4 is partially to cancel the increase in E expected from the increase in V alone. Indeed, in the case of periodic correlations, complete cancellation of the extra correlated noise can sometimes occur. Thus, for almost all cases likely to be met in
practice, the assumption of uncorrelated noise is either exact, or an excellent approximation.

DISCUSSION
A comparison of the various methods of analyzing exponential decays should always include three features: accuracy, computational complexity, and stability against noise. Of prime importance is the ability of a method to give a small value of E for a given value of N. It is often also advantageous for a method to have its minimum value of E occurring for a small value of NΔ/τ, since this keeps the total experimental time needed for a given accuracy as short as possible. For some pulse sequences, the savings in time can be considerable (3), but for other experimental situations the time saved may be a minor consideration. Many workers have access to computers of sufficient power to handle any of the least-squares methods commonly used, but for those less fortunate the computational complexity becomes important. Nor must computational simplicity be obtained at the expense of length of running time, as occurs in some iterative methods. Only the luckiest of experimenters always has signal-to-noise ratios of 100 or more; yet many approaches to analyzing exponential decays are somewhat unstable in the presence of noise and may be unreliable for signal-to-noise ratios lower than about 50, where they are most needed.
By fitting artificial computer-generated noisy exponential decays, it was shown that the error in τ was accurately given by Eq. [5] for A/σ > 20. On the basis of this equation, the accuracy of the method given in this paper was compared with that of the method of Moore and Yalcin (1), and the modified method of Smith and Buckmaster (2). For ten sampling points, A/σ = 100, and an unknown baseline, Moore and Yalcin's method gives E_m as 4.4%, while both the present method and that of Smith and Buckmaster have E_m = 3.9%. In the experimentally uncommon situation, at least for NMR, where the baseline position is known to much greater accuracy than the rms noise level, Moore and Yalcin's method gives E_m = 1.6%.
All three methods have an optimum measuring time. Moore and Yalcin's method has an optimum time of 2.2τ, which is much shorter than the 5τ of the present method and is often shorter than the time required for Smith and Buckmaster's method. This may sometimes compensate for its poorer intrinsic accuracy. For the greatest accuracy, Smith and Buckmaster's approach requires as long a time as possible for accurately measuring the baseline, and a much shorter measuring time in the region of τ to 2.2τ for actually evaluating the decay time.
The presence of an optimum measuring time means that prior knowledge of τ to within a factor of 2 is desirable for optimum use of any of these methods. A poor estimate of τ has less effect on the accuracy of the present method than on that of the other two methods. Not only is the minimum broad, but it occurs at over twice the value of NΔ/τ, so, for the same absolute error in the estimate of τ, the fractional error is less than half that of the other methods.
Of the many methods available for analyzing exponential decays, the one proposed
in this paper appears to be the simplest. It can readily be converted into an algorithm
for on-line computer analysis which uses the data points as they arrive, and then immedi-
ately discards them, storing only their sums. Because of the limited storage capacity
and trivial amount of mathematical manipulation needed, even a simple scientific
pocket calculator can be used. Prior manipulation of the data is unnecessary since the ability to handle an arbitrary baseline means that it is immaterial whether y obeys Eq. [1], or the form 1 − 2 exp(−t/τ) obtained from 180°–90° spin-lattice relaxation time measurements. As mentioned before, if an estimate of the accuracy of τ is also required, computational facilities more elaborate than a pocket calculator are needed, but the requirements are still less than for most other methods.
Another advantage of the present method is its stability in the presence of noise.
Many linear, or nonlinear, least-squares-fitting methods suffer from the disadvantage
that they are unstable with noise levels of only a few percent, even though they may be
very accurate at lower noise levels. This instability always arises in some form or other
from a denominator involving the small difference of two numbers, both of which con-
tain noise. Probably all methods suffer from this defect to some extent, but in the pre-
sent method it is minimized both because the differences in the sums are comparatively
large numbers and because the noise is averaged by summing the individual points.
Inequalities similar to those in the earlier discussion on the validity of approximating ln(1 + x) by x also govern the stability of the solution. The evaluation of computer-generated data showed that the method was stable when σ/A = 0.1 and was even stable for σ/A = 0.3, provided NΔ was in the region of 4τ to 6τ. Only the most desperate experimenter would require stability in higher noise levels than this. Experimentally, it was found that even when the exponential had a superimposed sinusoidal component, as found in Carr-Purcell-Gill-Meiboom experiments with pulse imperfections, the method remained stable for large amplitudes of the oscillation, provided the oscillatory period was much less than NΔ/4.
There are two other effects which cause minor errors in some methods of analyzing decay curves. Moore and Yalcin mentioned one of these: bias in τ introduced by the presence of noise. The present method, along with that of Moore and Yalcin, involves taking the logarithm of the experimentally measured amplitude; this is the sum of the true amplitude and a random noise voltage whose mean value is zero. Taking the logarithm, however, is a nonlinear process, so the mean value of the logarithmic function is not the logarithm of the true amplitude, but is displaced from it by a small amount which depends on the signal-to-noise ratio. This systematic error in amplitude becomes a systematic error in τ whose magnitude depends to some extent on the method of analysis. Robinson (4) examined this problem for six different methods of analysis in cases where the noise obeyed Poisson statistics. Using the same type of investigation, he found that for the present case of Gaussian statistics (5)

τ_t = τ′[1 + (σ/A)²(Δ/τ′)Q^{−2}(1 − Q^{−2})(1 + Q²)^{−1}],   [12]

where τ_t is the true value of τ, and τ′ is the experimentally determined value. A comparison of this expression with that for statistical errors (Eq. [5]) shows that for all reasonable experimental situations the statistical error is much larger than the systematic error. In the situation where repeated measurements are made of the same decay time, a correction for the systematic error may, however, need to be made. Although no experimental examination of the bias in Eq. [4] was undertaken, it was noticed that with computer-generated data, τ tended to be about 5% too high when σ/A = 0.1. This trend is in reasonable agreement with the 2% error calculated from Eq. [12] for the same parameters.
The discrete nature of the analog-to-digital conversion process also limits the accuracy with which τ can be measured. If the discrete voltage step D is much greater than the rms noise, then the error in the baseline position is about ±½D, so the error in τ is of the order of ±½D/A. On the other hand, when σ ≳ D, the quantizing of the voltage becomes less important, being partially or completely smoothed out by the averaging effect of the random noise (6). Most commercial analog-to-digital converters have such good resolution that the quantization effects can be ignored, but some fast analog-to-digital converters have only 6-bit resolution, and in this situation there may be no advantage in having NΔ > 4τ, even though the nominal optimum value is larger than this.
This method of analysis can be systematically extended to the much more difficult case of multiple exponential decays by splitting the data up into more blocks and solving the increased number of simultaneous equations by matrix methods. A preliminary study of two decays suggested, however, that the errors in the time constants are large and that other algebraic methods (7) may be better for the multiple-decay case. Although other methods may be better for their analysis, testing for the presence of multiple decays can be done by the present method simply by using a range of NΔ/τ values from about 2.5 to 7. With a single exponential decay, the values of τ obtained from Eq. [4] are independent of NΔ/τ, but if there is more than one decay present, the value of τ will steadily increase with increasing NΔ/τ. Such a systematic trend is easily detectable even if it is not much bigger than the noise level, so the method is quite a sensitive indicator of the presence of multiple decays.
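As an illustration (a Python sketch with hypothetical two-component data, not from the paper), sweeping NΔ/τ and watching the apparent τ rise exposes a second component:

    import numpy as np

    def tau_four_block(y, delta):
        n = len(y) - len(y) % 4
        s1, s2, s3, s4 = y[:n].reshape(4, n // 4).sum(axis=1)
        return n * delta / (4.0 * np.log((s1 - s3) / (s2 - s4)))

    rng = np.random.default_rng(2)
    N = 400
    for sweep in (2.5, 4.0, 5.5, 7.0):          # values of N*delta/tau1
        delta = sweep / N                       # tau1 = 1, tau2 = 3
        t = delta * np.arange(N)
        y = (np.exp(-t) + 0.5 * np.exp(-t / 3.0)
             + 0.01 * rng.standard_normal(N))
        print(sweep, tau_four_block(y, delta))  # apparent tau rises with sweep

A single-exponential input gives the same τ (within noise) at every sweep, so a steady upward trend in this printout is the signature of multiple decays.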

ACKNOWLEDGMENTS
The author wishes to thank Mr. G. J. McCallum for willingly doing the computer programming, and
Dr. D. C. Robinson for calculating the systematic error expression.

REFERENCES
1. W. S. MOORE AND T. YALCIN, J. Magn. Resonance 11, 50 (1973).
2. M. R. SMITH AND H. A. BUCKMASTER, J. Magn. Resonance 17, 29 (1975).
3. G. G. MCDONALD AND J. S. LEIGH, J. Magn. Resonance 9, 358 (1973).
4. D. C. ROBINSON, UKAEA Report AERE-R5911 (1968).
5. D. C. ROBINSON, personal communication.
6. J. BUTTERWORTH, D. E. MACLAUGHLIN, AND B. C. MOSS, Rev. Sci. Instrum. 44, 1029 (1967).
7. O. CAPRANI, E. SVEINSDOTTIR, AND N. LASSEN, J. Theor. Biol. 52, 299 (1975).
