
The construction of minimax rational approximations to functions

By Alan Curtis* and M. R. Osborne†


The problem of constructing minimax rational approximations to functions is surveyed, and an
algorithm having quadratic convergence is suggested for their computation. Some numerical
results are presented.

1. Introduction

Methods for constructing the best approximation in the sense of Chebyshev to a continuous function on a finite interval by means of a polynomial of given degree have been available for at least thirty years (see, for example, Remez, 1934). Commonly, these methods make use of the classical result (Chebyshev, 1899) that the best approximation is unique and is characterized by the following oscillation properties. Let f(x) be continuous in a <= x <= b, and let P = Σ_{i=0}^{n} p_i x^i be the best approximation to f(x) in this range by polynomials of degree n. Then there exists a set of points x_1 < x_2 < ... < x_{n+2} such that

(i) a <= x_1, x_{n+2} <= b,
(ii) f(x_s) − P(x_s) = (−1)^s h,§  s = 1, 2, ..., n + 2,
(iii) max |f − P| = |h|.

Any set of n + 2 distinct points x_s* satisfying condition (i) above will be called a reference and written [x*]. For any reference, condition (ii) is a set of linear equations for the coefficients p_i, i = 0, 1, ..., n, and h*. This set of equations can always be solved. For let λ_s* be the signed cofactor of (−1)^s; then a simple calculation shows that

  λ_s* is proportional to (−1)^s Π_{i<j; i,j≠s} (x_j* − x_i*),   (1.1)

so that the λ_s* alternate in sign. Therefore the modulus of the determinant is equal to Σ_{s=1}^{n+2} |λ_s*| and is therefore non-zero. The polynomial P*, satisfying (ii) when x_s = x_s*, is called the levelled reference function, and h* is called the levelled reference deviation for the reference [x*]. The graph of (f − P*) is called the error curve. Note that the λ_s* are proportional to the coefficients of the divided difference operator Δ(1, 2, ..., n + 2) defined on the points of the reference. This means that if Q(x) is a function with a continuous (n + 1)th derivative then Σ_{j=1}^{n+2} λ_j* Q(x_j*) is proportional to d^{n+1}Q/dx^{n+1} evaluated at some point in [a, b]. In particular, Σ_{j=1}^{n+2} λ_j* Q(x_j*) vanishes if Q is a polynomial of degree <= n.

The standard method for computing P is often referred to as the exchange algorithm. It starts with a reference [x^(0)] which is successively improved by computing (at the rth stage) the levelled reference function P^(r), determining the points of maximum deviation from zero of f − P^(r) (there are at least n + 2 of these by the manner of constructing P^(r)), and then selecting the points of [x^(r+1)] from among these points of maximum deviation. It is always possible to ensure that (f − P^(r)) alternates in sign on [x^(r+1)]. In this case the convergence of the algorithm is readily demonstrated (Rice, 1964). Veidinger (1960) has shown that the algorithm is actually quadratically convergent.

The best approximation to a continuous function on a finite interval by the quotient of two polynomials of given degrees is characterized in general by oscillation properties very similar to those for best polynomial approximations. There are, however, degenerate cases (Achieser, 1956). Here we consider only the most usual case, where the error curve is said to have standard form (Maehly, 1963). Let P and Q be polynomials of degree p and q, respectively. We consider references having p + q + 2 points, and the optimal approximation is characterized by the existence of a reference such that

(i) a <= x_1, and x_{p+q+2} <= b,
(ii) f(x_s) − P(x_s)/Q(x_s) = (−1)^s h,
(iii) max |f − P/Q| = |h|.

The case where P and Q have common factors is degenerate; it occurs only for special functions f(x) and will not be considered here. In all other cases equation (iii) implies that Q is one-signed in [a, b].

§ A positive function g(x) can be introduced to weight the errors if they are not equally important for all values of x. This may be done by replacing (−1)^s by (−1)^s g(x_s). We shall consider only g(x) = 1 in this paper, except in Section 5.
* A.E.R.E., Harwell, Didcot, Berks.
† Edinburgh University Computer Unit (now at Australian National University).

Downloaded from https://academic.oup.com/comjnl/article-abstract/9/3/286/406298 by guest on 25 March 2019
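As a concrete illustration of conditions (i)-(iii), the linear system of condition (ii) can be solved directly on a fixed reference. The sketch below is our illustration, not code from the paper: it levels f(x) = x^2 on the reference {0, 1/2, 1} with n = 1, giving P(x) = x − 1/8 with deviation |h| = 1/8, which happens to be the true best approximation since the error curve equioscillates at the reference points.

```python
# Solve condition (ii) on a reference:
#   p_0 + p_1*x_s + ... + p_n*x_s^n + (-1)^s * h = f(x_s),  s = 1, ..., n+2,
# for the coefficients p_i and the levelled reference deviation h.

def solve(A, rhs):
    """Tiny Gaussian elimination with partial pivoting (adequate for this sketch)."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def levelled_reference(f, ref):
    """Return ([p_0, ..., p_n], h) for the levelled reference function on `ref`."""
    m = len(ref)  # m = n + 2 points; polynomial degree n = m - 2
    A = [[x ** i for i in range(m - 1)] + [(-1.0) ** s]
         for s, x in enumerate(ref, start=1)]
    rhs = [f(x) for x in ref]
    *p, h = solve(A, rhs)
    return p, h

p, h = levelled_reference(lambda x: x * x, [0.0, 0.5, 1.0])
print(p, h)   # [-0.125, 1.0] -0.125, i.e. P(x) = x - 1/8, |h| = 1/8
```

The sign of h depends only on the orientation of the alternation; it is the modulus |h| that condition (iii) refers to.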
These oscillation properties make it reasonable to try a method similar to that described for polynomials to compute the best rational approximation, and several such algorithms have been published recently (Maehly, 1963; Werner, 1962 and 1963; Stoer, 1964). These algorithms differ one from another, and from the algorithm for polynomial approximation, in the method used for computing the levelled reference function on the current reference (let this be the jth). Here the coefficients in P^(j)/Q^(j) and h^(j) must satisfy

  f(x_s^(j)) Q^(j)(x_s^(j)) − P^(j)(x_s^(j)) − (−1)^s h^(j) Q^(j)(x_s^(j)) = 0,  s = 1, 2, ..., p + q + 2.   (1.2)

These equations are homogeneous in the coefficients of P^(j) and Q^(j) and depend on the parameter h^(j), so that the levelled reference function is a solution of an eigenvalue problem.

We describe our algorithm for computing best rational approximations in Section 4. Here we note that the determinant of equation (1.2) is a polynomial of degree q + 1 in h^(j), so that there are q + 1 possible rational approximations at each stage, and this gives rise to the question of which, if any, are suitable. Maehly (1963) has shown that at most one of the solutions of (1.2) associated with distinct eigenvalues can be pole-free in [a, b]. He has also shown by giving examples that there need not be a pole-free solution, and that the pole-free solution need not be associated with the value of h of smallest modulus. These examples are discussed in the next section. Werner (1963) has shown that the values of h determined by (1.2) are real, and he has given necessary and sufficient conditions for the existence of a pole-free solution in the case where Q is linear. He also discusses the case where equation (1.2) has repeated eigenvalues, and shows that this corresponds to a degenerate case of the theorem guaranteeing the existence of a best approximation.

In the next section we give a sufficient condition for the pole-free solution (assuming it exists) to be associated with the eigenvalue of smallest modulus for the case in which Q is linear. We use arguments very similar to those used by Werner in deriving his necessary and sufficient conditions. These conditions appear to be of little practical value, and our result is something of a compromise.

In Section 3 we show that the exchange algorithm is quadratically convergent. Our proof is an extension to the rational case of the one given by Veidinger for polynomials. In Section 5 we consider an example which is difficult to do by other means (such as by computing Padé approximants), but which is very successfully treated by the methods of this paper. We have tried to give fairly full computational details of the methods we use when these are not readily available in the literature. Most of this material is included in Section 4.

2. The pole-free solution

In this section we show that the pole-free solution (if it exists) is associated with the eigenvalue of smallest modulus in the case Q linear, provided d^{p+1}f/dx^{p+1} is one-signed in [a, b]. In this case P/Q is pole-free if Q(a) and Q(b) have the same sign.

The result follows from a straightforward application of the composition law for divided differences

  Δ(1, ..., N)uv = Σ_{s=1}^{N} Δ(1, 2, ..., s)u Δ(s, ..., N)v.   (2.1)

In fact, if we set N = p + q + 2, u = Q, and define v to be a function cotabular with f(x_s) − (−1)^s h, s = 1, 2, ..., N, then we have from equation (1.2) that

  0 = Δ(1, ..., N)P = Δ(1, ..., N)Qv = Q(x_1) Δ(1, ..., N)v + Δ(1, 2)Q Δ(2, ..., N)v,

using the linearity of Q, and similarly, taking the points in the reverse order,

  0 = Q(x_N) Δ(1, ..., N)v + Δ(N − 1, N)Q Δ(1, ..., N − 1)v.

Since Δ(1, 2)Q = Δ(N − 1, N)Q when Q is linear, Q is one-signed in [a, b] provided Δ(1, ..., N − 1)v and Δ(2, ..., N)v have the same sign.

Let w be a function which is equal to (−1)^s when x = x_s, s = 1, 2, ..., N; then

  Δ(1, ..., N − 1)v = Δ(1, ..., N − 1)f − h Δ(1, ..., N − 1)w

and

  Δ(2, ..., N)v = Δ(2, ..., N)f − h Δ(2, ..., N)w.

The divided differences of f in the above expressions have the same sign, as both are proportional to d^{p+1}f/dx^{p+1} evaluated at some point of the interval. Therefore these expressions can only be one-signed if h is the eigenvalue of least modulus, as there can only be one pole-free solution, and the divided differences of w are clearly of opposite sign.

It is interesting to consider here the examples given in Maehly (1963).

(a) We fit to f = x in [−1, 1] with p = 0, q = 1, x_1 = −1, x_2 = 0, x_3 = 1. Here, although df/dx = 1 is one-signed, there is no pole-free solution, so our result does not apply. This actually corresponds to a degenerate case of the basic theorem.

(b) We fit to f = 2 + 2.5x − 0.5x^2 in [1, 6] with p = 0, q = 1, x_1 = 1, x_2 = 2, x_3 = 5. Here df/dx = 2.5 − x and changes sign in the range. There is a pole-free approximation but it is associated with the eigenvalue of largest modulus.

The technique used in deriving the above sufficient condition can be applied also when Q is of higher degree than 1. For example, if Q is quadratic then Q(x_1) and Q(x_N) will have the same sign if

  | Δ(3, ..., N)v      Δ(2, ..., N)v     |
  | Δ(3, ..., N − 1)v  Δ(2, ..., N − 1)v |

and

  | Δ(1, ..., N − 2)v  Δ(1, ..., N − 1)v |
  | Δ(2, ..., N − 2)v  Δ(2, ..., N − 1)v |
have the same sign. Q will be pole-free either if Δ(1, 2)Q and Δ(N − 1, N)Q have the same sign, or if they differ and the value of Q where dQ/dx = 0 has the same sign as Q(x_1) and Q(x_N). Both these conditions can be expressed as 2 × 2 determinants. For example, Δ(1, 2)Q and Δ(N − 1, N)Q have the same sign if

  | Δ(1, ..., N)v      Δ(3, ..., N)v     |
  | Δ(1, ..., N − 1)v  Δ(3, ..., N − 1)v |

and

  | Δ(1, ..., N)v  Δ(1, ..., N − 2)v |
  | Δ(2, ..., N)v  Δ(2, ..., N − 2)v |

have the same sign.

In similar fashion one could derive conditions for denominators of higher degree. But even in the case of linear Q the condition we derive involves (in general) a fairly high order derivative of f and is therefore unsuitable for practical applications. Werner's necessary and sufficient conditions for this case are open to similar criticism. However, these results do seem to suggest that it is necessary for the degree of the numerator of the approximations to be sufficiently high for it to absorb most of the oscillations of f. This can be used as a rule of thumb for an attack on the main problem on an experimental basis.
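The quantities Δ(1, ..., N)v used throughout this section are ordinary Newton divided differences. The following sketch (ours, not the authors') computes them and checks two facts relied on above: a divided difference of order k annihilates polynomials of degree below k, and the divided differences of the alternating function w formed on the first N − 1 and on the last N − 1 points have opposite signs.

```python
def divided_difference(xs, ys):
    """Highest-order Newton divided difference of the tabulated data (xs, ys)."""
    d = list(ys)
    n = len(xs)
    for order in range(1, n):
        d = [(d[i + 1] - d[i]) / (xs[i + order] - xs[i]) for i in range(n - order)]
    return d[0]

xs = [0.0, 0.3, 1.1, 2.0, 2.7]                 # N = 5 points
# The order-4 divided difference of a cubic vanishes; for x^4 it equals the
# leading coefficient.
cubic = [2 - x + 3 * x ** 3 for x in xs]
quartic = [x ** 4 for x in xs]
print(divided_difference(xs, cubic))           # 0 (up to rounding)
print(divided_difference(xs, quartic))         # 1.0 (up to rounding)

# w(x_s) = (-1)^s: its divided differences on the first and last N - 1 points
# differ in sign, as used in the argument above.
w = [(-1.0) ** s for s in range(1, len(xs) + 1)]
d_front = divided_difference(xs[:-1], w[:-1])
d_back = divided_difference(xs[1:], w[1:])
print(d_front * d_back < 0)                    # True
```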
3. Rate of convergence of the exchange algorithm

The convergence of the exchange algorithm for rational approximations in the case in which the error curve has standard form is demonstrated, for example, in Rice (1964). Here we show that the ultimate rate of convergence of the exchange algorithm is quadratic. Our proof exactly parallels that given in Veidinger (1960) for the polynomial case. We make the following assumptions:

(i) The rational approximation is pole-free on every reference used in the computation.

(ii) The appropriate eigenvalue is, at each stage, a simple root of the characteristic equation.

(iii) The successive error curves have non-vanishing second derivatives in small neighbourhoods containing each of the extrema which are interior to [a, b].

Assumptions (i) and (ii) ensure that the error curve has standard form, while assumption (iii) is necessary for quadratic convergence. Assumption (iii) is usually made in implementing the exchange algorithm, the extrema of the error curves being determined by locally fitting quadratic interpolation polynomials and computing the extrema of these.

For simplicity we assume that a = 0 and b = 1, and that these points are always points of the reference. The more general case is treated by Veidinger, and this can also be done for rational approximation. By assumption (i), Q(0) ≠ 0. It is therefore possible to scale each levelled reference function so that the constant term in its denominator is equal to 1. It follows from assumption (ii) that the rank of the matrix of equation (1.2) is N − 1. Thus at least one of the cofactors of the elements of the column of this matrix corresponding to the constant term in the denominator is non-zero.

Let the error at the jth stage be η^(j)(x). Then the interior points of the new reference satisfy

  dη^(j)/dx (x_i^(j+1)) = 0,  i = 2, ..., N − 1,   (3.1)

so that

  dη^(j)/dx (x_i^(j)) = (x_i^(j) − x_i^(j+1)) d²η^(j)/dx² (y_i),   (3.2)

where y_i is a mean value between x_i^(j+1) and x_i^(j). On the new reference we have

  f(x_i^(j+1)) − P^(j+1)(x_i^(j+1))/Q^(j+1)(x_i^(j+1)) = (−1)^i h^(j+1),   (3.3)

whence, using the properties of the λ_i (see equation (1.1)), h^(j+1) can be written as a weighted mean of the values η^(j)(x_i^(j+1)), the weights being of constant sign.   (3.4)

Now, as the iteration is convergent, a stage of the iteration will be reached when η^(j)(x_i^(j+1)) and η^(j)(x_i^(j)) = (−1)^i h^(j) have the same signs for each i. In this case we have, expanding η^(j) about the extremum x_i^(j+1),

  |η^(j)(x_i^(j+1))| − |h^(j)| = O((x_i^(j) − x_i^(j+1))²),   (3.5)

so that

  h^(j+1) − h^(j) = O(max_i (x_i^(j) − x_i^(j+1))²)   (3.6)

by equation (3.2). Note that assumption (iii) is being used here.

Now by equation (3.3) we have for each point of the new reference

  (Q^(j+1) − Q^(j)) f − (P^(j+1) − P^(j)) − (−1)^i h^(j+1) (Q^(j+1) − Q^(j)) = O(h^(j+1) − h^(j))   (3.7)

by equation (3.6). This is a set of equations for the differences in the coefficients of two consecutive approximations. The matrix is that of the eigenvalue problem determining h^(j+1) and is therefore singular. However, the equations are compatible by construction. Also Q^(j) and Q^(j+1) are both assumed scaled so that they have constant term 1. Thus the vector of differences in the coefficients contains a zero element in the position of the constant term in the denominator. Now the column of the matrix corresponding to this zero element has at least one non-singular cofactor. If we select one such cofactor and delete the remaining row and column, then the vector of remaining coefficients is determined by a non-singular set of equations. As the right-hand side is O(h^(j+1) − h^(j)), this estimate also holds for the solution.

Consider now the equation

  η^(j) − η^(j−1) = (P^(j−1)Q^(j) − P^(j)Q^(j−1)) / (Q^(j−1)Q^(j)),   (3.8)

which holds for all x in [a, b]. Differentiating, setting x = x_i^(j), i = 2, ..., N − 1, and using dη^(j−1)/dx (x_i^(j)) = 0, we obtain

  Q^(j−1)Q^(j) dη^(j)/dx = −(η^(j) − η^(j−1)) d(Q^(j)Q^(j−1))/dx + d(P^(j−1)Q^(j) − P^(j)Q^(j−1))/dx

for x ∈ [x^(j)]. This gives

  dη^(j)/dx (x_i^(j)) = O(h^(j) − h^(j−1)),  i = 2, ..., N − 1,   (3.9)

as (i) Q^(j−1)Q^(j) > 0 in [0, 1]; (ii) the coefficients of P^(j) − P^(j−1) and Q^(j) − Q^(j−1) are O(h^(j) − h^(j−1)), so that the derivative of P^(j−1)Q^(j) − P^(j)Q^(j−1) is also O(h^(j) − h^(j−1)) (as the derivative of a polynomial whose coefficients satisfy this relation); and (iii), by equation (3.8), η^(j) − η^(j−1) = O(h^(j) − h^(j−1)).

Now, by the mean-value theorem, we have

  dη^(j)/dx (x_i^(j)) = dη^(j)/dx (x_i^(j)) − dη^(j)/dx (x_i^(j+1)) = (x_i^(j) − x_i^(j+1)) d²η^(j)/dx² (z_i^(j)),

z_i^(j) being a mean value. Whence, using assumption (iii) and equation (3.9),

  x_i^(j) − x_i^(j+1) = O(h^(j) − h^(j−1)).   (3.10)

Substituting this result in equation (3.6) gives

  h^(j+1) − h^(j) = O((h^(j) − h^(j−1))²),

and this demonstrates the quadratic convergence of the exchange algorithm.
are both assumed scaled so that they have constant 4. A numerical method for the rational case
term 1. Thus the vector of difference in the coefficients
contains a zero element in the position of the constant The computation falls into two parts.
term in the denominator. Now the column of the (i) The calculation of the points of the new reference
matrix corresponding to this zero element has at least from the extrema of the error curve for the
one non-singular cofactor. If we select one such co- current rational approximation.
factor and delete the remaining row and column then (ii) The solution of the eigenvalue problem to deter-

Downloaded from https://academic.oup.com/comjnl/article-abstract/9/3/286/406298 by guest on 25 March 2019


the vector of remaining coefficients is determined by a mine the levelled reference function for the new
non-singular set of equations. As the right-hand side is reference.
t i) _ /,O)) this estimate also holds for the solution.
The basic procedure for computing the new reference
Consider now the equation seeks the extrema adjacent to x\h by sub-dividing the
interval between x(/ip and xJ{J., in some fashion,
l) = pU- DQO) _ evaluating the error on this grid to bracket the extremum,
and then locating it by fitting a quadratic to the error
(3.8)
curve and taking the extremum of the quadratic. For a
which holds for all JC in [a, b]. Differentiating, setting variety of common functions, subdivision by 25 equi-
spaced points proved satisfactory; this implies a mini-
x = xP\ i = 2,.. ., N — 1, and using—— (X/0)) mum of 13 evaluations of the error for each reference
= 0 we obtain point. However, it is usual for only three or four to be
needed after the first iteration, and three error evalua-
QU- DQO) = _ (^0) _ yfJ- D ) _y(QO)gO- 0) tions is the minimum number for a bracket. In most
dx dx cases fewer than 25 points could clearly have been used,
but even with 25 there have been cases when the bracket
— (P<J- OgU) _ p(J)QU- D) has been completed by a value which was larger in
modulus but opposite in sign to the previous values.
for xe[x 0) ]. This gives This abrupt behaviour has been due in every case to a
poorly chosen first reference, and has been absent from
), i = 2, . . ., N - 1, all iterations after the first. In these cases the points
of the initial reference were equispaced. For the
(3.9) standard functions it was observed that the points of
as (i) - i) > 0 )
the optimal reference did not change much as p and q
were varied keeping N fixed, and that they were close
to the extrema of the Chebyshev polynomial of degree
N- 1.
(hi) j-i.Pu~ l)
QU) ~ This suggests that expressing x in terms of a variable
2x - b - a
0by- = cos 6, 0 < 6 < it and then searching
(as derivatives of a polynomial whose coefficients satisfy b-a
this relation) and, by equation (3.8) in each subinterval x^^, x^\x at equal intervals in 0
should be successful. The Chebyshev points form an
(iv) - r,U- appropriate initial reference, because they are equispaced
in 6. We have found that this searching technique also
Now, by the mean-value theorem, we have works well in more complicated cases.
Several methods for the solution of the eigenvalue problem have been suggested. Mattson (1960) linearizes equation (1.2) by replacing Q^(j)(x_s^(j)), where it occurs as a coefficient of h^(j), by Q^(j−1)(x_s^(j)). A related method due to Fraser and Hart (1962) has proved to have a slow rate of convergence. In implementing Mattson's method we took steps to improve the conditioning of the resulting linear equations, and the method then proved satisfactory. Stoer (1964) satisfies N − 1 of the equations by interpolating a continued fraction and then uses Newton's method to find a value
of h^(j) such that the remaining equation is also satisfied. This method has also been found to work in practice. Werner (1962) has suggested a method which in some ways is similar to Mattson's. It may also be expected to give difficulty with ill-conditioning. In particular, his equation (4.3) requires the solution of a set of linear equations with a Vandermonde matrix.

Strangely enough, none of the published techniques is obviously related to a conventional procedure for the algebraic eigenvalue problem. This is doubly odd, as inverse iteration suggests itself because zero can be expected to be a good guess at the required eigenvalue. In addition, inverse iteration can be expected to be free from the ill-conditioning problems of the other methods. In this case inverse iteration to solve (1.2) takes the form (4.1) (using partitioned matrices), where P(x) = Σ_{s=0}^{p} a_s x^s, Q(x) = Σ_{s=0}^{q} b_s x^s, and F and G are diagonal matrices with elements f(x_r) and (−1)^r, respectively, r = 1, 2, ..., N (here x_r denotes the rth point of the current reference).

The full computation involves an outer iteration (the exchange algorithm), and an inner iteration to determine the current eigenvalue. There is no point in carrying out the inner iteration for more steps than is justified by the approach to convergence of the outer iteration. In fact, if an inner iteration of order at least two is used then, provided the initial guess is good enough, the outer iteration will still be of second order even if only one stage of the inner iteration is carried out. In Osborne (1964) a third order iteration is given for general algebraic eigenvalue problems linear in the eigenvalue parameter. As each iteration requires only one triangular factorisation, the resulting algorithm for the rational case involves little more work per step (involving one inner followed by one outer iteration) than does the polynomial case. In the notation used here, Osborne's algorithm gives equation (4.2).

The suggested procedure, in which each iteration consists of one step of Osborne's iteration followed by one step of the exchange algorithm, has been implemented on the Atlas computer and has worked extremely well on analytic functions. Using single precision floating-point arithmetic (with a 40-bit mantissa) we have been able to test convergence of the eigenvalue h to a tolerance near the limit of that precision without difficulty. The convergence test is applied to equation (4.2) and this has proved adequate (the extrema of the error curve differing by less than this tolerance). Also the abscissae of the extrema are found to be accurate to within a similar tolerance, in the sense that (for example) if the extrema of the error curve are O(10^−α) then they can have at most β − α significant figures correct on a machine working to β significant figures, so that we cannot expect their abscissae to be determined to more than this number of significant figures. That we actually get this number of significant figures is shown in Table 4.1.
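The partitioned form of equation (4.1) did not survive legibly in our copy, so the sketch below illustrates only the underlying idea: for a generalized eigenvalue problem (A − hB)u = 0 that is linear in h, inverse iteration with shift zero, u ← A⁻¹Bu, converges to the eigenvector belonging to the eigenvalue of smallest modulus — the expected location of the levelled reference deviation. The matrices here are illustrative, not those of (4.1).

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule (adequate for this sketch)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def matvec(A, v):
    return [sum(a, 0.0) for a in ([row[0] * v[0], row[1] * v[1]] for row in A)]

def smallest_eigenvalue(A, B, steps=50):
    """Inverse iteration with shift 0 for (A - h B)u = 0, linear in h:
    repeatedly solve A u_new = B u and normalise; h is then estimated from
    A u ~ h B u on the converged vector."""
    u = [1.0, 1.0]
    for _ in range(steps):
        u = solve2(A, matvec(B, u))
        norm = max(abs(c) for c in u)
        u = [c / norm for c in u]
    Au, Bu = matvec(A, u), matvec(B, u)
    i = max(range(2), key=lambda j: abs(Bu[j]))
    return Au[i] / Bu[i]

# Illustrative matrices (not those of (4.1)): the eigenvalues of the pencil
# (A, B) are 2 and 2.5, so inverse iteration should return 2.
A = [[2.0, 1.0], [0.0, 5.0]]
B = [[1.0, 0.0], [0.0, 2.0]]
print(smallest_eigenvalue(A, B))   # 2.0 (approximately)
```

A higher-order inner iteration, as in Osborne (1964), would replace the plain power step here; the point of the sketch is only that a shift of zero singles out the small eigenvalue.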


5. Example

Consider the function

  φ(z) = e^{z²/2} ∫_z^∞ e^{−t²/2} dt   (5.1)

for 0 <= z < ∞. Because φ(z) = O(z^−1) for large z, we seek rational approximations with q = p + 1, which it is convenient to take in the form

  φ(z) ≈ P_p(z)/Q_{p+1}(z) = (Σ_{r=0}^{p} a_r z^r) / (Σ_{r=0}^{p+1} b_r z^r).   (5.2)

Let us write x = (1 + z)^{−1}, so that x ∈ [0, 1]. Then z = x^{−1} − 1, and the approximation can be written

  P_p(z)/Q_{p+1}(z) = x R_p(x)/S_{p+1}(x),   (5.3)

where again the degrees of numerator and denominator are p and p + 1. The problem has been transformed into that of approximating to φ(x^{−1} − 1)/x for 0 <= x <= 1. We chose the weight function to be equal to f(x), so as to minimize the relative error.

The minimized errors are given as a function of p in Table 5.1, and the coefficients of the approximations in Table 5.2. The errors h_p are rather well fitted by the expression

  h_p ≈ 0.0947 (0.02592)^p.   (5.5)

Most other methods (e.g. the Lanczos τ method or the q–d algorithm) for approximating to this function by means of a general rational function of the form (5.2) fail to give results useful over the whole range 0 <= z < ∞.
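Under our reading of the garbled integral in (5.1), φ is the Mills ratio of the standard normal distribution, and can be evaluated from the complementary error function as φ(z) = √(π/2)·e^{z²/2}·erfc(z/√2); this gives a reference value against which a fitted P/Q could be checked. The identity and the O(z^−1) behaviour quoted in the text are easy to confirm numerically:

```python
from math import erfc, exp, pi, sqrt

def phi(z):
    """phi(z) = e^{z^2/2} * integral_z^inf e^{-t^2/2} dt, via the complementary
    error function (valid for moderate z; e^{z^2/2} overflows for very large z)."""
    return sqrt(pi / 2.0) * exp(z * z / 2.0) * erfc(z / sqrt(2.0))

print(phi(0.0))          # sqrt(pi/2), about 1.2533
print(10.0 * phi(10.0))  # about 0.99: z*phi(z) -> 1, i.e. phi(z) = O(1/z)
```

The factorisation into erfc is an assumption about the intended integrand, flagged as such; it is consistent with both the stated range 0 <= z < ∞ and the stated asymptotic behaviour.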
Table 4.1
Successive columns show the progress of the iteration

(a) f = sqrt(x), 1/4 <= x <= 1, p = 2, q = 0

x(1)    0.25     0.25       0.25       0.25       0.25       0.25
x(2)    0.5      0.4006441  0.3979189  0.3981111  0.3981107  0.3981107
x(3)    0.75     0.7891342  0.7741352  0.7740727  0.7740731  0.7740731
x(4)    1.0      1.0        1.0        1.0        1.0        1.0
h×10³            2.905516   3.494400   3.502500   3.502482   3.502482

(b) f = log(x), 1/2 <= x <= 1, p = 2, q = 2

x(1)    0.5      0.5        0.5        0.5        0.5        0.5
x(2)    0.6      0.539066   0.533511   0.534549   0.534547   0.534574
x(3)    0.7      0.666176   0.636185   0.635601   0.635626   0.635626
x(4)    0.8      0.801603   0.790808   0.786829   0.786830   0.786830
x(5)    0.9      0.940268   0.936826   0.935518   0.935471   0.935472
x(6)    1.0      1.0        1.0        1.0        1.0        1.0
h×10⁶            0.905892   1.614088   1.713179   1.714640   1.714628

(c) f = tan⁻¹(x), 0 <= x <= 1, p = 3, q = 3

x(1)    0        0          0          0          0          0
x(2)    0.05268  0.03128    0.03797    0.03819    0.03819    0.03819
x(3)    0.19960  0.13878    0.14664    0.14695    0.14696    0.14695
x(4)    0.40999  0.32087    0.31212    0.31397    0.31400    0.31399
x(5)    0.63660  0.54820    0.52030    0.52229    0.52229    0.52228
x(6)    0.83019  0.77309    0.74457    0.74393    0.74392    0.74392
x(7)    0.95643  0.93877    0.92766    0.92654    0.92653    0.92653
x(8)    1.0      1.0        1.0        1.0        1.0        1.0
h×10⁷            −3.0359    −5.4265    −5.6392    −5.6400    −5.6399

(d) f = (sin(πx/2))/(πx/2), 0 <= x <= 1, p = 2, q = 6

x(1)    0        0          0          0          0
x(2)    0.1111   0.0377     0.0272     0.0312     0.0307
x(3)    0.2222   0.1646     0.1169     0.1198     0.1189
x(4)    0.3333   0.3009     0.2622     0.2523     0.2520
x(5)    0.4444   0.4263     0.4179     0.4133     0.4155
x(6)    0.5555   0.5668     0.5785     0.5867     0.5874
x(7)    0.6667   0.6946     0.7413     0.7508     0.7469
x(8)    0.7778   0.8377     0.8841     0.8820     0.8776
x(9)    0.8889   0.9623     0.9730     0.9686     0.9694
x(10)   1.0      1.0        1.0        1.0        1.0
h×10¹⁰           1.904      5.629      7.566      7.661

Table 5.1
Minimized relative errors h_p of approximations (5.2)

p      1          2          3          4          5
h_p    2.47·10⁻³  6.46·10⁻⁵  1.67·10⁻⁶  4.24·10⁻⁸  1.11·10⁻⁹
Table 5.2
Coefficients a_r, b_r of best approximations

p   r   a_r             b_r
1   0   0.830924        0.939875
    1   0.498789        1.576026

2   0   1.69071595      1.90764542
    1   1.45117156      3.79485940
    2   0.50003230      2.90845448

3   0   4.057828706     4.578777018
    1   4.516219969     10.262132648
    2   2.247729024     9.541712441
    3   0.499999165     4.495139976

4   0   11.0411267148   12.4585768388
    1   14.8200946492   30.7807309733
    2   9.3329101760    32.8041733598
    3   3.1503258898    19.1651105085
    4   0.5000000212    6.3006656953

5   0   33.0068965765   37.2442945086
    1   51.3154807148   99.9290005933
    2   38.2520360969   118.676398126
    3   16.5250302796   80.6459493922
    4   4.14316421200   33.5501020941
    5   0.49999999949   8.28632791559

Note that since all terms in the numerator and denominator of (5.2) have the same sign, rounding errors δa_r, δb_r in the coefficients a_r, b_r cannot cause an additional relative error exceeding

  max_r |δa_r/a_r| + max_r |δb_r/b_r|   (5.6)

in magnitude. Only rarely can such a precise statement be made, and in general caution is desirable in rounding coefficients found for best approximations.

Following an idea of Hastings (1954), we had earlier tried minimizing the relative error of the approximations

  φ(z) ≈ u Σ_{r=0}^{q} c_r u^r,   (5.7)

where

  u = (1 + kz)^{−1}.   (5.8)

If q = p, (5.7) is a special case of (5.2), with the denominator polynomial Q_{p+1} the (p + 1)th power of a linear function, and it is interesting to compare the results with those given above.

We first used the exchange algorithm to compute polynomials T_q(u) giving minimax error for given k, hoping to choose near-optimal values of k by experiment. Unfortunately the minimax error as a function of k for given q has a very sharp minimum. For values of k near this minimum, the error curve has an extra sign oscillation, and its form depends critically on k. We were unable to find minimax polynomials for such values of k; for values farther from the minimum, the minimax error was typically of order 10 times the minimum value we subsequently found.

In order to make progress, we generalized the exchange algorithm to allow us to vary k in (5.8) as well as the coefficients in (5.7), treating the problem as one of q + 2 parameters. We then obtained the results shown in Tables 5.3 and 5.4. The minimax errors are fitted fairly well by the expression

  E_q = 0.0442 (0.230)^q.   (5.9)

The second difference of log E_q is positive, so that (5.9) may underestimate E_q if it is used for extrapolation.

Table 5.3
Minimized relative errors E_q of approximations (5.7)

q      2          3          4
E_q    2.37·10⁻³  5.25·10⁻⁴  1.26·10⁻⁴

Table 5.4
Coefficients of best approximations (5.7)

q   k            r   c_r
2   0.55351783   0   0.27741459
                 1   0.25242792
                 2   0.35848400

3   0.44046932   0   0.22011903
                 1   0.22675065
                 2   0.14459256
                 3   0.29522999

4   0.37338663   0   0.18671676
                 1   0.18482519
                 2   0.19652617
                 3   0.05281852
                 4   0.26545158

A little care is necessary in comparing the two kinds of approximation, (5.2) and (5.7). For the same degrees in z of numerator and denominator we should take q = p, but this would be unfair to (5.7). For the same number of parameters we have q = 2p. Writing L, S, D, M, A for the times of load, store, divide, multiply and add instructions, the evaluation times are:

  for (5.2), 2(L + S) + D + 2pM + (2p + 1)A;
  for (5.7), 2(L + S) + D + (q + 2)M + (q + 1)A.

These are equal if

  (q + 1 − 2p)(M + A) + M − A = 0,   (5.10)
so that if M ≈ A we should take q = 2p − 1 for equal evaluation times. Setting q = 2p − 1 in (5.9) gives

  E_{2p−1} = 0.192 (0.0529)^p.   (5.11)

These results thus suggest that, for equal evaluation time, (a) (5.7) is about an order of magnitude worse than (5.2) if 4-figure accuracy or better is required, and (b) (5.2) tends to become relatively better still as the required accuracy increases. Moreover, (5.2) can be obtained using the standard algorithm we have described, suitable for a wide class of approximands; whereas to get (5.7) we had to code a special algorithm.

It is perhaps worth pointing out that no improvement is possible by taking x = (1 + kz)^{−1}, with k ≠ 1, in transforming (5.2) into (5.3). Although the minimax approximation found is a different function of x, it is the same function of z. Varying k might have a slight effect on the convergence of the algorithm.
References
ACHIESER, N. I. (1956). Theory of Approximation, Ungar.
CHEBYSHEV, P. L. (1899). Oeuvres I, St. Petersbourg.
FRASER, W., and HART, J. F. (1962). "On the Computation of Rational Approximations to Continuous Functions", Comm. Assoc. for Comput. Mach., Vol. 5, pp. 401-403.
HASTINGS, C. (1954). Approximations for Digital Computers, Rand Corp., Santa Monica, California.
MATTSON, H. F. (1960). An Algorithm for Finding Rational Approximations, Air Force Cambridge Research Laboratories, Bedford, Mass.
MAEHLY, H. J. (1963). "Methods of fitting rational approximations, Part II", J. Assoc. for Comput. Mach., Vol. 10, pp. 257-266.
OSBORNE, M. R. (1964). "A new method for the solution of eigenvalue problems", The Computer Journal, Vol. 7, pp. 228-232.
REMEZ, E. (1934). "Sur le calcul effectif des polynomes d'approximation de Tschebyscheff", Comptes Rendus, Vol. 199, pp. 337-340.
RICE, J. R. (1964). The Approximation of Functions, Addison-Wesley.
STOER, J. (1964). "A direct method for Chebyshev approximation by rational functions", J. Assoc. for Comput. Mach., Vol. 11, pp. 59-69.
VEIDINGER, L. (1960). "On the numerical determination of the best approximations in the Chebyshev sense", Num. Math., Vol. 2, pp. 99-105.
WERNER, H. (1962). "Die konstruktive Ermittlung der Tschebyscheff-Approximation im Bereich der rationalen Funktionen", Arch. Rational Mech. Anal., Vol. 11, pp. 368-384.
WERNER, H. (1963). "Rationale Tschebyscheff-Approximation, Eigenwerttheorie und Differenzenrechnung", Arch. Rational Mech. Anal., Vol. 13, pp. 330-347.

Book Review

The Application of Matrix Theory to Electrical Engineering, by W. E. Lewis and D. G. Pryce, 1966; 195 pages. (London: E. and F. N. Spon Limited, 35s., hard cover 50s.)

The use of matrix "language" in scientific papers and in computer operation is sufficiently widespread by now to warrant the number of books written lately on the subject, but most of them are in the realm of pure, rather than applied, mathematics and would not convince a sceptical engineer that it is worth his while to learn all about matrix algebra.

The book by Lewis and Pryce is one of the few which fill the gap in the field of electrical engineering. Its greatest value is the development of matrix analysis, together with the examples of its application, whilst progressing from the simple to more difficult concepts. Thus the first chapter explains the elementary properties of matrices, while the second gives the topology of electrical networks with good examples of the matrix algebra used to solve network problems. The third is devoted to more advanced matrix operations, and the subsequent chapters to their use in further electrical applications. These include connection matrices for the impedance and the nodal voltage equation transformations; the use of matrices for three-phase line fault finding by the symmetrical component method; and four-terminal network analysis, including some well prepared tabulations of typical circuit matrices.

The last chapter deals with the introduction of matrix concepts to the theory of rotating machines, but is sketchy and contributes only an outline of a new and difficult subject, as the authors state themselves. It would perhaps have been better were this topic omitted and more space devoted to the problems of matrix inversion and partitioning, with the addition of more examples. It might also have been an advantage had a short chapter on determinants been included.

J. B. RASHBA

