The most obvious approximation may not be the best. But sometimes a small change to an obvious approximation can make it a better approximation. This post gives an example illustrating this point.
The easiest, most obvious way to obtain a polynomial approximation is to use a truncated Taylor series. But such a polynomial may not be the most efficient approximation. Forman Acton gives the example of approximating cos(π x) on the interval [−1, 1] in his classic book Numerical Methods that Work. The point of this example is not the usefulness of the final result; a footnote below explains that this isn’t how cosines are computed in practice. The point is that you can sometimes improve a convenient but suboptimal approximation with a small change.
The goal in Acton’s example is to approximate cos(π x) with a maximum error of less than 10⁻⁶ across the interval. The Taylor polynomial

1 − (π x)²/2! + (π x)⁴/4! − (π x)⁶/6! + … + (π x)¹⁶/16!
is accurate to within about 10⁻⁷ and so is certainly good enough. However, the last term of the series, the x¹⁶ term, contributes less than the other terms to the accuracy of the approximation. On the other hand, this term cannot simply be discarded because without it the error rises to 10⁻⁵. The clever idea is to replace the x¹⁶ term with a linear combination of the other terms. After all, x¹⁶ doesn’t look that different from x¹⁴ or x¹². Acton uses the 16th Chebyshev polynomial to approximate x¹⁶ by a combination of smaller even powers of x. This new approximation is almost as accurate as the original Taylor polynomial, with an error that remains below the desired threshold. Acton calls this process economizing an approximation.
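With modern tools the same economization takes only a few lines. Here is a minimal sketch using NumPy’s Chebyshev routines; the variable names and the evaluation grid are my own choices, and this is of course not how Acton carried out the computation by hand.

```python
import numpy as np
from math import pi, factorial
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

# Taylor coefficients of cos(pi x) through the x^16 term (odd powers are zero).
taylor = np.zeros(17)
for n in range(9):
    taylor[2 * n] = (-1) ** n * pi ** (2 * n) / factorial(2 * n)

# Economize: convert to the Chebyshev basis, drop the T_16 coefficient,
# and convert back. Only the x^16 term contributes to T_16, so this amounts
# to replacing x^16 by a combination of smaller even powers of x.
economized = C.cheb2poly(C.poly2cheb(taylor)[:16])

# Compare maximum errors on [-1, 1].
x = np.linspace(-1, 1, 10001)
exact = np.cos(pi * x)
print("Taylor error:    ", np.max(np.abs(P.polyval(x, taylor) - exact)))
print("economized error:", np.max(np.abs(P.polyval(x, economized) - exact)))
```

Both printed errors should be on the order of 10⁻⁷, comfortably below the 10⁻⁶ threshold, even though the economized polynomial has degree 14 rather than 16.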
This process could be repeated to see whether the x14 term could be eliminated. Or you could directly find a Chebyshev series approximation to cos(π x) from the beginning. Acton did not have a symbolic computation package like Mathematica when he wrote his book in 1970 and so he was computing his approximations by hand. Directly computing a Chebyshev approximation would have been a lot of work. By just replacing the highest order term, he achieved nearly the same effect but with less effort.
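As a sketch of what the direct route looks like today, NumPy can build a degree-14 Chebyshev interpolant in essentially one line. Interpolation at Chebyshev points is close to, though not exactly, the minimax approximation; the degree and test grid here are my own choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Degree-14 interpolant of cos(pi x) at Chebyshev points on [-1, 1].
fit = C.Chebyshev.interpolate(lambda x: np.cos(np.pi * x), 14)

x = np.linspace(-1, 1, 10001)
print("max error:", np.max(np.abs(fit(x) - np.cos(np.pi * x))))
```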
Computing power has improved several orders of magnitude since Acton wrote his book, and some of his examples now seem quaint. However, I don’t know of a better book for teaching how to think about numerical analysis than Numerical Methods that Work. Acton has another good book that is harder to find, Real Computing Made Real: Preventing Errors in Scientific and Engineering Calculations.
Footnote 1: evaluating polynomials
Suppose you want to write code to evaluate the polynomial
P(x) = a₀ + a₂x² + a₄x⁴ + … + a₁₄x¹⁴.
The first step would be to reduce this to a 7th degree polynomial in y = x².
Q(y) = b₀ + b₁y + b₂y² + … + b₇y⁷, where bₖ = a₂ₖ.
Directly evaluating Q(y) would take 1 + 2 + 3 + … + 7 = 28 multiplications if you compute every power of y from scratch; computing b₅y⁵ as b₅·y·y·y·y·y, for example, takes five multiplications. Factoring the polynomial is much more efficient, using only 7 multiplications and 7 additions:
((((((b₇y + b₆)y + b₅)y + b₄)y + b₃)y + b₂)y + b₁)y + b₀
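In code, the factored form is a short loop over the coefficients from the innermost parenthesis outward. Here is a minimal sketch; the function name and coefficient layout are illustrative.

```python
def eval_even_poly(b, x):
    """Evaluate b[0] + b[1]*x**2 + ... + b[7]*x**14 using the factored form."""
    y = x * x                 # one multiplication for y = x^2
    result = b[-1]            # start with the innermost parenthesis, b_7
    for coeff in reversed(b[:-1]):
        result = result * y + coeff   # one multiplication and one addition per step
    return result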
Footnote 2: computing sine and cosine
The point of Acton’s example was to improve on a Taylor polynomial evaluated a moderate distance from the point where the Taylor series is centered. It does not illustrate how cosines are actually computed. See this answer on StackOverflow for an outline of how trig functions are computed in practice.
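The rough shape of those implementations is to reduce the argument to a small interval and then evaluate a short polynomial there. Here is a toy sketch of that idea for cos(π x); the reduction to multiples of 1/2 and the low-degree polynomials are only illustrative, not taken from the linked answer or any particular library.

```python
import math

def cos_pi(x):
    """Toy range reduction for cos(pi*x); real libraries are far more careful."""
    k = round(2 * x)          # nearest multiple of 1/2
    r = x - 0.5 * k           # reduced argument, |r| <= 1/4
    t = math.pi * r           # |t| <= pi/4, where short polynomials do well

    # Low-degree Taylor polynomials are accurate enough for this illustration.
    c = 1 - t**2/2 + t**4/24 - t**6/720
    s = t - t**3/6 + t**5/120 - t**7/5040

    # Pick the quadrant: cos(k*pi/2 + t) cycles through cos, -sin, -cos, sin.
    return (c, -s, -c, s)[k % 4]
```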
The factored approach to polynomial evaluation is known as Horner’s Method. It may use fewer operations than the naive approach, but that’s misleading if you’re not computing by hand. On modern computers, flops are free. You actually want to optimize for the shallowest expression, which runs directly counter to what Horner’s method does.
I have a benchmark for a few different methods available in git://charm.cs.illinois.edu/users/phil/polyeval.git, constructed when the professor in my Numerical Analysis class described Horner’s Method in the same misleading terms.
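To illustrate the depth argument in the comment above, here is a sketch of a shallower evaluation order for the degree-7 polynomial Q(y), in the style of Estrin’s scheme; this is my own illustration, not code from the linked repository.

```python
def estrin_eval_deg7(b, y):
    """Evaluate b[0] + b[1]*y + ... + b[7]*y**7 with a shallow expression tree.

    The four pairwise terms below are independent of one another, so a
    superscalar processor can compute them in parallel, unlike the strictly
    sequential chain of the factored (Horner) form.
    """
    y2 = y * y
    y4 = y2 * y2
    p01 = b[0] + b[1] * y
    p23 = b[2] + b[3] * y
    p45 = b[4] + b[5] * y
    p67 = b[6] + b[7] * y
    return (p01 + p23 * y2) + (p45 + p67 * y2) * y4
```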