Suppose you’d like to evaluate the function

    u(z) = (e^z − 1 − z)/z²

for small values of z, say z = 10⁻⁸. This example comes from [1].
The Python code
    from numpy import exp

    def f(z):
        return (exp(z) - 1 - z)/z**2

    print(f(1e-8))
prints -0.607747099184471.
Now suppose you suspect numerical difficulties and compute your result to 50 decimal places using bc -l:

    scale = 50
    z = 10^-8
    (e(z) - 1 - z)/z^2
Now you get u(z) = .50000000166666667….
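(If you’d rather do the high-precision cross-check in Python, here is a sketch using the mpmath library; the choice of library is mine, not part of the original example, and any arbitrary-precision package would do.)

    from mpmath import mp, mpf, exp as mpexp   # alias to avoid clobbering numpy's exp

    mp.dps = 50                     # work with 50 decimal digits
    z = mpf(10)**-8
    print((mpexp(z) - 1 - z)/z**2)  # agrees with the bc result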
This suggests the original calculation was completely wrong. What’s going on?
For small z,

    e^z ≈ 1 + z

and so the numerator in the definition of u is a difference of nearly equal quantities: evaluating it directly loses precision to cancellation. In our example, we lost all precision.
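Expanding the exponential shows what the answer should be: u(z) = 1/2 + z/6 + z²/24 + …, which is where the bc value 0.50000000166666667 comes from. Here is a quick series-based check (my own sketch, not from the original post):

    from math import factorial

    def u_series(z, terms=6):
        # u(z) = sum_{k >= 0} z^k / (k+2)!, from the Taylor series of exp
        return sum(z**k / factorial(k + 2) for k in range(terms))

    print(u_series(1e-8))   # 0.5000000016666667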
The mean value theorem from complex analysis says that the value of an analytic function at a point equals the continuous average of its values over a circle centered at that point. If we approximate this average by the average of the values at 16 equally spaced points on a circle of radius 1 centered at our point, we get full accuracy. The Python code
    from numpy import exp, pi, arange

    def g(z):
        # average f over N equally spaced points on the unit circle centered at z
        N = 16
        ws = z + exp(2j*pi*arange(N)/N)
        return sum(f(ws))/N
returns
0.5000000016666668 + 8.673617379884035e-19j
which departs from the result calculated with bc in the 16th decimal place.
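As a small experiment (mine, not from the post), you can vary the number of sample points in the average, reusing f from above. The error shrinks rapidly with N, roughly like 1/(N+2)! for a circle of radius 1 about a point near the origin, which is the exponential convergence of the trapezoid rule discussed in [1]; g_n below is a hypothetical variant of g with N as a parameter.

    from numpy import exp, pi, arange

    def g_n(z, N):
        # average f over N equally spaced points on the unit circle centered at z
        ws = z + exp(2j*pi*arange(N)/N)
        return sum(f(ws))/N

    for N in (4, 8, 16):
        print(N, g_n(1e-8, N).real)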
At a high level, we’re avoiding numerical difficulties by averaging over points far from the difficult region.
[1] Lloyd N. Trefethen and J. A. C. Weideman. The Exponentially Convergent Trapezoidal Rule. SIAM Review, Vol. 56, No. 3 (2014), pp. 385–458.