Fitting a polynomial to a function at more points might not produce a better approximation. This is a consequence of Faber's theorem, something I wrote about the other day: no fixed scheme of interpolation points converges for every continuous function.
Even if the function you're interpolating is smooth, interpolating at more points may or may not improve the fit, depending on where you put the points. Runge's famous example shows that interpolating
f(x) = 1 / (1 + x²)
at more points can make the fit worse. When you interpolate at 16 evenly spaced points on the interval [-5, 5], the interpolant oscillates wildly at the ends of the interval.
Here’s the Python code that produced the plot.
import matplotlib.pyplot as plt
from numpy import linspace
from scipy import interpolate

def cauchy(x):
    return (1 + x**2)**-1

n = 16
x = linspace(-5, 5, n)  # points to interpolate at
y = cauchy(x)

f = interpolate.BarycentricInterpolator(x, y)

xnew = linspace(-5, 5, 200)  # points to plot at
ynew = f(xnew)
plt.plot(x, y, 'o', xnew, ynew, '-')
plt.show()
However, for smooth functions, interpolating at more points does improve the fit if you interpolate at the roots of Chebyshev polynomials. As you interpolate at the roots of higher and higher degree Chebyshev polynomials, the interpolants converge to the function being interpolated. The plot below shows how interpolating at the roots of T16, the Chebyshev polynomial of degree 16, eliminates the bad behavior at the ends.
To make this plot, we replaced x above with the roots of T16, rescaled from the interval [-1, 1] to the interval [-5, 5] to match the example above.
from numpy import array, cos, pi

x = [cos(pi*(2*k - 1)/(2*n)) for k in range(1, n+1)]  # roots of T_n on [-1, 1]
x = 5*array(x)  # rescale to [-5, 5]
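As a quick numerical check on the convergence claim, here's a minimal sketch, reusing the cauchy function and the rescaled Chebyshev roots from above: as n grows, the maximum error over [-5, 5] shrinks.

from numpy import array, cos, pi, linspace
from scipy import interpolate

def cauchy(x):
    return (1 + x**2)**-1

xfine = linspace(-5, 5, 1000)  # dense grid for measuring the error
for n in [4, 8, 16, 32]:
    # roots of T_n, rescaled from [-1, 1] to [-5, 5]
    nodes = 5*array([cos(pi*(2*k - 1)/(2*n)) for k in range(1, n+1)])
    f = interpolate.BarycentricInterpolator(nodes, cauchy(nodes))
    print(n, abs(f(xfine) - cauchy(xfine)).max())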
What if the function we're interpolating isn't smooth? If the function has a step discontinuity, we can see the Gibbs phenomenon, similar to what we saw in the previous post. Here's the result of interpolating the indicator function of the interval [-1, 1] at 100 Chebyshev points. We get the same “bat ears” as before.
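Here's a sketch of how such a plot could be made, assuming the Chebyshev points are rescaled to [-5, 5] as in the examples above.

import matplotlib.pyplot as plt
from numpy import array, cos, pi, linspace, where
from scipy import interpolate

def indicator(x):
    # 1 on [-1, 1], 0 elsewhere
    return where(abs(x) <= 1, 1.0, 0.0)

n = 100
# 100 Chebyshev points, rescaled to [-5, 5] (assumed interval)
nodes = 5*array([cos(pi*(2*k - 1)/(2*n)) for k in range(1, n+1)])
f = interpolate.BarycentricInterpolator(nodes, indicator(nodes))

xnew = linspace(-5, 5, 1000)
plt.plot(xnew, f(xnew), '-')
plt.show()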
Related: Help with interpolation
Every time I see the Runge interpolation example, my first thought is that the interpolating polynomials are being disturbed by the poles at +i and -i, even if you focus on the real line. Going to higher and higher degree will start to exhibit power-series-type problems, so once you are beyond |x| > 1, beware. I'm curious whether there is a less hand-wavy argument than the one I just gave that makes use of that fact.
The problem isn’t the function 1/(1+x^2) per se. It’s the combination of that function and a uniformly distributed set of interpolating points.
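Here's a minimal sketch of that point: for the same function, the maximum interpolation error grows with n on evenly spaced nodes but shrinks on Chebyshev nodes.

from numpy import array, cos, pi, linspace
from scipy import interpolate

def cauchy(x):
    return (1 + x**2)**-1

xfine = linspace(-5, 5, 1000)
for n in [8, 16, 32]:
    even = linspace(-5, 5, n)
    cheb = 5*array([cos(pi*(2*k - 1)/(2*n)) for k in range(1, n+1)])
    for label, nodes in [('evenly spaced', even), ('Chebyshev', cheb)]:
        f = interpolate.BarycentricInterpolator(nodes, cauchy(nodes))
        print(n, label, abs(f(xfine) - cauchy(xfine)).max())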
The poles at +/- i explain why a power series centered at zero has radius of convergence 1, but I don't see the connection between power series and evenly spaced interpolation points. There may be a connection, but I don't see it.
There is indeed a connection with the location of the complex poles of the function. Trefethen’s books [1,2] consider this type of problem and specifically the Runge example.
Chapters 7 and 8 of [1] cover convergence.
See also [3].
[1] http://www.chebfun.org/ATAP/
[2] https://people.maths.ox.ac.uk/trefethen/spectral.html
[3] http://www.chebfun.org/examples/approx/EntireBound.html
Just a note: Chebyshev interpolation/quadrature/numerical differentiation is landing in Boost 1.66, and its close friend, barycentric rational interpolation, has also landed.
Just hawkin’ my wares, man!
@Jan Van lent: Thanks for recommending Trefethen’s book. Now I understand better how the location of singularities impacts convergence.