A couple years ago I wrote a blog post on Kepler’s equation
M + e sin E = E.
Given mean anomaly M and eccentricity e, you want to solve for eccentric anomaly E.
There is a simple way to solve this equation. Define
f(E) = M + e sin E
and take an initial guess at the solution and stick it into f. Then take the output and stick it back into f, over and over, until you find a fixed point, i.e. f(E) = E.
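Because |f′(E)| = e |cos E| ≤ e < 1 for an elliptical orbit, f is a contraction, so this iteration always converges. Here is what it might look like in Python. This is a minimal sketch: the function name, tolerance, iteration cap, and starting guess E = M are my choices, not part of the original discussion.

    from math import sin

    def solve_kepler_fixed_point(M, e, tol=1e-12, max_iter=1000):
        """Solve E = M + e sin(E) by fixed-point iteration."""
        E = M  # a reasonable starting guess
        for _ in range(max_iter):
            E_next = M + e * sin(E)
            if abs(E_next - E) < tol:  # iterates have settled down
                return E_next
            E = E_next
        return E

    # Example: mean anomaly M = 1 radian, eccentricity e = 0.5
    print(solve_kepler_fixed_point(1.0, 0.5))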
Faster solutions
The algorithm above is elegant, and practical if you only need to do it once. However, if you need to solve Kepler’s equation billions of times, say in the process of tracking satellite debris, this isn’t fast enough.
An obvious improvement would be to use Newton’s root-finding method rather than the simple iteration scheme above, and that isn’t far from the state of the art. But there have been improvements, and a paper posted on arXiv this week [1] gives an algorithm about three times faster than Newton’s method.
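To set up Newton’s method, rewrite Kepler’s equation as g(E) = E − e sin E − M = 0, so that g′(E) = 1 − e cos E. A sketch under the same assumptions as before (names, tolerance, and starting guess are illustrative):

    from math import sin, cos

    def solve_kepler_newton(M, e, tol=1e-12, max_iter=50):
        """Solve g(E) = E - e sin(E) - M = 0 by Newton's method."""
        E = M
        for _ in range(max_iter):
            g = E - e * sin(E) - M
            g_prime = 1 - e * cos(E)  # never zero for real E when e < 1
            E_next = E - g / g_prime
            if abs(E_next - E) < tol:
                return E_next
            E = E_next
        return E

Near the root, Newton’s method converges quadratically, roughly doubling the number of correct digits per step, while the fixed-point iteration above gains digits only linearly.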
Patterns in applied math
This paper is an example of a common pattern in applied math. It starts with a simple problem that has a simple solution, but this simple solution doesn’t scale. And so we apply advanced mathematics to a problem formulated in terms of elementary mathematics.
In particular, the paper makes use of contour integration. This seems like a step backward in two ways.
First, we have a root-finding problem, and we want to turn it into an integration problem?! Isn’t root-finding faster than integration? Not in this case.
Second, not only are we introducing integration, we’re introducing integration in the complex plane. Isn’t complex analysis complex? Not in the colloquial sense. The use of “complex” as a technical term is unfortunate because complex analysis often simplifies problems. As Jacques Hadamard put it,
The shortest path between two truths in the real domain passes through the complex domain.
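To give a flavor of how a contour integral can locate a root, here is the classical argument-principle formula, which is the idea behind the paper’s approach though not its exact algorithm. If g is analytic with exactly one simple zero z0 inside a closed contour C, then

z0 = (1/(2πi)) ∮ z g′(z)/g(z) dz,

and the integral can be evaluated with the trapezoidal rule on a circle, which converges rapidly for smooth periodic integrands. A hedged sketch; the function names and the choice of contour are mine:

    import numpy as np

    def root_by_contour(g, g_prime, center, radius, n=32):
        """Locate the single simple zero of g inside a circular contour
        via the argument principle. Assumes exactly one zero inside."""
        theta = 2 * np.pi * np.arange(n) / n
        z = center + radius * np.exp(1j * theta)
        dz_dtheta = 1j * radius * np.exp(1j * theta)
        integrand = z * g_prime(z) / g(z) * dz_dtheta
        # trapezoidal rule in theta, then divide by 2*pi*i; the root is real
        return (integrand.sum() * (2 * np.pi / n) / (2j * np.pi)).real

    # Kepler's equation as g(E) = E - e sin(E) - M. Since E - M = e sin(E),
    # the real root satisfies |E - M| <= e < 1, so a circle of radius 1
    # centered at M encloses it.
    M, e = 1.0, 0.5
    E = root_by_contour(lambda z: z - e * np.sin(z) - M,
                        lambda z: 1 - e * np.cos(z),
                        center=M, radius=1.0)
    print(E, E - e * np.sin(E))  # second value should reproduce M

Note that there is no iteration and no branching: the root comes out of a single fixed-size quadrature, which is part of what makes this family of methods fast in bulk.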
[1] Oliver H. E. Philcox, Jeremy Goodman, Zachary Slepian. Kepler’s Goat Herd: An Exact Solution for Elliptical Orbit Evolution. arXiv:2103.15829
I’ve been loving the recent posts on effective approximations and difficult simplifications.
I spent the early days of my career (in the late 1980s and early 1990s) taking algorithms running on mainframes and minis and getting equivalent results in real-time on a 2 MHz 8085 8-bit processor. So much of what you’ve been discussing rings so many bells.
I remember thinking all my problems were solved when our next-generation system contained an 8086 and 8087. Nope, at least not in general. Many of my 8085 algorithms remained faster and more accurate.
I soon learned to leverage the strengths of both approaches: The 8087 was far from useless, and cunning floating-point emulation allowed the best 8085 code to be ported over. That code lasted through the 386/387 era, right until the first 486DX appeared.
I don’t miss those days, yet I do. The simplest embedded processor I touch these days is 32-bit, running a full POSIX OS. Sigh.
“… you want to solve for eccentric anomaly M.”
Should be “E”?
I have been trying to wrap my head around these equations for months: many sites, many papers, many videos, no luck. This answered it all in one sentence: “and take an initial guess at the solution and stick it into f. Then take the output and stick it back into f, over and over, until you find a fixed point, i.e. f(E) = E.” Actually this part more than anything: “take an initial guess.”
Thanks, I can quit banging my head against my desk now.