Voyager’s slingshot maneuvers

This post started out as a thread on X. Here I’ve edited it into a blog post. The image below and the fact cited can be found in JPL Publication 89-24.

Voyager 2 velocity relative to the sun over time

In 1960 it didn’t seem that it would be possible to explore the solar system beyond Jupiter without greatly improved propulsion.

Then the gravitational assist (“slingshot”) maneuver was discovered in 1961. With this new discovery, NASA began making plans to take advantage of an alignment of the outer planets in the 1970s. This led to the Voyager missions.

(Fun fact: In a gravitational assist, the speed of the spacecraft with respect to the planet doesn’t change, but its speed relative to the sun can change greatly.)

Note that before encountering Jupiter, Voyager was moving well below solar system escape velocity. As a result of gravitational assists at four planets, the spacecraft is traveling at well over solar system escape velocity.

In a gravitational assist, speed (relative to the sun) increases or decreases, depending on which direction you approach the planet. At the end of the tour, Voyager 2 no longer had a reason to increase speed—it wasn’t possible to visit Pluto—but decreasing speed allowed it to visit both Neptune and its moon Triton.

It may seem that a gravitational assist violates conservation laws: where does the additional momentum come from? From the planets. When Voyager 2 passed Jupiter, Saturn, and Uranus, each of these planets lost some momentum, transferring it to the probe. When the spacecraft passed Neptune, the planet gained some momentum and the probe lost momentum. The changes in momentum were infinitesimal relative to the momentum of the gas giants, but large relative to the momentum of Voyager.
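Here’s a toy calculation, with made-up numbers and a planar geometry, showing how a flyby that merely rotates the spacecraft’s velocity in the planet’s frame can still change its speed in the sun’s frame. The planet speed, incoming velocity, and deflection angle below are all assumptions for illustration, not actual Voyager data.

```python
import math

# Toy planar flyby with made-up numbers (not actual Voyager data).
# The planet moves at 13 km/s along +x relative to the sun; in the
# planet's frame, the encounter only rotates the spacecraft's velocity.
v_planet = (13.0, 0.0)
v_in_sun = (10.0, 5.0)          # incoming velocity in the sun's frame

# Switch to the planet's frame
vx, vy = v_in_sun[0] - v_planet[0], v_in_sun[1] - v_planet[1]
speed_rel = math.hypot(vx, vy)

# Gravity rotates the velocity by a deflection angle (assumed -90 degrees)
theta = math.radians(-90)
vx2 = vx * math.cos(theta) - vy * math.sin(theta)
vy2 = vx * math.sin(theta) + vy * math.cos(theta)

# Speed relative to the planet is unchanged...
assert abs(math.hypot(vx2, vy2) - speed_rel) < 1e-12

# ...but back in the sun's frame the speed has increased
v_out_sun = (vx2 + v_planet[0], vy2 + v_planet[1])
print(math.hypot(*v_in_sun))    # about 11.2 km/s in
print(math.hypot(*v_out_sun))   # about 18.2 km/s out
```

Flipping the sign of the deflection angle models approaching the planet from the other side, which decreases the sun-frame speed instead, as Voyager 2 did at Neptune.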

Related post: Error-correcting codes used on Voyager missions

Advantages of Reed-Solomon codes over Golay codes

When Voyager 1 and 2 left Earth, their internal computers were programmed to use Golay error correction codes. Images transmitted from Jupiter and Saturn were encoded using Golay codes. After leaving Saturn, the software was upgraded to use Reed-Solomon error correction codes.

I didn’t realize how much difference the change of encoding made until I ran across a JPL report that elaborated on the efficiency of both codes.

Encoding these data has a price, and that paid for the old Golay encoding algorithm (used at Jupiter and Saturn) was one code bit overhead for every data bit (100 percent). The new RS encoding scheme reduces this overhead to about 20 percent. In addition, it reduces the number of bit errors from 5 in 100,000 to only 1 in a million!

So switching to Reed-Solomon cut overhead by 5x and improved error correction 50x.
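As a sanity check on those factors, using only the figures quoted from the report above:

```python
# Figures quoted from the JPL report above
golay_overhead = 1.00     # one code bit per data bit (100%)
rs_overhead = 0.20        # about 20%
golay_ber = 5 / 100_000   # bit error rate with Golay coding
rs_ber = 1 / 1_000_000    # bit error rate with Reed-Solomon

print(golay_overhead / rs_overhead)  # about 5
print(golay_ber / rs_ber)            # about 50
```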

There’s a reason CDs use Reed-Solomon codes and not Golay codes.

Related posts

Hand calculating exp(x)

The previous post mentioned that Martin Gardner announced that Ramanujan’s conjecture that exp(π√163) is an integer had been proven. This was an April Fool’s joke in 1975. Gardner said

Working by hand, he [Ramanujan] found the value to be 262537412640768743.999999999999… The calculations were tedious, and he was unable to verify the next decimal digits.

Calculating exp(π√163) without a computer—Ramanujan died in 1920—would indeed be tedious, but not insurmountable. Certainly it would not stop someone like Ramanujan from testing a conjecture.

How might you go about calculating exp(π√163) by hand?

Algorithm

One possibility is an algorithm in [1].

\exp(t) = \left(1 + r + \frac{r^2}{2!} + \frac{r^3}{3!} + \cdots \right)^{256} 2^n

where r = t′ / 256, t′ = t − n log 2, and n is chosen to minimize |t′|.

We can choose n so that |t′| < log(2)/2 and so |r| < 0.0014. This means the infinite series converges rapidly and not many terms will be needed, depending on the desired precision.

The calculation of x²⁵⁶ can be done by squaring x eight times, since 256 = 2⁸.

Although the context of this post is hand calculations, this would also be a viable algorithm for a program doing extended precision calculations.

Example

In our case, t = π√163 = 40.1091… and we choose n = 58 so that t′ = −0.09336….

Then r = −0.000364….

So each term in the series will contribute 3 or 4 decimal places to the desired precision at first, more once the factorial denominators get large.
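Here’s a sketch of the algorithm in Python, using the standard decimal module for extended precision. The working precision, the number of series terms, and the function name are assumptions chosen for roughly 40 significant figures, not part of the original algorithm statement.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # working precision in decimal digits (an assumption)

def exp_reduced(t, terms=15):
    """Compute exp(t) as (exp(r))^256 * 2^n with r = (t - n log 2)/256,
    where n is chosen to make |t - n log 2| at most (log 2)/2."""
    log2 = Decimal(2).ln()
    n = int(round(t / log2))       # minimizes |t'|
    r = (t - n * log2) / 256       # |r| < 0.0014, so the series converges fast
    term = total = Decimal(1)
    for k in range(1, terms):      # Taylor series for exp(r)
        term *= r / k
        total += term
    for _ in range(8):             # raise to the 256th power: 256 = 2^8
        total *= total
    return total * Decimal(2) ** n

print(exp_reduced(Decimal("40.1091699911")))  # about 2.6254e17, near Ramanujan's constant
```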

Related posts

[1] Jonathan Borwein and David Bailey. Mathematics by Experiment. Volume 1.

Very accurate pi approximation

Yesterday I ran across this approximation for π and posted it on X.

π ≈ 3 log(640320) / √163

This approximation is good to 15 decimal places, and so the approximation is exact to within the limits of floating point arithmetic.
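You can verify this in a couple lines of Python using double precision arithmetic:

```python
import math

# pi is approximately 3 log(640320) / sqrt(163)
approx = 3 * math.log(640320) / math.sqrt(163)
print(approx, math.pi, abs(approx - math.pi))
```

The difference is at most a unit or two in the last place of a double.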

I said in a follow up comment that the √163 term looked familiar from Heegner numbers and near integers. I was so close but didn’t make the full connection.

It’s well known that exp(π√163) is very nearly an integer N:

exp(π√163) = N − ε

where N = 262537412640768744 and ε is less than 10⁻¹². The left-hand side of this equation is known as Ramanujan’s constant.

Incidentally, Martin Gardner announced on April Fool’s day in 1975 that Ramanujan’s constant had been proven to be an integer.

There are deep reasons for why Ramanujan’s constant is nearly an integer, but the connection between this fact and the approximation for π is not deep at all: the latter is basically the former, solved for π. Because

N = 640320³ + 744,

log(N) is very nearly 3 log(640320).

Interior of a conic

What is the interior of a circle? Obvious.

What is the interior of a parabola? Not quite as obvious.

What is the interior of a hyperbola? Not at all obvious.

Is it possible to define interior in a way that applies to all conic sections?

Circles

If you remove a circle from the plane, there are two components left. Which one is the interior and which one is the exterior?

Obviously the bounded part is the interior and the unbounded part is the exterior. But using boundedness as our criterion runs into immediate problems.

Parabolas

If you remove a parabola from the plane, which component is the interior and which is the exterior? You might say there is no interior because both components of the plane minus the parabola are unbounded. Still, if you had to label one of the components the interior, you’d probably say the smaller one.

But is the “smaller” component really smaller? Both components have infinite area. You could patch this up by taking a square centered at the origin and letting its size grow to infinity. The interior of the parabola is the component that has smaller area inside the square all along the way.

Hyperbolas

A hyperbola divides the plane into three regions. Which of these is the interior? If we try to look at area inside an expanding square, it’s not clear which component(s) will have more or less area. It seems the answer may depend on the location of the center of the square relative to the position of the hyperbola.

Tangents to a circle

Here’s another way to define the interior of a circle. Look at the set of all lines that are tangent to a point on the circle. None of them go through the interior of the circle. We can define the interior of the circle as the set of points that no tangent line passes through.

This clearly works for a circle, and it’s almost as clear that it would work for an ellipse.

How do we define the exterior of a circle? We could just say it’s the part of the plane that isn’t the interior or the circle itself. But there is a more interesting definition. If the interior of the circle consists of points that tangent lines don’t pass through, the exterior of the circle consists of the set of points that tangent lines do pass through. Twice in fact: every point outside the circle is at the intersection of two lines tangent to the circle.

To put it another way, consider the set of all tangent lines to a circle. Every point in the plane is part of zero, one, or two of these lines. The interior of the circle is the set of points that belong to zero tangent lines. The circle is the set of points that belong to one tangent line. The exterior of the circle is the set of points that belong to two tangent lines.
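This count can be checked numerically for the unit circle. The tangent line at angle θ touches the circle at (cos θ, sin θ) and has equation x cos θ + y sin θ = 1, so counting tangent lines through a point (a, b) amounts to counting roots of a cos θ + b sin θ − 1 on [0, 2π). The grid-based root counting below is a rough sketch, not a robust method:

```python
import math

def tangent_count_circle(a, b, steps=10_000):
    """Count tangent lines of the unit circle passing through (a, b).
    The tangent at angle theta is x cos(theta) + y sin(theta) = 1, so we
    count roots of f(theta) = a cos(theta) + b sin(theta) - 1 on [0, 2 pi)
    by looking for sign changes on a grid."""
    f = lambda t: a * math.cos(t) + b * math.sin(t) - 1
    roots = 0
    for k in range(steps):
        t0 = 2 * math.pi * k / steps
        t1 = 2 * math.pi * (k + 1) / steps
        if f(t0) == 0 or f(t0) * f(t1) < 0:
            roots += 1
    return roots

print(tangent_count_circle(0, 0))  # interior point: 0 tangents
print(tangent_count_circle(1, 0))  # point on the circle: 1 tangent
print(tangent_count_circle(3, 4))  # exterior point: 2 tangents
```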

Tangents to a parabola

If we apply the analogous definition to a parabola, the interior of the parabola works out to be the part we’d like to call the interior.

It’s not obvious that every point of the plane not on the parabola and not in the interior lies at the intersection of two tangent lines, but it’s true.

Tangents to a hyperbola

If we look at the hyperbola x² − y² = 1 and draw tangent lines, the interior, the portion of the plane that no tangent line passes through, is the union of two components, one containing the interval (−∞, −1) of the x-axis and one containing (1, ∞). The exterior is then the component containing the line y = x. In the image above, the pink and lavender components are the interior and the green component is the exterior.

It’s unsatisfying that the interior of the hyperbola is disconnected. Also, I believe the exterior is missing the origin. Both of these annoyances go away when we add points at infinity. In the projective plane, the complement of a conic section consists of two connected components, the interior and the exterior. The origin lies on two tangent lines: one connecting (−∞, −∞) to (∞, ∞) and one connecting (−∞, ∞) to (∞, −∞).

Grokking the James Bond chord

The last chord of the James Bond theme is known as the James Bond chord. If you’ve ever seen a Bond movie, you’ll recognize it. Here’s a quick video showing how to play it on guitar.

I tried asking Grok 3 to generate some LilyPond code for sheet music a few days ago. LilyPond is pretty obscure as programming languages go—it’s not like asking for Python code—so I was skeptical, but Grok did well.

This afternoon I asked Grok to create sheet music for the James Bond chord. Specifically, I asked

  1. What is the “James Bond chord”?
  2. Create sheet music for the James Bond chord using Lilypond.
  3. Move the chord up an octave.

Here’s the output of each step.

What is the chord?

According to Grok,

The “James Bond chord” is a distinctive musical chord famously associated with the James Bond film franchise. It’s not just a single chord but rather a specific voicing and progression that evokes a sense of mystery, danger, and sophistication—perfectly matching the suave spy’s persona. Officially, it’s often identified as an E minor major 9 chord (EmMaj9), though its exact interpretation can vary slightly depending on the context.

The chord was first introduced by composer Monty Norman in the “James Bond Theme” for Dr. No (1962), arranged by John Barry, whose orchestration gave it its iconic swagger. …

First output

Grok produced the following LilyPond code

\version "2.22.0"

\score {
  \new Staff {
    \clef treble
    \key c \major % Default key, no sharps/flats; we'll specify accidentals manually
    \time 4/4
     <e g b dis' fis'>1 % EmMaj9 chord: E G B D# F#
  }
  \layout {}
  \midi {}
}

When compiled this gave the following music.

Grok also produced a MIDI file.

Second output

I wanted to move the chord up an octave for aesthetic reasons, putting the notes inside the staff. Grok complied, changing one line in the code, essentially adding an extra prime mark after each note.

<e' g' b' dis'' fis''>1 % EmMaj9 chord moved up an octave: E' G' B' D#'' F#''

This compiled to the following music.

Problems and prospects

Grok’s not perfect. In another experiment it produced code that wouldn’t compile. But when I told Grok that the code didn’t compile and asked it to try again, it worked.

I also tried to remove the time signature, the C symbol. I asked Grok to remove it, but it did not. I asked Grok “How do you get LilyPond to produce music without a time signature?” and it told me two ways, neither of which worked.

I’ve used LilyPond occasionally for years, not to produce full sheets of music but to produce little fragments for blog posts. I’ve always found it a bit mysterious, in part because I jumped in and used it as needed without studying it systematically. There have been times when I thought about including some music notation in a blog post and didn’t want to go to the effort of using LilyPond (or rather the effort of debugging LilyPond if what I tried didn’t work). I may go to the effort more often now that I have a fairly reliable code generator.

Posts using LilyPond

Rates of convergence

The last several posts have looked at counting two kinds of permutations: those with no consecutive integers and those with no fixed points. As n grows large, the proportion of permutations of n elements in each class approaches 1/e. This post will look a little closer at how fast each proportion approaches 1/e.

(Incidentally, if you’re looking for a problem to explore, take any statement of the form “X converges to Y” and ask how X converges to Y. Several of the posts on this blog follow that pattern.)

Definitions

To review our definitions, Q(n) is the number of permutations p of 1, 2, 3, …, n + 1 such that there are no consecutive integers, i.e. for no k does p(k + 1) = p(k) + 1. So, for example, 13254 counts, but 15423 does not count because 2 and 3 are consecutive.

Let D(n) be the number of derangements of 1, 2, 3, …, n, i.e. permutations p such that for no k does p(k) = k. So, for example, 23451 counts, but 21354 does not because 3 remains fixed.

Note that Q(n) counts a certain kind of permutation of n + 1 elements and D(n) counts a certain kind of permutation of n elements. This is a little awkward, but I chose my definition of Q to match OEIS. In order to count the proportion of permutations of n elements with no consecutive elements we should look at  Q(n − 1)/n!. Both Q(n − 1)/n! and D(n)/n! converge to 1/e, but not at the same rate.

D convergence

The convergence of D(n)/n! to 1/e is very fast. In fact,

D(n) = [n! / e]

where [x] is x rounded to the nearest integer. So the difference between D(n)/n! and 1/e is less than 0.5/n!. For n greater than or equal to 18, D(n)/n! = 1/e within the limits of floating point precision. As a rule of thumb, you can see about three significant figures in a plot, so for n greater than about 6 you can’t see the difference between D(n)/n! and 1/e.

Q convergence

The convergence of Q(n − 1)/n! to 1/e is leisurely by comparison.

Here’s why. In this post I showed Q(n − 1) is asymptotic to (n + 1)(n − 1)! / e.

So it is true that Q(n − 1)/n! is asymptotic to 1/e, but it is more accurate to say that it is asymptotic to (n + 1)/(ne).

D and Q convergence

If we look at the ratio of n Q(n − 1) / (n + 1) to n!, it converges to 1/e about as fast as the ratio of D(n) to n! does.

Said another way,

n Q(n − 1) / (n + 1) ~ D(n)

The ratio of n Q(n − 1) / (n + 1) to D(n) converges to 1 very quickly, with an error on the order of 10⁻ⁿ.
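Here’s a quick numerical comparison of the rates, computing D and Q from their recurrences (the Q recurrence is the one given in OEIS A000255; the function names are mine):

```python
import math

def D(n):
    """Derangements via D(n) = (n - 1)(D(n - 1) + D(n - 2))."""
    d0, d1 = 1, 0   # D(0), D(1)
    for k in range(2, n + 1):
        d0, d1 = d1, (k - 1) * (d1 + d0)
    return d1 if n >= 1 else 1

def Q(n):
    """OEIS A000255 via Q(n) = n Q(n - 1) + (n - 1) Q(n - 2)."""
    q0, q1 = 1, 1   # Q(0), Q(1)
    for k in range(2, n + 1):
        q0, q1 = q1, k * q1 + (k - 1) * q0
    return q1

inv_e = 1 / math.e
for n in [5, 10, 15]:
    f = math.factorial(n)
    print(n,
          abs(D(n) / f - inv_e),                       # fast: < 0.5/n!
          abs(Q(n - 1) / f - inv_e),                   # slow: about 1/(ne)
          abs(n * Q(n - 1) / ((n + 1) * f) - inv_e))   # fast again
```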

On to something else

I intend to end the series here. My next post will probably be about something else, something less mathematical.

So here’s the complete series.

Gluons, quarks, letters, and envelopes

Yesterday I wrote a couple of posts about a combinatorics question that led to OEIS sequence A000255. That page has this intriguing line:

This comment derives from a family of recurrences found by Malin Sjodahl for a combinatorial problem for certain quark and gluon diagrams (Feb 27 2010)

I love how pulling on a thread can lead you to unexpected places. What in the world does my little permutation problem have to do with particle physics‽

I found Malin Sjödahl’s website, and apparently the citation above refers to this paper from 2009. The author’s site doesn’t list any papers from 2010. Maybe the paper was published electronically in 2009 and in print in 2010.

Counting tensors

The paper is mostly opaque to me since I don’t know particle physics, but at one point Sjödahl says

The problem of finding all such topologies is equivalent to the number of ways of mapping N elements to each other without mapping a single one to itself …

and says that the solution is

N! \sum_{i=0}^N \frac{(-1)^i}{i!} \to \frac{N!}{e}

Sjödahl is not counting physical permutations but the number of possible tensors associated with gluon and quark interaction diagrams.

The right-hand side above is essentially the same as the asymptotic estimate for the function Q(n) in the previous post.

I didn’t find the recurrence that the OEIS comment alluded to. Perhaps I’m not looking at the same paper. Perhaps I’m not looking hard enough because I’m skimming a paper whose contents I don’t understand.

The Montmort letter problem

In more mathematical terminology, Sjödahl is counting the number of permutations of N objects with no fixed point, known as the number of derangements of a set of N objects.

If you divide by the number of possible permutations N! you have the probability that a random permutation moves every object.

This was historically known as the Montmort letter problem, named after Pierre Rémond de Montmort, who asked the following question. If N letters are randomly assigned to N envelopes, what is the probability that no letter ends up in the correct envelope?

The probability converges to 1/e as N approaches infinity. It approaches 1/e quickly, and so this is a good approximation even for moderate N. More on the rate of convergence in the next post.
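A quick simulation illustrates the letter problem (montmort_prob is a made-up name):

```python
import math
import random

def montmort_prob(n, trials=100_000):
    """Estimate the probability that none of n letters lands in its
    matching envelope, i.e. that a random permutation is a derangement."""
    hits = 0
    for _ in range(trials):
        p = list(range(n))
        random.shuffle(p)
        if all(p[k] != k for k in range(n)):
            hits += 1
    return hits / trials

print(montmort_prob(10), 1 / math.e)  # both near 0.3679
```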

Previous posts in this series

Example using a generating function to find asymptotic behavior

The previous post looked at how to compute Q(n), the number of permutations of 1, 2, 3, …, n + 1 that contain no consecutive integers. We found a way to numerically compute Q(n) but no analytic expression that would let us compute asymptotics.

The sequence Q(n) is sequence A000255 in OEIS, and OEIS gives the exponential generating function of Q:

\sum_{n=0}^\infty \frac{Q(n)}{n!} x^n = \frac{\exp(-x)}{(1-x)^2}

We can use this function and Theorem 5.2.1 from [1] to get the asymptotic form of Q(n). According to that theorem, the coefficient of xⁿ in our generating function is asymptotically the same as the coefficient of xⁿ in the principal part at the singularity at 1. This principal part is

\frac{1}{e(1-x)^2} + \frac{1}{e(1-x)} = \frac{1}{e}\left(2 + 3x + 4x^2 + 5x^3 + \cdots\right)

and so the coefficient of xⁿ is (n + 2)/e.

So

Q(n) \sim \frac{n + 2}{e} n!

and Q(n) / (n + 1)! approaches 1/e for large n, as conjectured in the previous post.
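We can check this numerically, computing Q(n) from the recurrence given in OEIS A000255 and comparing with the asymptotic form:

```python
import math

def Q(n):
    """OEIS A000255: Q(n) = n Q(n - 1) + (n - 1) Q(n - 2), Q(0) = Q(1) = 1."""
    q0, q1 = 1, 1
    for k in range(2, n + 1):
        q0, q1 = q1, k * q1 + (k - 1) * q0
    return q1

# ratio of Q(n) to its asymptotic form (n + 2) n! / e
for n in [5, 10, 15]:
    print(n, Q(n) * math.e / ((n + 2) * math.factorial(n)))
```

The printed ratios approach 1 rapidly.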

[1] Herbert S. Wilf. Generatingfunctionology. Second edition.

Permutations with no consecutive elements

I was working on something this week that made me wonder how many permutations break up all consecutive elements. For example, if we’re looking at permutations of abcde then acebd counts, but acdbe does not. I’d like to count the number of such permutations, and estimate for large N the number of permutations of N elements with no consecutive terms.

A brief search led to OEIS sequence A000255, which gives a recursive formula for the answer. Before looking at that, let’s define some terms and do some brute force calculation to get a feel for the problem.

Brute force calculation

Given a positive integer n, define Q(n) to be the number of permutations p of 1, 2, 3, …, n + 1 such that for no k does p(k + 1) = p(k) + 1.

from itertools import permutations

def no_consec(perm):
    for k in range(len(perm) - 1):
        if perm[k+1] == perm[k] + 1:
            return False
    return True

def Q(n):
    count = 0
    for p in permutations(range(n+1)):
        if no_consec(p):
            count += 1
    return count

We could use this code to compute Q(n) for moderate-sized n, but the time required to compute Q(n) this way is proportional to n! and so that won’t scale well.

Recurrence relation

OEIS sequence A000255 gives the values of what we’re calling Q(n), and the first few values match the output of the code above. But OEIS does better, giving a recursive solution for Q:

Q(n) = n Q(n − 1) + (n − 1) Q(n − 2)

for n > 1 with initial conditions Q(0) = Q(1) = 1.

As with Fibonacci numbers, you could implement this easily as a recursive function

def Q2(n):
    if n in [0, 1]:
        return 1
    return n*Q2(n-1) + (n-1)*Q2(n-2)

but it is much faster to implement iteratively.

def Q3(n):
    q = [1]*(n+2)
    for k in range(2, n+1):
        q[k] = k*q[k-1] + (k-1)*q[k-2]
    return q[n]

You could make the recursive implementation faster with memoization, using lru_cache from functools. However, memoization does not prevent recursion; it just speeds up recursion using a cache, and you can still get an error saying maximum recursion depth exceeded.
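For example, a memoized version of the recursive implementation might look like this (Q2_cached is my name for it):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Q2_cached(n):
    if n in (0, 1):
        return 1
    return n * Q2_cached(n - 1) + (n - 1) * Q2_cached(n - 2)

print(Q2_cached(6))  # 2119, matching OEIS A000255
# A call like Q2_cached(10_000) on a fresh cache can still hit the
# recursion limit: caching speeds up recursion but doesn't remove it.
```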

Asymptotics

We can calculate Q(n) for large n, but we don’t have an analytic expression [1] that will let us find the asymptotic behavior. I intended to work that out in my next post.

Empirically, it seems Q(n) approaches (n + 1)!/e as n goes to infinity, but I don’t have a proof of that at the time of writing. If this is correct, this means that in the limit the probability of a permutation having no fixed point equals the probability of it having no consecutive values.

Update: Proof

***

[1] There is an analytic expression for Q(n) in terms of a hypergeometric function

Q(n) = (−1)ⁿ ₂F₀(2, −n; ; 1)

but this expression isn’t helpful in finding the asymptotic behavior.