Wrapping Christmas lights around a tree trunk

Suppose you want to wrap Christmas lights around a tree trunk that we can approximate by a cylinder of radius r.

You want to wrap lights around the tree in a helix, going up a distance h every time you go around the tree once. What length of lights do you need to make n turns around the tree?

You can model the lights as a parametric curve with equation

(r cos t, r sin t, ht/2π)

If t ranges from 0 to T, the corresponding curve length is

(r² + h²/4π²)^{1/2} T

and you can set T = 2πn if you want to find the length of n turns.

How much does h matter? Not much if h is less than r, as is often the case when wrapping a tree trunk with Christmas lights. My daughter Allison discovered this while wrapping lights around our pine tree, and then I wrote this post adding the math details.

If we expand (r² + h²/4π²)^{1/2} as a function of h in a Taylor series centered at 0 we get

(r² + h²/4π²)^{1/2} = r + h²/(8π²r) + O(h⁴).

For example, suppose a tree has radius r = 10 inches and we move up h = 4 inches vertically with each turn. To complete one turn we let T = 2π. The exact length of one turn is

(r² + h²/4π²)^{1/2} T = (10² + 4²/4π²)^{1/2} (2π) = 62.96 inches.

If we ignore the h term we get

rT = 10 (2π) = 62.83 inches.

In short, the length of n turns around the tree is approximately 2πrn, the same as n circles around the tree. The difference in length between a helix with n turns and n circles is negligible, provided h is smaller than r. Even if h = r = 10 in the example above, we’d get a length of 63.6 inches and our approximation would still be off by less than an inch per turn.
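
Here’s a quick numerical check of the example above (a sketch of mine, not from the original post):

    # Exact helix length vs. the circular approximation for the example above.
    import numpy as np

    r, h, n = 10, 4, 1              # radius, rise per turn, number of turns
    T = 2 * np.pi * n

    exact  = np.sqrt(r**2 + h**2 / (4 * np.pi**2)) * T   # helix length
    approx = r * T                                       # ignore the vertical rise

    print(exact, approx)            # prints about 62.96 and 62.83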

The heart of this calculation pops up frequently in various contexts. For example, the same calculation appears in the post It doesn’t matter much if the tape is straight.

Curvature: principal, Gauss, and mean

This post will compute the center of curvature for an object described in the previous post. In order to do that we first need to describe principal curvature and Gauss curvature, and we’ll throw in mean curvature while we’re at it.

Let S be a surface sitting in three dimensional space. No need for more generality to get where we want to go. At any point p on S we can draw curves on S through p and compute the curvature at p. The curvature is the reciprocal of the radius of the kissing circle.

If we draw curves through p in every direction, some may have larger or smaller curvature than others. Let k1 and k2 be the minimum and maximum curvatures at p. These are the principal curvatures. The product k1 k2 of the principal curvatures is the Gaussian curvature and their average (k1 + k2)/2 is the mean curvature. Incidentally, when the principal curvatures are not equal, the directions in which the curvature is minimized and maximized are orthogonal.

In the previous post I said that the center of curvature for the surface with equation

x^2 + y^2 + (z/h)^2 = s^2 (x^2 + y^2) (z/h)^2 + 1

is finite because the curvature is always positive. In particular, we wanted to know the center of curvature at the bottom, where x = y = 0 and z = −h.

The calculation to get there is messy, but the end result is simple: the principal curvatures are equal by symmetry, and both equal 1 − s². Therefore the radius of curvature is 1/(1 − s²), and the center of curvature is at z = 1/(1 − s²) − h.

Calculation details

The following Mathematica code calculates the (signed) curvature of a curve of the form F(x, y) = 0.

k[f_, x_, y_] := (D[f[x, y], y]^2 D[f[x, y], {x, 2}] - 
    2 D[f[x, y], x] D[f[x, y], y] D[f[x, y], {x, 1}, {y, 1}] + 
    D[f[x, y], x]^2 D[f[x, y], {y, 2}]) / (D[f[x, y], x]^2 + 
     D[f[x, y], y]^2)^(3/2)

Define

g[x_, y_] := x^2 + y^2 - s^2 x^2 y^2 - 1

and replace y with z/h. When we evaluate the curvature at x = 0 and simplify

Simplify[k[g, x, y] /. { y -> z/h, x -> 0}, Assumptions -> {h > 0}]

we get

(h² − s²z²) / (h|z|).

When z = −h we find that the unsigned curvature is 1 − s². In the previous post we assumed h > 1, but the calculation above shows that if h < 1 it’s possible for the curvature to be 0. (Recall s must be between 0 and 1.)
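
Here’s a sketch of the same calculation in Python with SymPy (my own check, not code from the post), using the implicit-curve curvature formula above:

    # SymPy check of the curvature at the bottom point x = 0, z = -h.
    from sympy import symbols, simplify, Rational

    x, y, s = symbols('x y s', real=True)

    def curvature(f):
        # signed curvature of the implicit curve f(x, y) = 0
        fx, fy = f.diff(x), f.diff(y)
        fxx, fyy, fxy = f.diff(x, 2), f.diff(y, 2), f.diff(x, y)
        return (fy**2*fxx - 2*fx*fy*fxy + fx**2*fyy) / (fx**2 + fy**2)**Rational(3, 2)

    g = x**2 + y**2 - s**2 * x**2 * y**2 - 1

    # y = z/h, so the bottom point z = -h corresponds to x = 0, y = -1
    print(simplify(curvature(g).subs({x: 0, y: -1})))    # 1 - s**2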


Hyperellipsoid surface area

Dimension 2

The equation for the perimeter of an ellipse is

p = 4aE(e^2)

where a is the semimajor axis, e is eccentricity, and E is a special function. The equation is simple, in the sense that it has few terms, but it is not elementary, because it depends on an advanced function, the complete elliptic integral of the second kind.

However, there is an approximation for the perimeter, in terms of the semimajor axis a and the semiminor axis b, that is both simple and elementary:

p \approx 2 \pi \left(\frac{a^{3/2} + b^{3/2}}{2} \right )^{2/3}
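
Here’s a quick comparison of the exact perimeter and the approximation (a sketch using SciPy, not from the post). Note that SciPy’s ellipe takes the parameter m = e².

    import numpy as np
    from scipy.special import ellipe     # complete elliptic integral E(m), m = e^2

    a, b = 3.0, 1.0                      # semimajor and semiminor axes
    exact  = 4 * a * ellipe(1 - (b/a)**2)
    approx = 2 * np.pi * ((a**1.5 + b**1.5) / 2)**(2/3)

    print(exact, approx)                 # prints roughly 13.36 and 13.35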

Dimension 3

The generalization of an ellipse to three dimensions is an ellipsoid. The surface area of an ellipsoid is neither simple nor elementary. The surface area S is given by

\begin{align*} \varphi &= \arccos\left(\frac{c}{a} \right) \\ k^2 &= \frac{a^2\left(b^2 - c^2\right)}{b^2\left(a^2 - c^2\right)} \\ S &= 2\pi c^2 + \frac{2\pi ab}{\sin(\varphi)}\left(E(\varphi, k)\,\sin^2(\varphi) + F(\varphi, k)\,\cos^2(\varphi)\right) \end{align*}

where E is the incomplete elliptic integral of the second kind and F is the incomplete elliptic integral of the first kind.

However, once again there is an approximation that is simple and elementary. The surface area is approximately

S \approx 4\pi \left( \frac{(ab)^p + (bc)^p + (ac)^p}{3} \right )^{1/p}
where p = 1.6075.
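
Here’s a numerical comparison of the exact formula and the approximation (a sketch, not from the post). SciPy’s ellipkinc and ellipeinc take the parameter m = k² rather than the modulus k.

    import numpy as np
    from scipy.special import ellipkinc, ellipeinc   # F(phi, m) and E(phi, m), m = k^2

    def exact_area(a, b, c):                         # assumes a > b > c > 0
        phi = np.arccos(c / a)
        m = a**2 * (b**2 - c**2) / (b**2 * (a**2 - c**2))
        F, E = ellipkinc(phi, m), ellipeinc(phi, m)
        return 2*np.pi*c**2 + 2*np.pi*a*b/np.sin(phi) * (E*np.sin(phi)**2 + F*np.cos(phi)**2)

    def approx_area(a, b, c, p=1.6075):
        return 4*np.pi*(((a*b)**p + (b*c)**p + (a*c)**p) / 3)**(1/p)

    a, b, c = 4.0, 3.0, 2.0
    print(exact_area(a, b, c), approx_area(a, b, c))  # roughly 111.5 and 111.6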

Notice the similarities between the approximation for the perimeter of an ellipse and the approximation for the area of an ellipsoid. The former is the perimeter of a unit circle times a kind of mean of the axes. The latter is the area of a unit sphere times a kind of mean of the products of pairs of axes. The former uses a p-mean with p = 1.5 and the latter uses a p-mean with p = 1.6075. More on such means here.

Dimensions 4 and higher

The complexity of expressions for the surface area of an ellipsoid apparently increases with dimension. The expressions get worse for hyperellipsoids, i.e. n-ellipsoids for n > 3. You can find such expressions in [1]. More on that in just a minute.

Conjecture

It is natural to conjecture, based on the approximations above, that the surface area of an n-ellipsoid is the area of a unit sphere in dimension n times the p-mean of all products of n-1 semiaxes for some value of p.

For example, the surface area of an ellipsoid in 4 dimensions might be approximately

S \approx 2\pi^2 \left( \frac{(abc)^p + (abd)^p + (acd)^p + (bcd)^p}{4} \right )^{1/p}

for some value of p.

Why this form? Permutations of the axes do not change the surface area, so we’d expect permutations not to affect the approximation either. (More here.) Also, from dimensional analysis we’d expect the formula to involve products of n-1 semiaxes, since the result is an (n-1)-dimensional volume.

Surely I’m not the first to suggest this. However, I don’t know what work has been done along these lines.

Exact results

In [1] the author gives some very complicated but general expressions for the surface area of a hyperellipsoid. The simplest of his expressions involves probability:

S = n \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n+1}{2} \right)} \text{E}\left(\sqrt{q_1^2 X_1^2 + q_2^2 X_2^2 + \cdots + q_n^2 X_n^2 }\right)

where the Xs are independent normal random variables with mean 0 and variance 1/2.

At first it may look like this can be simplified. The sum of normal random variables is a normal random variable. But the squares of normal random variables are not normal, they’re gamma random variables. The sum of gamma random variables is a gamma random variable, but that’s only if the variables have the same scale parameter, and these do not unless all the semiaxes, the qs, are the same.

You could use the formula above as a way to approximate S via Monte Carlo simulation. You could also use asymptotic results from probability to get approximate formulas valid for large n.
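
To make the Monte Carlo idea concrete, here’s a sketch for n = 3. This is my own setup rather than the formula from [1]: it uses the fact that the area element of the ellipsoid, pulled back to the unit sphere, is √((bc u₁)² + (ac u₂)² + (ab u₃)²) dσ(u), so the surface area is 4π times the average of that square root over uniformly distributed unit vectors u. Note the products of pairs of semiaxes showing up again.

    # Monte Carlo estimate of the surface area of an ellipsoid, n = 3.
    import numpy as np

    rng = np.random.default_rng(1)
    a, b, c = 4.0, 3.0, 2.0
    N = 1_000_000

    u = rng.standard_normal((N, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform directions on the sphere

    q = np.array([b*c, a*c, a*b])                   # products of pairs of semiaxes
    estimate = 4 * np.pi * np.mean(np.sqrt((q**2 * u**2).sum(axis=1)))
    print(estimate)                                 # close to the exact value, about 111.5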

 

[1] Igor Rivin. Surface area and other measures of ellipsoids. Advances in Applied Mathematics 39 (2007) 409–427

How faithful can a map be?

It’s well known that you cannot map a sphere onto the plane without distortion. You can’t map the entire sphere to the plane at all because a sphere and a plane are not topologically equivalent.

But even if you want to map a relatively small portion of globe to paper, say France, with about 0.1% of the globe’s area, you’ll have to live with some distortion. How can we quantify this distortion? How small can it be?

We know the result has to vary with area. We expect it to approach zero as area on the globe goes to zero: the world is not flat, but it is locally flat. And we expect the amount of distortion to go to infinity as we take in more of the globe’s area. We expect to be able to make a map of France without too much distortion, less distortion than making a map of Australia, but more distortion than making a map of Liechtenstein.

This problem was solved a long time ago, and John Milnor wrote an expository article about it in 1969 [1].

Defining distortion

The way to quantify map distortion is to look at the ratio of distances on the globe to distances on paper. We measure the distance between points on the globe by geodesic distance, the length of the shortest path between two points. We’d like the ratio of distances on a globe to distances on a map to be constant, but this isn’t possible. So we look at the minimum and maximum of this ratio. In a good map these two ratios are roughly the same.

Milnor uses

dS(x, y)

to denote the distance between two points x and y on a sphere, and

dE(f(x), f(y))

for the distance between their image under a projection to the Euclidean plane.

The scale of a map with respect to points x and y is the ratio

dE(f(x), f(y)) / dS(x, y).

Let σ1 be the minimum scale as x and y vary and let σ2 be the maximum scale. The distortion of the projection f is defined to be

δ = log(σ2 / σ1).

When σ1 and σ2 are approximately equal, δ is near 0.

Example: Mercator projection

One feature of the Mercator projection is that there is no distortion along lines of constant latitude. Given two points on the equator (less than 180° apart) their distance on a Mercator projection map is strictly proportional to their distance on the globe. The same is true for two points on the flat part of the US-Canada border along the 49th parallel, or two points along the 38th parallel dividing North Korea and South Korea.

For points along a parallel (i.e. a line of latitude) the scale is constant, such as a thousand miles on the globe corresponding to an inch on the map. We have σ1 = σ2 and so δ = 0.

The distortion in the Mercator projection is all in the vertical direction; distances along a meridian are distorted. Mercator maps latitude φ to something proportional to

φ ↦ log( sec φ + tan φ ).

Near the equator sec φ ≈ 1, tan φ ≈ φ, and the image of φ is approximately φ. But as φ approaches 90° the secant and tangent terms become unbounded and the distortion goes to infinity. You could use the equation for the Mercator projection of latitude to calculate the distortion of a “rectangle on a globe.”
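
Here’s a sketch along those lines (my own, not from the post). Since the Mercator projection is conformal with local scale proportional to sec φ, the distortion of a latitude–longitude rectangle spanning latitudes φ1 ≤ φ2 north of the equator works out to log(sec φ2 / sec φ1).

    # Distortion of a Mercator "rectangle on a globe" between two latitudes.
    import numpy as np

    def mercator_distortion(phi1, phi2):
        # ratio of largest to smallest local scale is sec(phi2)/sec(phi1)
        return np.log(np.cos(phi1) / np.cos(phi2))

    for top in [30, 45, 60, 80, 89]:
        print(top, mercator_distortion(np.radians(0), np.radians(top)))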

Minimizing distortion

Let Dα be a disk of geodesic radius α radians, where 0 < α < π. Then Milnor shows that the minimum possible distortion when mapping Dα to the plane is

δ0 = log(α / sin α).

This minimum distortion is achieved by an actual map, one that is unique up to similarity transformations of the plane. It is known in cartography as the azimuthal equidistant projection.

For small values of α the distortion is approximately α²/6, or 2/3 of the ratio of the area of Dα to the area of the globe.

We said earlier that France takes up about 0.1% of the earth’s surface, and so the minimum distortion for a map of France would be on the order of 0.0007.
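
Here’s a quick check of that estimate (a sketch, not from the post): solve for the radius α of a spherical cap covering 0.1% of the sphere and evaluate Milnor’s bound.

    # Minimum distortion for a region covering 0.1% of the sphere's area.
    import numpy as np

    fraction = 0.001                        # France is about 0.1% of the globe
    # a cap of geodesic radius alpha covers (1 - cos(alpha))/2 of the sphere
    alpha = np.arccos(1 - 2*fraction)
    delta0 = np.log(alpha / np.sin(alpha))  # Milnor's minimum distortion
    print(alpha, delta0)                    # about 0.063 radians and 0.00067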

[1] John Milnor. A Problem in Cartography. American Mathematical Monthly, December 1969, pp. 1102–1112.

{div, grad, curl} of a {div, grad, curl}

The various combinations of divergence, gradient, and curl are confusing to someone seeing them for the first time, and even for someone who has seen them many times. Is the divergence of a curl zero, or is it the divergence of a gradient that’s zero? And there’s another one: is it the curl of a curl or the curl of a gradient that’s zero?

It’s a mess that’s hard to sort out without pulling out differential forms. This post will show how a calculus student could make some sense out of all this, and how differential forms clarify the situation further.

Vector calculus perspective

We’ll start out looking at things from the perspective of a calculus student. We can make a table of all nine possible combinations of {grad, curl, div} applied to a {grad, curl, div} and start by asking which combinations make sense.

Gradient is something that takes a scalar function and returns a vector field. Curl takes a vector field and returns another vector field. Divergence takes a vector field and returns a scalar function. This means that only five of our nine combinations are even defined.

It turns out that the divergence of a curl is zero, and the curl of a gradient is zero (the zero vector). The other three possibilities are defined, but they are not zero in general.

So we can extend our chart to include the zeros.

Plain text version of chart image included at the bottom of the post.
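
As a sanity check on the zeros in the chart, here’s a quick SymPy computation (mine, not from the post):

    # Verify curl(grad f) = 0 and div(curl F) = 0 for arbitrary smooth fields.
    from sympy import Function
    from sympy.vector import CoordSys3D, gradient, curl, divergence

    N = CoordSys3D('N')
    x, y, z = N.x, N.y, N.z

    f = Function('f')(x, y, z)                      # arbitrary scalar field
    F = (Function('P')(x, y, z) * N.i +             # arbitrary vector field
         Function('Q')(x, y, z) * N.j +
         Function('R')(x, y, z) * N.k)

    print(curl(gradient(f)))         # 0 (the zero vector)
    print(divergence(curl(F)))       # 0
    print(divergence(gradient(f)))   # the Laplacian of f, not zero in general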

Differential form perspective

From the perspective of differential forms, a scalar function f is a 0-form.

The differential of a 0-form is a 1-form and corresponds to the gradient.

The differential of a 1-form is a 2-form and corresponds to curl.

The differential of a 2-form is a 3-form and corresponds to divergence.

The differential of a differential is 0: d² = 0. This holds for k forms in general, for any non-negative integer k. So the curl of a gradient is 0 and the divergence of a curl is 0.

Gradients are 1-forms and curls are 2-forms. They’re different kinds of things. Vector calculus hides this distinction, which initially makes things simpler but ultimately makes things harder.

Now what about the three possibilities marked with question marks in the table above: the divergence of a gradient, the curl of a curl, and the gradient of a divergence?

From the perspective of differential forms, these are illegal operations. You cannot take the divergence of a gradient, because divergence operates on 2-forms, and a gradient is a 1-form. Similarly, you cannot take the curl of a curl or the gradient of a divergence. You could think of differential forms as adding type-checking to vector calculus.

But operations like taking the divergence of a gradient are legal in vector calculus. What gives?

Hodge star

The Hodge star operator is a duality between k-forms and (n−k)-forms. In vector calculus n = 3, and so the Hodge star takes 0-forms to 3-forms and 3-forms to 0-forms.

f ↔ f dx dy dz

It also takes 1-forms to 2-forms and 2-forms to 1-forms.

f dx + g dy + h dz ↔ f dy dz + g dz dx + h dx dy.

You can’t take the divergence of the gradient of a function f, but you can translate the 1-form df, which represents the gradient, into the 2-form *df via the Hodge operator, then take the divergence of that. This gives you a 3-form d*df, which you can translate to a 0-form by applying * once more to get *d*df. So the Laplacian, defined to be the divergence of the gradient in vector calculus, is *d*d in the language of differential forms.

Curl takes 1-forms to 2-forms, so you can’t take the curl of a curl. But you can turn a curl into a 1-form via the Hodge operator and then take the curl of that. And while you can’t take the divergence of a gradient, you can take the divergence of the Hodge operator applied to a gradient.

In vector calculus the Hodge operator is invisible. Making it visible explains why some combinations of operators always result in zeros and some do not: some identities follow from the general identity d² = 0, but operations requiring a Hodge operator are not zero in general.

The Hodge star operator is not as simple in general as it is in Euclidean space. On a Riemannian manifold the Hodge operator is defined in terms of the metric. Defining the Laplacian of f as *d*df extends to Riemannian manifolds, but defining it as the sum of second partial derivatives does not.


Plain text chart:

|-------------+------+------+-----|
| outer\inner | grad | curl | div |
|-------------+------+------+-----|
| grad        | NA   | NA   | ?   |
| curl        | 0    | ?    | NA  |
| div         | ?    | 0    | NA  |
|-------------+------+------+-----|

Nephroids and evolutes

The previous post looked at the evolute of an ellipse. This post will look at evolutes more generally, and then look at nephroids.

As a quick reminder, given a curve c, a point on its evolute is the center of curvature for a point on c. See the previous post for a detailed example.

If a curve has a parameterization (x(t), y(t))^T then its evolute has parameterization

\begin{pmatrix}x(t) \\ y(t) \end{pmatrix} - \frac{x'(t)^2+y'(t)^2}{x'(t)\, y''(t)-x''(t)\, y'(t)} \begin{pmatrix}y'(t) \\ -x'(t) \end{pmatrix}

Nephroid curve

Let’s apply this to a nephroid. The word comes from the Greek for kidney-shaped, the same root as in nephrology.

A nephroid can be parameterized by

\begin{pmatrix} 3\cos t - \cos 3t \\ 3\sin t - \sin 3t \end{pmatrix}

When we compute the evolute for this curve we get

\begin{pmatrix} 2\cos^3 t \\ (2 + \cos 2t)\sin t \end{pmatrix}
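
Here’s a sketch of that computation in SymPy (my own check, not code from the post), using the evolute formula above:

    # Evolute of the nephroid, computed from the general formula above.
    from sympy import symbols, cos, sin, simplify, Matrix

    t = symbols('t', real=True)

    def evolute(x, y):
        xp, yp = x.diff(t), y.diff(t)
        xpp, ypp = xp.diff(t), yp.diff(t)
        k = (xp**2 + yp**2) / (xp*ypp - xpp*yp)
        return Matrix([x - k*yp, y + k*xp])

    x = 3*cos(t) - cos(3*t)
    y = 3*sin(t) - sin(3*t)

    ev = evolute(x, y)
    target = Matrix([2*cos(t)**3, (2 + cos(2*t))*sin(t)])
    print((ev - target).applyfunc(simplify))   # should print the zero vector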

It’s not apparent from the expression above that the evolute of a nephroid is another nephroid, but it is clear from the plot below.

The solid blue curve is the nephroid, and the dashed orange curve is its evolute. Apparently the evolute is another nephroid, rotated and rescaled. We’ll come back to the equations shortly, but let’s look at another example first.

Cayley’s sextic

There is a curve known as Cayley’s sextic that has parameterization

\begin{pmatrix} \cos^3 (t/3) \,\cos (t) \\ \cos^3 (t/3) \,\sin (t) \end{pmatrix}

When we compute the evolute of this curve we get

\begin{pmatrix} -\cos^2(t/3) (-2 \cos(2t/3) + \cos(4t/3)) / 2 \\ \sin^3(2t/3)/4 \end{pmatrix}

When we plot Cayley’s sextic and its evolute we get

Note that to get the full curve of the evolute we let t run from 0 to 4π.

Again it’s not at all obvious from the equations, but apparently the evolute of Cayley’s sextic is also a nephroid, this time scaled, stretched, and shifted.

This says that pre-evolutes are not unique: two different curves can have the same kind of curve as their evolute.

Algebra

Now for the hard part: doing the algebra to show that the curves that look like nephroids are indeed nephroids.

We need to show that the set of points traced out by the evolute is a set of points on a nephroid: we don’t have to show that our parameterization of the evolute can be turned into our parameterization of a nephroid. So we shift to an implicit equation for a nephroid, scaled by a factor a:

(x^2 + y^2 - 4a^2)^3 = 108 \,a^4 y^2

Nephroid evolute

First let’s show that the evolute of a nephroid is a nephroid. From the plot it appears that the evolute is the original nephroid rotated a quarter turn and scaled by 1/2. So we suspect that if we swap x and y in the evolute we’ll get a nephroid with a = 1/2. Let’s let Mathematica verify this for us:

    implicit[x_, y_, a_] := (x^2 + y^2 - 4 a^2)^3 - 108 a^4 y^2
    Simplify[implicit[(2 + Cos[2 t]) Sin[t], 2 Cos[t]^3, 1/2]]

Without calling Simplify we get something that isn’t obviously zero, but applying trig identities can reduce it to zero.

Cayley’s sextic evolute

In order to show that the evolute of Cayley’s sextic is a nephroid, we have to find what nephroid we believe it is.

Let’s look at our parameterization of a nephroid again:

\begin{pmatrix} 3\cos t - \cos 3t \\ 3\sin t - \sin 3t \end{pmatrix}

We can see that the two cusps are at (-2, 0) and (2, 0), and the minimum and maximum heights are (0, -4) and (0, 4). We’ll line these points up with their counterparts in the evolute of Cayley’s sextic. Let’s look at its parameterization.

\begin{pmatrix} -\cos^2(t/3) (-2 \cos(2t/3) + \cos(4t/3)) / 2 \\ \sin^3(2t/3)/4 \end{pmatrix}

The two cusps correspond to y = 0, which happens when t = 0 and t = 3π/2. The corresponding x values are 1/2 and 0. This suggests the center of our nephroid is at 1/4 and our nephroid has been scaled by 1/8 in the horizontal direction.

The top and bottom correspond to x = 1/4, which happens when t = 3π/4 and t = 9π/4, and so our nephroid has height 1/4. This says our nephroid has been scaled by 1/16 in the vertical direction.

So if we shift x by 1/4 then multiply by 8, and multiply y by 16, we suspect we’ll get a standard nephroid. Let’s see what Mathematica has to say.

    Simplify[implicit[8 (-Cos[t/3]^2 (-2 Cos[2 t/3] + Cos[4 t/3])/2 - 1/4), 
        16 Sin[2 t/3]^3/4, 1]]

This confirms that our guesses were correct. The evolute of Cayley’s sextic is indeed a nephroid, and we’ve identified which nephroid it is.

Evolute of an ellipse

Suppose you’re standing on an ellipse. (You actually are: lines of longitude are elliptical because of earth’s equatorial bulge.)

Now draw a line perpendicular to the ellipse where you’re standing. Lines of longitude are nearly circles, but we’ll look at a more obviously elliptical ellipse.

ellipse with one normal line

The line is perpendicular to the northeast side of the ellipse where it starts, but not where it crosses on the other side.

Now suppose lots of people standing on the ellipse all draw perpendicular lines. We get a surprising result.

If we did this with a circle, all we’d see is lines radiating out of the center of the circle. But for an ellipse we get something much more interesting.

There’s a simple equation for the star-like figure in the middle created by all the normal lines. We’ll get to that shortly, but first we need to talk about kissing circles.

Kissing circles

Suppose you wanted to approximate an ellipse by pieces of a circle, as is sometimes done in computer graphics. What radius should you use?

The circle that best approximates a curve at a point is called the kissing circle, or classically the osculating circle. The radius of this circle is the radius of curvature at the point where it touches the curve.

ellipse with two kissing circles

In the image above, the small blue circle is the best approximation of the ellipse on the side. The much bigger green circle is the best approximation to the ellipse on the top. The centers of these two circles are the centers of curvature at the two points on the curve.

Equation of evolute

We could visualize all centers of curvature by plotting the center of curvature for each point on the curve. This is called the evolute of the curve.

If we parameterize our ellipse by

x(t) = a cos t
y(t) = b sin t

then the evolute has parameterization

x(t) = (a² − b²) cos³ t / a
y(t) = (b² − a²) sin³ t / b
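
Here’s a quick numerical check (a sketch of mine, not from the post) that this really is the center of the kissing circle:

    # Center of curvature at one point of the ellipse vs. the evolute formula.
    import numpy as np

    a, b, t = 3.0, 2.0, 0.7                       # arbitrary example values

    p   = np.array([a*np.cos(t), b*np.sin(t)])    # point on the ellipse
    v   = np.array([-a*np.sin(t), b*np.cos(t)])   # velocity
    acc = np.array([-a*np.cos(t), -b*np.sin(t)])  # acceleration

    speed  = np.hypot(v[0], v[1])
    kappa  = (v[0]*acc[1] - v[1]*acc[0]) / speed**3   # signed curvature
    normal = np.array([-v[1], v[0]]) / speed          # unit normal
    center = p + normal / kappa                       # center of the kissing circle

    evolute = np.array([(a**2 - b**2)*np.cos(t)**3 / a,
                        (b**2 - a**2)*np.sin(t)**3 / b])

    print(center, evolute)                        # the two points agree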

You can find a general formula for the evolute to any parameterized curve in the next post.

Here’s what it looks like.

This is the same shape as the figure formed by the normal lines at the top of the post. The lines that are normal to the ellipse are tangent to its evolute. To confirm this, we’ll plot the normal lines again and plot the evolute on top in red.

Incidentally, the evolute of an ellipse can extend outside the ellipse, and it will if the ellipse is sufficiently eccentric. Here’s an example.

The next post looks at a curve whose evolute is another curve of the same kind.


How to calculate length of an elliptic arc

This post will show how to find the length of piece of an ellipse and explain what elliptic integrals have to do with ellipses.

Assume we have an ellipse centered at the origin with semi-major axis a and semi-minor axis b. So a > b > 0, the longest diameter of the ellipse is 2a and the shortest is 2b. The ellipse can be parameterized by

\begin{align*} x(t) &= a \cos(t) \\ y(t) &= b \sin(t) \end{align*}

Special case: quarter of an ellipse

Let’s first find the perimeter of a 1/4 of the ellipse.

one quarter of an ellipse highlighted in red

This is given by the integral

\int_0^{\pi/2} \sqrt{a^2 \sin^2(t) + b^2 \cos^2(t)}\, dt

The change of variables u = π/2 − t turns the integral into

\begin{align*} \int_0^{\pi/2} \sqrt{a^2 \cos^2(u) + b^2 \sin^2(u)}\, du &= a \int_0^{\pi/2} \sqrt{1 - \left(1 - b^2/a^2 \right) \sin^2(u)}\, du \\ &= a E\left(1 - b^2/a^2 \right) \end{align*}

because E, the “elliptic integral of the second kind,” is defined by

E(m) = \int_0^{\pi/2} \sqrt{1 - m\sin^2 t}\, dt

Therefore the perimeter of the entire ellipse is 4aE(1 − b²/a²). Let’s define

m = 1 - b^2/a^2

for the rest of the post. Incidentally, m = e² where e is the eccentricity of the ellipse.

General case

Now let’s find the length of an arc where t ranges from 0 to T and T is not necessarily π/2.

general elliptic segment highlighted in red

The derivation is similar to that above.

\begin{align*} \int_0^T \sqrt{a^2\sin^2(t) + b^2\cos^2(t)}\, dt &= \int_{\pi/2 - T}^{\pi/2} \sqrt{a^2\cos^2u + b^2\sin^2u} \, du \\ &= a E(m) - a \int_0^{\pi/2 - T} \sqrt{1 - m \sin^2u} \, du \\ &= aE(m) -aE(\pi/2 - T, m) \\ &= a E(m) + a E(T - \pi/2, m) \end{align*}

where E, now with two arguments, is the “incomplete elliptic integral of the second kind” defined by

E(\varphi, m) = \int_0^\varphi \sqrt{1 - m\sin^2(t)} \,dt

It is “incomplete” because we haven’t completed the integral by integrating all the way up to π/2.

To find the length of a general arc whose parameterization runs from t = T0 to t = T1 we find the length from 0 out to T1 and subtract the length from 0 out to T0 which gives us

a E(T_1 - \pi/2, m) - a E(T_0 - \pi/2, m)

The elliptic integrals are so named because they came out of the problem we’re looking at in this post. The integrals cannot be computed in elementary terms, so we introduce new functions that are defined by the integrals. I expand this idea in this post.

NB: We are specifying arcs by the range of our parameterization parameter t, not by the angle from the center of the ellipse. If our ellipse were a circle the two notions would be the same, but in general they are not. The central angle θ and the parameter T are related via

\begin{align*} \theta &= \arctan\left( \frac{b}{a} \tan T \right) \\ T &= \arctan\left( \frac{a}{b} \tan\theta \right) \end{align*}

I wrote about this here.

Python code

Let’s calculate the length of an arc two ways: using our formula and using numerical integration. Note that the Python implementation of the (complete) elliptic integral is ellipe and the implementation of the incomplete elliptic integral is ellipeinc. The “e” at the end of ellipe distinguishes this elliptic integral, commonly denoted by E, from other kinds of elliptic integrals.

    
    from numpy import pi, sin, cos
    from scipy.special import ellipe, ellipeinc
    from scipy.integrate import quad
    
    def arclength(T0, T1, a, b):
        m = 1 - (b/a)**2
        t1 = ellipeinc(T1 - 0.5*pi, m)
        t0 = ellipeinc(T0 - 0.5*pi, m)
        return a*(t1 - t0)
    
    def numerical_length(T0, T1, a, b):
        f = lambda t: (a**2*sin(t)**2 + b**2*cos(t)**2)**0.5
        return quad(f, T0, T1)
        
    T0, T1, a, b = 0, 1, 3, 2
    y, err = numerical_length(T0, T1, a, b)
    ell = arclength(T0, T1, a, b)
    assert(abs(ell - y) < 1e-12)

    T0, T1, a, b = 7, 1, 4, 3
    y, err = numerical_length(T0, T1, a, b)
    ell = arclength(T0, T1, a, b)
    assert(abs(ell - y) < 1e-12)

The tests pass. This increases our confidence that the derivation above is correct.

Trig in hyperbolic geometry

I recently wrote posts about spherical analogs of the Pythagorean theorem, the law of cosines, and the law of sines. The corresponding formulas for hyperbolic space mostly just replace circular functions with hyperbolic functions, i.e. replace sine with hyperbolic sine and cosine with hyperbolic cosine.

Triangles on a sphere or on a hyperbolic space like a pseudosphere have two kinds of angles: the sides come together at an angle, but the sides themselves are angles. By convention, the former are denoted with upper case letters and the latter with lower case letters.

The translation rule from spherical to hyperbolic geometry is to change functions of sides from circular to hyperbolic, but to leave functions of intersection angles alone. Or in typographical terms, put an h on the end of functions of a lower case letter but not functions of an upper case letter.

You can find these formulas, for example, in [1].

Hyperbolic Pythagorean theorem

The Pythagorean theorem on a sphere

cos(c) = cos(a) cos(b).

becomes

cosh(c) = cosh(a) cosh(b)

in hyperbolic geometry. Here you simply change cos to cosh.

Hyperbolic law of sines

The law of sines on a sphere

sin(a) / sin(A) = sin(b) / sin(B) = sin(c) / sin(C).

becomes

sinh(a) / sin(A) = sinh(b) / sin(B) = sinh(c) / sin(C).

in hyperbolic geometry. Here sin becomes sinh, but only before a lower case letter, i.e. when applied to a side.

Hyperbolic law of cosines

The law of cosines on a sphere

cos(c) = cos(a) cos(b) + sin(a) sin(b) cos(C).

becomes

cosh(c) = cosh(a) cosh(b) − sinh(a) sinh(b) cos(C).

Note the negative sign, a small exception to our conversion rule.

We could rewrite the law of cosines on a sphere to be

cos(c) = cos(a) cos(b) + κ sin(a) sin(b) cos(C)

where κ stands for curvature, which equals 1 for a unit sphere. Then our theorem translation rule holds exactly:

cosh(c) = cosh(a) cosh(b) + κ sinh(a) sinh(b) cos(C)

in a hyperbolic space with curvature κ = −1.
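
Here’s a numerical check of the hyperbolic law of cosines (a sketch of my own, not from the post), using the hyperboloid model of the hyperbolic plane:

    # Check cosh(c) = cosh(a) cosh(b) - sinh(a) sinh(b) cos(C) on the hyperboloid
    # model: points with x^2 + y^2 - z^2 = -1, z > 0, and distance
    # d(P, Q) = arccosh(-<P, Q>) where <,> is the Minkowski inner product.
    import numpy as np

    def mink(u, v):
        return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

    def point(r, theta):                 # hyperbolic polar coordinates
        return np.array([np.sinh(r)*np.cos(theta), np.sinh(r)*np.sin(theta), np.cosh(r)])

    def dist(P, Q):
        return np.arccosh(-mink(P, Q))

    def angle(P, Q, R):                  # angle at vertex P of triangle PQR
        u = Q + mink(Q, P)*P             # tangent vectors at P toward Q and R
        v = R + mink(R, P)*P
        return np.arccos(mink(u, v) / np.sqrt(mink(u, u)*mink(v, v)))

    A, B, C = point(0.9, 0.3), point(1.4, 2.0), point(0.6, 4.1)
    a, b, c = dist(B, C), dist(A, C), dist(A, B)   # sides opposite each vertex
    gamma = angle(C, A, B)                         # the angle at vertex C

    lhs = np.cosh(c)
    rhs = np.cosh(a)*np.cosh(b) - np.sinh(a)*np.sinh(b)*np.cos(gamma)
    print(lhs, rhs)                                # the two agree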

Related post

See this post on the Unified Pythagorean Theorem for a version of the Pythagorean theorem that holds in spherical, plane, and hyperbolic geometry.

Maybe there are analogous unified laws of sines and cosines. This is left as an exercise for the reader.

[1] William P. Thurston. Three-Dimensional Geometry and Topology, Volume 1. Princeton University Press, 1997.

Radius, circumference, and area in non-Euclidean geometry

How does the circumference of a circle vary with its radius? What about the area? The answers are simple and familiar in Euclidean geometry, but not as simple or as familiar in non-Euclidean geometry.

This post will look at how circumference and area vary as a function of radius in three spaces: a surface with constant curvature 1 (i.e. a unit sphere), a surface of constant curvature 0 (i.e. a plane), and a surface of constant curvature -1 (a hyperbolic surface). The radius in each case is the distance from the center to the circle as measured on the surface.

Spherical case

In the spherical case, a circle of radius r has circumference

C(r) = 2π sin(r)

and area

A(r) = 2π (1 − cos(r)).

Both formulas are valid for 0 ≤ r ≤ π.

Hyperbolic case

In the hyperbolic case, a circle of radius r has circumference

C(r) = 2π sinh(r)

and area

A(r) = 2π (cosh(r) − 1).

These formulas are valid for 0 ≤ r < ∞.

Plots

Let’s make a couple plots to illustrate the equations above. First, circumference as a function of radius.
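
Here’s a sketch of the kind of code that could make such a plot (mine, not the original):

    # Circumference as a function of radius in the three geometries.
    import numpy as np
    import matplotlib.pyplot as plt

    r = np.linspace(0, np.pi, 200)
    fig, axes = plt.subplots(3, 1, figsize=(6, 8), sharex=True)

    axes[0].plot(r, 2*np.pi*np.sin(r))
    axes[0].set_title("sphere: C(r) = 2π sin(r)")

    axes[1].plot(r, 2*np.pi*r)
    axes[1].set_title("plane: C(r) = 2πr")

    axes[2].plot(r, 2*np.pi*np.sinh(r))
    axes[2].set_title("hyperbolic: C(r) = 2π sinh(r)")

    axes[2].set_xlabel("r")
    plt.tight_layout()
    plt.show()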

The top subplot looks surprising at first. Can circumference decrease when radius increases? Yes, on a sphere. Imagine a circle around the north pole. As we pull that circle down from the pole, the circumference increases until the circle becomes the equator. Past that point, the circumference of the circle decreases as the circle descends further south.

Using the same range of r for all three subplots obscures the fact that the circumference of a circle in the hyperbolic case grows exponentially with the radius.

Next let’s look at area as a function of radius.

In each case, area grows approximately quadratically with radius at first. But again the growth in the hyperbolic case becomes exponential as r increases beyond what is possible in the spherical case.

One last note: In the plane, the ratio of area to circumference is proportional to r. In the hyperbolic case, the same ratio is asymptotically constant, independent of r.

Update

Here’s a plot of C(r) / r.

And here’s a plot of A(r) / r².
