Gaining efficiency by working modulo factors

Suppose m is a large integer that you are able to factor. To keep things simple, suppose m = pq where p and q are distinct primes; everything in this post generalizes easily to the case of m having more than two factors.

You can carry out calculations mod m more efficiently by carrying out the same calculations mod p and mod q, then combining the results. We analyze our numbers into their remainders mod p and mod q, carry out our calculations, then synthesize the results to get back to a result mod m.

The Chinese Remainder Theorem (CRT) says that this synthesis is possible; Garner’s algorithm, the subject of the next post, shows how to compute the result promised by the CRT.

For example, if we want to compute the product xy mod m, we can analyze x and y as follows.

\begin{align*} x &\equiv x_p \pmod p \\ x &\equiv x_q \pmod q \\ y &\equiv y_p \pmod p \\ y &\equiv y_q \pmod q \\ \end{align*}

Then

\begin{align*} xy &\equiv x_py_p \pmod p \\ xy &\equiv x_qy_q \pmod q \\ \end{align*}

and by repeatedly multiplying x by itself we have

\begin{align*} x^e &\equiv x_p^e \pmod p \\ x^e &\equiv x_q^e \pmod q \\ \end{align*}

Now suppose p and q are 1024-bit primes, as they might be in an implementation of RSA encryption. We can carry out exponentiation mod p and mod q, using 1024-bit numbers, rather than working mod m with 2048-bit numbers.

Furthermore, we can apply Euler’s theorem (or the special case Fermat’s little theorem) to reduce the size of the exponents.

\begin{align*} x^e &\equiv x_p^e \equiv x_p^{e \bmod (p-1)}\pmod p \\ x^e &\equiv x_q^e \equiv x_q^{e \bmod (q-1)}\pmod q \end{align*}

Assuming again that p and q are 1024-bit numbers, and assuming e is a 2048-bit number, by working mod p and mod q we can use exponents that are 1024-bit numbers.
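
Here's a quick sanity check of that reduction in Python, using toy primes rather than 1024-bit ones; the specific numbers are made up for illustration.

    # Toy illustration of reducing the exponent mod p-1 and mod q-1.
    p, q = 61, 53          # stand-ins for 1024-bit primes
    m = p * q
    x, e = 42, 12345       # arbitrary base and exponent

    xp, xq = x % p, x % q
    assert pow(x, e, p) == pow(xp, e % (p - 1), p)
    assert pow(x, e, q) == pow(xq, e % (q - 1), q)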

We still have to put our pieces back together to get the value of x^e mod m, but that’s the subject of the next post.

The trick of working modulo factors is used to speed up RSA decryption. It cannot be used for encryption since p and q are secret.

The next post shows that this trick is in fact used in implementing RSA, and that a key file contains the decryption exponent reduced mod p − 1 and mod q − 1.

Group theory and RSA encryption

RSA encryption is a map from numbers mod n to numbers mod n, where n is part of the public key. A message is represented as an integer m and is encrypted by computing

c = m^e mod n

where e is part of the public key. In practice, e is usually 65537 though it does not have to be.

Multiplicative group

As discussed in an earlier post, not all messages m can be decrypted unless we require m to be relatively prime to n. In practice this is almost certainly the case: discovering a message m not relatively prime to n is equivalent to finding a factor of n and breaking the encryption.

If we limit ourselves to messages which can be encrypted and decrypted, our messages come not from the integers mod n but from the multiplicative group of integers mod n: the integers less than and relatively prime to n form a group G under multiplication.

The encryption function that maps m to m^e is an invertible function on G. Its inverse is the function that maps c to c^d where d is the private key. Encryption is an automorphism of G because

(m1 m2)^e = m1^e m2^e mod n.

Totient functions

Euler’s theorem tells us that

m^φ(n) = 1 mod n

for all m in G. Here φ is Euler’s totient function. There are φ(n) elements in G, and so we could see this as a consequence of Lagrange’s theorem: the order of an element divides the order of a group.

Now the order of a particular m might be less than φ(n). That is, we know that if we raise m to the exponent φ(n) we will get 1, but maybe a smaller exponent would do. In fact, maybe a smaller exponent would do for all m.

Carmichael’s totient function λ(n) is the smallest exponent k such that

m^k = 1 mod n

for all m. For some values of n the two totient functions are the same, i.e. λ(n) = φ(n). But sometimes λ(n) is strictly less than φ(n). And going back to Lagrange’s theorem, λ(n) always divides φ(n).

For example, there are 4 positive integers less than and relatively prime to 8: 1, 3, 5, and 7. Since φ(8) = 4, Euler’s theorem says that the 4th power of any of these numbers will be congruent to 1 mod 8. That is true, but it’s also true that the square of any of these numbers is congruent to 1 mod 8. That is, λ(8) = 2.
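
You can verify this directly:

    assert all(n*n % 8 == 1 for n in [1, 3, 5, 7])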

Back to RSA

Now for RSA encryption, n = pq where p and q are large primes and p ≠ q. It follows that

φ(pq) = φ(p) φ(q) = (p − 1)(q − 1).

On the other hand,

λ(pq) = lcm( λ(p), λ(q) ) = lcm(p − 1, q − 1).

Since p − 1 and q − 1 are both even, they share at least a factor of 2, and so

λ(n) ≤ φ(n)/2.

Example

It’s possible that λ(n) is smaller than φ(n) by more than a factor of 2. For example,

φ(7 × 13) = 6 × 12 = 72

but

λ(7 × 13) = lcm(6, 12) = 12.

You could verify this last calculation with the following Python code:

>>> from sympy import gcd
>>> G = set(n for n in range(1, 91) if gcd(n, 91) == 1)
>>> set(n**12 % 91 for n in G)

This returns {1}.

Implementation

The significance of Carmichael’s totient to RSA is that φ(n) can be replaced with λ(n) when finding private keys. Given a public exponent e, we can find d by solving

ed = 1 mod λ(n)

rather than

ed = 1 mod φ(n).

This gives us a smaller private key d which might lead to faster decryption.
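
As a sketch of what this looks like in code, here is a toy example with tiny primes, nothing like realistic key sizes; Python's three-argument pow computes the modular inverse.

    from math import gcd

    p, q = 61, 53                                  # toy primes
    n = p * q
    lam = (p - 1)*(q - 1) // gcd(p - 1, q - 1)     # λ(n) = lcm(p − 1, q − 1)

    e = 17                                         # small public exponent for this toy example
    d = pow(e, -1, lam)                            # modular inverse, requires Python 3.8+

    msg = 1234
    assert pow(pow(msg, e, n), d, n) == msg        # decrypting undoes encrypting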

OpenSSL example

I generated an RSA key with openssl as in this post

    openssl genpkey -out fd.key -algorithm RSA \
      -pkeyopt rsa_keygen_bits:2048 -aes-128-cbc

and read it using

    openssl pkey -in  fd.key -text -noout

The public exponent was 65537 as noted above. I then brought the numbers in the key over to Python.

    from sympy import lcm

    modulus = 0xf227d5...a9
    prime1 = 0xf33514...d9
    prime2 = 0xfee496...51
    assert(prime1*prime2 == modulus)

    publicExponent = 65537
    privateExponent = 0x03896d...91

    phi = (prime1 - 1)*(prime2 - 1)
    lamb = lcm(prime1 - 1, prime2 - 1)
    assert(publicExponent*privateExponent % lamb == 1)
    assert(publicExponent*privateExponent % phi != 1)

This confirms that the private key d is the inverse of e = 65537 modulo λ(pq), not modulo φ(pq).


Hyperellipsoid surface area

Dimension 2

The equation for the perimeter of an ellipse is

p = 4aE(e^2)

where a is the semimajor axis, e is eccentricity, and E is a special function. The equation is simple, in the sense that it has few terms, but it is not elementary, because it depends on an advanced function, the complete elliptic integral of the second kind.

However, there is an approximation for the perimeter that is both simple and elementary:

p \approx 2 \pi \left(\frac{a^{3/2} + b^{3/2}}{2} \right )^{2/3}
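
As a quick check of how good the approximation is, here's a comparison with the exact perimeter for made-up semiaxes a = 2, b = 1. Note that SciPy's ellipe takes the parameter m = e².

    import numpy as np
    from scipy.special import ellipe

    a, b = 2.0, 1.0
    e2 = 1 - (b/a)**2                                  # eccentricity squared
    exact = 4 * a * ellipe(e2)                         # p = 4aE(e²)
    approx = 2*np.pi*((a**1.5 + b**1.5)/2)**(2/3)
    print(exact, approx)                               # agree to about three figures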

Dimension 3

The generalization of an ellipse to three dimensions is an ellipsoid. The surface area of an ellipsoid is neither simple nor elementary. The surface area S is given by

\begin{align*} \varphi &= \arccos\left(\frac{c}{a} \right) \\ k^2 &= \frac{a^2\left(b^2 - c^2\right)}{b^2\left(a^2 - c^2\right)} \\ S &= 2\pi c^2 + \frac{2\pi ab}{\sin(\varphi)}\left(E(\varphi, k)\,\sin^2(\varphi) + F(\varphi, k)\,\cos^2(\varphi)\right) \end{align*}

where E is the incomplete elliptic integral of the second kind and F is the incomplete elliptic integral of the first kind.

However, once again there is an approximation that is simple and elementary. The surface area is approximately

S \approx 4\pi \left( \frac{(ab)^p + (bc)^p + (ac)^p}{3} \right )^{1/p}
where p = 1.6075.
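
Here is a sketch comparing the exact formula with the approximation, assuming the semiaxes are ordered a ≥ b ≥ c with a > c, and using SciPy's convention that the incomplete elliptic integrals take the parameter m = k².

    import numpy as np
    from scipy.special import ellipkinc, ellipeinc

    def area_exact(a, b, c):
        # assumes a >= b >= c > 0 and a > c
        phi = np.arccos(c/a)
        k2 = a**2*(b**2 - c**2)/(b**2*(a**2 - c**2))
        E = ellipeinc(phi, k2)
        F = ellipkinc(phi, k2)
        return 2*np.pi*c**2 + 2*np.pi*a*b/np.sin(phi)*(E*np.sin(phi)**2 + F*np.cos(phi)**2)

    def area_approx(a, b, c, p=1.6075):
        return 4*np.pi*(((a*b)**p + (b*c)**p + (a*c)**p)/3)**(1/p)

    print(area_exact(3, 2, 1), area_approx(3, 2, 1))   # the two values agree closely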

Notice the similarities between the approximation for the perimeter of an ellipse and the approximation for the area of an ellipsoid. The former is the perimeter of a unit circle times a kind of mean of the axes. The latter is the area of a unit sphere times a kind of mean of the products of pairs of axes. The former uses a p-mean with p = 1.5 and the latter uses a p-mean with p = 1.6075. More on such means here.

Dimensions 4 and higher

The complexity of expressions for the surface area of an ellipsoid apparently increases with dimension. The expressions get worse for hyperellipsoids, i.e. n-ellipsoids for n > 3. You can find such expressions in [1]. More on that in just a minute.

Conjecture

It is natural to conjecture, based on the approximations above, that the surface area of an n-ellipsoid is the area of a unit sphere in dimension n times the p-mean of all products of n-1 semiaxes for some value of p.

For example, the surface area of an ellipsoid in 4 dimensions might be approximately

S \approx 2\pi^2 \left( \frac{(abc)^p + (abd)^p + (acd)^p + (bcd)^p}{4} \right )^{1/p}

for some value of p.

Why this form? Permutations of the axes do not change the surface area, so we’d expect permutations not to affect the approximation either. (More here.) Also, we’d expect from dimensional analysis for the formula to involve products of n-1 terms since the result gives n-1 dimensional volume.

Surely I’m not the first to suggest this. However, I don’t know what work has been done along these lines.

Exact results

In [1] the author gives some very complicated but general expressions for the surface area of a hyperellipsoid. The simplest of his expressions involves probability:

S = n \frac{\Gamma\left(\frac{n}{2}\right)}{\Gamma\left(\frac{n+1}{2} \right)} \text{E}\left(\sqrt{q_1^2 X_1^2 + q_2^2 X_2^2 + \cdots + q_n^2 X_n^2 }\right)

where the Xs are independent normal random variables with mean 0 and variance 1/2.

At first it may look like this can be simplified. The sum of normal random variables is a normal random variable. But the squares of normal random variables are not normal, they’re gamma random variables. The sum of gamma random variables is a gamma random variable, but that’s only if the variables have the same scale parameter, and these do not unless all the semiaxes, the qs, are the same.

You could use the formula above as a way to approximate S via Monte Carlo simulation. You could also use asymptotic results from probability to get approximate formulas valid for large n.

 

[1] Igor Rivin. Surface area and other measures of ellipsoids. Advances in Applied Mathematics 39 (2007) 409–427

Solve for ellipse axes given perimeter

I posted some notes this morning on how to find the perimeter of an ellipse given its axes. The notes include a simple approximation, a better but more complicated approximation, and the exact value. So given the semiaxes a and b, the notes give three ways to compute the perimeter p.

If you are given the perimeter and one of the axes, you can solve for the other axis, though this involves a nonlinear equation with an elliptic integral. Not an insurmountable obstacle, but not trivial either.

However, the simple approximation for the perimeter is easy to invert. Since

p \approx 2 \pi \left(\frac{a^{3/2} + b^{3/2}}{2} \right )^{2/3}

we have

a \approx \left( 2\left( \frac{p}{2\pi} \right)^{3/2} -\, b^{3/2}\right)^{2/3}

The same equation holds if you reverse the roles of a and b.

If this solution is not accurate enough, it at least gives you a good starting point for solving the exact equation numerically.
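
Here's a sketch of that approach in Python: invert the approximation to get a starting bracket, then solve the exact equation with a root finder. As before, SciPy's ellipe takes the parameter m = e².

    import numpy as np
    from scipy.special import ellipe
    from scipy.optimize import brentq

    def perimeter(a, b):
        a, b = max(a, b), min(a, b)
        return 4*a*ellipe(1 - (b/a)**2)     # exact perimeter

    def solve_axis(p, b):
        # starting guess from the simple approximation
        a0 = (2*(p/(2*np.pi))**1.5 - b**1.5)**(2/3)
        # refine by solving perimeter(a, b) = p exactly
        return brentq(lambda a: perimeter(a, b) - p, 0.5*a0, 2*a0)

    b = 1.0
    p = perimeter(3.0, b)
    print(solve_axis(p, b))                 # recovers a = 3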

If you’re not given either a or b, then you might as well assume a = b and so both equal p/2π.

Possible and actual football scores

The home team lost in a new way yesterday. The Baltimore Ravens beat the Houston Texans 25 to 9. This was the first time that score has been seen in the NFL.

Possible individual team scores

How many scores are possible? It is possible to score any number of points except 1. You can score 2 points for a safety, so you could score any even number of points via safeties. You can score 3 points for a field goal, so you can score any odd number of points, except 1, by a field goal and as many safeties as necessary.

Actual individual team scores

The highest score in an NFL game was 73 points. No team has scored 67, 68, 69, or 71 points. Otherwise, all possible scores up to 73 have been seen in actual games.

Possible pairs of scores

Assume a maximum possible score of M. Then there are M possible winning scores: 0, 2, 3, 4, …, M.

There are also M possible losing scores, but there are fewer than M² possible total scores since the winning score cannot be less than the losing or tying score.

Out of the M² pairs of two numbers coming from a set of M numbers, M of these pairs are tied, and in half of the rest the first number is higher than the second. So the number of possible scores, with each score bounded by M, is

M + (M² − M)/2 = M(M + 1)/2.

If M = 73, there are 2,701 possible scores.
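
A brute force count in Python confirms this:

    M = 73
    scores = [s for s in range(M + 1) if s != 1]       # 0 and 2 through 73
    pairs = [(w, l) for w in scores for l in scores if w >= l]
    print(len(pairs))                                   # 2701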

Actual pairs of scores

There have been 1,076 unique scores in NFL football. (There were 1,075 until yesterday.) That means there are 1,625 possible scores we haven’t seen yet (assuming the winning team scores no more than 73 points). There are 256 scores that have only been seen once.

The smallest score not yet seen is 4-0.

Here’s a visualization of actual scores. The vertical axis is the winner’s score, from 0 down to 73, and the horizontal axis is the tying or losing score, starting from 0 on the left.

visualization of final scores that occurred in NFL football games


How you define center matters a lot

Earlier I wrote a post showing what happens when you start with an equilateral triangle, then repeatedly subdivide it into smaller and smaller triangles by drawing lines from the centroid (barycenter) to each of the vertices.

I mentioned in that post that I moved the code for finding the center to its own function because in the future I might want to see what happens when you look at different choices of center. There are thousands of ways to define the center of a triangle.

This post will look at 4 levels of recursive division, using the barycenter, incenter, and circumcenter.

Barycenter

The barycenter of a set of points is the point that would be the center of mass if each point had the same weight. (The name comes from the Greek baros for weight. Think barium or bariatric surgery.)

This is the method used in the earlier post.

Incenter

The incenter of a triangle is the center of the largest circle that can be drawn inside the triangle. When we use this definition of center and repeatedly divide our triangle, we get a substantially different image.

Circumcenter

The circumcenter of a triangle is the center of the unique circle that passes through each of the three vertices. This results in a very different image because the circumcenter of a triangle may be outside of the triangle.

By recursively dividing our triangle, we get a hexagon!
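
For reference, here are sketches of the incenter and circumcenter calculations, written as hypothetical drop-in replacements for the center() function in the subdivision code from the earlier post. They use the standard formulas: the incenter weights each vertex by the length of the opposite side, and the circumcenter is the point equidistant from all three vertices.

    import numpy as np

    def incenter(points):
        A, B, C = points
        a = np.linalg.norm(B - C)     # side opposite A
        b = np.linalg.norm(C - A)     # side opposite B
        c = np.linalg.norm(A - B)     # side opposite C
        return (a*A + b*B + c*C)/(a + b + c)

    def circumcenter(points):
        A, B, C = points
        # solve |x - A|² = |x - B|² and |x - A|² = |x - C|² for the point x
        M = 2*np.array([B - A, C - A])
        rhs = np.array([B @ B - A @ A, C @ C - A @ A])
        return np.linalg.solve(M, rhs)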


Belt around an elliptical waist

I just saw a tweet from Dave Richeson saying

I remember as a kid calculating the size difference (diameter) of a belt between each hole. Now I think about it every time I wear a belt.

Holes 1 inch apart change the diameter by about one-third of an inch (1/π). [Assuming people have a circular waistline 🤣]

People do not have circular waistlines, unless they are obese, but the circular approximation is fine for reasons we’ll show below.

Robust approximations

Good simplifications, such as approximating a human waist by a circle, are robust. What matters is not how well a circle approximates a waistline, but how well the conclusion assuming a circular waistline approximates the conclusion for a real waistline.

There’s a joke that physicists say things like “assume a spherical cow.” Obviously cows are not spherical, but depending on the context, assuming a spherical cow may be a very sensible thing to do.

Elliptical waistlines

A human waistline may be closer to an ellipse than a circle. It’s not an ellipse either—it varies from person to person—but my point here is to show that using a different model results in a similar conclusion.

For a circle, the perimeter equals π times the diameter. So an increase of 1 inch in the diameter corresponds to an increase of 1/π in the perimeter, as Dave said.

Suppose we increase the perimeter of an ellipse by 1 and keep the aspect ratio of the ellipse the same. How much do the major and minor axes change?

The answer will depend on the aspect ratio of the ellipse. I’m going to guess that the aspect ratio is maybe 2 to 1. This corresponds to eccentricity e equal to 0.87.

The ratio of the perimeter of an ellipse to its major axis is 2E(e²) where E is the complete elliptic integral of the second kind. (See, there’s a good reason Dave used a circle rather than an ellipse!)

For a circle, the eccentricity is 0, and E(0) = π/2, so the ratio of perimeter to the major axis (i.e. diameter) is π. For eccentricity 0.87 this ratio is 2.42. So a change in belt size of 1 inch would correspond to a change in major axis of 0.41 and a change in minor axis of 0.21.
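
Here's the computation in Python, using SciPy's convention that ellipe takes the parameter m = e²:

    from scipy.special import ellipe

    e = 0.87                     # eccentricity for roughly a 2:1 aspect ratio
    ratio = 2*ellipe(e**2)       # perimeter divided by major axis
    print(ratio)                 # about 2.42
    print(1/ratio)               # change in major axis per inch of belt, about 0.41
    print(0.5/ratio)             # change in minor axis, about 0.21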

Dave’s estimate of 1/3 of an inch is about the average of these two values. If you average the major and minor axes of an ellipse and call that the “diameter,” then Dave’s circular model comes to about the same conclusion as our elliptical model, but avoids having to use elliptic integrals.

Perimeter to average axis ratio

The following graph shows the ratio of perimeter to average axis length for an ellipse. On the left end, aspect 1, we have a circle and the ratio is π. As the aspect ratio goes to infinity, the limiting value is 4.

Even for substantial departures from a circle, such as a 2 : 1 or 3 : 1 aspect ratio, the ratio isn’t far from π.
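
Here is a sketch of the code behind such a graph, with the minor semiaxis fixed at 1 so the aspect ratio equals the major semiaxis:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.special import ellipe

    a = np.linspace(1, 20, 500)          # aspect ratio, with b = 1
    b = 1.0
    m = 1 - (b/a)**2                     # parameter m = e²
    ratio = 4*a*ellipe(m)/(a + b)        # perimeter over average axis length
    plt.plot(a, ratio)
    plt.axhline(np.pi, linestyle=":")    # value for a circle
    plt.axhline(4, linestyle=":")        # limiting value
    plt.xlabel("aspect ratio")
    plt.show()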


Recursive triangle subdivision

The other day I saw that Cliff Pickover had tweeted some images of triangles recursively subdivided by adding a point at the barycenter of each triangle. The images were not what I expected, so I wanted to reproduce the images to look into this further.

Here are the first three steps:

I set the alpha value of the lines to 0.1 so that lines that get drawn repeatedly would appear darker.

The size of the images grows quickly as the number of subdivisions increases. Here are links to the images after five steps: SVG, PNG

Update: Plots using incenter and circumcenter look very different than the plots in this post. See here.

Python code

The code below can be used to subdivide any triangle, not just an equilateral triangle, to any desired depth.

I pulled the code to find the center of the triangle out into its own function because there are many ways to define the center of a triangle—more on that here—and I may want to come back and experiment with other centers.

    import matplotlib.pyplot as plt
    import numpy as np
    from itertools import combinations
    
    def center(points):
        # barycenter: the average of the vertices
        return sum(points)/len(points)
    
    def draw_triangle(points):
        # draw the three edges of the triangle
        for ps in combinations(points, 2):
            xs = [ps[0][0], ps[1][0]]
            ys = [ps[0][1], ps[1][1]]
            plt.plot(xs, ys, 'b-', alpha=0.1)
    
    def mesh(points, depth):
        # split the triangle into three triangles by connecting
        # its center to each pair of vertices, then recurse
        if depth > 0:
            c = center(points)
            for pair in combinations(points, 2):
                pts = [pair[0], pair[1], c]
                draw_triangle(pts)
                mesh(pts, depth-1)
    
    # vertices of an equilateral triangle
    points = [
        np.array([0, 1]),
        np.array([-0.866, -0.5]),
        np.array([ 0.866, -0.5]) ]
    
    mesh(points, 3)
    plt.axis("off")
    plt.gca().set_aspect("equal")
    plt.show()


Justifiable sample size

One of the most common things a statistician is asked to do is compute a sample size. There are well-known formulas for this, so why isn’t calculating a sample size trivial?

As with most things in statistics, plugging numbers into a formula is not the hard part. The hard part is deciding what numbers to plug in, which in turn depends on understanding the context of the study. What are you trying to learn? What are the constraints? What do you know a priori?

In my experience, sample size calculation is very different in a scientific setting versus a legal setting.

In a scientific setting, sample size is often determined by budget. This isn’t done explicitly. There is a negotiation between the statistician and the researcher that starts out with talk of power and error rates, but the assumptions are adjusted until the sample size works out to something the researcher can afford.

In a legal setting, you can’t get away with statistical casuistry as easily because you have to defend your choices. (In theory researchers have to defend themselves too, but that’s a topic for another time.)

Opposing counsel or a judge may ask how you came up with the sample size you did. The difficulty here may be more expository than mathematical, i.e. the difficulty lies in explaining subtle concepts, not in carrying out calculations. A statistically defensible study design is no good unless there is someone there who can defend it.

One reason statistics is challenging to explain to laymen is that there are multiple levels of uncertainty. Suppose you want to determine the defect rate of some manufacturing process. You want to quantify the uncertainty in the quality of the output. But you also want to quantify your uncertainty about your estimate of uncertainty. For example, you may estimate the defect rate at 5%, but how sure are you that the defect rate is indeed 5%? How likely is it that the defect rate could be 10% or greater?

When there are multiple contexts of uncertainty, these contexts get confused. For example, variations on the following dialog come up repeatedly.

“Are you saying the quality rate is 95%?”

“No, I’m saying that I’m 95% confident of my estimate of the quality rate.”

Probability is subtle and there’s no getting around it.


Checksum polynomials

A large class of checksum algorithms have the following pattern:

  1. Think of the bits in a file as the coefficients in a polynomial P(x).
  2. Divide P(x) by a fixed polynomial Q(x) mod 2 and keep the remainder.
  3. Report the remainder as a sequence of bits.

In practice there’s a little more to the algorithm than this, such as appending the length of the file, but the above pattern is at the heart of the algorithm.
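
Here is a minimal sketch of the core step in Python, with a polynomial over GF(2) encoded as an integer whose bit i is the coefficient of x^i. It omits the refinements that real CRC implementations add, such as bit reflection, initial values, and appending the length.

    def poly_mod(p, q):
        # remainder of p(x) divided by q(x), with coefficients mod 2
        dq = q.bit_length() - 1              # degree of q(x)
        while p.bit_length() - 1 >= dq:
            shift = (p.bit_length() - 1) - dq
            p ^= q << shift                  # subtracting mod 2 is XOR
        return p

    # the CRC-32 polynomial discussed below
    Q = sum(1 << k for k in
            [32, 26, 23, 22, 16, 12, 11, 10, 8, 7, 5, 4, 3, 2, 1, 0])

    # a hypothetical message, its bytes read as the coefficients of P(x)
    P = int.from_bytes(b"hello, world", "big")
    print(bin(poly_mod(P, Q)))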

There’s a common misconception that the polynomial Q(x) is irreducible, i.e. cannot be factored. This may or may not be the case.

CRC-32

Perhaps the most common choice of Q is

Q(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^3 + x^2 + x + 1

This polynomial is used in the cksum utility and is part of numerous standards. It’s known as the CRC-32 polynomial, though there are other polynomials occasionally used in 32-bit implementations of the CRC algorithm. And it is far from irreducible, as the following Mathematica code shows. The command

    Factor[x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + 
           x^11 + x^10 + x^8 +  x^7 + x^5 + x^4 + 
           x^3 + x^2 + x + 1, Modulus -> 2]

shows that Q can be factored as

(1 + x)^5 (1 + x + x^3 + x^4 + x^6) (1 + x + x^2 + x^5 + x^6)
(1 + x + x^4 + x^6 + x^7) (1 + x + x^4 + x^5 + x^6 + x^7 + x^8)

(Mathematica displays polynomials in increasing order of terms.)

Note that the factorization is valid when done over the field with 2 elements, GF(2). Whether a polynomial can be factored, and what the factors are, depends on what field you do your arithmetic in. The polynomial Q(x) above is irreducible as a polynomial with rational coefficients. It can be factored working mod 3, for example, but it factors differently mod 3 than it factors mod 2. Here’s the factorization mod 3:

(1 + 2 x^2 + 2 x^3 + x^4 + x^5) (2 + x + 2 x^2 + x^3 + 2 x^4 + x^6 + x^7)
(2 + x + x^3 + 2 x^7 + x^8 + x^9 + x^10 + 2 x^12 + x^13 + x^15 + 2 x^16 + x^17 + x^18 + x^19 + x^20)

CRC-64

The polynomial

Q(x) = x^64 + x^4 + x^3 + x + 1

is known as CRC-64, and is part of several standards, including ISO 3309. This polynomial is irreducible mod 2 as the following Mathematica code confirms.

    IrreduciblePolynomialQ[x^64 + x^4 + x^3 + x + 1, Modulus -> 2]

The CRC algorithm uses this polynomial mod 2, but out of curiosity I checked whether it is irreducible in other contexts. The following code tests whether the polynomial is irreducible modulo the first 100 primes.

    Table[IrreduciblePolynomialQ[x^64 + x^4 + x^3 + x + 1, 
        Modulus -> Prime[n]], {n, 1, 100}]

It is irreducible mod p for p = 2, 233, or 383, but not for any other primes up to 541. It’s also irreducible over the rational numbers.

Since Q is irreducible mod 2, the checksum essentially views its input P(x) as a member of the finite field GF(2^64).
