Tighter bounds in the prime number theorem

The most elementary form of the prime number theorem says that π(x), the number of prime numbers less than x, is asymptotically equal to x / log(x). That’s true, but a more accurate result says π(x) is asymptotically equal to li(x) where

\text{li}(x) = \int_0^x \frac{dt}{\log t}

Five years ago I wrote about a result that was new at the time, giving a bound on |π(x) − li(x)| for x > exp(2000). This morning I saw a result in a blog post by Terence Tao that says

\left| \pi(x) - \text{li}(x) \right| \leq 9.2211\, x\sqrt{\log(x)} \exp\left( -0.8476 \sqrt{\log(x)} \right)

for all x ≥ 2. The result comes from this paper.

The new bound has the same form as the bound from five years ago but with smaller constants.
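As a quick sanity check, the sketch below compares the bound to the actual error at a moderate value of x, assuming SymPy's primepi and mpmath's li are available:

```python
from sympy import primepi
from mpmath import li, sqrt, log, exp

x = 10**6
actual = abs(int(primepi(x)) - li(x))   # |pi(x) - li(x)|, about 130 here
bound = 9.2211 * x * sqrt(log(x)) * exp(-0.8476 * sqrt(log(x)))
print(float(actual), float(bound))
```

The bound is far from tight at this size of x; its value lies in how it holds uniformly for all x ≥ 2.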

The middle binomial coefficient

The previous post contained an interesting observation:

\sqrt{52!} \approx 26! \, 2^{26}

Is it true more generally that

\sqrt{(2n)!} \approx n! \, 2^n

for large n? Sorta, but the approximation gets better if we add a correction factor.

If we square both sides of the approximation and move the factorials to one side, the question becomes whether

\frac{(2n)!}{(n!)^2} = \binom{2n}{n} \approx 4^n

Now the task becomes to estimate the middle coefficient when we apply the binomial theorem to (x + y)^(2n).

A better approximation for the middle binomial coefficient is

\binom{2n}{n} \approx \frac{4^n}{\sqrt{\pi n}}

Now the right hand side is the first term of an asymptotic series for the left. The ratio of the two sides goes to 1 as n → ∞.

We could prove the asymptotic result using Stirling’s approximation, but it’s more fun to use a probability argument.

Let X be a binomial random variable with distribution B(2n, 1/2). As n grows, X converges in distribution to a normal random variable with the same mean and variance, i.e. with μ = n and σ² = n/2. This says for large n,

\text{Prob}(X = n) = \binom{2n}{n} 4^{-n} \approx \frac{1}{\sqrt{2\pi\sigma^2}} = \frac{1}{\sqrt{\pi n}}

The argument above only gives the first term in the asymptotic series for the middle coefficient. If you want more terms in the series, you’ll need to use more terms in Stirling’s series. If we add a couple more terms we get

\binom{2n}{n} = \frac{4^n}{\sqrt{\pi n}} \left(1 - \frac{1}{8n} + \frac{1}{128n^2} + {\cal O}\left(\frac{1}{n^3}\right) \right)

Let’s see how much accuracy we get in estimating 52 choose 26.

from scipy.special import binom
from numpy import pi, sqrt

n = 26
exact = binom(2*n, n)
approx1 = 4**n/sqrt(pi*n)
approx2 = approx1*(1 - 1/(8*n))
approx3 = approx1*(1 - 1/(8*n) + 1/(128*n**2))

for a in [approx1, approx2, approx3]:
    print(exact/a)

This prints

0.9952041409266293
1.0000118903997048
1.0000002776131290

and so we see substantial improvement from each additional term. This isn’t always the case with asymptotic series. We’re guaranteed that for a fixed number of terms, the relative error goes to zero as n increases. For a fixed n, we do not necessarily get more accuracy by including more terms.

Related posts

Combining in-shuffles and out-shuffles

A few days ago I wrote two posts about perfect shuffles. Once you’ve cut a deck of cards in half, an in-shuffle lets a card from the top half fall first, and an out-shuffle lets a card from the bottom half fall first.

Suppose we have a deck of 52 cards. We said in the earlier posts that the order of an in-shuffle I is 52. That is, after 52 in-shuffles, a deck returns to its initial order. And the order of an out-shuffle O is 8.

We can think of I and O as generators of subgroups of order 52 and 8 respectively in the group S of all permutations of 52 cards. I was curious when I wrote the earlier posts how large the group generated by I and O together would be. Is it possible to reach all 52! permutations of the deck by some combination of applying I and O? If not, how many permutations can be generated?

I’ve since found the answer in [1] in a theorem by Diaconis, Graham, and Kantor. I don’t know who Kantor is, but it’s no surprise that a theorem on card shuffles would come from Persi Diaconis and Ron Graham. The theorem covers the case for decks of size N = 2n, which branches into different results depending on the size of n and the value of n mod 4.

For N = 52, the group generated by I and O has

26! × 2^26

elements.

On the one hand, that’s a big number, approximately 2.7 × 10^34. On the other hand, it’s quite small compared to 52! ≈ 8 × 10^67. So while there are a lot of permutations reachable by a combination of in-shuffles and out-shuffles, the chance that a randomly chosen permutation is one of them is vanishingly small.

To put it yet another way, the number of arrangements is on the order of the square root of 52!, a big number, but not big relative to 52!. (Does this pattern

√(52!) ≈ 26! × 2^26

generalize? See the next post.)

Not only does the theorem of Diaconis et al give the order of the group, it gives the group itself: the group of permutations generated by I and O is isomorphic to the group of symmetries of a 26-dimensional octahedron.
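To put numbers on the comparison, here is a quick check using Python’s standard math module:

```python
from math import factorial

order = factorial(26) * 2**26   # order of the group generated by I and O
total = factorial(52)           # number of permutations of 52 cards
print(f"{order:.3e}")           # about 2.7e34
print(f"{total:.3e}")           # about 8.1e67
```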

[1] S. Brent Morris. Magic Tricks, Card Shuffling and Dynamic Computer Memories. MAA 1998.

Primecoin primality test

When I wrote about how Primecoin uses prime chains for proof of work, I left out a detail.

To mine a new Primecoin block, you have to find a prime chain of specified length that starts with a number that is a multiple of the block header hash. According to the Primecoin whitepaper

Another important property of proof-of-work for cryptocurrency is non-reusability. … To achieve this, the prime chain is linked to the block header hash by requiring that its origin be divisible by the block header hash.

So given a hash h, you have to find k such that kh is the origin of a prime chain, where “origin” is defined in footnote [2] here.

Strictly speaking, the primes in a Primecoin prime chain are probable primes. Someone verifying a Primecoin blockchain will be satisfied that the block is authentic if the necessary numbers are prime according to the test used by the Primecoin software. Whether the numbers are actually prime is irrelevant.

Probabilistic primality tests are much more efficient than deterministic tests, and most applications requiring primes, such as RSA encryption, actually use probable primes. If a number is rejected as a probable prime, it’s certainly not a prime. If it is accepted as a prime, it very likely is prime. More on probable primes here.

If you write your own Primecoin mining software using a different primality test, that’s fine as long as you actually find primes. But if at some point you find a pseudoprime, a composite number that passes your primality test, then your block will be rejected unless the pseudoprime also passes the Primecoin software’s primality test.

Primecoin’s primality test

So what is Primecoin’s (probable) primality test? We quote the Primecoin whitepaper:

The classical Fermat test of base 2 is used together with Euler-Lagrange-Lifchitz test to verify probable primality for the prime chains. Note we do not require strict primality proof during verification, as it would unnecessarily burden the efficiency of verification. Composite number that passes Fermat test is commonly known as pseudoprime. Since it is known by the works of Erdös and Pomerance that pseudoprimes of base 2 are much more rare than primes, it suffices to only verify probable primality.

Fermat’s test

The Fermat test with base b says to test whether a candidate number n satisfies

b^(n−1) = 1 mod n.

If n is prime, the equation above will hold. The converse is usually true: if the equation above holds, n is likely to be prime. There are exceptions, such as b = 2 and n = 561 = 3 × 11 × 17.
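A quick check in Python shows 561 fooling the base-2 Fermat test, using SymPy for the (deterministic, at this size) primality check:

```python
from sympy import isprime

n = 561  # = 3 * 11 * 17, a Carmichael number
print(pow(2, n - 1, n))  # 1, so 561 passes the Fermat test with base 2
print(isprime(n))        # False
```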

Euler-Lagrange-Lifchitz test

But what is the Euler-Lagrange-Lifchitz test? The whitepaper links here, a page by Henri Lifchitz that gives results generalizing those of Leonard Euler and Joseph-Louis Lagrange.

Testing 2p + 1

Suppose p is an odd prime. Then the Euler-Lagrange test says that if p = 3 mod 4, then q = 2p + 1 is also prime if and only if

2^p = 1 mod q.

Lifchitz gives three variations on this result. First, if p = 1 mod 4, then q = 2p + 1 is also prime if and only if

2^p = −1 mod q.

Together these two theorems give a way to test with certainty whether 2p + 1, i.e. the next term in a Cunningham chain of the first kind, is prime. However, the test is still probabilistic if we don’t know for certain that p is prime.

Testing 2p − 1

The last two results of Lifchitz are as follows. If p and q = 2p − 1 are prime, then

2^(p−1) = 1 mod q

if p = 1 mod 4 and

2^(p−1) = −1 mod q

if p = 3 mod 4.

The converses of these two statements give probable prime tests for q if we know p is prime.

So we can verify a (probable) prime chain by using Fermat’s test to verify that the first element is (probably) prime and use the Euler-Lagrange-Lifchitz test that the rest of the numbers in the chain are (probably) prime.
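The sketch below illustrates this for a Cunningham chain of the first kind. It is my own illustration, not the Primecoin implementation; the function names are invented. It applies the base-2 Fermat test to the first element and the Euler-Lagrange-Lifchitz criterion to each successor.

```python
def fermat_probable_prime(n):
    # base-2 Fermat test
    return pow(2, n - 1, n) == 1

def verify_cc1(chain):
    """Probabilistically verify a Cunningham chain of the first kind."""
    p = chain[0]
    if not fermat_probable_prime(p):
        return False
    for q in chain[1:]:
        if q != 2*p + 1:
            return False
        # Euler-Lagrange-Lifchitz: the sign of 2^p mod q depends on p mod 4
        want = 1 if p % 4 == 3 else q - 1
        if pow(2, p, q) != want:
            return False
        p = q
    return True

print(verify_cc1([41, 83, 167]))  # True
```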

Related posts

Bi-twin prime chains

I mentioned bi-twin prime chains in the previous post, but didn’t say much about them so as not to interrupt the flow of the article.

A pair of prime numbers are called twins if they differ by 2. For example, 17 and 19 are twin primes.

A bi-twin chain is a sequence of twin primes in which the average of each twin pair is double that of the preceding pair. For example,

5, 7, 11, 13

is a bi-twin chain because (5, 7) and (11, 13) are twin pairs of primes, and the average of the latter pair is twice the average of the former pair.

There is some variation in how to describe how long a bi-twin chain is. We will define the length of a bi-twin chain to be the number of prime pairs it contains, and so the chain above has length 2. The BiTwin Record page describes a bi-twin chain of length k as a chain with k − 1 links.

It is widely believed that there are infinitely many twin primes, but this has not been proven. So we don’t know for certain whether there are infinitely many bi-twin prime chains of length 1, much less chains of longer length.

The bi-twin chain of length three with smallest beginning number is

211049, 211051, 422099, 422101, 844199, 844201

Bi-twin records are expressed in terms of primorial numbers. I first mentioned primorial numbers a few days ago and now they come up again. Just as n factorial is the product of the positive integers up to n, n primorial is the product of the primes up to n. The primorial of n is written n#. [1]

The longest known bi-twin chain starts with

112511682470782472978099, 112511682470782472978101

and has length 9.

We can verify the chains mentioned above with the following Python code.

from sympy import isprime

def bitwin_length(average):
    # count how many doublings of the average yield a twin prime pair
    n = average
    c = 0
    while isprime(n-1) and isprime(n+1):
        c += 1
        n *= 2
    return c

for n in [6, 211050, 112511682470782472978100]:
    print(bitwin_length(n))

[1] There are two conventions for defining n#. The definition used here, and on the BiTwin Record page, is the product of primes up to n. Another convention is to define n# to be the product of the first n primes.

The primorial function in SymPy takes two arguments and will compute either definition, depending on whether the second argument is True or False. The default second argument is True, in which case primorial(n) returns the product of the first n primes.
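A quick illustration of the two conventions, assuming SymPy’s primorial behaves as described above:

```python
from sympy import primorial

print(primorial(4))          # product of the first 4 primes: 2*3*5*7 = 210
print(primorial(7, False))   # product of primes up to 7: also 210
print(primorial(10, False))  # primes up to 10: still 210
```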

Prime chains

The title of this post has a double meaning. We will look at chains in the sense of number theory and in the sense of cryptocurrency, i.e. Cunningham chains and blockchains, that involve prime numbers.

Cunningham chains

A chain of primes is a sequence of prime numbers in which each is almost double its predecessor. That is, the next number after p is 2p ± 1.

In a Cunningham chain of the first kind, the successor of p is 2p + 1. For example,

41, 83, 167.

In a Cunningham chain of the second kind, the successor of p is 2p − 1. For example,

19, 37, 73.

Two questions come up immediately. First, are there infinitely many Cunningham chains? Second, how long can a Cunningham chain be? What is known and what is conjectured are at opposite ends of the spectrum. It is unknown whether there are infinitely many Cunningham chains of length 2, but it is conjectured that there are infinitely many Cunningham chains of all lengths.

According to this page, the longest known Cunningham chains of the first kind has length 17, and the longest known Cunningham chain of the second kind has length 19. We can verify these results with the following Python code.

from sympy import isprime

def chain_length(start, kind):
    p = start
    c = 0
    while isprime(p):
        c += 1
        p = 2*p + kind
    return c

print(chain_length(2759832934171386593519, 1))
print(chain_length(79910197721667870187016101, -1))

Bi-twin chains

A number n is the basis of a bi-twin chain of length k if n − 1 is the start of a Cunningham chain of the first kind of length k and n + 1 is the start of a Cunningham chain of the second kind of length k.

I say more about bi-twin prime chains in the next post.

Primecoin

Primecoin was one of the first cryptocurrencies, coming out four years after Bitcoin. Primecoin is still going, though its market cap is six orders of magnitude smaller than that of Bitcoin.

What’s interesting about Primecoin is that it uses finding prime chains as its proof of work task [1]. To mint a new Primecoin block, you must find a prime chain of the required length whose origin is a multiple of the hash of the block header [2].

Primecoin allows any of the three kinds of prime chains mentioned above: Cunningham chains of the first or second kind, or a bi-twin prime chain. Primecoin adjusts its mining difficulty over time by varying the length of Cunningham or bi-twin chain needed to mint a new block.

There are also cryptocurrencies based on finding prime clusters and prime gaps.

Related posts

[1] Strictly speaking, Primecoin requires finding probable prime chains, as explained here.

[2] The origin of a prime chain is n if the first item in a Cunningham chain of the second kind is n + 1, or if the first item in a Cunningham chain of the first kind or a bi-twin chain is n − 1.

Largest known compositorial prime

I ran across a blog post here that said a new record has been set for the largest compositorial prime. [1]

OK, so what is a compositorial prime? It is a prime number of the form

n! / n# + 1

where n# denotes n primorial, the product of the prime numbers no greater than n.
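For small n we can compute these numbers directly. The helper below is my own, just for illustration, using SymPy’s primorial for n#:

```python
from math import factorial
from sympy import primorial, isprime

def compositorial(n):
    # n!/n# + 1, where n# is the product of primes up to n
    return factorial(n) // primorial(n, False) + 1

print(compositorial(5), isprime(compositorial(5)))  # 5, True
print(compositorial(8), isprime(compositorial(8)))  # 193, True
print(compositorial(7), isprime(compositorial(7)))  # 25, False
```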

The newly discovered prime is

N = 751882!/751882# + 1

It was described in the article cited above as

751882!/751879# + 1,

but the two numbers are the same because there is no prime p with 751879 < p ≤ 751882, i.e.

751879# = 751882#.
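We can confirm there are no primes in that range (751880 and 751882 are even, and 751881 is divisible by 3), e.g. with SymPy:

```python
from sympy import isprime

print([k for k in range(751880, 751883) if isprime(k)])  # []
```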

About how large is N? We can calculate the log of the numerator easily enough:

>>> import scipy.special
>>> scipy.special.loggamma(751883)
9421340.780760147

However, the denominator is harder to compute. According to OEIS we have

n# = exp((1 + o(1)) n)

which would give us the estimate

log(751882#) ≈ 751882.

So

log10(N) = log(N) / log(10) ≈ (9421340 − 751882) / log(10) ≈ 3765097.
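Putting the pieces together in Python, continuing with loggamma from above:

```python
from scipy.special import loggamma
from numpy import log

log_N = loggamma(751883) - 751882  # log(751882!) minus the estimate of log(751882#)
print(log_N / log(10))             # about 3.7651e6
```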

According to this page,

log10(N) = 3765620.3395779

and so our approximation above was good to four figures.

So N has between 3 and 4 million digits, making it much smaller than the largest known prime, which has roughly 41 million digits. Overall, N is the 110th largest known prime.


[1] I misread the post at first and thought it said there was a new prime record (skipping over the “compositorial” part) and was surprised because the number is not a Mersenne number. For a long time now the largest known prime has been a Mersenne prime because there is a special algorithm for testing whether Mersenne numbers are prime, one that is much more efficient than testing numbers in general.


log2(3) and log2(5)

AlmostSure on X pointed out that

log2 3 ≈ 19/12,

an approximation that’s pretty good relative to the size of the denominator. To get an approximation that’s as accurate or better requires a larger denominator for log2 5.

log2 5 ≈ 65/28

The above observations are correct, but are they indicative of a more general pattern? Is log2 3 easier to approximate than log2 5 using rational numbers? There are theoretical ways to quantify this—irrationality measures—but they’re hard to compute.

If you look at the series of approximations for both numbers, based on continued fraction convergents, the nth convergent for log2 5 is more accurate than the nth convergent for log2 3, at least for the first 16 terms. After that I ran out of floating point precision and wasn’t sufficiently interested to resort to extended precision.

Admittedly this is a non-standard way to evaluate approximation error. Typically you look at the approximation error relative to the size of the denominator, not relative to the index of the convergents.

Here’s a more conventional comparison, plotting the log of approximation error against the log of the denominators.
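Here’s a sketch of the convergent computation, using the standard continued fraction recurrence on floating point values, which is fine for the first several terms:

```python
from math import log2

def convergents(x, n):
    """First n continued fraction convergents of x > 0, as (p, q) pairs."""
    p0, q0 = 0, 1
    p1, q1 = 1, 0
    out = []
    for _ in range(n):
        a = int(x)
        p0, p1 = p1, a*p1 + p0   # standard convergent recurrence
        q0, q1 = q1, a*q1 + q0
        out.append((p1, q1))
        x = 1 / (x - a)
    return out

print(convergents(log2(3), 5))  # ends with (19, 12)
print(convergents(log2(5), 3))  # ends with (65, 28)
```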

Continued fraction posts

In-shuffles and out-shuffles

The previous post talked about doing perfect shuffles: divide a deck in half, and alternately let one card from each half fall.

It matters which half lets a card fall first. If the top half’s bottom card falls first, this is called an in-shuffle. If the bottom half’s bottom card falls first, it’s called an out-shuffle.

With an out-shuffle, the top and bottom cards don’t move. Presumably it’s called an out-shuffle because the outside cards remain in place.

An out-shuffle amounts to an in-shuffle of the inner cards, i.e. the rest of the deck not including the top and bottom card.

The previous post had a Python function for doing an in-shuffle. Here we generalize the function to do either an in-shuffle or an out-shuffle. We also get rid of the list comprehension, making the code longer but easier to understand.

def shuffle2(deck, inside = True):
    n = len(deck)
    top = deck[: n//2]
    bottom = deck[n//2 :]
    if inside:
        first, second = bottom, top
    else:
        first, second = top, bottom
    newdeck = []
    for p in zip(first, second):
        newdeck.extend(p)
    return newdeck

Let’s use this code to demonstrate that an out-shuffle amounts to an in-shuffle of the inner cards.

deck = list(range(10))
d1 = shuffle2(deck, False) 
d2 = [deck[0]] + shuffle2(deck[1:9], True) + [deck[9]]
print(d1)
print(d2)

Both print statements produce [0, 5, 1, 6, 2, 7, 3, 8, 4, 9].

I said in the previous post that k perfect in-shuffles will restore the order of a deck of n cards if

2^k = 1 (mod n + 1).

It follows that k perfect out-shuffles will restore the order of a deck of n cards if

2^k = 1 (mod n − 1)

since an out-shuffle of n cards is essentially an in-shuffle of the n − 2 cards in the middle.

So, for example, it only takes 8 out-shuffles to return a deck of 52 cards to its original order. In the previous post we said it takes 52 in-shuffles, so it takes a lot fewer out-shuffles than in-shuffles.

It’s plausible to conjecture that it takes fewer out-shuffles than in-shuffles to return a deck to its initial order, since the former leaves the two outside cards in place. But that’s not always true. It’s true for a deck of 52 cards, but not for a deck of 14, for example. For a deck of 14 cards, it takes 4 in-shuffles or 12 out-shuffles to restore the deck.
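We can check these shuffle counts with SymPy’s n_order, which computes the multiplicative order:

```python
from sympy import n_order

# the order of 2 mod (n+1) gives the in-shuffle count,
# and the order of 2 mod (n-1) gives the out-shuffle count
print(n_order(2, 15))  # 4 in-shuffles for a 14-card deck
print(n_order(2, 13))  # 12 out-shuffles for a 14-card deck
print(n_order(2, 53))  # 52 in-shuffles for a 52-card deck
print(n_order(2, 51))  # 8 out-shuffles for a 52-card deck
```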

Perfect and imperfect shuffles

Take a deck of cards and cut it in half, placing the top half of the deck in one hand and the bottom half in the other. Now bend the stack of cards in each hand and let cards alternately fall from each hand. This is called a rifle shuffle.

Random shuffles

Persi Diaconis proved that it takes seven shuffles to fully randomize a deck of 52 cards. He studied videos of people shuffling cards in order to construct a realistic model of the shuffling process.

Shuffling randomizes a deck of cards due to imperfections in the process. You may not cut the deck exactly in half, and you don’t exactly interleave the two halves of the deck. Maybe one card falls from your left hand, then two from your right, etc.

Diaconis modeled the process with a probability distribution on how many cards are likely to fall each time. And because his model was realistic, after seven shuffles a deck really is well randomized.

Perfect shuffles

Now suppose we take the imperfection out of shuffling. We do cut the deck of cards exactly in half each time, and we let exactly one card fall from each half each time. And to be specific, let’s say the first card will always fall from the top half of the deck. That is, we do an in-shuffle. (See the next post for a discussion of in-shuffles and out-shuffles.) A perfect shuffle does not randomize a deck because it’s a deterministic permutation.

To illustrate a perfect in-shuffle, suppose you start with a deck of these six cards.

A23456

Then you divide the deck into two halves.

A23 456

Then after the shuffle you have the following.

4A5263

Incidentally, I created the images above using a font that included glyphs for the Unicode characters for playing cards. More on that here. The font produced black-and-white images, so I edited the output in GIMP to turn things red that should be red.

Coming full circle

If you do enough perfect shuffles, the deck returns to its original order. This could be the basis for a magic trick, if the magician has the skill to repeatedly perform a perfect shuffle.

Performing k perfect in-shuffles will restore the order of a deck of n cards if

2^k = 1 (mod n + 1).

So, for example, after 52 in-shuffles, a deck of 52 cards returns to its original order. We can see this from a quick calculation at the Python REPL:

>>> 2**52 % 53
1

With slightly more work we can show that fewer than 52 shuffles won’t do.

>>> for k in range(1, 53):
...     if 2**k % 53 == 1: print(k)
...
52

The minimum number of shuffles is not always the same as the size of the deck. For example, it takes 4 shuffles to restore the order of a deck of 14 cards.

>>> 2**4 % 15
1

Shuffle code

Here’s a function to perform a perfect in-shuffle.

def shuffle(deck):
    n = len(deck)
    return [item for pair in zip(deck[n//2 :], deck[:n//2]) for item in pair]

With this you can confirm the results above. For example,

n = 14
k = 4
deck = list(range(n))
for _ in range(k):
    deck = shuffle(deck)
print(deck)

This prints 0, 1, 2, …, 13 as expected.

Related posts