Suppose you start with some large number x and want to find a prime number at least as big as x. First you test whether x is prime. Then you test whether x + 1 is prime. Then you test whether x + 2 is prime, and so on until you find a prime. Of course you would just test odd numbers, but how far might you have to look? How much larger than x might your result be?
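The search just described is easy to sketch in Python. This is only an illustration: the `is_prime` here uses trial division, which is fine for modest numbers but far too slow at cryptographic sizes, where a probabilistic test like Miller–Rabin would be used instead.

```python
def is_prime(n):
    """Trial-division primality test; adequate for modest n only."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def next_prime(x):
    """Smallest prime >= x, found by testing x and then successive odd numbers."""
    if x <= 2:
        return 2
    n = x if x % 2 == 1 else x + 1  # an even x > 2 can't be prime, so skip to x + 1
    while not is_prime(n):
        n += 2
    return n
```

For example, `next_prime(90)` returns 97, having skipped over 91 = 7 × 13, 93, and 95.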
Bertrand conjectured, and Chebyshev proved, that you’ll find a prime before you get to 2x. Said another way, the width of the interval you need to explore is no larger than x.
Note that if we just want to find a prime of a certain order of magnitude, the most efficient approach is to try numbers at random until you find one. But if you want to find the next prime you have to search more methodically. Maybe there’s a better way to do this than sequentially testing odd numbers. In any case, this post looks at an upper bound on the result you get, regardless of how you go about finding it.
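The random approach can be sketched as follows, again with a toy trial-division test standing in for a real primality test, so keep `bits` small:

```python
import random

def is_prime(n):
    """Trial-division test; adequate for small n only."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def random_prime(bits):
    """Try random odd numbers of the given bit length until one is prime."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # set top bit, force odd
        if is_prime(n):
            return n
```

By the prime number theorem a random number near x is prime with probability about 1/log x, so the expected number of trials is modest even for large x.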
New results
In [1] the authors prove that the length of the interval to search can be orders of magnitude smaller than x. For example, if
x > e^60

then the interval can be of length x/10^16.
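To put rough numbers on this: e^60 is about 1.14 × 10^26, so at the threshold the guaranteed interval width is about 1.14 × 10^10.

```python
import math

x = math.exp(60)   # the threshold in the theorem, ≈ 1.14e26
width = x / 1e16   # guaranteed search-interval length from [1]
print(f"x ≈ {x:.3e}, interval width ≈ {width:.3e}")
```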
The authors state their results in the form of an interval up to x rather than an interval starting at x, so their theorems immediately apply to starting at x and searching backward for the largest prime no greater than x. But by changing what you call x you could apply the results to looking forward for the smallest prime no smaller than x.
Illustration related to cryptography
Let’s see what the results in [1] might say about primes of the size used in cryptography. For example, RSA and Diffie-Hellman work with primes on the order of 2^2048 or 2^3072.
If x is a 2048-bit integer, then log x is around 1420. The authors in [1] show that if
log x > 1200
then the length of the interval to explore has width less than x/Δ where Δ = 3.637 × 10^263.
Now if x is a 2048-bit number then we need to explore an interval of length less than
2^2048 / (3.637 × 10^263) ≈ 2^2048 / 2^875.5 = 2^1172.5.
So for a 2048-bit number, the length of the interval to search is a 1173-bit number. Said another way, we can turn x into a prime number by changing only the last 1173 bits, leaving the top 875 bits alone.
The number of bits in the length of the interval to search is a little more than half the number of bits in x; we can turn x into a prime by modifying a little more than half its bits. This is a general result for sufficiently large x.
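The bit counts above are easy to verify:

```python
import math

delta_bits = math.log2(3.637e263)   # bits in Δ = 3.637 × 10^263
interval_bits = 2048 - delta_bits   # bits in 2^2048 / Δ
print(f"Δ is a {delta_bits:.1f}-bit number; "
      f"the interval length is a {interval_bits:.1f}-bit number")
```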
The key contribution of [1] is that they’re explicit about what “sufficiently large” means for their theorems. There are other theorems that are not explicit. For example, in [2] the authors prove that for large enough x, the interval to search has width x0.525, but they don’t say how large x has to be.
If 2^2048 is large enough for the theorem in [2] to hold, then the length of our interval could be a 1076-bit integer.
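The 1076-bit figure comes from the exponent in [2]: a number of size x^0.525 with x = 2^2048 has about 2048 × 0.525 = 1075.2 bits, so it fits in 1076 bits.

```python
import math

bits = 2048 * 0.525   # bit length of x^0.525 when x = 2^2048
print(math.ceil(bits))  # smallest whole number of bits that suffices
```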
[1] Michaela Cully-Hugill and Ethan S. Lee. Explicit interval estimates for prime numbers. Mathematics of Computation. https://doi.org/10.1090/mcom/3719. Article electronically published on January 25, 2022
[2] R. C. Baker, G. Harman, and J. Pintz, The difference between consecutive primes. II, Proc. London Math. Soc. (3) 83 (2001), no. 3, 532–562.
This is a hilariously suboptimal bound – the next few primes after 2^2048 are 2^2048 + {981, 1617, 3063}, and the longest known gap between consecutive proven primes is 1113106, between 587*43103#/2310 − 455704 and 587*43103#/2310 + 657402 (which are 18662-digit numbers).
I would be amazed if there weren’t a K such that p_{n+1} – p_n < K log^2 p_n for all n, but I don't think analytic number theory has the tools to prove that yet.
My “I would be amazed” is in fact precisely Cramér’s conjecture; it was proved in 1931 by Westzynthius that (p_{n+1} − p_n)/log p_n can be arbitrarily large, and some recent work by Green and Tao on this sort of thing proves the same with the hilarious denominator log * loglog * loglogloglog / (logloglog)^2.