AGI, ASI, A*I – Do we have all we need to get there?

Demis: “[to get to AGI] maybe there’s one or two big innovations needed”

Sam: “everything based off what we see today is that it will happen.”

Ilya: “But is the belief really that if you just 100x the scale, everything would be transformed? I don’t think that’s true.”

Dario: “If you just kind of like eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.”

Jerry: “is [the transformer architecture] the last thing? I’m pretty sure it isn’t.”

For years, leading researchers have been speculating one way or the other about whether better algorithms are needed to get to AGI, artificial general intelligence (however that might be defined).

Around the time of the release of GPT-4, some were saying they felt something more was needed. Since then, we have had several major new advances, like reasoning models and tool use. If we’d said, “we don’t need anything else” three years ago, where would we be now?

For frankness, I like this from John Schulman: “it’s hard to know what we need.” And for strategy, Demis: “you can think of as 50% of our effort is on scaling, 50% of it is on innovation. My betting is you’re going to need both to get to AGI.”

Bridging secrets is hard

Cryptocurrency and privacy don’t fit together as easily as you might expect. Blockchains give you the illusion of privacy via pseudonymization: you don’t put your name on a blockchain, but you do put information on a blockchain that can be used to determine your name. Blockchain analysis can often reveal information that no one intended to share.

This is true even for privacy coins like Monero and Zcash. These coins put less information directly on chain in the clear, but they still have to be used with skill to maintain privacy. And because they can offer more privacy, they are harder to use. For example, an exchange might let you swap between a thousand different currencies, but privacy coins are conspicuously missing from the list of options. Or maybe you can move money into Zcash, but not with privacy, i.e. not into the shielded pool.

The Privacy trends for 2026 report from a16z summarizes the current situation very well.

Thanks to bridging protocols, it’s trivial to move from one chain to another as long as everything is public. But, as soon as you make things private, that is no longer true: Bridging tokens is easy, bridging secrets is hard. There is always a risk when moving in or out of a private zone that people who are watching the chain, mempool, or network traffic could figure out who you are. Crossing the boundary between a private chain and a public one—or even between two private chains—leaks all kinds of metadata like transaction timing and size correlations that makes it easier to track someone.

As is often the case, the weak link is the metadata, not the data per se.

Fortunes and Geometric Means

I saw a post on X recently that said

Bill Gates is closer to you in wealth than he is to Elon Musk. Mind blown.

For round numbers, let’s say Elon Musk’s net worth is 800 billion and Bill Gates’ net worth is 100 billion. Gates is then 700 billion away from Musk, so the statement in the post is true for anyone whose net worth is within 700 billion of Gates’, i.e. anyone worth less than 800 billion.

The reason the statement above is mind-blowing is that in this context you naturally think on a logarithmic scale, even if you don’t know what a logarithm is.

Or to put it another way, we think in terms of orders of magnitude. Musk’s net worth is an order of magnitude greater than Gates’, and Gates’ net worth would be an order of magnitude greater than that of someone worth 10 billion. Musk is a notch above Gates, and Gates is a notch above someone with a net worth around 10 billion, where a “notch” is an order of magnitude.

That is, in this context we think in terms of the geometric mean √(ab) rather than the arithmetic mean (a + b)/2. 100 billion is the geometric mean of 12.5 billion and 800 billion. The geometric mean corresponds to the arithmetic mean on a log scale. And on this scale, Gates is closer to Musk than he is to you, unless you’re worth more than 12.5 billion.
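Here’s a quick numerical check of the arithmetic above, as a short Python snippet. The figure of 100 million for “you” is just an illustrative assumption; net worths are in billions.

from math import log10, sqrt

you, gates, musk = 0.1, 100, 800   # net worths in billions; "you" is an assumption

# Linear scale: Gates is closer to you than he is to Musk
print(abs(gates - you) < abs(musk - gates))                     # True

# Log scale: Gates is closer to Musk than he is to you
print(log10(musk) - log10(gates) < log10(gates) - log10(you))   # True

# 100 is the geometric mean of 12.5 and 800
print(sqrt(12.5 * 800))                                         # 100.0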

Here are three more examples of geometric means.

The size of Jupiter is about midway between that of the earth and that of the sun; it’s roughly the geometric mean. On a linear scale Jupiter’s size is much closer to the earth’s than to the sun’s, but on a logarithmic scale it’s about in the middle. More on that here.

The tritone (augmented fourth) is half an octave. So, for example, an F# is in the middle between a C and the C an octave higher. Its frequency is the geometric mean of the frequencies of the two C’s. More here.

Finally, the human body is a middle-sized object in the universe. From Kevin Kelly:

Our body size is, weirdly, almost exactly in the middle of the size of the universe. The smallest things we know about are approximately 30 orders of magnitude smaller than we are, and the largest structures in the universe are about 30 orders of magnitude bigger.

Proving you know a product

There is a way to prove that you know two numbers a and b, and their product c = ab, without revealing a, b, or c. This isn’t very exciting without more context — maybe you know that 7 × 3 = 21 — but it’s a building block of more interesting zero knowledge proofs, such as proving that a cryptocurrency transaction is valid without revealing the amount of the transaction.

The proof mechanism requires an elliptic curve G and a pairing of G with itself. (More on pairings shortly.) It also requires a generator g of the group structure on G.

The prover takes the three secret numbers and multiplies the generator g by each, encrypting the numbers as ag, bg, and cg. When G is a large elliptic curve, say one with on the order of 2²⁵⁶ points, then computing products like ag can be done quickly, but recovering a from g and ag is impractical. In a nutshell, multiplication is easy but division [1] is practically impossible [2].

The verifier receives ag, bg, and cg. How can he verify that ab = c without knowing a, b, or c? Here’s where pairings come in.

I go more into pairings here, but essentially a pairing is a mapping from two groups to a third group

e : G1 × G2 → GT

such that

e(aP, bQ) = e(P, Q)ᵃᵇ.

In our case G1 and G2 are both equal to the group G above, and the target group GT doesn’t matter for our discussion here. Also, P and Q will both be our generator g.

By the defining property of a pairing,

e(ag, bg) = e(g, g)ᵃᵇ

and

e(cg, g) = e(g, g)ᶜ.

So if ab = c, then e(g, g)ᵃᵇ and e(g, g)ᶜ will be equal, and the verifier accepts if and only if e(ag, bg) = e(cg, g).
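Here is a minimal sketch of that check in Python, assuming the py_ecc library’s BLS12-381 curve. Its pairing is asymmetric, so a is committed in one group and b in another; the idea is the same as with the symmetric pairing described above.

from py_ecc.bls12_381 import G1, G2, multiply, pairing, curve_order

# Prover's secrets
a, b = 7, 3
c = (a * b) % curve_order

# Prover publishes these commitments, not a, b, or c
aG1 = multiply(G1, a)
bG2 = multiply(G2, b)
cG1 = multiply(G1, c)

# Verifier checks e(bG2, aG1) == e(G2, cG1), i.e. e(G2, G1)^(ab) == e(G2, G1)^c
print(pairing(bG2, aG1) == pairing(G2, cG1))   # True when ab = c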


[1] The literature will usually speak of discrete logarithms rather than division. The group structure on an elliptic curve is Abelian, and so it is usually written as addition. If you write the group operation as multiplication, then you’re taking logs rather than dividing. The multiplicative notation highlights the similarity to working in the multiplicative group modulo a large prime.

[2] The computation is theoretically possible but not possible in practice without spending enormous resources, or inventing a large scale quantum computer. This is the discrete logarithm assumption.

How to prove you know a discrete logarithm

In a high school math class, the solution to the equation

bˣ = y

is the logarithm of y in base b. The implicit context of the equation is the real numbers, and the solution is easy to calculate.

The same problem in the context of finite groups is called the discrete logarithm problem, and it is difficult to solve for large groups. In particular, it is impractical to solve when working modulo a sufficiently large prime number or when working over a sufficiently large elliptic curve [1]. In either context, the exponential bˣ can be computed efficiently but its inverse cannot.

Now suppose you want to prove that you know x without revealing x itself. That is, you’d like to construct a zero knowledge proof that you know x. How could you do this?

Here’s one way.

  1. You, the prover, create a random number r, compute t = bʳ, and send the verifier t.
  2. The other party, the verifier, creates a random number c, the challenge, and sends it to you.
  3. You calculate s = r + cx and send s to the verifier.
  4. The verifier checks whether bˢ = t yᶜ, and believes you if and only if equality holds.
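Here’s a minimal sketch of the four steps in Python, working in the multiplicative group modulo a prime. The prime, base, and secret below are toy values chosen for illustration; a real implementation would use a standardized group and larger parameters.

import secrets

p = 2**127 - 1           # a Mersenne prime, small by cryptographic standards
b = 3                    # base
x = 123456789            # the prover's secret
y = pow(b, x, p)         # public value y = b^x mod p

# Step 1: prover commits to a random r
r = secrets.randbelow(p - 1)
t = pow(b, r, p)

# Step 2: verifier issues a random challenge c
c = secrets.randbelow(p - 1)

# Step 3: prover responds with s = r + cx, computed over the integers here
s = r + c * x

# Step 4: verifier checks b^s = t y^c (mod p)
print(pow(b, s, p) == (t * pow(y, c, p)) % p)   # True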

Let’s see why this works.

First of all, what have you revealed to the verifier? Two values: t and s. The value t is the exponential of a random number, and so is itself just another random-looking number. The value s is based on x, and so conceivably you’ve revealed your secret. But the verifier does not know r, only a value computed from r (namely t), and the verifier cannot recover r from t because this would require computing a discrete logarithm. Since r is unknown to the verifier, it masks the term cx in the sum s = r + cx.

Next, why should bˢ = t yᶜ? Because

bˢ = bʳ⁺ᶜˣ = bʳ bᶜˣ = t (bˣ)ᶜ = t yᶜ.

Finally, why should the verifier believe you know x? If you don’t know x but were able to come up with an s that satisfies the verifier, then you were able to compute the discrete logarithm of t yᶜ.


[1] At least without a large-scale quantum computer. Shor’s algorithm could efficiently compute discrete logarithms if only there were a large quantum computer to run it on.

Mills ratio and tail thickness

The Mills ratio [1] is the ratio of the CCDF to the PDF. That is, for a random variable X, the Mills ratio at x is the complementary cumulative distribution function divided by the density function. If the density function of X is f, then

m(x) = \frac{\int_x^\infty f(t)\, dt}{f(x)}

The Mills ratio highlights an important difference between the Student t distribution and the normal distribution.

Introductory statistics classes will say things like “you can approximate a t distribution with a normal if it has more than 30 degrees of freedom.” That may be true, depending on the application. A t(30) distribution and a normal distribution behave similarly in the middle but not in the tails.

Mills ratio plot for t(30) vs normal

The Mills ratio for a t distribution with ν degrees of freedom is asymptotically x/ν, while the Mills ratio for a standard normal distribution is asymptotically 1/x. Increasing ν does make the Mills ratio smaller, but it still eventually grows linearly, whereas the Mills ratio of a normal distribution decays to zero like 1/x.

In general, the Mills ratio is a decreasing function for thin-tailed distributions and an increasing function for fat-tailed distributions. The exponential distribution is in the middle, with constant Mills ratio.
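Here’s a short numerical illustration, assuming SciPy. The normal Mills ratio keeps shrinking as you move into the tail, while the t(30) ratio eventually turns around and grows roughly like x/30.

from scipy.stats import norm, t

def mills(dist, x):
    # Mills ratio: survival function (CCDF) divided by the density
    return dist.sf(x) / dist.pdf(x)

for x in [1, 2, 5, 10, 20]:
    print(x, mills(norm, x), mills(t(30), x))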


[1] Named after John P. Mills, so there’s no apostrophe before the s.

Sigmas and Student

I saw something yesterday saying that the Japanese bond market had experienced a six standard deviation move. This brought to mind a post I’d written eight years ago.

All probability statements depend on a model. And if your probability model says an event six standard deviations from the mean is vanishingly rare, it’s more likely that your model is wrong than that you’ve actually seen something that rare. I expand on this idea here.

How likely is it that a sample from a random variable will be six standard deviations from its mean? If you have in mind a normal (Gaussian) distribution, as most people do, then the probability is on the order of 1 chance in a billion. Six sigma events are not common for any distribution, but they’re not unheard of for distributions with heavy tails.

Let X be a random variable with a Student t distribution and ν degrees of freedom. When ν is small, i.e. no more than 2, the tails of X are so fat that the standard deviation doesn’t exist. As ν → ∞ the Student t distribution approaches the normal distribution. So in some sense this distribution interpolates between fat tails and thin tails.

What is the probability that X takes on a value more than six standard deviations above its mean of 0? That is, what does the function

f(ν) = Prob(X > 6σ)

look like as a function of ν where σ² = ν/(ν − 2) is the variance of X?
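Here is a sketch of f in Python, assuming SciPy’s Student t distribution.

from math import sqrt
from scipy.stats import norm, t

def f(nu):
    # probability of landing more than six standard deviations above the mean
    sigma = sqrt(nu / (nu - 2))
    return t.sf(6 * sigma, df=nu)

print(f(2.5))       # near the fat-tailed end
print(f(30))        # noticeably larger than the corresponding normal probability
print(norm.sf(6))   # the limiting value, roughly 1e-9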

As you’d expect, the limit of f(ν) as ν → ∞ is the probability of a six-sigma event for a normal distribution, around 10⁻⁹ as mentioned above. Here’s a plot of f(ν) for ν > 3. Notice that the vertical axis is on a log scale, i.e. the probability decreases exponentially.

What you might not expect is that f(ν) isn’t monotone. It rises to a maximum value before it decays exponentially. In hindsight this makes sense. As ν → 2+ the variance becomes infinite, and the probability of being infinitely far from the mean is 0. Here’s a plot of f(ν) between 2 and 3.

So six sigma probabilities for a Student t distribution rise from 0 up to a maximum of around 10⁻³, then decrease exponentially, asymptotically approaching a value around 10⁻⁹.


Stylometry

I was reading an article this morning that mentioned a stylometric analysis of a controversial paragraph written by the first-century historian Flavius Josephus. I’ve written several posts that could be called stylometry or stylometry-adjacent, but I haven’t used that word. Here are some posts that touch on the statistical analysis of a text or of an author.

Two cheers for ugly code

Ugly code may be very valuable, depending on why it’s ugly. I’m not saying that it’s good for code to be ugly, but that code that is already ugly may be valuable.

Some of the ugliest code was started by someone who knew the problem domain well but did not know how to write maintainable code. It may implicitly contain information that is not explicitly codified anywhere else. It may contain information the original programmer isn’t even consciously aware of. It’s often easier to clean up the code than to surface the information it contains using any other source.

Another way code gets ugly is undisciplined modification by multiple programmers over a long period of time. In that case the code has proved to be useful. It’s the opposite of Field of Dreams code that you hope will be used if you build it.

Working effectively with legacy code is hard. It may be easier, and certainly more pleasant, to start from scratch. Even so, there may be more to learn from the old code than is immediately obvious.

***

This post was motivated by looking to extend some code I use in my business. It wouldn’t win a beauty pageant, but it’s very useful.

Writing this post reminded me of a post Productive Productivity that I wrote a while back. From that post:

The scripts that have been most useful are of zero interest to anyone else because they are very specific to my work. I imagine that’s true of most scripts ever written.

Prime gaps and Gapcoin

The previous post looked at tightly clustered primes. This post looks at the opposite, large gaps between primes.

Riecoin is a cryptocurrency that uses finding prime clusters as its proof of work task. Gapcoin uses finding prime gaps as its proof of work task.

There’s some nuance to defining prime gaps. It’s trivial to produce a gap of any size. For example, the interval [n! + 2, n! + n] contains n − 1 consecutive integers, none of them prime, since n! + k is divisible by k for 2 ≤ k ≤ n. It is more interesting to find gaps that are large relative to the size of the endpoints. The merit of a gap is the ratio of the gap length to the log of the number at the start of the gap.
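A quick check of the factorial construction, assuming SymPy:

from sympy import factorial, isprime

n = 10
run = [factorial(n) + k for k in range(2, n + 1)]   # n! + 2, ..., n! + n
print(any(isprime(m) for m in run))                 # False: n - 1 consecutive composites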

To be specific, suppose p and q are consecutive primes. The length of the gap between them is defined to be q − p and the merit of that gap is (q − p) / log p. For large p, the average gap between primes near p is log p (a consequence of the prime number theorem), and so the merit function measures how large the gap is relative to what you would expect for the size of p.

The following code will compute the merit function.

>>> from sympy import nextprime, log, N
>>> merit = lambda p: (nextprime(p) - p)/log(p)

Gapcoin adjusts its mining difficulty by adjusting the minimum merit value the miner must search for. Gapcoin miners must find a prime p of the form

p = h × 2ᵃ + b

where h is the SHA256 hash of the previous block in the blockchain and b < 2ᵃ.
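As a toy illustration of this form, the sketch below fixes the high-order bits of the candidate to a hash and searches the low-order bits for a prime. The block data and the exponent a are made-up values, not Gapcoin’s actual parameters.

import hashlib
from sympy import isprime

prev_block = b"example block header"    # placeholder data
h = int.from_bytes(hashlib.sha256(prev_block).digest(), "big")
a = 64                                  # number of free low-order bits (assumed)

b = 0
while not isprime(h * 2**a + b):        # search small b < 2^a
    b += 1

p = h * 2**a + b
print(p >> a == h)                      # True: the high-order bits of p are the hash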

The prime gap with the largest known merit is [p, p + 8350] where

p = 293703234068022590158723766104419463425709075574811762098588798217895728858676728143227

The code

>>> N(merit(p))

shows that the merit is 41.94.

This record was found by the Gapcoin network. I don’t know the backstory, but I presume the mining task wasn’t to find a world record gap. Instead, the miner got lucky and found a much larger gap than necessary.
