There’s no direct way to define the sum of an infinite number of terms. Addition takes two arguments, and you can apply the definition repeatedly to define the sum of any finite number of terms. But an infinite sum depends on a theory of convergence. Without a definition of convergence, you have no way to define the value of an infinite sum. And with different definitions of convergence, you can get different values.
In this post I’ll review two ways of assigning a meaning to divergent series that I’ve written about before, then mention a third way.
Asymptotic series
A few months ago I wrote about an asymptotic series solution to the differential equation
You end up with the solution
which diverges for all x. That is, for each x, the partial sums of the series do not get closer to any number that you could call the sum. In fact, the individual terms of the series eventually get bigger and bigger. Surely this is a useless solution, right?
Actually, it is useful if you change your perspective. Instead of holding x fixed and letting n go to infinity, fix a value of n and let x go to infinity. In that sense, the series converges. For fixed n and large x, this gives accurate approximations to the solution of the differential equation.
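Here is a rough numerical sketch of that behavior. The equation and series from the earlier post are not reproduced above, so this uses a classic stand-in, the asymptotic expansion e^x E_1(x) ~ 1/x − 1/x^2 + 2/x^3 − 6/x^4 + …, where E_1 is the exponential integral (scipy's exp1 supplies the reference value). Hold the number of terms fixed and the truncated series becomes more accurate as x grows, even though the full series diverges for every fixed x.

    from math import exp, factorial
    from scipy.special import exp1  # exponential integral E_1, used only as the reference value

    def partial_sum(x, n):
        # First n terms of the divergent asymptotic series sum_k (-1)^k k! / x^(k+1)
        return sum((-1)**k * factorial(k) / x**(k + 1) for k in range(n))

    n = 5                              # hold the number of terms fixed ...
    for x in [2.0, 5.0, 10.0, 50.0]:   # ... and let x grow
        exact = exp(x) * exp1(x)
        approx = partial_sum(x, n)
        print(x, exact, approx, abs(exact - approx))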
Analytic continuation
At the end of a post on Bernoulli numbers I briefly explain the interpretation of the apparently nonsensical equation
1 + 2 + 3 + … = −1/12.
In a nutshell, the Riemann zeta function is defined by a two-step process. First define

ζ(s) = 1 + 1/2^s + 1/3^s + 1/4^s + …
for s with real part strictly bigger than 1. Then define the zeta function for the rest of the complex plane (except the point s = 1) by analytic continuation. If the infinite sum for zeta were valid for s = −1, which it is not, then it would equal 1 + 2 + 3 + …
The analytic continuation of the zeta function is defined at −1, and there the function equals −1/12. So to make sense of the sum of the positive integers, interpret the sum as a sort of pun, a funny way to write ζ(−1).
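As a quick numerical check, here is a small sketch using the mpmath library, which implements the analytic continuation. The series only converges for real part greater than 1, but the continued function is perfectly well defined at s = −1.

    from mpmath import zeta, nsum, inf

    # The series definition converges only for Re(s) > 1 ...
    print(nsum(lambda n: 1/n**2, [1, inf]))   # zeta(2) = pi^2/6 = 1.6449...
    # ... but the analytically continued zeta function is defined at s = -1:
    print(zeta(-1))                           # -0.0833... = -1/12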
p-adic numbers
This is the most radical way to make sense of divergent series: change your number system so that they aren’t divergent!
The sum
1 + 2 + 4 + 8 + …
diverges because the partial sums (1, 3, 7, 15, …) are not getting closer to anything. But you can make the series converge by changing the way you measure distance between numbers. That’s what p-adic numbers do. For any fixed prime number p, define the distance between two numbers as the reciprocal of the largest power of p that divides their difference. That is, numbers are close together if they differ by a large power of p. We can make sense of the sum above in the 2-adic numbers, i.e. the p-adic numbers with p = 2.
The nth partial sum of the series above is 2^n − 1. The 2-adic distance between 2^n − 1 and −1 is 2^(−n), which goes to zero, so the series converges to −1.
1 + 2 + 4 + 8 + … = −1.
Note that all the partial sums are the same, whether in the real numbers or the 2-adics, but the two number systems disagree on whether the partial sums converge.
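Here is a small sketch of that calculation in Python. The helper below is just for illustration; it computes the p-adic distance described above and applies it to the partial sums 2^n − 1 and the proposed limit −1.

    def padic_distance(a, b, p=2):
        # Distance = 1 / p^v, where p^v is the largest power of p dividing a - b.
        if a == b:
            return 0
        diff = abs(a - b)
        v = 0
        while diff % p == 0:
            diff //= p
            v += 1
        return p ** -v

    # Partial sums of 1 + 2 + 4 + 8 + ... are 2^n - 1. In the 2-adic metric
    # they get closer and closer to -1.
    for n in range(1, 11):
        print(n, 2**n - 1, padic_distance(2**n - 1, -1))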
If that explanation went by too quickly, here’s a 15-minute video that expands on the same derivation.
The series 1 + 2 + 4 + 8 + … can also be summed by analytic continuation. It is of the form x^0 + x^1 + x^2 + x^3 + … with x = 2. For |x| < 1 this series converges to 1/(1-x), which suggests 1/(1-2) = -1 as a value for 1 + 2 + 4 + 8 + …
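A quick numerical illustration of that suggestion, in plain Python: the partial sums track 1/(1 - x) only when |x| < 1, but the closed form itself still makes sense at x = 2, where it gives -1.

    # Partial sums of 1 + x + x^2 + ... versus the closed form 1/(1 - x).
    for x in (0.5, 0.9, 2.0):
        partial = sum(x**k for k in range(200))
        print(x, partial, 1 / (1 - x))
    # For x = 0.5 and 0.9 the two values agree; for x = 2 the partial sums
    # blow up while the formula still returns -1.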
It’s kind of funny that the p-adic method and the analytic continuation suggested above both give -1 for the sum of the powers of 2. Especially when you consider that a binary number made up of a growing run of consecutive 1 digits is a partial sum of powers of 2, e.g.
00000001 = 2^0
00000011 = 2^0 + 2^1
00000111 = 2^0 + 2^1 + 2^2
and when the bit field (however wide it is) is finally filled with 1’s (which you might consider a sum “to infinity”), the result represents -1 in two’s complement!
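Here is a small sketch of that observation: build runs of consecutive 1 bits and read the same bit pattern both as an unsigned byte and as a signed two's complement byte.

    import struct

    # Runs of consecutive 1 bits are the partial sums 2^n - 1. An all-ones byte
    # reads as -1 when reinterpreted as a signed two's complement value.
    for n_bits in range(1, 9):
        pattern = (1 << n_bits) - 1                               # 1, 3, 7, ..., 255
        signed = struct.unpack('b', struct.pack('B', pattern))[0]
        print(f"{pattern:08b}  unsigned = {pattern:3d}  signed = {signed:4d}")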
That last example, incidentally, does the “right thing” when you apply a Shanks transformation to it.
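For the curious, here is a minimal sketch of that remark: applying the standard Shanks transformation to the partial sums 1, 3, 7, 15, … gives -1 at every step.

    def shanks(a):
        # Shanks transform: (a[n+1]*a[n-1] - a[n]^2) / (a[n+1] + a[n-1] - 2*a[n])
        return [(a[i+1]*a[i-1] - a[i]**2) / (a[i+1] + a[i-1] - 2*a[i])
                for i in range(1, len(a) - 1)]

    partial_sums = [2**(n+1) - 1 for n in range(10)]   # 1, 3, 7, 15, ...
    print(shanks(partial_sums))                        # every entry is -1.0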
I think it was Carl Bender who famously commented that when faced with a series, the worst thing you can do is sum it up.
There is also the algebraic solution: if x = 1 + 2 + 4 + 8 + …, then x = 1 + 2x, so x = -1.
A much better way to handle this is to avoid higher math entirely and use recursion. You get the same strange number, and you have a straightforward way to recognize the divergent (infinite) value, to solve after a finite number of expansions, or to get the strange integer that the recursively defined value takes when it is expanded into the series.
A “…” is ambiguous. But recursion, which avoids indefinitely large numbers, is not ambiguous at all, particularly because when a substitution is done the terms are sometimes added in pairs, which makes the expression settle on a single value. (I.e., “1 – 1” added in pairs is not the same as adding the terms individually with alternating signs.)
Recursive equations follow the pattern:
S = Limit[S] + Tail[S] # Limit and Tail might BOTH be infinitely large
Limit[S] = S – Tail[S] # The value we get when we ITERATE
A non-divergent example:
S = 1 + 1/2 S
1/2 S = 1
S = 2
= 1 + 1/2 + 1/4 + 1/8 + … + 1/(2^N) S
It is IMPORTANT that it maintain its recursive form!
= sum[ 1/(2^i), 0..N-1 ] + 1/(2^N) S
There is no notion of how big N even is. It doesn’t matter. There is no ambiguous infinite “…”.
But…
S = 1 + 2 S
-S = 1
S = -1
We can forget that we did this, and find it in a different manner….
S = 1 + 2 S
= 1 + 2 + 4 S
= 1 + 2 + 4 + 8 S
= (2^N – 1) + (2^N)S
S – (2^N)S = (2^N – 1)
S(1 – 2^N) = (2^N – 1)
S = (2^N – 1)/(1 – 2^N) = -1
We have independently determined that S = -1, and that the part we named Limit[S] is (2^N – 1), which is finite for any particular N (not necessarily “infinite”!).
The only thing novel here is
Limit[S] = S – Tail[S]
where, for a convergent series, the Tail tends to 0. When both Limit and Tail grow without bound, it is entirely possible that S solves to some odd-looking number. But note that this does NOT support a bald statement of “1 + 2 + 4 + 8 + … = -1” in general. It is crucial that the sum is defined recursively.
(Check out X = 1 – Y; Y = 1 – X. It’s the “1 – 1 + 1 – 1 + … = 1/2” case, where you can actually solve for ANY value you like, as long as X + Y = 1.)
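A quick check of that parenthetical in plain Python: the two equations are really one constraint, so nothing forces X = 1/2.

    # X = 1 - Y and Y = 1 - X are the same constraint, so any X works
    # as long as X + Y = 1. Nothing here singles out X = 1/2.
    for X in (0.0, 0.5, 2.0, -1.0):
        Y = 1 - X
        print(X, Y, X == 1 - Y, Y == 1 - X)   # both checks pass for every choice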
By the way, what would the recursive equation for “1 + 2 + 3 + …” be, so that Limit[S] and Tail[S] are separately defined and S = -1/12? I suspect that this is possible. A recursive definition is far less suspicious, and is amenable to being programmed in code in a completely finite manner.
Spelling the finite example out in full, with Limit[S] and Tail[S] identified:
S = 1 + 1/2 S
S = sum[ 1/(2^i), 0..N-1 ] + 1/(2^N) S
= Limit[S] + Tail[S]
S – 1/(2^N) S = sum[ 1/(2^i), 0..N-1 ]
S(1 – 1/(2^N)) = sum[ 1/(2^i), 0..N-1 ]
As N gets large, Tail[S] goes to 0, and because of that the value of S is just 2.
Because of this, we have a trivial check for convergence alongside the strange recursive value. (It’s important not to just write a “…” and lose the definition of S!)
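Here is a minimal sketch of that finite bookkeeping in Python, using exact rational arithmetic so nothing is infinite anywhere. The helper (hypothetical, just for illustration) expands S = 1 + r·S a fixed number of times, names the two pieces Limit[S] and Tail[S], and solves. The convergent case r = 1/2 gives 2, and the divergent case r = 2 gives the strange value -1, for any number of expansions.

    from fractions import Fraction

    def expand_and_solve(r, n):
        # Expand S = 1 + r*S exactly n times: S = Limit[S] + Tail[S],
        # where Limit[S] = 1 + r + ... + r^(n-1) and Tail[S] = r^n * S.
        limit = sum(r**i for i in range(n))
        tail_coeff = r**n
        # Solve S*(1 - r^n) = Limit[S] for S (valid as long as r^n != 1).
        return limit, limit / (1 - tail_coeff)

    for r in (Fraction(1, 2), Fraction(2)):
        for n in (5, 20):
            limit, s = expand_and_solve(r, n)
            print(f"r = {r}, n = {n}: Limit[S] = {limit}, S = {s}")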
A fourth way to sum a divergent series (with this one we get a kind of infinite-dimensional vector) is Discontinuous Analysis:
https://teachsector.com/limit/
In this course a generalization of the limit is defined for EVERY function at every point, so we can also calculate derivatives, integrals, etc. of arbitrary functions (the result is again an infinite-dimensional vector, not a real or complex number). One convenience is that f'(x) – f'(x) = 0 always holds, with no need to check that f is differentiable.