Digit Reductions

Given an integer, we can reduce it like so: Write it out in base 10, slice it up into as many pieces as we like, and then add up those pieces. For example, we can slice it all the way to individual digits and sum those. The goal is to reduce any input to a single-digit number with as few reductions as possible.
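To make the operation concrete, here's a minimal Python sketch of mine, where a reduction is specified by the cut positions between digits (the helper names are my own, not from the puzzle statement):

```python
def slicesum(n, cuts):
    # One reduction: slice n's digit string at the given cut positions
    # (indices between digits) and sum the pieces.
    s = str(n)
    bounds = [0] + sorted(cuts) + [len(s)]
    return sum(int(s[a:b]) for a, b in zip(bounds, bounds[1:]))

def digit_sum(n):
    # The finest possible slicing: every digit its own piece.
    return slicesum(n, range(1, len(str(n))))

print(slicesum(12345, [2]))  # 12|345 -> 12 + 345 = 357
print(digit_sum(12345))      # 1+2+3+4+5 = 15
```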

It’s tempting to just always slice the number into individual digits, since that reduces it as much as possible. But suppose the number you’re given is like 99999…9991 — a string of a googolplexplexplex 9’s followed by a single 1. If you sum those digits you don’t get too far. You’re roughly knocking off a “plex”. Nothing to sneeze at but we could’ve started with any number of plexes.

Instead we can think ahead a little. Slice off that final 1 and add it to the number that’s just a string of a gazillion 9’s. That gives you an insane number — 10 to the gazillion — but that’s just a 1 followed by a gazillion 0’s. You can reduce that by summing the digits, no problem. Two reductions. But that was a special input. What about in general?
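Scaling the googolplexplexplex down to something a computer can hold, the two-reduction trick plays out like this (a sketch of mine, using a thousand 9's instead of a gazillion):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

n = int("9" * 1000 + "1")     # a thousand 9's followed by a 1

# Reduction 1: slice off the trailing 1 and add it to the rest.
step1 = (n - 1) // 10 + 1     # = 10**1000, a 1 followed by a thousand 0's
assert step1 == 10**1000

# Reduction 2: digit-sum the power of ten.
print(digit_sum(step1))       # -> 1, a single digit in two reductions
```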

Show that every natural number can be reduced to a single-digit number in at most three reductions.

Source: User TotalVerb on Manifold

PS: See also Inserting Plus Signs and Adding.

Solution

(All the words here are by me, Daniel Reeves, but the key insights are all from GPT-5.2-Thinking, which I believe came up with them on its own, though the argument turns out to be fairly close to the proof published in “Partition and Sum Is Fast”. Thanks also to GPT-5.2-Thinking for finding that paper, in a separate session in which I forgot to ask it not to search the web.)

1. Terminology and the smallest reduction

First, call the result of any single reduction a slicesum (pronounced like “slice ‘em”) since it’s a slicing and then the sum of those slices. A standard digit-sum is the finest possible slicing and smallest possible slicesum.

2. The easy case

Let \(S\) be the digit-sum of the input \(N\). If \(S<100\) then 3 digit-sum reductions suffice: \(N \to\) (at most 99) \(\to\) (at most 18) \(\to\) (at most 9). The case of \(S = 100\) is trivial — one more digit-sum does it — so assume from here on that \(S > 100\).

(This doesn’t matter but the same argument tells us that \(S=199\) is the first \(S\) where two more digit-sums aren’t enough. \(N \to 199 \to 19 \to 10 \to 1\). Four reductions. For \(S<199\), that 19 for reduction 2 is at most 18, so one more digit-sum and we’re done.)
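The arithmetic in that aside is easy to confirm by brute force (my own sketch):

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def digit_sum_steps(n):
    # How many repeated digit-sums until n is a single digit.
    steps = 0
    while n > 9:
        n = digit_sum(n)
        steps += 1
    return steps

# Every digit-sum S below 199 finishes in at most two more digit-sums;
# 199 is the first that needs three: 199 -> 19 -> 10 -> 1.
assert all(digit_sum_steps(s) <= 2 for s in range(1, 199))
print(digit_sum_steps(199))  # -> 3
```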

3. Bumping up the slicesum

Consider the list of digits in \(N\). The baseline slicesum is the sum of those individual digits, \(S\). By merging 2 adjacent digits, \(a\) and \(b\), the slicesum increases by \((10a+b)-(a+b) = 9a \le 81\).

4. Two ways to pair

Consider a string of digits like \(abcde\). You could pair them like \((ab)(cd)e\) and increase the slicesum by \(9a+9c\). You could alternatively pair them like \(a(bc)(de)\) and increase the slicesum by \(9b+9d\). The final digit, \(e\), can only ever contribute itself to any slicesum. Let’s set that digit aside.

Our choice of pairings yields two subsets forming a bipartition of the digits: \(\{a, c\}\) vs \(\{b, d\}\). Again, with final digit \(e\) excluded. More generally, this is the even-indexed digits vs the odd-indexed digits, still ignoring the final digit.

Next we’ll pick whichever one of those two ways to pair yields the bigger increase in the slicesum.
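Here's a quick sketch of mine of the two candidate pairings and the slicesum bump each one buys, with concrete digits standing in for \(abcde\):

```python
def pairing_increase(digits, offset):
    # 9 times the sum of the digits that land in tens places when
    # pairing adjacent digits starting from index `offset`.
    return 9 * sum(digits[i] for i in range(offset, len(digits) - 1, 2))

digits = [9, 7, 5, 3, 1]                    # stands in for abcde
inc0 = pairing_increase(digits, 0)          # (97)(53)1: adds 9*(9+5) = 126
inc1 = pairing_increase(digits, 1)          # 9(75)(31): adds 9*(7+3) = 90
print(max(inc0, inc1))                      # take the bigger bump: 126

# Cross-check against the actual slicesums: the plain digit-sum is 25.
assert 97 + 53 + 1 == 25 + inc0
assert 9 + 75 + 31 == 25 + inc1
```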

5. Bounding the reachable slicesums

Let \(A\) be the sum of the even-indexed digits (not counting the final digit, \(e\)) and let \(B\) be the sum of the odd-indexed digits (also not counting \(e\)). Note that \(S = A+B+e\). We choose the larger of \(A\) and \(B\), which is minimized when \(A=B\). In that case, \(S = A+A+e\) implying that \(A = B = (S-e)/2\). Again, we’re picking the larger of \(A\) and \(B\) so that’s at least \((S-e)/2\).

(Sanity check: If \(A\) and \(B\) were both less than \((S-e)/2\) then \(S = A+B+e < (S-e)/2 + (S-e)/2 + e = S-e + e = S\). Since \(S\) can’t be less than \(S\), either \(A\) or \(B\) is at least \((S-e)/2\).)

As we saw in step 3, we increase the slicesum by 9 times the sum of the digits that end up in the tens places of our pairings. So we’re increasing the slicesum by at least \(9(S-e)/2\). And since \(e\) is at most 9, we’re increasing the slicesum by at least \(9(S-9)/2\).

That means that, by picking between those two slicings, we can achieve a slicesum of at least \(S + 9(S-9)/2 = (11S-81)/2 = 5.5S-40.5 = 5S + S/2 - 40.5\) which, since \(S > 100\) by step 2, is more than \(5S + 50 - 40.5 = 5S + 9.5\). So we can achieve a slicesum, \(U\), strictly greater than \(5S\).

(Again, we’re either putting all the odd-indexed digits or the even-indexed digits in the tens places of the slicings. As an example with an even number of digits, we’re either slicing like \((ab)(cd)(ef)\) or like \(a(bc)(de)f\) which increases the slicesum by either \(9(a+c+e)\) or \(9(b+d)\). All the reasoning above still holds, this time with the final digit \(f\) left out.)

6. A panoply of reachable slicesums

By doing those pairings one at a time in order from left to right, we get a sequence of attainable slicesums. Recall from step 3 that each slicesum in that sequence is at most 81 more than the previous one:

\(S, S_1, S_2, \dots, S_k, U\)
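Generating that sequence for a concrete number (my own sketch, pairing left to right with the offset-0 pairing) confirms that consecutive terms differ by at most 81:

```python
def slicesum_sequence(n, offset):
    # Attainable slicesums from applying the pairings of step 4 one merge
    # at a time, left to right: the sequence S, S_1, ..., S_k, U.
    digits = [int(d) for d in str(n)]
    seq = [sum(digits)]
    for i in range(offset, len(digits) - 1, 2):
        seq.append(seq[-1] + 9 * digits[i])   # each merge adds 9a <= 81
    return seq

seq = slicesum_sequence(987654, 0)
print(seq)  # -> [39, 120, 183, 228]; the last entry is (98)(76)(54)
assert all(0 <= b - a <= 81 for a, b in zip(seq, seq[1:]))
```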

7. Picking a target slicesum

Let \(p\) be the exponent of the greatest power of 10 at or below \(S\). By step 2, \(p\) is at least 2 because \(10^2 = 100\). E.g., if \(S=12345\) then \(p = 4\) because \(10^4 = 10000\).

Let \(T = 2\times 10^p\) if \(S < 2\times 10^p\) and \(10^{p+1}\) otherwise. This is our target, a number starting with a 1 or a 2 followed by all zeros. Note that \(S < T\) in both cases.
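As a small sketch (mine) of the target computation:

```python
def target(S):
    # Step 7: find the largest power of ten at or below S, then round up
    # to 2*10**p or 10**(p+1).
    p = len(str(S)) - 1            # 10**p <= S < 10**(p+1)
    return 2 * 10**p if S < 2 * 10**p else 10**(p + 1)

print(target(12345))   # -> 20000  (since 12345 < 2*10**4)
print(target(54321))   # -> 100000 (since 54321 >= 2*10**4)
```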

8. Establishing the target is in range

We’ve established (by defining \(p\) this way) that \(10^p \le S < 10^{p+1}\). We now consider two cases: \(S < 2\times 10^p\) and \(S \ge 2\times 10^p\).

If \(10^p \le S < 2\times 10^p\) then \(2\times 10^p \le 2S < 4\times 10^p\). Since \(T = 2\times 10^p\) in this case, the first part of that inequality can be written \(T \le 2S\). From step 5 we know \(U > 5S\) is reachable, so \(T \le 2S < 5S < U\) and \(T\) sits strictly below a reachable slicesum.

If \(2\times 10^p \le S < 10^{p+1} = 10\times 10^p\) then \(T = 10\times 10^p\). Given that \(S \ge 2\times 10^p\) we multiply both sides by 5 and have \(5S \ge 10\times 10^p = T\). Again, knowing we can reach a slicesum strictly greater than \(5S\) means we can reach a slicesum strictly greater than \(T\).

9. Getting within 81 of the target

Finally, by step 6, we can find a slicesum within 81 of \(T\).

In the sequence of reachable slicesums, \(T\) isn’t necessarily one of them but is strictly greater than \(S\) and strictly less than \(U\):

\(S, S_1, S_2, \dots, S_k, U\)

And since each of those reachable slicesums is at most 81 more than the previous one, whichever one comes immediately after \(T\) is at most 81 more than \(T\).

Since \(T\) looks like \(1000\dots 000\) or \(2000\dots 000\), the worst case in terms of digit sum for the reachable slicesum immediately after \(T\) is a 2 followed by ~\(p\) zeros followed by a 7 and a 9. That’s at most a digit-sum of 18.

So that’s one reduction to make a number with almost all zeros, one more (a digit-sum) to get at most 18, and a third to turn that into at most 9. \(\square\)
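As a sanity check on the whole argument, here's a sketch of mine that implements the strategy end to end (easy case, target \(T\), best pairing offset, first reachable slicesum at or above \(T\)) and counts the reductions used:

```python
import random

def digit_sum(n):
    return sum(int(d) for d in str(n))

def reductions_needed(n):
    # My implementation of the proof's strategy; counts reductions used.
    S = digit_sum(n)
    if S <= 100:
        # Easy case (step 2): repeated digit-sums finish in <= 3 steps.
        count = 0
        while n > 9:
            n = digit_sum(n)
            count += 1
        return count
    digits = [int(d) for d in str(n)]
    p = len(str(S)) - 1                      # 10**p <= S < 10**(p+1)
    T = 2 * 10**p if S < 2 * 10**p else 10**(p + 1)
    for offset in (0, 1):                    # the two pairings of step 4
        seq = [S]
        for i in range(offset, len(digits) - 1, 2):
            seq.append(seq[-1] + 9 * digits[i])  # each merge adds 9a <= 81
        if seq[-1] > T:
            M = min(x for x in seq if x >= T)    # within 81 of T (step 9)
            break
    count = 1                                # reduction 1 lands on M
    while M > 9:
        M = digit_sum(M)                     # reductions 2 and 3
        count += 1
    return count

assert all(reductions_needed(random.randrange(1, 10**60)) <= 3
           for _ in range(1000))
print("every sampled number reduced in at most three reductions")
```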



Fully AI-generated proof

Everything below was generated by GPT-5.2-Thinking in a temporary chat (no access to my chat history) and with the instructions not to use web search, which I believe it complied with. We now believe this proof is fundamentally flawed.

Let a “reduction” mean: choose a partition of the base-10 digit string into contiguous blocks, interpret each block as an integer, and replace the number by the sum of those block-integers.

I will give a complete proof of the following stronger (and essentially sharp) statement: every natural number reaches a single digit in at most three reductions, and if its digital root lies in \(\{3,\dots,9\}\) then two reductions suffice.

This implies the requested “at most three” for all \(n\).

Throughout, write \(n\) in decimal as a digit string \(d_1d_2\cdots d_m\) with \(d_1\neq 0\), and let \[ S(n)=d_1+d_2+\cdots+d_m \] be the sum of digits. Let \(\operatorname{dr}(n)\in\{1,\dots,9\}\) be the digital root (with \(\operatorname{dr}(n)=9\) when \(9\mid n\)).

Two basic facts used repeatedly:

Fact 1 (mod 9 invariance). Any reduction preserves the value modulo \(9\). Reason: replacing a block \(10a+b\) by \(a+b\) changes the total by \(9a\), a multiple of \(9\). So every reduction keeps the same residue mod \(9\), hence keeps the same digital root.
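Fact 1 is easy to verify exhaustively for a small number (my own sketch; a reduction is specified by its cut positions between digits):

```python
from itertools import combinations

def slicesum(n, cuts):
    # Slice n's digit string at the given positions and sum the pieces.
    s = str(n)
    bounds = [0] + sorted(cuts) + [len(s)]
    return sum(int(s[a:b]) for a, b in zip(bounds, bounds[1:]))

n = 123456
positions = range(1, len(str(n)))
# Every one of the 2^5 possible slicings preserves the value mod 9.
assert all(slicesum(n, cuts) % 9 == n % 9
           for r in range(len(str(n)))
           for cuts in combinations(positions, r))
print(f"all slicings of {n} are congruent to {n % 9} mod 9")
```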

Fact 2 (two-digit “collapse”). If \(x\) is a two-digit number \(x=10a+b\), then one reduction can map \(x\mapsto a+b\le 18\), and then one more reduction maps \(a+b\) to a single digit (since any number \(\le 18\) becomes a digit in one digit-sum step). So: any two-digit number reaches a single digit in at most 2 steps, and reaches its digital root in at most 2 steps.

So the problem reduces to: in 1 reduction, get from an arbitrary \(n\) to a number with at most two digits (or at least to a structured number that then collapses quickly). The following lemma does that except in the digital root \(1,2\) cases, where we need one extra step.


Key lemma: one reduction to a two-digit number for digital roots \(3\)–\(9\)

Lemma

If \(\operatorname{dr}(n)=r\) with \(r\in\{3,4,5,6,7,8,9\}\), then there exists a one-step reduction \(n\mapsto N\) such that \(N\) is a two-digit number with digit sum \(\le 18\) and digital root \(r\). Consequently, \(n\) reaches a single digit in at most 2 reductions.

Proof

Let \(S=S(n)\). By Fact 1, every reduction result must be \(\equiv n \pmod 9\), hence \(\equiv r \pmod 9\). In particular \(S\equiv r\pmod 9\).

Consider the simple family of reductions obtained by choosing a cut position \(k\) and summing the two blocks: \[ d_1\cdots d_k \;+\; d_{k+1}\cdots d_m. \] Call that result \(T_k\).

Here is the explicit constructive step that forces a two-digit target:

Write \(S = r + 9q\) for some integer \(q\ge 0\). Because \(r\ge 3\), we have \(r+9\le 18\). So it suffices to reach either \(r\) (already one digit) or \(r+9\) (a two-digit number \(\le 18\)) in one reduction.

I claim: there is always a reduction \(n\mapsto S\) (split into single digits) and then, if \(S\ge r+9\), there is a single reduction from \(n\) directly to \(r+9\).

Construction: since \(S=r+9q\), if \(q\ge 1\) we want a reduction whose output has digit sum \(r+9\). Use the operation “merge two adjacent digits \((a|b)\) into the block \(10a+b\)”. This increases the total sum by exactly \(9a\) compared to keeping them separate. Therefore, by choosing merges appropriately, you can increase the “all digits separate” sum \(S\) by any combination of multiples of \(9\) coming from leading digits of merged pairs. In particular, you can realize an output congruent to \(S\) mod \(9\) but lying in the interval \([r, r+9]\). Since that interval contains exactly one number congruent to \(r\) mod \(9\), namely \(r\) itself, and one number congruent to \(r\) mod \(9\) but \(\ge r\) and \(\le r+9\), namely \(r+9\), you can realize either \(r\) or \(r+9\) as an output. (This is the standard “adjust by multiples of 9 via merges” argument; the available increments \(9a\) include \(9\) whenever there is a digit \(a\ge 1\), which always occurs in a natural number.)

Thus \(n\) reduces in one step to either \(r\) or \(r+9\le 18\). In both cases, in at most one more digit-sum step you reach the single-digit \(r\).

Therefore for \(r\in\{3,\dots,9\}\), at most 2 reductions suffice. ∎


Digital roots 1 and 2: at most 3 steps

The obstruction is that the “small target” congruent to \(r\) mod 9 may be \(r\) itself or \(r+9\), but for \(r=1,2\) we have \(r+9=10,11\), and \(11\) is not immediately collapsible to \(2\) in one step. So we route through \(10\) (for root 1) and \(20\) (for root 2), each of which collapses in one more step.

Case \(r=1\)

We show: in 1 reduction, \(n\) can be mapped to a two-digit number with digit sum \(10\) (i.e. one of \(19,28,37,46,55,64,73,82,91\)). Then: \[ (\text{digit sum }=10)\ \to\ 10\ \to\ 1 \] is 2 more steps, total 3.

Why can we reach a two-digit number with digit sum \(10\) in one reduction? As before, merging increases the total by multiples of \(9\), so all one-step outputs are \(\equiv 1\pmod 9\). Among the two-digit numbers, those \(\equiv 1\pmod 9\) are exactly \(10,19,28,\dots,91\). Except for \(10\), they all have digit sum \(10\). Using the same “adjust by multiples of 9 via merges” reasoning, you can realize some two-digit output \(\equiv 1\pmod 9\) that is \(\ge 19\) unless \(n\) is already very small; in the small cases you can check directly that \(\le 3\) steps works.

So \(n\to N\) where \(N\in\{19,28,\dots,91\}\), then \(N\to 10\to 1\). Total \(\le 3\).

Case \(r=2\)

Similarly, route through a number with digit sum \(20\), then: \[ (\text{digit sum }=20)\ \to\ 20\ \to\ 2 \] is 2 further steps.

The existence of a one-step reduction to some number with digit sum \(20\) uses the same “multiples of 9 adjustment” but with target digit sum \(20\equiv 2\pmod 9\). Concretely, any number with digit sum \(20\) has digital root \(2\), and then two digit-sum reductions finish.

Thus \(r=2\) is also \(\le 3\).

(If you want a fully explicit family: many numbers map in one step to a 3-digit number whose digits sum to 20, e.g. by choosing a cut \((A|B)\) so that \(A+B\) has digit sum 20; this is always possible because by shifting cut points you can change the carry structure and hence change digit sum by steps of 9 while keeping the mod-9 class fixed. The endpoint is guaranteed by the pigeonhole/adjust-by-9 argument used above.)


Conclusion

Therefore every natural number reduces to a single-digit number in at most three reductions.


Note on rigor

The core mechanism is: one-step outcomes differ from the digit-sum baseline by multiples of 9 obtained by merges, hence within any short interval of length 9 there is at most one feasible residue class, and you can force landing in a chosen short interval by selecting merges that contribute a needed multiple of 9. If you want, I can write that merge-selection step as an explicit induction on the number of digits (showing you can realize every multiple of 9 up to a bound), which removes any remaining handwaving about “you can realize some two-digit representative of the residue class.”