2025, Nov 17 17:00

Understanding Python’s Arbitrary-Precision Int Limits: Bits vs Bytes, MemoryError and OverflowError in Practice

Python 3 ints are arbitrary precision, not infinite. Learn the real limits, the bits-vs-bytes distinction, and how MemoryError and OverflowError arise, plus a script to probe the ceiling on your own machine.

Python 3 integers are arbitrary precision, but that never meant “infinite.” The practical ceiling is still bound by resources and implementation limits. A frequent pitfall is to equate a machine’s RAM size with the largest integer value you can store directly. The mismatch between bits and bytes, and the difference between “number of bits” and “numeric value,” makes that intuition break down quickly.

What people usually get wrong

The starting point often sounds like this: take a 64-bit system with N gigabytes of RAM, for example 32 GB. Then assume the biggest power of two you can represent is tied to that memory size. The arithmetic gets muddled right there. Thirty-two gigabytes is not 32 · 2^30 bits; it is 32 · 2^30 bytes, or 32 · 8 · 2^30 = 2^38 bits. That’s the capacity in bits, not the value of any integer.

With b bits you can encode values from 0 up to 2^b − 1. If you could dedicate all 2^38 bits exclusively to the binary digits of one integer, the numerical upper bound for that single integer would be 2^(2^38) − 1. It is astronomically large, and at the same time purely theoretical, because you will run into practical limits long before you marshal every last bit of RAM into one Python int.
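
A quick way to double-check that arithmetic is to run it in Python directly; the variable names below are purely illustrative:

import math

capacity_bits = 32 * 8 * 2**30                    # 32 GB of RAM expressed in bits
assert capacity_bits == 2**38                     # a capacity in bits, not an integer value

# The largest value those bits could encode as one contiguous integer is
# 2**capacity_bits - 1. Don't evaluate it; just estimate how big it would be.
print(capacity_bits)                              # 274877906944 bits
print(math.ceil(capacity_bits * math.log10(2)))   # roughly 8.3e10 decimal digits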

Minimal code that exposes the issue

The quickest way to see where things fall apart is to attempt to construct integers by shifting 1 left by a huge number of positions. A single line already demonstrates what happens at extreme sizes:

probe_value = 1 << (1 << 66)

This expression asks for an integer of about 2^66 bits, which is 2^63 bytes, or 8 EiB, of binary digits alone. On a real machine you won’t get far: Python or the operating system refuses long before anything that size materializes.
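
If you would rather not crash an interactive session, the same probe can be wrapped in a try/except. Going by the measurements reported in the next section, a shift this large is rejected by CPython outright, while somewhat smaller ones fail by exhausting memory instead:

try:
    probe_value = 1 << (1 << 66)    # asks for an integer of about 2**66 bits
except OverflowError as err:
    print(f"OverflowError: {err}")  # reported as "too many digits in integer"
except MemoryError:
    print("MemoryError")            # what smaller (but still huge) requests hit first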

What’s actually going on

The confusion stems from mixing the unit of storage capacity with the range of values that capacity can represent. The 32 GB example amounts to 2^38 bits of storage, but 2^38 is a count of bits, not a numeric value. An 8-bit quantity ranges up to 255; by the same logic, a pool of 2^38 bits, if it could be used as a single contiguous integer, would cap out at 2^(2^38) − 1. There is also an implementation side to consider. In practice, large allocations fail earlier due to memory pressure, and there is an additional boundary where Python raises an exception for integers that exceed a certain internal size threshold.
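
The capacity-versus-value relationship is easy to verify on small cases. The helper below is just a sketch, and its name is made up for illustration:

def max_value_for_bits(b: int) -> int:
    """Largest unsigned value that b bits can encode: 2**b - 1."""
    return (1 << b) - 1

print(max_value_for_bits(8))    # 255, the familiar 8-bit ceiling
print(max_value_for_bits(64))   # 18446744073709551615
print((255).bit_length())       # 8, going from a value back to the bits it needs
# max_value_for_bits(2**38) is the theoretical cap for 32 GB of storage, but
# actually materializing a number that large is exactly where the limits below bite.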

Empirical checks illustrate it well. When constructing integers of the form 1 << (1 << p), values at p of 39 and below were created successfully; p from 40 through 65 led to MemoryError; and p of 66 or more produced OverflowError: too many digits in integer. That pattern shows two distinct failure modes: memory exhaustion first, and then a hard implementation limit for even larger targets. There is also a reported bound on the order of 7.5 * (2**63 - 1) bits for a single integer, but you are likely to hit MemoryError long before reaching such a threshold.
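
To see why the switch from MemoryError to OverflowError falls between p = 65 and p = 66, compare the requested bit counts with that reported figure. This sketch assumes the 7.5 * (2**63 - 1) number is a cap on the bit count of a single int, which is taken from the report above rather than derived here:

reported_bit_cap = 15 * (2**63 - 1) // 2   # 7.5 * (2**63 - 1), kept in exact integer math
print((1 << 65) <= reported_bit_cap)       # True:  2**65 bits stays under the cap, so only memory gives out
print((1 << 66) <= reported_bit_cap)       # False: 2**66 bits exceeds it, hence the OverflowError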

A practical way to test on your machine

When theory isn’t enough, probe the boundary in a controlled fashion. The idea is simple: try to build integers with rapidly growing bit lengths and watch for exceptions. Here is a compact driver that does exactly that:

def try_bigints(max_pow: int) -> None:
    """Try to build ints with roughly 2**p bits for p = 0..max_pow and report what happens."""
    for p in range(max_pow + 1):
        try:
            giant = 1 << (1 << p)  # an integer whose bit length is 2**p + 1
            print(f"power={p}: OK ({giant.bit_length()} bits)")
            del giant  # release the memory before the next, larger attempt
        except MemoryError:
            # The allocation failed, but even larger requests may hit the hard
            # digit limit instead, so keep probing rather than stopping here.
            print(f"power={p}: MemoryError")
        except OverflowError as err:
            # CPython rejects the request outright; no point in going further.
            print(f"power={p}: OverflowError: {err}")
            break

# Example invocation
try_bigints(70)

This pattern mirrors the observed outcomes: success at smaller p, MemoryError in a mid range, and OverflowError beyond that. The exact p at which each behavior appears will depend on your environment.

Why this matters

“Arbitrary precision” is a guarantee about semantics, not an infinite resource pool. You can rely on Python to keep growing an integer as needed, but the growth is tethered to memory and to implementation guardrails. Under real workloads, you will hit MemoryError sooner than any theoretical maximum suggested by counting bits in your RAM. And even without exhausting memory, at some point Python will stop with an explicit OverflowError for excessively large integers.

Takeaways

Do not estimate the largest representable Python integer by treating system RAM as if it translated directly into a maximum value. Bytes and bits measure capacity; the values an integer can take scale exponentially with the number of bits. If you need to know your practical ceiling, measure it: use a small probing script to see where MemoryError appears on your setup, and be aware that beyond a certain point you will instead receive OverflowError: too many digits in integer. If you see a figure like 7.5 * (2**63 - 1) cited as an upper bound, remember that in practice you will usually run out of memory long before approaching it.