2026, Jan 13 11:00

Why Python float shows 10000 for 0.1 × 100000 while Decimal reveals a residual, and how 55 digits come from a 53-bit float


Why does 0.1 multiplied by 100000 print as a perfectly clean 10000.0000000000000000000000 with float formatting, yet come out as 10000.00000000000055511151231 with Decimal, exposing a residual of 0.00000000000055511151231? And how can you print 55 decimal digits from a float that only has 53 bits of precision? Here is what actually happens.

Reproducing the case

Consider this minimal script:

from decimal import Decimal
x = 0.1
M = 100000

val = M * x

print(' %.22f ' % val)
print(Decimal(0.1) * 100000)

And the separate precision probe:

print("%.55f" % 0.1)

What is really stored and multiplied

On implementations where Python's float follows IEEE-754 binary64, the source literal 0.1 is not representable exactly. It is converted to the nearest representable value, which is 0.1000000000000000055511151231257827021181583404541015625.
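One way to verify that stored value yourself is to convert the float to an exact ratio with the standard-library Fraction type, which preserves the binary64 value bit-for-bit:

```python
from fractions import Fraction

# Fraction(float) is exact: it recovers the stored significand and exponent.
f = Fraction(0.1)
print(f)                        # 3602879701896397/36028797018963968

# The denominator is 2**55, confirming a binary significand scaled by a
# power of two rather than an exact decimal 0.1.
print(f.denominator == 2**55)   # True
```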

Now look at the real-number product of 100000 and that stored value. In real arithmetic, 100000 × 0.1000000000000000055511151231257827021181583404541015625 equals 10000.00000000000055511151231257827021181583404541015625. That product is not representable in binary64 either; the two representable values bracketing it are 10000 and 10000.000000000001818989403545856475830078125. The floating-point multiplication rounds to the nearer of the two, which is exactly 10000. Formatting the float with many decimal places therefore still shows a clean 10000.0000000000000000000000.
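This rounding can be probed directly. On Python 3.9 and later, math.nextafter and math.ulp expose the neighboring representable values; a small sketch:

```python
import math

# The float product rounds to exactly 10000.0.
print(100000 * 0.1 == 10000.0)               # True

# The next representable binary64 value above 10000 is one ulp away.
print(math.nextafter(10000.0, math.inf))     # 10000.000000000002
print(math.ulp(10000.0) == 2.0**-39)         # True
```

Since the exact real product sits well below the midpoint between these two neighbors, round-to-nearest lands on 10000 exactly.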

Why Decimal shows a residual

In the Decimal branch, 0.1 is first turned into the same binary float value as above. Then Decimal(0.1) converts that float to a Decimal value without changing it, preserving 0.1000000000000000055511151231257827021181583404541015625. The subsequent multiplication by 100000 is performed under Decimal arithmetic. By default, Python uses 28 digits of precision for Decimal operations, so the product is rounded to 10000.00000000000055511151231. This exposes the tiny residual that was rounded away to exactly 10000 in the binary64 operation.
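The 28-digit default can be confirmed, and changed, through decimal.getcontext(); a sketch using a local context so the global default is untouched:

```python
from decimal import Decimal, getcontext, localcontext

print(getcontext().prec)        # 28 by default
print(Decimal(0.1) * 100000)    # 10000.00000000000055511151231

# Raising the precision locally keeps more digits of the exact product.
with localcontext() as ctx:
    ctx.prec = 50
    print(Decimal(0.1) * 100000)
```

With 28 digits, the 5 integer digits leave 23 fractional digits, which is exactly where the printed result cuts off.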

About printing 55 decimal digits from a 53-bit float

It may seem contradictory that you can request 55 decimal digits from a value with only 53 bits of binary precision. The key is that the decimal conversion works from the exact stored binary value. Trailing zeros beyond the stored significand do not change the value; treating the binary number as if it had more trailing zero bits is harmless and still represents the same number. When you ask for more decimal digits, the conversion keeps emitting digits consistent with that fixed binary value. You are not gaining more information than is in the 53-bit significand; you are seeing a longer decimal expansion of the same number.
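You can see this cap concretely. The stored value of 0.1 is an odd integer over 2**55, so its exact decimal expansion terminates after exactly 55 fractional digits, and requesting more only pads zeros:

```python
# 55 fractional digits reproduce the stored value of 0.1 exactly ...
s55 = "%.55f" % 0.1
print(s55)   # 0.1000000000000000055511151231257827021181583404541015625

# ... and asking for more digits only appends zeros: no new information.
s60 = "%.60f" % 0.1
print(s60.endswith("00000"))   # True

# The 55-digit string round-trips to the same float.
print(float(s55) == 0.1)       # True
```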

Putting it all together

The multiplication 100000 × 0.1 looks perfect in binary64 because the exact real product lies closer to the representable value 10000 than to its next neighbor, so it rounds to exactly 10000. Printing more decimal places cannot reveal an error that was already rounded away. In contrast, the Decimal calculation preserves the stored float value and performs the multiplication under a 28-digit Decimal context, which yields 10000.00000000000055511151231.

Practical takeaway

Some floating-point operations will round to deceptively “nice” results, even when the inputs are not exactly representable. Increasing the number of printed decimal places does not manufacture new precision; it merely extends the decimal expansion of the same stored binary value. If you need to inspect how a non-representable input influences a computation, converting that input’s actual binary64 value to Decimal and then performing arithmetic under Decimal’s precision will make the residual visible, as shown above.

Complete demonstration

from decimal import Decimal
x = 0.1
M = 100000

prod = M * x

# Float path: rounds to the nearest binary64, which is exactly 10000 here
print(' %.22f ' % prod)

# Decimal path: preserves the float's decimal expansion, then multiplies
# with the default 28-digit Decimal precision
print(Decimal(0.1) * 100000)

# Long decimal expansion of the stored binary64 value of 0.1
print("%.55f" % 0.1)

Understanding this behavior matters because it explains why some results appear perfectly rounded while others expose tiny discrepancies. It also helps set the right expectations: formatted output length does not equal precision, and different numeric models round at different steps according to their own rules.