Why 0.1 + 0.2 Doesn't Equal 0.3 in Your Code
Source: Dev.to
print(0.1 + 0.2)
You expect to see 0.3. Instead, you get 0.30000000000000004.
Your calculator says 0.3. Excel says 0.3. Your brain says 0.3. But your code disagrees. And this isn’t a Python bug—every programming language does this. JavaScript, Java, C++, Ruby, Go—they all betray basic arithmetic in exactly the same way.
This seemingly tiny quirk has caused multi‑million‑dollar disasters. In 1991, a Patriot missile‑defense system failed to intercept an Iraqi Scud missile because of accumulated floating‑point errors, killing 28 soldiers. Financial systems lose thousands daily to rounding errors in currency calculations. Scientific simulations produce garbage results. Medical‑dosing software miscalculates.
Yet most developers never learn why this happens or how to fix it. They just add random rounding functions until things look right. Let’s end that today.
The Real Problem
Computers don’t actually store decimal numbers the way you think. When you write 0.1 in your code, the computer converts it to binary—ones and zeros. But here’s the kicker: most decimal fractions can’t be represented exactly in binary.
Think about the fraction one‑third. In decimal you write 0.333… forever. No matter how many threes you write, you never capture the exact value. Binary has the same issue with tenths, which is why 0.1 becomes
0.0001100110011001100110011… (repeating forever)
Your computer has finite memory, so it cuts off that infinite sequence. The truncated version is close to 0.1, but not exact. When you add two approximate numbers, their tiny errors combine and become visible.
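You can see the stored approximation directly: converting the float 0.1 to a `Decimal` reveals every digit the hardware actually holds (a quick sketch using Python's built-in `decimal` module):

```python
from decimal import Decimal

# Decimal(float) captures the float's exact binary value, digit for digit
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

That long tail of digits is the closest 64-bit binary fraction to one-tenth; it is what every calculation involving `0.1` really operates on.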
This affects every calculation with decimals. Multiplication, division, subtraction—they all inherit this fundamental approximation. The error compounds with each operation. Run a million calculations in a loop, and your result can drift miles from the truth.
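The drift is easy to reproduce: adding 0.1 to a running total a million times should give exactly 100000, but each addition keeps a tiny rounding error (a minimal sketch in Python):

```python
total = 0.0
for _ in range(1_000_000):
    total += 0.1  # each addition carries a tiny rounding error

print(total)              # close to, but not exactly, 100000.0
print(total == 100000.0)  # False
```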
Why Nobody Warned You
- Blind spot in learning – Beginners learn integers and strings first, so by the time they hit floating‑point math they assume it works like normal math.
- Sparse coverage – Computer‑science courses mention the IEEE‑754 floating‑point standard in passing, but rarely explain the real‑world consequences. Many graduates never learn why financial software treats money as integers.
- Hidden symptoms – Most languages round a float for display, so 0.30000000000000004 often appears as 0.3 in logs. The truth only surfaces when equality checks mysteriously fail or when precision matters.
- Small‑scale invisibility – Adding a few numbers works fine; errors stay invisible until you work with massive datasets, financial calculations, or iterative algorithms where small errors explode.
The Fix That Actually Works
1. Stop comparing floats with exact equality
Never write if x == 0.3 when x comes from floating‑point math. Instead, check if the values are close enough:
def float_equal(a, b, tolerance=1e-9):
    """Return True if a and b differ by less than tolerance."""
    return abs(a - b) < tolerance

result = 0.1 + 0.2
if float_equal(result, 0.3):
    print("Close enough!")
The tolerance (often called epsilon) defines your acceptable margin of error. For most applications, one‑billionth (1e-9) is plenty; adjust based on your precision requirements.
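Python's standard library already ships a robust version of this check: `math.isclose`, which supports both a relative tolerance (scales with the magnitude of the operands) and an absolute one (needed for comparisons near zero):

```python
import math

result = 0.1 + 0.2

# rel_tol scales with magnitude; abs_tol guards comparisons near zero
print(math.isclose(result, 0.3))               # True
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True
```

Prefer the built-in over a hand-rolled helper when you can: a purely absolute tolerance like the one above misbehaves for very large or very small numbers.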
2. Use integers for money
Never use floats for currency. Store the smallest unit in an integer (cents, paise, etc.):
price_cents = 1999                     # $19.99
tax_cents = round(price_cents * 0.07)  # 7% tax, rounded to the nearest cent
total_cents = price_cents + tax_cents
print(f"Total: ${total_cents / 100:.2f}")
All arithmetic is exact; you only convert back to a decimal representation for display.
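A quick sketch of why this matters: summing the same price repeatedly as float dollars drifts, while integer cents stay exact no matter how many operations you run:

```python
# Float dollars: adding $0.10 ten thousand times drifts off the true total
float_total = 0.0
for _ in range(10_000):
    float_total += 0.10
print(float_total)       # not exactly 1000.0

# Integer cents: every addition is exact
cent_total = 0
for _ in range(10_000):
    cent_total += 10
print(cent_total / 100)  # 1000.0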
3. Use a decimal library for exact decimal arithmetic
When you need true decimal precision (accounting, scientific measurements, legal documents), use a decimal type. Python ships one built‑in:
from decimal import Decimal
a = Decimal('0.1')
b = Decimal('0.2')
print(a + b) # Exactly 0.3
Important: Pass numbers as strings. Decimal(0.1) would first create a float (0.10000000000000000555…) and inherit the error.
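The difference between the two constructors is easy to demonstrate:

```python
from decimal import Decimal

# String input: the decimal value you wrote, exactly
print(Decimal('0.1') + Decimal('0.2'))  # 0.3

# Float input: the binary approximation, error included
print(Decimal(0.1) + Decimal(0.2))      # 0.3000... with a nonzero tail
```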
4. Increase precision when necessary
Python’s float is a 64‑bit double, giving about 15‑17 decimal digits of accuracy. If that isn’t enough, the decimal module lets you set arbitrary precision:
from decimal import Decimal, getcontext
getcontext().prec = 50  # 50 significant digits (prec counts digits, not decimal places)
result = Decimal(1) / Decimal(3)
print(result) # 0.33333333333333333333333333333333333333333333333333
Higher precision slows calculations, but sometimes correctness outweighs speed.
When Precision Attacks
Financial systems are obvious victims, but the danger extends further:
- Scientific simulations – Millions of iterations can drift completely off course.
- Machine‑learning pipelines – Accumulated rounding errors degrade model accuracy.
- Game physics engines – Ignoring float precision creates glitches where objects phase through walls.
- GPS, weather forecasting, cryptocurrency exchanges, medical equipment – Small sensor‑reading errors become catastrophic when propagated through calculations.
A famous cautionary tale is the Vancouver Stock Exchange index. Launched in 1982 at 1,000.00 points, it inexplicably fell to 524.811 after 22 months—far below the correct value of roughly 1,098.892—because the index was truncated rather than rounded after each recalculation, and thousands of tiny truncations a day steadily eroded the value.
Takeaway
Floating‑point representation is an approximation, not a bug. Understanding its limits and applying the right tools—tolerant comparisons, integer arithmetic for money, decimal libraries, or higher‑precision types—keeps your programs accurate and reliable.
The Survival Strategy
1. Know when precision matters
- Blog post view counts? Floats are fine.
- Bank‑account balances? Absolutely not.
- If you can’t tolerate any error—financial transactions, legal documents, life‑critical systems—use exact arithmetic from the start.
2. Test your edge cases
- Write unit tests that specifically check calculations with repeating decimals.
- Feed your functions values like 0.1, 0.01, and 0.001.
- Many bugs hide until someone enters an amount like $10.10 instead of a round number.
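A minimal sketch of such tests, using plain assertions and a hypothetical `add_amounts` helper (both names are illustrative, not from any particular codebase):

```python
import math

def add_amounts(amounts):
    """Hypothetical helper under test: sums dollar amounts stored as floats."""
    return sum(amounts)

def test_repeating_decimals():
    # the classic trap: 0.1 + 0.2 — compare with a tolerance, never ==
    assert math.isclose(add_amounts([0.1, 0.2]), 0.3)

def test_awkward_amounts():
    # $10.10-style inputs are exactly where float bugs hide
    assert math.isclose(add_amounts([10.10, 0.05]), 10.15)

test_repeating_decimals()
test_awkward_amounts()
print("all edge-case tests passed")
```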
3. Understand your tools
- Read the documentation for whatever decimal or bignum library your language provides.
- Learn its quirks: some libraries have precision limits, others behave differently with various operations.
- Know these details before you commit to a solution.
4. Watch for accumulation
- When summing thousands of values in a loop, small errors stack up.
- Consider algorithms designed for accurate summation, such as Kahan summation, which tracks and compensates for lost precision.
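Kahan summation fits in a few lines; it carries a separate compensation term that recaptures the low-order bits a naive running total throws away:

```python
def kahan_sum(values):
    """Compensated (Kahan) summation of an iterable of floats."""
    total = 0.0
    compensation = 0.0  # running record of lost low-order bits
    for x in values:
        y = x - compensation             # re-inject the bits lost last round
        t = total + y                    # big + small: low bits of y may vanish...
        compensation = (t - total) - y   # ...so measure exactly what vanished
        total = t
    return total

values = [0.1] * 1_000_000
print(sum(values))        # naive sum: drifts away from 100000.0
print(kahan_sum(values))  # compensated sum: essentially exact
```

For one-off totals, Python's built-in `math.fsum` implements an even stronger exact-summation algorithm and is the easy choice when you just need an accurate result.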
5. Document your decisions
- When you choose integers for money or decimals for precision, leave a comment explaining why.
- Future you—or another developer—will appreciate the reasoning when they’re tempted to “simplify” by switching to floats.
The Takeaway
Computers lie about decimals. They always have, and they always will. This isn’t a bug—it’s how binary arithmetic works at the hardware level.
Once you accept that floats are approximations, not exact values, you’ll write more resilient code.
- The next time you see a weird calculation result, before assuming your code is broken, check if floating‑point precision is the real culprit.
- Then reach for the appropriate tool: epsilon comparisons, integer arithmetic, or decimal libraries.
Your financial software will balance. Your scientific simulations will converge. And you’ll never again wonder why 0.1 + 0.2 betrays everything you learned in elementary school.