
Why 0.1 + 0.2 ≠ 0.3

Type 0.1 + 0.2 into JavaScript and you get 0.30000000000000004. This isn't a bug — it's a fundamental consequence of how computers store decimal numbers.

Open a browser console and type 0.1 + 0.2. You’ll see:

0.30000000000000004

This surprises almost everyone the first time they encounter it. Is this a JavaScript bug? A rounding error? Bad programming?

None of the above. It’s a fundamental property of how computers represent decimal numbers in binary — and it affects every major programming language, not just JavaScript.

How Computers Store Numbers

Computers store numbers in binary (base 2). Integers are straightforward: 5 in binary is 101, 12 is 1100, and so on. But fractions require a different approach.

Most programming languages use IEEE 754 floating-point arithmetic, a standard published in 1985. In this system, a number like 0.1 is stored in 64 bits: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the significant digits (the “mantissa”).
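That 1 / 11 / 52 bit split can be inspected directly in JavaScript. The sketch below (with an illustrative helper, `bitsOf`, assumed here) writes a number into an 8-byte buffer and slices out the three fields:

```javascript
// Decompose a double into its IEEE 754 fields using a DataView.
// setFloat64 writes big-endian by default, so bit 0 is the sign bit.
function bitsOf(x) {
  const buf = new ArrayBuffer(8);
  new DataView(buf).setFloat64(0, x);
  const bits = [...new Uint8Array(buf)]
    .map(b => b.toString(2).padStart(8, "0"))
    .join("");
  return {
    sign: bits[0],               // 1 bit
    exponent: bits.slice(1, 12), // 11 bits (biased by 1023)
    mantissa: bits.slice(12),    // 52 bits
  };
}

const f = bitsOf(0.1);
console.log(f.exponent); // "01111111011" = 1019, i.e. 2^(1019 - 1023) = 2^-4
console.log(f.mantissa); // "1001100110011001100110011001100110011001100110011010"
```

Notice that the mantissa ends in ...1010: the infinitely repeating pattern had to be rounded off at bit 52.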

The problem: 0.1 cannot be represented exactly in binary.

The Binary Fraction Problem

In decimal, some fractions terminate (1/4 = 0.25) and others repeat (1/3 = 0.333…). The same is true in binary — but different fractions cause problems.

In binary, 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001. Any fraction whose denominator is a power of 2 terminates cleanly. But 1/10 — which is 0.1 in decimal — cannot be written as a terminating binary fraction, because 10 is not a power of 2.

Instead, 0.1 in binary is:

0.0001100110011001100110011001100110011001100110011001101...

It repeats forever, just like 1/3 repeats in decimal. Since a computer only has 64 bits to work with, it has to cut off this infinite sequence at some point — and that truncation introduces a tiny error.
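One way to see the repetition for yourself is to generate binary digits by repeated doubling, the base-2 analogue of long division (`toBinaryFraction` is an illustrative helper, not a built-in function):

```javascript
// Expand numerator/denominator into binary digits by repeated doubling:
// double the remainder; if it reaches the denominator, emit a 1 and subtract.
function toBinaryFraction(numerator, denominator, digits) {
  let out = "0.";
  let n = numerator;
  for (let i = 0; i < digits; i++) {
    n *= 2;
    if (n >= denominator) {
      out += "1";
      n -= denominator;
    } else {
      out += "0";
    }
  }
  return out;
}

console.log(toBinaryFraction(1, 4, 4));   // "0.0100" -- terminates
console.log(toBinaryFraction(1, 10, 20)); // "0.00011001100110011001" -- repeats
```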

What Actually Gets Stored

When you write 0.1 in JavaScript (or Python, or Java, or C), the computer stores the closest 64-bit binary approximation:

0.1 → actually stored as 0.1000000000000000055511151231257827021181583404541015625

0.2 → actually stored as 0.200000000000000011102230246251565404236316680908203125

When you add these two imperfect approximations, the result is:

0.3000000000000000444089209850062616169452667236328125

Displayed to a reasonable precision: 0.30000000000000004

Not a bug — just the predictable result of adding two approximate numbers.
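These exact stored values are easy to reveal in JavaScript: every double has a finite decimal expansion, and a long toFixed prints it out:

```javascript
// Every binary fraction terminates in decimal, so asking toFixed for
// enough digits reveals the exact value the computer is working with.
console.log((0.1).toFixed(55));
// "0.1000000000000000055511151231257827021181583404541015625"

console.log((0.1 + 0.2).toFixed(55));
// slightly more than 0.3: the sum of the two approximations
```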

Why Doesn’t It Always Show Up?

If 0.1 is always imprecise, why does 0.2 + 0.3 display as 0.5 instead of something weird?

Sometimes the errors cancel: 0.2 is stored slightly above its true value and 0.3 slightly below it, by exactly offsetting amounts. Sometimes the imprecision is so small that when the number is converted back to a string for display, it rounds to the “right” answer. Programming languages apply smart rounding when printing numbers: they show the shortest decimal string that uniquely identifies the floating-point value.

0.2 + 0.3 happens to land exactly on 0.5. 0.1 + 0.2 doesn’t land on 0.3.
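Both behaviors are visible in a console. In the sketch below, 0.2 + 0.3 is a case where the two stored errors offset exactly:

```javascript
// 0.2 is stored slightly high and 0.3 slightly low, by the same amount,
// so the computed sum lands exactly on the double for 0.5.
console.log(0.2 + 0.3); // 0.5

// 0.1 and 0.2 are both stored slightly high, so the errors add up.
console.log(0.1 + 0.2); // 0.30000000000000004

// The printed string is the shortest one that round-trips to the same double.
console.log(Number("0.30000000000000004") === 0.1 + 0.2); // true
```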

How Calculators Handle It

CalcNow uses JavaScript’s native floating-point arithmetic — which means, in theory, 0.1 + 0.2 could produce 0.30000000000000004. In practice, CalcNow applies output rounding: before displaying a result, it rounds to a reasonable number of significant digits, which eliminates the visible artifact in most everyday calculations.

Most consumer calculators — physical and digital — do the same. They compute in floating-point and then round the display. The underlying imprecision exists; the display hides it.

Spreadsheets like Excel use a similar approach. If you type =0.1+0.2 in Excel, it displays 0.3 — but if you format the cell with 20 decimal places, you’ll see the floating-point imprecision hiding underneath.

When It Matters

For casual arithmetic, floating-point imprecision is irrelevant. But there are domains where it creates real problems:

Financial applications. When you’re calculating money, a tiny floating-point error can compound across millions of transactions. This is why financial software typically uses fixed-point arithmetic or decimal types that represent numbers exactly.
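One common approach, sketched here, is to hold money as integer cents: doubles represent every integer up to 2^53 - 1 exactly, so the arithmetic never touches binary fractions.

```javascript
// Integer cents avoid binary fractions entirely; doubles represent
// every integer up to Number.MAX_SAFE_INTEGER (2^53 - 1) exactly.
const a = 10; // $0.10 as cents
const b = 20; // $0.20 as cents
const total = a + b;

console.log(total === 30);             // true -- no drift
console.log((total / 100).toFixed(2)); // "0.30", converted only for display
```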

Equality comparisons. In code, the expression 0.1 + 0.2 === 0.3 evaluates to false. This has caused countless bugs in software. The standard fix is to check whether the difference is smaller than some tiny epsilon: if (Math.abs(result - 0.3) < 0.000001).
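A slightly more robust variant scales the tolerance with the operands' magnitude, using Number.EPSILON (the gap between 1 and the next representable double). The helper name nearlyEqual is illustrative:

```javascript
// Tolerance-based comparison: treat a and b as equal if they differ by
// less than a few ulps, scaled by the larger magnitude involved.
function nearlyEqual(a, b, eps = Number.EPSILON) {
  return Math.abs(a - b) <= eps * Math.max(Math.abs(a), Math.abs(b), 1);
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```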

Scientific computing. In simulations and numerical analysis, floating-point errors can accumulate over millions of operations. Entire subfields of mathematics (numerical stability, condition numbers) exist to understand and manage this.

The Decimal Alternative

IEEE 754 also defines decimal floating-point formats that can represent 0.1 exactly. Python’s decimal module provides exact base-10 arithmetic, as does SQL’s DECIMAL type (typically as fixed-point decimal). These are slower but precise.
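JavaScript has no built-in decimal type, but the idea can be sketched with fixed-scale BigInt arithmetic. This toy version (toFixedPoint and format are illustrative names; positive values only, 20 digits of scale) keeps every decimal digit exact:

```javascript
// Toy fixed-point decimal: store values as BigInt multiples of 10^-20.
const SCALE = 10n ** 20n;

function toFixedPoint(s) {
  // Parse a decimal string like "0.1" into a scaled BigInt.
  const [intPart, fracPart = ""] = s.split(".");
  const frac = (fracPart + "0".repeat(20)).slice(0, 20);
  return BigInt(intPart) * SCALE + BigInt(frac);
}

function format(x) {
  const intPart = x / SCALE;
  const frac = (x % SCALE).toString().padStart(20, "0").replace(/0+$/, "");
  return frac ? `${intPart}.${frac}` : `${intPart}`;
}

// BigInt addition is exact, so no binary rounding ever occurs.
console.log(format(toFixedPoint("0.1") + toFixedPoint("0.2"))); // "0.3"
```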

Some languages ship better tools out of the box: Swift includes a Decimal type in its Foundation library, and Rust’s ecosystem offers exact decimal arithmetic through third-party crates. But IEEE 754 binary floating-point remains the default in most languages because it’s fast and implemented directly in CPU hardware.

The Simple Summary

  • Computers store most numbers in binary floating-point, which can’t exactly represent fractions like 0.1
  • 0.1 + 0.2 is slightly more than 0.3 because both values are stored as close-but-not-exact binary approximations
  • This is correct IEEE 754 behavior, not a bug
  • Calculators and spreadsheets mask this by rounding before display
  • For money or exact decimal math, use a decimal-aware numeric type