Why is the floating-point number for 0.1 larger than 0.1, but adding it 10 times gives a result smaller than 1.0? [Part 1]

In a previous article, "Why is 0.1 displayed as 0.1 when printed, even though 0.1 cannot be accurately represented by a floating point number?", I found that converting 0.1 to a floating-point number gives a value slightly larger than 0.1. If that is the case, the positive errors should accumulate as you keep adding 0.1. For example, I expected that adding 0.1 ten times would give a result slightly larger than 1. Let's try it.

total = 0

for i in range(10):
    total += 0.1

print(total)
#=> 0.9999999999999999

Unexpectedly, the output was a number smaller than 1. In this article, I investigate why this happens.

Environment

This article uses Python 3.7.

[Premise] Floating point number

In this article, the term "floating point number" hereafter refers to "IEEE 754 double precision".

A floating-point number represents a value in the following form, with the bit fields `sign`, `exp`, and `frac` stored in that order.

(-1)^{sign} \times 2^{exp - 1023} \times (1 + frac \times 2^{-52})

The name and number of bits of each field are as follows.

symbol   name                 bits
sign     sign                 1
exp      exponent             11
frac     fraction (mantissa)  52

When 0.1 is expressed as a floating point number

(Figure to_float_point_0_1.png: IEEE 754 representation of 0.1, created with https://tools.m-bsys.com/calculators/ieee754.php)

The binary representation of 0.1 as a floating-point number is:

sign     = 0
exponent = 01111111011
fraction = 1001100110011001100110011001100110011001100110011010

Converted to decimal, the fields are:

sign     = 0
exponent = 1019
fraction = 2702159776422298

The stored exponent has a bias of 1023 added to it, so subtracting the bias gives an actual exponent of -4.
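
We can check these fields directly in Python. Here is a minimal sketch that uses the standard struct module to look at the raw bits of 0.1 and split them into the three fields (the variable names are my own):

import struct

# View the 64 raw bits of 0.1 as an unsigned integer
bits = struct.unpack('>Q', struct.pack('>d', 0.1))[0]

sign = bits >> 63                  # top 1 bit
exp = (bits >> 52) & 0x7FF         # next 11 bits
frac = bits & ((1 << 52) - 1)      # bottom 52 bits

print(sign)                  #=> 0
print(format(exp, '011b'))   #=> 01111111011
print(format(frac, '052b'))  #=> 1001100110011001100110011001100110011001100110011010
print(exp, frac)             #=> 1019 2702159776422298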

Converting this floating-point value of 0.1 to a decimal number gives

0.1000000000000000055511151231257827021181583404541015625
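
Plugging the decoded fields into the formula above reproduces exactly this value. Here is a quick check with Fraction and Decimal (assuming the sign/exponent/fraction values shown earlier):

from fractions import Fraction
from decimal import Decimal

sign, exp, frac = 0, 1019, 2702159776422298

# (-1)^sign * 2^(exp - 1023) * (1 + frac * 2^-52), evaluated exactly with Fraction
value = (-1) ** sign * Fraction(2) ** (exp - 1023) * (1 + Fraction(frac, 2 ** 52))

print(value == Fraction(0.1))  #=> True (Fraction(0.1) is the exact value of the float 0.1)
print(Decimal(0.1))            #=> 0.1000000000000000055511151231257827021181583404541015625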

The values while adding 0.1 each time

The floating-point value of 0.1 is slightly larger than 0.1, so if adding it 10 times does not reach 1, a reversal must be happening somewhere along the way. Let's look at the intermediate results. If you pass a floating-point number to Decimal and then print it, the exact decimal value of that floating-point number is displayed.

from decimal import Decimal

total = 0
d_total = Decimal('0')

for i in range(10):
    total += 0.1
    d_total += Decimal('0.1')
    print(f'{d_total}|{Decimal(total)}')

Here is the execution result.

Decimal Calculation result
0.1 0.1000000000000000055511151231257827021181583404541015625
0.2 0.200000000000000011102230246251565404236316680908203125
0.3 0.3000000000000000444089209850062616169452667236328125
0.4 0.40000000000000002220446049250313080847263336181640625
0.5 0.5
0.6 0.59999999999999997779553950749686919152736663818359375
0.7 0.6999999999999999555910790149937383830547332763671875
0.8 0.79999999999999993338661852249060757458209991455078125
0.9 0.899999999999999911182158029987476766109466552734375
1.0 0.99999999999999988897769753748434595763683319091796875

From 0.1 to 0.4, the floating-point totals are larger than the exact decimal sums, 0.5 is exactly right, and from 0.6 to 1.0 they are smaller than the exact decimal sums.

Checking how much is actually added each time 0.1 is added

Now, let's check how much is actually added each time 0.1 is added. Here is the program. Python's Decimal defaults to 28 significant digits, so we increase it to a sufficient size beforehand.

from decimal import Decimal, getcontext

# Change the precision of Decimal to 64 significant digits
getcontext().prec = 64

total = prev = 0
d_total = Decimal('0')

for i in range(10):
    total += 0.1
    d_total += Decimal('0.1')
    print(f' |{Decimal(total) - Decimal(prev)}')
    print(f'{d_total}|{Decimal(total)}')
    prev = total

Here is the execution result. Excluding the header, the odd-numbered lines show the floating-point totals and the even-numbered lines show the difference between successive totals.

Decimal Calculation result / difference
0.1 0.1000000000000000055511151231257827021181583404541015625
0.1000000000000000055511151231257827021181583404541015625
0.2 0.200000000000000011102230246251565404236316680908203125
0.100000000000000033306690738754696212708950042724609375
0.3 0.3000000000000000444089209850062616169452667236328125
0.09999999999999997779553950749686919152736663818359375
0.4 0.40000000000000002220446049250313080847263336181640625
0.09999999999999997779553950749686919152736663818359375
0.5 0.5
0.09999999999999997779553950749686919152736663818359375
0.6 0.59999999999999997779553950749686919152736663818359375
0.09999999999999997779553950749686919152736663818359375
0.7 0.6999999999999999555910790149937383830547332763671875
0.09999999999999997779553950749686919152736663818359375
0.8 0.79999999999999993338661852249060757458209991455078125
0.09999999999999997779553950749686919152736663818359375
0.9 0.899999999999999911182158029987476766109466552734375
0.09999999999999997779553950749686919152736663818359375
1.0 0.99999999999999988897769753748434595763683319091796875

The difference from 0.1 to 0.2 is the same as the floating-point value of 0.1, but the difference from 0.2 to 0.3 is larger. From 0.3 onward, the same value is added each time, and it is smaller than the floating-point value of 0.1.

Incidentally, it is not the case that a value smaller than 0.1 keeps being added forever after 0.4; the amount added becomes larger again at 1.1 → 1.2.

Expected value Calculation result / difference
1.0 0.99999999999999988897769753748434595763683319091796875
0.09999999999999997779553950749686919152736663818359375
1.1 1.0999999999999998667732370449812151491641998291015625
0.1000000000000000888178419700125232338905334472656250
1.2 1.1999999999999999555910790149937383830547332763671875
0.1000000000000000888178419700125232338905334472656250
1.3 1.3000000000000000444089209850062616169452667236328125
0.1000000000000000888178419700125232338905334472656250
1.4 1.4000000000000001332267629550187848508358001708984375
0.1000000000000000888178419700125232338905334472656250
1.5 1.5000000000000002220446049250313080847263336181640625
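
This table can be reproduced by simply extending the earlier loop past 1.0, for example to 15 iterations:

from decimal import Decimal, getcontext

# Increase Decimal's precision so the exact values are not rounded
getcontext().prec = 64

total = prev = 0

for i in range(15):
    total += 0.1
    # Print the amount actually added this step, then the running total
    print(f'{Decimal(total) - Decimal(prev)}|{Decimal(total)}')
    prev = total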

Why doesn't the total increase by exactly the floating-point value of 0.1 each time 0.1 is added? Rounding error is involved.

Floating point addition procedure

Addition of positive floating point numbers is done by the following procedure.

① Match the smaller exponent to the larger exponent
② Shift the mantissa right (make it smaller) by the amount the exponent was increased
③ Add the mantissas
④ If a carry occurs, add 1 to the exponent and shift the mantissa right accordingly
⑤ Round the digits that overflow the mantissa (even rounding)

Example of addition in decimal

Let's walk through the floating-point addition procedure using a decimal example. Consider adding two decimal numbers with 5 significant digits.

\begin{array}{llcll}
    &9.8192 & \times & 10^2 & (= 981.92)\\
 + &4.7533 & \times & 10^1 & (= 47.533)\\
\hline
\end{array}

The procedure is explained below.

① Match the smaller exponent to the larger exponent ② Shift the mantissa right by the amount the exponent was increased

Align the exponents before adding. The operand with the larger exponent is 9.8192 × 10^2, so we rewrite 4.7533 × 10^1 to match it.

\begin{array}{llcr}
&4.7533 & \times & 10^1\\
 = &0.47533 & \times & 10^2
\end{array}

③ Add the mantissas

\begin{array}{llcr}
    &\phantom{1}9.8192 & \times & 10^2\\
 + &\phantom{1}0.47533 & \times & 10^2\\
\hline
 & 10.29453 & \times & 10^2
\end{array}

④ If a carry occurs, add 1 to the exponent and shift the mantissa right accordingly.

Since the integer part is 10, a carry has occurred. To bring the integer part back to one digit, add 1 to the exponent and divide the mantissa by 10.

\begin{array}{llcr}
&10.29453 & \times & 10^2\\
 = &1.029453 & \times & 10^3
\end{array}

⑤ Round the overflowing digits (even rounding)

This time the number of significant digits is 5, so we have to deal with the trailing "53". With even rounding, it is rounded up here.

\begin{array}{llcr}
&1.029453 & \times & 10^3\\
\rightarrow &1.0295 & \times & 10^3
\end{array}

The floating-point addition procedure has now been confirmed using decimal numbers.
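
As a quick cross-check, Python's Decimal can carry out the same 5-significant-digit addition; its default rounding mode is also round half to even:

from decimal import Decimal, getcontext

getcontext().prec = 5  # 5 significant digits; default rounding is ROUND_HALF_EVEN

print(Decimal('981.92') + Decimal('47.533'))  #=> 1029.5 (= 1.0295 x 10^3)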

Even rounding

Even rounding (round half to even) is a rounding method similar to ordinary rounding, but when the value to be rounded is exactly halfway between two values, it is rounded up or down so that the last retained digit becomes even.

For example, when rounding to an integer, 1.5 becomes 2 and 4.5 becomes 4. 4.51 becomes 5.
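
Python's built-in round() and the Decimal module both use this rule, so the behavior can be checked directly:

from decimal import Decimal, ROUND_HALF_EVEN

# round() resolves ties to the nearest even integer
print(round(1.5))   #=> 2
print(round(4.5))   #=> 4
print(round(4.51))  #=> 5

# The same rule made explicit with Decimal
print(Decimal('4.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN))  #=> 4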

Guard digit and sticky bit

In the decimal addition example, digits overflowed beyond the significant digits when the exponents were aligned (the trailing "53" in the example above). These digits should be set aside because they are needed for rounding at the end. (Do not simply truncate them when the exponent is increased and the mantissa shifted right in step ②.)

With even rounding, the overflowed digits are handled as follows:

- Greater than 5 → round up
- Exactly 5 → round up or down so that the last retained digit is even
- Less than 5 → round down (truncate)

Therefore, in addition to the most significant digit of the overflowed digits, we also need information about the digits below it. However, since all we need to know about those lower digits is whether they are all zero, a single boolean value is sufficient.

The guard digit is where the overflowed digits are stored. For addition of positive numbers, one guard digit is sufficient. (One more digit is needed when dealing with subtraction and negative numbers.) With one guard digit, the above example stores 5, the most significant of the overflowed digits "53". The sticky bit is a boolean value that indicates whether any digit below the guard digit is nonzero. In the above example, the overflowed digit 3 is nonzero, so the sticky bit is set to true (1).

Redoing the decimal addition with a guard digit and sticky bit

Let's apply the guard digit and sticky bit to the decimal addition example above.

\begin{array}{llcll}
    &9.8192 & \times & 10^2 & (= 981.92)\\
 + &4.7533 & \times & 10^1 & (= 47.533)\\
\hline
\end{array}

① Match the smaller exponent to the larger exponent ② Shift the mantissa right by the amount the exponent was increased

\begin{array}{lrcll}
&4.7533 & \times & 10^1&\\
 = &0.47533 & \times & 10^2&(\text{guard digit} = 3)
\end{array}

The trailing 3 overflowed, so it is saved in the guard digit.

③ Add the mantissas

\begin{array}{llcrl}
    &\phantom{1}9.8192 & \times & 10^2&\\
 + &\phantom{1}0.4753 & \times & 10^2&(\text{guard digit} = 3)\\
\hline
 & 10.2945 & \times & 10^2&(\text{guard digit} = 3)
\end{array}

④ If a carry occurs, add 1 to the exponent and shift the mantissa right accordingly.

\begin{array}{llcrl}
&10.2945 & \times & 10^2&\\
 = &1.0294 & \times & 10^3&(\text{guard digit} = 5, \text{sticky bit} = \text{true})
\end{array}

The trailing 5 overflowed, so it is saved in the guard digit. At this point, the 3 that was previously stored in the guard digit is folded into the sticky bit. The sticky bit only records whether a nonzero digit has fallen off, so it is set to true.

⑤ Round the overflowed digits (even rounding). Rounding is done using the guard digit and the sticky bit; since the guard digit is 5 and the sticky bit is true, the value is rounded up.

\begin{array}{llcrl}
&1.0294 & \times & 10^3&(\text{guard digit} = 5, \text{sticky bit} = \text{true})\\
\rightarrow &1.0295 & \times & 10^3
\end{array}

We have now confirmed the floating-point addition procedure with a guard digit and sticky bit using decimal numbers. Binary numbers work the same way.
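
As an illustration only (the function and its representation are my own toy model, not an actual IEEE 754 implementation), here is a Python sketch of 5-significant-digit decimal addition with one guard digit and a sticky bit. It reproduces the example above:

PREC = 5  # significant decimal digits

def add_positive(m1, e1, m2, e2):
    """Add two positive decimal floats m x 10^e, where m is a PREC-digit
    integer read as d.dddd (e.g. 98192, 2 means 9.8192 x 10^2)."""
    # ① Make the first operand the one with the larger exponent
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    shift = e1 - e2

    # ② Align exponents: shift the smaller mantissa right. The lowest kept
    #    digit is the guard digit; anything shifted out below it sets the sticky bit.
    m1 = m1 * 10                                 # append an (empty) guard digit
    sticky = (m2 * 10) % (10 ** shift) != 0
    m2 = (m2 * 10) // (10 ** shift)

    # ③ Add the mantissas (guard digit included)
    s = m1 + m2
    e = e1

    # ④ On a carry, add 1 to the exponent and shift right; the digit that
    #    falls off the guard position is folded into the sticky bit
    if s >= 10 ** (PREC + 1):
        sticky = sticky or (s % 10 != 0)
        s //= 10
        e += 1

    # ⑤ Round half to even using the guard digit and the sticky bit
    mantissa, guard = divmod(s, 10)
    if guard > 5 or (guard == 5 and (sticky or mantissa % 2 == 1)):
        mantissa += 1
        if mantissa == 10 ** PREC:               # rounding itself carried over
            mantissa //= 10
            e += 1
    return mantissa, e

print(add_positive(98192, 2, 47533, 1))  #=> (10295, 3), i.e. 1.0295 x 10^3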

To Part 2

Since this has gotten long, I am splitting the article. In Part 2, we will trace the behavior when adding 0.1 in binary. The sites I referred to are also summarized in Part 2.

Why is the floating-point number for 0.1 larger than 0.1, but adding it 10 times gives a result smaller than 1.0? [Part 2]
