Floating point police 24:00

I came across the article below, so I shouted "Freeze! Floating-point police!" and stormed in.

- [The result of a huge floating-point calculation was strange in Python 3, so I investigated the reason -- paiza development diary](http://paiza.hatenablog.com/entry/2017/08/01/Python3%E3%81%A7%E5%B7%A8%E5%A4%A7%E3%81%AA%E6%B5%AE%E5%8B%95%E5%B0%8F%E6%95%B0%E8%A8%88%E7%AE%97%E3%81%AE%E7%B5%90%E6%9E%9C%E3%81%8C%E5%A4%89%E3%81%A0%E3%81%A3%E3%81%9F%E3%81%AE%E3%81%A7%E7%90%86)

That said, I don't think there is anything seriously wrong with it.

As many readers will have noticed, this is a general floating-point topic, not anything specific to Python. There are a few odd turns of phrase (saying the result is strange *because* the number is huge, or talking about "floating-point calculations with huge integers exceeding 2^53"), but you can grasp what is meant in one read, and I don't think the article contains any major misunderstanding.
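The starting point, which the original article also covers, is that above 2^53 a double can no longer represent every integer. A minimal check in Python:

```python
# Every integer up to 2**53 has an exact double representation,
# but 2**53 + 1 is the first integer that does not.
assert float(2**53 - 1) != float(2**53)   # still distinguishable
assert float(2**53) == float(2**53 + 1)   # 2**53 + 1 rounds down to 2**53
assert int(float(2**53 + 1)) == 2**53
```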

However, there was one passage like the following:

> 109999999999999995805696 → 0x1.74b1ca8ab05a8p+76
> everything in between → 0x1.74b1ca8ab05a9p+76
> 110000000000000012582912 → 0x1.74b1ca8ab05aap+76
> The width of this interval is 2^24, or 16777216

and this part bothered me a little. Is it really right to say that the width of this interval is 2^24? Let's take a closer look.

| `.hex()` representation | Smallest integer with this representation | Largest integer with this representation | Number of integers in the interval |
|---|---|---|---|
| 0x1.74b1ca8ab05a8p+76 | 109999999999999979028480 | 109999999999999995805696 | 16777217 |
| 0x1.74b1ca8ab05a9p+76 | 109999999999999995805697 | 110000000000000012582911 | 16777215 |
| 0x1.74b1ca8ab05aap+76 | 110000000000000012582912 | 110000000000000029360128 | 16777217 |

As you can see, the width of the interval of integers that share the same floating-point representation is not constant. It is *about* 2^24, so calling it 2^24 is fine for practical purposes, but having dug this far, let's dig a little deeper.
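The table above can be reproduced with a short script. This is just a sketch: it assumes the three `.hex()` values from the table, and uses the fact that at this magnitude consecutive doubles are 2^24 apart, with each midpoint belonging to the neighbour whose mantissa ends in an even bit (explained in the next section).

```python
# Reproduce the interval widths from the table above.
hexes = ('0x1.74b1ca8ab05a8p+76',
         '0x1.74b1ca8ab05a9p+76',
         '0x1.74b1ca8ab05aap+76')
ulp = 2**24        # spacing of consecutive doubles at this magnitude
half = ulp // 2

widths = []
for h in hexes:
    f = float.fromhex(h)
    n = int(f)                     # exact: f has an integer value
    lo, hi = n - half, n + half    # midpoints with the two neighbours
    if (n // ulp) % 2:             # odd mantissa lsb: both midpoints go elsewhere
        lo, hi = lo + 1, hi - 1
    assert float(lo) == f and float(hi) == f   # endpoints really round to f
    widths.append(hi - lo + 1)

print(widths)  # [16777217, 16777215, 16777217]
```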

Even rounding

The culprit behind this seemingly mysterious phenomenon is the rounding mode known as "round half to even". It is sometimes called banker's rounding; roughly speaking, it means "if the value is exactly halfway between two representable numbers, round to the even one; otherwise, round to the closer one."

In short, rounding 0.5 to an integer gives 0, not 1, while rounding 1.5 gives 2.
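Python's built-in `round()` uses exactly this rule, so the behaviour is easy to see:

```python
# round() in Python 3 rounds halfway cases to the nearest even integer
assert round(0.5) == 0
assert round(1.5) == 2
assert round(2.5) == 2
assert round(0.6) == 1   # not a halfway case: rounds to the closer side
```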

Since we have only dealt with integers so far, "round to the even one" may not seem to mean much, but for floating-point numbers it means "if the value is exactly halfway between two representable numbers, round to the one whose least significant mantissa bit is 0".

If you convert the integer 109999999999999995805696 to a floating-point number, the two closest floating-point numbers are 109999999999999987417088.0 (= 0x1.74b1ca8ab05a8p+76) and 110000000000000004194304.0 (= 0x1.74b1ca8ab05a9p+76), and it is exactly the same distance from both. In this case the former is the even one, so the value is rounded to the former.
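This particular tie can be verified directly with the standard library's `float.fromhex`:

```python
f_even = float.fromhex('0x1.74b1ca8ab05a8p+76')   # 109999999999999987417088.0
f_odd  = float.fromhex('0x1.74b1ca8ab05a9p+76')   # 110000000000000004194304.0
mid = 109999999999999995805696                    # exactly halfway between them

assert mid - int(f_even) == int(f_odd) - mid      # equidistant from both
assert float(mid) == f_even                       # tie goes to the even mantissa
```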

In other words, under round half to even, an interval that rounds to an even mantissa gains one integer over the nominal width of 2^24 (it claims both of its midpoints), and an interval that rounds to an odd mantissa loses one (it claims neither). That is why the interval widths alternate as shown above.
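The alternation can be watched over a longer run of consecutive doubles. A sketch using `math.nextafter` (available since Python 3.9):

```python
import math

ulp = 2**24
f = float.fromhex('0x1.74b1ca8ab05a8p+76')   # start at an even-mantissa double
widths = []
for _ in range(6):
    n = int(f)
    # the upper midpoint rounds to f exactly when f's mantissa lsb is even
    even = (n // ulp) % 2 == 0
    assert (float(n + ulp // 2) == f) == even
    widths.append(ulp + 1 if even else ulp - 1)
    f = math.nextafter(f, math.inf)          # step to the next larger double

print(widths)  # [16777217, 16777215, 16777217, 16777215, 16777217, 16777215]
```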

Postscript

~~I'll leave the arbitrary-precision integer police work to someone else~~

Hexadecimal notation for floating-point numbers is wonderful: you can tell at a glance whether the least significant bit of the mantissa is 0 or 1. I wish more languages adopted it.
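In Python this notation is exposed as `float.hex()` and `float.fromhex()`:

```python
x = 0.1
print(x.hex())                        # '0x1.999999999999ap-4'
assert float.fromhex(x.hex()) == x    # the hex form round-trips exactly
assert float.fromhex('0x1.8p+1') == 3.0   # 1.5 * 2**1
```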
