I would like to write about a basic, simple topic that ordinary programmers understand intuitively and reflexively. Precisely because it is so basic, even experienced programmers like me slip up on it from time to time.
If you find any mistakes or inappropriate content, please leave a comment; I will consider correcting and adjusting the article.
Many programming languages provide functions or methods that return the absolute value of an integer.
For example:
When you execute `int result = Math.abs(-15);`, the argument is a negative integer, but the positive value `15` is assigned to `result`.
So far, nothing here should surprise most programmers.
Many programming languages such as C, Java, and C# have integer types. The types themselves are not the main subject this time, so let's consider the `int` type that anyone with experience in C, Java, or C# can easily picture.
The size of the `int` type depends on the language and environment, but for now, assume a 32-bit signed integer type.
In most programming languages, the range a 32-bit signed integer type can represent is
-2,147,483,648 to 2,147,483,647.
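In Java, for instance, you can confirm these bounds with the `Integer` constants (a minimal check):

```java
System.out.println(Integer.MIN_VALUE); // -2147483648
System.out.println(Integer.MAX_VALUE); // 2147483647
```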
Just looking at this range, something already seems suspicious.
What happens to `result` when you execute `int result = Math.abs(-2147483648);`?
The largest representable positive value is **2147483647**, so this does not look like it will end well.
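The underlying cause is two's-complement representation: the positive counterpart of -2,147,483,648 does not fit in 32 bits, so negating it simply wraps around. A minimal Java check (illustration only):

```java
int min = Integer.MIN_VALUE;     // -2147483648
System.out.println(-min);        // prints -2147483648, not 2147483648
System.out.println(-min == min); // true: the negation wrapped around
```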
Incidentally, C#'s `System.Math.Abs` throws an `OverflowException`:
```csharp
using System;

try
{
    int result = System.Math.Abs(Int32.MinValue); // Int32.MinValue is -2147483648
}
catch (OverflowException)
{
    Console.WriteLine("OverflowException occurs");
}
```
In Java's `java.lang.Math`, `abs` accepts the value, but the "absolute value" it returns is the negative integer -2147483648.
(This works as specified: the Javadoc notes that if the argument equals `Integer.MIN_VALUE`, the result is that same value.)
```java
int result = Math.abs(Integer.MIN_VALUE);
// The "absolute value" is -2147483648, unchanged.
System.out.printf("result = %d%n", result);
```
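As a side note, Java 15 and later also provide `Math.absExact(int)`, which throws instead of silently returning a negative value:

```java
// Java 15+: throws ArithmeticException when the result cannot be represented
int result = Math.absExact(Integer.MIN_VALUE);
```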
With the C/C++ `abs` function, in my environment the result was likewise the negative integer -2147483648.
What happens here is undefined behavior, so the result may differ from one environment to another.
```cpp
#include <climits> // INT_MIN
#include <cstdio>
#include <cstdlib> // std::abs(int)

int main()
{
    // Undefined behavior: 2147483648 is not representable as an int.
    int result = std::abs(INT_MIN);
    std::printf("result = %d\n", result);
    return 0;
}
```
If you write code that can pass the minimum value of a signed integer type to a function or method that returns an absolute value, something wrong will **eventually** happen.
The key word is "eventually": in this example, out of the 4,294,967,296 integer values from -2,147,483,648 to 2,147,483,647, exactly one (-2,147,483,648) triggers the problem.
So the problem can easily go unnoticed.
With smaller integer types, the minimum value makes up a larger fraction of the range (for a 16-bit type, 1 in 65,536), so the problem should surface more readily.
If, by design, there is no possibility of the signed integer minimum being passed, that's fine. Otherwise, you need to take countermeasures.
Basically, the minimum value of the signed integer type needs special handling. For example, implement the code so that the minimum value is excluded, or, if you also need the absolute value of the minimum, use an integer type that can hold larger values. (Even larger integer types have the same problem at their own minimum values.) Both options are sketched below.
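A rough Java sketch of both approaches; the helper names `safeAbs` and `absAsLong` are made up for illustration:

```java
// Option 1: exclude the minimum value explicitly.
static int safeAbs(int value) {
    if (value == Integer.MIN_VALUE) {
        throw new IllegalArgumentException(
                "abs(Integer.MIN_VALUE) cannot be represented as an int");
    }
    return Math.abs(value);
}

// Option 2: widen to long, whose range easily holds 2147483648.
// The same issue recurs at Long.MIN_VALUE, but an int argument can never reach it.
static long absAsLong(int value) {
    return Math.abs((long) value);
}
```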
Depending on your requirements, you can also use an arbitrary-precision integer type such as `BigInteger`.
For example, Java and C# both provide `BigInteger` (a class in Java, a struct in C#),
and both have an `abs()` method (a static `Abs` method in C#).
With `BigInteger`, you don't have to worry about overflow.
(You may run out of memory instead, but I would like to pay tribute to anyone who deals with numbers so huge that they exhaust memory.)
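In Java, for instance, it might look like this (a minimal sketch):

```java
import java.math.BigInteger;

public class BigIntegerAbs {
    public static void main(String[] args) {
        BigInteger min = BigInteger.valueOf(Integer.MIN_VALUE);
        System.out.println(min.abs()); // 2147483648 -- no overflow
    }
}
```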
Also, where possible, write unit tests that exercise the minimum value of the signed integer type.
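For example, with JUnit 5, reusing the hypothetical `safeAbs` guard sketched above:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class SafeAbsTest {
    // Same hypothetical guard as sketched earlier.
    static int safeAbs(int value) {
        if (value == Integer.MIN_VALUE) {
            throw new IllegalArgumentException("abs(Integer.MIN_VALUE) overflows int");
        }
        return Math.abs(value);
    }

    @Test
    void returnsPositiveValueForNegativeInput() {
        assertEquals(15, safeAbs(-15));
    }

    @Test
    void rejectsIntegerMinValue() {
        assertThrows(IllegalArgumentException.class,
                () -> safeAbs(Integer.MIN_VALUE));
    }
}
```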
Thank you to everyone who commented.
The content related to @saka1029's comment will become a separate article in the future. @tsuyoshi_cho's comment has been reflected in this article.