I discovered on my x86 VM (32-bit) that the following program:
#include <stdio.h>

void foo (long double x) {
    int y = x;
    printf("(int)%Lf = %d\n", x, y);
}

int main () {
    foo(.9999999999999999999728949456878623891498136799780L);
    foo(.999999999999999999972894945687862389149813679978L);
    return 0;
}
Produces the following output:
(int)1.000000 = 1
(int)1.000000 = 0
Ideone exhibits the same behavior.
What is the compiler doing to allow this to happen?
I found this constant while tracking down why the following program didn't print 0 as I expected (with 19 9's it printed the 0 I expected):
#include <stdio.h>

int main () {
    long double x = .99999999999999999999L; /* 20 9's */
    int y = x;
    printf("%d\n", y);
    return 0;
}
As I tried to compute the value at which the result switches from expected to unexpected, I arrived at the constant this question is about.
The first constant, while < 1, is not exactly representable in a long double and lies closer to 1.0 than to the largest representable value below 1, so it is being rounded to the closest long double (i.e. 1.0) before anything else happens.