Consider the following program.
#include <stdio.h>

int negative(int A) {
    return (A & 0x80000000) != 0;
}

int divide(int A, int B) {
    printf("A = %d\n", A);
    printf("negative(A) = %d\n", negative(A));
    if (negative(A)) {
        A = ~A + 1;
        printf("A = %d\n", A);
        printf("negative(A) = %d\n", negative(A));
    }
    if (A < B) return 0;
    return 1;
}

int main() {
    divide(-2147483648, -1);
}
When it is compiled without optimizations, it produces the expected results:
gcc -Wall -Werror -g -o TestNegative TestNegative.c
./TestNegative
A = -2147483648
negative(A) = 1
A = -2147483648
negative(A) = 1
When it is compiled with optimizations, it produces the following incorrect output:
gcc -O3 -Wall -Werror -g -o TestNegative TestNegative.c
./TestNegative
A = -2147483648
negative(A) = 1
A = -2147483648
negative(A) = 0
I am running gcc version 5.4.0.
Is there a change I can make in the source code to prevent the compiler from producing this behavior under -O3?
Comments:

A = ~A + 1; is UB: if A == INT_MIN, the + 1 causes a signed integer overflow.

Isn't 0x7FFFFFFF + 1 undefined behaviour anyway for a 32-bit int type? (That is exactly what ~A + 1 computes when A == INT_MIN.)

-O3 is a safe default, and there are not fundamentally more correctness-affecting bugs in that setting than in GCC in general: stackoverflow.com/a/11546263/1968

(-O3 used to be experimental, and hence buggy, but it hasn't been for a long time.) To say that it "is known to sometimes generate faulty code" is flat out wrong.
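For what it's worth, the usual way to sidestep the overflow the comments describe is to do the two's-complement negation in unsigned arithmetic, where wraparound is fully defined. Below is a minimal sketch of that idea, not a drop-in answer; the helper name safe_negate is made up for illustration.

/* Sketch only: safe_negate is a made-up name, not part of the original program. */
static int safe_negate(int A) {
    /* Unsigned wraparound is defined, so 0u - (unsigned)A never overflows.
       Converting the result back to int is implementation-defined when it
       does not fit (GCC documents it as a modulo-2^32 wrap), so for
       A == INT_MIN the value stays INT_MIN, but the optimizer can no longer
       assume the negated value is non-negative. */
    return (int)(0u - (unsigned int)A);
}

With A = ~A + 1; replaced by A = safe_negate(A);, the -O3 build should print negative(A) = 1 both times, because there is no undefined behaviour left for the optimizer to reason from. A stricter alternative is to check for INT_MIN explicitly before negating, since -INT_MIN is simply not representable as an int. Compiling the original program with gcc -fsanitize=undefined should also report the overflow at run time, which helps confirm the diagnosis.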