
I would like to better understand floating-point values and the imprecision associated with them.

Following are two snippets that differ only slightly.

Snippet 1

#include <stdio.h>

int main(void)
{
   float a = 12.59;
   printf("%.100f\n", a);
}

Output:

12.5900001525878906250000000000000000000000000000000000000000000000000000000000000000000000000000000000

Snippet 2:

#include <stdio.h>

int main(void)
{
   printf("%.100f\n", 12.59);
}

Output:

12.589999999999999857891452847979962825775146484375000000000000000000000000000000000000000000000000000

Why is there a difference between the two outputs? I'm unable to understand the catch.

3 Answers


In the first case you defined the variable as a float, and in the second case you passed the number to printf directly.

An unsuffixed constant such as 12.59 has type double, not float. So in Snippet 1 the value is rounded to float precision when it is stored in a, while in Snippet 2 the full double value is printed.
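A minimal sketch showing both cases side by side in one program (the variable names are just for illustration):

#include <stdio.h>

int main(void)
{
    float  f = 12.59;   /* double constant 12.59 rounded to the nearest float */
    double d = 12.59;   /* double constant kept at full double precision */

    /* f is promoted back to double when passed to the variadic printf */
    printf("as float : %.25f\n", f);
    printf("as double: %.25f\n", d);
    return 0;
}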



To get consistent behaviour you can explicitly use a float literal:

printf("%.100f\n", 12.59f);



In Snippet 1, the double constant 12.59 is first converted to float when it is stored in a, which rounds it to the nearest representable float value. When a is passed to printf it is converted back to double, and that step is exact, since every float value is representable as a double.

In Snippet 2, no conversion through float happens; the constant is printed directly as a double.

To understand, try running the snippets below:

#include <stdio.h>

int main(void) {
    double a = 12.59;
    printf("%.100f\n", a);  
    return 0;
}

and

#include <stdio.h>

int main(void) {
    float a = 12.59;
    printf("%.100f\n", (double)a);
    return 0;
}
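To see exactly what each conversion preserves, it can also help to print the values in hexadecimal floating point with the %a conversion (standard since C99). A sketch; the exact digit formatting of %a can vary between implementations:

#include <stdio.h>

int main(void) {
    float  f = 12.59;   /* 12.59 rounded to the nearest float (24-bit significand) */
    double d = 12.59;   /* 12.59 at full double precision (53-bit significand) */

    /* The float-to-double conversion below is exact, so this line shows the
       float's stored value; typically 0x1.92e148p+3 */
    printf("float : %a\n", (double)f);

    /* Typically 0x1.92e147ae147aep+3 */
    printf("double: %a\n", d);
    return 0;
}

The hex forms make it clear that the float keeps far fewer significand bits than the double, which is exactly where the differing decimal digits come from.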

Refer to this for more information: How does printf and co differentiate between float and double

