It might help to step away from ASCII for a moment and consider the much more sensible (albeit nonstandard) SCSCII, the Steven Charles Summit Code for Information Interchange. In SCSCII, the digits have the following values (in decimal):
'0' 37
'1' 38
'2' 39
'3' 40
'4' 41
'5' 42
'6' 43
'7' 44
'8' 45
'9' 46
So if you have the digit 0 and you want to convert it to the character '0', you have to add 37, because 0 + 37 = 37 = '0'.
Similarly, if you have the digit 3 and you want to convert it to the character '3', you have to add 37, because 3 + 37 = 40 = '3'.
And in fact, for any digit d, you can convert it to its corresponding character by adding 37, because the digit characters '0' through '9' have consecutive values 37 through 46, so there's a constant offset, 37.
And since 0 + 37 = 37 = '0', that magic offset 37 Just Happens To Be the same as the numeric value of the character '0', which is 37.
So if you have a digit d and you want to convert it to the corresponding character, and if for some reason you don't remember that in SCSCII the value of the character '0' is 37, you don't have to write d + 37; you can cheat and write d + '0'. Except this isn't really "cheating"; it's really just "taking a shortcut". And in fact, as we're about to see, it's not even "taking a shortcut"; it's really "being totally reasonable".
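If it helps to see that in code, here's a little sketch. The 37 is of course the made-up SCSCII value; the '0' version works unchanged on whatever machine you're actually on:

int d = 3;
int code = d + 37;   /* 40, the SCSCII code for '3' -- only right on a SCSCII machine */
char c = d + '0';    /* '3' in this machine's character set, whatever that is */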
I was joking about the "much more sensible SCSCII character set", of course. But suppose we went through the same analysis for the much more standard ASCII, in which '0' is 48 and '1' is 49, up to '9' is 57. In ASCII, the magic offset is 48, so to convert a digit to a character you'd have to write d + 48.
...Or, well, you could write d + 48, but if you didn't feel like remembering that the offset for ASCII was 48, you could apply the same shortcut as above and write d + '0' instead.
Which is exactly the same shortcut we used for SCSCII! How can that work? How can the code d + '0' properly convert digits to characters on a SCSCII machine, while the exact same code d + '0' properly converts digits to characters on an ASCII machine?
Well, it works because on a SCSCII machine, that character constant '0' has the value 37, while on an ASCII machine, the character constant '0' has the value 48. It's as if you had written
#define DIGIT_TO_CHARACTER_OFFSET 37
...
... d + DIGIT_TO_CHARACTER_OFFSET ...
on the SCSCII machine, with the intention of changing it to
#define DIGIT_TO_CHARACTER_OFFSET 48
when you moved your code to an ASCII machine. Except, you don't have to muck around defining your own DIGIT_TO_CHARACTER_OFFSET macro, because the plain old character constant '0' always has just the right value, automatically, so it can perform precisely the same function for you.
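If you wanted to, you could even package the shortcut up as a tiny function. This is just a sketch, and the name digit_to_char is mine, not anything standard:

/* Convert a single digit 0..9 to its character.  Works in ASCII,
   SCSCII, EBCDIC, or any character set where the codes for
   '0' through '9' are consecutive. */
char digit_to_char(int d)
{
    return d + '0';
}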
You also asked:
How can this be used to print numbers as characters?
If you're not getting it, try this simple code:
printf("'0' has code %d\n", '0');
printf("48 is character %c\n", 48);
This code might bother you a bit. The first line uses %d to print a character, and you might have thought that %d was only for printing integers. Contrariwise, the second line uses %c to print an integer, and you might have thought %c was only for printing characters. But try it, and see what you get.
In fact, in C, characters are just tiny integers, having as their value the code for a character in the machine's character set. So there's nothing wrong with using %c to print an integer, or %d to print a character, because you're basically doing both of those things all the time, whether you realize it or not.
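To drive the point home, here's that same experiment as a complete program you can compile and run (the exact codes printed depend on your machine's character set, which is ASCII on most machines today):

#include <stdio.h>

int main(void)
{
    printf("'0' has code %d\n", '0');
    printf("48 is character %c\n", 48);

    /* every character is a tiny integer, so both formats work: */
    for (int c = '0'; c <= '9'; c++)
        printf("%c has code %d\n", c, c);

    return 0;
}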
You also asked,
How does the modulo % operator help in extracting digits from a number?
Suppose you have the number 123, and you want to extract the last decimal digit. Now, 123 ÷ 10 is 12, remainder 3. Or, in C, 123 / 10 is 12, while 123 % 10 is 3. So taking the remainder modulo 10, that is, % 10, is how you extract the last base-10 digit of an integer in C (or in just about any language).
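If you want to try it, these are ordinary integer operations:

int n = 123;
printf("%d\n", n % 10);   /* prints 3, the last digit */
printf("%d\n", n / 10);   /* prints 12, what's left after removing it */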
You also asked,
Why is '0' represented as 48 in ASCII?
There's not really a reason; in the end it's basically arbitrary. You can read about the history on Wikipedia. They could have picked 240 like EBCDIC did, or 37 like SCSCII did, or any other value. The virtue of the ASCII character set is not that the value 48 for the character '0' means anything. The virtue is simply that it's a standard that we all agree on. So if I transmit the character codes 72 101 108 108 111 44 32 119 111 114 108 100 33 to you (or to my screen, or to my printer, or whatever), we all agree that those codes represent exactly the string "Hello, world!", and not anything else.
But no matter what value the character '0' has in the character set you're using today, that magic code d + '0' will work, as long as all ten digits have consecutive codes.
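And putting all the pieces together, here's one way you might print a number as characters yourself, without leaning on printf's %d. It's just a sketch (it doesn't handle negative numbers, and the name print_number is mine), but it shows % 10, / 10, and + '0' all doing their jobs:

#include <stdio.h>

void print_number(int n)          /* a sketch; assumes n >= 0 */
{
    if (n >= 10)
        print_number(n / 10);     /* print the leading digits first */
    putchar(n % 10 + '0');        /* then the last digit, as a character */
}

int main(void)
{
    print_number(123);            /* prints 123 */
    putchar('\n');
    return 0;
}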
'0' is a character, and a character is an integer. So '0' is an alias for 48; '0' + 5 is 48 + 5, that is 53, that is '5'.

Why is '0' 48 rather than 100 or 12? Well, it's a code; you have to assign some value to each byte. What matters in your case is that the ASCII codes of '0', '1', '2', ..., '9' are consecutive. I would assume that when ASCII was defined, choosing a fairly round number for the code of '0' (48 is 110000 in binary) made it easier to compute the code of a digit: you just OR 110000 with the digit.

The char type is an integer type; it's the smallest integer type, with a size of 1 byte (8 bits), so it can only store integer numbers. When you assign a character to a char variable, as in char c = '0';, that character is converted to its corresponding number in a character table (such as the ASCII table), and that number is what gets stored in the variable. When you want to print it back as a character, the opposite conversion is made.
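As for the observation that OR-ing works: since 48 is 110000 in binary and every digit 0 through 9 fits in the low four bits, OR behaves exactly like + for these values. A quick sketch you can check for yourself:

#include <assert.h>

int main(void)
{
    for (int d = 0; d <= 9; d++)
        assert((48 | d) == (48 + d));   /* OR and + agree for all ten digits */
    return 0;
}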