3

I am learning about data allocation and am a little confused.

If you are looking for the smallest or greatest value that can be stored in a certain number of bits then does it matter what the data type is?

Wouldn't the smallest or biggest number that could be stored in 22 bits be 22 1's, positive or negative? Is the first part of this question a red herring? Wouldn't the smallest value be -4194303?

  • It sure does. If the type is float, what is the smallest number it could represent if it had 22 bits of capacity? Commented Sep 8, 2020 at 20:46
  • Of course it matters what the data type is. Floating-point types have a wider range than integer types. Or you could come up with some esoteric data type that counts only in tens, hundreds, or powers of ten; you would lose resolution, of course, but gain range. Commented Sep 8, 2020 at 20:47
  • The question is not altogether clear. I think you mean the title to be interpreted more or less as "what is the smallest value that a signed integer type with 1 sign bit and 21 value bits can represent?" But contrast that with the alternatives "what is the smallest value that can be represented by the 22 least-significant bits of a [larger] signed integer?" and "what is the smallest value that can be represented by a 32-bit signed integer whose representation has exactly twenty-two 1 bits set?" Commented Sep 8, 2020 at 20:53
  • Please note that INT_MIN and FLT_MIN use different concepts of what "smallest" means. Commented Sep 8, 2020 at 20:54
  • Looking at the last paragraph, I venture -2097152. Commented Sep 8, 2020 at 20:59

4 Answers

6

A 22-bit data element can store any one of 2^22 distinct values. What those values actually mean is a matter of interpretation. That interpretation may be imposed by a compiler or some piece of hardware, or may be under the programmer's control to suit some specific application.

A simple interpretation, of course, would be to treat the 22 bits as an unsigned integer, with values from 0 to (2^22)-1. A two's-complement signed integer is a slightly more sophisticated interpretation of the same bits. Or you (or the compiler, or CPU) could divide the 22 bits into a mantissa and an exponent, and store a much wider range of values at reduced precision. The range and precision would depend on how many bits were allocated to the mantissa and how many to the exponent.
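
To make the first two interpretations concrete, here is a minimal C sketch using bit-fields. It assumes a two's-complement implementation (which C23 now requires, and which virtually all hardware uses):

    #include <stdio.h>

    /* The same 22 bits, read two different ways via bit-fields. */
    struct as_unsigned { unsigned int u : 22; };
    struct as_signed   { signed int   s : 22; };  /* "signed" is explicit: a
                                                     plain int bit-field may be
                                                     either signed or unsigned */

    int main(void)
    {
        struct as_unsigned a = { 0x3FFFFF };  /* all 22 bits set */
        struct as_signed   b = { -1 };        /* also all 22 bits set, in
                                                 two's complement */

        printf("all-ones as unsigned: %u\n", (unsigned)a.u);  /* 4194303 */
        printf("all-ones as signed:   %d\n", b.s);            /* -1 */
        return 0;
    }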

Or you could split the bits up and use some for the numerator and some for the denominator of a fraction. Or, in fact, anything else.

Some of these interpretations of the bits are built into hardware, some are implemented by compilers or libraries, and some are entirely under the programmer's control. Not all programming languages allow the programmer to manipulate individual bits in a natural or efficient way, but some do. Sometimes, using a highly unconventional interpretation of binary data can give significant efficiency gains, but usually at the expense of readability and maintainability.

So, yes, it matters what the data type is.


2 Comments

A reasonable answer! However, the OP does have the phrase "C signed integer type" in the title. If taken literally, that limits the representation to one of those allowed by the C Standard: two's complement (most likely) or sign-magnitude (if that's still allowed). Maybe there are others allowed by the C Standard; I don't know.
You have not even come close to over-answering it. 😀 The title only asks about “signed integer types,” not “standard signed integer types.” So it could permit any scheme for the bits representing arbitrary integer values, as long as it includes a sign.
2

There is no law (of humans, logic, or nature) that says bits must represent numbers only in the pattern that one of the bits represents 2^0, another represents 2^1, another represents 2^2, and so on (and the number represented is the sum of those values for the bits that are 1). We have choices about how to use bits to represent numbers, including:

  • The bits do use that pattern, and so 22 bits can represent any number from 0 to the sum 2^0 + 2^1 + 2^2 + … + 2^21 = 2^22 − 1 = 4,194,303. The smallest representable value is 0.
  • The bits mostly use that pattern, but it is modified so that one bit represents −2^21 instead of +2^21. This is called two's complement, and the smallest representable value is −2^21 = −2,097,152. (See the code sketch after this list.)
  • The bits represent numbers as described above, except the represented value is divided by 1000. This is called fixed point. In the unsigned case, the value represented by all bits 1 would be 4194.303, but the smallest representable value would still be 0. With a combination of two's complement and fixed-point scaling by 1/1000, the smallest representable value would be −2097.152.
  • The bits represent a floating-point number, where one bit represents a sign (+ or −), certain bits represent an exponent and other information, and the remaining bits represent a significand. In common floating-point formats, when all the bits in that exponent-and-other field are 1s and the significand field bits are 0s, the number represents +∞ or −∞, according to the sign bit. In such a format, the smallest representable value is −∞.
  • As an example, we could designate patterns of bits to represent numbers arbitrarily. We could say that 0000000000000000000000 represents 34, 0000000000000000000001 represents −15, 0000000000000000000010 represents 5, 0000000000000000000011 represents 3+4i, and so on. The smallest representable value would be whichever of those arbitrary values is smallest.
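
Here is the sketch promised above: a small C program (illustrative only; the width of 22 bits is just the number from the question) that computes the extremes under the unsigned, two's-complement, and 1/1000 fixed-point readings:

    #include <stdio.h>

    int main(void)
    {
        int bits = 22;

        /* Pure binary (unsigned): every bit has a positive weight. */
        long unsigned_max = (1L << bits) - 1;       /* 2^22 - 1 = 4194303 */

        /* Two's complement: the top bit weighs -2^21 instead of +2^21. */
        long signed_min = -(1L << (bits - 1));      /* -2^21 = -2097152 */
        long signed_max = (1L << (bits - 1)) - 1;   /*  2^21 - 1 = 2097151 */

        printf("unsigned:         0 .. %ld\n", unsigned_max);
        printf("two's complement: %ld .. %ld\n", signed_min, signed_max);

        /* Fixed point: same bit patterns, value scaled by 1/1000. */
        printf("fixed point:      %.3f .. %.3f\n",
               signed_min / 1000.0, signed_max / 1000.0);
        return 0;
    }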

So what the smallest representable value is depends entirely on the type, since the “type” of the data includes the scheme by which the bits represent values.

If the type is a “signed integer type,” there is still some flexibility in the representation. Most modern C implementations (and other programming languages) use the two's complement scheme described above, and C23 has since made it mandatory. But older revisions of the C standard also allow two other schemes:

  • One’s complement: If the first bit is 1, the value represented is negative, and its magnitude is given by complementing the remaining bits and interpreting them as binary. Using six bits for an example, 101001 would be negative with the magnitude of 101102 = 22, so −22.
  • Sign-and-magnitude: If the first bit is 1, the value represented is negative, and its magnitude is given by interpreting the remaining bits as binary. Using the same bits, 101001 would be negative with the magnitude of binary 01001 = 9, so −9.

In both one’s complement and sign-and-magnitude, the smallest representable value with 22 bits is −(221−1) = −2,097,151.

To stretch the question further, C defines standard integer types but allows implementations to extend the language. An implementation could define some “signed integer type” with an arbitrary scheme for representing numbers, as long as that scheme included a sign, to make the name correct.


1

Without going into technical jargon about doing maths with two's complement, I'll try to explain it in simple words.

First you raise 2 to the power of the number of bits. Let's take an 8-bit type as an example.

An unsigned 8-bit integer can store 2^8 = 256 distinct values. Since counting starts from 0, the values range from 0 to 255.

If you want to store signed values, you split that count in half: 256 / 2 = 128. Remembering that we start from zero, you might be thinking you can store -127 to 127, counting out from zero on both sides.

But there is only one zero (there is no separate +0 or -0 in two's complement), and it sits in the positive half: 0 to 127. That leaves the negative half running from -1 down to -128.

Hence the range will be -128 to 127.
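
You can verify the 8-bit case directly with the fixed-width types from <stdint.h>; a minimal check:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* The 8-bit walkthrough above, checked against real 8-bit types. */
        printf("int8_t:  %d to %d\n", INT8_MIN, INT8_MAX);   /* -128 to 127 */
        printf("uint8_t: 0 to %u\n", (unsigned)UINT8_MAX);   /* 0 to 255 */
        return 0;
    }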

For a 22-bit signed integer you can do the same math:

2^22 = 4,194,304

4,194,304 / 2 = 2,097,152

Subtract 1 on the positive side, and the range is -2,097,152 to 2,097,151.

To answer your question, -2,097,152 is the smallest number you can store.


0

Thanks everyone for the replies. I figured it out with the help of all of your info, but I will explain the answer to show exactly what gaps in my knowledge led to my misunderstanding.

The data type does matter in this question, because for signed data types the first bit represents whether the number is positive or negative. Reading the sign bit and magnitude directly, 0111 = 7 and 1111 = -7 (in two's complement, which C implementations actually use, 1111 is -1, but the idea of a dedicated sign bit is the same).

signed int and unsigned int use the same number of bits (commonly 32). Since an unsigned int is unsigned, the first bit isn't used to represent positive or negative, so it can represent a larger number with that extra bit: 1111 converted as an unsigned int is 15, whereas as a signed value it is negative, since the leftmost bit represents the sign: 1 is negative and 0 is positive.

Now to answer "If a C signed integer type is stored in 22 bits, what is the smallest value it can store?":

If you convert 22 one-bits to decimal you get 1111111111111111111111 = 4,194,303, which is 2^22 - 1: the maximum value an unsigned 22-bit type could hold. Since our data type is signed, it has one less bit for the number's magnitude, because the first bit represents the sign. In two's complement that gives a smallest value of -2^21 = -2097152.
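
If you want to check this on a real compiler, here is a minimal sketch; the 22-bit bit-field is a made-up stand-in for "a signed type stored in 22 bits," and it assumes two's complement, as essentially every implementation uses:

    #include <stdio.h>

    /* A hypothetical 22-bit signed type, modeled with a bit-field. */
    struct s22 { signed int v : 22; };

    int main(void)
    {
        struct s22 x;
        x.v = -2097152;       /* the 22-bit two's-complement minimum fits */
        printf("%d\n", x.v);  /* prints -2097152 */
        return 0;
    }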

Thanks again, everyone.

