I have a 64-element JavaScript array that I'm using as a bitmask. Unfortunately, I've run into a problem when converting the array to a binary string and back through a number: the round trip changes the value. This has worked for some other arrays, so what is going on here?

var a = [1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 0, 0, 1, 1, 1, 1,
         1, 1, 0, 0, 1, 1, 1, 1,
         1, 1, 0, 0, 0, 0, 1, 1,
         1, 1, 0, 0, 0, 0, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1];

var str1 = a.join('');
  //-> '1111111111111111110011111100111111000011110000111111111111111111'

var str2 = parseInt(str1, 2).toString(2);
  //-> '1111111111111111110011111100111111000011110001000000000000000000'

str1 === str2  //-> false

I would expect str2 to be identical to str1, but it isn't.

4 Comments

  • You're losing precision. See stackoverflow.com/questions/307179/…. Commented Jan 9, 2012 at 0:36
  • If you use a more flexible language (like Python) you can see that those two binary strings are only 1 apart: 18446691089982423039 and 18446691089982423040, respectively. (As @zie was saying about precision; verified in the sketch below.) Commented Jan 9, 2012 at 0:41
  • May I just say that using parseInt on a string representation of a bitmask sounds ... rather stupid? If you must have it as a string representation, how about just doing something like function matchMask(str, pos) { return str.charAt(pos) != '0'; }. Commented Jan 9, 2012 at 0:41
  • If you don't care about politeness, you can say whatever you like. But calling matchMask for each element sounds a tad inefficient. Commented Jan 9, 2012 at 0:51
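
The one-off claim in the second comment can be checked in JavaScript itself using BigInt (added to the language well after this question was asked, so this is a modern aside, not something available in 2012):

var str1 = '1111111111111111110011111100111111000011110000111111111111111111';
var str2 = '1111111111111111110011111100111111000011110001000000000000000000';

var n1 = BigInt('0b' + str1);  //-> 18446691089982423039n
var n2 = BigInt('0b' + str2);  //-> 18446691089982423040n

n2 - n1;                       //-> 1n: parseInt rounded str1's value up by one
n1.toString(2) === str1;       //-> true: BigInt round-trips exactly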

1 Answer

In JavaScript, the Number type is a 64-bit IEEE 754 double-precision floating-point value. You've specified 64 bits there, which is beyond what a double can represent exactly: as a floating-point type, it must devote some of those bits to the exponent, leaving fewer for the integer part. JavaScript doesn't have an integer type (much less a 64-bit version of one), which is what a perfect-fidelity conversion would require.
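
As a quick illustration of that limit (not part of the original answer; the exact cutoff, 2^53, comes from the significand width discussed below):

var limit = Math.pow(2, 53);
  //-> 9007199254740992

limit === limit + 1;
  //-> true: 2^53 + 1 is not representable, so it rounds back to 2^53
limit - 1 === limit - 2;
  //-> false: below 2^53, every integer is exact

parseInt('1111111111111111111111111111111111111111111111111111111111111111', 2);
  //-> 18446744073709552000 (2^64 - 1, rounded to the nearest double)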

I'm not all that up on floating-point bit representations, but IIRC a 64-bit double-precision number can accurately represent integer values only up to 53 significant bits (a 52-bit stored significand plus one implicit leading bit); see Section 8.5 of the ECMAScript spec for details.
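
For completeness (this workaround is mine, not part of the original answer): one way to round-trip all 64 bits without exceeding Number precision is to split the mask into two 32-bit halves, each of which fits exactly in a double. A sketch, reusing str1 from the question:

// Each half is below 2^32, well inside the exact-integer range of a double.
var hi = parseInt(str1.slice(0, 32), 2);
var lo = parseInt(str1.slice(32), 2);

// Reassemble, padding each half back out to 32 binary digits.
function pad32(n) {
  var s = n.toString(2);
  while (s.length < 32) s = '0' + s;
  return s;
}

var str3 = pad32(hi) + pad32(lo);
str3 === str1;  //-> true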

2 Comments

  • According to this, it's only accurate to 53 bits.
  • @FakeRainBrigand: Thanks, I've added a link to Section 8.5. (I'd already added a mention of the 53-bit thing, but I couldn't remember a source for it; I read it somewhere other than the spec, thanks.)
