
Anyone know the time-complexity of ECMAScript5's Object.keys() in common implementations? Is it O(n) for n keys? Is time proportional to the size of the hash table, assuming a hash implementation?

I'm looking for either guarantees by language implementors or some real world benchmarking.

Comments

  • How many keys do you expect to be having, such that the time complexity of enumerating them matters? Commented Oct 10, 2011 at 18:06
  • I don't think it can be less than O(n). Commented Oct 10, 2011 at 18:06
  • @IAbstractDownvoteFactory: While the function Object.keys() may be able to return in O(1), enumerating the results can't be done in less than O(n). Commented Oct 10, 2011 at 18:16
  • @Gabe, one million. Does that change the answer? Commented Apr 2, 2016 at 17:32
  • @user137717: The Object implementation could theoretically store an array of keys, updating the array whenever a property is added or deleted. The keys function would then simply return the stored array. This costs extra storage and offloads time spent in the keys function onto time spent creating and deleting keys. Commented Apr 24, 2016 at 14:24

2 Answers


(V8 developer here.)

The answer by Mark Kahn is correct for sufficiently dense integer-keyed/"indexed" properties, where complexity of Object.keys() is indeed O(n).

While the JavaScript spec pretends that all object properties are string-keyed/"named", that's not how modern high-performance engines implement them. Internally there's a huge difference! Indexed properties are stored in an array (as long as they are dense enough), which generally gives much better performance than a {'1': 1, ...} dictionary would.

For objects with thousands of named properties, our implementation indeed uses a hash table (as the question guessed), which means the complexity of Object.keys() is O(n log n). That's because a hash table (of course) stores entries in its own order. Object.keys() must return named properties in the order in which they were created, though, which we store as additional metadata; so we must sort the keys after retrieving them from the hash table, an O(n log n) operation.

Named properties on most objects that occur in practice (up to about a thousand properties) are (usually) stored in creation order in a special kind of internal array, so they can be retrieved in O(n) and don't need to be sorted.

So the summary is really "it depends" :-)


18 Comments

I'm curious if Object.keys(bigObject).length would still be O(n), or is the js optimizer something that can be relied on to ensure this is O(1)?
I'm pretty sure Python has insertion-order dictionaries with O(n) iteration, so it's definitely possible.
@HansBrende Yes. for-in additionally has to walk the prototype chain, which (depending on how you define n) might not change asymptotic complexity but is generally more work.
@HansBrende for-in must always visit prototypes, and its fast path allocates an array under the hood. I doubt that it is ever faster than any alternative, but when in doubt, measure it yourself (using your full app or a very realistic simulation of it, not just a simple microbenchmark).
@JattCode113 it's off-topic for this question, but if I'm reading the code right, iterating over a Map should be O(n) in our current implementation.

It appears to be O(n) in V8 (chrome, node.js) at least:

> var hash = {}, c = 0;
> 
> var s = +new Date();Object.keys(hash);console.log(+new Date() - s);
0
> for(var i=0; i<100000; i++, c++){ hash[c] = 1; }
> var s = +new Date();Object.keys(hash);console.log(+new Date() - s);
26
> for(var i=0; i<100000; i++, c++){ hash[c] = 1; }
> var s = +new Date();Object.keys(hash);console.log(+new Date() - s);
49
> for(var i=0; i<100000; i++, c++){ hash[c] = 1; }
> var s = +new Date();Object.keys(hash);console.log(+new Date() - s);
75
> for(var i=0; i<100000; i++, c++){ hash[c] = 1; }
> var s = +new Date();Object.keys(hash);console.log(+new Date() - s);
102    
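
For reference, the same measurement can be sketched with performance.now() instead of +new Date() (assuming a current Node.js, where performance is a global; absolute numbers are machine-dependent, but they should grow roughly linearly with the key count):

```javascript
// Rough re-run of the benchmark above using performance.now().
function timeKeys(n) {
  const obj = {};
  // The 'k' prefix forces named (string) properties rather than
  // array-indexed ones.
  for (let i = 0; i < n; i++) obj['k' + i] = 1;
  const start = performance.now();
  const keys = Object.keys(obj);
  return { count: keys.length, ms: performance.now() - start };
}

for (const n of [100000, 200000, 400000]) {
  const { count, ms } = timeKeys(n);
  console.log(`${n} keys -> ${count} returned in ${ms.toFixed(1)} ms`);
}
```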

15 Comments

Those results make sense for a dense hash map. Does performance degrade as keys get more sparse?
@hurrymaplelad - What? All JS hash keys are strings. This code effectively generates {'1':1, '2':1, '3':1, ...}; sparse vs. dense keys don't make sense for hash implementations, only for arrays. And it really doesn't make any sense for an engine to ever implement a hash as an array, since numerical indexes are generally rather rare. Though if you want to test that for some reason, just change c++ to c+=Math.random(), which will give you totally un-associatable keys.
@cwolves: An Array object is just an object whose properties are expected to be whole numbers. Those aren't terribly rare, and there are certainly JS implementations that use arrays to back Array instances.
JS Objects are not implemented as hashmaps, rather as pairs of arrays (that is C arrays, not JS Array objects) code.google.com/intl/de-DE/chrome/devtools/docs/…
@Frozenfrank -- it's as fast as possible
