- Why does structuredClone() sometimes perform worse than optimized manual cloning for large hierarchical data?
Node.js's structuredClone() reuses V8's object serialization/deserialization machinery: it first writes the input object into an internal serialized format, then deserializes that data to create the result object. That's not exactly the fastest way to implement object cloning.
(But on the bright side, it avoids having to build a relatively complicated alternative implementation, and it provides the nice guarantee that the behavior will be the same as for other related operations, such as postMessage()ing an object to a worker.)
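To see the difference on your own data, you can compare structuredClone() against a hand-written clone that knows the object's shape in advance and so skips the serialize/deserialize round trip. This is just a rough micro-benchmark sketch; the tree shape, depth, and iteration count here are made up for illustration, and you should measure with your actual data.

```javascript
// Rough micro-benchmark: structuredClone() vs. a hand-written clone
// for one specific, known object shape. Shape and sizes are illustrative.

function makeNode(depth, width) {
  return {
    id: depth * 1000 + width,
    label: `node-${depth}`,
    children: depth === 0
      ? []
      : Array.from({ length: width }, () => makeNode(depth - 1, width)),
  };
}

// A manual clone can hard-code the structure, so it allocates the
// result directly instead of going through a serialized intermediate.
function cloneNode(node) {
  return {
    id: node.id,
    label: node.label,
    children: node.children.map(cloneNode),
  };
}

const tree = makeNode(6, 4); // a few thousand nodes

console.time('structuredClone');
for (let i = 0; i < 100; i++) structuredClone(tree);
console.timeEnd('structuredClone');

console.time('manual clone');
for (let i = 0; i < 100; i++) cloneNode(tree);
console.timeEnd('manual clone');
```

Note that the manual version only wins because it is specialized: it would silently drop any property not listed in cloneNode(), whereas structuredClone() handles arbitrary (cloneable) inputs, cycles included.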
- Does the JavaScript engine (V8) allocate new hidden classes for cloned objects?
No. When suitable hidden classes exist already, they are reused. There's even a fast path that specifically targets this situation.
- Are there recommended patterns for cloning "hot path" data structures without causing GC pressure?
Cloning objects means allocating new objects, which is probably what you mean by "GC pressure". If you don't want that, then don't create new objects.
Don't expect cloning to be faster than creating a new object of the same size/structure.
- For large objects, is it better to restructure the data model instead of cloning?
The definition of "better" depends on your requirements. Cloning large nested object structures will always be a relatively costly operation; avoiding that will usually yield a performance benefit (but of course that depends on the specific alternative you choose).
Personally I would choose modifiable state over cloned objects in almost all cases, especially if the objects are big, for performance reasons. But I'm aware that some programming paradigms have other priorities, and that's cool too if it works for you.
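To make the trade-off concrete, here is a sketch contrasting the two styles for a small update inside a large state object. The state shape is hypothetical, purely for illustration:

```javascript
// A large state object where only one deeply nested field changes.
const state = {
  settings: { theme: 'dark', fontSize: 14 },
  items: Array.from({ length: 10000 }, (_, i) => ({ id: i, done: false })),
};

// Clone-based (immutable-style) update: pays for a full deep copy of
// all 10,000 items even though only one scalar field changes.
function withFontSizeCloned(s, size) {
  const next = structuredClone(s);
  next.settings.fontSize = size;
  return next;
}

// Mutable update: touches exactly one property, allocates nothing.
function setFontSize(s, size) {
  s.settings.fontSize = size;
}
```

A middle ground worth knowing about is shallow copying only the changed path (e.g. with object spread), which keeps the old-snapshots-stay-valid property of the immutable style without deep-copying untouched subtrees.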