I am currently writing an archetype-based ECS for learning purposes. What I noticed is that my current implementation becomes incredibly slow with large numbers of archetypes.
Each of my queries iterates over ALL archetypes and uses a dynamic bitset to check whether an archetype is eligible for the query, so every query execution performs many unnecessary iterations. On top of that, the bitsets have different lengths to support an unbounded number of components, which makes them too long to be fast and too short to benefit from vectorization.
// Pseudo code, in the query iterator
uint[] queryBitSet = myQuery.BitSet;
foreach (var archetype : archetypes) {
    uint[] archetypeBitSet = archetype.BitSet;
    var matches = true;

    // The archetype matches if it contains every component the query requires,
    // i.e. all query bits are also set in the archetype bits
    var min = Math.Min(queryBitSet.Length, archetypeBitSet.Length);
    for (var index = 0; index < min; index++) {
        if ((queryBitSet[index] & archetypeBitSet[index]) != queryBitSet[index]) {
            matches = false;
            break;
        }
    }

    // Query bits beyond the archetype's length can never be satisfied
    for (var index = min; matches && index < queryBitSet.Length; index++) {
        if (queryBitSet[index] != 0) matches = false;
    }

    if (matches) {
        ... // Process archetype
    }
}
The bitsets themselves are quite small, only a few uint entries each. Nevertheless, with a large mass of archetypes the queries become incredibly slow, and I can't think of any improvement to the check itself.
Therefore, something else seems to be needed: query caching. But how would that be implemented in an archetype-based ECS? What would it look like in pseudo code? And what else would you improve for ideal, performant archetype queries?
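My rough idea so far (just a guess, all names made up): since archetypes are only ever created, never mutated, each query could cache the list of archetypes it matched, and only newly created archetypes would need to be tested against the existing queries. Something like:

// Pseudo code, my untested idea for query caching
class Query {
    uint[] BitSet;
    List<Archetype> matchingArchetypes; // cached matches, filled once
}

// Called once whenever a new archetype is created:
// test it against every registered query instead of the other way around
void OnArchetypeCreated(Archetype archetype) {
    foreach (var query : registeredQueries) {
        if (Matches(query.BitSet, archetype.BitSet)) {
            query.matchingArchetypes.Add(archetype);
        }
    }
}

// Query iteration then only touches the cached matches,
// no bitset checks on the hot path anymore
foreach (var archetype : myQuery.matchingArchetypes) {
    ... // Process archetype
}

Is this roughly how it is done in practice, or is there a better scheme?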