
Let's assume I have 500k logs in the last 15 minutes. Which option would be better for my Elastic Stack's performance?

  1. Having 10 indices to hold these 500k logs.
  2. Having 1 index to hold the 500k logs, with more shards.

Which one helps improve my dashboard performance? Can someone help me?

2 Answers


tl;dr: use ILM to manage this for you, set a shard size somewhere between 30-50GB, and let it handle all of this.

The longer answer is that it's not indices that are the issue, it's shards.

E.g. if you use 10 indices, each with 1 primary and 1 replica shard, and those 500k events come to 500MB, then you have 20 shards each holding only tens of MB. The resources - heap, CPU - that Elasticsearch needs to manage these are the same as if you had the same shard and index count but 50GB of data in each shard.
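To make that arithmetic concrete, here is a small illustrative sketch (the numbers mirror the example above; this is not an Elasticsearch API, just the math):

```python
# Illustrative shard arithmetic, not an Elasticsearch API.
def shard_stats(indices, primaries_per_index, replicas_per_primary, total_data_mb):
    # Each replica is a full copy of a primary shard, and every shard copy
    # costs heap and CPU to manage, no matter how little data it holds.
    primary_shards = indices * primaries_per_index
    total_shards = primary_shards * (1 + replicas_per_primary)
    mb_per_primary = total_data_mb / primary_shards
    return total_shards, mb_per_primary

# 10 indices, 1 primary + 1 replica each, 500MB of log data:
total, per_primary = shard_stats(10, 1, 1, 500)
print(total, per_primary)  # -> 20 shards, 50.0 MB per primary shard
```

Twenty tiny shards carry roughly the same fixed management overhead as twenty 50GB shards, which is why shard count matters more than index count here.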

The recommended shard size is 30-50GB, but that depends on the use case and a bunch of other things: cluster sizing, query and indexing SLAs, and more. For most logging use cases, 50GB is a good balance between density and responsiveness.
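As a sketch, an ILM policy that rolls over at roughly that shard size could look like the following (this assumes an Elasticsearch version recent enough to support `max_primary_shard_size`; the policy name and retention phases are placeholders to adjust for your own setup):

```
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

With a policy like this attached to a data stream or index template, Elasticsearch creates new backing indices as shards fill up, so you never have to pick an index count up front.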


1 Comment

Thank you, this was helpful.

The official documentation suggests aiming for shard sizes between 10GB and 50GB. For 500k logs, just use one index (1 shard is enough). You can read the following article:

https://www.elastic.co/guide/en/elasticsearch/reference/current/size-your-shards.html#shard-size-recommendation
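For a data set this small, the index settings can stay minimal (the index name here is just an example):

```
PUT /my-logs
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

One primary shard easily holds 500k log events, and the single replica keeps the data available if a node fails.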

