tldr use ILM with a target shard size somewhere between 30 and 50GB, and let it manage all of this for you
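as a rough sketch, an ILM policy that rolls over on primary shard size could look like the below - the policy name, the 7 day rollover age and the 30 day retention are just placeholder values for illustration, tune them to your own retention needs:

```json
PUT _ilm/policy/logs-rollover
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "7d"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

attach it to your indices via an index template's `index.lifecycle.name` setting and ILM will handle rollover and deletion from there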
the longer answer is that it's not indices that are the issue, it's shards
eg if you use 10 indices with 1 primary and 1 replica shard each, and that 500K of events is 500MB, then you have 20 shards holding 50MB each (replicas are full copies of their primaries). the resources - heap, CPU - that Elasticsearch needs to manage those shards are roughly the same as if you had the same shard and index count, but with 50GB of data in each shard
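the shard math above is worth spelling out, since it's the usual source of confusion - a quick back-of-the-envelope sketch (the counts are the example's, not anything Elasticsearch-specific):

```python
# Shard math for the example: 10 indices, each with 1 primary and
# 1 replica, holding 500MB of events in total.
indices = 10
primaries_per_index = 1
replicas_per_primary = 1
total_data_mb = 500

# Every replica is a full copy of its primary, so it counts as a shard
# the cluster has to manage, but it does not spread the data thinner.
total_shards = indices * primaries_per_index * (1 + replicas_per_primary)
mb_per_primary = total_data_mb / (indices * primaries_per_index)

print(total_shards)     # 20 shards to manage
print(mb_per_primary)   # 50.0 MB in each primary (and each replica)
```

the point being that 20 shards of 50MB cost the cluster about as much fixed overhead as 20 shards of 50GB, while holding a thousandth of the data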
the recommended shard size is 30-50GB, but that depends on your use case and a bunch of other things - cluster sizing, query and indexing SLAs, and more. for most logging use cases, 50GB is a good balance between density and responsiveness