High Memory Usage (96%) in AWS Elastic Beanstalk – How to Optimize Auto Scaling Policy?

I am running an AWS Elastic Beanstalk environment with the following Auto Scaling configuration:

Current Auto Scaling Policy

Min Instances: 3

Max Instances: 5

Instance Type: t3a.micro

Metric: TargetResponseTime (Average, Seconds)

Upper Threshold: 1 (Scale up)

Lower Threshold: 0.6 (Scale down)

Scale Up Increment: +1

Scale Down Increment: -1

Scaling Cooldown: 360s

Fleet Composition: On-Demand (Base: 0, Above Base: 0)

Capacity Rebalancing: Deactivated

Load Balancer: Application Load Balancer (Public)
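For reference, here is the same policy expressed as an `.ebextensions` config file. This is only a sketch of my setup, assuming the documented `aws:autoscaling:*` option namespaces (note that `InstanceType` under `aws:autoscaling:launchconfiguration` is the older option; newer environments use `InstanceTypes` under `aws:ec2:instances`):

```yaml
# .ebextensions/autoscaling.config — sketch of the policy described above
option_settings:
  aws:autoscaling:asg:
    MinSize: 3
    MaxSize: 5
    Cooldown: 360
  aws:autoscaling:launchconfiguration:
    InstanceType: t3a.micro
  aws:autoscaling:trigger:
    MeasureName: TargetResponseTime
    Statistic: Average
    Unit: Seconds
    UpperThreshold: 1
    LowerThreshold: 0.6
    UpperBreachScaleIncrement: 1
    LowerBreachScaleIncrement: -1
```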

Problem

My application is experiencing very high memory usage (96%) while CPU utilization remains normal. Despite this, the Auto Scaling policy is not scaling up efficiently, which causes performance issues.
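To show where the 96% figure comes from, this is roughly how I am measuring memory on the instances (a sketch; assumes a Linux AMI where `/proc/meminfo` exposes `MemAvailable`, matching how the CloudWatch agent computes `mem_used_percent`):

```shell
# Overall memory snapshot in MiB
free -m

# mem_used_percent = (total - available) / total * 100
awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} \
     END {printf "mem_used_percent: %.1f\n", (t-a)/t*100}' /proc/meminfo
```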

Questions

1. Should I change the scaling metric from TargetResponseTime to MemoryUtilization?

2. What are sensible upper and lower thresholds for scaling based on memory usage?

3. Would enabling Capacity Rebalancing improve stability?

4. Are there other best practices for managing memory-heavy workloads on Elastic Beanstalk?
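Regarding question 1: my understanding is that memory metrics are not published to CloudWatch by default, so the CloudWatch agent would need to be installed on the instances first. This is the agent configuration I would expect to use (a sketch based on the agent's documented `metrics_collected` schema; please correct me if the measurement name is wrong):

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
```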

I want to keep using t3a.micro instances and avoid changing instance types. Any guidance on improving the Auto Scaling policy would be greatly appreciated!
