High Memory Usage (96%) in AWS Elastic Beanstalk – How to Optimize Auto Scaling Policy?


I am running an AWS Elastic Beanstalk environment with the following Auto Scaling configuration (a rough .ebextensions equivalent follows the list):

Current Auto Scaling Policy

Min Instances: 3

Max Instances: 5

Instance Type: t3a.micro

Metric: TargetResponseTime (Average, Seconds)

Upper Threshold: 1 (Scale up)

Lower Threshold: 0.6 (Scale down)

Scale Up Increment: +1

Scale Down Increment: -1

Scaling Cooldown: 360s

Fleet Composition: On-Demand (Base: 0, Above Base: 0)

Capacity Rebalancing: Deactivated

Load Balancer: Application Load Balancer (Public)
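
For reference, here is roughly how I believe these settings would look in an .ebextensions file. I configured everything through the console, so the option names (from the aws:autoscaling:* and aws:ec2:instances namespaces) are my best mapping rather than an exported config:

```yaml
# Approximate .ebextensions equivalent of the console settings listed above.
# Values mirror the list; the option-name mapping is my own, not an export.
option_settings:
  aws:autoscaling:asg:
    MinSize: 3
    MaxSize: 5
    Cooldown: 360                  # scaling cooldown in seconds
  aws:autoscaling:launchconfiguration:
    InstanceType: t3a.micro
  aws:autoscaling:trigger:
    MeasureName: TargetResponseTime
    Statistic: Average
    Unit: Seconds
    UpperThreshold: 1              # scale up when average response time > 1 s
    UpperBreachScaleIncrement: 1
    LowerThreshold: 0.6            # scale down when average response time < 0.6 s
    LowerBreachScaleIncrement: -1
  aws:ec2:instances:
    EnableSpot: false              # fleet composition: On-Demand only
    EnableCapacityRebalancing: false
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application  # public Application Load Balancer
```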

Problem

My application is experiencing very high memory usage (96%) while CPU utilization remains normal. Because the scaling trigger watches TargetResponseTime rather than memory, the environment is not scaling out under memory pressure, and this causes performance issues.

Questions

Should I change the scaling metric from TargetResponseTime to MemoryUtilization? (A rough sketch of what I think this would involve is at the end of this post.)

What is the best threshold for scaling up and down based on memory usage?

Would enabling Capacity Rebalancing improve stability?

Are there other best practices for managing memory-heavy workloads on Elastic Beanstalk?

I want to keep using t3a.micro instances and avoid changing instance types. Any guidance on improving the Auto Scaling policy would be greatly appreciated!
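
For context on questions 1 and 2: my current understanding (please correct me if I'm wrong) is that EC2/Elastic Beanstalk does not publish a memory metric out of the box, so scaling on memory would mean publishing it myself (for example with the CloudWatch agent) and attaching memory-based alarms to the environment's Auto Scaling group. The sketch below is untested; the file name, resource names, and the 80% threshold are placeholders I made up:

```yaml
# .ebextensions/memory-scaling.config  (hypothetical file name, untested sketch)
packages:
  yum:
    amazon-cloudwatch-agent: []    # CloudWatch agent from the Amazon Linux repos

files:
  # Minimal CloudWatch agent config that publishes memory usage per ASG
  "/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json":
    mode: "000644"
    owner: root
    group: root
    content: |
      {
        "metrics": {
          "namespace": "CWAgent",
          "append_dimensions": {
            "AutoScalingGroupName": "${aws:AutoScalingGroupName}"
          },
          "metrics_collected": {
            "mem": {
              "measurement": ["mem_used_percent"],
              "metrics_collection_interval": 60
            }
          }
        }
      }

container_commands:
  01_start_cloudwatch_agent:
    command: >
      /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl
      -a fetch-config -m ec2 -s
      -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json

Resources:
  # Simple scaling policies attached to the Beanstalk-managed Auto Scaling group
  MemScaleUpPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: { "Ref": "AWSEBAutoScalingGroup" }
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: 1
      Cooldown: 360
  MemScaleDownPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: { "Ref": "AWSEBAutoScalingGroup" }
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: -1
      Cooldown: 360
  # Alarm that triggers scale-out; a mirrored LessThanThreshold alarm would
  # reference MemScaleDownPolicy for scale-in
  MemHighAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: "Scale out when average memory usage is high (placeholder threshold)"
      Namespace: CWAgent
      MetricName: mem_used_percent
      Dimensions:
        - Name: AutoScalingGroupName
          Value: { "Ref": "AWSEBAutoScalingGroup" }
      Statistic: Average
      Period: 60
      EvaluationPeriods: 3
      Threshold: 80
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - { "Ref": "MemScaleUpPolicy" }
```

The Cooldown of 360 mirrors my current setting; the alarm thresholds are guesses, which is exactly what question 2 above is asking about.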

