
I have a Java (Spring Boot 2.7.11, OpenJDK 11) based microservice. When this microservice is not serving any requests, each pod uses about 470MB. We have set the memory limit to 1200Mi for this microservice, it is deployed on Kubernetes, and the max heap is -Xmx750M.

I did some load testing on this microservice, making around 20 API requests per second. A consistent pattern I have observed is that memory utilization goes up as I increase the load and does not get released after the load stops. Once I complete the load testing, I wait for an hour or so, yet the memory utilization does not come down; the pod continues occupying the same amount of memory as before. I also got a shell inside the container of such a pod, and I can see only the java process and the sh process running. And indeed, I see that memory usage of the pod has increased to about 1024MB.

So, a pod which usually uses 470MB (in the absence of any load testing) uses 1024MB under the load test, and continues to use 1024MB long after the load testing has completed. As far as I can see, heap usage does not go beyond 400-500M.

I do not understand why the pod's memory usage is not coming down.

Could it be:

  • a JVM heap usage problem, perhaps garbage collection is not happening?
  • something related to the Kubernetes pod requests and limits settings?

Here are my requests and limits for the Kubernetes pods:

    resources:
      limits:
        memory: 1200Mi
        cpu: 1
      requests:
        memory: 900Mi
        cpu: 1

By the way, I am using Apache JMeter for the API load testing. The Docker image being used is oracle openjdk:8. I also downloaded a heap dump and opened it in the Eclipse Memory Analyzer tool, but I do not see any application classes (from our code) using a lot of heap. I see classes related to the JDK and the Spring Boot framework only, and they do not show any memory leak.

So, the mystery for me is: why does memory usage not come down even 7-8 hours after load testing? Is it the Kubernetes config of requests and limits, or is it JVM GC settings that need tuning? Is it possible that the JVM GC settings are not playing nicely with the Kubernetes requests and memory limits?
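For reference, one way to see where the extra memory sits is to compare the heap the JVM has committed (which counts towards the pod's RSS) against what is actually used. This is just a sketch, assuming jcmd is available inside the container and <pid> is the java process id:

    # Used vs committed heap — committed can stay high even after used drops post-load
    jcmd <pid> GC.heap_info

    # Non-heap consumers (metaspace, threads, code cache), if Native Memory Tracking was enabled at startup
    jcmd <pid> VM.native_memory summary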

I have mentioned everything in the description above.

  • Where are you checking memory usage? On Grafana? If so, what metrics have you configured? Commented Oct 17, 2023 at 10:00
  • I am checking it two ways: 1) through the pod shell: /app $ cat /sys/fs/cgroup/memory/memory.usage_in_bytes 2) the Dynatrace tool. Commented Oct 17, 2023 at 11:19
  • Is -XX:+UseContainerSupport enabled for your service? For Java 10 and above it is enabled by default anyway, but please double-check this param once (a quick way to verify it is sketched after these comments). Commented Oct 17, 2023 at 13:23
  • Also, please provide the JVM params used for your service. Commented Oct 17, 2023 at 13:27
  • @prasannajoshi Using -Xmx750M for the heap. Commented Oct 18, 2023 at 6:46
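For reference, a quick way to double-check which container-support and heap flags are actually in effect (a sketch, assuming the JDK tools are present in the image and <pid> is the java process id):

    # Flags of the running JVM: UseContainerSupport, MaxHeapSize, the GC in use, etc.
    jcmd <pid> VM.flags

    # Or ask the JVM which defaults it would pick up inside this container
    java -XX:+PrintFlagsFinal -version | grep -E 'UseContainerSupport|MaxHeapSize|MaxRAMPercentage'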

1 Answer


This seems to be a problem with JDK 11 & G1GC: the memory is not released back to the OS even after the load goes down. I have experienced the exact same symptoms you mention while doing load tests.

There is nothing wrong with the garbage collector itself; I checked it using the jcmd and jstat commands.
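The checks were roughly along these lines (a sketch of the kind of commands, not the exact ones used; <pid> is the java process id and the output format varies by JDK build):

    # Heap regions: used drops after load, but committed stays high
    jcmd <pid> GC.heap_info

    # GC activity and occupancy over time, one sample per second
    jstat -gcutil <pid> 1000

    # Trigger a full GC to confirm the collector itself still reclaims objects
    jcmd <pid> GC.run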

This article also mentions the same issue.
https://thomas.preissler.me/blog/2021/05/02/release-memory-back-to-the-os-with-java-11

As a permanent solution, upgrading the JDK to a newer version or switching to a different garbage collector can be considered.
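For example, these are the kinds of options to evaluate (a sketch; availability depends on the JDK version and vendor build, so treat the flags and values as starting points rather than a drop-in fix):

    # JDK 12+ : G1 runs periodic GCs and returns unused committed memory (JEP 346)
    -XX:G1PeriodicGCInterval=60000

    # Builds that ship Shenandoah (e.g. some OpenJDK 11 vendor builds; may also need -XX:+UnlockExperimentalVMOptions):
    # uncommit idle heap after a delay
    -XX:+UseShenandoahGC -XX:ShenandoahUncommitDelay=30000 -XX:ShenandoahGuaranteedGCInterval=60000

    # JDK 15+ : ZGC, which uncommits unused memory by default
    -XX:+UseZGC -XX:+ZUncommit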

In addition to that, I was able to lower memory consumption by tuning the -Xms JVM parameter. However, this does not address the original issue; it only keeps memory utilization at a lower level.
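As an illustration of that tuning (the values are examples only and should be sized against the 1200Mi pod limit):

    # Start with a small committed heap and let it grow towards the cap only under load
    -Xms256m -Xmx750m

    # Optionally let the heap shrink back harder; with G1 these mainly take effect after a full GC
    -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40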
