I'm running multiple microservices (Spring Cloud + Docker) on small/medium AWS instances, and recently I've found that these machines are often exhausted and need rebooting. I'm investigating the cause of this degradation, suspecting memory leaks or a misconfiguration of the instances/containers.
I tried to limit the amount of memory these containers can use by doing:
docker run -m 500M --memory-swap 500M -d my-service:latest
At this point my service (a standard Spring Cloud service with a single endpoint that writes to a Redis DB via spring-data-redis) didn't even start.
I increased the limit to 760M and it started, but monitoring it with docker stats I see that its memory usage never drops below:
CONTAINER      CPU %   MEM USAGE / LIMIT       MEM %    NET I/O            BLOCK I/O             PIDS
cd5f64aa371e   0.18%   606.9 MiB / 762.9 MiB   79.55%   102.4 MB / 99 MB   1.012 MB / 4.153 MB   60
I added some parameters to cap the JVM heap, but they don't seem to reduce the footprint much:
_JAVA_OPTIONS: "-Xms8m -Xss256k -Xmx512m"
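My understanding is that the heap is only one part of the JVM's footprint; thread stacks, metaspace (unbounded by default on Java 8), the code cache and direct buffers sit outside -Xmx. A rough back-of-the-envelope sketch with my settings and the 60 PIDs docker reports (the metaspace and code-cache figures below are assumptions, not measured values):

```shell
# Rough JVM footprint estimate outside/inside the heap, in MiB.
# heap: -Xmx512m cap; metaspace/code_cache: assumed ballpark figures.
heap=512
metaspace=128
code_cache=48
thread_stacks=$((60 * 256 / 1024))   # 60 threads at -Xss256k stacks
total=$((heap + metaspace + code_cache + thread_stacks))
echo "$total MiB"
```

That already lands above the 500 MiB container limit even before native overhead, which would explain why the container was OOM-killed at startup with -m 500M.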
I'm running
- Spring Cloud Brixton.M5
- Spring Boot 1.3.2
- Java 8 (Oracle JVM)
- Docker
- Spring Data Redis 1.7.1
Is there a reason why such a simple service uses so much memory? Are there any features I could disable to reduce it?
EDIT: I also tried FROM java:8-jre-alpine as the base image instead of the non-alpine JDK image I was originally using, but that didn't help much. Anyone have other ideas?
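For reference, a minimal sketch of what the image looks like now (the jar name and the added -XX:MaxMetaspaceSize cap are illustrative, not my exact setup):

```dockerfile
# Sketch of the alpine-based image; jar path is a placeholder
FROM java:8-jre-alpine
# Cap heap, thread stacks, and metaspace (metaspace is unbounded by default on Java 8)
ENV _JAVA_OPTIONS="-Xms8m -Xmx512m -Xss256k -XX:MaxMetaspaceSize=128m"
COPY target/my-service.jar /my-service.jar
ENTRYPOINT ["java", "-jar", "/my-service.jar"]
```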