I've been stress testing my .NET application that uses PostgreSQL to look for memory leaks. The app itself is very stable memory-wise; its memory usage depends only on the number of concurrent connections.
However, I've noticed that the RAM reported as cache grows slowly over time until it fills up and causes a RAM shortage (the cache is never released).
I'm testing the app by making 1000 concurrent POST requests to the backend, which is configured with a pool of up to 100 PostgreSQL connections. PostgreSQL has shared_buffers set to 128MB and max_connections set to 100.
I haven't found any high memory usage from the .NET process or from the simple Node app I'm using for the tests.
The test is just a simple HTTPS client making many concurrent requests:
import https from "node:https";
import axios from "axios";

// Reuse a bounded pool of keep-alive sockets so the client itself
// cannot open an unbounded number of connections.
const httpsAgent = new https.Agent({
  keepAlive: true,
  maxSockets: 100,
  maxTotalSockets: 100,
});

const client = axios.create({
  httpsAgent,
  timeout: 10_000,
});

// Fire `times` POST requests concurrently and wait for all of them.
export default async function stressREST(
  token: string,
  url: string,
  content: object,
  times: number
) {
  const promises: Promise<unknown>[] = [];
  for (let i = 0; i < times; i++) {
    promises.push(
      client.post(url, content, {
        headers: { Authorization: `Bearer ${token}` },
      })
    );
  }
  await Promise.all(promises);
  console.log(times, "requests done");
}

// Driver (token, url and msg are defined elsewhere in the test setup):
while (true) {
  await stressREST(token, url, msg, 1000);
}
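To tell whether the growing "cache" figure is reclaimable page cache (which the kernel uses for PostgreSQL's file I/O) or memory that is genuinely unavailable, a small monitor can run alongside the test. This is only a sketch: it assumes Linux (it reads /proc/meminfo), and parseMeminfo / watchCache are names I made up for illustration.

```typescript
import { readFileSync } from "node:fs";

// Pull the fields we care about (values are in kB) out of /proc/meminfo text.
export function parseMeminfo(text: string): Record<string, number> {
  const wanted = new Set(["MemAvailable", "Cached", "Buffers"]);
  const out: Record<string, number> = {};
  for (const line of text.split("\n")) {
    const m = line.match(/^(\w+):\s+(\d+)\s*kB/);
    if (m && wanted.has(m[1])) out[m[1]] = Number(m[2]);
  }
  return out;
}

// Log cache vs. available memory periodically; call the returned
// function to stop watching.
export function watchCache(periodMs = 5_000): () => void {
  const timer = setInterval(() => {
    const mi = parseMeminfo(readFileSync("/proc/meminfo", "utf8"));
    console.log(
      `cached=${mi.Cached} kB buffers=${mi.Buffers} kB available=${mi.MemAvailable} kB`
    );
  }, periodMs);
  return () => clearInterval(timer);
}
```

If MemAvailable stays roughly flat while Cached grows during the run, the memory is reclaimable cache rather than a leak.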
I added the keep-alive HTTPS agent because I thought the backend wasn't keeping up with the connections and that Node or the server was caching the requests, but it didn't help. When I stop the Node process, the high cache usage persists in RAM, even after I stop my backend as well. Using setInterval() instead of the while (true) loop didn't help either.
I also raised the kernel shmmax value to 8GB, but it made no difference.
I use Fedora 43 with 32GB of RAM.