I have a .NET Core (C#) application that takes user requests via WebSocket and then creates a connection to a PostgreSQL database to manipulate/handle data.
When a user makes a new request to the backend, the endpoint function that gets called creates a new SQL connection and runs the queries:
// Endpoint available via Websocket
public async Task someRequest(someClass someArg)
{
    /* Create a new SQL connection for this user's request */
    using var conn = new NpgsqlConnection(connstr.getConnStr());
    conn.Open();

    /* Call functions and pass this SQL connection for any queries to process this user request */
    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    /* Request processing is done */
    /* conn is closed automatically by the "using" statement above */
}
When this request is finished, the connection is closed by the using statement. However, this connection by default is returned to the Postgres "connection pool" and is shown as idle.
Since every new user request here creates a new SQL connection, those old "idle" SQL connections in the connection pool are never used again.
Currently:
- Because these idle connections pile up and eventually reach the max pool size, I have temporarily set the idle connection timeout very low; otherwise they stack up until they hit that artificial ceiling for open connections.
- I've also tried adding Pooling=false to the connection string. My understanding is that this would stop the connection from idling once the .NET app closes it, but it still seems to idle.
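For reference, that variant of the connection string looked roughly like this (host/port/credential values are placeholders):
Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb;Pooling=false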
Question: What is the best practice for handling Postgres's connection pooling in .Net/C#?
- If I can utilize the Postgres connection pool more properly, re-using already opened connections would be more efficient than creating a new one for every user request.
- My idea for this was to have a function that creates new Postgres connections, keeps track of them, and hands them out to callers when a user makes a new request (see the rough sketch after this list). Is this a terrible idea?
- Or, do I just keep pooling disabled/a very low idle timeout and create a new SQL connection per-request like I am now?
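To make that second bullet concrete, here is a rough, hypothetical sketch of the kind of hand-rolled connection broker I had in mind (all names are made up; this is not code I'm running):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Npgsql;

// Hypothetical hand-rolled "connection broker": keeps opened NpgsqlConnections
// and hands them out to request handlers, which return them when done.
public class ConnectionBroker
{
    private readonly string _connStr;
    private readonly ConcurrentBag<NpgsqlConnection> _idle = new();

    public ConnectionBroker(string connStr) => _connStr = connStr;

    // Hand out an idle connection if one is available, otherwise open a new one.
    public async Task<NpgsqlConnection> RentAsync()
    {
        if (_idle.TryTake(out var conn) && conn.State == System.Data.ConnectionState.Open)
            return conn;

        var fresh = new NpgsqlConnection(_connStr);
        await fresh.OpenAsync();
        return fresh;
    }

    // Put the connection back so the next request can reuse it.
    public void Return(NpgsqlConnection conn) => _idle.Add(conn);
}

My worry is that this would just be re-implementing pooling by hand, which is part of why I'm asking.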
I've been unable to find many examples of properly utilizing Postgres's connection pool beyond what I'm doing. This application has an average of 3,000-4,000 concurrent users at any time, so I can't have a single static connection handling everything. What is the best practice for handling this in .NET?
EDIT: So it looks like the pooling is native to Npgsql, not Postgres. If a new connection is opened with the same database, user, and password, it will use one of the idle "pooled" connections instead of opening another one.
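If I understand that correctly, something like the following should end up reusing the same physical connection rather than opening a second one (a minimal sketch with placeholder connection values, assuming pooling is left at its default of enabled):

using Npgsql;

var connStr = "Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb";

// First request: opens a brand-new physical connection.
await using (var conn = new NpgsqlConnection(connStr))
{
    await conn.OpenAsync();
    // ... run queries ...
} // Dispose returns the physical connection to Npgsql's pool; it stays open (idle) server-side.

// Second request with the same connection string: should grab the idle pooled
// connection instead of opening another one.
await using (var conn = new NpgsqlConnection(connStr))
{
    await conn.OpenAsync();
    // ... run queries ...
}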
The issue is that it did not seem to be doing so before I disabled pooling. An error was being spammed that took the application down for an hour or more overnight:
The connection pool has been exhausted, either raise MaxPoolSize (currently 100) or Timeout (currently 15 seconds)
Now it's possible that it really did need 100+ active connections at once... but my guess is that most of those were idle, as I had seen before.
Edit #2: I've now tried allowing pooling again (the default), and idle connections shoot up instantly as requests create them, but they do not get re-used. Once the max pool cap is reached, the application locks up because no new connections/requests can be made.
(DBeaver screenshot of server sessions: red is active connections, blue is idle.)
Every single SQL connection in the application is created from a single/shared connection string environment variable.
Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb;Maximum Pool Size=100
The only way I'm able to keep the application running is by setting idle_in_transaction_session_timeout to '10s' to clear out idle connections frequently, since the pooling does not seem to work.
When I have Postgres clearing out idle connections with idle_in_transaction_session_timeout and Pooling=false, this is what my DB activity looks like:
I also ran a search through my code: every instance of making a new SQL connection uses a using statement, as shown in the code example above. This should prevent any sort of connection leak.
Is there some sort of Postgres config item that would cause this issue? The connection string is the same every time, and every connection uses the C# using statement. I'm not sure why Npgsql isn't re-using these idle connections when pooling is enabled.
I've tested spamming new connections in a loop on my dev server, and pooling seems to work just fine there, so I can tell the using-statement pattern I have causes no issues. But if I enable pooling on my production server, the idle connections instantly spike as shown and hit the cap, so no new connections can be made. Metrics for the production server show ~1,000 transactions per second and ~4-5 active SQL sessions/connections per second. Is it possible that I just really need to increase the max pool limit?
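For what it's worth, the dev-server loop test was roughly the following (a simplified sketch; the environment variable name and query are placeholders):

using System;
using Npgsql;

var connStr = Environment.GetEnvironmentVariable("DB_CONN_STR"); // hypothetical name for the shared connection string variable

// Spam sequential open/close cycles; with pooling enabled, the same physical
// connection should be reused instead of a new idle session appearing per iteration.
for (var i = 0; i < 1000; i++)
{
    await using var conn = new NpgsqlConnection(connStr);
    await conn.OpenAsync();

    using var cmd = new NpgsqlCommand("SELECT 1", conn);
    await cmd.ExecuteScalarAsync();
}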



Host=IP;Port=5432;Username=someUser;Password=somePass;Database=someDb;Maximum Pool Size=200
I'll edit the main post again to show some more details.