3

Using pgBouncer in transaction mode seems to be the optimal way to reduce the number of active connections at any given time, but it has also given rise to a problem that I'm not sure how to solve effectively.

Say you want a blanket statement_timeout of 5 minutes at the database level, just as a safeguard against unexpectedly long-running queries, but have a bunch of workers that run statements that normally take much longer than that. Without pgBouncer, you could simply set the statement_timeout on the session temporarily and set it back to the default afterwards, and that all works fine. With pgBouncer in transaction mode, however, this doesn't work reliably unless you wrap the offending operations in a transaction and set a local statement_timeout. That is not desirable either, because then you have arbitrarily long running transactions just to achieve the desired timeout settings.

Is there any way to apply local timeouts to statements when using pgBouncer in transaction mode, but without having to use a transaction?

2 Answers

5

You could use SET LOCAL to change the parameter only for the current transaction:

BEGIN;
SET LOCAL statement_timeout = '1h';
SELECT /* long running query */;
COMMIT;
-- now statement_timeout is reset to the original value
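The override also reverts if the transaction fails: SET LOCAL lasts only until transaction end, whether that end is a COMMIT or a ROLLBACK, so the pooled server connection is handed to the next client with its default timeout intact. A minimal sketch (pg_sleep stands in for the real long-running work):

```sql
BEGIN;
SET LOCAL statement_timeout = '1h';
SELECT pg_sleep(600);   -- placeholder for the real long-running work
ROLLBACK;               -- or COMMIT; either way the override is discarded
SHOW statement_timeout; -- back to the server/session default
```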

2

pgbouncer transaction pooling mode is mostly a way to paper over poorly designed or implemented applications. I wouldn't say it is "optimal" for anything.

but have a bunch of workers that run statements that normally take much longer than that

Maybe make those workers run through a different pool? Or bypass pgbouncer and hit the database directly.
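For example, pgbouncer.ini allows pool_mode to be overridden per database entry, so the workers could connect through a separate session-mode alias where a plain SET statement_timeout persists until disconnect. A sketch, with made-up names and hosts:

```ini
[databases]
; normal app traffic uses the global transaction pooling configured below
app         = host=db.internal dbname=app

; hypothetical alias for the long-running workers: session pooling,
; so a plain SET statement_timeout sticks for the whole session
app_workers = host=db.internal dbname=app pool_mode=session pool_size=5

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode   = transaction
```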

However, that is not desirable either because then you have arbitrarily long running transactions

Every statement runs inside a transaction. Whether you open and close them explicitly or implicitly, they are still there.
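To illustrate: a bare statement is wrapped in an implicit single-statement transaction, so the two forms below do the same thing (the accounts table is just a made-up example):

```sql
-- implicit transaction: begins and commits around the single statement
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- explicit transaction: equivalent for a single statement
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;
```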

4 Comments

I think pgbouncer transaction pooling mode is a good solution when you have lots of short-lived clients that have to rapidly create new database connections and every transaction is pretty short. Especially so if the database is running on another system: reusing an already-open connection from the pool saves PostgreSQL some work and completely removes the TCP/IP (and possibly TLS) handshake latency.
@MikkoRantalainen If you have a bunch of short-lived clients which need to rapidly create new connections, then that itself is the problem. It should be solved by connection pooling inside the app. If you have no control over the app, using an external pooler like pgbouncer may be a small improvement over not using it, but it is nowhere near the optimal solution. I don't understand the TCP/IP point, as establishing a new connection to pgbouncer also involves doing that. Maybe you are doing it against one host versus another, but is that really such a big difference?
A full TCP/IP + TLS handshake requires about 6x the ping latency. If you establish the connection between localhost ports, the "ping" is pretty much zero, but even on a 1 Gbps LAN it's more like 0.1 ms, meaning you get roughly 0.5 ms of extra overhead if you don't have a pre-established connection ready. If your transactions average around 5 ms, that's already a 10% overhead!
@jjanes When an application runs in Kubernetes, it's common practice to rapidly and frequently scale out the number of replicas, going from just a few pods to potentially dozens or even hundreds, depending on the configured scaling thresholds. However, it's essential to keep concurrent connections from exceeding the database's limits, and an external connection pooler like PgBouncer solves exactly that.
