
I have virtually the same join query; the only difference between my (more than two) queries is the table on which the join is made. Performance-wise, is it better to: 1) rewrite the queries (in one stored procedure?), or 2) pass the table on which the join is made as a parameter to a stored procedure (written in PL/pgSQL, BTW) and run the query using EXECUTE?

I assume 2) is more elegant, but word is that by using EXECUTE one cannot benefit from query optimization.

Also, what about when I have a varying number of conditions? How can I make sure the query runs in optimal time? (I take it rewriting the query more than ten times isn't the way to go :D)

  • Make sure an index is used every time; don't worry about having duplicate SQL. Commented Jun 23, 2011 at 19:35

4 Answers


If you want to benefit from the query optimization, you should definitely rewrite the queries.

It does result in longer, less elegant code that's harder to maintain, but that is sometimes the price you pay for performance.
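As a minimal sketch of the "rewrite the queries" approach (the schema, table, and function names here are hypothetical): one static function per join target, so the planner can see each query in full and cache its plan.

```sql
-- One near-duplicate function per join table (assumed schema: items,
-- orders, invoices). Each body is a plain SQL statement the planner
-- can optimize and cache.
CREATE FUNCTION items_by_order(p_id int) RETURNS SETOF items AS $$
    SELECT i.*
    FROM items i
    JOIN orders o ON o.item_id = i.id
    WHERE o.id = p_id;
$$ LANGUAGE sql STABLE;

CREATE FUNCTION items_by_invoice(p_id int) RETURNS SETOF items AS $$
    SELECT i.*
    FROM items i
    JOIN invoices v ON v.item_id = i.id
    WHERE v.id = p_id;
$$ LANGUAGE sql STABLE;
```

The duplication is real, but each function is a statically planned query rather than a string built at runtime.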


1 Comment

I heard about pgScript, which is something like a preprocessor; is this a case where using pgScript is recommended?

There is some overhead to using EXECUTE, due to repeated planning of the executed query.

For best results and maintainability, write a function that writes the various functions you need. Example:

PostgreSQL trigger to generate codes for multiple tables dynamically
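A sketch of the "function that writes the functions" idea, in the spirit of the linked answer (table names, column names, and the `report_*` naming scheme are all assumptions): a DO block loops over the join targets and generates one static function per table, so the duplication lives in the generator rather than in hand-maintained copies.

```sql
-- Hypothetical generator: emits one statically planned function per
-- join table, named report_<table>. Assumes tables customers, orders,
-- invoices with the columns shown.
DO $$
DECLARE
    t text;
BEGIN
    FOREACH t IN ARRAY ARRAY['orders', 'invoices'] LOOP
        EXECUTE format($f$
            CREATE OR REPLACE FUNCTION report_%1$s(p_customer int)
            RETURNS SETOF customers AS $body$
                SELECT c.*
                FROM customers c
                JOIN %1$I j ON j.customer_id = c.id
                WHERE c.id = p_customer;
            $body$ LANGUAGE sql STABLE;
        $f$, t);
    END LOOP;
END $$;
```

EXECUTE (and its planning overhead) is paid once at generation time; the generated functions themselves contain no dynamic SQL.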

1 Comment

That's right, but to clarify: EXECUTE in PL/pgSQL is not the same as EXECUTE in PostgreSQL's native SQL environment. EXECUTE in PL/pgSQL executes a string (e.g. EXECUTE 'SELECT ...'), whereas EXECUTE in SQL dispatches a PREPAREd statement (e.g. PREPARE some_statement AS ...; EXECUTE some_statement; EXECUTE some_statement;).
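The contrast above can be sketched side by side (hypothetical schema; the names `items_by_order` and `items_dynamic` are invented for illustration):

```sql
-- SQL-level EXECUTE: runs a statement that was PREPAREd (parsed and
-- planned) once, so subsequent calls can reuse the cached plan.
PREPARE items_by_order(int) AS
    SELECT i.* FROM items i
    JOIN orders o ON o.item_id = i.id
    WHERE o.id = $1;

EXECUTE items_by_order(42);
EXECUTE items_by_order(43);  -- no re-parse needed

-- PL/pgSQL EXECUTE: runs a string, which must be parsed and planned
-- on every call.
CREATE FUNCTION items_dynamic(p_table text, p_id int)
RETURNS SETOF items AS $$
BEGIN
    RETURN QUERY EXECUTE format(
        'SELECT i.* FROM items i JOIN %I t ON t.item_id = i.id WHERE t.id = $1',
        p_table)
    USING p_id;
END;
$$ LANGUAGE plpgsql;
```

Same keyword, two very different mechanisms: the first replays a saved plan, the second builds a query from scratch each time.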

EXECUTE is dynamic and requires a fresh parse at a minimum, so there is more overhead.



1)rewrite the queries (in one stored procedure ?)

If you have the ability to cache the query plan, do so. Dynamically executing SQL means that the backend needs to re-plan the query every time. Check out PREPARE for more details on this.

2)pass the table on which the join is made as a parameter in a stored procedure (written in plpgsql BTW) and run the query using EXECUTE

Not necessary! PL/pgSQL automatically does a PREPARE/EXECUTE for you; this is one of the primary speed gains to be had from using PL/pgSQL. Rhetorical question: do you think generating the plan shown by EXPLAIN was cheap or easy? Cache that chunk of work if at all possible.

Also, what about when i have a varying number of conditions. How can i make sure the query runs in optimal time? (I take it rewriting the query more than 10 times isn't the way to go :D)

Using individual PREPAREd statements is one way, and the most "bare-metal" way, of optimizing query execution. You could do exotic things like writing a single set-returning PL function that conditionally executes different SQL depending on the arguments you pass in, but I wouldn't recommend it. For optimal performance, stick to PREPARE/EXECUTE and manage the named statements inside your application.
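For the varying-conditions case, the "one PREPAREd statement per condition combination" approach might look like this (a sketch; the table, columns, and statement names are assumptions):

```sql
-- One named, pre-planned statement per filter combination the
-- application actually uses. The application picks which name to
-- EXECUTE based on which filters are set.
PREPARE items_by_price(numeric) AS
    SELECT * FROM items
    WHERE price <= $1;

PREPARE items_by_price_and_stock(numeric, int) AS
    SELECT * FROM items
    WHERE price <= $1 AND stock >= $2;

EXECUTE items_by_price(9.99);
EXECUTE items_by_price_and_stock(9.99, 5);
```

This only pays off when the number of combinations in real use is small; with many optional filters, the set of prepared statements grows combinatorially, which is exactly why the answer suggests managing them from the application side.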

