
How can I benchmark SQL performance in PostgreSQL? I tried using EXPLAIN ANALYZE, but it gives a different execution time every time I repeat the same query.

I am applying some tuning techniques to my query and trying to see whether they improve its performance. EXPLAIN ANALYZE reports varying execution times that I cannot benchmark and compare. The tuning has an impact in the millisecond range, so I am looking for a benchmark that gives fixed values to compare against.

2 Comments
  • The best you can do is to run the query (or EXPLAIN ANALYZE) several times and average the results, omitting the first run. Benchmarks that come out in milliseconds make little sense on their own. Construct the query so that it repeats the examined operation so many times that the differences become noticeable (e.g. measured in seconds), as in the sketch after these comments. Commented Jul 30, 2018 at 22:05
  • Thanks for the details. Milliseconds do matter for my business case. Some RDBMS systems (Teradata, for sure) report a fixed CPU cost for every query that is executed; that can be used as a benchmark and gives confidence in a tuning approach. I would appreciate details if there is a similar approach in PostgreSQL. Commented Jul 30, 2018 at 23:41
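A minimal sketch of that amplification idea. The table `orders` and its column `customer_id` are placeholders, assumed only for illustration; the point is to repeat the operation under test many times in one statement so millisecond-level differences add up to seconds:

    -- Amplify a cheap lookup by repeating it 10,000 times in one statement.
    -- 'orders' is a hypothetical table; substitute the operation you are tuning.
    SELECT count(*)
    FROM generate_series(1, 10000) AS g,
         LATERAL (SELECT o.id
                  FROM orders AS o
                  WHERE o.customer_id = g % 100  -- the operation under test
                 ) AS hit;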

1 Answer


There will always be variations in the time it takes a statement to complete:

  • Pages may be cached in memory or have to be read from disk. This is usually the source of the greatest deviations.

  • Concurrent processes may compete for CPU time.

  • You may have to wait on short-lived internal locks (latches) before you can access a shared data structure.

These are just the first three things that come to my mind.

In short, execution time is always subject to small variations.

Run the query several times and take the median of the execution times. That is as good as it gets.
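One way to automate that, sketched in PL/pgSQL; the query and the table `orders` are placeholders, and `clock_timestamp()` is used because `now()` is frozen for the duration of a transaction:

    -- Run the statement under test N times, record wall-clock durations,
    -- then take the median. Discard the first, cold run if you prefer.
    CREATE TEMP TABLE timings (ms double precision);

    DO $$
    DECLARE
        t0 timestamptz;
    BEGIN
        FOR i IN 1..20 LOOP
            t0 := clock_timestamp();
            PERFORM count(*) FROM orders WHERE customer_id = 42;  -- query under test
            INSERT INTO timings
            VALUES (1000 * extract(epoch FROM clock_timestamp() - t0));
        END LOOP;
    END
    $$;

    -- Median execution time in milliseconds.
    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY ms) AS median_ms
    FROM timings;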

Tuning for milliseconds only makes sense if it is a query that is executed a lot.

Also, tuning only makes sense if you have realistic test data. Don't make the mistake of examining and tuning a query against a handful of test rows when it will have to perform against millions of rows.
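For example, `generate_series` is a quick way to get a realistic row count; the table and column names here are made up for the sketch:

    -- Build a few million rows of synthetic test data, then refresh
    -- planner statistics so plans reflect the real volume.
    CREATE TABLE orders AS
    SELECT g                                         AS id,
           g % 100000                                AS customer_id,
           now() - (g % 86400) * interval '1 second' AS created_at
    FROM generate_series(1, 5000000) AS g;

    CREATE INDEX ON orders (customer_id);
    ANALYZE orders;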


2 Comments

Thanks for the info. My application is very latency-critical, and I am looking for improvements in milliseconds. Yes, the query runs very frequently, about 60 times per minute. The testing is being done on full production volume, and I am looking for a fixed benchmark value so I can tune further. (Other RDBMS, Teradata for sure, report the CPU cost for each query, and that is fixed; I am looking for something similar.)
Another thing that makes a fixed value impossible is EXPLAIN (ANALYZE) itself. It has to look at the clock every so often, and that system call itself is known to vary in duration. Run pg_test_timing and see for yourself. Use the law of large numbers and run the test often enough to get a reliable average.
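Related to that overhead: EXPLAIN (ANALYZE) accepts a TIMING OFF option that skips the per-node clock calls, which reduces the distortion at the cost of losing per-node timings. A sketch, with a placeholder query:

    -- TIMING OFF suppresses per-node clock calls, so the reported total
    -- runtime carries less timer overhead (per-node times are omitted).
    EXPLAIN (ANALYZE, TIMING OFF, BUFFERS)
    SELECT count(*) FROM orders WHERE customer_id = 42;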
