I'm not sure what the question is, but I'll take a stab at a few things:
The "Query peak" metric is referring to three separate seconds where you saw a peak throughput of 4 queries per second.
Here's how I would approach pinpointing your problematic queries:
- Define "overloaded" for this instance. That will help you determine what is actually causing the problem. Let's assume that overloaded is defined as "slow queries"
- Examine slow queries in the pgFouine output. It helpfully groups them in the "Queries that took the most time (N)" section, where you can also click "Show Examples" to see a few of the queries that are giving you grief.
- Take a sample of a few of those queries and run EXPLAIN ANALYZE on them to get actual execution plans (see the example after this list).
- Look at the other queries running at the same time; they may be causing I/O contention.
- Analyze the plans yourself, or paste them into http://explain.depesz.com/ for a breakdown of where the time goes. Watch for things like sequential scans over large tables and sorts that spill to disk.
- Tune queries or adjust PostgreSQL settings accordingly.
- Rinse and repeat
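As a sketch of the EXPLAIN ANALYZE step: take one of the statements from the pgFouine report and run it with EXPLAIN ANALYZE in psql. The table and column names below are made up for illustration; substitute one of your own slow queries.

```sql
-- Hypothetical slow query pulled from the pgFouine report;
-- the table and column names are placeholders for your schema.
EXPLAIN ANALYZE
SELECT o.id, o.total_amount
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  c.region = 'EU'
ORDER  BY o.created_at DESC
LIMIT  50;
```

The output shows estimated versus actual row counts and the time spent in each plan node; that full output text is what you would paste into explain.depesz.com.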
In the long run, I would configure PostgreSQL to log only queries that run for more than 100 ms, so pgFouine only has to analyze the slow ones. You can do that with the log_min_duration_statement setting in postgresql.conf (see the snippet below).
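A minimal postgresql.conf snippet for that, assuming the 100 ms threshold mentioned above:

```
# postgresql.conf
# Log only statements that take longer than 100 ms
# (-1 disables this logging, 0 logs every statement).
log_min_duration_statement = 100
```

After changing it, reload the server configuration (for example with pg_ctl reload, or SELECT pg_reload_conf(); as a superuser) so the new value takes effect without a restart.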
For spotting OS-level resource contention, atop is very good, as it summarizes CPU, memory, and disk activity in a single glance.
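For example (these are standard atop invocations, not anything specific to this setup; the file path is just an example):

```
# Refresh the overview every 5 seconds
atop 5

# Record samples to a file during a busy period, then replay them later
atop -w /tmp/atop.raw 10
atop -r /tmp/atop.raw
```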