
I have a query that takes too long on the very first run, and I need help reducing the time. It joins several large tables, and there is an index on every column involved in the joins, the WHERE clause, and the ORDER BY clause. I am sharing the EXPLAIN ANALYZE result below, hoping for suggestions on reducing the execution time.
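
The plan below was gathered with plain EXPLAIN ANALYZE. A version with buffer statistics (which would show whether the slow first run is reading pages from disk rather than from shared buffers) could be captured roughly like this, with the query body elided as it is elsewhere in the post:

EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
SELECT ... ;   -- the original query over ship, package, address and add_aefs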

Gather Merge  (cost=16474.38..16487.69 rows=114 width=246) (actual time=24169.085..24225.617 rows=331 loops=1)
  Workers Planned: 2
  Workers Launched: 2
  ->  Sort  (cost=15474.36..15474.50 rows=57 width=246) (actual time=24015.878..24015.887 rows=110 loops=3)
        Sort Key: addres.share, ship.master_tracking
        Sort Method: quicksort  Memory: 71kB
        Worker 0:  Sort Method: quicksort  Memory: 71kB
        Worker 1:  Sort Method: quicksort  Memory: 70kB
        ->  Nested Loop  (cost=450.64..15472.70 rows=57 width=246) (actual time=1332.644..24014.884 rows=110 loops=3)
              ->  Nested Loop  (cost=450.08..15391.47 rows=115 width=186) (actual time=1066.578..22766.463 rows=556 loops=3)
                    ->  Nested Loop Semi Join  (cost=449.51..15070.09 rows=116 width=36) (actual time=1063.621..21932.130 rows=557 loops=3)
                          ->  Parallel Bitmap Heap Scan on ship  (cost=448.95..8780.92 rows=3171 width=52) (actual time=972.526..14376.908 rows=13719 loops=3)
                                Recheck Cond: ((admin_loc)::text = 'ABC'::text)
                                Heap Blocks: exact=14253
                                ->  Bitmap Index Scan on ship_idx07_admin_loc  (cost=0.00..447.05 rows=7611 width=0) (actual time=1107.608..1107.608 rows=54073 loops=1)
                                      Index Cond: ((admin_loc)::text = 'ABC'::text)
                          ->  Index Only Scan using package_idx05_ship_uid_edd on package  (cost=0.56..1.98 rows=1 width=16) (actual time=0.548..0.548 rows=0 loops=41157)
                                Index Cond: ((ship_uid = ship.uid) AND (edd_date= '2023-08-30'::date))
                                Heap Fetches: 611
                    ->  Index Scan using addres_pkey on address  (cost=0.56..2.77 rows=1 width=166) (actual time=1.492..1.492 rows=1 loops=1672)
                          Index Cond: (uid = ship.consig_add_uid)
              ->  Index Scan using add_aefs_pk on add_aefs  (cost=0.56..0.71 rows=1 width=60) (actual time=2.243..2.243 rows=0 loops=1667)
                    Index Cond: ((share)::text = (addres.share)::text)
                    Filter: (geo_rank > 100)
                    Rows Removed by Filter: 1
Planning Time: 1.007 ms
Execution Time: 24226.135 ms

I have tried increasing work_mem from 512 MB to 900 MB, but it made little difference. I also tried selecting only specific columns in the joins, but the execution time is still high. I am fairly new to this and would appreciate any suggestions.
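
For completeness, here is a minimal sketch of how the work_mem change was tried, assuming it was applied per session rather than in postgresql.conf (the 900 MB value is the one mentioned above):

SET work_mem = '900MB';      -- raise the per-sort memory limit for this session
SHOW work_mem;               -- confirm the new setting is in effect
EXPLAIN (ANALYZE, BUFFERS)   -- re-gather the plan with the larger work_mem
SELECT ... ;                 -- original query, elided as above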

  • We don't know the query (or the EXPLAIN (ANALYZE, BUFFERS, VERBOSE) output), so it is difficult to help. Commented Oct 9, 2023 at 11:35
  • 'first run' per what? Per server reboot? Per day? Commented Oct 9, 2023 at 12:02
  • Could you please also share the query itself and the DDL for all tables and indexes involved? I don't see any issue with work_mem, since this query is only sorting a few kB of data, not MB. Commented Oct 9, 2023 at 14:53
  • Please trim your code to make it easier to find your problem. Follow these guidelines to create a minimal reproducible example. Commented Oct 11, 2023 at 12:42
