
I have an AWS RDS PostgreSQL 12.3 instance (t3.small, 2 vCPU, 2 GB RAM) with this table:

CREATE TABLE public.phones_infos
(
    phone_id integer NOT NULL DEFAULT nextval('phones_infos_phone_id_seq'::regclass),
    phone character varying(50) COLLATE pg_catalog."default" NOT NULL,
    company_id integer,
    phone_tested boolean DEFAULT false,
    imported_at timestamp with time zone NOT NULL,
    CONSTRAINT phones_infos_pkey PRIMARY KEY (phone_id),
    CONSTRAINT fk_phones_infos FOREIGN KEY (company_id)
        REFERENCES public.companies_infos (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
);

There are exactly 137468 records in this table, counted with:

SELECT count(1) FROM phones_infos;

The error "out of memory for query result" occurs with this simple query when I use pgAdmin 4.6:

SELECT * FROM phones_infos;

I have tables with 5M+ records and never had this problem before.

EXPLAIN SELECT * FROM phones_infos;
Seq Scan on phones_infos  (cost=0.00..2546.68 rows=137468 width=33)

I read this article to see if I could find answers, but unfortunately, as the metrics screenshot shows, there are no old pending connections that could be eating memory.

As suggested there, shared_buffers seems to be correctly sized:

SHOW shared_buffers;
449920kB

What should I try?

  • That OOM must be occurring on the client side. What are you using there? Commented Sep 30, 2020 at 9:53
  • pgAdmin 4.6. Now that you mention it, I admit I've had issues with pgAdmin constantly losing connections since the last upgrade. I asked a colleague on the same network and he doesn't have any problem with this query. I'll try reinstalling pgAdmin. Commented Sep 30, 2020 at 9:59
  • pgAdmin is known to have a hard time with large result sets. Use psql. Commented Sep 30, 2020 at 10:02
  • It's working again. I never had problems before, even when querying millions of rows. Do you have an alternative to pgAdmin? (a better one) Commented Sep 30, 2020 at 10:07
  • Yes, like I said, psql. I never use a different client. Commented Sep 30, 2020 at 10:21

1 Answer


The problem must be on the client side. A sequential scan does not require much memory in PostgreSQL.

pgAdmin will cache the complete result set in RAM, which probably explains the out-of-memory condition.

I see two options:

  • Limit the number of result rows in pgAdmin:

    SELECT * FROM phones_infos LIMIT 1000;
    
  • Use a different client, for example psql. There you can avoid the problem by setting

    \set FETCH_COUNT 1000
    

    so that the result set is fetched in batches.
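For completeness, here is a minimal sketch of such a psql session (the endpoint, user, and database names below are placeholders, not actual connection details):

    $ psql -h mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com -U myuser -d mydb
    mydb=> \set FETCH_COUNT 1000
    mydb=> SELECT * FROM phones_infos;

With FETCH_COUNT set, psql fetches and displays the result in groups of 1000 rows instead of accumulating the entire result set client-side, so even very large tables can be read with a small, roughly constant memory footprint.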
