
I have a database with a lot (3000+) of schemas that are identical in structure. I'm trying to drop two tables from each schema, so I wrote a function that loops over the schemas and tries to drop those tables.

When I execute my function I get ERROR: out of shared memory and no tables get dropped.

Is there a way to force PostgreSQL to commit the DROP TABLE statements in batches?

Here is my function (simplified to the problem in question):

CREATE OR REPLACE FUNCTION utils.drop_webstat_from_schema(schema_name character varying default '')
  RETURNS SETOF record AS
$BODY$
declare
    s record;
    sql text;
BEGIN
    for s in 
        select distinct t.table_schema from information_schema.tables t
        where 
                schema_name <> '' and t.table_schema = schema_name 
                or 
                schema_name = '' and t.table_schema like 'myprefix_%'
    loop
        sql := 'DROP TABLE IF EXISTS ' || s.table_schema || '.webstat_hit, ' || s.table_schema || '.webstat_visit';
        execute sql;
        raise info '%; -- OK', sql;
        return next s;
    end loop;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100
  ROWS 1000;

As you can see, the function loops over the set of schemas, and for each schema the following SQL is constructed and then executed.

DROP TABLE IF EXISTS <schema_name>.webstat_hit, <schema_name>.webstat_visit

I guess PostgreSQL is trying to lock all those tables before dropping them and has hit the configured limit. If I increased max_locks_per_transaction to a sufficiently large number, I could probably lock all the tables and drop them.
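For reference, max_locks_per_transaction can only be changed at server start, so the brute-force route would look something like this (1024 is just an illustrative value, not a recommendation; the shared lock table holds max_locks_per_transaction * (max_connections + max_prepared_transactions) entries):

ALTER SYSTEM SET max_locks_per_transaction = 1024;
-- then restart PostgreSQL for the new value to take effect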

But I'm looking for a solution that drops the tables in steps and locks only the tables within each step, e.g. lock and drop in batches of 10 schemas at a time.

Can I do that in PostgreSQL and if so, how? Thank you.

1 Answer


I'd take the schema loop out of the function that does the drops.

So let's assume a plpgsql function fn_loop loops over the schemas and calls fn_drop for each one. You can commit in batches in plpgsql if you execute fn_drop over dblink.
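A rough sketch of that, assuming the dblink extension is installed; the function names follow the ones above, 'dbname=mydb' is a placeholder for your own connection string, and the schema prefix and table names are taken from the question:

create extension if not exists dblink;

create or replace function fn_drop(schema_name text) returns text as
$$
declare
  sql text;
begin
  sql := format('drop table if exists %I.webstat_hit, %I.webstat_visit',
                schema_name, schema_name);
  execute sql;
  return sql;
end;
$$ language plpgsql;

create or replace function fn_loop() returns void as
$$
declare
  s record;
  _t text;
begin
  for s in select nspname from pg_namespace where nspname like 'myprefix_%' loop
    -- each dblink call runs on its own connection, so the remote drop commits
    -- and releases its locks before the next schema is processed
    select t into _t
      from dblink('dbname=mydb'::text, format('select fn_drop(%L)', s.nspname)) rtn (t text);
    raise info '%; -- OK', _t;
  end loop;
end;
$$ language plpgsql;

This way no transaction ever holds locks for more than one schema's pair of tables (plus their indexes and toast tables).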

Another way to get a commit between schemas is to loop in bash, e.g.:

for s in $(psql -At -c "select nspname from pg_namespace where nspname like 'myprefix_%'"); do
  psql -c "DROP TABLE IF EXISTS ${s}.webstat_hit, ${s}.webstat_visit"
done
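Each psql invocation is its own session and transaction, so every schema's drop commits (and releases its locks) before the next one runs.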

An example of local calls in different transactions with dblink (notice the difference in now() on the same db):

t=# do
$$
declare
  _t text;
begin
  for _r in 1..2 loop
    select t into _t from dblink('dbname=t'::text,'select now()::text'::text) rtn (t text);
    raise info '%',concat('local: ',now(),', dblink: ',_t);

  end loop;
end;
$$
;
INFO:  local: 2017-04-28 07:38:11.352026+00, dblink: 2017-04-28 07:38:11.355149+00
INFO:  local: 2017-04-28 07:38:11.352026+00, dblink: 2017-04-28 07:38:11.358211+00
DO
Time: 6.811 ms

1 Comment

yes - that is the whole idea: to have commits inside the function, use dblink to localhost, same db
