I have a database with a large number (3000+) of schemas, all identical in structure. I'm trying to drop two tables from each schema, so I wrote a function that loops over the schemas and drops those tables.
When I execute my function I get ERROR: out of shared memory, and no tables get dropped.
Is there a way to force PostgreSQL to commit the DROP TABLE statements in batches?
Here is my function (simplified to the problem in question):
CREATE OR REPLACE FUNCTION utils.drop_webstat_from_schema(schema_name character varying default '')
  RETURNS SETOF record AS
$BODY$
declare
    s record;
    sql text;
BEGIN
    for s in
        select distinct t.table_schema
        from information_schema.tables t
        where
            (schema_name <> '' and t.table_schema = schema_name)
            or
            (schema_name = '' and t.table_schema like 'myprefix_%')
    loop
        sql := 'DROP TABLE IF EXISTS ' || s.table_schema || '.webstat_hit, '
                                       || s.table_schema || '.webstat_visit';
        execute sql;
        raise info '%; -- OK', sql;
        return next s;
    end loop;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
As you can see, the function loops over the set of schemas, and for each schema the following SQL is constructed and executed:
DROP TABLE IF EXISTS <schema_name>.webstat_hit, <schema_name>.webstat_visit
I guess that, because the whole function runs in a single transaction, PostgreSQL has to hold locks on all of those tables at once before dropping them, and it hits the configured lock limit. I could probably increase max_locks_per_transaction far enough that every table fits and the drops go through.
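For reference, raising the limit would mean changing postgresql.conf along these lines (the value 512 is only an illustration; the default is 64, and the change requires a server restart):

max_locks_per_transaction = 512    # must leave room for every table dropped in one transaction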
But I'm looking for a solution that drops the tables in steps and locks only the tables inside the current step: for example, lock and drop the tables for 10 schemas, commit, then move on to the next 10.
Can I do that in PostgreSQL, and if so, how? Thank you.
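To illustrate the kind of batching I have in mind, here is a rough, untested sketch. It assumes PostgreSQL 11 or later, where a procedure called outside an explicit transaction block may COMMIT between statements; the procedure name and the batch_size parameter are made up for the example, and it only covers the 'myprefix_%' case:

CREATE OR REPLACE PROCEDURE utils.drop_webstat_in_batches(batch_size int DEFAULT 10)
LANGUAGE plpgsql AS
$BODY$
declare
    s record;
    n int := 0;
BEGIN
    for s in
        select distinct t.table_schema
        from information_schema.tables t
        where t.table_schema like 'myprefix_%'
    loop
        execute 'DROP TABLE IF EXISTS ' || s.table_schema || '.webstat_hit, '
                                        || s.table_schema || '.webstat_visit';
        n := n + 1;
        -- release the locks taken so far after every batch_size schemas
        if n % batch_size = 0 then
            commit;
        end if;
    end loop;
    -- whatever remains in the last partial batch is committed when the procedure returns
END;
$BODY$;

It would be called with CALL utils.drop_webstat_in_batches(10); from outside a transaction block. Is something like this the right direction, or is there a better way?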