I have a parent table partitioned by year, with a lot of columns, and I now need to change a column from VARCHAR(32) to TEXT because we need more length flexibility.

So I will alter the parent table, which will also change all the partitions.

But the table has two unique indexes that include this column, plus one regular index.

This query locks the table:

ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE VARCHAR(64)
USING column_need_change::VARCHAR(64);

So does this one:

ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE TEXT
USING column_need_change::TEXT;

I have seen this workaround:

UPDATE pg_attribute SET atttypmod = 64+4
WHERE attrelid = 'my_schema.my_table'::regclass
AND attname = 'column_need_change';

But I don't like this solution.

How can I change the VARCHAR(32) type to TEXT without locking the table? I need to keep inserting data into the table during the change.

My PostgreSQL version: 9.6

EDIT:

This is the solution I ended up taking:

ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE TEXT
USING column_need_change::TEXT;

The query locked my table for 1m 52s 548ms on 2.6 million rows, but that's fine.

1 Answer

The supported and safe variant is to use ALTER TABLE. This will not rewrite the table, since varchar and text have the same on-disk representation, so it will be done in a split second once the ACCESS EXCLUSIVE table lock is obtained.
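You can confirm in the catalogs that the varchar-to-text cast is binary coercible. This diagnostic query is a sketch against the standard pg_cast catalog; a castmethod of 'b' means the cast needs no conversion function, which is why no table rewrite occurs:

-- How does PostgreSQL perform the varchar -> text cast?
-- castmethod = 'b' means binary-coercible: no function is called,
-- so ALTER TABLE does not have to rewrite the table data.
SELECT castsource::regtype, casttarget::regtype, castmethod
FROM pg_cast
WHERE castsource = 'varchar'::regtype
  AND casttarget = 'text'::regtype;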

Provided that your transactions are short, you will only experience a short stall while ALTER TABLE waits for all prior transactions to finish.
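To avoid a long stall when a prior transaction holds a conflicting lock, you can cap the wait with lock_timeout so the ALTER fails fast instead of queueing behind it and blocking all concurrent writes. This is a sketch of a common pattern; the timeout value is an arbitrary example:

-- Limit how long ALTER TABLE may wait for the ACCESS EXCLUSIVE lock.
-- If the lock cannot be obtained in time, the statement errors out
-- instead of queueing and blocking concurrent INSERTs behind it.
SET lock_timeout = '2s';

ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE TEXT
USING column_need_change::TEXT;

RESET lock_timeout;

If the statement fails with "canceling statement due to lock timeout", simply retry it at a quieter moment; nothing has been changed.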

Messing with the system catalogs is dangerous, and you do so at your own risk.

You might get away with

UPDATE pg_attribute
SET atttypmod = -1,   -- -1 means "no length limit"
    atttypid  = 25    -- 25 is the OID of the text type
WHERE attrelid = 'my_schema.my_table'::regclass
  AND attname = 'column_need_change';

But if it breaks something, you get to keep the pieces…

7 Comments

What might be non-obvious to some readers is that in Postgres (somebody please correct me if I'm wrong!) the length limit on a varchar field is effectively just a constraint on allowed values, and doesn't affect how the column is stored on disk. In most other DBMSes, increasing that length would require a table rewrite, because the row would be laid out on disk with only enough room for the originally specified limit.
You are right as far as PostgreSQL is concerned (I don't know about other DBMS).
@IMSoP: that "fixed width" is used when estimating the memory needed for a query result, not for storing the data on disk. See e.g. here: dba.stackexchange.com/a/162117/1822
Thanks @LaurenzAlbe, I will use the ALTER TABLE solution. It locked my table for about 2 minutes, but that's fine.
The Postgres docs indicate that while the ALTER TABLE version will not rewrite the table for these types (the column contents aren't changing and the types are binary coercible), it will rewrite any indexes using the column regardless, which will lock the table for a time. If this is a big table, then the indexes might also be large, and so rewriting them might take considerable time. I hit this case. Just leaving a note for other intrepid readers.