A huge pile of WAL files is generated in our master-standby replication setup.
WAL files are archived on one of the standby nodes, and every 2 hours we use tar to compress the archived WALs on that standby. Even so, they take up a huge amount of storage.
When it comes to keeping 30 or 90 days of backups, this becomes a serious storage issue, and it also takes longer to download and replay the WALs during restoration.
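For context, one alternative to the two-hourly tar job would be compressing each segment as it is archived. This is only a rough sketch, assuming a placeholder /archive directory on the archiving node, and it is not production-hardened (it replaces the current archive_command and will overwrite an existing target file):

ALTER SYSTEM SET archive_command = 'gzip < %p > /archive/%f.gz';  -- /archive is a placeholder; %p = path of the segment, %f = its file name
SELECT pg_reload_conf();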
I have set the options below:
wal_level=replica
wal_compression=on
archive_mode = always
And the parameters below are commented out (not set):
archive_timeout
checkpoint_timeout
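For reference, the effective values of these parameters (including the commented-out ones, which fall back to their defaults) can be checked with a query along these lines:

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('wal_level', 'wal_compression', 'archive_mode',
               'archive_timeout', 'checkpoint_timeout');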
Is there any other way we can reduce the number of WALs generated, or an easier way to manage them? pg_waldump is showing that around 70-90% of the data is full-page images (FPIs).
Also, can I put the above parameters into effect by changing them on the standby node? Is the standby archiving the same WAL sent by the master, or is it regenerating WAL based on the standby's configuration?
Update: modified to the values below:
        name        | setting | unit
--------------------+---------+------
 archive_timeout    | 0       | s
 checkpoint_timeout | 3600    | s
 checkpoint_warning | 3600    | s
 max_wal_size       | 4000    | MB
 min_wal_size       | 2000    | MB
 shared_buffers     | 458752  | 8kB
 wal_buffers        | 4096    | 8kB
 wal_compression    | on      |
 wal_level          | replica |
I am still seeing 3-4 WAL files generated every minute. I am making these changes on the hot standby node (the one the backup is taken from). Should I change them on the master instead? Do the master's settings affect the standby's WAL generation?
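As a quick sanity check of which node is playing which role, something like this can be run on each node (a sketch):

SELECT pg_is_in_recovery();  -- true on a standby, false on the primary
SHOW archive_mode;           -- 'always' lets a standby archive the WAL it receives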
Example pg_waldump output showing FPI size = 87%:
pg_waldump --stats 0000000100000498000000B2
Type N (%) Record size (%) FPI size (%) Combined size (%)
---- - --- ----------- --- -------- --- ------------- ---
XLOG 1 ( 0.00) 114 ( 0.01) 0 ( 0.00) 114 ( 0.00)
Transaction 3070 ( 10.35) 104380 ( 4.86) 0 ( 0.00) 104380 ( 0.63)
Storage 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
CLOG 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Database 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Tablespace 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
MultiXact 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
RelMap 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Standby 2 ( 0.01) 100 ( 0.00) 0 ( 0.00) 100 ( 0.00)
Heap2 590 ( 1.99) 33863 ( 1.58) 46192 ( 0.32) 80055 ( 0.48)
Heap 6679 ( 22.51) 578232 ( 26.92) 4482508 ( 30.92) 5060740 ( 30.41)
Btree 19330 ( 65.14) 1430918 ( 66.62) 9967524 ( 68.76) 11398442 ( 68.48)
Hash 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Gin 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Gist 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Sequence 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
SPGist 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
BRIN 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
CommitTs 4 ( 0.01) 120 ( 0.01) 0 ( 0.00) 120 ( 0.00)
ReplicationOrigin 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
Generic 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
LogicalMessage 0 ( 0.00) 0 ( 0.00) 0 ( 0.00) 0 ( 0.00)
-------- -------- -------- --------
Total 29676 2147727 [12.90%] 14496224 [87.10%] 16643951 [100%]
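If this is PostgreSQL 14 or newer, the cumulative FPI counters can also be watched from SQL on the primary instead of dumping individual segments; a sketch (pg_stat_wal does not exist before version 14):

-- pg_stat_wal is PostgreSQL 14+ only; counters are cumulative since the last stats reset
SELECT wal_records,
       wal_fpi,
       wal_bytes,
       round(100.0 * wal_fpi / nullif(wal_records, 0), 1) AS fpi_per_100_records
FROM pg_stat_wal;  -- note: a single record can carry more than one full-page image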
After enabling log_checkpoints=on:
2022-06-15 07:08:57 UTC [11] LOG: checkpoint starting: time
2022-06-15 07:29:57 UTC [11] LOG: checkpoint complete: wrote 67010 buffers (14.6%); 0 WAL file(s) added, 12 removed, 56 recycled; write=1259.767 s, sync=0.010 s, total=1259.961 s; sync files=253, longest=0.003 s, average=0.001 s; distance=1125728 kB, estimate=2176006 kB
2022-06-15 07:38:57 UTC [11] LOG: checkpoint starting: time
2022-06-15 07:59:57 UTC [11] LOG: checkpoint complete: wrote 61886 buffers (13.5%); 0 WAL file(s) added, 20 removed, 10 recycled; write=1259.740 s, sync=0.005 s, total=1259.878 s; sync files=185, longest=0.002 s, average=0.001 s; distance=491822 kB, estimate=2007588 kB
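To see how many checkpoints were triggered by checkpoint_timeout versus requested because of WAL volume, the cumulative counters can be compared; a sketch using the column names as they exist up to PostgreSQL 16:

SELECT checkpoints_timed,  -- started because checkpoint_timeout elapsed
       checkpoints_req,    -- requested, e.g. because max_wal_size was reached
       stats_reset
FROM pg_stat_bgwriter;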
Comment: Increase max_wal_size and checkpoint_timeout to reduce the number of checkpoints and full-page images in the WAL; that will reduce the amount of WAL somewhat, at the price of longer crash recovery.

Reply: checkpoint_timeout is not set. Based on the number of WALs, I think none of the WALs are empty; none of them are generated because a checkpoint was reached. By the way, I came across cybertec-postgresql.com/en/… and enabled wal_compression=on. I am already using tar to keep them compressed, so I need to see what difference it makes. Thank you! pg_waldump is showing around 70-90% of the data is FPI.
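A minimal sketch of applying that suggestion on the primary (the values here are illustrative assumptions, not tuned recommendations; both settings are reloadable without a restart):

-- illustrative values only; run on the primary, where the WAL is generated
ALTER SYSTEM SET checkpoint_timeout = '30min';
ALTER SYSTEM SET max_wal_size = '8GB';
SELECT pg_reload_conf();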