I resized an ext4 filesystem on a software RAID1 array made up of two partitions, sda3 and sdb3 (in rescue mode, i.e. with the disks not mounted):
e2fsck -f /dev/md126
resize2fs /dev/md126 39G
mdadm --grow /dev/md126 --size 40G
mdadm /dev/md126 --fail /dev/sda3
mdadm /dev/md126 --remove /dev/sda3
parted /dev/sda
(parted) print
Number  Start   End     Size    Type     File system     Flags
 1      1049kB  537MB   536MB   primary  linux-swap(v1)
 2      537MB   1166MB  629MB   primary                  boot, raid
 3      1166MB  1024GB  1023GB  primary                  raid
(parted) resizepart
Partition number? 3
End? 48GB
mdadm /dev/md126 --add /dev/sda3
I then repeated the same procedure for sdb3.
The RAID rebuilds successfully and I can boot in normal mode.
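For completeness, this is roughly how I would cross-check that the array and the filesystem agree on the new size (just a sketch; the grep patterns simply pick out the relevant fields of the output):

mdadm --detail /dev/md126 | grep 'Array Size'
tune2fs -l /dev/md126 | grep -E 'Block count|Block size'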
However, df now reports negative used space:
root@server:~# df -h
Filesystem      Size  Used  Avail Use%  Mounted on
/dev/md126       23G  -13G    34G    -  /mnt/md126
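As far as I understand, df takes these figures from statfs() and computes "Used" as Total minus Free blocks, so the raw counts it works from can also be inspected directly, for example:

stat -f /mnt/md126   # prints the statfs() counts: Blocks: Total / Free / Available
# Used = Total - Free, so a Total smaller than Free yields a negative Used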
Digging deeper:
root@server:~# tune2fs -l /dev/md126
Filesystem state: clean
Filesystem OS type: Linux
Block count: 10223616
Reserved block count: 511180
Overhead blocks: 4201607
Free blocks: 9411982
First block: 0
Block size: 4096
Reserved GDT blocks: 1024
Blocks per group: 32768
It seems the culprit is the "Overhead blocks" value, which did not change after resizing the RAID (it is still the same number as before). Originally, the sda3 and sdb3 partitions were 952.8G each.
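A back-of-the-envelope check with the numbers from tune2fs above (my own arithmetic, so treat it as a sketch) reproduces the df output almost exactly if "Size" is taken as Block count minus Overhead blocks:

awk 'BEGIN {
  bs    = 4096                           # Block size
  size  = (10223616 - 4201607) * bs      # Block count - Overhead blocks
  used  = size - 9411982 * bs            # Size - Free blocks
  avail = (9411982 - 511180) * bs        # Free blocks - Reserved block count
  printf "Size %.1fG  Used %.1fG  Avail %.1fG\n", size/2^30, used/2^30, avail/2^30
}'
# prints roughly: Size 23.0G  Used -12.9G  Avail 34.0G

That lines up with the 23G / -13G / 34G reported by df, which is why I suspect the stale overhead value.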
Am I correct that the negative disk usage is caused by this stale "Overhead blocks" value? Secondly, is there any way to reduce these overhead blocks and thus eliminate the negative "Used" disk space?