
After shrinking an ext4 file system with resize2fs -M, I mounted it and realized it was only 57% used. So I ran:

$ resize2fs -d 32 -M rootfs-2021-06-28.img
resize2fs 1.44.5 (15-Dec-2018)
fs has 9698 inodes, 1 groups required.
fs requires 115683 data blocks.
With 1 group(s), we have 16352 blocks available.
Added 4 extra group(s), blks_needed 115683, data_blocks 147424, last_start 30686
Last group's overhead is 16416
Need 84997 data blocks in last group
Final size of last group is 101413
Estimated blocks needed: 232485
The filesystem is already 232485 (4k) blocks long.  Nothing to do!

I guess that the 115683 data blocks plus the 16416 overhead make up the 57% of 232485, but I fail to add those numbers up to 232485 blocks. Honestly, I don't understand that calculation. The man page admits:

KNOWN BUGS

The minimum size of the filesystem as estimated by resize2fs may be incorrect, especially for filesystems with 1k and 2k blocksizes.

But I have 4k block size.
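For what it's worth, the 57% figure itself does check out against the debug output (this is just my own arithmetic on the numbers above, not anything resize2fs prints):

```shell
# 115683 data blocks + 16416 last-group overhead, as a share of
# the 232485 blocks resize2fs settled on:
awk 'BEGIN { printf "%.1f%%\n", (115683 + 16416) / 232485 * 100 }'
```

That prints 56.8%, which rounds to the 57% I see — but it still leaves the remaining ~43% of the estimate unexplained.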

If resize2fs is the wrong tool for small numbers of blocks, please recommend a different approach to shrinking the file system.


1 Answer


As explained here, the overhead margin used in the estimate is big. Also see the linked C code for details.

To visualize this a bit more, I did a little test with a 1G image file: I filled it with a single data file of decreasing size and ran resize2fs -M on it each time.

With the data file at 900M down to 600M, the estimated size is above the size of the partition:

  • 900M: Estimated blocks needed: 440607 (1.68G)
  • 800M: Estimated blocks needed: 382239 (1.46G)
  • 700M: Estimated blocks needed: 323871 (1.24G)
  • 600M: Estimated blocks needed: 298271 (1.14G)

At 500M and below, the estimate drops below the partition size:

  • 500M: Estimated blocks needed: 239903 (937M)
  • 400M: Estimated blocks needed: 181534 (709M)
  • 300M: Estimated blocks needed: 123166 (481M)
  • 200M: Estimated blocks needed: 64798 (253M)
  • 100M: Estimated blocks needed: 39198 (153M)

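If you want to reproduce this, it can be done entirely on an image file without root or mounting. The sketch below uses debugfs's write command to copy a file into the image, and resize2fs -P, which only prints the estimated minimum size without resizing anything (the file names and sizes here are illustrative):

```shell
# Create a 1 GiB scratch image and put an ext4 filesystem on it
# (-F lets mkfs.ext4 work on a regular file without prompting)
truncate -s 1G test.img
mkfs.ext4 -q -F -b 4096 test.img

# Copy a 600 MiB data file into the image without mounting it
dd if=/dev/zero of=payload bs=1M count=600 status=none
debugfs -w -R "write payload payload" test.img 2>/dev/null

# Print the estimated minimum size (in filesystem blocks); read-only
resize2fs -P test.img
```

Repeating that with different payload sizes gives the numbers in the lists above.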
Tools like GParted use resize2fs under the hood. One way to get a partition filled to the desired level is to create a new image file of the desired size and move the data there. You would of course need spare space to work with.

Otherwise, play around with the -f (force) flag:

resize2fs -d 32 -f file.img NEW_BLOCK_COUNT
-f  Forces resize2fs to proceed with the filesystem resize operation,
    overriding some safety checks which resize2fs normally enforces.
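For example, on a disk image you could pick a target from the actual usage rather than from resize2fs's conservative estimate, and then force the resize. This is only a sketch — the dumpe2fs field names and the 5% slack are my assumptions, and resize2fs will still abort (even with -f) if the data genuinely doesn't fit, so keep a backup of the image:

```shell
# Compute used blocks from dumpe2fs -h output
# ("Block count" minus "Free blocks"; parsing assumes that text format)
used=$(dumpe2fs -h file.img 2>/dev/null | awk -F': *' '
    /^Block count/ { total = $2 }
    /^Free blocks/ { free  = $2 }
    END            { print total - free }')

# Add ~5% slack for metadata, then force the shrink;
# the bare number is interpreted in filesystem-blocksize units
target=$(( used + used / 20 ))
resize2fs -f -d 32 file.img "$target"
```

Without -f, resize2fs refuses any target below its own minimum-size estimate; -f is what lets you go under it.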
  • Thank you. I've seen that linked Q&A before, but it doesn't explain the output of resize2fs -d 32, at least not in a way I understand. Your answer doesn't either. Anyhow, your experiment is interesting. The bottom line seems to be: resize2fs is a coward, and during the summer I'll try to find the time to add a »brave mode«. There is no risk with disk images. Commented Jun 29, 2021 at 5:56
  • @Philippos If you can include dumpe2fs -h file.img in question, it would likely be easier to expand on explanation. Commented Jun 29, 2021 at 14:06
  • Also added a comment on the -f flag. Commented Jun 29, 2021 at 14:57
