
On this filesystem, there are already enough inodes; I only need more filesystem size:

# df -i
Filesystem                      Inodes    IUsed      IFree IUse% Mounted on
/dev/mapper/spinning-backup 4293918720 56250724 4237667996    2% /srv/backup

In fact, if I try to resize, it fails because the resulting number of inodes would be too many:

EXT4-fs warning (device dm-0): ext4_resize_fs:2061: resize would cause inodes_count overflow
EXT4-fs warning (device dm-0): ext4_group_extend:1870: can't shrink FS - resize aborted

OK, I do not need more inodes. So how can I resize without adding even more?

More details about the failing command (the warnings above are from dmesg):

# resize2fs -p /dev/mapper/spinning-backup
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/mapper/spinning-backup is mounted on /srv/backup; on-line resizing required
old_desc_blocks = 5760, new_desc_blocks = 7040
resize2fs: Invalid argument While checking for on-line resizing support

More details about the file system:

# tune2fs -l /dev/mapper/spinning-backup

Filesystem features:      has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Filesystem OS type:       Linux
Inode count:              4293918720
Block count:              12079595520
Reserved block count:     0
Overhead clusters:        269519130
Free blocks:              3090624005
Free inodes:              4236049566
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         11648
Inode blocks per group:   728
RAID stride:              128
RAID stripe width:        1024
Flex block group size:    16
First inode:              11
Inode size:               256
  • hm, if I'm doing the math correctly, that file system has a little more than 2% (1/45) of its storage allocated to inodes. Which, at 46 TB overall size, is a lot in absolute terms, but: before you do anything fancy with your data, is saving less than 2% of space really worth it? Commented 11 hours ago
  • @MarcusMüller, it's not so much saving space, but the fact that the filesystem can't be extended any more because it's hit the maximum number of inodes (2^32). Commented 1 hour ago
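That "a little more than 2%" figure can be checked directly from the tune2fs output above. A quick back-of-the-envelope sketch (inode and block counts/sizes are the values quoted in the question):

```shell
# Sanity-check the "~2% of space in inodes" figure using the tune2fs
# numbers from the question.
inode_count=4293918720
inode_size=256          # bytes per inode structure
block_count=12079595520
block_size=4096         # bytes per block

inode_bytes=$((inode_count * inode_size))
total_bytes=$((block_count * block_size))

# Use awk for the division, since shell arithmetic is integer-only.
awk -v i="$inode_bytes" -v t="$total_bytes" \
    'BEGIN { printf "inode tables: %.2f%% of %.1f TB\n", 100*i/t, t/1e12 }'
# → inode tables: 2.22% of 49.5 TB
```

So the inode tables occupy roughly 1 TB, consistent with the 1/45 estimate in the comment.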

2 Answers


It isn’t possible to resize an Ext4 file system without adding more inodes: that would require changing the bytes-per-inode ratio (the storage size divided by the number of inodes), and that’s not possible in Ext4. As explained in man mke2fs:

-i bytes-per-inode
Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't be smaller than the blocksize of the file system, since in that case more inodes would be made than can ever be used. Be warned that it is not possible to change this ratio on a file system after it is created, so be careful deciding the correct value for this parameter. Note that resizing a file system changes the number of inodes to maintain this ratio.

Instead of resizing the existing volume, you could create a new volume using the available free space, with a much lower number of inodes, then move the data across and resize the new volume. You might have to do this in several steps if the free space isn’t large enough to store all the data currently on the volume.
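A sketch of what that might look like. The volume and mount-point names (spinning-backup2, /mnt) are hypothetical, and the rsync invocation is illustrative only; the one real, load-bearing choice is the -i value, since it cannot be changed later. Picking 65536 bytes per inode keeps the 2^32 inode ceiling out of reach until 256 TiB:

```shell
# Pick a bytes-per-inode ratio for the replacement filesystem so that
# the 2^32 inode ceiling is no longer the limiting factor.
bytes_per_inode=65536
max_inodes=$((1 << 32))
max_fs_bytes=$((max_inodes * bytes_per_inode))
echo "fs could grow to $((max_fs_bytes >> 40)) TiB before running out of inodes"
# → fs could grow to 256 TiB before running out of inodes

# Hypothetical LVM names -- adjust for your actual layout. These are
# illustrative, not commands to paste:
#   lvcreate -n backup2 -l 100%FREE spinning
#   mkfs.ext4 -i "$bytes_per_inode" /dev/mapper/spinning-backup2
#   mount /dev/mapper/spinning-backup2 /mnt
#   rsync -aHAX /srv/backup/ /mnt/
```

With typical backup files far larger than 64 KiB, the reduced inode count costs nothing in practice while freeing roughly 1 TB currently tied up in inode tables.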

  • Note that if an unpredictable (or very low) average number of bytes used per inode is the problem here, the reader might want to consider file systems other than the ext family — a fixed bytes-per-inode ratio is not required elsewhere; at least XFS, Btrfs and ZFS allocate inodes dynamically. Considering the asker's resize2fs version, and assuming they are on a "mainstream" Linux distribution still in support today, they are probably using Debian 12, Ubuntu 24.04 LTS or OpenWrt 23.x or 24.x; on Ubuntu, ZFS support is first-class, on Debian it's relatively commonly used, Commented 10 hours ago
  • and on OpenWrt (I only mention this because 46 TB might sound a bit like a network-attached appliance) I have no clue. ZFS might be a good choice here, assuming you want to make use of its built-in multi-volume/mirroring abilities while also getting in-filesystem checksums (RAID-Z). If not (i.e., you are happy with your current mapper, probably LVM), and if you don't need to shrink filesystems later on, XFS simply works and is (in my experience) quite fast. Commented 10 hours ago
  • ZFS also supports transparent compression (with compression=on|off|gzip|gzip-N|lz4|lzjb|zle|zstd|zstd-N|zstd-fast|zstd-fast-N; see man zfsprops), which serves the OP's goal of "I only need more filesystem size" if the data being stored is compressible. Commented 8 hours ago
  • I was wondering how on earth a filesystem ended up with a structure where bytes per inode is a fixed quantity once formatted, so I pulled up the ext4 header. It should be possible to construct an ext4 filesystem consisting of one group that doesn't have this problem any more; however, that is another version of "erase the disk". Commented 5 hours ago

You cannot do this with the existing filesystem.

Ext4 defines, at creation time, a fixed relationship between inodes and disk space. This relationship tells the system where to find any particular inode as an offset from the start of the device. Changing it (so that you have fewer inodes) would invalidate the locations of all existing inodes, so it is not possible.

Unfortunately this means that the maximum number of inodes (2^32) may limit your filesystem before other limits are reached.
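You can see how close this particular filesystem already is to that limit from the tune2fs numbers in the question. Each additional block group adds a fixed 11648 inodes, so the remaining headroom is just (a sketch using the quoted geometry; exact behaviour may differ slightly at the last partial group):

```shell
# How much can this filesystem still grow before hitting 2^32 inodes?
# All values are from the tune2fs output in the question.
inode_count=4293918720
inodes_per_group=11648
blocks_per_group=32768
block_size=4096

max_inodes=$((1 << 32))
spare_groups=$(( (max_inodes - inode_count) / inodes_per_group ))
spare_bytes=$(( spare_groups * blocks_per_group * block_size ))
echo "$spare_groups more block groups, ~$((spare_bytes >> 30)) GiB of growth left"
# → 90 more block groups, ~11 GiB of growth left
```

Roughly 11 GiB of growth on a ~49 TB filesystem: effectively none, which is why the attempted resize (old_desc_blocks = 5760 → new_desc_blocks = 7040, a far larger jump) was rejected.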

From an IBM support document with reference to this problem:

recreating new filesystem is only way to set new bytes-per-inode, make backup plan properly before recreating filesystem
