
First, the filesystem was backed up and unmounted successfully.

Then, an lvresize was started and is currently running:

lvresize --resizefs --size 1024G /dev/dbdrp/db

It shows the following output:

fsck from util-linux-ng 2.17.2
/dev/mapper/dbdrp-db: 1718907/201326592 files (0.4% non-contiguous), 92969270/805304320 blocks
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/dbdrp-db to 268435456 (4k) blocks.

The filesystem usage beforehand was:

[root@generic-linux-hostname ~]# df -hP
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/dbdrp-db  3.0T  310G  2.7T  11% /usr/local/oracle

And the physical volume table was the following:

PV         VG       Fmt  Attr PSize  PFree
  /dev/xvdc1 dbdrp    lvm2 a--   1.50t    0
  /dev/xvdd1 dbdrp    lvm2 a--   1.50t    0

After the resize finishes, one of these volumes will be removed with pvremove in order to recycle the virtual hard disk.

How can I see the progress of this lvresize? It has been running for an hour, and not much information has been printed.

Thanks guys :)

2 Answers


This may be overkill, because I don't know the internals, but to get an idea you can try the following:

Get the PID of the running process:

pgrep -afl resize2fs
2377 resize2fs -M /dev/vg0/lv-3

Here it was resize2fs, but change it to lvresize (if that is the command actually doing the resize). Next, run strace -e pread64,pwrite64 -p 2377 to monitor the syscalls the command is making, so you can see where it reads from and where it writes to.

The output would be something like this:

pread64(3, "<!-- ..........................."..., 1236992, 2441878626304) = 1236992
pwrite64(3, "<!-- ..........................."..., 1236992, 181592702976) = 1236992
pread64(3, "<!-- ..........................."..., 479232, 2441880231936) = 479232
pwrite64(3, "<!-- ..........................."..., 479232, 181593939968) = 479232

and if you check the man page for pread64, you see that its signature is ssize_t pread(int fd, void *buf, size_t count, off_t offset), so the last parameter of each call is the offset, i.e. the point in the device it is reading from right now. If you convert 2441880231936 bytes to TiB, it is roughly 2.2 TiB, and my volume as shown below is 3.34 TiB, so 2.2/3.34 ≈ 66%. But this is just a ballpark of where it is now, because in this case it will shrink the filesystem to the minimum it can (because of resize2fs -M) based on free space. Also, I am not sure whether ext4 (in my case) writes data into contiguous blocks, so it could be less work than the full disk.

lv-3 vg0  -wi-ao----  <3.34t
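The arithmetic above can be sketched as a one-liner. The offset (2441880231936) is the one from the strace output, and 3.34 TiB is the LV size shown by lvs; both are this example's values, so substitute your own:

```shell
# Rough progress estimate: current read offset divided by LV size.
# 2441880231936 is the pread64 offset seen in strace; 3.34 TiB is the
# LV size from lvs. Only a ballpark, for the reasons given above.
awk 'BEGIN { printf "%.0f%%\n", 2441880231936 / (3.34 * 1024^4) * 100 }'
# → 66%

# As an alternative to attaching strace at all, on Linux the kernel exposes
# the current file offset of every open fd (pid and fd number here are the
# ones from the strace example, so they are assumptions):
#   awk '/^pos:/ { print $2 }' /proc/2377/fdinfo/3
```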

lvresize doesn't have a progress bar option. But if you do the resize in multiple steps, you can track it:

  1. Check the initial size of the filesystem:
# df -h /mnt
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg00-vol_projects   19G  5.3G   13G  30% /mnt
  2. Unmount it:
# umount /mnt
  3. Check the filesystem:
# e2fsck -f /dev/mapper/vg00-vol_projects
e2fsck 1.42.5 (29-Jul-2012)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg00-vol_projects: 13/1245184 files (0.0% non-contiguous), 1447987/4980736 blocks
  4. Resize the filesystem with the progress (-p) option:
# resize2fs -p /dev/mapper/vg00-vol_projects 6G
resize2fs 1.42.5 (29-Jul-2012)
Resizing the filesystem on /dev/mapper/vg00-vol_projects to 1572864 (4k) blocks.
Begin pass 2 (max = 32768)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 152)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/mapper/vg00-vol_projects is now 1572864 blocks long.
  5. Check the initial size of the LV:
# lvs vg00/vol_projects
  LV           VG   Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  vol_projects vg00 -wi-a--- 19.00g
  6. Resize the LV without the --resizefs option (we already resized the filesystem in the earlier step):
# lvresize --size 6G /dev/mapper/vg00-vol_projects
  WARNING: Reducing active logical volume to 6.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vol_projects? [y/n]: y
  Reducing logical volume vol_projects to 6.00 GiB
  Logical volume vol_projects successfully resized
  7. Check the new LV size:
# lvs vg00/vol_projects
  LV           VG   Attr     LSize Pool Origin Data%  Move Log Copy%  Convert
  vol_projects vg00 -wi-a--- 6.00g
  8. Mount and check the filesystem size:
# mount /dev/mapper/vg00-vol_projects /mnt
# df -h /mnt
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg00-vol_projects  6.0G  5.3G  402M  94% /mnt

... but I think this way is more complicated :/
