I have a setup with a logical volume lvvm in a volume group vgsata. The lvvm was created with the thin (-T) option:

sudo lvcreate -L 925G -T vgsata/lvvm
lsblk
...
sdc                     8:32   0 931,5G  0 disk  
└─sdc1                  8:33   0 931,5G  0 part  
  ├─vgsata-lvvm_tmeta 252:0    0   116M  0 lvm   
  │ └─vgsata-lvvm     252:2    0   925G  0 lvm   
  └─vgsata-lvvm_tdata 252:1    0   925G  0 lvm   
    └─vgsata-lvvm     252:2    0   925G  0 lvm   

When the system is started, I can mount the volume from the terminal with no problem:

sudo mount /dev/vgsata/lvvm /VM

But when I try to accomplish this via fstab, by adding the line

/dev/mapper/vgsata-lvvm /VM ext4    defaults    0   2

the boot process takes 1-2 minutes longer and the volume is not mounted. In boot.log I find some timeout-related messages:

[ TIME ] Timed out waiting for device dev-m…m.device - /dev/mapper/vgsata-lvvm.
[DEPEND] Dependency failed for systemd-fsck…m Check on /dev/mapper/vgsata-lvvm.
[DEPEND] Dependency failed for VM.mount - /VM.
[DEPEND] Dependency failed for local-fs.target - Local File Systems.

I also tried using UUIDs, but aside from the fact that this does not seem to be the recommended way for LVM, it did not solve the problem.

So what would be the best way to mount a logical volume at boot time? Or did I make a mistake when creating the thin volume?

  • 1) Have you tried writing /dev/vgsata/lvvm in /etc/fstab ? 2) Does it fsck cleanly? Commented Mar 9 at 12:41
  • I guess dev/mapper/vgsata-lvvm is just a typo (please correct that). It seems that LVM activated the LV too late. I am not aware of any possible reason for that. Are you using other LVM LVs during the boot process? Commented Mar 9 at 17:35
  • The missing leading "/" was indeed a typo in my post. Commented Mar 12 at 21:26
  • Thank you @telcoM: My mistake was indeed that I did not create a thin pool before the logical volume. I don't know why LVM did not complain, or why mounting worked from the terminal. But after fixing that, it worked right out of the box. If you want to post this as an answer, I would be happy to accept it. Commented Mar 12 at 21:30

1 Answer

With thin LVs, there should first be a thin-pool LV, which serves as the backing storage for one or more thin LVs; the thin LVs can then "overbook" the storage of the pool. To me, it looks as if only the thin-pool LV may have been created and the actual thin LV may be missing: I would want to double-check the attributes of the LV(s) in the volume group.
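If that is the case, the usual sequence is to create the pool first and then carve thin volumes out of it. A minimal sketch (the pool name thinpool and the 900G virtual size are examples I am assuming, not taken from the question):

```shell
# Create a thin pool occupying 925G of the volume group
sudo lvcreate -L 925G -T vgsata/thinpool

# Create a thin LV inside the pool; -V sets the virtual
# (overbookable) size seen by the filesystem
sudo lvcreate -V 900G -T vgsata/thinpool -n lvvm

# Put a filesystem on the new thin LV
sudo mkfs.ext4 /dev/vgsata/lvvm
```

The virtual size given with -V may exceed the pool size; that is the "overbooking" mentioned above.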

Please run sudo lvs -o+lv_when_full and add the output to your initial post. The output of sudo lvdisplay /dev/vgsata/lvvm might be helpful too.
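For reference, in the lvs Attr column the first character distinguishes the LV types: a thin pool shows t, its data sub-LV T, and an actual thin volume V. A quick way to check (field names as in lvs(8)):

```shell
# List each LV's name, attribute string, and, for thin LVs,
# the pool it lives in
sudo lvs -o lv_name,lv_attr,pool_lv vgsata
```

If no LV in the output has V as the first attribute character, there is no thin volume, only the pool.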

I don't know why manual mounting works: perhaps there is a large file at /dev/vgsata/lvvm that is acting as a filesystem image? Or maybe the physical storage space available for the thin LV is now 100% full?
