Bug#659762: lvm2: LVM commands freeze after snapshot delete fails
Chris Dunlop
chris at onthe.net.au
Thu Apr 26 06:04:18 UTC 2012
On Mon, Feb 13, 2012 at 04:03:53PM +0000, Paul LeoNerd Evans wrote:
> Package: lvm2
> Version: 2.02.88-2
> Severity: normal
>
> Tried and failed to remove an LVM snapshot:
>
>
> root@cel:~
> # lvremove vg_cel/backups-20110930
> Do you really want to remove active logical volume backups-20110930? [y/n]: ^C
> Logical volume backups-20110930 not removed
>
> root@cel:~
> # lvchange -an vg_cel/backups-20110930
> Can't change snapshot logical volume "backups-20110930"
>
> root@cel:~
> # lvremove vg_cel/backups-20110930
> Do you really want to remove active logical volume backups-20110930? [y/n]: y
> Unable to deactivate open vg_cel-backups--20110930-cow (254:35)
> Failed to resume backups-20110930.
> libdevmapper exiting with 7 device(s) still suspended.
>
>
> At this point now the entire LVM subsystem is totally frozen. No commands
> ever complete. Any LVM-related command hangs and is not SIGKILLable.
"Me too", with the same lvm2 package, on a home-grown linux-3.3.1.
Synopsis...
I was able to get out of this state without rebooting using:
dmsetup resume /dev/mapper/vg00-foo
dmsetup remove /dev/mapper/vg00-foo-real
Expansion...
I had 2 snapshots, foo and foo2, mounted at /mnt/foo and
/mnt/foo2. I tried removing them like:
# for d in foo foo2
do
umount /mnt/${d} && lvremove -f vg00/${d}-snap
done
...and got:
Logical volume "foo-snap" successfully removed
Unable to deactivate open vg00-foo2--snap-cow (253:10)
Failed to resume foo2-snap.
libdevmapper exiting with 1 device(s) still suspended.
After that, my 'lvs' hung as Paul describes.
Note: this was actually the second time I've had this problem
under the same circumstances; the first time I ended up
reluctantly rebooting. I wonder if doing multiple lvremoves in
quick succession has anything to do with it?
This time I looked a little deeper into it and found this bug
report, which prompted me to look at the dmsetup side of things,
with which I've been extravagantly unfamiliar. So...
# dmsetup info /dev/mapper/*foo2*
Name: vg00-foo2
State: SUSPENDED <<<<<<<<<<<<<<<
Read Ahead: 26624
Tables present: LIVE & INACTIVE <<<<<<<<<<<<<<<
Open count: 2
Event number: 0
Major, minor: 253, 2
Number of targets: 1
UUID: LVM-nxK1Vn04ULIJaEIiwxsldXVoJAS9rp3APs0zI7cK2SDQf1lM2CHiXAfQsvKRfeWg
Name: vg00-foo2-real
State: ACTIVE
Read Ahead: 0
Tables present: LIVE
Open count: 1
Event number: 0
Major, minor: 253, 9
Number of targets: 3
UUID: LVM-nxK1Vn04ULIJaEIiwxsldXVoJAS9rp3APs0zI7cK2SDQf1lM2CHiXAfQsvKRfeWg-real
Name: vg00-foo2--snap
State: ACTIVE
Read Ahead: 26624
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 253, 8
Number of targets: 1
UUID: LVM-nxK1Vn04ULIJaEIiwxsldXVoJAS9rp3AV6mYEfmfj24I3epL0ldVOHeOXfLDi3SI
Name: vg00-foo2--snap-cow
State: ACTIVE
Read Ahead: 0
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 253, 10
Number of targets: 1
UUID: LVM-nxK1Vn04ULIJaEIiwxsldXVoJAS9rp3AV6mYEfmfj24I3epL0ldVOHeOXfLDi3SI-cow
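As a side note for anyone triaging a similar hang: the SUSPENDED state in the listing above is the tell-tale sign. A small filter can pick out the stuck devices; this is a hypothetical helper, not part of lvm2, and it assumes the `Name:` / `State:` field layout shown above.

```shell
#!/bin/sh
# List every device-mapper device whose State is SUSPENDED.
# Reads `dmsetup info`-style output on stdin, using the
# "Name:" / "State:" field layout shown in the listing above.
list_suspended() {
    awk '/^Name:/  { name = $2 }
         /^State:/ && $2 == "SUSPENDED" { print name }'
}

# Typical use (needs root): dmsetup info | list_suspended
```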
Not knowing what I was doing, but prompted by the SUSPENDED state,
I tried to resume that device:
# dmsetup resume /dev/mapper/vg00-foo2
At this point the previously-hung 'lvs' returned. A new 'lvs' showed
the snapshot in question was no longer present; however, I still had:
# ls -l /dev/mapper/*foo2*
lrwxrwxrwx 1 root root 7 2012-04-26 14:51 vg00-foo2 -> ../dm-2
lrwxrwxrwx 1 root root 7 2012-04-26 14:21 vg00-foo2-real -> ../dm-9
Using dmsetup to remove that extra device worked:
# dmsetup remove /dev/mapper/vg00-foo2-real
# ls -l /dev/mapper/*foo2*
lrwxrwxrwx 1 root root 7 2012-04-26 14:51 vg00-foo2 -> ../dm-2
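To wrap up, the whole recovery can be sketched as a dry run that only prints the dmsetup commands to consider, rather than executing anything. This is a hypothetical helper, not part of lvm2: it parses `dmsetup info` output on stdin, assumes the field layout shown earlier, and the printed commands should still be sanity-checked by hand (resume the suspended origin first; only then do leftover -real/-cow devices become removable).

```shell
#!/bin/sh
# Dry-run sketch of the recovery used above: for each device in
# `dmsetup info` output, print `dmsetup resume` if it is SUSPENDED,
# or `dmsetup remove` for a leftover -real/-cow device whose open
# count is 0. Nothing is executed; review the output before acting.
plan_recovery() {
    awk '
        /^Name:/  { name = $2 }
        /^State:/ { state = $2 }
        /^Open count:/ {
            if (state == "SUSPENDED")
                print "dmsetup resume /dev/mapper/" name
            else if ($3 == 0 && name ~ /-(real|cow)$/)
                print "dmsetup remove /dev/mapper/" name
        }'
}

# Typical use (needs root): dmsetup info | plan_recovery
```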
Cheers,
Chris