I dug a bit further and I think I found what is blocking LVM.

When I run "lvremove", I see that the involved "dm" devices are not deleted but left SUSPENDED:



[root@node2 ~]# lvremove -f /dev/vgtest01/snap20
  Logical volume "snap20" successfully removed
[root@node2 ~]# dmsetup -vvv status vgtest01-snap20
dm version   OF   [16384]
dm status vgtest01-snap20  OF   [16384]
Name:              vgtest01-snap20
State:             SUSPENDED
vgtest01-snap20: read ahead is 256
Read Ahead:        256
Tables present:    LIVE & INACTIVE
Open count:        0
Event number:      0
Major, minor:      253, 4
Number of targets: 1
UUID: LVM-fU6kuI1yVWxAjsu1WmL1TmvishGAZaZNWytFJGEg5qYFByZ79PjPNoKPnf8KyiiZ

0 409600 snapshot 16/40960 16
[root@node2 ~]# dmsetup -vvv status vgtest01-snap20-cow
dm version   OF   [16384]
dm status vgtest01-snap20-cow  OF   [16384]
Name:              vgtest01-snap20-cow
State:             SUSPENDED
vgtest01-snap20-cow: read ahead is 256
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 3
Number of targets: 1
UUID: LVM-fU6kuI1yVWxAjsu1WmL1TmvishGAZaZNWytFJGEg5qYFByZ79PjPNoKPnf8KyiiZ-cow

0 40960 linear

In that situation, any "lvm" command I execute hangs forever: the suspended "dm" devices block all I/O.
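To spot which devices are stuck in this state, the "State:" field of "dmsetup info" can be parsed. This is just a sketch (run as root; the device names come from my setup):

```shell
# List every device-mapper device whose state is SUSPENDED.
# Sketch only: parses the human-readable output of "dmsetup info".
for dev in $(dmsetup ls | awk '{print $1}'); do
    state=$(dmsetup info "$dev" | awk '/^State:/ {print $2}')
    if [ "$state" = "SUSPENDED" ]; then
        echo "$dev is SUSPENDED"
    fi
done
```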

If, before or after trying the command, I resume them:
[root@node2 ~]# dmsetup resume vgtest01-snap20
[root@node2 ~]# dmsetup resume vgtest01-snap20-cow

the command is unblocked, and "lvdisplay" works perfectly.
But something in "lvm" is still inconsistent: I am not able to create new snapshots:

[root@node2 ~]# lvcreate -s -L 20M -n snap30 /dev/vgtest01/lvtest-snap01
  /dev/vgtest01/snap30: not found: device not cleared
  Aborting. Failed to wipe snapshot exception store.
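For what it's worth, the manual resume workaround above can be turned into a loop that resumes every suspended dm device in one go. A sketch, assuming root and that resuming is actually safe on your devices:

```shell
# Resume every device-mapper device that dmsetup reports as SUSPENDED.
# Assumes root; "dmsetup ls" prints the device name in column 1.
for dev in $(dmsetup ls | awk '{print $1}'); do
    if dmsetup info "$dev" | grep -q '^State:.*SUSPENDED'; then
        echo "resuming $dev"
        dmsetup resume "$dev"
    fi
done
```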


So, at this point: does anyone know whether this situation implies something critical? Is there any way to solve it without restarting clvm?

I will keep at it.

Thanks!