From: Zdenek Kabelac <zkabelac@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>,
"Lentes, Bernd" <bernd.lentes@helmholtz-muenchen.de>
Cc: Zdenek Kabelac <zkabelac@redhat.com>
Subject: Re: [linux-lvm] can't remove snapshot
Date: Thu, 11 Apr 2019 17:59:01 +0200
Message-ID: <37332d31-8ab2-af15-1e0b-cb26a2421603@redhat.com>
In-Reply-To: <787104895.51532.1554994161932.JavaMail.zimbra@helmholtz-muenchen.de>
On 11. 04. 19 at 16:49, Lentes, Bernd wrote:
> ----- On Apr 11, 2019, at 2:32 PM, Bernd Lentes bernd.lentes@helmholtz-muenchen.de wrote:
>
>> ----- On Apr 11, 2019, at 1:09 PM, Zdenek Kabelac zkabelac@redhat.com wrote:
>>
>>
>>>>>> Hello list,
>>>>>>
>>>>>> I have a two-node HA-cluster which uses local and cluster LVM.
>>>>>> cLVM is currently stopped, I try to remove a snapshot from the root lv which
>>>>>> is located on a local VG.
>>>>>> I get this error:
>>>>>> ha-idg-2:/mnt/spp # lvremove -fv vg_local/lv_snap_pre_sp4
>>>>>> connect() failed on local socket: No such file or directory
>>>>>> Internal cluster locking initialisation failed.
>>>>>> WARNING: Falling back to local file-based locking.
>>>>>> Volume Groups with the clustered attribute will be inaccessible.
>>>>>> Archiving volume group "vg_local" metadata (seqno 26).
>>>>>> Removing snapshot volume vg_local/lv_snap_pre_sp4.
>>>>>> Loading table for vg_local-lv_root (254:8).
>>>>>> device-mapper: reload ioctl on (254:8) failed: Invalid argument
>>>>>> Failed to refresh lv_root without snapshot.
>>>>>>
>>>>>
>
>
>>>
>>> What is the related error message from kernel (IOCTL) - check and show dmesg
>>> messages. Eventually please supply
>>>
>>> 'dmsetup table'
>>> 'dmsetup status'
>>> 'dmsetup info -c'
>>
>> Hi Zdenek,
>>
>> thanks for your support. I managed to delete the snapshot via "dmsetup remove",
>> but in lvs and lvdisplay it still appears.
>> And i'm still not able to remove it via lvremove:
>>
>> ha-idg-2:~ # lvremove -fv /dev/vg_local/lv_snap_pre_sp4
>> connect() failed on local socket: No such file or directory
>> Internal cluster locking initialisation failed.
>> WARNING: Falling back to local file-based locking.
>> Volume Groups with the clustered attribute will be inaccessible.
>> Archiving volume group "vg_local" metadata (seqno 26).
>> Removing snapshot volume vg_local/lv_snap_pre_sp4.
>> Loading table for vg_local-lv_root (254:9).
>> device-mapper: reload ioctl on (254:9) failed: Invalid argument
>> Failed to refresh lv_root without snapshot.
>>
>> dmesg:
>> [88310.980351] device-mapper: ioctl: can't change device type after initial
>> table load.
>>
>> dmsetup:
>> ha-idg-2:~ # dmsetup -c table
>> vg_local-lv_root-real: 0 209715200 linear 254:4 167774208
>> vg_local-lv_var: 0 83886080 linear 254:4 83888128
>> 3600508b1001c5037520913a9b581d78d-part3: 0 2081866639 linear 254:0 262248448
>> 3600508b1001c5037520913a9b581d78d-part2: 0 262144000 linear 254:0 104448
>> 3600c0ff00012824b04af7a5201000000: 0 3738281088 multipath 1 queue_if_no_path 1
>> alua 2 1 service-time 0 1 2 8:32 1 1 service-time 0 1 2 8:16 1 1
>> 3600508b1001c5037520913a9b581d78d-part1: 0 102400 linear 254:0 2048
>> 3600c0ff00012824b04af7a5201000000-part3: 0 626951296 linear 254:1 3111329792
>> 3600c0ff00012824b04af7a5201000000-part2: 0 999999744 linear 254:1 2111328256
>> vg_local-lv_tmp: 0 83886080 linear 254:4 2048
>> vg_local-lv_root: 0 209715200 snapshot-origin 254:8
>> 3600c0ff00012824b04af7a5201000000-part1: 0 2111325952 linear 254:1 2048
>> 3600508b1001c5037520913a9b581d78d: 0 2344115120 multipath 1 queue_if_no_path 0 1
>> 1 service-time 0 1 2 8:0 1 1
Hi

So here is the reason:

  ioctl: can't change device type after initial table load.
You already have a snapshot-origin target in the table - which is likely not
what lvm2 expected. You could either try 'lvchange --refresh' to get the DM
table into a matching state - or reboot and start from the beginning.
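Something along these lines might work (a sketch only, using the VG/LV names
from your output; run as root):

```shell
# Rebuild the DM tables for the origin LV from the lvm2 metadata,
# so the kernel state matches what lvm2 expects.
lvchange --refresh vg_local/lv_root

# Then retry removing the snapshot.
lvremove -fv vg_local/lv_snap_pre_sp4

# Verify the snapshot is gone.
lvs vg_local
```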
Clearly you are not supposed to partially modify DM table targets yourself
while lvm2 holds the metadata state for them - so at the moment it looks like
lvm2 cannot proceed with the command, as the content of the DM node is
different and the transition is not allowed.
lvm2 should probably detect this case sooner and report an error about the
incompatible state of the device for the present metadata (but that would not
help you resolve the problem).
So what you can do is probably restore the metadata you had before you took
your snapshot and try to change into that table - but looking at your current
DM table, such a transition might be non-trivial.
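A rough sketch of the metadata-restore path (the archive filename below is
purely illustrative - pick the real pre-snapshot version from the listing):

```shell
# Save the current metadata first, in case you need to go back.
vgcfgbackup vg_local

# List the archived metadata versions for the VG; each entry shows
# the command that created it and a timestamp.
vgcfgrestore --list vg_local

# Restore the archive taken just before the snapshot was created
# (hypothetical filename - substitute the one from the listing above).
vgcfgrestore -f /etc/lvm/archive/vg_local_NNNNN.vg vg_local
```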
Is there a reason why you cannot reboot? That is IMHO the simplest fix.
Regards
Zdenek
Thread overview: 10+ messages
2019-04-09 13:00 [linux-lvm] can't remove snapshot Lentes, Bernd
2019-04-09 13:24 ` Zdenek Kabelac
2019-04-09 13:33 ` Lentes, Bernd
2019-04-10 13:36 ` Lentes, Bernd
2019-04-11 11:09 ` Zdenek Kabelac
2019-04-11 12:32 ` Lentes, Bernd
2019-04-11 14:49 ` Lentes, Bernd
2019-04-11 15:59 ` Zdenek Kabelac [this message]
2019-04-11 17:25 ` Lentes, Bernd
2019-04-12 16:18 ` Lentes, Bernd