From: Duncan Townsend <duncancmt@gmail.com>
To: Zdenek Kabelac <zkabelac@redhat.com>
Cc: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] thin: pool target too small
Date: Wed, 30 Sep 2020 13:00:21 -0500
Message-ID: <CAODnkUDLbQ12itWB8OaOzbwhem8ozF+L4eq0z+=KhJX6fQ_=eQ@mail.gmail.com>
In-Reply-To: <d572fdec-c2f1-b02a-7697-45ce932f9220@redhat.com>
On Tue, Sep 29, 2020, 10:54 AM Zdenek Kabelac <zkabelac@redhat.com> wrote:
> On 29. 09. 2020 at 16:33, Duncan Townsend wrote:
> > On Sat, Sep 26, 2020, 8:30 AM Duncan Townsend <duncancmt@gmail.com> wrote:
> >
> > > > There were further error messages as further snapshots were attempted,
> > > > but I was unable to capture them as my system went down. Upon reboot,
> > > > the "transaction_id" message that I referred to in my previous message
> > > > was repeated (but with increased transaction IDs).
> > >
> > > For a better fix, it would need to be better understood what happened
> > > in parallel while 'lvm' inside dmeventd was resizing the pool data.
> >
>
> So lvm2 has been fixed upstream to report more educational messages to
> the user - although it still requires some experience in managing
> thin-pool kernel metadata and lvm2 metadata.
>
That's good news! However, I believe I lack the requisite experience. Is
there some documentation that I ought to read as a starting point? Or is it
best to just read the source?
> > To the best of my knowledge, no other LVM operations were in flight at
> > the time. The script that I use issues LVM commands strictly
>
> In your case - dmeventd did an 'unlocked' resize while another command
> was taking a snapshot - and it happened that the 'snapshot' sequence
> won - so until the thin-pool was reloaded, lvm2 did not spot the
> difference. (Which is simply a bad race, caused by badly working locking
> on your system.)
>
After reading more about LVM locking, it looks like the original issue
might have been that the locking directory lives on an LV instead of on a
non-LVM-managed block device. (Although the locking directory is on a
different VG, on a different PV, from the one that had the error.)
Is there a way to make dmeventd (or any other LVM program) abort if this
locking fails? Should I switch to using a clustered locking daemon (even
though I have only the single, non-virtualized host)?
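For reference, these are the locking-related lvm.conf settings I've been
reading about. This is only my reading of lvm.conf(5) on my version; the
option names and defaults may differ on other releases, so please correct
me if I've misunderstood any of them:

    # /etc/lvm/lvm.conf (excerpt) - my current understanding, not a recommendation
    global {
        # directory holding the file-based locks; on my system this path
        # currently ends up on an LV-backed filesystem, which may be the problem
        locking_dir = "/run/lock/lvm"
        # when 0, commands fail immediately instead of waiting if a lock
        # request would block
        wait_for_locks = 1
        # would only be 1 if lvmlockd were used for shared/clustered locking
        use_lvmlockd = 0
    }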
> > Would it be reasonable to use vgcfgrestore again on the
> > manually-repaired metadata I used before? I'm not entirely sure what
>
> You will need to vgcfgrestore - but I think you've misused the recovered
> piece I passed along, where I specifically asked you to only replace the
> specific segments of the resized thin-pool within your latest VG
> metadata - since those likely have all the proper mappings to thin LVs.
>
All I did was use vgcfgrestore to apply the metadata file attached to your
previous private email. I had to edit the transaction number, as I noted
previously; that was a single-line change. Was that the wrong thing to do?
I lack experience with lvm/thin metadata, so I am flying a bit blind
here. I apologize if I've made things worse.
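For concreteness, the whole intervention looked roughly like the sketch
below. The VG name, file path, and transaction numbers are placeholders
for illustration, not the real values from your file:

    # the thin-pool segment in the backup carried a transaction_id one behind
    # what the kernel reported, so I edited that single line before restoring:
    #     transaction_id = 41    <- value in the file as sent (placeholder)
    #     transaction_id = 42    <- value I wrote (placeholder)

    # then I applied the edited file back onto the VG (placeholder names):
    vgcfgrestore --force --file /root/metadata_from_zdenek.vg myvg
    # (as far as I understand, recent lvm2 requires --force here when the
    #  VG contains thin pool volumes)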
> While you have taken the metadata from the 'resize' moment - you've lost
> all the thinLV lvm2 metadata for the ones created later.
>
> I'll try to make one for you.
>
Thank you very much. I am extremely grateful that you've helped me so much
in repairing my system.
> > to look for while editing the XML from thin_dump, and I would very
> > much like to avoid causing further damage to my system. (Also, FWIW,
> > thin_dump appears to segfault when run with musl-libc instead of
>
> Well - lvm2 is a glibc-oriented project - so users of such 'esoteric'
> distributions need to be experts on their own.
>
> If you can provide a coredump, or even better a patch for the crash, we
> might replace the code with something more usable - but there is zero
> testing with anything other than glibc...
>
Noted. I believe I'll be switching to glibc because there are a number of
other packages that are broken for this distro.
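Before I switch, though, I'll try to get you that coredump. My plan is
roughly the following (the device name is a placeholder, and I'm assuming
core files are enabled in the usual way on this box, which I still need to
verify):

    # allow core dumps in this shell, then reproduce the crash
    ulimit -c unlimited
    # dump the pool metadata to XML on stdout (run against an inactive pool
    # or a metadata snapshot, per the usual caveats)
    thin_dump /dev/mapper/myvg-mypool_tmeta > /tmp/pool_dump.xml
    # if it segfaults, the core ends up wherever kernel.core_pattern points
    # (the current directory on a plain default setup); that's what I'd send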
If you're interested, this is the issue I've opened with my distro about
the crash: https://github.com/void-linux/void-packages/issues/25125 . I
despair of it receiving much attention, given that not even gdb works
properly.
Thanks again!
--Duncan Townsend
P.S. This was written on mobile. Please forgive my typos.