From: Ming-Hung Tsai <mingnus@gmail.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] probable lvm thin_pool exhaustion
Date: Fri, 13 Mar 2020 02:11:41 +0800
Message-ID: <CAAYit8RAwszwqP+c+4Dp9iTNiWYaici_nKEm1zdZeHD_Cb3dCQ@mail.gmail.com>
In-Reply-To: <20200310202535.Horde.eID8hNJGj7q-b0zb4iXm6A3@webmail.df.eu>

According to step 3, it sounds like the mapping tree is healthy, so the
metadata could simply be repaired by lvconvert/thin_repair. The error
message might be caused by one of the following:
1. There are too many snapshots, which exhausts the capacity of the
metadata spare. Expanding the metadata spare might work (see the sketch
after this list).
2. Bugs in thin_repair. Which version of thin-provisioning-tools are
you using?
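
If the spare really is too small, one manual route is to repair into a
new, larger LV and then swap it into the pool. A rough sketch, assuming
the VG has free space; the LV name "repaired_meta" and the 2G size are
only examples, adjust them to your pool:

$ thin_check --version     # worth reporting in any case
$ lvcreate -L 2G -n repaired_meta qubes_dom0
$ lvchange -ay qubes_dom0/pool00_tmeta
$ thin_repair -i /dev/mapper/qubes_dom0-pool00_tmeta -o /dev/qubes_dom0/repaired_meta
$ lvchange -an qubes_dom0/pool00_tmeta
$ lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/repaired_meta   # pool must be inactive; this swaps the metadata in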

Also, before running lvconvert, I suggest running thin_check first, to
check whether the metadata is suitable for automatic repair:

$ lvchange -ay qubes_dom0/pool00_tmeta
$ thin_check /dev/mapper/qubes_dom0-pool00_tmeta
$ lvchange -an qubes_dom0/pool00_tmeta
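
As far as I understand the tools, errors confined to the space maps
(reference counts, like the message in your step 3) are rebuilt from
scratch by thin_repair, so they are usually safe to repair
automatically; it's damage in the mapping tree itself that can cost
you mappings.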

(Maybe "lvconvert --repair" could provide options for setting repair
levels, to prevent novice users from discarding missing mappings.)

If you're not sure about the detailed steps, you can upload the
compressed metadata for further analysis:
$ lvchange -ay qubes_dom0/pool00_tmeta
$ dd if=/dev/mapper/qubes_dom0-pool00_tmeta of=tmeta.bin
$ tar -czf tmeta.tar.gz tmeta.bin
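$ lvchange -an qubes_dom0/pool00_tmeta   # deactivate again once the copy is taken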

Finally, the options in step 2 are for dmeventd to expand active thin
pools online. They do not help with expanding an offline, broken thin
pool, even though the VG is not full.
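
For reference, the dmeventd autoextend settings live in the activation
section of /etc/lvm/lvm.conf. I'm assuming those are the options you
changed in step 2; the values below are just illustrative:

activation {
	# With the pool active and monitored, dmeventd grows it by
	# thin_pool_autoextend_percent once usage crosses the threshold.
	thin_pool_autoextend_threshold = 70
	thin_pool_autoextend_percent = 20
}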

On Thu, Mar 12, 2020 at 4:14 PM <maiski@maiski.net> wrote:
>
> step 1:
> lvm vgscan
> lvm vgchange -ay
> lvm lvconvert --repair qubes_dom0/pool00
> Result:
> using default stripesize 64.00 KiB.
> Terminate called after throwing an instance of 'std::runtime_error'
> what(): transaction_manager::new_block() couldn't allocate new block
> Child 7212 exited abnormally
> Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed (status:1). Manual repair required!
>
> step 2:
> since I suspect that my LVM is full (though it does mark 15 G as free)
> I tried the following changes in /etc/lvm/lvm.conf
...
> and tried step 1 again; it did not work, I got the same result as above, with only qubes_swap active
>
> step 3: tried
> lvextend -L+1G qubes_dom0/pool00_tmeta
> Result:
> metadata reference count differ for block xxxxxx, expected 0, but got 1 ...
> Check for pool qubes_dom0/pool00 failed (status:1). Manual repair required!

