From: Marian Csontos <>
To: LVM general discussion and development <>,
Subject: Re: [linux-lvm] probable lvm thin_pool exhaustion
Date: Wed, 18 Mar 2020 12:45:25 +0100	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 3/11/20 6:24 PM, wrote:
> Hello all,
> I am a total newbie beyond a general knowledge of LVM.
> With this disclaimer out of the way, I have the following problem,
> which may well need some expert knowledge of LVM, because I couldn't
> find a solution online so far :/
> I am booting my system (in my case Qubes, but I suppose that does not
> matter at this point)
> and after entering my LUKS password I get to the dracut emergency shell:
> "Check for pool qubes-dom/pool00 failed (status:1). Manual repair
> required!"
> The only active LV is qubes_dom0/swap.
> All the others are inactive.
> step 1:
> lvm vgscan vgchange -ay
> lvm lvconvert --repair qubes_dom0/pool00
> Result:
> using default stripesize 64.00 KiB.
> Terminate called after throwing an instance of 'std::runtime_error'
> what(): transaction_manager::new_block() couldn't allocate new block
> Child 7212 exited abnormally
> Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed 
> (status:1). Manual repair required!

At first glance this looks like the problem reported in Bug 1763895 
- thin_restore fails with transaction_manager::new_block() couldn't 
allocate new block:
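In case it comes to that, the manual path sketched in lvmthin(7) is to run 
thin_repair onto a fresh LV and then swap that LV in as the pool's metadata. 
The outline below is only a sketch full of assumptions (the LV name and the 
2G size are made up, and it assumes the _tmeta device is readable from the 
emergency shell); take a backup of the metadata device before trying 
anything like it:

```shell
# Sketch only -- LV name and size are assumptions; back up the
# metadata device first (e.g. dd it to a file on other storage).

# Deactivate the pool so its metadata is quiescent.
lvm lvchange -an qubes_dom0/pool00

# Create a fresh LV to receive the repaired metadata; it must be
# at least as large as pool00_tmeta (the VG reports ~15G free).
lvm lvcreate -L 2G -n pool00_meta_new qubes_dom0

# Read the damaged metadata and write a clean tree to the new LV.
thin_repair -i /dev/mapper/qubes_dom0-pool00_tmeta \
            -o /dev/qubes_dom0/pool00_meta_new

# Swap the repaired LV in as the pool's metadata.
lvm lvconvert --thinpool qubes_dom0/pool00 \
              --poolmetadata qubes_dom0/pool00_meta_new
```

This is essentially what `lvconvert --repair` automates; doing it by hand 
only makes sense because the automatic path failed here.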

> step 2:
> since I suspect that my LVM is full (though it does mark 15 g as free)

IIUC it is the metadata which is full, not the data.
What's the size of the pool's _tmeta volume?
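(The hidden _tmeta sub-LV is listed by `lvs -a`; something like the 
following from the emergency shell should show its size and how full 
the metadata is:)

```shell
# -a includes hidden sub-LVs such as pool00_tmeta;
# metadata_percent reports how full the pool metadata is.
lvm lvs -a qubes_dom0 -o lv_name,lv_size,data_percent,metadata_percent
```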

What's `thin_check --version` and `lvm version` output?
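(From the dracut shell that would be, roughly:)

```shell
thin_check --version
lvm version
```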

-- Marian

> i tried the following changes in the /etc/lvm/lvm.conf
> thin_pool_autoextend_threshold = 80
> thin_pool_autoextend_percent = 2 (since the pvs output gives PSize: 
> 465.56g, PFree: 15.78g, I set this to 2% to be overly cautious not to 
> extend beyond the ~15 G marked as free, since idk)
> auto_activation_volume_list = set to hold the group, root, pool00, swap and 
> a VM that I would like to delete to free some space
> volume_list = the same as auto_activation_volume_list
> and tried step 1 again; it did not work, I got the same result as above 
> with qubes_dom0/swap as the only active LV
> step 3: tried
> lvextend -L+1G qubes_dom0/pool00_tmeta
> Result:
> metadata reference count differ for block xxxxxx, expected 0, but got 1 ...
> Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!
> Since I do not know my way around LVM, what do you think would be the 
> best way out of this?
> Adding another external PV? Migrating to a bigger PV?
> I did not play with backup or archive out of fear of losing any 
> unbacked-up data, of which there happens to be a bit :|
> Any help will be highly appreciated!
> Thanks in advance,
> m
