linux-lvm.redhat.com archive mirror
From: maiski@maiski.net
To: linux-lvm@redhat.com
Subject: [linux-lvm] probable lvm thin_pool exhaustion
Date: Tue, 10 Mar 2020 20:25:35 +0100
Message-ID: <20200310202535.Horde.eID8hNJGj7q-b0zb4iXm6A3@webmail.df.eu>


Hello all,

I am a total newbie apart from a general knowledge of LVM.
With that disclaimer out of the way, I have the following problem,
which may well need some expert knowledge of LVM, because I couldn't
find a solution online so far :/

I am booting my system (Qubes in my case, but I suppose that does not
matter at this point), and after entering my LUKS password I land in
the dracut emergency shell with:
"Check for pool qubes_dom0/pool00 failed (status:1). Manual repair
required!"
The only active LV is qubes_dom0/swap.
All the others are inactive.
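
For orientation, the state can be inspected read-only from the dracut
shell before attempting anything; a minimal sketch, assuming the usual
'lvm' wrapper binary that dracut ships:

  lvm pvs                  # PV size and free space (PSize/PFree)
  lvm vgs qubes_dom0       # VG totals (VSize/VFree)
  lvm lvs -a qubes_dom0    # all LVs, incl. hidden pool00_tmeta/_tdata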

step 1:
  lvm vgscan
  lvm vgchange -ay
  lvm lvconvert --repair qubes_dom0/pool00
Result:
  Using default stripesize 64.00 KiB.
  terminate called after throwing an instance of 'std::runtime_error'
    what():  transaction_manager::new_block() couldn't allocate new block
  Child 7212 exited abnormally
  Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed
  (status:1). Manual repair required!
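
The "couldn't allocate new block" error suggests the repair ran out of
room for the repaired metadata copy. A sketch of the manual
metadata-swap route described in lvmthin(7), assuming the VG has enough
free extents and the pool stays inactive; the LV names meta_work and
meta_fixed are made up for illustration:

  # two spare LVs: one ends up holding the damaged metadata after the
  # swap, the other receives the thin_repair output
  lvcreate -an -Zn -L 2G -n meta_work qubes_dom0
  lvcreate -an -Zn -L 2G -n meta_fixed qubes_dom0
  # swap meta_work in as pool00's metadata; the damaged metadata then
  # becomes visible as qubes_dom0/meta_work
  lvconvert -y --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_work
  lvchange -ay qubes_dom0/meta_work qubes_dom0/meta_fixed
  # repair from the damaged copy into the fresh LV, then verify it
  thin_repair -i /dev/qubes_dom0/meta_work -o /dev/qubes_dom0/meta_fixed
  thin_check /dev/qubes_dom0/meta_fixed
  # swap the repaired copy back in as the pool's metadata
  lvchange -an qubes_dom0/meta_work qubes_dom0/meta_fixed
  lvconvert -y --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_fixed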

step 2:
Since I suspect that my LVM is full (though it does show 15 G as free),
I tried the following changes in /etc/lvm/lvm.conf:
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 2 (since the pvs output gives PSize
465.56g / PFree 15.78g, I set this to 2% to be overly cautious and not
extend beyond the 15 G marked as free)
auto_activation_volume_list = set to hold the VG, root, pool00, swap,
and a VM volume that I would like to delete to free some space
volume_list = the same as auto_activation_volume_list
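
For concreteness, those settings would look roughly like this in the
activation section of /etc/lvm/lvm.conf; the exact LV names, including
the VM volume, are placeholders:

  activation {
      thin_pool_autoextend_threshold = 80
      thin_pool_autoextend_percent = 2
      # "vm-work-private" is a made-up stand-in for the VM volume
      auto_activation_volume_list = [ "qubes_dom0/root", "qubes_dom0/pool00",
                                      "qubes_dom0/swap", "qubes_dom0/vm-work-private" ]
      volume_list = [ "qubes_dom0/root", "qubes_dom0/pool00",
                      "qubes_dom0/swap", "qubes_dom0/vm-work-private" ]
  }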

and tried step 1 again; it did not work and gave the same result as
above, again with only the swap LV active.

step 3: tried
  lvextend -L+1G qubes_dom0/pool00_tmeta
Result:
  metadata reference counts differ for block xxxxxx, expected 0, but got 1
  ...
  Check for pool qubes_dom0/pool00 failed (status:1). Manual repair required!
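
As an aside: growing a thin pool's metadata is normally requested on
the pool itself rather than on the hidden tmeta LV; a sketch, though it
can only succeed once the pool passes its thin_check, so it is not a
repair step by itself:

  lvextend --poolmetadatasize +1G qubes_dom0/pool00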

Since I do not know my way around LVM: what do you think would be the
best way out of this? Adding another external PV (a sketch below)?
Migrating to a bigger PV? I have not played with backup or archive out
of fear of losing any un-backed-up data, of which there happens to be
quite a bit :|
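
If the external-PV route is the answer, a minimal sketch of it, where
/dev/sdb stands in for whatever the external disk shows up as:

  pvcreate /dev/sdb                      # initialize the disk as a PV
  vgextend qubes_dom0 /dev/sdb           # add it to the VG for free space
  lvconvert --repair qubes_dom0/pool00   # then retry the automatic repair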
Any help will be highly appreciated!

Thanks in advance,
m

