linux-lvm.redhat.com archive mirror
* [linux-lvm] probable lvm thin_pool exhaustion
@ 2020-03-10 19:25 maiski
  2020-03-12 18:11 ` Ming-Hung Tsai
  0 siblings, 1 reply; 4+ messages in thread
From: maiski @ 2020-03-10 19:25 UTC (permalink / raw)
  To: linux-lvm


Hello all,

I am a total newbie beyond a general knowledge of LVM.
With that disclaimer out of the way, I have the following problem,
which may well need some expert knowledge of LVM, because I couldn't
find a solution online so far :/

I am booting my system (in my case it is Qubes, but I suppose that does
not matter at this point), and after entering my LUKS password I end up
in the dracut emergency shell with:
"Check for pool qubes-dom/pool00 failed (status:1). Manual repair
required!"
The only active LV is qubes_dom0/swap; all the others are inactive.
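
(For reference, a sketch of how the overall state can be inspected from
the dracut emergency shell; the VG name is the one from above, and the
"lvm" prefix is how the subcommands are reached there:)

lvm pvs
lvm vgs qubes_dom0
lvm lvs -o lv_name,lv_attr,lv_size qubes_dom0   # 5th lv_attr character is 'a' for active LVs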

Step 1:
lvm vgscan
lvm vgchange -ay
lvm lvconvert --repair qubes_dom0/pool00
Result:
using default stripesize 64.00 KiB.
Terminate called after throwing an instance of 'std::runtime_error'
what(): transaction_manager::new_block() couldn't allocate new block
Child 7212 exited abnormally
Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed
(status:1). Manual repair required!
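
(As far as I understand lvmthin(7), lvconvert --repair runs thin_repair
and writes the repaired metadata into the VG's hidden spare LV, so a
transaction_manager "couldn't allocate new block" usually means the
device being written to ran out of room. A sketch of how to check how
full the metadata is and how large the relevant LVs are:)

lvm lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0
# pool00_tmeta and [lvol0_pmspare] should show up here;
# metadata_percent close to 100 on pool00 would support the exhaustion theory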

step 2:
since i suspect that my lvm is full (though it does mark 15 g as free)
i tried the following changes in the /etc/lvm/lvm.conf
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 2 (Since my the pvs output gives PSize:
465.56g Pfree 15.78g, I set this to 2% to be overly cautious not to extend
beyond the 15 G marked as free, since idk)
auto_activation_volume_list = to hold the group, root, pool00, swap and a
vm that would like to delete to free some space
volume_list = the same as auto_activation_volume_list

Then I tried step 1 again; it did not work and gave the same result as
above, with qubes_dom0/swap as the only active LV.
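
(For reference, roughly what those lvm.conf changes look like; the VM
entry is a placeholder since I am not naming the actual VM, and as far
as I can tell the autoextend settings are only acted on by dmeventd
while the pool is active and monitored, so they do nothing from the
emergency shell:)

# /etc/lvm/lvm.conf, activation section
activation {
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 2
    # "qubes_dom0/some-vm" is a placeholder; a bare VG name entry
    # already covers every LV in that VG
    auto_activation_volume_list = [ "qubes_dom0", "qubes_dom0/root", "qubes_dom0/pool00", "qubes_dom0/swap", "qubes_dom0/some-vm" ]
    volume_list = [ "qubes_dom0", "qubes_dom0/root", "qubes_dom0/pool00", "qubes_dom0/swap", "qubes_dom0/some-vm" ]
}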

Step 3:
I tried
lvextend -L+1G qubes_dom0/pool00_tmeta
Result:
metadata reference count differ for block xxxxxx, expected 0, but got 1
...
Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!
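
(For context, the "manual repair" the error keeps asking for apparently
means swapping the metadata out of the pool and running the
thin-provisioning-tools on it by hand, roughly as described in
lvmthin(7). A sketch with placeholder sizes and made-up LV names
meta_swap/meta_repaired, while the pool stays inactive:)

lvm lvcreate -an -L 2G -n meta_swap qubes_dom0    # empty LV to swap in as pool metadata
lvm lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_swap
                                                  # after the swap, meta_swap holds the damaged metadata
lvm lvchange -ay qubes_dom0/meta_swap
lvm lvcreate -L 2G -n meta_repaired qubes_dom0    # destination for the repaired copy
thin_check /dev/qubes_dom0/meta_swap              # inspect the damage
thin_repair -i /dev/qubes_dom0/meta_swap -o /dev/qubes_dom0/meta_repaired
lvm lvchange -an qubes_dom0/meta_repaired
lvm lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/meta_repaired
                                                  # swap the repaired metadata back in

(If that worked, the pool should be checkable again, and meta_swap, the
damaged copy, could be removed later.)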

Since I do not know my way around LVM: what do you think would be the
best way out of this? Adding another external PV? Migrating to a bigger
PV?
I have not played with backup or archive out of fear of losing any
unbacked-up data, of which there is quite a bit :|
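
(To make the first option concrete, adding an external disk would look
roughly like this; /dev/sdX1 is a placeholder for whatever device that
would be, and whether the extra space actually helps depends on why the
repair and the lvextend above failed:)

lvm pvcreate /dev/sdX1
lvm vgextend qubes_dom0 /dev/sdX1
lvm pvs                     # the VG should now show the additional free space
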
Any help will be highly appreciated!

Thanks in advance,
m




Thread overview: 4+ messages
2020-03-10 19:25 [linux-lvm] probable lvm thin_pool exhaustion maiski
2020-03-12 18:11 ` Ming-Hung Tsai
2020-03-11 17:24 maiski
2020-03-18 11:45 ` Marian Csontos
