linux-lvm.redhat.com archive mirror
* [linux-lvm] probable lvm thin_pool exhaustion
@ 2020-03-10 19:25 maiski
  2020-03-12 18:11 ` Ming-Hung Tsai
  0 siblings, 1 reply; 4+ messages in thread
From: maiski @ 2020-03-10 19:25 UTC (permalink / raw)
  To: linux-lvm


Hello all,

I am a total newbie beyond a general knowledge of LVM.
With that disclaimer out of the way, I have the following problem,
which may well need some expert knowledge of LVM, because I couldn't
find a solution online so far :/

I am booting my system (in my case it is Qubes, but I suppose that does not
matter at this point)
and after entering my LUKS password I land in the dracut emergency shell:
"Check for pool qubes-dom/pool00 failed (status:1). Manual repair
required!"
The only active LV is qubes_dom0/swap.
All the others are inactive.

step 1:
lvm vgscan vgchange -ay
lvm lvconvert --repair qubes_dom0/pool00
Result:
using default stripesize 64.00 KiB.
Terminate called after throwing an instance of 'std::runtime_error'
what(): transaction_manager::new_block() couldn't allocate new block
Child 7212 exited abnormally
Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed
(status:1). Manual repair required!

step 2:
Since I suspect that my LVM is full (though it does mark 15 G as free),
I tried the following changes in /etc/lvm/lvm.conf:
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 2 (since the pvs output gives PSize
465.56g, PFree 15.78g, I set this to 2% to be overly cautious not to extend
beyond the 15 G marked as free, since idk)
auto_activation_volume_list = set to hold the group, root, pool00, swap and a
VM that I would like to delete to free some space
volume_list = the same as auto_activation_volume_list

and tried step 1 again; it did not work, and I got the same result as above,
with qubes_swap as the only active LV.

step 3 tried
lvextend -L+1G qubes_dom0/pool00_tmeta
Result:
metadata reference count differ for block xxxxxx, expected 0, but got 1
...
Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!

Since I do not know my way around LVM, what do you think would be the best
way out of this?
Adding another external PV? Migrating to a bigger PV?
I did not play with backup or archive out of fear of losing any
unbacked-up data, of which there happens to be quite a bit :|
Any help will be highly appreciated!

Thanks in advance,
m



* Re: [linux-lvm] probable lvm thin_pool exhaustion
  2020-03-10 19:25 [linux-lvm] probable lvm thin_pool exhaustion maiski
@ 2020-03-12 18:11 ` Ming-Hung Tsai
  0 siblings, 0 replies; 4+ messages in thread
From: Ming-Hung Tsai @ 2020-03-12 18:11 UTC (permalink / raw)
  To: LVM general discussion and development

According to step 3, it sounds like the mapping tree is healthy, so
the metadata could probably be repaired simply by lvconvert/thin_repair. The
error message might be caused by one of the following:
1. There are too many snapshots, which exhausted the capacity of the
metadata spare. Expanding the metadata spare might work (see the sketch
after this list).
2. A bug in thin_repair. Which version of thin-provisioning-tools
are you using?
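
For case 1, a minimal sketch of the manual repair path (roughly what
lvmthin(7) and thin_repair(8) describe), assuming the VG has enough free
space for a temporary metadata LV; the LV name pool00_meta0 and the 2G size
are placeholders of mine, not values from your report:

$ lvcreate -L 2G -n pool00_meta0 qubes_dom0        # temporary LV to receive repaired metadata
$ lvchange -ay qubes_dom0/pool00_tmeta             # component-activate the damaged metadata
$ thin_repair -i /dev/mapper/qubes_dom0-pool00_tmeta -o /dev/qubes_dom0/pool00_meta0
$ lvchange -an qubes_dom0/pool00_tmeta
$ lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/pool00_meta0

The last command swaps pool00_meta0 in as the pool's metadata (the old,
damaged metadata ends up in pool00_meta0, so it can be kept for inspection);
keep the pool itself deactivated until the swap is done.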

Also, before running lvconvert, I suggest running thin_check first, to
check whether the metadata is suitable for automatic repair or not.

$ lvchange -ay qubes_dom0/pool00_tmeta
$ thin_check /dev/mapper/qubes_dom0-pool00_tmeta
$ lvchange -an qubes_dom0/pool00_tmeta

(Maybe "lvconvert --repair" could provide options for setting repair
levels, to prevent novice users from discarding missing mappings.)

If you're not sure about the detailed steps, you can upload the compressed
metadata for further analysis:
$ lvchange -ay qubes_dom0/pool00_tmeta
$ dd if=/dev/mapper/qubes_dom0-pool00_tmeta of=tmeta.bin
$ tar -czf tmeta.tar.gz tmeta.bin

Finally, the options in step 2 are for dmeventd to extend online
thin pools automatically. They do not help with extending offline, broken
thin pools, even though the VG is not full.
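
For reference, this is roughly how those settings are meant to be used
while a pool is healthy, active and monitored by dmeventd (the values are
only illustrative, not a recommendation for this broken pool):

# /etc/lvm/lvm.conf
activation {
    # extend the thin pool by 20% of its current size
    # whenever usage crosses 80%
    thin_pool_autoextend_threshold = 80
    thin_pool_autoextend_percent = 20
}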

On Thu, Mar 12, 2020 at 4:14 PM <maiski@maiski.net> wrote:
>
> step 1:
> lvm vgscan vgchange -ay
> lvm lvconvert --repair qubes_dom0/pool00
> Result:
> using default stripesize 64.00 KiB.
> Terminate called after throwing an instance of 'std::runtime_error'
> what(): transaction_manager::new_block() couldn't allocate new block
> Child 7212 exited abnormally
> Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed (status:1). Manual repair required!
>
> step 2:
> since I suspect that my LVM is full (though it does mark 15 G as free)
> I tried the following changes in /etc/lvm/lvm.conf
...
> and tried step 1 again; it did not work, and I got the same result as above with qubes_swap as the only active LV
>
> step 3 tried
> lvextend -L+1G qubes_dom0/pool00_tmeta
> Result:
> metadata reference count differ for block xxxxxx, expected 0, but got 1 ...
> Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!


* Re: [linux-lvm] probable lvm thin_pool exhaustion
  2020-03-11 17:24 maiski
@ 2020-03-18 11:45 ` Marian Csontos
  0 siblings, 0 replies; 4+ messages in thread
From: Marian Csontos @ 2020-03-18 11:45 UTC (permalink / raw)
  To: LVM general discussion and development, maiski

On 3/11/20 6:24 PM, maiski@maiski.net wrote:
> 
> Hello all,
> 
> I am a total newbie beyond a general knowledge of LVM.
> With that disclaimer out of the way, I have the following problem,
> which may well need some expert knowledge of LVM, because I couldn't
> find a solution online so far :/
> 
> I am booting my system (in my case it is Qubes, but I suppose that does not 
> matter at this point)
> and after entering my LUKS password I land in the dracut emergency shell:
> "Check for pool qubes-dom/pool00 failed (status:1). Manual repair 
> required!"
> The only active LV is qubes_dom0/swap.
> All the others are inactive.
> 
> step 1:
> lvm vgscan vgchange -ay
> lvm lvconvert --repair qubes_dom0/pool00
> Result:
> using default stripesize 64.00 KiB.
> Terminate called after throwing an instance of 'std::runtime_error'
> what(): transaction_manager::new_block() couldn't allocate new block
> Child 7212 exited abnormally
> Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed 
> (status:1). Manual repair required!

At first glance this looks like the problem reported in Bug 1763895 
- thin_restore fails with transaction_manager::new_block() couldn't 
allocate new block:

https://bugzilla.redhat.com/show_bug.cgi?id=1763895

> 
> step 2:
> since I suspect that my LVM is full (though it does mark 15 G as free)

IIUC it is the metadata which is full, not the data.
What is the size of the _tmeta volume below?

What do `thin_check --version` and `lvm version` report?
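
A quick way to gather all of that, assuming the VG name qubes_dom0 from
your report (metadata_percent/data_percent may be blank while the pool is
inactive):

$ lvs -a -o lv_name,lv_size,metadata_percent,data_percent qubes_dom0
$ thin_check --version
$ lvm version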

-- Martian


> I tried the following changes in /etc/lvm/lvm.conf:
> thin_pool_autoextend_threshold = 80
> thin_pool_autoextend_percent = 2 (since the pvs output gives PSize 
> 465.56g, PFree 15.78g, I set this to 2% to be overly cautious not to 
> extend beyond the 15 G marked as free, since idk)
> auto_activation_volume_list = set to hold the group, root, pool00, swap and 
> a VM that I would like to delete to free some space
> volume_list = the same as auto_activation_volume_list
> 
> and tried step 1 again; it did not work, and I got the same result as above, 
> with qubes_swap as the only active LV.
> 
> step 3 tried
> lvextend -L+1G qubes_dom0/pool00_tmeta
> Result:
> metadata reference count differ for block xxxxxx, expected 0, but got 1 ...
> Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!
> 
> 
> Since I do not know my way around LVM, what do you think would be the 
> best way out of this?
> Adding another external PV? Migrating to a bigger PV?
> I did not play with backup or archive out of fear of losing any 
> unbacked-up data, of which there happens to be quite a bit :|
> Any help will be highly appreciated!
> 
> Thanks in advance,
> m
> 


* [linux-lvm] probable lvm thin_pool exhaustion
@ 2020-03-11 17:24 maiski
  2020-03-18 11:45 ` Marian Csontos
  0 siblings, 1 reply; 4+ messages in thread
From: maiski @ 2020-03-11 17:24 UTC (permalink / raw)
  To: linux-lvm


Hello all,

I am a total newbie beyond a general knowledge of LVM.
With that disclaimer out of the way, I have the following problem,
which may well need some expert knowledge of LVM, because I couldn't
find a solution online so far :/

I am booting my system (in my case it is Qubes, but I suppose that does
not matter at this point)
and after entering my LUKS password I land in the dracut emergency shell:
"Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!"
The only active LV is qubes_dom0/swap.
All the others are inactive.

step 1:
lvm vgscan vgchange -ay
lvm lvconvert --repair qubes_dom0/pool00
Result:
using default stripesize 64.00 KiB.
Terminate called after throwing an instance of 'std::runtime_error'
what(): transaction_manager::new_block() couldn't allocate new block
Child 7212 exited abnormally
Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed  
(status:1). Manual repair required!

step 2:
Since I suspect that my LVM is full (though it does mark 15 G as free),
I tried the following changes in /etc/lvm/lvm.conf:
thin_pool_autoextend_threshold = 80
thin_pool_autoextend_percent = 2 (since the pvs output gives PSize
465.56g, PFree 15.78g, I set this to 2% to be overly cautious not to
extend beyond the 15 G marked as free, since idk)
auto_activation_volume_list = set to hold the group, root, pool00, swap
and a VM that I would like to delete to free some space
volume_list = the same as auto_activation_volume_list

and tried step 1 again; it did not work, and I got the same result as above,
with qubes_swap as the only active LV.

step 3 tried
lvextend -L+1G qubes_dom0/pool00_tmeta
Result:
metadata reference count differ for block xxxxxx, expected 0, but got 1 ...
Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!


Since I do not know my way around LVM, what do you think would be the
best way out of this?
Adding another external PV? Migrating to a bigger PV?
I did not play with backup or archive out of fear of losing any
unbacked-up data, of which there happens to be quite a bit :|
Any help will be highly appreciated!

Thanks in advance,
m

