linux-lvm.redhat.com archive mirror
* [linux-lvm] thinpool metadata got way too large, how to handle?
@ 2020-01-02 18:19 Ede Wolf
  2020-01-08 11:29 ` Zdenek Kabelac
  0 siblings, 1 reply; 5+ messages in thread
From: Ede Wolf @ 2020-01-02 18:19 UTC (permalink / raw)
  To: linux-lvm

Hello,

While trying to extend my thinpool LV after the underlying md RAID had
been enlarged, the metadata LV has somehow gotten all the free space and
is now 2.2 TB in size - space that is obviously now missing from the
thinpool data LV, where it should have gone in the first place.

And since reducing the metadata LV of a thinpool is not possible, I am
now wondering what options I have to reclaim the space for its intended
purpose.

# lvs -a
  LV                    VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ThinPoolRaid6         VG_Raid6 twi-aotz--   5,97t              40,27  0,22
  [ThinPoolRaid6_tdata] VG_Raid6 Twi-ao----   5,97t
  [ThinPoolRaid6_tmeta] VG_Raid6 ewi-ao----  <2,21t
  [lvol0_pmspare]       VG_Raid6 ewi-------  72,00m

This is despite me not even being sure how to calculate the proper size
for the metadata. The indicated metadata use of 0.22% of the currently
6 TB thinpool would equal roughly 12 GB, but the RAID is supposed to grow
to ~25 TB and is not yet even half full. So plan for ten times that =
120 GB? Or 24TB/6TB * 2.5 [= 100%/40%]? Does that sound reasonable?
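
Presumably something like thin_metadata_size from thin-provisioning-tools
could give an estimate as well, e.g. (the 64k chunk size and the number
of thin volumes are just guesses on my part):

  thin_metadata_size -b 64k -s 25t -m 100 -u g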

The lvmthin man page recommends moving the metadata to a dedicated PV,
and eventually I would like to do so, but it does not explain how to move
existing metadata, only how to create the metadata LV for a new thinpool.
My thinpool, however, already exists. Anyway, if this migration scenario
is somehow possible, maybe it could be done here as well, albeit for now
only on the same PV?
That is, migrate the metadata to a smaller LV, which then becomes the new
metadata LV?

Or should I rather try a repair and thus get the metadata moved to the
pmspare? That in turn would probably need to grow significantly
beforehand. But if this is possible and the spare becomes the new main
metadata LV, how do I get a new spare, since explicit creation is not
possible?
More importantly though, can I repair a non-defective metadata LV at all
in the first place?

Currently I have no extents left - all eaten up by the metadata LV - but
I would be able to add another drive to enlarge the md RAID and therefore
the PV/VG.

Thanks for any hints on this

Ede

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] thinpool metadata got way too large, how to handle?
  2020-01-02 18:19 [linux-lvm] thinpool metadata got way too large, how to handle? Ede Wolf
@ 2020-01-08 11:29 ` Zdenek Kabelac
  2020-01-08 14:23   ` Ede Wolf
  2020-01-10 16:30   ` Ede Wolf
  0 siblings, 2 replies; 5+ messages in thread
From: Zdenek Kabelac @ 2020-01-08 11:29 UTC (permalink / raw)
  To: LVM general discussion and development, Ede Wolf

On 02. 01. 20 at 19:19, Ede Wolf wrote:
> Hello,
> 
> While trying to extend my thinpool LV after the underlying md RAID had
> been enlarged, the metadata LV has somehow gotten all the free space and
> is now 2.2 TB in size - space that is obviously now missing from the
> thinpool data LV, where it should have gone in the first place.
> 


Hi

My guess is that you were affected by a bug in the 'percent' resize
logic, which has possibly been addressed by this upstream patch:

https://www.redhat.com/archives/lvm-devel/2019-November/msg00028.html

Although your observed result of a 2.2TB metadata size looks strange - it
should not normally extend the LV to such an extreme size - unless we are
missing some more context here.

> And since reducing the metadata LV of a thinpool is not possible, I am
> now wondering what options I have to reclaim the space for its intended
> purpose.

You can reduce the size of the metadata this way.
(It might be automated somehow in lvm2 in the future - there are further
enhancements to the thin tools which can make 'reduction' of the -tmeta
size a 'wanted' feature.)

For now you need to activate the thin-pool metadata in read-only mode -
so-called 'component activation', which means neither the thin-pool nor
any thinLV is active, only the _tmeta LV; it is supported by recent
versions of lvm2.
(For older versions of lvm2 you would need to first 'swap out' the
existing metadata to get access to it.)

Then create a 15GiB sized LV (to be used as your rightly sized new
metadata) and run, from the 2.2T LV into the 15G LV:

  thin_repair -i /dev/vg/pool_tmeta -o /dev/vg/newtmeta

This might take some time (depending on CPU speed and disk speed) - and
also be sure you have thin_repair >= 0.8.5 (do not try this with an older
version...).


Once thin_repair has finished, swap in your new tmeta LV:

  lvconvert --thinpool vg/pool --poolmetadata vg/newtmeta

Now try to activate your thinLVs and check that everything works.

If all is OK, you can 'lvremove' the now unused 2.2TiB LV (it carries the
name newtmeta, as the LV content has been swapped) - just check with the
'lvs -a' output that the sizes are what you are expecting.

If you are unsure about any step, please ask here first
(better before you make some irreversible mistake).
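
Put together, the whole sequence would look roughly like this - only a
sketch, with the LV names taken from your 'lvs -a' output, the 15GiB size
suggested above, and a component-activation step that assumes a new
enough lvm2:

  # make sure neither the thin-pool nor any thinLV is active
  vgchange -an VG_Raid6

  # 'component activation': activate only the hidden metadata LV (read-only)
  lvchange -ay VG_Raid6/ThinPoolRaid6_tmeta

  # create the new, rightly sized metadata LV (needs ~15G of free space)
  lvcreate -L 15G -n newtmeta VG_Raid6

  # copy the metadata from the oversized LV into the new one
  thin_repair -i /dev/VG_Raid6/ThinPoolRaid6_tmeta -o /dev/VG_Raid6/newtmeta

  # deactivate the component again and swap the new LV in as pool metadata
  lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
  lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/newtmeta

  # reactivate, verify the sizes, then remove the swapped-out 2.2TiB LV
  lvchange -ay VG_Raid6/ThinPoolRaid6
  lvs -a VG_Raid6
  lvremove VG_Raid6/newtmeta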

> Currently I have no extents left - all eaten up by the metadata LV - but
> I would be able to add another drive to enlarge the md RAID and
> therefore the PV/VG.

You will certainly need, at least temporarily, some extra space of ~15GiB.

You can try with e.g. a USB-attached drive - you add such a PV into the
VG (vgextend).

You then create your LV for the new tmeta on it (as described above).

Once you are happy with the 'repaired' thin-pool and your 2.2TiB LV is
removed, you just 'pvmove' your new tmeta back onto the 'old' storage,
and finally you simply vgreduce your (now again) unused USB drive.
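
As a sketch (the /dev/sdX device name is a placeholder for the temporary
drive, /dev/md2 is your existing PV):

  # add the temporary drive as an extra PV
  pvcreate /dev/sdX
  vgextend VG_Raid6 /dev/sdX

  # create the new metadata LV on the temporary PV only
  lvcreate -L 15G -n newtmeta VG_Raid6 /dev/sdX

  # ... then thin_repair and the lvconvert swap as above,
  # and lvremove the old 2.2TiB LV to free space on the RAID ...

  # move the new tmeta from the USB drive back onto the md RAID PV
  pvmove /dev/sdX /dev/md2

  # drop the now empty temporary PV from the VG again
  vgreduce VG_Raid6 /dev/sdX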

Hopefully this will work well.

Regards

Zdenek

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] thinpool metadata got way too large, how to handle?
  2020-01-08 11:29 ` Zdenek Kabelac
@ 2020-01-08 14:23   ` Ede Wolf
  2020-01-10 16:30   ` Ede Wolf
  1 sibling, 0 replies; 5+ messages in thread
From: Ede Wolf @ 2020-01-08 14:23 UTC (permalink / raw)
  To: LVM general discussion and development

Thanks VERY much for your help, I'll try this out, it just takes a
couple of days to resize the RAID after having added a new drive. Or
I'll organise a separate one for the metadata. Maybe a good idea.

I had completely missed the -o switch for thin_repair.

Bear with me, I'll definitely try this out, after having checked the
thin_repair version, and report back.

Ede

P.S. In case it matters or helps, these are the steps from my bash
history, taken once the resync of the md RAID with the added 3TB drive
had completed, and which led to the somewhat enlarged metadata LV:

lvextend -l 80%VG VG_Raid6/ThinPoolRaid6
pvresize /dev/md2
lvextend -l 80%VG VG_Raid6/ThinPoolRaid6
lvextend -l 100%VG VG_Raid6/ThinPoolRaid6
lvextend -l +100%VG VG_Raid6/ThinPoolRaid6

As you can see, I initially forgot about pvresize. And the, to me,
somewhat counter-intuitive semantics of "+" versus absolute values made
me use lvextend multiple times.
No complaint, just for the sake of completeness, even though I left out
all the pvdisplay and lvdisplay commands.
But I never touched the metadata pool directly.
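
In hindsight, I suppose the whole extension could have been a single step
along these lines (untested, just how I read the man page now):

  # grow the PV first, then give all newly freed space to the pool
  pvresize /dev/md2
  lvextend -l +100%FREE VG_Raid6/ThinPoolRaid6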




^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] thinpool metadata got way too large, how to handle?
  2020-01-08 11:29 ` Zdenek Kabelac
  2020-01-08 14:23   ` Ede Wolf
@ 2020-01-10 16:30   ` Ede Wolf
  2020-01-10 16:51     ` Zdenek Kabelac
  1 sibling, 1 reply; 5+ messages in thread
From: Ede Wolf @ 2020-01-10 16:30 UTC (permalink / raw)
  To: linux-lvm

Hello,

I am afraid I have been a bit too optimistic. It is a bit embarrassing,
but I am not able to find any reference to component activation. I've
deactivated all LVs and tried to set the thinpool itself, or its
metadata, to read-only mode:

# lvchange -pr VG_Raid6/ThinPoolRaid6
   Command on LV VG_Raid6/ThinPoolRaid6 uses options invalid with LV 
type thinpool.
   Command not permitted on LV VG_Raid6/ThinPoolRaid6.

# lvchange -pr /dev/mapper/VG_Raid6-ThinPoolRaid6_tmeta
   Operation not permitted on hidden LV VG_Raid6/ThinPoolRaid6_tmeta.

I can lvchange -an the thinpool, but then obviously I no longer have a
path/file that I could provide as thin_repair input.

So please, how do I properly set the metadata to read-only?

Thanks

Ede



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [linux-lvm] thinpool metadata got way too large, how to handle?
  2020-01-10 16:30   ` Ede Wolf
@ 2020-01-10 16:51     ` Zdenek Kabelac
  0 siblings, 0 replies; 5+ messages in thread
From: Zdenek Kabelac @ 2020-01-10 16:51 UTC (permalink / raw)
  To: listac, LVM general discussion and development

On 10. 01. 20 at 17:30, Ede Wolf wrote:
> Hello,
> 
> I am afraid I have been a bit too optimistic. It is a bit embarrassing,
> but I am not able to find any reference to component activation. I've
> deactivated all LVs and tried to set the thinpool itself, or its
> metadata, to read-only mode:
> 
> # lvchange -pr VG_Raid6/ThinPoolRaid6
>    Command on LV VG_Raid6/ThinPoolRaid6 uses options invalid with LV
> type thinpool.
>    Command not permitted on LV VG_Raid6/ThinPoolRaid6.
> 
> # lvchange -pr /dev/mapper/VG_Raid6-ThinPoolRaid6_tmeta
>    Operation not permitted on hidden LV VG_Raid6/ThinPoolRaid6_tmeta.
> 
> I can lvchange -an the thinpool, but then obviously I no longer have a
> path/file that I could provide as thin_repair input.
> 
> So please, how do I properly set the metadata to read-only?
> 

Your lvm2 is too old (component activation is a relatively new feature).

In this case you simply need to 'swap out' your existing _tmeta into a
regular LV.


Easy to do -

Just create any LV you want:

  lvcreate -an -L1 -n mytestlv vg

then 'swap' the content of _tmeta with mytestlv:

  lvconvert --thinpool vg/poolname --poolmetadata vg/mytestlv

and now vg/mytestlv should be your 2.2TiB metadata volume, which you can
easily activate.
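
Put together, on an older lvm2 the whole sequence would be roughly (only
a sketch - names as used in this thread, and it assumes the ~15G of
temporary space discussed earlier is available for the new metadata LV):

  # nothing from the pool may be active
  vgchange -an VG_Raid6

  # create a small placeholder LV and swap it in; after this, 'mytestlv'
  # is the old 2.2TiB metadata and the pool holds the tiny placeholder
  lvcreate -an -L1 -n mytestlv VG_Raid6
  lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/mytestlv

  # activate the swapped-out metadata and repair it into a new 15G LV
  lvchange -ay VG_Raid6/mytestlv
  lvcreate -L 15G -n newtmeta VG_Raid6
  thin_repair -i /dev/VG_Raid6/mytestlv -o /dev/VG_Raid6/newtmeta

  # swap the repaired metadata in and reactivate the pool
  lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/newtmeta
  lvchange -ay VG_Raid6/ThinPoolRaid6

  # once everything checks out, 'mytestlv' (the old 2.2TiB metadata) and
  # 'newtmeta' (now holding the tiny placeholder) can both be removed
  lvs -a VG_Raid6
  lvremove VG_Raid6/mytestlv VG_Raid6/newtmeta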


Regards

Zdenek

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2020-01-10 16:51 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-02 18:19 [linux-lvm] thinpool metadata got way too large, how to handle? Ede Wolf
2020-01-08 11:29 ` Zdenek Kabelac
2020-01-08 14:23   ` Ede Wolf
2020-01-10 16:30   ` Ede Wolf
2020-01-10 16:51     ` Zdenek Kabelac
