linux-lvm.redhat.com archive mirror
From: Ede Wolf <listac@nebelschwaden.de>
To: linux-lvm@redhat.com
Subject: [linux-lvm] thinpool metadata got way too large, how to handle?
Date: Thu, 2 Jan 2020 19:19:52 +0100
Message-ID: <20200102191952.1a2c44a7@kaperfahrt.nebelschwaden.de>

Hello,

While trying to extend my thinpool LV after the underlying md
RAID had been enlarged, the metadata LV somehow received all the
free space and is now 2.2 TB in size. That space is obviously now
missing from the thinpool data LV, where it should have gone in the
first place.

And since reducing the metadata LV of a thinpool is not possible, I
am now wondering what options I have to reclaim the space for its
intended purpose.

# lvs -a
  LV                    VG       Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  ThinPoolRaid6         VG_Raid6 twi-aotz--   5,97t              40,27  0,22
  [ThinPoolRaid6_tdata] VG_Raid6 Twi-ao----   5,97t
  [ThinPoolRaid6_tmeta] VG_Raid6 ewi-ao----  <2,21t
  [lvol0_pmspare]       VG_Raid6 ewi-------  72,00m

On top of that, I am not even sure how to calculate the proper size
for the metadata. The 0,22% metadata use indicated for the currently
6 TB thinpool would equal roughly 12 GB, but the RAID is supposed to
grow to ~25 TB and is not even half filled yet. So plan for ten times
that, i.e. 120 GB? Or 24 TB/6 TB * 2.5 [= 100%/40%]? Does that sound
reasonable?
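
For a sanity check I thought about thin_metadata_size from the
thin-provisioning-tools package. Something like this - where the 64k
chunk size and the 1000 maximum thin volumes are just guesses on my
part, the real chunk size could be read via lvs -o+chunk_size:

# thin_metadata_size -b 64k -s 24t -m 1000 -u g

That should print an estimated metadata size in gigabytes for a 24 TB
pool.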

The lvmthin man page recommends moving the metadata to a dedicated PV,
and eventually I would like to do so, but it only explains how to
create the metadata LV for a new thinpool, not how to move existing
metadata. My thinpool, however, already exists. Anyway, if this
migration scenario is somehow possible, maybe it could be applied here
as well, albeit for now even on the same PV?
That is: just migrate the metadata to a smaller LV, which then becomes
the new metadata LV?
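
What I am picturing - completely untested, meta_new is just a
placeholder name, and it assumes free extents plus a newer LVM that
allows read-only activation of the hidden _tmeta component while the
pool is inactive - would be something along these lines:

# lvchange -an VG_Raid6/ThinPoolRaid6
# lvcreate -L 16g -n meta_new VG_Raid6
# lvchange -ay VG_Raid6/ThinPoolRaid6_tmeta
# thin_repair -i /dev/mapper/VG_Raid6-ThinPoolRaid6_tmeta -o /dev/VG_Raid6/meta_new
# lvchange -an VG_Raid6/ThinPoolRaid6_tmeta
# lvconvert --thinpool VG_Raid6/ThinPoolRaid6 --poolmetadata VG_Raid6/meta_new

After the swap the old, oversized metadata should be left under the
name meta_new and could then be removed to reclaim the 2,21t - if I am
reading the man page correctly.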

Or should I rather try a repair and thus get the metadata moved to
the pmspare? That in turn would probably need to grow significantly
beforehand. But if this should be possible and the spare becomes the
new main metadata LV, how do I get a new spare, since explicit
creation is not possible?
But more importantly, can I repair a non-defective metadata LV at all
in the first place?
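
In other words, assuming a sufficiently sized pmspare, would it really
just be this - again untested:

# lvchange -an VG_Raid6/ThinPoolRaid6
# lvconvert --repair VG_Raid6/ThinPoolRaid6

If I understand the man page correctly, the old metadata should
afterwards be left behind as a visible LV (something like
ThinPoolRaid6_meta0) that could be removed to free the space - but
that is exactly the part I am unsure about for metadata that is not
actually broken.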

Currently I have no extents left - all eaten up by the metadata LV -
but I would be able to add another drive to enlarge the md RAID and
therefore the PV/VG.
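
Roughly like this, I assume - /dev/md0 and /dev/sdX1 are placeholders,
and the --raid-devices count would have to match my actual setup:

# mdadm --add /dev/md0 /dev/sdX1
# mdadm --grow /dev/md0 --raid-devices=5
# pvresize /dev/md0

with pvresize only run once the reshape has finished, so that the new
space actually becomes visible to the PV/VG.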

Thanks for any hints on this

Ede

Thread overview: 5+ messages
2020-01-02 18:19 Ede Wolf [this message]
2020-01-08 11:29 ` [linux-lvm] thinpool metadata got way too large, how to handle? Zdenek Kabelac
2020-01-08 14:23   ` Ede Wolf
2020-01-10 16:30   ` Ede Wolf
2020-01-10 16:51     ` Zdenek Kabelac
