From: Zdenek Kabelac <zkabelac@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>,
Gionatan Danti <g.danti@assyoma.it>
Cc: "Tomas Dalebjörk" <tomas.dalebjork@gmail.com>
Subject: Re: [linux-lvm] lvm limitations
Date: Sun, 30 Aug 2020 21:30:25 +0200 [thread overview]
Message-ID: <bf377813-8600-865a-71b7-dd6873113f46@redhat.com> (raw)
In-Reply-To: <29c466317b90d36bff995b3f3d0f4cf2@assyoma.it>
On 30. 08. 2020 at 20:01, Gionatan Danti wrote:
> On 2020-08-30 19:33, Zdenek Kabelac wrote:
>> For illustration: for 12,000 LVs you need ~4 MiB just to store the ASCII
>> metadata itself, and you need enough metadata space to keep at least 2
>> copies of it.
>
> Hi Zdenek, you are speaking of classical LVM metadata, right?
Hi
Lvm2 has only ASCII metadata (so what is stored in /etc/lvm/archive
is basically the same as in the PV header metadata area -
just without spaces and some comments).
And while this is great for manual recovery, it's not
very efficient for storing a larger number of LVs - some sort
of DB approach would likely be needed there.
So far, however, there has been no really worthy use case - so safety
for recovery scenarios wins ATM.
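A quick back-of-the-envelope check of the figures quoted above (a sketch
only - the ~4 MiB, 12,000 LVs and the 2-copies multiplier are the numbers
from this thread, not exact lvm2 internals):

```shell
# Rough sanity check: ~4 MiB of ASCII metadata for 12,000 LVs,
# with space for at least 2 copies. All numbers are illustrative.
num_lvs=12000
ascii_bytes=$((4 * 1024 * 1024))     # ~4 MiB of ASCII metadata
per_lv=$((ascii_bytes / num_lvs))    # rough bytes of metadata per LV
min_area=$((ascii_bytes * 2))        # room for at least 2 copies
echo "${per_lv} bytes/LV, ${min_area} bytes minimum metadata area"
```

So each LV costs on the order of a few hundred bytes of ASCII metadata,
and the whole text must fit (twice over) in the PV metadata area.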
>> Handling operations like 'vgremove' with so many LVs requires a
>> significant amount of your CPU time.
>>
>> Basically, to stay within bounds - unless you have very good reasons -
>> you should probably stay in the range of low thousands to keep lvm2
>> performing reasonably well.
>
> What about thin vols? Can you suggest any practical limit with lvmthin?
A thin LV - just like any other LV - takes some 'space', so if you want
to go with a higher count, you need to specify a bigger metadata area
to be able to store such large lvm2 metadata.
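For example, the metadata area can be sized when the PV is prepared, and
the thin-pool's own kernel metadata LV when the pool is created (a sketch;
the device name, VG/pool names and sizes below are placeholders, not
recommendations):

```shell
# Reserve a larger lvm2 metadata area when preparing the PV
# (device name and sizes here are only placeholders).
pvcreate --metadatasize 32m /dev/sdb
vgcreate vg_big /dev/sdb

# For thin provisioning, also size the thin-pool's kernel metadata
# generously up front - it cannot grow past the ~16 GiB limit.
lvcreate --type thin-pool -L 1t --poolmetadatasize 4g -n pool0 vg_big
```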
There is probably not a big issue with lots of thin LVs in a thin-pool,
as long as the user doesn't need to have them all active at the same time.
Due to the nature of kernel metadata handling, a larger number of active
thin LVs from the same thin-pool may start to compete for locking when
allocating thin-pool chunks, thus killing performance - so here it is
rather better to stay within some 'tens' of actively provisioning thin
volumes when 'performance' is a factor.
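One way to stay within those bounds is to keep only the busy thin LVs
activated (a sketch; the VG and LV names are placeholders):

```shell
# Keep only the thin LVs you are actively writing to activated
# (vg_big/thin_hot and vg_big/thin_idle are placeholder names).
lvchange -ay vg_big/thin_hot      # activate a busy thin volume
lvchange -an vg_big/thin_idle     # deactivate an idle one
```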
Worth noting: there is a fixed, strict limit of ~16 GiB on the maximum
thin-pool kernel metadata size - which surely can be exhausted - the
metadata holds info about btree mappings and chunk sharing between
devices...
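For a rough feel of that ceiling: the kernel thin-provisioning docs
suggest budgeting roughly 48 bytes of metadata per mapped chunk (treat
that constant as an estimate, not an exact per-mapping cost; the chunk
size below is just an example):

```shell
# Approximate how much data a full 16 GiB thin-pool metadata device
# could address, assuming ~48 bytes per mapped chunk (an estimate
# from the kernel docs' sizing guideline, not an exact cost).
meta_bytes=$((16 * 1024 * 1024 * 1024))
per_mapping=48
mappings=$((meta_bytes / per_mapping))
chunk_kib=64                              # example 64 KiB pool chunk
data_gib=$((mappings * chunk_kib / 1024 / 1024))
echo "~${mappings} mappings, ~${data_gib} GiB addressable"
```

With larger pool chunks the same metadata addresses proportionally more
data, at the cost of coarser provisioning and snapshot sharing.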
Zdenek