From: Zdenek Kabelac <zkabelac@redhat.com>
To: Gionatan Danti <g.danti@assyoma.it>,
	LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Higher than expected metadata usage?
Date: Tue, 27 Mar 2018 14:52:25 +0200	[thread overview]
Message-ID: <12b56776-daf3-12e3-1847-f381fa52f1d0@redhat.com>
In-Reply-To: <3aa5c210-13f1-02d7-6fec-79762087325c@assyoma.it>

On 27.3.2018 at 13:05, Gionatan Danti wrote:
> On 27/03/2018 12:39, Zdenek Kabelac wrote:
>> Hi
>>
>> And last but not least comment - when you pointed out the 4MB extent usage,
>> that is a relatively large chunk - and if 'fstrim' is to succeed, those
>> 4MB thin-pool chunks need to be fully released.
>>
>> So if there are some sparsely placed filesystem metadata blocks, they
>> may prevent TRIM from succeeding - so while your filesystem may have a lot of
>> free space for its data, the actual amount of physically trimmed space
>> can be much, much smaller.
>>
>> So consider carefully whether the 4MB chunk-size is a good fit for the thin-pool here...
>> The smaller the chunk, the better the chance that TRIM succeeds...
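
As a quick illustration (a sketch only - the VG name 'vg0' and mount point
'/mnt/data' below are made-up examples), the effect can be observed by
comparing the pool's data usage before and after a trim:

  # Pool data usage and chunk size before trimming:
  lvs -o lv_name,data_percent,chunk_size vg0

  # Ask the filesystem to discard unused blocks; only regions that free
  # whole thin-pool chunks are actually returned to the pool:
  fstrim -v /mnt/data

  # Usage afterwards - with a 4MB chunk size the drop may be far smaller
  # than the free space the filesystem reports:
  lvs -o lv_name,data_percent,chunk_size vg0
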
> 
> Sure, I understand that. Anyway, please note that 4MB chunk size was 
> *automatically* chosen by the system during pool creation. It seems to me that 
> the default is to constrain the metadata volume to be < 128 MB, right?

Yes - by default lvm2 'targets' fitting the metadata into this 128MB size.

Obviously there is nothing like 'one size fits all' - so it's really up to the
user to think about the use-case and pick better parameters than the defaults.

The 128MB size is picked so that the metadata easily fits in RAM.
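
For example (a sketch only - 'vg0', the sizes and the pool name here are
arbitrary), the auto-selected values can be overridden at creation time:

  # Create a thin-pool with an explicit chunk size and a larger metadata LV,
  # instead of letting lvm2 auto-pick a chunk size that keeps the metadata
  # under ~128MB:
  lvcreate --type thin-pool -L 1T --chunksize 256k --poolmetadatasize 1g \
           -n pool0 vg0

  # Inspect what was actually used:
  lvs -a -o lv_name,lv_size,chunk_size,lv_metadata_size,metadata_percent vg0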

>> For heavily fragmented XFS even 64K chunks might be a challenge....
> 
> True, but chunk size *always* is a performance/efficiency tradeoff. Making a 
> 64K chunk-sized volume will end up with even more fragmentation for the 
> underlying disk subsystem. Obviously, if many snapshots are expected, a small 
> chunk size is the right choice (CoW filesystems such as BTRFS and ZFS face 
> similar problems, by the way).


Yep - the smaller the chunk is, the smaller the 'max' size of data device that 
can be supported, as there is a finite number of chunks you can address from 
the maximal metadata size, which is ~16GB and can't get any bigger.

The bigger the chunk is, the less sharing between snapshots happens, but there 
is less fragmentation.
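
As a rough, hedged illustration of that tradeoff (the pool size and volume
count below are arbitrary examples), thin_metadata_size from
thin-provisioning-tools can estimate the metadata needed for a given chunk size:

  # Metadata estimate for a 10TiB pool with up to 100 thin LVs/snapshots,
  # at a small and at a large chunk size (result printed in MiB):
  thin_metadata_size -b 64k -s 10t -m 100 -u m
  thin_metadata_size -b 4m  -s 10t -m 100 -u m

  # The smaller the chunk, the more mappings must fit into the (at most ~16GB)
  # metadata for the same data size - which is what caps the maximum
  # addressable pool size when small chunks are used.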

Regards

Zdenek


Thread overview: 9+ messages
2018-03-27  7:44 [linux-lvm] Higher than expected metadata usage? Gionatan Danti
2018-03-27  8:30 ` Zdenek Kabelac
2018-03-27  9:40   ` Gionatan Danti
2018-03-27 10:18     ` Zdenek Kabelac
2018-03-27 10:58       ` Gionatan Danti
2018-03-27 11:06         ` Gionatan Danti
2018-03-27 10:39 ` Zdenek Kabelac
2018-03-27 11:05   ` Gionatan Danti
2018-03-27 12:52     ` Zdenek Kabelac [this message]
