From: Zdenek Kabelac <zkabelac@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>,
	Gionatan Danti <g.danti@assyoma.it>
Subject: Re: [linux-lvm] Higher than expected metadata usage?
Date: Tue, 27 Mar 2018 12:39:35 +0200
Message-ID: <c5d1945b-a58e-4e4a-4f0f-83b32635b250@redhat.com>
In-Reply-To: <597ba4e4-2028-ed62-6835-86ae9015ea5b@assyoma.it>

On 27. 3. 2018 at 09:44, Gionatan Danti wrote:

> What am I missing? Is the "data%" field a measure of how many data chunks are 
> allocated, or does it also track "how full" these data chunks are? The latter would 
> benignly explain the observed discrepancy, as a partially-full data chunk can 
> be used to store other data without any new metadata allocation.
> 

Hi

I forgot to mention that there is a "thin_ls" tool (shipped in the 
device-mapper-persistent-data package, together with thin_check) for those who 
want to know the precise amount of allocation - how many blocks are owned 
exclusively by a single thinLV and how many are shared.
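
For example, a minimal sketch - assuming a pool vg/pool, so that the metadata 
device appears as /dev/mapper/vg-pool_tmeta and the pool device as 
/dev/mapper/vg-pool-tpool (device names vary with your VG/LV names):

  # reserve a metadata snapshot so thin_ls can safely read live metadata
  dmsetup message /dev/mapper/vg-pool-tpool 0 reserve_metadata_snap

  # per-thinLV mapped / exclusive / shared block counts
  thin_ls -m --format "DEV,MAPPED_BLOCKS,EXCLUSIVE_BLOCKS,SHARED_BLOCKS" \
      /dev/mapper/vg-pool_tmeta

  # drop the metadata snapshot again
  dmsetup message /dev/mapper/vg-pool-tpool 0 release_metadata_snap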

It's worth noting that the numbers printed by 'lvs' are *JUST* rough 
estimations of data usage, for both thin pools and thin volumes.

The kernel does not maintain the full data set - only the portion it needs - 
and since a detailed, precise evaluation is expensive, it is deferred to the 
thin_ls tool...
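
(For comparison, the rough numbers are what the lvs reporting fields show - 
e.g., assuming a VG named vg:

  lvs -o lv_name,data_percent,metadata_percent vg
)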


And one last comment - you pointed out 4MB extent usage. That is a relatively 
huge chunk, and for 'fstrim' to succeed, each 4MB thin-pool chunk needs to be 
fully released - a discard that covers only part of a chunk cannot free it.

So, for example, if some sparse filesystem metadata blocks are placed inside a 
chunk, they may prevent TRIM from succeeding for that chunk - so while your 
filesystem may have a lot of free space for its data, the actual amount of 
physically trimmed space can be much, much smaller.

So beware: consider whether a 4MB chunk size is a good fit for a thin pool here...
The smaller the chunk, the better the chance that TRIM succeeds...
For a heavily fragmented XFS, even 64K chunks might be a challenge...
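
As an illustration only - the VG name, pool name, size and mount point below 
are made up - a smaller chunk can be requested when the pool is created, and 
'fstrim -v' reports how many bytes were actually discarded:

  # create a thin-pool with 64K chunks instead of a large chunk size
  lvcreate --type thin-pool -L 10G --chunksize 64k -n pool vg

  # later, on a mounted thin volume: report bytes really trimmed
  fstrim -v /mnt/thinfs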


Regards


Zdenek


Thread overview: 9+ messages
2018-03-27  7:44 [linux-lvm] Higher than expected metadata usage? Gionatan Danti
2018-03-27  8:30 ` Zdenek Kabelac
2018-03-27  9:40   ` Gionatan Danti
2018-03-27 10:18     ` Zdenek Kabelac
2018-03-27 10:58       ` Gionatan Danti
2018-03-27 11:06         ` Gionatan Danti
2018-03-27 10:39 ` Zdenek Kabelac [this message]
2018-03-27 11:05   ` Gionatan Danti
2018-03-27 12:52     ` Zdenek Kabelac
