From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Why is the performance of my lvmthin snapshot so poor
Date: Thu, 16 Jun 2022 12:19:24 -0400
Message-ID: <YqtYDgld/PAOLhpr@itl-email>
In-Reply-To: <7caac0c00c5c7cd93fdf50b62e2e7907@assyoma.it>



On Thu, Jun 16, 2022 at 03:22:09PM +0200, Gionatan Danti wrote:
> On 2022-06-16 09:53, Demi Marie Obenour wrote:
> > That seems reasonable.  My conclusion is that dm-thin (which is what LVM
> > uses) is not a good fit for workloads with a lot of small random writes
> > and frequent snapshots, due to the 64k minimum chunk size.  This also
> > explains why dm-thin does not allow smaller blocks: not only would it
> > support only very small thin pools, it would also have massive metadata
> > write overhead.  Hopefully dm-thin v2 will improve the situation.
> 
> I think that, in this case, there really is no free lunch. I tried the
> following thin provisioning methods, each with its strong & weak points:
> 
> lvmthin: probably the most flexible of the mainline kernel options. You pay
> for r/m/w only when allocating a small block (say 4K) the first time after
> taking a snapshot. It is fast and well integrated with the lvm command line.
> Con: bad behavior on out-of-space conditions

Also, the LVM command line is slow, and there is very large write
amplification with lots of random writes immediately after taking a
snapshot.  Furthermore, because of the mismatch between the dm-thin
block size and the filesystem block size, fstrim might not reclaim as
much space in the pool as one would expect.
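To put rough numbers on the r/m/w and metadata arguments, here is a
back-of-the-envelope sketch in Python.  The 48 bytes-per-mapping figure is
the sizing guidance from the kernel's thin-provisioning.rst, and the ~16 GiB
metadata-device ceiling is the documented maximum; the rest is plain
arithmetic, not a measurement of any real pool:

  # Rough dm-thin chunk-size trade-offs: write amplification right after a
  # snapshot, and how much data a full-size metadata device can map.

  GiB = 1024 ** 3
  TiB = 1024 ** 4

  def write_amplification(chunk_size: int, write_size: int) -> float:
      # The first small write into a chunk shared with a snapshot copies
      # the whole chunk (read-modify-write) before the write can proceed.
      return chunk_size / write_size

  def max_mappable_data(chunk_size: int,
                        metadata_size: int = 16 * GiB,
                        bytes_per_mapping: int = 48) -> int:
      # thin-provisioning.rst suggests ~48 bytes of metadata per data block,
      # and the metadata device is capped at roughly 16 GiB.
      return (metadata_size // bytes_per_mapping) * chunk_size

  for chunk in (64 * 1024, 4 * 1024):
      print(f"chunk {chunk // 1024:>2}K: "
            f"{write_amplification(chunk, 4096):4.0f}x amplification for a 4K write, "
            f"~{max_mappable_data(chunk) / TiB:.1f} TiB mappable")

With 64K chunks, a 4K random write right after a snapshot costs roughly 16x
its size; dropping to 4K chunks would remove the amplification but shrink the
maximum mappable data by the same factor, which is presumably why dm-thin
does not allow smaller blocks.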

> xfs + reflink: a great, simple-to-use tool when applicable. It has a very
> small granularity (4K) with no r/m/w. Cons: requires fine-tuning for good
> performance when reflinking big files; IO freezes during the metadata copy
> for a reflink; a very small granularity means sequential IO is going to
> suffer heavily (see here for more details:
> https://marc.info/?l=linux-xfs&m=157891132109888&w=2)

Also, heavy fragmentation can make journal replay very slow, to the point
of taking days on spinning hard drives.  Dave Chinner explains this here:
https://lore.kernel.org/linux-xfs/20220509230918.GP1098723@dread.disaster.area/.
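As an aside, for anyone who wants to experiment with reflink behavior and
granularity: a reflink copy is just the FICLONE ioctl (the same thing
`cp --reflink=always` does).  A minimal Python sketch, with made-up file
names:

  import fcntl

  # FICLONE (_IOW(0x94, 9, int)): make dst share all of src's extents, so
  # only metadata is written; blocks are copied lazily on later writes.
  FICLONE = 0x40049409

  with open("golden.img", "rb") as src, open("clone.img", "wb") as dst:
      fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

Running filefrag -v on both files afterwards should show the extents flagged
as shared, which makes it easy to watch how quickly small writes fragment the
clone.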

> btrfs: very small granularity (4K) and many integrated features. Cons: bad
> performance overall, especially when using mechanical HDDs

Also poor out-of-space handling and unbounded worst-case latency.

> vdo: provides small-granularity (4K) thin provisioning, compression and
> deduplication. Cons: (still) out-of-tree; requires a powerloss-protected
> writeback cache to maintain good performance; no snapshot capability
> 
> zfs: designed from the ground up for pervasive CoW, with many features and
> ARC/L2ARC. Cons: out-of-tree; using a small granularity (4K) means bad
> overall performance; using a big granularity (128K by default) is a
> necessary compromise for most HDD pools.

Is this still a problem on NVMe storage?  HDDs will not really be fast
no matter what one does, at least unless there is a write-back cache
that can convert random I/O to sequential I/O.  Even that helps only if
your working set fits in cache, or if your workload is write-mostly.

> For what it is worth, I settled on ZFS when using out-of-tree modules is not
> an issue, and on lvmthin otherwise (but I plan to use xfs + reflink more in
> the future).
> 
> Do you have any information to share about dm-thin v2? I heard about it some
> years ago, but I found no recent info.

It does not exist yet.  Joe Thornber would be the person to ask
regarding any plans to create it.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

