From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Zdenek Kabelac <zdenek.kabelac@gmail.com>
Cc: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Running thin_trim before activating a thin pool
Date: Sun, 30 Jan 2022 12:30:04 -0500
Message-ID: <YfbLMQ7a6D479vz6@itl-email>
In-Reply-To: <b66e90ec-28ed-3962-ac99-69f8e1b01936@gmail.com>



On Sun, Jan 30, 2022 at 12:18:32PM +0100, Zdenek Kabelac wrote:
> On 30. 01. 22 at 2:20, Demi Marie Obenour wrote:
> > On Sat, Jan 29, 2022 at 10:40:34PM +0100, Zdenek Kabelac wrote:
> > > On 29. 01. 22 at 21:09, Demi Marie Obenour wrote:
> > > > On Sat, Jan 29, 2022 at 08:42:21PM +0100, Zdenek Kabelac wrote:
> > > > > On 29. 01. 22 at 19:52, Demi Marie Obenour wrote:
> > > > > > Is it possible to configure LVM2 so that it runs thin_trim before it
> > > > > > activates a thin pool?  Qubes OS currently runs blkdiscard on every thin
> > > > > > volume before deleting it, which is slow and unreliable.  Would running
> > > > > > thin_trim during system startup provide a better alternative?
> > > > > 
> > > > > Hi
> > > > > 
> > > > > 
> > > > > Nope, there is currently no support on the lvm2 side for this.
> > > > > Feel free to open an RFE.
> > > > 
> > > > Done: https://bugzilla.redhat.com/show_bug.cgi?id=2048160
> > > > 
> > > > 
> > > 
> > > Thanks
> > > 
> > > Although your use-case, thin pool on top of VDO, is not really a good plan,
> > > and there is a good reason why lvm2 does not support this device stack
> > > directly (i.e. a thin-pool data LV placed on a VDO LV).
> > > I'd say you are stepping on very, very thin ice...
> > 
> > Thin pool on VDO is not my actual use case.  The reason for the
> > ticket is slow discards of thin devices that are about to be deleted;
> 
> Hi
> 
> Discard of thins is itself, AFAIC, pretty fast - unless you have massively
> sized thin devices with many GiB of metadata; obviously you cannot process
> that amount of metadata in nanoseconds (and there are kernel patches in
> preparation to make it even faster).

Would you be willing and able to share those patches?

> The real problem is the speed of discard on the physical devices.
> You could actually try to feel the difference with:
> lvchange --discards passdown|nopassdown thinpool

In Qubes OS I believe we do need the discards to be passed down
eventually, but I doubt it needs to be synchronous.  Being able to run
the equivalent of `fstrim -av` periodically would be amazing.  I’m
CC’ing Marek Marczykowski-Górecki (Qubes OS project lead) in case he
has something to say.
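
For concreteness, the kind of flow I have in mind looks roughly like
the sketch below.  The lvchange command is real; the thin_trim step is
the part lvm2 would need to orchestrate, and the device paths are
illustrative only, since lvm2 keeps a pool's hidden _tdata/_tmeta LVs
inaccessible:

    # Keep day-to-day deletes cheap: record discards in the pool's
    # metadata only, without synchronously hitting the physical device.
    lvchange --discards nopassdown vg/pool0

    # Periodically (e.g. at boot, while the pool is NOT active), pass
    # the pool's unprovisioned space down to the underlying device
    # using thin_trim from thin-provisioning-tools:
    thin_trim --metadata-dev /dev/mapper/vg-pool0_tmeta \
              --data-dev /dev/mapper/vg-pool0_tdata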

> Also, it's very important to keep the metadata on a fast storage device (SSD/NVMe)!
> Keeping metadata on the same HDD spindle as the data is always going to feel slow
> (in fact it's quite pointless to talk about performance while using an HDD...)

That explains why I had such a horrible experience with my initial
(split between NVMe and HDD) install.  I would not be surprised if some
or all of the metadata volume wound up on the spinning disk.
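
In case it is useful to anyone else reading this: as far as I
understand the lvm2 tooling, metadata placement can be controlled
explicitly.  A sketch with illustrative device, VG, and LV names:

    # Create the data LV on the HDD and the metadata LV on the NVMe PV,
    # then tie them together into a thin pool:
    lvcreate -n pool0      -L 500G vg /dev/sdb1
    lvcreate -n pool0_meta -L   1G  vg /dev/nvme0n1p2
    lvconvert --type thin-pool --poolmetadata vg/pool0_meta vg/pool0

    # Or migrate the metadata of an existing pool off the spinning disk:
    pvmove -n vg/pool0_tmeta /dev/sdb1 /dev/nvme0n1p2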

> > you can find more details in the linked GitHub issue.  That said, now I
> > am curious why you state that dm-thin on top of dm-vdo (that is,
> > userspace/filesystem/VM/etc ⇒ dm-thin data (*not* metadata) ⇒ dm-vdo ⇒
> > hardware/dm-crypt/etc) is a bad idea.  It seems to be a decent way to
> 
> Out-of-space recoveries are ATM much harder than we want them to be.

Okay, thanks!  Will this be fixed in a future version?

> So as long as the user can maintain free space on the VDO and thin-pool, it's
> OK. Once the user runs out of space, recovery is a pretty hard task (and there
> is a reason we have support...)

Out of space is already a tricky issue in Qubes OS.  I certainly would
not want to make it worse.
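
For what it's worth, one mitigation is lvm2's own monitoring and
auto-extension.  These are real lvm.conf settings; the thresholds below
are just example values:

    # /etc/lvm/lvm.conf: auto-extend a monitored thin pool once it
    # crosses 70% usage, growing it by 20% each time.
    activation {
        thin_pool_autoextend_threshold = 70
        thin_pool_autoextend_percent = 20
    }

    # Watch data and metadata usage before it becomes an emergency:
    lvs -o lv_name,data_percent,metadata_percent vg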

> > add support for efficient snapshots of data stored on a VDO volume, and
> > to have multiple volumes on top of a single VDO volume.  Furthermore,
> 
> We hope to add some direct 'snapshot' support to VDO so users will not
> need to combine the two technologies.

Does that include support for splitting a VDO volume into multiple,
individually-snapshottable volumes, the way thin works?

> Thin is oriented more towards extreme speed.
> VDO is more about 'compression & deduplication' - i.e. space efficiency.
> 
> Combining the two tends to harm the advantages of each.

That makes sense.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



