linux-lvm.redhat.com archive mirror
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Zdenek Kabelac <zdenek.kabelac@gmail.com>
Cc: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] LVM performance vs direct dm-thin
Date: Wed, 2 Feb 2022 19:23:13 -0500	[thread overview]
Message-ID: <YfsgcwXnOouviZgc@itl-email> (raw)
In-Reply-To: <3adf6eb1-94ac-dd91-e3e6-f0d44cd36b89@gmail.com>


On Wed, Feb 02, 2022 at 11:04:37AM +0100, Zdenek Kabelac wrote:
> Dne 02. 02. 22 v 3:09 Demi Marie Obenour napsal(a):
> > On Sun, Jan 30, 2022 at 06:43:13PM +0100, Zdenek Kabelac wrote:
> > > Dne 30. 01. 22 v 17:45 Demi Marie Obenour napsal(a):
> > > > On Sun, Jan 30, 2022 at 11:52:52AM +0100, Zdenek Kabelac wrote:
> > > > > Dne 30. 01. 22 v 1:32 Demi Marie Obenour napsal(a):
> > > > > > On Sat, Jan 29, 2022 at 10:32:52PM +0100, Zdenek Kabelac wrote:
> > > > > > > Dne 29. 01. 22 v 21:34 Demi Marie Obenour napsal(a):
> > > My biased advice would be to stay with lvm2. There is a lot of work, many
> > > things are not well documented, and getting everything running correctly
> > > will take a lot of effort. (Docker, in fact, did not manage to do it well
> > > and was incapable of providing any recoverability.)
> > 
> > What did Docker do wrong?  Would it be possible for a future version of
> > lvm2 to be able to automatically recover from off-by-one thin pool
> > transaction IDs?
> 
> Ensuring all steps in the state machine are always correct is not exactly
> simple. But since I've not heard about the off-by-one problem for a long
> while, I believe we've managed to close all the holes and bugs in the
> double-commit system and metadata handling by thin-pool and lvm2... (for
> recent lvm2 & kernel)

How recent are you talking about?  Are there fixes that can be
cherry-picked?  I somewhat recently triggered this issue on a test
machine, so I would like to know.
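For anyone else debugging this: a minimal sketch of how the off-by-one can be detected, under the assumption of a hypothetical pool named vg/pool (the dm device name vg-pool-tpool and the awk field position follow dmsetup's documented thin-pool status line, where transaction_id is the fourth field).

```shell
# check_txid compares the kernel's idea of the thin-pool transaction_id
# with the value recorded in lvm2 metadata and reports any mismatch.
check_txid() {
    if [ "$1" != "$2" ]; then
        echo "transaction_id mismatch: kernel=$1 lvm=$2"
        return 1
    fi
    echo "transaction ids match ($1)"
}

# On a live system the two values would come from (pool vg/pool is a
# hypothetical name; both commands require root):
#   kernel_txid=$(dmsetup status vg-pool-tpool | awk '{print $4}')
#   lvm_txid=$(lvs --noheadings -o transaction_id vg/pool | tr -d ' ')
#   check_txid "$kernel_txid" "$lvm_txid" \
#       || echo "consider lvconvert --repair vg/pool"
```

This is only a detection sketch, not a recovery procedure; whether lvconvert --repair is the right next step depends on the state of the pool metadata.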

> > > It's difficult - if you were distributing lvm2 with an exact kernel
> > > version & udev & systemd in a single Linux distro, it would eliminate a
> > > huge set of troubles...
> > 
> > Qubes OS comes close to this in practice.  systemd and udev versions are
> > known and fixed, and Qubes OS ships its own kernels.
> 
> Systemd/udev evolves - so fixed today doesn't really mean the same version
> will be there tomorrow.  And unfortunately systemd is known to introduce
> backward-incompatible changes from time to time...

Thankfully, in Qubes OS’s dom0, the version of systemd is frozen and
will never change throughout an entire release.

> > > A chain of filesystem->block_layer->filesystem->block_layer is something
> > > you most likely do not want to use for any well-performing solution...
> > > But it's OK for testing...
> > 
> > How much of this is due to the slow loop driver?  How much of it could
> > be mitigated if btrfs supported an equivalent of zvols?
> 
> Here you are missing the core of the problem from the kernel POV, i.e. how
> memory allocation works and what approximations the kernel makes with
> buffer handling and so on.
> So whoever is using 'loop' devices in production systems in the way
> described above has never really tested any corner-case logic....

In Qubes OS the loop device is always passed through to a VM or used as
the base device for an old-style device-mapper snapshot.  It is never
mounted on the host.  Are there known problems with either of these
configurations?
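
As an aside, one mitigation for the double-caching cost of file-backed loop devices is attaching them with direct I/O, so reads and writes bypass the host page cache on the backing file. A minimal sketch (the backing-file path is a hypothetical example; attaching requires root):

```shell
# build_losetup_args prints the losetup arguments for attaching a backing
# file as a loop device with direct I/O enabled, which avoids double
# buffering in filesystem->loop->filesystem chains.
build_losetup_args() {
    printf '%s\n' --find --show --direct-io=on "$1"
}

# On a live system (path is a hypothetical example; requires root):
#   loopdev=$(losetup $(build_losetup_args /var/lib/qubes/disk.img))
#   ... pass "$loopdev" through to the VM instead of mounting on the host ...
#   losetup -d "$loopdev"
```

Whether --direct-io helps here depends on the backing filesystem supporting O_DIRECT; it is a knob to benchmark, not a guaranteed win.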

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

Thread overview: 22+ messages
2022-01-29 20:34 [linux-lvm] LVM performance vs direct dm-thin Demi Marie Obenour
2022-01-29 21:32 ` Zdenek Kabelac
2022-01-30  0:32   ` Demi Marie Obenour
2022-01-30 10:52     ` Zdenek Kabelac
2022-01-30 16:45       ` Demi Marie Obenour
2022-01-30 17:43         ` Zdenek Kabelac
2022-01-30 20:27           ` Gionatan Danti
2022-01-30 21:17             ` Demi Marie Obenour
2022-01-31  7:52               ` Gionatan Danti
2022-02-02  2:09           ` Demi Marie Obenour
2022-02-02 10:04             ` Zdenek Kabelac
2022-02-03  0:23               ` Demi Marie Obenour [this message]
2022-02-03 12:04                 ` Zdenek Kabelac
2022-02-03 12:04                   ` Zdenek Kabelac
2022-01-30 21:39         ` Stuart D. Gathman
2022-01-30 22:14           ` Demi Marie Obenour
2022-01-31 21:29             ` Marian Csontos
2022-02-03  4:48               ` Demi Marie Obenour
2022-02-03 12:28                 ` Zdenek Kabelac
2022-02-04  0:01                   ` Demi Marie Obenour
2022-02-04 10:16                     ` Zdenek Kabelac
2022-01-31  7:47           ` Gionatan Danti
