On Wed, Feb 02, 2022 at 11:04:37AM +0100, Zdenek Kabelac wrote:
> On 02. 02. 22 at 3:09, Demi Marie Obenour wrote:
> > On Sun, Jan 30, 2022 at 06:43:13PM +0100, Zdenek Kabelac wrote:
> > > On 30. 01. 22 at 17:45, Demi Marie Obenour wrote:
> > > > On Sun, Jan 30, 2022 at 11:52:52AM +0100, Zdenek Kabelac wrote:
> > > > > On 30. 01. 22 at 1:32, Demi Marie Obenour wrote:
> > > > > > On Sat, Jan 29, 2022 at 10:32:52PM +0100, Zdenek Kabelac wrote:
> > > > > > > On 29. 01. 22 at 21:34, Demi Marie Obenour wrote:
> > > My biased advice would be to stay with lvm2. There is a lot of work,
> > > many things are not well documented, and getting everything running
> > > correctly will take a lot of effort (Docker in fact did not manage
> > > to do it well and was incapable of providing any recoverability).
> >
> > What did Docker do wrong? Would it be possible for a future version of
> > lvm2 to be able to automatically recover from off-by-one thin pool
> > transaction IDs?
>
> Ensuring that all steps in the state machine are always correct is not
> exactly simple. But since I've not heard about the off-by-one problem
> for a long while, I believe we've managed to close all the holes and
> bugs in the double-commit system and in the metadata handling by
> thin-pool and lvm2... (for recent lvm2 & kernel)

How recent are you talking about? Are there fixes that can be
cherry-picked? I triggered this issue fairly recently on a test machine,
so I would like to know.

> > > It's difficult - if you were distributing lvm2 with an exact kernel
> > > version & udev & systemd in a single Linux distro, it would reduce
> > > a huge set of troubles...
> >
> > Qubes OS comes close to this in practice. The systemd and udev
> > versions are known and fixed, and Qubes OS ships its own kernels.
>
> Systemd/udev evolve - so fixed today doesn't really mean the same
> version will be there tomorrow. And unfortunately systemd is known to
> introduce backward-incompatible changes from time to time...

Thankfully, in Qubes OS’s dom0 the version of systemd is frozen and will
never change throughout an entire release.

> > > The chain filesystem->block_layer->filesystem->block_layer is
> > > something you most likely do not want to use for any well-performing
> > > solution... But it's ok for testing...
> >
> > How much of this is due to the slow loop driver? How much of it could
> > be mitigated if btrfs supported an equivalent of zvols?
>
> Here you are missing the core of the problem from the kernel's POV,
> i.e. how memory allocation works and what approximations the kernel
> makes in buffer handling and so on. So whoever is using 'loop' devices
> in production systems in the way described above has never really
> tested any corner-case logic...

In Qubes OS the loop device is always passed through to a VM or used as
the base device for an old-style device-mapper snapshot. It is never
mounted on the host. Are there known problems with either of these
configurations?

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
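
P.S. For anyone else debugging the transaction ID issue: below is a
rough sketch of how one can compare the kernel's idea of the thin-pool
transaction ID with the one recorded in lvm2's metadata (the VG and
pool names are made-up examples):

    # Kernel view: the first target-specific field of the thin-pool
    # status line is the current transaction ID.
    dmsetup status vg0-pool-tpool

    # lvm2 view: the transaction ID recorded in the VG metadata.
    lvs -o lv_name,transaction_id vg0/pool

If the two disagree, lvm2 refuses to activate the pool with a
"transaction_id" mismatch error; my understanding is that recovery then
means correcting the value in a metadata backup and restoring it with
vgcfgrestore. On the loop-device side, losetup's --direct-io=on at
least avoids double caching in the host page cache:

    # Attach an image with direct I/O, bypassing the host page cache
    # (needs util-linux 2.30+ and a backing filesystem that supports
    # O_DIRECT).
    losetup --find --show --direct-io=on /path/to/disk.img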