From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Eric Wheeler <bcachefs@lists.ewheeler.net>,
	Kent Overstreet <kent.overstreet@gmail.com>
Cc: linux-bcachefs@vger.kernel.org
Subject: Re: bcachefs loop devs
Date: Thu, 2 Jun 2022 04:45:47 -0400
Message-ID: <Yph4vK0KAjokd1UL@itl-email>
In-Reply-To: <51a52bd6-b535-e5ac-12a1-2f6dc1a84353@ewheeler.net>

On Tue, Apr 19, 2022 at 01:42:49PM -0700, Eric Wheeler wrote:
> On Mon, 18 Apr 2022, Kent Overstreet wrote:
> > On Mon, Apr 18, 2022 at 06:16:09PM -0700, Eric Wheeler wrote:
> > > If bcachefs immediately calls .ki_complete() after queueing the IO within
> > > bcachefs but before it commits to bcachefs's disk, then loop.c will mark
> > > the IO as complete (blk_mq_complete_request via lo_rw_aio_complete) too
> > > soon after .write_iter is called, thus breaking the expected ordering in
> > > the filesystem (e.g., xfs) atop the loop device.
> > 
> > We don't call .ki_complete (in DIO mode) until the write has completed,
> > including the btree update - this is necessary for read-after-write consistency.
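
In kiocb terms, the contract Kent describes is roughly the following (a
simplified sketch with made-up names - not the actual bcachefs code, and
using the two-argument ki_complete signature of recent kernels):

	/* Sketch: the iocb is completed only from the post-commit path. */
	static void my_dio_write_commit_done(struct my_dio_write *dio, long ret)
	{
		/*
		 * Precondition: the data blocks are on disk AND the btree
		 * update pointing at them has committed.  Only now may the
		 * layer above (here loop.c, via lo_rw_aio_complete) observe
		 * the IO as finished.
		 */
		dio->iocb->ki_complete(dio->iocb, ret);
	}
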
> 
> Good, I figured it would and thought I would ask in case that was the 
> issue.  
>  
> > If your description of the loopback code is correct, that does sound
> > suspicious though - queuing every IO to a work item shouldn't hurt anything
> > from a correctness POV, but it definitely shouldn't be needed or wanted from
> > a performance POV.
> 
> REQ_OP_FLUSH just calls vfs_fsync (not WQ-queued) and all READ/WRITE IOs
> hit the WQ.  Parallel per-socket WQs might help performance, since the block
> layer doesn't care about ordering and filesystems (or at least bcachefs!)
> call ki_complete() after the write finishes, so consistency should be OK.
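
For readers without loop.c handy, the dispatch Eric describes is roughly
this (a heavily simplified sketch of drivers/block/loop.c, not verbatim
kernel code):

	static int do_req_filebacked(struct loop_device *lo, struct request *rq)
	{
		switch (req_op(rq)) {
		case REQ_OP_FLUSH:
			/* Synchronous: just fsync the backing file. */
			return vfs_fsync(lo->lo_backing_file, 0);
		case REQ_OP_READ:
		case REQ_OP_WRITE:
			/*
			 * Funneled through the single workqueue; in DIO mode
			 * this issues an async kiocb whose ki_complete
			 * callback (lo_rw_aio_complete) finishes the blk-mq
			 * request.
			 */
			return lo_rw_aio(lo, blk_mq_rq_to_pdu(rq),
					 (loff_t)blk_rq_pos(rq) << 9,
					 req_op(rq) == REQ_OP_WRITE);
		default:
			return -EIO;
		}
	}
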
> 
> Generally speaking I avoid loop devs for production systems unless
> absolutely necessary.
> 
> > What are you seeing?
> 
> Nothing real-world.
> 
> I was just reviewing loop.c in preparation for leaving bcache+dm-thin
> for bcachefs+loop to see if there are any DIO issues to consider.
> 
> IMHO, it would be neat to have native bcachefs block devices and avoid
> the weird loop.c serial WQ (and possibly other issues loop.c has to deal
> with that native bcachefs wouldn't).
> 
> This is a possible workflow for native bcachefs devices.  Since bcachefs
> is awesome, it would provide SSD caching, snapshots, encryption, and raw
> DIO block devices into VMs:
> 
> 	]# bcachefs subvolume create /volumes/vol1
> 	]# truncate -s 1T /volumes/vol1/data.raw
> 	]# bcachefs blkdev register /volumes/vol1/data.raw
> 	/dev/bcachefs0
> 	]# bcachefs subvolume snapshot /volumes/vol1 /volumes/2022-04-19_vol1
> 	]# bcachefs blkdev register /volumes/2022-04-19_vol1/data.raw
> 	/dev/bcachefs1
> 	]# bcachefs blkdev unregister /dev/bcachefs0
> 
> And udev could be made to do something like this:
> 	]# ls -l /dev/bcachefs/volumes/vol1/data.raw
> 	lrwxrwxrwx 1 root root 7 Apr  9 17:35   data.raw -> /dev/bcachefs0
> 
> Which means the VM can have its disk defined as
> /dev/bcachefs/volumes/vol1/data.raw in its libvirt config, and thus point
> at a real block device!
> 
> That would make bcachefs the most awesome disk volume manager, ever!

Kent, if you do decide to go this route, please use the disk sequence
number as the number part of the device name.  So instead of
/dev/bcachefs<minor>, it would be /dev/bcachefs<diskseq>.  The latter is
guaranteed to never be reused, while the former is not.

Yes, other block device drivers all have the same problem, but I would
rather fix it in at least one of them.  Also, this would mean that
opening /dev/bcachefs/volumes/something would be just as race-free as
opening a filesystem path, which otherwise could not be guaranteed
without some additional kernel support.
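
For example, a program could pin the exact device instance it opened with
the BLKGETDISKSEQ ioctl (available since Linux 5.15).  A minimal sketch;
with the diskseq baked into the device name, this check would be implicit
in the open() itself:

	/* check_diskseq.c - verify a block device is the instance we expect */
	#include <stdio.h>
	#include <stdlib.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>	/* BLKGETDISKSEQ, Linux >= 5.15 */

	int main(int argc, char **argv)
	{
		if (argc != 3) {
			fprintf(stderr, "usage: %s <blockdev> <expected-diskseq>\n",
				argv[0]);
			return 1;
		}

		int fd = open(argv[1], O_RDONLY | O_CLOEXEC);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		__u64 seq;
		if (ioctl(fd, BLKGETDISKSEQ, &seq) < 0) {
			perror("BLKGETDISKSEQ");
			return 1;
		}

		if (seq != strtoull(argv[2], NULL, 0)) {
			/* A different device has reused this name. */
			fprintf(stderr, "diskseq mismatch: got %llu\n",
				(unsigned long long)seq);
			return 1;
		}

		close(fd);
		return 0;
	}
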

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

