On Tue, Apr 19, 2022 at 01:42:49PM -0700, Eric Wheeler wrote:
> On Mon, 18 Apr 2022, Kent Overstreet wrote:
> > On Mon, Apr 18, 2022 at 06:16:09PM -0700, Eric Wheeler wrote:
> > > If bcachefs immediately calls .ki_complete() after queueing the IO within
> > > bcachefs but before it commits to bcachefs's disk, then loop.c will mark
> > > the IO as complete (blk_mq_complete_request via lo_rw_aio_complete) too
> > > soon after .write_iter is called, thus breaking the expected ordering in
> > > the filesystem (eg, xfs) atop of the loop device.
> >
> > We don't call .ki_complete (in DIO mode) until the write has completed,
> > including the btree update - this is necessary for read-after-write
> > consistency.
>
> Good, I figured it would and thought I would ask in case that was the
> issue.
>
> > If your description of the loopback code is correct, that does sound
> > suspicious though - queueing every IO to a work item shouldn't hurt
> > anything from a correctness POV, but it definitely shouldn't be needed
> > or wanted from a performance POV.
>
> REQ_OP_FLUSH just calls vfs_sync (not WQ-queued) and all READ/WRITE IOs
> hit the WQ. Parallel per-socket WQs might help performance, since the
> block layer doesn't care about ordering and filesystems (or at least
> bcachefs!) call ki_complete() after the write finishes, so consistency
> should be ok.
>
> Generally speaking, I avoid loop devs for production systems unless
> absolutely necessary.
>
> > What are you seeing?
>
> Nothing real-world.
>
> I was just reviewing loop.c in preparation for leaving bcache+dm-thin
> for bcachefs+loop, to see if there are any DIO issues to consider.
>
> IMHO, it would be neat to have native bcachefs block devices and avoid
> the weird loop.c serial WQ (and possibly other issues loop.c has to deal
> with that native bcachefs wouldn't).
>
> This is a possible workflow for native bcachefs devices. Since bcachefs
> is awesome - it would provide SSD caching, snapshots, encryption, and raw
> DIO block devices into VMs:
>
> ]# bcachefs subvolume create /volumes/vol1
> ]# truncate -s 1T /volumes/vol1/data.raw
> ]# bcachefs blkdev register /volumes/vol1/data.raw
> /dev/bcachefs0
> ]# bcachefs subvolume snapshot /volumes/vol1 /volumes/2022-04-19_vol1
> ]# bcachefs blkdev register /volumes/2022-04-19_vol1/data.raw
> /dev/bcachefs1
> ]# bcachefs blkdev unregister /dev/bcachefs0
>
> And udev could be made to do something like this:
> ]# ls -l /dev/bcachefs/volumes/vol1/data.raw
> lrwxrwxrwx 1 root root 7 Apr  9 17:35 data.raw -> /dev/bcachefs0
>
> Which means the VM can have its disk defined as
> /dev/bcachefs/volumes/vol1/data.raw in its libvirt config, and thus
> point at a real block device!
>
> That would make bcachefs the most awesome disk volume manager, ever!

Kent, if you do decide to go this route, please use the disk sequence
number as the number part of the device name. So instead of
/dev/bcachefs<first free index>, it would be /dev/bcachefs<diskseq>. The
latter is guaranteed to never be reused, while the former is not. Yes,
other block device drivers all have the same problem, but I would rather
fix it in at least one of them. Also, this would mean that opening
/dev/bcachefs/volumes/something would be just as race-free as opening a
filesystem path, which otherwise could not be guaranteed without some
additional kernel support.

--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
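[Editor's illustration of the diskseq suggestion above: a minimal sketch of
how userspace can read the kernel's disk sequence number for a block device,
using the BLKGETDISKSEQ ioctl from <linux/fs.h> (available since Linux 5.15;
the same value is also exported at /sys/block/<dev>/diskseq). The default
path /dev/bcachefs0 is the hypothetical device name from the workflow in the
thread, not an existing interface.]

/* diskseq.c - print the disk sequence number of a block device */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKGETDISKSEQ (Linux 5.15+) */

int main(int argc, char **argv)
{
        /* /dev/bcachefs0 is hypothetical, taken from the workflow above */
        const char *path = argc > 1 ? argv[1] : "/dev/bcachefs0";
        uint64_t seq;
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, BLKGETDISKSEQ, &seq) < 0) {
                perror("BLKGETDISKSEQ");
                close(fd);
                return 1;
        }
        printf("%s: diskseq %llu\n", path, (unsigned long long)seq);
        close(fd);
        return 0;
}

A udev rule (or the driver's own naming scheme, as suggested) could embed
that value in the device name or symlink, so a stale path can never silently
point at a later, unrelated device.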