linux-block.vger.kernel.org archive mirror
From: Kanchan Joshi <joshi.k@samsung.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>,
	Gabriel Krisman Bertazi <krisman@collabora.com>,
	lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org,
	Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>,
	linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Mon, 28 Mar 2022 11:17:17 +0530	[thread overview]
Message-ID: <20220328054717.GA16252@test-zns> (raw)
In-Reply-To: <YkCSVSk1SwvtABIW@T590>


On Mon, Mar 28, 2022 at 12:35:33AM +0800, Ming Lei wrote:
>On Tue, Feb 22, 2022 at 07:57:27AM +0100, Hannes Reinecke wrote:
>> On 2/21/22 20:59, Gabriel Krisman Bertazi wrote:
>> > I'd like to discuss an interface for implementing user space block devices,
>> > while avoiding local-network NBD solutions.  There has been recurring
>> > interest in the topic, both from researchers [1] and from the community,
>> > including a proposed session at LSFMM2018 [2] (though I don't think it
>> > happened).
>> >
>> > I've been working on top of the Google iblock implementation to find
>> > something upstreamable and would like to present my design and gather
>> > feedback on some points, in particular zero-copy and overall user space
>> > interface.
>> >
>> > The design I'm tending towards uses special fds opened by the driver to
>> > transfer data to/from the block driver, preferably through direct
>> > splicing as much as possible, to keep data in kernel space only.  This
>> > is because, in my use case, the driver usually manipulates only
>> > metadata, while data is forwarded directly through the network, or
>> > similar.  It would be neat if we could leverage the existing
>> > splice/copy_file_range syscalls so that we never need to bring
>> > disk data to user space, if we can avoid it.  I've also experimented
>> > with regular pipes, but I found no way around keeping a lot of pipes
>> > open, one for each possible command 'slot'.
>> >
>> > [1] https://dl.acm.org/doi/10.1145/3456727.3463768
>> > [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
>> >
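[For illustration, the splice-based forwarding Gabriel describes, where payload data moves between fds through a pipe without ever entering a userspace buffer, could look roughly like the sketch below. The function name and descriptor handling are hypothetical and are not the proposed driver interface:]

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Move len bytes from src_fd to dst_fd through a pipe with splice(2),
 * so the payload stays in kernel pages and is never copied into a
 * userspace buffer.  Returns the number of bytes moved. */
static ssize_t forward_via_pipe(int src_fd, int dst_fd, size_t len)
{
    int pipefd[2];
    if (pipe(pipefd) < 0)
        return -1;

    ssize_t moved = 0;
    while ((size_t)moved < len) {
        /* src -> pipe: data enters the pipe without a userspace copy */
        ssize_t in = splice(src_fd, NULL, pipefd[1], NULL,
                            len - moved, SPLICE_F_MOVE);
        if (in <= 0)
            break;
        /* pipe -> dst: still no userspace copy */
        ssize_t out = splice(pipefd[0], NULL, dst_fd, NULL,
                             (size_t)in, SPLICE_F_MOVE);
        if (out != in) {
            moved += (out > 0) ? out : 0;
            break;
        }
        moved += out;
    }
    close(pipefd[0]);
    close(pipefd[1]);
    return moved;
}
```

[A per-command pipe like this is exactly the "one pipe per command slot" cost mentioned above; the open design question is how to avoid it.]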
>> Actually, I'd rather have something like an 'inverse io_uring', where an
>> application creates a memory region separated into several 'rings' for
>> submission and completion.
>> Then the kernel could write/map the incoming data onto the rings, and the
>> application can read from there.
>> Maybe it would be worthwhile to look at virtio here.
>
>IMO it needn't be an 'inverse io_uring'; the normal io_uring SQE/CQE model
>does cover this case: the userspace part can submit SQEs beforehand
>to get a notification for each incoming io request from the kernel driver,
>then, after an io request is queued to the driver, the driver can
>post a CQE for the previously submitted SQE. The recently posted
>IORING_OP_URING_CMD patch [1] is perfect for this purpose.
I had added that as one of the potential use cases to discuss for
uring-cmd:
https://lore.kernel.org/linux-block/20220228092511.458285-1-joshi.k@samsung.com/
And your email already brings a lot of clarity to this.
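[As an aside, the pre-submitted-SQE scheme Ming describes, one outstanding command per slot where each completion carries one incoming io request, can be sketched with a toy model. All names below are made up for illustration and do not match the real ubd/ubdsrv code:]

```c
#include <assert.h>
#include <string.h>

/* Toy model of the fetch-command flow: userspace arms one command per
 * slot (stands in for queuing IORING_OP_URING_CMD SQEs ahead of time);
 * the driver completes an armed slot per incoming io request (stands in
 * for the kernel posting a CQE); handling the request re-arms the slot. */

enum slot_state { SLOT_ARMED, SLOT_BUSY };

struct cmd_slot {
    enum slot_state state;
    int tag;            /* identifies the in-flight io request */
};

#define NR_SLOTS 4

/* Userspace: submit a fetch command for every slot not already armed.
 * Returns how many slots were (re)armed. */
static int arm_all_slots(struct cmd_slot *slots, int n)
{
    int armed = 0;
    for (int i = 0; i < n; i++) {
        if (slots[i].state != SLOT_ARMED) {
            slots[i].state = SLOT_ARMED;
            armed++;
        }
    }
    return armed;
}

/* Driver: an io request arrives and completes one armed slot.
 * Returns the slot index, or -1 if every slot is busy. */
static int deliver_request(struct cmd_slot *slots, int n, int tag)
{
    for (int i = 0; i < n; i++) {
        if (slots[i].state == SLOT_ARMED) {
            slots[i].state = SLOT_BUSY;
            slots[i].tag = tag;
            return i;
        }
    }
    return -1;
}

/* Userspace: serve the io (e.g. a loop target reading/writing its
 * backing file), then re-arm the slot with a fresh fetch command. */
static void handle_and_rearm(struct cmd_slot *slots, int idx)
{
    slots[idx].state = SLOT_ARMED;
}
```

[The point of the model: the number of in-flight "fetch" commands bounds the queue depth, which is why one SQE per command slot is enough, with no per-slot pipes needed.]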

>I have written one such userspace block driver recently: [2] is the
>kernel-side blk-mq driver (ubd driver), and the userspace part is ubdsrv [3].
>Both parts look quite simple, but they are still in a very early stage; so
>far only the ubd-loop and ubd-null targets are implemented in [3]. Not only
>is the io command communication channel done via IORING_OP_URING_CMD, but
>IO handling for ubd-loop is also implemented via plain io_uring.
>
>It is basically working: for ubd-loop, I see no regression in 'xfstests -g auto'
>on the ubd block device compared with the same xfstests on the underlying disk,
>and my simple performance test on a VM shows the result isn't worse than the
>kernel loop driver with dio, and is even much better in some test situations.
Added this to my to-be-read list. Thanks for sharing.




Thread overview: 54+ messages
2022-02-21 19:59 [LSF/MM/BPF TOPIC] block drivers in user space Gabriel Krisman Bertazi
2022-02-21 23:16 ` Damien Le Moal
2022-02-21 23:30   ` Gabriel Krisman Bertazi
2022-02-22  6:57 ` Hannes Reinecke
2022-02-22 14:46   ` Sagi Grimberg
2022-02-22 17:46     ` Hannes Reinecke
2022-02-22 18:05     ` Gabriel Krisman Bertazi
2022-02-24  9:37       ` Xiaoguang Wang
2022-02-24 10:12       ` Sagi Grimberg
2022-03-01 23:24         ` Khazhy Kumykov
2022-03-02 16:16         ` Mike Christie
2022-03-13 21:15           ` Sagi Grimberg
2022-03-14 17:12             ` Mike Christie
2022-03-15  8:03               ` Sagi Grimberg
2022-03-14 19:21             ` Bart Van Assche
2022-03-15  6:52               ` Hannes Reinecke
2022-03-15  8:08                 ` Sagi Grimberg
2022-03-15  8:12                   ` Christoph Hellwig
2022-03-15  8:38                     ` Sagi Grimberg
2022-03-15  8:42                       ` Christoph Hellwig
2022-03-23 19:42                       ` Gabriel Krisman Bertazi
2022-03-24 17:05                         ` Sagi Grimberg
2022-03-15  8:04               ` Sagi Grimberg
2022-02-22 18:05   ` Bart Van Assche
2022-03-02 23:04   ` Gabriel Krisman Bertazi
2022-03-03  7:17     ` Hannes Reinecke
2022-03-27 16:35   ` Ming Lei
2022-03-28  5:47     ` Kanchan Joshi [this message]
2022-03-28  5:48     ` Hannes Reinecke
2022-03-28 20:20     ` Gabriel Krisman Bertazi
2022-03-29  0:30       ` Ming Lei
2022-03-29 17:20         ` Gabriel Krisman Bertazi
2022-03-30  1:55           ` Ming Lei
2022-03-30 18:22             ` Gabriel Krisman Bertazi
2022-03-31  1:38               ` Ming Lei
2022-03-31  3:49                 ` Bart Van Assche
2022-04-08  6:52     ` Xiaoguang Wang
2022-04-08  7:44       ` Ming Lei
2022-02-23  5:57 ` Gao Xiang
2022-02-23  7:46   ` Damien Le Moal
2022-02-23  8:11     ` Gao Xiang
2022-02-23 22:40       ` Damien Le Moal
2022-02-24  0:58         ` Gao Xiang
2022-06-09  2:01           ` Ming Lei
2022-06-09  2:28             ` Gao Xiang
2022-06-09  4:06               ` Ming Lei
2022-06-09  4:55                 ` Gao Xiang
2022-06-10  1:52                   ` Ming Lei
2022-07-28  8:23                 ` Pavel Machek
2022-03-02 16:52 ` Mike Christie
2022-03-03  7:09   ` Hannes Reinecke
2022-03-14 17:04     ` Mike Christie
2022-03-15  6:45       ` Hannes Reinecke
2022-03-05  7:29 ` Dongsheng Yang
