From: Ming Lei <ming.lei@redhat.com>
To: Gabriel Krisman Bertazi <krisman@collabora.com>
Cc: Hannes Reinecke <hare@suse.de>,
lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org,
Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>,
linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Tue, 29 Mar 2022 08:30:57 +0800
Message-ID: <YkJTQW7aAjDGKL9p@T590>
In-Reply-To: <87o81prfrg.fsf@collabora.com>

On Mon, Mar 28, 2022 at 04:20:03PM -0400, Gabriel Krisman Bertazi wrote:
> Ming Lei <ming.lei@redhat.com> writes:
>
> > IMO there is no need for an 'inverse io_uring'; the normal io_uring
> > SQE/CQE model covers this case. The userspace part can submit SQEs
> > beforehand to get a notification for each incoming io request from
> > the kernel driver: once an io request is queued to the driver, the
> > driver can complete a CQE for a previously submitted SQE. The
> > recently posted IORING_OP_URING_CMD patch [1] is a perfect fit for
> > this purpose.
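
[For reference, a minimal sketch of this fetch model on the server side,
assuming liburing with IORING_SETUP_SQE128; the ubd-specific names
(ubd_char_fd, UBD_IO_FETCH_REQ, handle_io) follow the spirit of [2]/[3]
but are illustrative assumptions, not a stable ABI:]

#include <liburing.h>

#define QD	64

/* hypothetical value; the real opcode lives in [2]'s uapi header */
#define UBD_IO_FETCH_REQ	0x20

extern void handle_io(unsigned long long tag);	/* hypothetical handler */

static void fetch_loop(int ubd_char_fd)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;

	/* SQE128: the uring_cmd payload is carried inside the big SQE */
	io_uring_queue_init(QD, &ring, IORING_SETUP_SQE128);

	/* submit one FETCH command per tag beforehand */
	for (int tag = 0; tag < QD; tag++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		sqe->opcode = IORING_OP_URING_CMD;
		sqe->fd = ubd_char_fd;		/* per-device char device */
		sqe->cmd_op = UBD_IO_FETCH_REQ;	/* "give me the next io" */
		sqe->user_data = tag;
	}
	io_uring_submit(&ring);

	/* each CQE means one block io request arrived for that tag;
	 * handle it, then re-queue a FETCH for the tag (elided) */
	while (!io_uring_wait_cqe(&ring, &cqe)) {
		handle_io(cqe->user_data);
		io_uring_cqe_seen(&ring, cqe);
	}
}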
> >
> > I have recently written one such userspace block driver: [2] is the
> > kernel-side blk-mq driver (the ubd driver), and the userspace side is
> > ubdsrv [3]. Both parts look quite simple but are still at a very
> > early stage; so far only the ubd-loop and ubd-null targets are
> > implemented in [3]. Not only is the io command communication channel
> > done via IORING_OP_URING_CMD, the IO handling for ubd-loop is
> > implemented via plain io_uring as well.
> >
> > It is basically working: for ubd-loop, 'xfstests -g auto' on the ubd
> > block device shows no regression compared with the same xfstests on
> > the underlying disk, and my simple performance test in a VM shows
> > results no worse than the kernel loop driver with dio, and even much
> > better in some test situations.
>
> Thanks for sharing. This is a very interesting implementation that
> seems to cover the original use case quite well. I'm giving it a try
> and will report back.
>
> > Wrt. these userspace block driver things, I am particularly
> > interested in the following sub-topics:
> >
> > 1) zero copy
> > - the ubd driver [2] needs one data copy: for a WRITE request, pages
> > in the io request are copied to the userspace buffer before ubdsrv
> > handles the WRITE IO; for a READ request, the reverse copy is done
> > after ubdsrv handles the READ request (a sketch of this copy follows
> > the list below)
> >
> > - I tried zero copy via remap_pfn_range() to avoid this data copy,
> > but it looks like it can't work for the ubd driver, since pages in
> > the remapped vm area can't be retrieved by get_user_pages_*(), which
> > is called in the direct io code path
> >
> > - recently Xiaoguang Wang posted an RFC patch [4] to support zero
> > copy on tcmu, adding vm_insert_page(s)_mkspecial() for this purpose,
> > but it has the same limitation as remap_pfn_range; Xiaoguang also
> > mentioned that vm_insert_pages may work, but anonymous pages cannot
> > be remapped by vm_insert_pages.
> >
> > - the requirement here is to remap either anonymous pages or page
> > cache pages into the userspace vm, with the mapping/unmapping done at
> > runtime for each IO. Is this requirement reasonable? If yes, is there
> > an easy way to implement it in the kernel?
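
[For reference, a sketch of the single data copy described in 1) above:
roughly what a blk-mq driver has to do per WRITE request to fill the
server's buffer. Illustrative only, not the actual ubd code from [2];
error handling and the ubd specifics are elided:]

#include <linux/blk-mq.h>
#include <linux/highmem.h>
#include <linux/uaccess.h>

/* Copy one WRITE request's bio pages into the userspace buffer that
 * ubdsrv will consume; a READ does the same copy in the opposite
 * direction after ubdsrv has filled the buffer. */
static int ubd_copy_to_srv(struct request *rq, void __user *buf)
{
	struct req_iterator iter;
	struct bio_vec bv;
	size_t done = 0;

	rq_for_each_segment(bv, rq, iter) {
		void *kaddr = kmap_local_page(bv.bv_page);
		unsigned long left;

		left = copy_to_user(buf + done, kaddr + bv.bv_offset,
				    bv.bv_len);
		kunmap_local(kaddr);
		if (left)
			return -EFAULT;
		done += bv.bv_len;
	}
	return 0;
}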
>
> I've run into the same issue with my fd implementation and haven't
> been able to work around it.
>
> > 4) apply eBPF in userspace block driver
> > - this is an open topic; I don't have a specific or exact idea yet
> >
> > - is there a chance to apply eBPF to map ubd io into its target
> > handling, avoiding both the data copy and the remapping cost of zero
> > copy?
>
> I was thinking of something like this, or having a way for the server
> to operate only on the fds and do splice/sendfile. But I don't know if
> it would be useful for many use cases. We also want to be able to send
> the data to userspace, for instance for userspace networking.

I understand the big point is how to pass the io data to the ubd
driver's request/bio pages. But splice/sendfile just transfers data
between two FDs, so how can the block request/bio's pages get filled
with the expected data? Can you explain in a bit more detail?
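
[To illustrate the point, all splice(2) can do from userspace is move
bytes between two FDs through a pipe; a minimal sketch, in which nothing
can attach the data to a block driver's request/bio pages:]

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Move up to 'len' bytes from in_fd to out_fd through a pipe. The data
 * stays in the kernel, but only as pipe buffers between two FDs; it is
 * never exposed as the bio pages of a block request. */
static ssize_t move_bytes(int in_fd, int out_fd, size_t len)
{
	int p[2];
	ssize_t n;

	if (pipe(p) < 0)
		return -1;
	n = splice(in_fd, NULL, p[1], NULL, len, SPLICE_F_MOVE);
	if (n > 0)
		n = splice(p[0], NULL, out_fd, NULL, (size_t)n,
			   SPLICE_F_MOVE);
	close(p[0]);
	close(p[1]);
	return n;
}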
If the block layer is bypassed, the device won't be exposed to userspace
as a block disk.

thanks,
Ming