From: Ming Lei <ming.lei@redhat.com>
To: Gabriel Krisman Bertazi <krisman@collabora.com>
Cc: Hannes Reinecke <hare@suse.de>,
	lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org,
	Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>,
	linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Thu, 31 Mar 2022 09:38:42 +0800
Message-ID: <YkUGImrOCAzMT2t5@T590>
In-Reply-To: <87tubfpag3.fsf@collabora.com>

On Wed, Mar 30, 2022 at 02:22:20PM -0400, Gabriel Krisman Bertazi wrote:
> Ming Lei <ming.lei@redhat.com> writes:
> 
> > On Tue, Mar 29, 2022 at 01:20:57PM -0400, Gabriel Krisman Bertazi wrote:
> >> Ming Lei <ming.lei@redhat.com> writes:
> >> 
> >> >> I was thinking of something like this, or having a way for the server to
> >> >> only operate on the fds and do splice/sendfile.  But I don't know if it
> >> >> would be useful for many use cases.  We also want to be able to send the
> >> >> data to userspace, for instance, for userspace networking.
> >> >
> >> > I understand the big point is how to pass the I/O data to the ubd
> >> > driver's request/bio pages. But splice/sendfile just transfers data
> >> > between two FDs, so how can the block request/bio pages get filled
> >> > with the expected data? Can you explain in a bit more detail?
> >> 
> >> Hi Ming,
> >> 
> >> My idea was to split the control and data planes across different
> >> file descriptors.
> >> 
> >> A queue has an fd that is mapped to a shared memory area holding the
> >> request descriptors.  Submission/completion are done by reading/writing
> >> the index of the request in the shared memory area.
> >> 
> >> For the data plane, each request descriptor in the queue has an
> >> associated file descriptor to be used for data transfer, which is
> >> preallocated at queue creation time.  I'm mapping the bio linearly,
> >> from offset 0, onto these descriptors in .queue_rq().  Userspace
> >> operates on these data file descriptors with regular RW syscalls,
> >> direct splice to another fd or pipe, or mmaps them to move data
> >> around.  The data stays available on that fd until the IO is completed
> >> through the queue fd.  After an operation is completed, the fds are
> >> reused for the next IO on that queue position.
> >> 
> >> Hannes has pointed out the issues with fd limits. :)
> >
> > OK, thanks for the detailed explanation!
> >
> > Also you could switch to mapping each request queue/disk to a single FD,
> > with every request mapped to one fixed extent of the 'file' via rq->tag;
> > since we have a max sectors limit for each request, the fd limits can be
> > avoided.
> >
> > But I am wondering whether this approach is friendly to the userspace-side
> > implementation, since there is no buffer visible to userspace, only FDs.
> 
> The advantage would be not mapping the request data into userspace when
> we can avoid it, since it would then be possible to just forward the
> data inside the kernel.  But my latest understanding is that most use
> cases will want to manipulate the data directly anyway, perhaps to
> checksum it, or even to send it through userspace networking.  It is no
> longer clear to me that we'd benefit from not always mapping the
> requests to userspace.

Yeah, I think it is more flexible and usable to let userspace operate on
the data directly as a generic solution: for example, implementing a disk
that reads/writes a qcow2 image, or one that reads from/writes to the
network by parsing some protocol, or whatever else.
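
For instance, with the per-request data fd design you describe, I could
imagine the userspace side looking roughly like the loop below. This is
purely a hypothetical sketch: the descriptor layout, the queue fd
read/write semantics, and every name in it are made up for illustration
and are not the actual interface of either prototype. Error handling is
mostly elided. Here the virtual disk is backed by a plain image file:

/* Hypothetical userspace server loop for a split control/data plane. */
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

#define QUEUE_DEPTH 128

/* Assumed layout of a request descriptor in the shared memory area. */
struct ubd_desc {
	uint8_t  op;		/* 0 = read, 1 = write */
	uint32_t len;		/* bytes to transfer */
	uint64_t sector;	/* LBA on the virtual disk */
};

int serve_queue(int queue_fd, const int *data_fds, int backing_fd)
{
	struct ubd_desc *descs;
	uint16_t idx;

	/* The request descriptors live in memory shared with the driver. */
	descs = mmap(NULL, QUEUE_DEPTH * sizeof(*descs),
		     PROT_READ | PROT_WRITE, MAP_SHARED, queue_fd, 0);
	if (descs == MAP_FAILED)
		return -1;

	for (;;) {
		/* Reading the queue fd yields the index of a new request. */
		if (read(queue_fd, &idx, sizeof(idx)) != sizeof(idx))
			break;

		struct ubd_desc *d = &descs[idx];
		char *buf = malloc(d->len);

		if (!buf)
			break;

		if (d->op == 1) {
			/* WRITE: bio data is mapped at offset 0 of the data fd. */
			pread(data_fds[idx], buf, d->len, 0);
			pwrite(backing_fd, buf, d->len, d->sector << 9);
		} else {
			/* READ: fill the data fd; the driver moves it into the bio. */
			pread(backing_fd, buf, d->len, d->sector << 9);
			pwrite(data_fds[idx], buf, d->len, 0);
		}
		free(buf);

		/* Writing the index back completes the request. */
		if (write(queue_fd, &idx, sizeof(idx)) != sizeof(idx))
			break;
	}
	return 0;
}

The pread/pwrite on the data fd could then become mmap, or splice through
a pipe, when avoiding the copy matters. And if the number of fds becomes a
problem, the rq->tag idea above fits the same loop: a single per-queue
data fd, with each request living at a fixed offset such as
tag * max_request_bytes.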

> I've been looking at your implementation and I really like how simple it
> is.  I think it's the most promising approach to this feature that I've
> reviewed so far.  I'd like to send you a few patches for bugs I found
> while testing it and keep working on making it upstreamable.  How can I
> send you those patches?  Is it fine to just email you, or should I also
> cc linux-block, even though this is still out-of-tree code?

The topic has been discussed for quite a while now, and it looks like
people are still interested in it, so I'd prefer to send the patches to
linux-block if no one objects.  Then we can continue the discussion while
reviewing the patches.

Thanks,
Ming

