From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Cc: Gabriel Krisman Bertazi <krisman@collabora.com>,
	lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Thu, 24 Feb 2022 08:58:33 +0800	[thread overview]
Message-ID: <YhbYOeMUv5+U1XdQ@B-P7TQMD6M-0146.local> (raw)
In-Reply-To: <3702afe7-2918-42e7-110b-efa75c0b58e8@opensource.wdc.com>

On Thu, Feb 24, 2022 at 07:40:47AM +0900, Damien Le Moal wrote:
> On 2/23/22 17:11, Gao Xiang wrote:
> > On Wed, Feb 23, 2022 at 04:46:41PM +0900, Damien Le Moal wrote:
> >> On 2/23/22 14:57, Gao Xiang wrote:
> >>> On Mon, Feb 21, 2022 at 02:59:48PM -0500, Gabriel Krisman Bertazi wrote:
> >>>> I'd like to discuss an interface to implement user space block devices,
> >>>> while avoiding local network NBD solutions.  There has been reiterated
> >>>> interest in the topic, both from researchers [1] and from the community,
> >>>> including a proposed session in LSFMM2018 [2] (though I don't think it
> >>>> happened).
> >>>>
> >>>> I've been working on top of the Google iblock implementation to find
> >>>> something upstreamable and would like to present my design and gather
> >>>> feedback on some points, in particular zero-copy and overall user space
> >>>> interface.
> >>>>
> >>>> The design I'm leaning towards uses special fds opened by the driver to
> >>>> transfer data to/from the block driver, preferably through direct
> >>>> splicing as much as possible, to keep data only in kernel space.  This
> >>>> is because, in my use case, the driver usually only manipulates
> >>>> metadata, while data is forwarded directly through the network, or
> >>>> similar.  It would be neat if we could leverage the existing
> >>>> splice/copy_file_range syscalls so that we never need to bring
> >>>> disk data to user space when we can avoid it.  I've also experimented
> >>>> with regular pipes, but I found no way around keeping a lot of pipes
> >>>> open, one for each possible command 'slot'.
> >>>>
> >>>> [1] https://dl.acm.org/doi/10.1145/3456727.3463768
> >>>> [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
> >>>
> >>> I'm interested in this general topic too. One of our use cases is
> >>> that we need to do some processing of the network data, since many
> >>> of the protocols involved are application-layer protocols, so it
> >>> seems more reasonable to handle them in userspace. Another difference
> >>> is that we may have thousands of devices on a single machine, because
> >>> we want to run as many containers as possible, so a block-device-based
> >>> solution seems suboptimal to us. Still, I'm interested in this topic
> >>> and would like to hear more ideas.
> >>>
> >>> Btw, as for general userspace block device solutions, IMHO there can
> >>> be deadlock issues involving direct reclaim, writeback, and the
> >>> userspace implementation, because writeback of user requests can loop
> >>> back into the kernel side (and the dependency can even cross threads).
> >>> I think these are hard to fix in a generic way for userspace block
> >>> devices. For example,
> >>> https://lore.kernel.org/r/CAM1OiDPxh0B1sXkyGCSTEpdgDd196-ftzLE-ocnM8Jd2F9w7AA@mail.gmail.com
> >>
> >> This is already fixed with prctl() support. See:
> >>
> >> https://lore.kernel.org/linux-fsdevel/20191112001900.9206-1-mchristi@redhat.com/
> > 
> > As I mentioned above, IMHO, per-task state like this can avoid the
> > majority of such deadlock cases, but there may still be dependencies
> > between threads, such as submitting work to another kernel workqueue
> > and waiting on it (in principle at least), since a userspace program
> > can issue any syscall, unlike an in-kernel driver. So I think there is
> > still some risk inherent to a generic userspace block device; please
> > kindly correct me if I'm wrong.
> 
> Not sure what you mean with all this. prctl() works per process/thread
> and a context that has PR_SET_IO_FLUSHER set will have PF_MEMALLOC_NOIO
> set. So for the case of a user block device driver, setting this means
> that it cannot reenter itself during a memory allocation, regardless of
> the system call it executes (FS etc): all memory allocations in any
> syscall executed by the context will have GFP_NOIO.
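
(Side note: the flag Damien refers to is set per thread from userspace
via prctl(); a minimal sketch, assuming Linux >= 5.6 and
CAP_SYS_RESOURCE, with the helper name invented for illustration:)

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_IO_FLUSHER
#define PR_SET_IO_FLUSHER 57    /* for older uapi headers */
#endif

/* Mark the calling thread as an IO flusher: the kernel then sets
 * PF_MEMALLOC_NOIO for it, so its allocations behave as GFP_NOIO. */
static int mark_io_flusher(void)
{
        if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) == -1) {
                perror("prctl(PR_SET_IO_FLUSHER)");
                return -1;
        }
        return 0;
}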

I mean,

assume PR_SET_IO_FLUSHER is already set on Thread A via prctl(). Since
Thread A can still issue any valid system call, after it receives a
request triggered by direct reclaim and writeback it may call a system
call that does something like the following:

   Thread A (PR_SET_IO_FLUSHER)   Kernel thread B (another context)

   (call some syscall which)

   submit something to Thread B
                                  
                                  ... (do something)

                                  memory allocation with GFP_KERNEL (it
                                  may trigger direct memory reclaim
                                  again and reenter the original fs.)

                                  wake up Thread A

   wait for Thread B to complete

Normally such a system call doesn't cause any problem, since ordinary
userspace programs never run as part of the writeback or direct reclaim
path themselves. But I'm not sure it is safe when the userspace block
driver is itself handling writeback/direct reclaim for its own device.
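
To make the cross-thread dependency concrete, here is a purely
hypothetical kernel-side sketch (all names invented for illustration);
the point is only that PF_MEMALLOC_NOIO is a per-task flag of Thread A
and does not follow the request once it is handed off to thread B:

#include <linux/completion.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

static DECLARE_COMPLETION(b_done);

/* Runs in kernel thread B (a workqueue worker), so Thread A's
 * PF_MEMALLOC_NOIO does NOT apply to this allocation. */
static void thread_b_work_fn(struct work_struct *work)
{
        /* GFP_KERNEL may enter direct reclaim and issue writeback to
         * the userspace-backed block device, whose server (Thread A)
         * is parked in wait_for_completion() below -> potential
         * deadlock. */
        void *buf = kmalloc(4096, GFP_KERNEL);

        kfree(buf);
        complete(&b_done);
}

static DECLARE_WORK(thread_b_work, thread_b_work_fn);

/* Reached from a syscall issued by Thread A (PR_SET_IO_FLUSHER set). */
static int syscall_path_from_thread_a(void)
{
        queue_work(system_wq, &thread_b_work);
        wait_for_completion(&b_done);   /* Thread A waits on thread B */
        return 0;
}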

> 
> If the kernel-side driver for the user block device driver does any
> allocation that does not have GFP_NOIO, or cause any such allocation
> (e.g. within a workqueue it is waiting for), then that is a kernel bug.
> Block device drivers are not supposed to ever do a memory allocation in
> the IO hot path without GFP_NOIO.

Yes, all in-kernel driver implementations need to be audited so that
they use GFP_NOIO for allocations in the I/O path, but userspace
programs are allowed to call any system call, and such a system call
can depend on another process context which can do __GFP_FS
allocations again.
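
To contrast, an in-kernel driver can audit and scope its own
allocations, roughly as in the sketch below (a generic illustration,
not code from any particular driver), but it cannot impose the same
discipline on every other context that a userspace driver's syscalls
may end up depending on:

#include <linux/sched/mm.h>
#include <linux/slab.h>

/* Allocation helper for an in-kernel driver's I/O path: everything
 * inside the memalloc_noio_save()/restore() window is implicitly
 * GFP_NOIO, even allocations in callees that pass GFP_KERNEL. */
static void *io_path_alloc(size_t len)
{
        unsigned int noio_flags;
        void *p;

        noio_flags = memalloc_noio_save();
        p = kmalloc(len, GFP_KERNEL);   /* behaves as GFP_NOIO here */
        memalloc_noio_restore(noio_flags);

        return p;
}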

Thanks,
Gao Xiang

> 
> > 
> > Thanks,
> > Gao Xiang
> > 
> >>
> >>
> >> -- 
> >> Damien Le Moal
> >> Western Digital Research
> 
> 
> -- 
> Damien Le Moal
> Western Digital Research

Thread overview: 54+ messages
2022-02-21 19:59 [LSF/MM/BPF TOPIC] block drivers in user space Gabriel Krisman Bertazi
2022-02-21 23:16 ` Damien Le Moal
2022-02-21 23:30   ` Gabriel Krisman Bertazi
2022-02-22  6:57 ` Hannes Reinecke
2022-02-22 14:46   ` Sagi Grimberg
2022-02-22 17:46     ` Hannes Reinecke
2022-02-22 18:05     ` Gabriel Krisman Bertazi
2022-02-24  9:37       ` Xiaoguang Wang
2022-02-24 10:12       ` Sagi Grimberg
2022-03-01 23:24         ` Khazhy Kumykov
2022-03-02 16:16         ` Mike Christie
2022-03-13 21:15           ` Sagi Grimberg
2022-03-14 17:12             ` Mike Christie
2022-03-15  8:03               ` Sagi Grimberg
2022-03-14 19:21             ` Bart Van Assche
2022-03-15  6:52               ` Hannes Reinecke
2022-03-15  8:08                 ` Sagi Grimberg
2022-03-15  8:12                   ` Christoph Hellwig
2022-03-15  8:38                     ` Sagi Grimberg
2022-03-15  8:42                       ` Christoph Hellwig
2022-03-23 19:42                       ` Gabriel Krisman Bertazi
2022-03-24 17:05                         ` Sagi Grimberg
2022-03-15  8:04               ` Sagi Grimberg
2022-02-22 18:05   ` Bart Van Assche
2022-03-02 23:04   ` Gabriel Krisman Bertazi
2022-03-03  7:17     ` Hannes Reinecke
2022-03-27 16:35   ` Ming Lei
2022-03-28  5:47     ` Kanchan Joshi
2022-03-28  5:48     ` Hannes Reinecke
2022-03-28 20:20     ` Gabriel Krisman Bertazi
2022-03-29  0:30       ` Ming Lei
2022-03-29 17:20         ` Gabriel Krisman Bertazi
2022-03-30  1:55           ` Ming Lei
2022-03-30 18:22             ` Gabriel Krisman Bertazi
2022-03-31  1:38               ` Ming Lei
2022-03-31  3:49                 ` Bart Van Assche
2022-04-08  6:52     ` Xiaoguang Wang
2022-04-08  7:44       ` Ming Lei
2022-02-23  5:57 ` Gao Xiang
2022-02-23  7:46   ` Damien Le Moal
2022-02-23  8:11     ` Gao Xiang
2022-02-23 22:40       ` Damien Le Moal
2022-02-24  0:58         ` Gao Xiang [this message]
2022-06-09  2:01           ` Ming Lei
2022-06-09  2:28             ` Gao Xiang
2022-06-09  4:06               ` Ming Lei
2022-06-09  4:55                 ` Gao Xiang
2022-06-10  1:52                   ` Ming Lei
2022-07-28  8:23                 ` Pavel Machek
2022-03-02 16:52 ` Mike Christie
2022-03-03  7:09   ` Hannes Reinecke
2022-03-14 17:04     ` Mike Christie
2022-03-15  6:45       ` Hannes Reinecke
2022-03-05  7:29 ` Dongsheng Yang
