From: Damien Le Moal <damien.lemoal@opensource.wdc.com>
To: Gabriel Krisman Bertazi <krisman@collabora.com>,
	lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Thu, 24 Feb 2022 07:40:47 +0900	[thread overview]
Message-ID: <3702afe7-2918-42e7-110b-efa75c0b58e8@opensource.wdc.com> (raw)
In-Reply-To: <YhXsQdkOpBY2nmFG@B-P7TQMD6M-0146.local>

On 2/23/22 17:11, Gao Xiang wrote:
> On Wed, Feb 23, 2022 at 04:46:41PM +0900, Damien Le Moal wrote:
>> On 2/23/22 14:57, Gao Xiang wrote:
>>> On Mon, Feb 21, 2022 at 02:59:48PM -0500, Gabriel Krisman Bertazi wrote:
>>>> I'd like to discuss an interface to implement user space block devices,
>>>> while avoiding local network NBD solutions.  There has been recurring
>>>> interest in the topic, both from researchers [1] and from the community,
>>>> including a proposed session in LSFMM2018 [2] (though I don't think it
>>>> happened).
>>>>
>>>> I've been working on top of the Google iblock implementation to find
>>>> something upstreamable and would like to present my design and gather
>>>> feedback on some points, in particular zero-copy and overall user space
>>>> interface.
>>>>
>>>> The design I'm leaning towards uses special fds opened by the driver to
>>>> transfer data to/from the block driver, preferably through direct
>>>> splicing as much as possible, to keep data only in kernel space.  This
>>>> is because, in my use case, the driver usually only manipulates
>>>> metadata, while data is forwarded directly through the network, or
>>>> similar. It would be neat if we could leverage the existing
>>>> splice/copy_file_range syscalls so that we never need to bring
>>>> disk data to user space, if we can avoid it.  I've also experimented
>>>> with regular pipes, but I found no way around keeping a lot of pipes
>>>> open, one for each possible command 'slot'.
>>>>
>>>> [1] https://dl.acm.org/doi/10.1145/3456727.3463768
>>>> [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
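
A minimal sketch of the splice-based forwarding described above, assuming
a hypothetical per-command data fd (cmd_data_fd) exposed by the driver,
might look like this:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/*
 * Forward 'len' bytes from a network socket into a per-command data fd
 * without copying the payload through user space.  splice(2) requires a
 * pipe on one side of the transfer, so a pipe serves as the in-kernel
 * intermediary.  Error handling and short-write recovery are simplified
 * for brevity.
 */
static ssize_t forward_to_device(int sock_fd, int cmd_data_fd, size_t len)
{
	int pipefd[2];
	size_t moved = 0;

	if (pipe(pipefd) < 0)
		return -1;

	while (moved < len) {
		/* socket -> pipe: payload stays in kernel space */
		ssize_t in = splice(sock_fd, NULL, pipefd[1], NULL,
				    len - moved, SPLICE_F_MOVE);
		if (in <= 0)
			break;
		/* pipe -> per-command data fd provided by the driver */
		ssize_t out = splice(pipefd[0], NULL, cmd_data_fd, NULL,
				     (size_t)in, SPLICE_F_MOVE);
		if (out <= 0)
			break;
		moved += (size_t)out;
	}

	close(pipefd[0]);
	close(pipefd[1]);
	return (ssize_t)moved;
}

The payload moves socket -> pipe -> device fd entirely inside the kernel;
only the metadata the driver cares about ever reaches user space.
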
>>>
>>> I'm interested in this general topic too. One of our use cases is
>>> that we need to process network data to some degree: many of the
>>> protocols involved are application-layer protocols, so it seems more
>>> reasonable to handle them in userspace. Another difference is that we
>>> may have thousands of devices on a single machine, since we want to
>>> run as many containers as possible, so a block device solution seems
>>> suboptimal to us. Still, I'm interested in this topic to get more
>>> ideas.
>>>
>>> Btw, as for general userspace block device solutions, IMHO there
>>> could be deadlock issues arising from direct reclaim, writeback, and
>>> the userspace implementation, since writeback of user requests can
>>> trip back into the kernel side (even when the dependency crosses
>>> threads). I think these are somewhat hard to fix with user block
>>> device solutions. For example,
>>> https://lore.kernel.org/r/CAM1OiDPxh0B1sXkyGCSTEpdgDd196-ftzLE-ocnM8Jd2F9w7AA@mail.gmail.com
>>
>> This is already fixed with prctl() support. See:
>>
>> https://lore.kernel.org/linux-fsdevel/20191112001900.9206-1-mchristi@redhat.com/
> 
> As I mentioned above, IMHO we could add some per-task state to avoid
> the majority of such deadlock cases, but some potential dependencies
> between threads could still remain, such as using another kernel
> workqueue and waiting on it (in principle at least), since a userspace
> program can call any syscall, unlike an in-kernel driver. So I think
> generic userspace block devices still carry some risk here; please
> kindly correct me if I'm wrong.

Not sure what you mean by all this. prctl() works per process/thread,
and a context that has PR_SET_IO_FLUSHER set will have PF_MEMALLOC_NOIO
set. So for a user block device driver, setting this means that the
driver cannot reenter itself through a memory allocation, regardless of
the system call it executes (FS etc.): all memory allocations in any
syscall executed by that context will use GFP_NOIO.

If the kernel-side driver for the user block device does any allocation
that does not use GFP_NOIO, or causes any such allocation (e.g. within a
workqueue it is waiting for), then that is a kernel bug. Block device
drivers are never supposed to do a memory allocation in the IO hot path
without GFP_NOIO.
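
As a rough illustration (a sketch assuming a generic user-space block
device daemon, not code from this thread), the daemon would mark itself
as an IO flusher once at startup, before servicing any requests;
PR_SET_IO_FLUSHER is available since Linux 5.6:

#include <sys/prctl.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef PR_SET_IO_FLUSHER
#define PR_SET_IO_FLUSHER 57	/* added in Linux 5.6 */
#endif

int main(void)
{
	/*
	 * Mark this task as an IO flusher: the kernel sets
	 * PF_MEMALLOC_NOIO for it, so every allocation made on its
	 * behalf, in any syscall, is implicitly GFP_NOIO and cannot
	 * recurse back into the block device via reclaim/writeback.
	 */
	if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) < 0) {
		perror("prctl(PR_SET_IO_FLUSHER)");
		return EXIT_FAILURE;
	}

	/* ... open the driver fds and start servicing block IO ... */
	return EXIT_SUCCESS;
}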

> 
> Thanks,
> Gao Xiang
> 
>>
>>
>> -- 
>> Damien Le Moal
>> Western Digital Research


-- 
Damien Le Moal
Western Digital Research

