linux-block.vger.kernel.org archive mirror
From: Ming Lei <ming.lei@redhat.com>
To: Damien Le Moal <damien.lemoal@opensource.wdc.com>,
	Gabriel Krisman Bertazi <krisman@collabora.com>,
	lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org,
	Pavel Machek <pavel@ucw.cz>,
	linux-fsdevel@vger.kernel.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Fri, 10 Jun 2022 09:52:53 +0800	[thread overview]
Message-ID: <YqKj9UqPjbYqnSii@T590> (raw)
In-Reply-To: <YqF9X0sJjeCxwxBb@B-P7TQMD6M-0146.local>

On Thu, Jun 09, 2022 at 12:55:59PM +0800, Gao Xiang wrote:
> On Thu, Jun 09, 2022 at 12:06:48PM +0800, Ming Lei wrote:
> > On Thu, Jun 09, 2022 at 10:28:02AM +0800, Gao Xiang wrote:
> > > On Thu, Jun 09, 2022 at 10:01:23AM +0800, Ming Lei wrote:
> > > > On Thu, Feb 24, 2022 at 08:58:33AM +0800, Gao Xiang wrote:
> > > > > On Thu, Feb 24, 2022 at 07:40:47AM +0900, Damien Le Moal wrote:
> > > > > > On 2/23/22 17:11, Gao Xiang wrote:
> > > > > > > On Wed, Feb 23, 2022 at 04:46:41PM +0900, Damien Le Moal wrote:
> > > > > > >> On 2/23/22 14:57, Gao Xiang wrote:
> > > > > > >>> On Mon, Feb 21, 2022 at 02:59:48PM -0500, Gabriel Krisman Bertazi wrote:
> > > > > > >>>> I'd like to discuss an interface to implement user space block devices,
> > > > > > >>>> while avoiding local network NBD solutions.  There has been reiterated
> > > > > > >>>> interest in the topic, both from researchers [1] and from the community,
> > > > > > >>>> including a proposed session in LSFMM2018 [2] (though I don't think it
> > > > > > >>>> happened).
> > > > > > >>>>
> > > > > > >>>> I've been working on top of the Google iblock implementation to find
> > > > > > >>>> something upstreamable and would like to present my design and gather
> > > > > > >>>> feedback on some points, in particular zero-copy and overall user space
> > > > > > >>>> interface.
> > > > > > >>>>
> > > > > > >>>> The design I'm tending towards uses special fds opened by the driver to
> > > > > > >>>> transfer data to/from the block driver, preferably through direct
> > > > > > >>>> splicing as much as possible, to keep data only in kernel space.  This
> > > > > > >>>> is because, in my use case, the driver usually only manipulates
> > > > > > >>>> metadata, while data is forwarded directly through the network, or
> > > > > > >>>> similar. It would be neat if we can leverage the existing
> > > > > > >>>> splice/copy_file_range syscalls such that we don't ever need to bring
> > > > > > >>>> disk data to user space, if we can avoid it.  I've also experimented
> > > > > > >>>> with regular pipes, but I found no way around keeping a lot of pipes
> > > > > > >>>> open, one for each possible command 'slot'.
> > > > > > >>>>
> > > > > > >>>> [1] https://dl.acm.org/doi/10.1145/3456727.3463768
> > > > > > >>>> [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
> > > > > > >>>
> > > > > > >>> I'm interested in this general topic too. One of our use cases is
> > > > > > >>> that we need to process network data in some degree since many
> > > > > > >>> protocols are application layer protocols so it seems more reasonable
> > > > > > >>> to process such protocols in userspace. Another difference is that
> > > > > > >>> we may have thousands of devices on a machine, since we want to run
> > > > > > >>> as many containers as possible, so the block device solution seems
> > > > > > >>> suboptimal to us. Yet I'm still interested in this topic to get more
> > > > > > >>> ideas.
> > > > > > >>>
> > > > > > >>> Btw, as for general userspace block device solutions, IMHO, there
> > > > > > >>> could be some deadlock issues out of direct reclaim, writeback, and
> > > > > > >>> the userspace implementation, since writeback of user requests can
> > > > > > >>> trip back into the kernel side (even when the dependency crosses
> > > > > > >>> threads). I think they are somewhat hard to fix with user block
> > > > > > >>> device solutions. For example,
> > > > > > >>> https://lore.kernel.org/r/CAM1OiDPxh0B1sXkyGCSTEpdgDd196-ftzLE-ocnM8Jd2F9w7AA@mail.gmail.com
> > > > > > >>
> > > > > > >> This is already fixed with prctl() support. See:
> > > > > > >>
> > > > > > >> https://lore.kernel.org/linux-fsdevel/20191112001900.9206-1-mchristi@redhat.com/
> > > > > > > 
> > > > > > > As I mentioned above, IMHO, we could add some per-task state to avoid
> > > > > > > the majority of such deadlock cases (also what I mentioned above), but
> > > > > > > some potential dependencies could still arise between threads,
> > > > > > > such as using another kernel workqueue and waiting on it (in principle
> > > > > > > at least), since a userspace program can call any syscall in principle
> > > > > > > (unlike in-kernel drivers). So I think it carries some risk due to
> > > > > > > this generic userspace block device restriction; please kindly
> > > > > > > correct me if I'm wrong.
> > > > > > 
> > > > > > Not sure what you mean by all this. prctl() works per process/thread
> > > > > > and a context that has PR_SET_IO_FLUSHER set will have PF_MEMALLOC_NOIO
> > > > > > set. So for the case of a user block device driver, setting this means
> > > > > > that it cannot reenter itself during a memory allocation, regardless of
> > > > > > the system call it executes (FS etc): all memory allocations in any
> > > > > > syscall executed by the context will have GFP_NOIO.
> > > > > 
> > > > > I mean,
> > > > > 
> > > > > assuming PR_SET_IO_FLUSHER is already set on Thread A using prctl();
> > > > > but since it can call any valid system call, then after it
> > > > > receives data due to direct reclaim and writeback, it is still
> > > > > allowed to call some system call which may do something as follows:
> > > > > 
> > > > >    Thread A (PR_SET_IO_FLUSHER)   Kernel thread B (another context)
> > > > > 
> > > > >    (call some syscall which)
> > > > > 
> > > > >    submit something to Thread B
> > > > >                                   
> > > > >                                   ... (do something)
> > > > > 
> > > > >                                   memory allocation with GFP_KERNEL (it
> > > > >                                   may trigger direct memory reclaim
> > > > >                                   again and reenter the original fs.)
> > > > > 
> > > > >                                   wake up Thread A
> > > > > 
> > > > >    wait Thread B to complete
> > > > > 
> > > > > Normally such a system call won't cause any problem, since userspace
> > > > > programs normally never run in a writeback or direct reclaim context.
> > > > > Yet I'm not sure whether that holds under userspace block driver
> > > > > writeback/direct reclaim.
> > > > 
> > > > Hi Gao Xiang,
> > > > 
> > > > I'd rather reply to you in this original thread; the recent
> > > > discussion is at the following link:
> > > > 
> > > > https://lore.kernel.org/linux-block/Yp1jRw6kiUf5jCrW@B-P7TQMD6M-0146.local/
> > > > 
> > > > kernel loop & nbd are really in the same situation.
> > > > 
> > > > For the example of kernel loop, PF_MEMALLOC_NOIO was added in commit
> > > > d0a255e795ab ("loop: set PF_MEMALLOC_NOIO for the worker thread"),
> > > > so loop's worker thread can be thought of as the above Thread A, and
> > > > of course writeback/swapout IO can reach the loop worker thread (the
> > > > above Thread A); then loop just calls into the FS from the worker
> > > > thread to handle the loop IO. That is the same as the userspace
> > > > driver's case, and the kernel 'thread B' should be in FS code.
> > > > 
> > > > Your theory might be true, but it depends on the FS's implementation,
> > > > and we don't see such reports in reality.
> > > > 
> > > > Also, you didn't mention what kernel thread B exactly is, or what
> > > > the allocation in kernel thread B is.
> > > > 
> > > > If you have an actual report, I am happy to take it into account;
> > > > otherwise I'm not sure it is worth the time/effort of thinking about
> > > > and addressing a purely theoretical concern.
> > > 
> > > Hi Ming,
> > > 
> > > Thanks for your look & reply.
> > > 
> > > That is not a wild guess. That is a basic difference between
> > > in-kernel native block-based drivers and user-space block drivers.
> > 
> > Please look at my comment: wrt. your purely theoretical concern, a
> > userspace block driver is in the same situation as kernel loop/nbd.
> 
> Hi Ming,
> 
> I don't have time to audit the potentially risky system calls, but I guess
> security folks or researchers may be interested in finding such a path.

Why do you think a system call has potential risk? Aren't syscalls designed
for userspace? Any syscall called from the userspace context is covered
by PR_SET_IO_FLUSHER, and your concern is just about Kernel thread B,
right?

If yes, let's focus on this scenario, so I posted it one more time:

>    Thread A (PR_SET_IO_FLUSHER)   Kernel thread B (another context)
> 
>    (call some syscall which)
> 
>    submit something to Thread B
>                                   
>                                   ... (do something)
> 
>                                   memory allocation with GFP_KERNEL (it
>                                   may trigger direct memory reclaim
>                                   again and reenter the original fs.)
> 
>                                   wake up Thread A
> 
>    wait Thread B to complete

You didn't mention why normal writeback IO from other contexts won't call
into this kind of kernel thread B too, so can you explain it a bit?

As I said, loop and nbd are both in the same situation. For example, with
loop, thread A is the loop worker thread with PF_MEMALLOC_NOIO, and generic
FS code (read, write, fallocate, fsync, ...) is called from the worker
thread, so there might be a so-called kernel thread B for loop. But we
don't see such reports.

Yeah, you may argue that other non-FS syscalls may be involved in a
userspace driver. But in reality, a userspace block driver should only deal
with FS and network IO most of the time, and both the network and FS code
paths have been in the normal IO code path for a long time, so your direct
reclaim concern shouldn't be a problem. Not to mention that nbd/tcmu/...
have been used for a long, long time; so far so good.

If you think it is a real risk, please find it for nbd/tcmu/dm-multipath/...
first. IMO, it isn't useful to claim such a generic concern without
further investigation and without providing any detail; the devil is always
in the details.

> 
> The big problem is, you cannot prevent people from writing such system
> calls (or ioctls) in their user daemon, since most system call (or ioctl)
> implementations assume that they are never called under the kernel memory
> direct reclaim context (even with PR_SET_IO_FLUSHER), but a userspace
> block driver can hand such a context to userspace, and user programs can
> do whatever they want in principle.
> 
> IOWs, we can audit in-kernel block drivers and fix all buggy paths with
> GFP_NOIO since the source code is already there and they should be fixed.
> 
> But you have no way to audit all user programs to ensure they only call
> system calls or random ioctls which can safely work in the direct reclaim
> context (even with PR_SET_IO_FLUSHER).
> 
> > 
> > Did you see such reports on loop & nbd? Can you answer my questions wrt.
> > kernel thread B?
> 
> I don't think it has any relationship with the in-kernel loop device,
> since the loop device I/O paths are all under control.

No, it is completely the same situation wrt. your concern; please look at
the above scenario.

> 
> > 
> > > 
> > > That is, a userspace block driver can call _any_ system call if it
> > > wants. Since users can call any system call, and any _new_ system call
> > > can be introduced later, you have to audit all system calls, "which
> > > are safe and which are _not_ safe", all the time. Otherwise, an attacker can make
> > 
> > Isn't the nbd server capable of calling any system call? Is there any
> > security risk for nbd?
> 
> Note that I wrote this email initially as a generic concern (prior to your
> ubd announcement), so it isn't related to your ubd from my POV.

OK, I guess I needn't waste time on this 'generic concern'.

> 
> > 
> > > use of it to hang the system if such a userspace driver is used widely.
> > 
> > From the beginning, only an ADMIN can create ubd; that is the same as
> > nbd/loop, and it gets the default permissions of a disk device.
> 
> loop device is different since its path can be totally controlled by the
> kernel.
> 
> > 
> > ubd is really in the same situation as nbd wrt. security; the only
> > difference is that nbd uses a socket for communication and ubd uses
> > io_uring, that is all.
> > 
> > Yeah, Stefan Hajnoczi and I discussed making ubd a container
> > block device, so a normal user can create & use ubd, but it won't be
> > done from the beginning, and won't be enabled until the potential
> > security risks are addressed; and there should be more limits on ubd
> > when a normal user can create & use it, such as:
> > 
> > - not allow an unprivileged ubd device to be mounted
> > - not allow an unprivileged ubd device's partition table to be read by
> >   the kernel
> > - not support buffered io for an unprivileged ubd device; only direct io
> >   is allowed
> 
> How could you do that? I think it needs wide modifications to mm/fs.
> And how about mmap I/O?

First, mount isn't allowed; then we can deal with mmap on def_blk_fops and
only allow open with O_DIRECT.

> 
> > - maybe more limits for minimizing security risk.
> > 
> > > 
> > > IOWs, in my humble opinion, that is quite a fundamental security
> > > concern for all userspace block drivers.
> > 
> > But nbd is still there and widely used, and there are lots of people
> > who show interest in userspace block devices. Then think about who is
> > wrong.
> > 
> > As one userspace block driver, it is normal to see some limits there,
> > but I don't agree that there is a fundamental security issue.
> 
> That depends. If you consider it a real security issue that a path to
> trigger this could be reported publicly after it's widely used, that is
> fine.

But nbd/tcmu are widely used already...

> 
> > 
> > > 
> > > Actually, you cannot ignore block I/O requests if they actually push
> > 
> > Who wants to ignore block I/O? And why ignore it?
> 
> I don't know how to express that properly. Sorry for my bad English.
> 
> For example, a userspace FS implementation can ignore any fs operations
> triggered under direct reclaim.
> 
> But if you run a userspace block driver under a random fs, it will
> just send data & metadata I/O to your driver unconditionally. I think
> it is too late then to avoid such deadlocks.

What is the deadlock? Is it triggered by your kernel thread B scenario?

> 
> > 
> > > into the block layer, since it is too late if the I/O is actually
> > > submitted by some FS. And you don't even know what type such I/O is.
> > 
> > We do know the I/O type.
> 
> 1) you don't know whether it is meta or data I/O. I know there is
>    REQ_META, but that is not a strict mark.
> 
> 2) even if you know an I/O is under direct reclaim, how do you deal with
>    it? Just send it to userspace unconditionally?

No block driver cares about REQ_META, so why would it be special for a
userspace block driver?


Thanks,
Ming


