linux-block.vger.kernel.org archive mirror
From: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
To: Long Li <longli@microsoft.com>
Cc: Bart Van Assche <bvanassche@acm.org>,
	Christoph Hellwig <hch@infradead.org>,
	"longli@linuxonhyperv.com" <longli@linuxonhyperv.com>,
	"linux-fs@vger.kernel.org" <linux-fs@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>
Subject: Re: [Patch v4 0/3] Introduce a driver to support host accelerated access to Microsoft Azure Blob
Date: Tue, 20 Jul 2021 20:16:03 +0200	[thread overview]
Message-ID: <YPcS41v9x6+VlQXt@kroah.com> (raw)
In-Reply-To: <BY5PR21MB1506822C71ED70366E1B1BCBCEE29@BY5PR21MB1506.namprd21.prod.outlook.com>

On Tue, Jul 20, 2021 at 05:33:47PM +0000, Long Li wrote:
> > Subject: Re: [Patch v4 0/3] Introduce a driver to support host accelerated
> > access to Microsoft Azure Blob
> > 
> > On 7/20/21 12:05 AM, Long Li wrote:
> > >> Subject: Re: [Patch v4 0/3] Introduce a driver to support host
> > >> accelerated access to Microsoft Azure Blob
> > >>
> > >> On Mon, Jul 19, 2021 at 09:37:56PM -0700, Bart Van Assche wrote:
> > >>> such that this object storage driver can be implemented as a
> > >>> user-space library instead of as a kernel driver? As you may know,
> > >>> vfio users can use either eventfds or polling for completion notifications.
> > >>> An interface like io_uring can be built easily on top of vfio.
> > >>
> > >> Yes.  Similar to say the NVMe K/V command set this does not look like
> > >> a candidate for a kernel driver.
> > >
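For concreteness, a minimal user-space sketch of the eventfd completion model
mentioned above (an illustration only, not code from this patchset; in a real
VFIO setup the fd would be registered with the kernel via the
VFIO_DEVICE_SET_IRQS ioctl rather than signaled locally as done here):

#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
	eventfd_t completions;
	int efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return 1;

	/*
	 * In a real VFIO setup this fd would be handed to the kernel
	 * (VFIO_DEVICE_SET_IRQS) so the device can signal completions.
	 * Here we simulate one completion from the same process.
	 */
	eventfd_write(efd, 1);

	/* Blocks until the counter is non-zero, then resets it to 0. */
	if (eventfd_read(efd, &completions) == 0)
		printf("%llu completion(s)\n",
		       (unsigned long long)completions);

	close(efd);
	return 0;
}

An io_uring-style submission/completion interface can then be layered on top:
user space submits requests and either blocks on the eventfd or polls for
completions, without any per-request system call into a custom driver.
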
> > > The driver is designed to support multiple processes/users over a single VMBUS
> > > channel. I don't see how this could be implemented through VFIO.
> > >
> > > Even if it could be done, it would expose a security risk, since the same VMBUS
> > > channel would be shared by multiple user-mode processes.
> > 
> > Sharing a VMBUS channel among processes is not necessary. I propose to
> > assign one VMBUS channel to each process and to multiplex I/O submitted to
> > channels associated with the same blob storage object inside, e.g., the
> > hypervisor. This is not a new idea: the NVMe specification includes a diagram
> > showing multiple NVMe controllers providing access to the same NVMe namespace;
> > see "Figure 416: NVM Subsystem with Three I/O Controllers" in version 1.4 of
> > the NVMe specification.
> > 
> > Bart.
> 
> Currently, Hyper-V is not designed to offer one VMBUS channel per process.

So it's a slow interface :(

> In Hyper-V, a channel is offered from the host to the guest VM. The host doesn't
> know in advance how many processes are going to use this service, so it can't
> offer those channels in advance. There is no mechanism to offer dynamically
> allocated per-process channels based on guest needs. Some devices (e.g.
> network and storage) use multiple channels for scalability, but those channels
> do not serve individual processes.
> 
> Assigning one VMBUS channel per process would require significant changes on the Hyper-V side.
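
To illustrate the binding model described above, here is a minimal,
hypothetical VMBus driver skeleton against the standard <linux/hyperv.h> API;
the GUID and all names are made up. The point is that the guest can only match
and open channels the host has already offered (by class GUID); there is no
call by which a guest process could request a fresh channel:

/*
 * Hypothetical VMBus driver skeleton (not from this patchset).
 * The host offers channels; the guest matches them by class GUID
 * in id_table and opens what it was given via vmbus_open().
 */
#include <linux/hyperv.h>
#include <linux/module.h>

/* Made-up class GUID standing in for a host-offered device. */
static const struct hv_vmbus_device_id id_table[] = {
	{ .guid = GUID_INIT(0x11223344, 0x5566, 0x7788, 0x99, 0xaa,
			    0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x00) },
	{ },
};
MODULE_DEVICE_TABLE(vmbus, id_table);

static void chan_callback(void *context)
{
	/* Ring-buffer completion handling would go here. */
}

static int sketch_probe(struct hv_device *dev,
			const struct hv_vmbus_device_id *dev_id)
{
	/* Open the one channel the host offered: 4-page send and
	 * receive ring buffers, no extra user data. */
	return vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE,
			  NULL, 0, chan_callback, dev);
}

static int sketch_remove(struct hv_device *dev)
{
	vmbus_close(dev->channel);
	return 0;
}

static struct hv_driver sketch_drv = {
	.name = "vmbus_sketch",
	.id_table = id_table,
	.probe = sketch_probe,
	.remove = sketch_remove,
};

static int __init sketch_init(void)
{
	return vmbus_driver_register(&sketch_drv);
}

static void __exit sketch_exit(void)
{
	vmbus_driver_unregister(&sketch_drv);
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");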

What is the throughput of a single channel as-is?  You provided no
benchmarks or numbers at all in this patchset that would justify this
new kernel driver :(

thanks,

greg k-h

Thread overview: 23+ messages
2021-07-20  3:31 [Patch v4 0/3] Introduce a driver to support host accelerated access to Microsoft Azure Blob longli
2021-07-20  3:31 ` [Patch v4 1/3] Drivers: hv: vmbus: add support to ignore certain PCIE devices longli
2021-07-20  3:31 ` [Patch v4 2/3] Drivers: hv: add Azure Blob driver longli
2021-07-20  5:26   ` Greg Kroah-Hartman
2021-07-20  5:30   ` Greg Kroah-Hartman
2021-07-20  7:34   ` Greg Kroah-Hartman
2021-07-20 19:57     ` Long Li
2021-07-21  5:08       ` Greg Kroah-Hartman
2021-07-20 11:10   ` Jiri Slaby
2021-07-20 22:12     ` Long Li
2021-07-21  4:57       ` Jiri Slaby
2021-07-21 16:07         ` Long Li
2021-07-20  3:31 ` [Patch v4 3/3] Drivers: hv: Add to maintainer for Hyper-V/Azure drivers longli
2021-07-20  4:37 ` [Patch v4 0/3] Introduce a driver to support host accelerated access to Microsoft Azure Blob Bart Van Assche
2021-07-20  6:01   ` Christoph Hellwig
2021-07-20  7:05     ` Long Li
2021-07-20 15:15       ` Bart Van Assche
2021-07-20 17:33         ` Long Li
2021-07-20 18:16           ` gregkh [this message]
2021-07-20 18:52             ` Long Li
2021-07-20 15:54 ` Greg KH
2021-07-20 18:37   ` Long Li
2021-07-21  5:18     ` Greg KH
