From: Keith Busch <keith.busch@intel.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
	Yang Zhong <yang.zhong@intel.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	fam@euphon.net
Subject: Re: [Qemu-devel] If Qemu support NVMe over Fabrics ?
Date: Fri, 11 Jan 2019 09:26:42 -0700	[thread overview]
Message-ID: <20190111162642.GE21095@localhost.localdomain> (raw)
In-Reply-To: <72a500b8-4681-94c5-a258-5ba8b0e8fa9b@redhat.com>

On Fri, Jan 11, 2019 at 05:07:26PM +0100, Paolo Bonzini wrote:
> On 11/01/19 16:58, Stefan Hajnoczi wrote:
> > Before investing time in doing that, what is the goal?
> > 
> > Is this for test and bring-up of NVMe-oF?  Or why does the guest need to
> > know that the storage is NVMe-oF?
> > 
> > As I mentioned before, if your host supports NVMe-oF you can simply give
> > the block device to QEMU and let the guest access it via virtio-blk,
> > virtio-scsi, NVMe, etc.
> 
> NVMe-oF is a fabrics protocol (most commonly RDMA), essentially a
> different transport for the NVMe command set and queue abstraction.
> It would allow QEMU to access the device directly (similar to what
> block/nvme.c does for PCI using VFIO) without going through the host
> kernel.  The guest would see the device as virtio-blk/scsi, NVMe, or
> anything else.
> 
> Paolo
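
For reference, the host-kernel-bypass path that block/nvme.c already
provides for PCI devices is driven roughly like this (a sketch; the PCI
address and namespace number below are placeholders, and the controller
must first be unbound from the kernel nvme driver and bound to
vfio-pci):

  # placeholder PCI address 0000:01:00.0, namespace 1
  qemu-system-x86_64 ... \
    -drive file=nvme://0000:01:00.0/1,if=none,id=nvme0 \
    -device virtio-blk-pci,drive=nvme0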

I think there are multiple ways to interpret the original request, and
I'm not sure which is the right one.

If he just wants a guest's backing storage to be an NVMe-oF target,
then Stefan's response sounds right: QEMU stays abstracted from the
underlying protocol as long as the host supports it.
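
For example (a sketch; the transport, address, NQN, and the resulting
/dev/nvme1n1 node are all placeholders), the host connects to the
target with nvme-cli and QEMU then consumes the namespace like any
other block device:

  # host: connect to the fabrics target (placeholder address/NQN)
  nvme connect -t rdma -a 192.0.2.10 -s 4420 \
      -n nqn.2019-01.org.example:testsubsys

  # hand the resulting namespace to the guest as virtio-blk
  qemu-system-x86_64 ... \
    -drive file=/dev/nvme1n1,format=raw,if=none,id=nvmeof0,cache=none \
    -device virtio-blk-pci,drive=nvmeof0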

The other way I interpreted this is that he wants the guest itself to
connect to an NVMe-oF target. If so, the guest just needs to be
provisioned with an RDMA NIC or a Fibre Channel HBA, or even a
virtio-net NIC may work if the fabric target is NVMe-over-TCP. You'd
just need a capable NVMe-oF driver stack in the guest.
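
A rough sketch of that setup (address and NQN are again placeholders;
the guest kernel needs the nvme-tcp module and nvme-cli installed):

  # host: an ordinary virtio-net NIC is enough for NVMe/TCP
  qemu-system-x86_64 ... \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0

  # guest: connect using its own NVMe-oF driver stack
  modprobe nvme-tcp
  nvme connect -t tcp -a 192.0.2.10 -s 4420 \
      -n nqn.2019-01.org.example:testsubsys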

If the request is really to implement a VFIO-based solution to bypass
the host kernel ... that sounds like an interesting project. :)


Thread overview: 7+ messages
2019-01-10  8:37 [Qemu-devel] If Qemu support NVMe over Fabrics ? Yang Zhong
2019-01-10 10:36 ` Stefan Hajnoczi
2019-01-11  5:46   ` [Qemu-devel] If Qemu support NVMe over Fabrics ? Yang Zhong
2019-01-11 10:48     ` Paolo Bonzini
2019-01-11 15:58       ` Stefan Hajnoczi
2019-01-11 16:07         ` Paolo Bonzini
2019-01-11 16:26           ` Keith Busch [this message]
