From: Stefan Hajnoczi <stefanha@gmail.com>
To: Jason Wang <jasowang@redhat.com>
Cc: kvm@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	intel-gvt-dev@lists.freedesktop.org, kwankhede@nvidia.com,
	alex.williamson@redhat.com, mst@redhat.com, tiwei.bie@intel.com,
	virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, cohuck@redhat.com,
	maxime.coquelin@redhat.com, cunming.liang@intel.com,
	zhihong.wang@intel.com, rob.miller@broadcom.com,
	xiao.w.wang@intel.com, haotian.wang@sifive.com,
	zhenyuw@linux.intel.com, zhi.a.wang@intel.com,
	jani.nikula@linux.intel.com, joonas.lahtinen@linux.intel.com,
	rodrigo.vivi@intel.com, airlied@linux.ie, daniel@ffwll.ch,
	farman@linux.ibm.com, pasic@linux.ibm.com, sebott@linux.ibm.com,
	oberpar@linux.ibm.com, heiko.carstens@de.ibm.com,
	gor@linux.ibm.com, borntraeger@de.ibm.com,
	akrowiak@linux.ibm.com, freude@linux.ibm.com,
	lingshan.zhu@intel.com, idos@mellanox.com, eperezma@redhat.com,
	lulu@redhat.com, parav@mellanox.com,
	christophe.de.dinechin@gmail.com, kevin.tian@intel.com
Subject: Re: [PATCH V3 0/7] mdev based hardware virtio offloading support
Date: Tue, 15 Oct 2019 15:37:20 +0100	[thread overview]
Message-ID: <20191015143720.GA13108@stefanha-x1.localdomain> (raw)
In-Reply-To: <6d12ad8f-8137-e07d-d735-da59a326e8ed@redhat.com>


On Tue, Oct 15, 2019 at 11:37:17AM +0800, Jason Wang wrote:
> 
> > On 2019/10/15 1:49 AM, Stefan Hajnoczi wrote:
> > On Fri, Oct 11, 2019 at 04:15:50PM +0800, Jason Wang wrote:
> > > There is hardware that can do virtio datapath offloading while
> > > having its own control path. This patch series tries to implement an
> > > mdev based unified API to support using the kernel virtio driver to
> > > drive those devices. This is done by introducing a new mdev transport
> > > for virtio (virtio_mdev) which registers itself as a new kind of mdev
> > > driver. It then provides a unified way for the kernel virtio driver
> > > to talk with the mdev device implementation.
> > > 
> > > Though the series only contains kernel driver support, the goal is to
> > > make the transport generic enough to support userspace drivers. This
> > > means vhost-mdev[1] could be built on top as well by reusing the
> > > transport.
> > > 
> > > A sample driver is also implemented which simulates a virtio-net
> > > loopback ethernet device on top of vringh + workqueue. This could be
> > > used as a reference implementation for a real hardware driver.
> > > 
> > > Considering the mdev framework only supports VFIO devices and drivers
> > > right now, this series also extends it to support other types. This is
> > > done by introducing a class id to the device and pairing it with the
> > > id_table claimed by the driver. On top of that, this series also
> > > decouples the device specific parent ops from the common ones.
> > I was curious so I took a quick look and posted comments.
> > 
> > I guess this driver runs inside the guest since it registers virtio
> > devices?
> 
> 
> It could run in either the guest or the host, but the main focus is to run
> in the host so that we can use virtio drivers in containers.
> 
> 
> > 
> > If this is used with physical PCI devices that support datapath
> > offloading then how are physical devices presented to the guest without
> > SR-IOV?
> 
> 
> We will do control path mediation through vhost-mdev[1] and vhost-vfio[2].
> Then we will present a full virtio compatible ethernet device to the guest.
> 
> SR-IOV is not a must; any mdev device that implements the API defined in
> patch 5 can be used by this framework.
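
To illustrate the contract being described, the device ops in patch 5 are
roughly a table of callbacks along these lines (the names below are only a
paraphrase of the idea, not the exact ops or signatures in the patch):

#include <linux/types.h>

struct mdev_device;

/*
 * Paraphrased sketch of the per-device ops described in patch 5.  The
 * parent driver implements these so the virtio-mdev transport (or
 * vhost-mdev) can drive the device without knowing whether it is backed
 * by an SR-IOV VF, a whole PF, or a purely software device such as the
 * vringh loopback sample.
 */
struct virtio_mdev_ops_sketch {
	/* feature negotiation */
	u64  (*get_features)(struct mdev_device *mdev);
	int  (*set_features)(struct mdev_device *mdev, u64 features);

	/* virtqueue programming and notification */
	int  (*set_vq_address)(struct mdev_device *mdev, u16 idx,
			       u64 desc_area, u64 driver_area, u64 device_area);
	void (*set_vq_num)(struct mdev_device *mdev, u16 idx, u32 num);
	void (*kick_vq)(struct mdev_device *mdev, u16 idx);

	/* device status and config space */
	u8   (*get_status)(struct mdev_device *mdev);
	void (*set_status)(struct mdev_device *mdev, u8 status);
	void (*get_config)(struct mdev_device *mdev, unsigned int offset,
			   void *buf, unsigned int len);
};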

What I'm trying to understand is: if you want to present a virtio-pci
device to the guest (e.g. using vhost-mdev or vhost-vfio), then how is
that related to this patch series?

Does this mean this patch series is useful mostly for presenting virtio
devices to containers or the host?

Stefan



Thread overview: 30+ messages
2019-10-11  8:15 [PATCH V3 0/7] mdev based hardware virtio offloading support Jason Wang
2019-10-11  8:15 ` [PATCH V3 1/7] mdev: class id support Jason Wang
2019-10-15 10:26   ` Cornelia Huck
2019-10-15 12:12     ` Jason Wang
2019-10-15 16:38   ` Alex Williamson
2019-10-16  4:38     ` Jason Wang
2019-10-16  4:57   ` Parav Pandit
2019-10-16 10:37     ` Jason Wang
2019-10-17  8:27     ` Jason Wang
2019-10-11  8:15 ` [PATCH V3 2/7] mdev: bus uevent support Jason Wang
2019-10-15 10:27   ` Cornelia Huck
2019-10-15 12:13     ` Jason Wang
2019-10-16  5:06   ` Parav Pandit
2019-10-11  8:15 ` [PATCH V3 3/7] modpost: add support for mdev class id Jason Wang
2019-10-11  8:15 ` [PATCH V3 4/7] mdev: introduce device specific ops Jason Wang
2019-10-15 10:41   ` Cornelia Huck
2019-10-15 12:17     ` Jason Wang
2019-10-15 17:26       ` Alex Williamson
2019-10-11  8:15 ` [PATCH V3 5/7] mdev: introduce virtio device and its device ops Jason Wang
2019-10-14 17:23   ` Stefan Hajnoczi
2019-10-15  3:27     ` Jason Wang
2019-10-11  8:15 ` [PATCH V3 6/7] virtio: introduce a mdev based transport Jason Wang
2019-10-14 17:39   ` Stefan Hajnoczi
2019-10-15  3:29     ` Jason Wang
2019-10-11  8:15 ` [PATCH V3 7/7] docs: sample driver to demonstrate how to implement virtio-mdev framework Jason Wang
2019-10-14 17:49 ` [PATCH V3 0/7] mdev based hardware virtio offloading support Stefan Hajnoczi
2019-10-15  3:37   ` Jason Wang
2019-10-15 14:37     ` Stefan Hajnoczi [this message]
2019-10-17  1:42       ` Jason Wang
2019-10-17  9:43         ` Stefan Hajnoczi
