From: Jason Wang <jasowang@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: "Elena Ufimtseva" <elena.ufimtseva@oracle.com>,
	"Janosch Frank" <frankja@linux.vnet.ibm.com>,
	"mst@redhat.com" <mtsirkin@redhat.com>,
	"John G Johnson" <john.g.johnson@oracle.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	"Kirti Wankhede" <kwankhede@nvidia.com>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Yan Vugenfirer" <yan@daynix.com>,
	"Jag Raman" <jag.raman@oracle.com>,
	"Eugenio Pérez" <eperezma@redhat.com>,
	"Anup Patel" <anup@brainfault.org>,
	"Claudio Imbrenda" <imbrenda@linux.vnet.ibm.com>,
	"Christian Borntraeger" <borntraeger@de.ibm.com>,
	"Roman Kagan" <rkagan@virtuozzo.com>,
	"Felipe Franciosi" <felipe@nutanix.com>,
	"Marc-André Lureau" <marcandre.lureau@redhat.com>,
	"Jens Freimann" <jfreimann@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>,
	"Stefano Garzarella" <sgarzare@redhat.com>,
	"Eduardo Habkost" <ehabkost@redhat.com>,
	"Sergio Lopez" <slp@redhat.com>,
	"Kashyap Chamarthy" <kchamart@redhat.com>,
	"Darren Kenny" <darren.kenny@oracle.com>,
	"Alex Williamson" <alex.williamson@redhat.com>,
	"Liran Alon" <liran.alon@oracle.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Thanos Makatos" <thanos.makatos@nutanix.com>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"David Gibson" <david@gibson.dropbear.id.au>,
	"Kevin Wolf" <kwolf@redhat.com>,
	"Halil Pasic" <pasic@linux.vnet.ibm.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	"Christophe de Dinechin" <dinechin@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>, fam <fam@euphon.net>
Subject: Re: Out-of-Process Device Emulation session at KVM Forum 2020
Date: Fri, 30 Oct 2020 20:07:44 +0800	[thread overview]
Message-ID: <95432b0c-919f-3868-b3f5-fc45a1eef721@redhat.com> (raw)
In-Reply-To: <CAJSP0QXQmFgtSsJL1B3eMUr8teQc3cvvEFvr7LvnFkJPcE3ZpA@mail.gmail.com>


On 2020/10/30 7:13 PM, Stefan Hajnoczi wrote:
> On Fri, Oct 30, 2020 at 9:46 AM Jason Wang <jasowang@redhat.com> wrote:
>> On 2020/10/30 2:21 PM, Stefan Hajnoczi wrote:
>>> On Fri, Oct 30, 2020 at 3:04 AM Alex Williamson
>>> <alex.williamson@redhat.com> wrote:
>>>> It's great to revisit ideas, but proclaiming a uAPI is bad solely
>>>> because the data transfer is opaque, without defining why that's bad,
>>>> evaluating the feasibility and implementation of defining a well
>>>> specified data format rather than protocol, including cross-vendor
>>>> support, or proposing any sort of alternative is not so helpful imo.
>>> The migration approaches in VFIO and vDPA/vhost were designed for
>>> different requirements and I think this is why there are different
>>> perspectives on this. Here is a comparison and how VFIO could be
>>> extended in the future. I see 3 levels of device state compatibility:
>>>
>>> 1. The device cannot save/load state blobs, instead userspace fetches
>>> and restores specific values of the device's runtime state (e.g. last
>>> processed ring index). This is the vhost approach.
>>>
>>> 2. The device can save/load state in a standard format. This is
>>> similar to #1 except that there is a single read/write blob interface
>>> instead of fine-grained get_FOO()/set_FOO() interfaces. This approach
>>> pushes the migration state parsing into the device so that userspace
>>> doesn't need knowledge of every device type. With this approach it is
>>> possible for a device from vendor A to migrate to a device from vendor
>>> B, as long as they both implement the same standard migration format.
>>> The limitation of this approach is that vendor-specific state cannot
>>> be transferred.
>>>
>>> 3. The device can save/load opaque blobs. This is the initial VFIO
>>> approach.
>>
>> I still don't get why it must be opaque.
> If the device state format needs to be in the VMM then each device
> needs explicit enablement in each VMM (QEMU, cloud-hypervisor, etc).
>
> Let's invert the question: why does the VMM need to understand the
> device state of a _passthrough_ device?


For better manageability, compatibility and debuggability. If we depend 
on an opaque structure, aren't we encouraging each device to implement 
its own migration protocol? That would be very challenging.

For VFIO in the kernel, I suspect a uAPI that results in opaque data 
being read from or written to the guest violates the Linux uAPI 
principles. It will be very hard, or even impossible, to maintain the 
uABI. VFIO looks to me like the first subsystem trying to do this.
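
To make the contrast concrete, here is a rough sketch (hypothetical C 
structures, not an existing uAPI) of the difference between an opaque 
blob and a specified, versioned state layout that a VMM or management 
tool could actually validate:

#include <linux/types.h>

/* Opaque model: the VMM and kernel only shuttle bytes whose meaning is
 * known to the vendor driver alone, so nothing can be validated. */
struct opaque_mig_blob {
        __u64 size;
        __u8  data[];
};

/* Specified model: every field is documented and versioned, so the VMM,
 * management tools and other vendors can parse and check it. */
struct virtio_net_mig_state {
        __u32 version;              /* bumped on incompatible changes */
        __u64 features;             /* negotiated virtio feature bits */
        __u8  mac[6];
        __u16 num_queues;
        __u16 last_avail_idx[2];    /* per-queue ring indexes, 2 queues shown */
};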


>
>>>    A device from vendor A cannot migrate to a device from
>>> vendor B because the format is incompatible. This approach works well
>>> when devices have unique guest-visible hardware interfaces so the
>>> guest wouldn't be able to handle migrating a device from vendor A to a
>>> device from vendor B anyway.
>>
>> For VFIO I guess cross vendor live migration can't succeed unless we do
>> some cheats in device/vendor id.
> Yes. I haven't looked into the details of PCI (Sub-)Device/Vendor IDs
> and how to best enable migration but I hope that can be solved. The
> simplest approach is to override the IDs and make them part of the
> guest configuration.


That would be very tricky (or would require a whitelist). E.g. the 
opaque state of the source might only match the opaque state of the 
destination by chance.
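
As a hypothetical sketch (all names made up), what I'd expect instead is 
an explicit compatibility check by the management layer before migration, 
rather than relying on two opaque blobs happening to be interpretable on 
both sides:

#include <stdbool.h>
#include <stdint.h>

struct mig_compat {
        uint16_t vendor_id;
        uint16_t device_id;
        uint32_t state_format;      /* identifies a documented state layout */
        uint32_t state_version;     /* version of that layout */
};

/* Migration is allowed only when both ends implement the same documented
 * format and the destination understands the source's version. */
static bool can_migrate(const struct mig_compat *src,
                        const struct mig_compat *dst)
{
        return src->state_format == dst->state_format &&
               dst->state_version >= src->state_version;
}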


>
>> For at least virtio, they will still go with virtio/vDPA. The advantages
>> are:
>>
>> 1) virtio/vDPA can serve kernel subsystems which VFIO can't, this is
>> very important for containers
> I'm not sure I understand this. If the kernel wants to use the device
> then it doesn't use VFIO, it runs the kernel driver instead.


The current spec is not suitable for all types of devices. We've 
received a lot of feedback that virtio (PCI) might not work very well. 
Another point is that some vendors don't want to go with the virtio 
control path. The Mellanox mlx5 vDPA driver is one example. Yes, they 
could use mlx5_en, but there are vendors that want to build a 
vendor-specific control path from scratch.
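
Roughly (an abridged paraphrase from memory, not the exact 
include/linux/vdpa.h definition), a vDPA vendor driver keeps its 
proprietary control path inside the kernel and only exposes virtio 
semantics upward through ops like these:

/* Abridged sketch of the kind of ops a vDPA vendor driver implements;
 * see include/linux/vdpa.h for the authoritative struct vdpa_config_ops. */
struct vdpa_ops_sketch {
        /* Data path setup, translated internally to vendor-specific
         * commands (e.g. mlx5 firmware commands). */
        int  (*set_vq_address)(struct vdpa_device *vdev, u16 idx,
                               u64 desc_area, u64 driver_area,
                               u64 device_area);
        void (*kick_vq)(struct vdpa_device *vdev, u16 idx);

        /* Virtio feature negotiation, status and config space, emulated
         * or mapped by the vendor driver. */
        u64  (*get_features)(struct vdpa_device *vdev);
        void (*set_status)(struct vdpa_device *vdev, u8 status);
        void (*get_config)(struct vdpa_device *vdev, unsigned int offset,
                           void *buf, unsigned int len);
};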


>
> One part I believe is missing from VFIO/mdev is attaching an mdev
> device to the kernel. That seems to be an example of the limitation
> you mentioned.


Yes, exactly.


>
>> 2) virtio/vDPA is bus independent, we can present a virtio-mmio device
>> which is based on vDPA PCI hardware for e.g microvm
> Yes. This is neat although microvm supports PCI now
> (https://www.kraxel.org/blog/2020/10/qemu-microvm-acpi/).
>
>> I'm not familiar with NVME but they should go with the same way instead
>> of depending on VFIO.
> There are pros/cons with both approaches. I'm not even sure all VIRTIO
> hardware vendors will use vDPA. Two examples:
> 1. A tiny VMM with strict security requirements. The VFIO approach is
> less complex because the VMM is much less involved with the device.


I'm not sure VFIO is more secure here: it exposes a lot of hardware 
details that vDPA tries to hide.


> 2. A vendor shipping a hardware VIRTIO PCI device as a PF - no SR-IOV,
> no software VFs, just a single instance. A passthrough PCI device is a
> much simpler way to deliver this device than vDPA + vhost + VMM
> support.


It could be simpler, but note that there's no live migration support in 
the spec, so such a device can't be live migrated. We could extend the 
spec, for sure, but there are vendors that have already implemented 
virtio plus their own vendor-specific extensions for live migration.


>
> vDPA is very useful but there are situations when the VFIO approach is
> attractive too.


Note that it's probably better to distinguish virtio from vDPA. Devices 
with a virtio-compatible control path should keep working in both 
subsystems. For the remaining vDPA devices (whose control path is not 
virtio), exposing them via VFIO doesn't help much, or is even impossible 
(e.g. the abstraction requires communication with the PF).

Thanks


>
> Stefan
>


