From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Kirti Wankhede <kwankhede@nvidia.com>
Cc: cohuck@redhat.com, cjia@nvidia.com, aik@ozlabs.ru,
Zhengxiao.zx@alibaba-inc.com, shuangtai.tst@alibaba-inc.com,
qemu-devel@nongnu.org, peterx@redhat.com, eauger@redhat.com,
yi.l.liu@intel.com, quintela@redhat.com, ziye.yang@intel.com,
armbru@redhat.com, mlevitsk@redhat.com, pasic@linux.ibm.com,
felipe@nutanix.com, zhi.a.wang@intel.com, kevin.tian@intel.com,
yan.y.zhao@intel.com, alex.williamson@redhat.com,
changpeng.liu@intel.com, eskultet@redhat.com, Ken.Xue@amd.com,
jonathan.davies@nutanix.com, pbonzini@redhat.com
Subject: Re: [PATCH QEMU v23 04/18] vfio: Add save and load functions for VFIO PCI devices
Date: Thu, 21 May 2020 20:28:10 +0100
Message-ID: <20200521192810.GQ2752@work-vm>
In-Reply-To: <edc6b7d1-6b85-44ac-836a-bca0fef110eb@nvidia.com>
* Kirti Wankhede (kwankhede@nvidia.com) wrote:
>
>
> On 5/21/2020 3:20 PM, Dr. David Alan Gilbert wrote:
> > * Kirti Wankhede (kwankhede@nvidia.com) wrote:
> > > These functions save and restore PCI-device-specific data, i.e. the
> > > device's config space.
> > > Save and restore were tested with MSI and MSI-X interrupt types.
> > >
> > > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > > Reviewed-by: Neo Jia <cjia@nvidia.com>
> >
> > So I'm OK with this from the migration side, but I'd like Alex to check it from
> > the PCI side of things.
> >
> > I still think that you'd be better:
> > a) Using a VMStateDescription to encode the structure
>
> Can we use VMStateDescription and SaveVMHandlers at a same time for
> migration?
Yes; one trick you can do is to call vmstate_save_state() from your save
function, passing it a vmsd. For example, virtio_pci_save_extra_state() in
hw/virtio/virtio-pci.c does exactly that.
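To make the benefit concrete, here is a toy, self-contained analogue of the
table-driven approach (none of these names are real QEMU APIs; the struct,
field table, and byte layout are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical device state, a stand-in for the real VFIOPCIDevice fields. */
struct dev_state {
    uint32_t bar0;
    uint16_t pci_cmd;
};

/* Toy field descriptor, in the spirit of QEMU's VMStateField. */
struct field {
    size_t offset;
    size_t size;            /* 2 or 4 bytes */
};

static const struct field dev_fields[] = {
    { offsetof(struct dev_state, bar0),    4 },
    { offsetof(struct dev_state, pci_cmd), 2 },
};

/*
 * Walk the descriptor table and emit each field big-endian, which is
 * roughly the service vmstate_save_state() provides: the layout lives in
 * one table instead of a sequence of hand-coded qemu_put_*() calls, so
 * adding a field later means adding one table entry.
 * Returns the number of bytes written.
 */
static size_t save_state(const struct dev_state *s, uint8_t *buf)
{
    size_t pos = 0;

    for (size_t i = 0; i < sizeof(dev_fields) / sizeof(dev_fields[0]); i++) {
        const uint8_t *p = (const uint8_t *)s + dev_fields[i].offset;
        uint32_t v;

        if (dev_fields[i].size == 4) {
            uint32_t t;
            memcpy(&t, p, sizeof(t));
            v = t;
        } else {
            uint16_t t;
            memcpy(&t, p, sizeof(t));
            v = t;
        }
        /* Emit the value most-significant byte first. */
        for (size_t b = dev_fields[i].size; b-- > 0; ) {
            buf[pos++] = (uint8_t)(v >> (8 * b));
        }
    }
    return pos;
}
```

The load side would walk the same table in reverse, which is why the two
directions can't drift apart the way paired hand-written save/load
functions can.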
> > b) or at least adding a flag at the end so you can add more data later
> >
>
> Sure, I'm thinking of this option.
Great.
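The shape I have in mind for (b) is something like the following sketch
(the flag name, bit assignment, and helpers are all made up for
illustration, not the actual stream format): the source appends a flags
word after the fixed layout, and the destination rejects bits it doesn't
understand rather than misparsing whatever follows.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical feature bit for an optional section appended later. */
#define DEV_STATE_FLAG_HAS_EXTRA  (1u << 0)

/*
 * Source side: append a big-endian u32 flags word after the fixed-layout
 * state; a future version sets a bit for each optional section that
 * follows.  Returns the new write position.
 */
static size_t put_flags(uint8_t *buf, size_t pos, uint32_t flags)
{
    buf[pos++] = (uint8_t)(flags >> 24);
    buf[pos++] = (uint8_t)(flags >> 16);
    buf[pos++] = (uint8_t)(flags >> 8);
    buf[pos++] = (uint8_t)flags;
    return pos;
}

/*
 * Destination side: fail cleanly on flag bits this binary does not know,
 * instead of silently misinterpreting the rest of the stream.
 */
static int check_flags(const uint8_t *buf, size_t pos, uint32_t known)
{
    uint32_t flags = ((uint32_t)buf[pos] << 24) |
                     ((uint32_t)buf[pos + 1] << 16) |
                     ((uint32_t)buf[pos + 2] << 8) |
                     (uint32_t)buf[pos + 3];

    return (flags & ~known) ? -1 : 0;
}
```

Even if you never set a bit, reserving the word now means an old
destination can detect (and refuse) a newer stream instead of reading
garbage.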
> > Experience with every other device shows you're shooting yourself in the
> > foot by hard-coding the layout and not giving yourself a chance to
> > expand it.
> >
> > but for now,
> >
> >
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
>
> Thanks, this version of the QEMU patches was posted without addressing
> the comments on the previous version, so that others can test the kernel
> interface for now. We need to freeze the kernel interface and get those
> patches into the v5.8 kernel.
OK, I'm more on the QEMU side, and I think I've reviewed all the patches
I understand (I've left a few deep VFIO ones for Alex). The kernel
interface looks like it fits QEMU's use here OK.
> I'll revisit all the remaining comments on the v16 and v18 QEMU patches
> and respin the series, so this is not the final version.
Great.
Dave
> Thanks,
> Kirti
>
> >
> > > ---
> > > hw/vfio/pci.c | 163 ++++++++++++++++++++++++++++++++++++++++++
> > > include/hw/vfio/vfio-common.h | 2 +
> > > 2 files changed, 165 insertions(+)
> > >
> > > diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
> > > index 0514ba373d1c..94535f0e27cd 100644
> > > --- a/hw/vfio/pci.c
> > > +++ b/hw/vfio/pci.c
> > > @@ -41,6 +41,7 @@
> > > #include "trace.h"
> > > #include "qapi/error.h"
> > > #include "migration/blocker.h"
> > > +#include "migration/qemu-file.h"
> > > #define TYPE_VFIO_PCI "vfio-pci"
> > > #define PCI_VFIO(obj) OBJECT_CHECK(VFIOPCIDevice, obj, TYPE_VFIO_PCI)
> > > @@ -1632,6 +1633,50 @@ static void vfio_bars_prepare(VFIOPCIDevice *vdev)
> > > }
> > > }
> > > +static int vfio_bar_validate(VFIOPCIDevice *vdev, int nr)
> > > +{
> > > + PCIDevice *pdev = &vdev->pdev;
> > > + VFIOBAR *bar = &vdev->bars[nr];
> > > + uint64_t addr;
> > > + uint32_t addr_lo, addr_hi = 0;
> > > +
> > > + /* Skip unimplemented BARs and the upper half of 64-bit BARs. */
> > > + if (!bar->size) {
> > > + return 0;
> > > + }
> > > +
> > > + addr_lo = pci_default_read_config(pdev, PCI_BASE_ADDRESS_0 + nr * 4, 4);
> > > +
> > > + addr_lo &= (bar->ioport ? PCI_BASE_ADDRESS_IO_MASK :
> > > + PCI_BASE_ADDRESS_MEM_MASK);
> > > + if (bar->type == PCI_BASE_ADDRESS_MEM_TYPE_64) {
> > > + addr_hi = pci_default_read_config(pdev,
> > > + PCI_BASE_ADDRESS_0 + (nr + 1) * 4, 4);
> > > + }
> > > +
> > > + addr = ((uint64_t)addr_hi << 32) | addr_lo;
> > > +
> > > + if (!QEMU_IS_ALIGNED(addr, bar->size)) {
> > > + return -EINVAL;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int vfio_bars_validate(VFIOPCIDevice *vdev)
> > > +{
> > > + int i, ret;
> > > +
> > > + for (i = 0; i < PCI_ROM_SLOT; i++) {
> > > + ret = vfio_bar_validate(vdev, i);
> > > + if (ret) {
> > > + error_report("vfio: BAR %d address validation failed", i);
> > > + return ret;
> > > + }
> > > + }
> > > + return 0;
> > > +}
> > > +
> > > static void vfio_bar_register(VFIOPCIDevice *vdev, int nr)
> > > {
> > > VFIOBAR *bar = &vdev->bars[nr];
> > > @@ -2414,11 +2459,129 @@ static Object *vfio_pci_get_object(VFIODevice *vbasedev)
> > > return OBJECT(vdev);
> > > }
> > > +static void vfio_pci_save_config(VFIODevice *vbasedev, QEMUFile *f)
> > > +{
> > > + VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev);
> > > + PCIDevice *pdev = &vdev->pdev;
> > > + uint16_t pci_cmd;
> > > + int i;
> > > +
> > > + for (i = 0; i < PCI_ROM_SLOT; i++) {
> > > + uint32_t bar;
> > > +
> > > + bar = pci_default_read_config(pdev, PCI_BASE_ADDRESS_0 + i * 4, 4);
> > > + qemu_put_be32(f, bar);
> > > + }
> > > +
> > > + qemu_put_be32(f, vdev->interrupt);
> > > + if (vdev->interrupt == VFIO_INT_MSI) {
> > > + uint32_t msi_flags, msi_addr_lo, msi_addr_hi = 0, msi_data;
> > > + bool msi_64bit;
> > > +
> > > + msi_flags = pci_default_read_config(pdev, pdev->msi_cap + PCI_MSI_FLAGS,
> > > + 2);
> > > + msi_64bit = (msi_flags & PCI_MSI_FLAGS_64BIT);
> > > +
> > > + msi_addr_lo = pci_default_read_config(pdev,
> > > + pdev->msi_cap + PCI_MSI_ADDRESS_LO, 4);
> > > + qemu_put_be32(f, msi_addr_lo);
> > > +
> > > + if (msi_64bit) {
> > > + msi_addr_hi = pci_default_read_config(pdev,
> > > + pdev->msi_cap + PCI_MSI_ADDRESS_HI,
> > > + 4);
> > > + }
> > > + qemu_put_be32(f, msi_addr_hi);
> > > +
> > > + msi_data = pci_default_read_config(pdev,
> > > + pdev->msi_cap + (msi_64bit ? PCI_MSI_DATA_64 : PCI_MSI_DATA_32),
> > > + 2);
> > > + qemu_put_be16(f, msi_data);
> > > + } else if (vdev->interrupt == VFIO_INT_MSIX) {
> > > + uint16_t offset;
> > > +
> > > + /* save the MSI-X enable and mask-all bits */
> > > + offset = pci_default_read_config(pdev,
> > > + pdev->msix_cap + PCI_MSIX_FLAGS + 1, 2);
> > > + qemu_put_be16(f, offset);
> > > + msix_save(pdev, f);
> > > + }
> > > + pci_cmd = pci_default_read_config(pdev, PCI_COMMAND, 2);
> > > + qemu_put_be16(f, pci_cmd);
> > > +}
> > > +
> > > +static int vfio_pci_load_config(VFIODevice *vbasedev, QEMUFile *f)
> > > +{
> > > + VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev);
> > > + PCIDevice *pdev = &vdev->pdev;
> > > + uint32_t interrupt_type;
> > > + uint32_t msi_flags, msi_addr_lo, msi_addr_hi = 0, msi_data;
> > > + uint16_t pci_cmd;
> > > + bool msi_64bit;
> > > + int i, ret;
> > > +
> > > + /* restore PCI BAR configuration */
> > > + pci_cmd = pci_default_read_config(pdev, PCI_COMMAND, 2);
> > > + vfio_pci_write_config(pdev, PCI_COMMAND,
> > > + pci_cmd & ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY), 2);
> > > + for (i = 0; i < PCI_ROM_SLOT; i++) {
> > > + uint32_t bar = qemu_get_be32(f);
> > > +
> > > + vfio_pci_write_config(pdev, PCI_BASE_ADDRESS_0 + i * 4, bar, 4);
> > > + }
> > > +
> > > + ret = vfio_bars_validate(vdev);
> > > + if (ret) {
> > > + return ret;
> > > + }
> > > +
> > > + interrupt_type = qemu_get_be32(f);
> > > +
> > > + if (interrupt_type == VFIO_INT_MSI) {
> > > + /* restore msi configuration */
> > > + msi_flags = pci_default_read_config(pdev,
> > > + pdev->msi_cap + PCI_MSI_FLAGS, 2);
> > > + msi_64bit = (msi_flags & PCI_MSI_FLAGS_64BIT);
> > > +
> > > + vfio_pci_write_config(pdev, pdev->msi_cap + PCI_MSI_FLAGS,
> > > + msi_flags & ~PCI_MSI_FLAGS_ENABLE, 2);
> > > +
> > > + msi_addr_lo = qemu_get_be32(f);
> > > + vfio_pci_write_config(pdev, pdev->msi_cap + PCI_MSI_ADDRESS_LO,
> > > + msi_addr_lo, 4);
> > > +
> > > + msi_addr_hi = qemu_get_be32(f);
> > > + if (msi_64bit) {
> > > + vfio_pci_write_config(pdev, pdev->msi_cap + PCI_MSI_ADDRESS_HI,
> > > + msi_addr_hi, 4);
> > > + }
> > > + msi_data = qemu_get_be16(f);
> > > + vfio_pci_write_config(pdev,
> > > + pdev->msi_cap + (msi_64bit ? PCI_MSI_DATA_64 : PCI_MSI_DATA_32),
> > > + msi_data, 2);
> > > +
> > > + vfio_pci_write_config(pdev, pdev->msi_cap + PCI_MSI_FLAGS,
> > > + msi_flags | PCI_MSI_FLAGS_ENABLE, 2);
> > > + } else if (interrupt_type == VFIO_INT_MSIX) {
> > > + uint16_t offset = qemu_get_be16(f);
> > > +
> > > + /* load the MSI-X enable and mask-all bits */
> > > + vfio_pci_write_config(pdev, pdev->msix_cap + PCI_MSIX_FLAGS + 1,
> > > + offset, 2);
> > > + msix_load(pdev, f);
> > > + }
> > > + pci_cmd = qemu_get_be16(f);
> > > + vfio_pci_write_config(pdev, PCI_COMMAND, pci_cmd, 2);
> > > + return 0;
> > > +}
> > > +
> > > static VFIODeviceOps vfio_pci_ops = {
> > > .vfio_compute_needs_reset = vfio_pci_compute_needs_reset,
> > > .vfio_hot_reset_multi = vfio_pci_hot_reset_multi,
> > > .vfio_eoi = vfio_intx_eoi,
> > > .vfio_get_object = vfio_pci_get_object,
> > > + .vfio_save_config = vfio_pci_save_config,
> > > + .vfio_load_config = vfio_pci_load_config,
> > > };
> > > int vfio_populate_vga(VFIOPCIDevice *vdev, Error **errp)
> > > diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> > > index 74261feaeac9..d69a7f3ae31e 100644
> > > --- a/include/hw/vfio/vfio-common.h
> > > +++ b/include/hw/vfio/vfio-common.h
> > > @@ -120,6 +120,8 @@ struct VFIODeviceOps {
> > > int (*vfio_hot_reset_multi)(VFIODevice *vdev);
> > > void (*vfio_eoi)(VFIODevice *vdev);
> > > Object *(*vfio_get_object)(VFIODevice *vdev);
> > > + void (*vfio_save_config)(VFIODevice *vdev, QEMUFile *f);
> > > + int (*vfio_load_config)(VFIODevice *vdev, QEMUFile *f);
> > > };
> > > typedef struct VFIOGroup {
> > > --
> > > 2.7.0
> > >
> > --
> > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >
>
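As an aside, the core of the vfio_bar_validate() check quoted above boils
down to the following self-contained sketch (the mask value matches
PCI_BASE_ADDRESS_MEM_MASK from the PCI spec; BAR sizes are assumed to be
powers of two, as the spec requires):

```c
#include <assert.h>
#include <stdint.h>

#define BAR_MEM_MASK  (~0xfUL)   /* same value as PCI_BASE_ADDRESS_MEM_MASK */

/*
 * Combine the low/high config dwords of a 64-bit memory BAR and check
 * that the programmed address is naturally aligned to the BAR size,
 * which is the property vfio_bar_validate() enforces after the BARs
 * have been restored on the destination.
 */
static int bar_addr_valid(uint32_t lo, uint32_t hi, uint64_t size)
{
    uint64_t addr = ((uint64_t)hi << 32) | (lo & BAR_MEM_MASK);

    /* Power-of-two alignment test, as in QEMU_IS_ALIGNED(addr, size). */
    return size != 0 && (addr & (size - 1)) == 0;
}
```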
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 39+ messages
2020-05-20 18:24 [PATCH QEMU v23 00/18] Add migration support for VFIO devices Kirti Wankhede
2020-05-20 18:24 ` [PATCH QEMU v23 01/18] vfio: KABI for migration interface - Kernel header placeholder Kirti Wankhede
2020-05-20 18:24 ` [PATCH QEMU v23 02/18] vfio: Add function to unmap VFIO region Kirti Wankhede
2020-05-20 18:24 ` [PATCH QEMU v23 03/18] vfio: Add vfio_get_object callback to VFIODeviceOps Kirti Wankhede
2020-05-20 18:24 ` [PATCH QEMU v23 04/18] vfio: Add save and load functions for VFIO PCI devices Kirti Wankhede
2020-05-21 9:50 ` Dr. David Alan Gilbert
2020-05-21 12:12 ` Kirti Wankhede
2020-05-21 19:28 ` Dr. David Alan Gilbert [this message]
2020-05-20 18:24 ` [PATCH QEMU v23 05/18] vfio: Add migration region initialization and finalize function Kirti Wankhede
2020-05-21 9:59 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 06/18] vfio: Add VM state change handler to know state of VM Kirti Wankhede
2020-05-21 11:19 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 07/18] vfio: Add migration state change notifier Kirti Wankhede
2020-05-21 11:31 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 08/18] vfio: Register SaveVMHandlers for VFIO device Kirti Wankhede
2020-05-21 14:18 ` Dr. David Alan Gilbert
2020-05-21 18:00 ` Kirti Wankhede
2020-05-21 19:35 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 09/18] vfio: Add save state functions to SaveVMHandlers Kirti Wankhede
2020-05-21 15:37 ` Dr. David Alan Gilbert
2020-05-21 20:43 ` Peter Xu
2020-05-20 18:24 ` [PATCH QEMU v23 10/18] vfio: Add load " Kirti Wankhede
2020-05-21 15:53 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 11/18] iommu: add callback to get address limit IOMMU supports Kirti Wankhede
2020-05-21 16:13 ` Peter Xu
2020-05-20 18:24 ` [PATCH QEMU v23 12/18] memory: Set DIRTY_MEMORY_MIGRATION when IOMMU is enabled Kirti Wankhede
2020-05-21 16:20 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 13/18] vfio: Get migration capability flags for container Kirti Wankhede
2020-05-20 18:24 ` [PATCH QEMU v23 14/18] vfio: Add function to start and stop dirty pages tracking Kirti Wankhede
2020-05-21 16:50 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 15/18] vfio: Add vfio_listener_log_sync to mark dirty pages Kirti Wankhede
2020-05-21 18:52 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 16/18] vfio: Add ioctl to get dirty pages bitmap during dma unmap Kirti Wankhede
2020-05-21 19:05 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 17/18] vfio: Make vfio-pci device migration capable Kirti Wankhede
2020-05-21 19:16 ` Dr. David Alan Gilbert
2020-05-20 18:24 ` [PATCH QEMU v23 18/18] qapi: Add VFIO devices migration stats in Migration stats Kirti Wankhede
2020-05-21 19:23 ` Dr. David Alan Gilbert
2020-05-25 14:34 ` Markus Armbruster