From: Stefan Hajnoczi <stefanha@gmail.com>
To: "Eugenio Pérez" <eperezma@redhat.com>
Cc: qemu-devel@nongnu.org, Lars Ganrot <lars.ganrot@gmail.com>,
	virtualization@lists.linux-foundation.org,
	Salil Mehta <mehta.salil.lnk@gmail.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Liran Alon <liralon@gmail.com>,
	Rob Miller <rob.miller@broadcom.com>,
	Max Gurtovoy <maxgu14@gmail.com>,
	Alex Barba <alex.barba@broadcom.com>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Jim Harford <jim.harford@broadcom.com>,
	Jason Wang <jasowang@redhat.com>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Christophe Fontaine <cfontain@redhat.com>,
	vm <vmireyno@marvell.com>, Daniel Daly <dandaly0@gmail.com>,
	Michael Lilja <ml@napatech.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Nitin Shrivastav <nitin.shrivastav@broadcom.com>,
	Lee Ballard <ballle98@gmail.com>,
	Dmytro Kazantsev <dmytro.kazantsev@gmail.com>,
	Juan Quintela <quintela@redhat.com>,
	kvm@vger.kernel.org, Howard Cai <howard.cai@gmail.com>,
	Xiao W Wang <xiao.w.wang@intel.com>,
	Sean Mooney <smooney@redhat.com>,
	Parav Pandit <parav@mellanox.com>, Eli Cohen <eli@mellanox.com>,
	Siwei Liu <loseweigh@gmail.com>,
	Stephen Finucane <stephenfin@redhat.com>
Subject: Re: [RFC PATCH 07/27] vhost: Route guest->host notification through qemu
Date: Mon, 7 Dec 2020 17:42:33 +0000	[thread overview]
Message-ID: <20201207174233.GN203660@stefanha-x1.localdomain> (raw)
In-Reply-To: <20201120185105.279030-8-eperezma@redhat.com>

On Fri, Nov 20, 2020 at 07:50:45PM +0100, Eugenio Pérez wrote:
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  hw/virtio/vhost-sw-lm-ring.h |  26 +++++++++
>  include/hw/virtio/vhost.h    |   3 ++
>  hw/virtio/vhost-sw-lm-ring.c |  60 +++++++++++++++++++++
>  hw/virtio/vhost.c            | 100 +++++++++++++++++++++++++++++++++--
>  hw/virtio/meson.build        |   2 +-
>  5 files changed, 187 insertions(+), 4 deletions(-)
>  create mode 100644 hw/virtio/vhost-sw-lm-ring.h
>  create mode 100644 hw/virtio/vhost-sw-lm-ring.c
> 
> diff --git a/hw/virtio/vhost-sw-lm-ring.h b/hw/virtio/vhost-sw-lm-ring.h
> new file mode 100644
> index 0000000000..86dc081b93
> --- /dev/null
> +++ b/hw/virtio/vhost-sw-lm-ring.h
> @@ -0,0 +1,26 @@
> +/*
> + * vhost software live migration ring
> + *
> + * SPDX-FileCopyrightText: Red Hat, Inc. 2020
> + * SPDX-FileContributor: Author: Eugenio Pérez <eperezma@redhat.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#ifndef VHOST_SW_LM_RING_H
> +#define VHOST_SW_LM_RING_H
> +
> +#include "qemu/osdep.h"
> +
> +#include "hw/virtio/virtio.h"
> +#include "hw/virtio/vhost.h"
> +
> +typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;

Here it's called a shadow virtqueue while the file calls it a
sw-lm-ring. Please use a single name.

> +
> +bool vhost_vring_kick(VhostShadowVirtqueue *vq);

vhost_shadow_vq_kick()?

> +
> +VhostShadowVirtqueue *vhost_sw_lm_shadow_vq(struct vhost_dev *dev, int idx);

vhost_dev_get_shadow_vq()? This could be in include/hw/virtio/vhost.h
with the other vhost_dev_*() functions.

> +
> +void vhost_sw_lm_shadow_vq_free(VhostShadowVirtqueue *vq);

Hmm...now I wonder what the lifecycle is. Does vhost_sw_lm_shadow_vq()
allocate it?

Please add doc comments explaining these functions either in this header
file or in vhost-sw-lm-ring.c.
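
For example, something along these lines (just a sketch, using the shadow-vq
naming suggested above; the exact names and wording are of course up to you):

    /*
     * vhost_shadow_vq_new: allocate a shadow virtqueue for queue @idx and
     * redirect the device's kick fd to an internal event notifier. The
     * caller owns the returned object and must release it with
     * vhost_shadow_vq_free().
     */
    VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx);

    /*
     * vhost_shadow_vq_kick: notify the vhost device unless the guest has
     * suppressed notifications for this virtqueue.
     */
    void vhost_shadow_vq_kick(VhostShadowVirtqueue *vq);

    /* vhost_shadow_vq_free: clean up the event notifier and free @vq. */
    void vhost_shadow_vq_free(VhostShadowVirtqueue *vq);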

> +
> +#endif
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index b5b7496537..93cc3f1ae3 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -54,6 +54,8 @@ struct vhost_iommu {
>      QLIST_ENTRY(vhost_iommu) iommu_next;
>  };
>  
> +typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
> +
>  typedef struct VhostDevConfigOps {
>      /* Vhost device config space changed callback
>       */
> @@ -83,6 +85,7 @@ struct vhost_dev {
>      bool started;
>      bool log_enabled;
>      uint64_t log_size;
> +    VhostShadowVirtqueue *sw_lm_shadow_vq[2];

The hardcoded 2 is probably for single-queue virtio-net? I guess this
will eventually become VhostShadowVirtqueue *shadow_vqs or
VhostShadowVirtqueue **shadow_vqs, depending on whether each one should
be allocated individually.
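
For instance, something like this at init time (rough sketch; shadow_vqs is
the hypothetical renamed field, sized from nvqs instead of hardcoded):

    /* in struct vhost_dev: one entry per virtqueue */
    VhostShadowVirtqueue **shadow_vqs;

    /* allocation when switching to software live migration */
    dev->shadow_vqs = g_new0(VhostShadowVirtqueue *, dev->nvqs);
    for (idx = 0; idx < dev->nvqs; ++idx) {
        dev->shadow_vqs[idx] = vhost_sw_lm_shadow_vq(dev, idx);
    }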

>      VirtIOHandleOutput sw_lm_vq_handler;
>      Error *migration_blocker;
>      const VhostOps *vhost_ops;
> diff --git a/hw/virtio/vhost-sw-lm-ring.c b/hw/virtio/vhost-sw-lm-ring.c
> new file mode 100644
> index 0000000000..0192e77831
> --- /dev/null
> +++ b/hw/virtio/vhost-sw-lm-ring.c
> @@ -0,0 +1,60 @@
> +/*
> + * vhost software live migration ring
> + *
> + * SPDX-FileCopyrightText: Red Hat, Inc. 2020
> + * SPDX-FileContributor: Author: Eugenio Pérez <eperezma@redhat.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#include "hw/virtio/vhost-sw-lm-ring.h"
> +#include "hw/virtio/vhost.h"
> +
> +#include "standard-headers/linux/vhost_types.h"
> +#include "standard-headers/linux/virtio_ring.h"
> +
> +#include "qemu/event_notifier.h"
> +
> +typedef struct VhostShadowVirtqueue {
> +    EventNotifier hdev_notifier;
> +    VirtQueue *vq;
> +} VhostShadowVirtqueue;
> +
> +static inline bool vhost_vring_should_kick(VhostShadowVirtqueue *vq)
> +{
> +    return virtio_queue_get_used_notify_split(vq->vq);
> +}
> +
> +bool vhost_vring_kick(VhostShadowVirtqueue *vq)
> +{
> +    return vhost_vring_should_kick(vq) ? event_notifier_set(&vq->hdev_notifier)
> +                                       : true;
> +}

How is the return value used? event_notifier_set() returns 0 on success and
-errno on failure, so this function returns false on success and true either
when notifications are disabled or when event_notifier_set() failed. I'm not
sure this return value can be used for anything.
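
If callers don't act on the result, maybe just drop it (untested sketch):

    void vhost_shadow_vq_kick(VhostShadowVirtqueue *vq)
    {
        if (vhost_vring_should_kick(vq)) {
            event_notifier_set(&vq->hdev_notifier);
        }
    }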

> +
> +VhostShadowVirtqueue *vhost_sw_lm_shadow_vq(struct vhost_dev *dev, int idx)

I see now that this function allocates the VhostShadowVirtqueue. Maybe
adding _new() to the name would make that clear?

> +{
> +    struct vhost_vring_file file = {
> +        .index = idx
> +    };
> +    VirtQueue *vq = virtio_get_queue(dev->vdev, idx);
> +    VhostShadowVirtqueue *svq;
> +    int r;
> +
> +    svq = g_new0(VhostShadowVirtqueue, 1);
> +    svq->vq = vq;
> +
> +    r = event_notifier_init(&svq->hdev_notifier, 0);
> +    assert(r == 0);
> +
> +    file.fd = event_notifier_get_fd(&svq->hdev_notifier);
> +    r = dev->vhost_ops->vhost_set_vring_kick(dev, &file);
> +    assert(r == 0);
> +
> +    return svq;
> +}

I guess there are assumptions about the status of the device? Does the
virtqueue need to be disabled when this function is called?

> +
> +void vhost_sw_lm_shadow_vq_free(VhostShadowVirtqueue *vq)
> +{
> +    event_notifier_cleanup(&vq->hdev_notifier);
> +    g_free(vq);
> +}
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 9cbd52a7f1..a55b684b5f 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -13,6 +13,8 @@
>   * GNU GPL, version 2 or (at your option) any later version.
>   */
>  
> +#include "hw/virtio/vhost-sw-lm-ring.h"
> +
>  #include "qemu/osdep.h"
>  #include "qapi/error.h"
>  #include "hw/virtio/vhost.h"
> @@ -61,6 +63,20 @@ bool vhost_has_free_slot(void)
>      return slots_limit > used_memslots;
>  }
>  
> +static struct vhost_dev *vhost_dev_from_virtio(const VirtIODevice *vdev)
> +{
> +    struct vhost_dev *hdev;
> +
> +    QLIST_FOREACH(hdev, &vhost_devices, entry) {
> +        if (hdev->vdev == vdev) {
> +            return hdev;
> +        }
> +    }
> +
> +    assert(hdev);
> +    return NULL;
> +}
> +
>  static bool vhost_dev_can_log(const struct vhost_dev *hdev)
>  {
>      return hdev->features & (0x1ULL << VHOST_F_LOG_ALL);
> @@ -148,6 +164,12 @@ static int vhost_sync_dirty_bitmap(struct vhost_dev *dev,
>      return 0;
>  }
>  
> +static void vhost_log_sync_nop(MemoryListener *listener,
> +                               MemoryRegionSection *section)
> +{
> +    return;
> +}
> +
>  static void vhost_log_sync(MemoryListener *listener,
>                            MemoryRegionSection *section)
>  {
> @@ -928,6 +950,71 @@ static void vhost_log_global_stop(MemoryListener *listener)
>      }
>  }
>  
> +static void handle_sw_lm_vq(VirtIODevice *vdev, VirtQueue *vq)
> +{
> +    struct vhost_dev *hdev = vhost_dev_from_virtio(vdev);

If this lookup becomes a performance bottleneck, there are other options
for determining the vhost_dev. For example, VirtIODevice could have a
field for stashing the vhost_dev pointer.
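
Not something this RFC needs to solve, but as a sketch of what I mean (the
vhost_dev field in VirtIODevice is made up):

    /* set once when the vhost device starts, cleared when it stops */
    vdev->vhost_dev = hdev;

    static void handle_sw_lm_vq(VirtIODevice *vdev, VirtQueue *vq)
    {
        struct vhost_dev *hdev = vdev->vhost_dev; /* O(1), no list walk */
        ...
    }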

> +    uint16_t idx = virtio_get_queue_index(vq);
> +
> +    VhostShadowVirtqueue *svq = hdev->sw_lm_shadow_vq[idx];
> +
> +    vhost_vring_kick(svq);
> +}

I'm confused. Do we need to pop elements from vq and push equivalent
elements onto svq before kicking? Either a TODO comment is missing or I
misunderstand how this works.
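
In other words I expected the handler to do something along these lines
before kicking (very rough sketch; vhost_shadow_vq_add() is a hypothetical
helper that copies the element's descriptors into the shadow vring):

    VirtQueueElement *elem;

    while ((elem = virtqueue_pop(vq, sizeof(*elem)))) {
        vhost_shadow_vq_add(svq, elem);
    }
    vhost_vring_kick(svq);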

> +
> +static int vhost_sw_live_migration_stop(struct vhost_dev *dev)
> +{
> +    int idx;
> +
> +    vhost_dev_enable_notifiers(dev, dev->vdev);
> +    for (idx = 0; idx < dev->nvqs; ++idx) {
> +        vhost_sw_lm_shadow_vq_free(dev->sw_lm_shadow_vq[idx]);
> +    }
> +
> +    return 0;
> +}
> +
> +static int vhost_sw_live_migration_start(struct vhost_dev *dev)
> +{
> +    int idx;
> +
> +    for (idx = 0; idx < dev->nvqs; ++idx) {
> +        dev->sw_lm_shadow_vq[idx] = vhost_sw_lm_shadow_vq(dev, idx);
> +    }
> +
> +    vhost_dev_disable_notifiers(dev, dev->vdev);

There is a race condition if the guest kicks the vq while this is
happening. The shadow vq hdev_notifier needs to be set so the vhost
device checks the virtqueue for requests that slipped in during the
race window.
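
For example, something like this after the switch-over should close the
window (sketch only; poking hdev_notifier directly is just for illustration,
an unconditional kick helper in vhost-sw-lm-ring.c would be cleaner):

    vhost_dev_disable_notifiers(dev, dev->vdev);

    /* pick up any guest kicks that raced with the switch-over */
    for (idx = 0; idx < dev->nvqs; ++idx) {
        event_notifier_set(&dev->sw_lm_shadow_vq[idx]->hdev_notifier);
    }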

> +
> +    return 0;
> +}
> +
> +static int vhost_sw_live_migration_enable(struct vhost_dev *dev,
> +                                          bool enable_lm)
> +{
> +    if (enable_lm) {
> +        return vhost_sw_live_migration_start(dev);
> +    } else {
> +        return vhost_sw_live_migration_stop(dev);
> +    }
> +}
> +
> +static void vhost_sw_lm_global_start(MemoryListener *listener)
> +{
> +    int r;
> +
> +    r = vhost_migration_log(listener, true, vhost_sw_live_migration_enable);
> +    if (r < 0) {
> +        abort();
> +    }
> +}
> +
> +static void vhost_sw_lm_global_stop(MemoryListener *listener)
> +{
> +    int r;
> +
> +    r = vhost_migration_log(listener, false, vhost_sw_live_migration_enable);
> +    if (r < 0) {
> +        abort();
> +    }
> +}
> +
>  static void vhost_log_start(MemoryListener *listener,
>                              MemoryRegionSection *section,
>                              int old, int new)
> @@ -1334,9 +1421,14 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>          .region_nop = vhost_region_addnop,
>          .log_start = vhost_log_start,
>          .log_stop = vhost_log_stop,
> -        .log_sync = vhost_log_sync,
> -        .log_global_start = vhost_log_global_start,
> -        .log_global_stop = vhost_log_global_stop,
> +        .log_sync = !vhost_dev_can_log(hdev) ?
> +                    vhost_log_sync_nop :
> +                    vhost_log_sync,

Why is this change necessary now? It's not clear to me why it was
previously okay to call vhost_log_sync().

> +        .log_global_start = !vhost_dev_can_log(hdev) ?
> +                            vhost_sw_lm_global_start :
> +                            vhost_log_global_start,
> +        .log_global_stop = !vhost_dev_can_log(hdev) ? vhost_sw_lm_global_stop :
> +                                                      vhost_log_global_stop,
>          .eventfd_add = vhost_eventfd_add,
>          .eventfd_del = vhost_eventfd_del,
>          .priority = 10
> @@ -1364,6 +1456,8 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
>              error_free(hdev->migration_blocker);
>              goto fail_busyloop;
>          }
> +    } else {
> +        hdev->sw_lm_vq_handler = handle_sw_lm_vq;
>      }
>  
>      hdev->mem = g_malloc0(offsetof(struct vhost_memory, regions));
> diff --git a/hw/virtio/meson.build b/hw/virtio/meson.build
> index fbff9bc9d4..17419cb13e 100644
> --- a/hw/virtio/meson.build
> +++ b/hw/virtio/meson.build
> @@ -11,7 +11,7 @@ softmmu_ss.add(when: 'CONFIG_ALL', if_true: files('vhost-stub.c'))
>  
>  virtio_ss = ss.source_set()
>  virtio_ss.add(files('virtio.c'))
> -virtio_ss.add(when: 'CONFIG_VHOST', if_true: files('vhost.c', 'vhost-backend.c'))
> +virtio_ss.add(when: 'CONFIG_VHOST', if_true: files('vhost.c', 'vhost-backend.c', 'vhost-sw-lm-ring.c'))
>  virtio_ss.add(when: 'CONFIG_VHOST_USER', if_true: files('vhost-user.c'))
>  virtio_ss.add(when: 'CONFIG_VHOST_VDPA', if_true: files('vhost-vdpa.c'))
>  virtio_ss.add(when: 'CONFIG_VIRTIO_BALLOON', if_true: files('virtio-balloon.c'))
> -- 
> 2.18.4
> 
