From: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
To: qemu-devel@nongnu.org
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
riel@surriel.com, mst@redhat.com, david@redhat.com,
dgilbert@redhat.com, pankaj.gupta@ionos.com, stefanha@redhat.com,
dan.j.williams@intel.com
Subject: [RFC] virtio_pmem: enable live migration support
Date: Fri, 31 Dec 2021 13:01:27 +0100 [thread overview]
Message-ID: <20211231120127.22394-1-pankaj.gupta.linux@gmail.com> (raw)
Enable live migration support for the virtio-pmem device.

Tested with live migration on the same host.

I need suggestions on the points below to support virtio-pmem live
migration between two separate host systems:
- There is still the possibility of a stale page cache page at the
  destination host, which we currently cannot invalidate (as done in 1]
  for write-back mode), because the virtio-pmem memory backend file is
  mmaped into the guest address space, and invalidating the corresponding
  page cache pages would also fault all other userspace process mappings
  of the same file. Or should we make it strict that no other process may
  mmap this backing file?

  -- In commit 1] we first fsync and then invalidate all the pages from
  the destination page cache. fsync would sync the stale dirty page cache
  pages; is this the right thing to do, given that we might end up with a
  data discrepancy?
- Alternatively, we could transfer the guest's active page cache
  information (the corresponding pages on the source host's active LRU
  list) to the destination host and refault those pages there. This would
  also warm the page cache on the destination host for the guest, and it
  would solve the stale page cache issue as well. How can we achieve this
  while making sure we get rid of all the stale page cache pages on the
  destination host?
Looking for suggestions on a recommended and feasible solution we can
implement.

Thank you!
1] dd577a26ff ("block/file-posix: implement bdrv_co_invalidate_cache() on Linux")
Signed-off-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
---
hw/virtio/virtio-pmem.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
index d1aeb90a31..a19619a387 100644
--- a/hw/virtio/virtio-pmem.c
+++ b/hw/virtio/virtio-pmem.c
@@ -123,6 +123,7 @@ static void virtio_pmem_realize(DeviceState *dev, Error **errp)
}
host_memory_backend_set_mapped(pmem->memdev, true);
+ vmstate_register_ram(&pmem->memdev->mr, DEVICE(pmem));
virtio_init(vdev, TYPE_VIRTIO_PMEM, VIRTIO_ID_PMEM,
sizeof(struct virtio_pmem_config));
pmem->rq_vq = virtio_add_queue(vdev, 128, virtio_pmem_flush);
@@ -133,6 +134,7 @@ static void virtio_pmem_unrealize(DeviceState *dev)
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VirtIOPMEM *pmem = VIRTIO_PMEM(dev);
+ vmstate_unregister_ram(&pmem->memdev->mr, DEVICE(pmem));
host_memory_backend_set_mapped(pmem->memdev, false);
virtio_delete_queue(pmem->rq_vq);
virtio_cleanup(vdev);
@@ -157,6 +159,16 @@ static MemoryRegion *virtio_pmem_get_memory_region(VirtIOPMEM *pmem,
return &pmem->memdev->mr;
}
+static const VMStateDescription vmstate_virtio_pmem = {
+ .name = "virtio-pmem",
+ .minimum_version_id = 1,
+ .version_id = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_VIRTIO_DEVICE,
+ VMSTATE_END_OF_LIST()
+ },
+};
+
static Property virtio_pmem_properties[] = {
DEFINE_PROP_UINT64(VIRTIO_PMEM_ADDR_PROP, VirtIOPMEM, start, 0),
DEFINE_PROP_LINK(VIRTIO_PMEM_MEMDEV_PROP, VirtIOPMEM, memdev,
@@ -171,6 +183,7 @@ static void virtio_pmem_class_init(ObjectClass *klass, void *data)
VirtIOPMEMClass *vpc = VIRTIO_PMEM_CLASS(klass);
device_class_set_props(dc, virtio_pmem_properties);
+ dc->vmsd = &vmstate_virtio_pmem;
vdc->realize = virtio_pmem_realize;
vdc->unrealize = virtio_pmem_unrealize;
--
2.25.1
Thread overview: 8+ messages
2021-12-31 12:01 Pankaj Gupta [this message]
2022-01-12 10:36 ` [RFC] virtio_pmem: enable live migration support David Hildenbrand
2022-01-12 15:44 ` Pankaj Gupta
2022-01-12 15:49 ` David Hildenbrand
2022-01-12 16:08 ` Pankaj Gupta
2022-01-12 16:26 ` David Hildenbrand
2022-01-12 16:42 ` Pankaj Gupta
2022-01-12 16:48 ` Pankaj Gupta