nvdimm.lists.linux.dev archive mirror
* [RFC v3 0/2] kvm "fake DAX" device flushing
@ 2018-07-13  7:52 Pankaj Gupta
  2018-07-13  7:52 ` [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem Pankaj Gupta
                   ` (2 more replies)
  0 siblings, 3 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-13  7:52 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w,
	riel-ebMLmSuQjDVBDgjK7y7TUQ,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	david-H+wXaHxf7aLQT0dZR+AlfA,
	ross.zwisler-ral2JQCrhuEAvxtiuMwx3w,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	mst-H+wXaHxf7aLQT0dZR+AlfA, stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, pbonzini-H+wXaHxf7aLQT0dZR+AlfA,
	nilal-H+wXaHxf7aLQT0dZR+AlfA, eblake-H+wXaHxf7aLQT0dZR+AlfA

This is RFC v3 of the 'fake DAX' flushing interface, shared
for review. This patchset has two parts:

- Guest virtio-pmem driver
  The guest driver reads the persistent memory range from the
  paravirtualized device and registers it with 'nvdimm_bus'. The
  'nvdimm/pmem' driver uses this information to allocate the
  persistent memory range. The guest side of the VIRTIO flushing
  interface is also implemented here.

- Qemu virtio-pmem device
  It exposes a persistent memory range to the KVM guest which on
  the host side is file-backed memory, and it acts as a persistent
  memory device. In addition, it implements the virtio device side
  of the flushing interface: the KVM guest requests an asynchronous
  sync on the Qemu side through this interface.
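From a guest application's point of view, the flushing interface is exercised through an ordinary fsync(): writes to a file on the DAX filesystem become durable only after fsync() returns, which is what ultimately triggers the virtio flush request to the host. A minimal standalone sketch (the path used here is purely illustrative, not an actual DAX mount):

```c
#include <fcntl.h>
#include <unistd.h>

/* Write a buffer to a file and make it persistent with fsync().
 * On a DAX filesystem backed by virtio-pmem, the fsync() is what
 * causes the guest driver to issue a flush request to the host.
 * Returns 0 on success, -1 on failure. */
static int write_persistent(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    /* fsync() returns 0 on success; on virtio-pmem it completes
     * only after the host has synced the backing file. */
    int ret = fsync(fd);
    close(fd);
    return ret;
}
```

The same call sequence works on any filesystem; what virtio-pmem changes is only what happens behind fsync().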

Changes from RFC v2:
- Add flush function in the nd_region in place of switching
  on a flag - Dan & Stefan
- Add flush completion function with proper locking and wait
  for host side flush completion - Stefan & Dan
- Keep userspace API in uapi header file - Stefan, MST
- Use LE fields & New device id - MST
- Indentation & spacing suggestions - MST & Eric
- Remove extra header files & add licensing - Stefan

Changes from RFC v1:
- Reuse existing 'pmem' code for registering persistent 
  memory and other operations instead of creating an entirely 
  new block driver.
- Use VIRTIO driver to register memory information with 
  nvdimm_bus and create region_type accordingly. 
- Call VIRTIO flush from existing pmem driver.

Details of the project idea for the 'fake DAX' flushing interface
are shared in [2] & [3].

Pankaj Gupta (2):
   Add virtio-pmem guest driver
   pmem: device flush over VIRTIO

[1] https://marc.info/?l=linux-mm&m=150782346802290&w=2
[2] https://www.spinics.net/lists/kvm/msg149761.html
[3] https://www.spinics.net/lists/kvm/msg153095.html  

 drivers/nvdimm/nd.h              |    1 
 drivers/nvdimm/pmem.c            |    4 
 drivers/nvdimm/region_devs.c     |   24 +++-
 drivers/virtio/Kconfig           |    9 +
 drivers/virtio/Makefile          |    1 
 drivers/virtio/virtio_pmem.c     |  190 +++++++++++++++++++++++++++++++++++++++
 include/linux/libnvdimm.h        |    5 -
 include/linux/virtio_pmem.h      |   44 +++++++++
 include/uapi/linux/virtio_ids.h  |    1 
 include/uapi/linux/virtio_pmem.h |   40 ++++++++
 10 files changed, 310 insertions(+), 9 deletions(-)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem
  2018-07-13  7:52 [RFC v3 0/2] kvm "fake DAX" device flushing Pankaj Gupta
@ 2018-07-13  7:52 ` Pankaj Gupta
  2018-07-13 20:35   ` Luiz Capitulino
       [not found] ` <20180713075232.9575-1-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2018-07-13  7:52 ` [RFC v3] qemu: Add virtio pmem device Pankaj Gupta
  2 siblings, 1 reply; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-13  7:52 UTC (permalink / raw)
  To: linux-kernel, kvm, qemu-devel, linux-nvdimm
  Cc: jack, stefanha, dan.j.williams, riel, haozhong.zhang, nilal,
	kwolf, pbonzini, ross.zwisler, david, xiaoguangrong.eric, hch,
	mst, niteshnarayanlal, lcapitulino, imammedo, eblake, pagupta

This patch adds functionality to perform a flush from guest to host
over VIRTIO. A flush callback is registered based on the 'nd_region'
type. The virtio_pmem driver requires this special flush interface;
for the rest of the region types the existing flush function is
registered. The error returned by the virtio flush interface is also
reported back to the caller.
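The dispatch logic this introduces in nvdimm_flush() — call the region's registered callback if one is set, otherwise fall back to the default pmem flush — can be modeled in plain C. This is a standalone userspace sketch whose names only mirror the patch; it is not kernel code:

```c
#include <stddef.h>

/* Simplified model of the patched struct nd_region: an optional
 * per-region flush callback, NULL meaning "use the default path". */
struct region {
    int (*flush)(struct region *r);
    int flushed_by_default;
};

/* Stands in for pmem_flush(): the default path cannot fail. */
static int default_flush(struct region *r)
{
    r->flushed_by_default = 1;
    return 0;
}

/* Example callback standing in for virtio_pmem_flush(); returns a
 * made-up error code just to show error propagation. */
static int failing_flush(struct region *r)
{
    (void)r;
    return -5;
}

/* Mirrors the new nvdimm_flush(): prefer the registered callback
 * and propagate its return value, else fall back to the default. */
static int region_flush(struct region *r)
{
    if (r->flush)
        return r->flush(r);
    return default_flush(r);
}
```

The key design point the patch makes is visible here: only the callback path can report an error, so the function's return type changes from void to int.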

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
 drivers/nvdimm/nd.h          |  1 +
 drivers/nvdimm/pmem.c        |  4 ++--
 drivers/nvdimm/region_devs.c | 24 ++++++++++++++++++------
 include/linux/libnvdimm.h    |  5 ++++-
 4 files changed, 25 insertions(+), 9 deletions(-)

diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 32e0364..1b62f79 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -159,6 +159,7 @@ struct nd_region {
 	struct badblocks bb;
 	struct nd_interleave_set *nd_set;
 	struct nd_percpu_lane __percpu *lane;
+	int (*flush)(struct device *dev);
 	struct nd_mapping mapping[0];
 };
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 9d71492..29fd2cd 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -180,7 +180,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
 	struct nd_region *nd_region = to_region(pmem);
 
 	if (bio->bi_opf & REQ_FLUSH)
-		nvdimm_flush(nd_region);
+		bio->bi_status = nvdimm_flush(nd_region);
 
 	do_acct = nd_iostat_start(bio, &start);
 	bio_for_each_segment(bvec, bio, iter) {
@@ -196,7 +196,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
 		nd_iostat_end(bio, start);
 
 	if (bio->bi_opf & REQ_FUA)
-		nvdimm_flush(nd_region);
+		bio->bi_status = nvdimm_flush(nd_region);
 
 	bio_endio(bio);
 	return BLK_QC_T_NONE;
diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index a612be6..124aae7 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -1025,6 +1025,7 @@ static struct nd_region *nd_region_create(struct nvdimm_bus *nvdimm_bus,
 	dev->of_node = ndr_desc->of_node;
 	nd_region->ndr_size = resource_size(ndr_desc->res);
 	nd_region->ndr_start = ndr_desc->res->start;
+	nd_region->flush = ndr_desc->flush;
 	nd_device_register(dev);
 
 	return nd_region;
@@ -1065,13 +1066,10 @@ struct nd_region *nvdimm_volatile_region_create(struct nvdimm_bus *nvdimm_bus,
 }
 EXPORT_SYMBOL_GPL(nvdimm_volatile_region_create);
 
-/**
- * nvdimm_flush - flush any posted write queues between the cpu and pmem media
- * @nd_region: blk or interleaved pmem region
- */
-void nvdimm_flush(struct nd_region *nd_region)
+void pmem_flush(struct device *dev)
 {
-	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
+	struct nd_region_data *ndrd = dev_get_drvdata(dev);
+	struct nd_region *nd_region = to_nd_region(dev);
 	int i, idx;
 
 	/*
@@ -1094,6 +1092,20 @@ void nvdimm_flush(struct nd_region *nd_region)
 			writeq(1, ndrd_get_flush_wpq(ndrd, i, idx));
 	wmb();
 }
+
+/**
+ * nvdimm_flush - flush any posted write queues between the cpu and pmem media
+ * @nd_region: blk or interleaved pmem region
+ */
+int nvdimm_flush(struct nd_region *nd_region)
+{
+	if (nd_region->flush)
+		return(nd_region->flush(&nd_region->dev));
+
+	pmem_flush(&nd_region->dev);
+
+	return 0;
+}
 EXPORT_SYMBOL_GPL(nvdimm_flush);
 
 /**
diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
index 097072c..33b617f 100644
--- a/include/linux/libnvdimm.h
+++ b/include/linux/libnvdimm.h
@@ -126,6 +126,7 @@ struct nd_region_desc {
 	int numa_node;
 	unsigned long flags;
 	struct device_node *of_node;
+	int (*flush)(struct device *dev);
 };
 
 struct device;
@@ -201,7 +202,9 @@ unsigned long nd_blk_memremap_flags(struct nd_blk_region *ndbr);
 unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
 void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
 u64 nd_fletcher64(void *addr, size_t len, bool le);
-void nvdimm_flush(struct nd_region *nd_region);
+int nvdimm_flush(struct nd_region *nd_region);
+void pmem_set_flush(struct nd_region *nd_region, void (*flush)
+					(struct device *));
 int nvdimm_has_flush(struct nd_region *nd_region);
 int nvdimm_has_cache(struct nd_region *nd_region);
 
-- 
2.9.3


* [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
       [not found] ` <20180713075232.9575-1-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-13  7:52   ` Pankaj Gupta
       [not found]     ` <20180713075232.9575-3-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2018-07-17 13:11     ` Stefan Hajnoczi
  2018-08-28 12:13   ` [RFC v3 0/2] kvm "fake DAX" device flushing David Hildenbrand
  1 sibling, 2 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-13  7:52 UTC (permalink / raw)
  To: linux-kernel-u79uwXL29TY76Z2rM5mHXA, kvm-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w,
	riel-ebMLmSuQjDVBDgjK7y7TUQ,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	david-H+wXaHxf7aLQT0dZR+AlfA,
	ross.zwisler-ral2JQCrhuEAvxtiuMwx3w,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	mst-H+wXaHxf7aLQT0dZR+AlfA, stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, pbonzini-H+wXaHxf7aLQT0dZR+AlfA,
	nilal-H+wXaHxf7aLQT0dZR+AlfA, eblake-H+wXaHxf7aLQT0dZR+AlfA

This patch adds a virtio-pmem driver for KVM guests.

The guest reads the persistent memory range information from Qemu
over VIRTIO and registers it on 'nvdimm_bus'. It also creates an
'nd_region' object with the persistent memory range information so
that the existing 'nvdimm/pmem' driver can reserve this range in the
system memory map. This way the 'virtio-pmem' driver reuses the
existing functionality of the pmem driver to register persistent
memory compatible with DAX-capable filesystems.

It also provides a function to perform a guest flush over VIRTIO
from the 'pmem' driver when userspace performs a flush on a DAX
memory range.
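The flush request lifecycle in the driver — submit a request, let the host completion handler fill in the status and mark it done, then return that status from the flusher — can be modeled without real virtqueues or blocking waits. A standalone sketch (field names mirror the driver's virtio_pmem_request, but this is only a model):

```c
#include <stdbool.h>

/* Userspace model of the driver's virtio_pmem_request: the guest
 * submits a request, the completion path fills in 'ret' and sets
 * 'done', and the waiting flusher then returns 'ret'. */
struct flush_request {
    int ret;     /* host status for the flush */
    bool done;   /* set by the completion (host_ack) path */
};

/* Stands in for host_ack(): mark the request complete with the
 * status reported by the host. */
static void complete_request(struct flush_request *req, int host_status)
{
    req->ret = host_status;
    req->done = true;
}

/* Stands in for the tail of virtio_pmem_flush(): once the request
 * is done, the host status becomes the flush return value. In the
 * real driver this point is reached via wait_event(). */
static int collect_flush_result(const struct flush_request *req)
{
    return req->done ? req->ret : -1 /* still pending */;
}
```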

Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
---
 drivers/virtio/Kconfig           |   9 ++
 drivers/virtio/Makefile          |   1 +
 drivers/virtio/virtio_pmem.c     | 190 +++++++++++++++++++++++++++++++++++++++
 include/linux/virtio_pmem.h      |  44 +++++++++
 include/uapi/linux/virtio_ids.h  |   1 +
 include/uapi/linux/virtio_pmem.h |  40 +++++++++
 6 files changed, 285 insertions(+)
 create mode 100644 drivers/virtio/virtio_pmem.c
 create mode 100644 include/linux/virtio_pmem.h
 create mode 100644 include/uapi/linux/virtio_pmem.h

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 3589764..a331e23 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -42,6 +42,15 @@ config VIRTIO_PCI_LEGACY
 
 	  If unsure, say Y.
 
+config VIRTIO_PMEM
+	tristate "Support for virtio pmem driver"
+	depends on VIRTIO
+	help
+	This driver provides support for virtio based flushing interface
+	for persistent memory range.
+
+	If unsure, say M.
+
 config VIRTIO_BALLOON
 	tristate "Virtio balloon driver"
 	depends on VIRTIO
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 3a2b5c5..cbe91c6 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
+obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
diff --git a/drivers/virtio/virtio_pmem.c b/drivers/virtio/virtio_pmem.c
new file mode 100644
index 0000000..6200b5e
--- /dev/null
+++ b/drivers/virtio/virtio_pmem.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * virtio_pmem.c: Virtio pmem Driver
+ *
+ * Discovers persistent memory range information
+ * from host and provides a virtio based flushing
+ * interface.
+ */
+#include <linux/virtio.h>
+#include <linux/module.h>
+#include <linux/virtio_pmem.h>
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+ /* The interrupt handler */
+static void host_ack(struct virtqueue *vq)
+{
+	unsigned int len;
+	unsigned long flags;
+	struct virtio_pmem_request *req;
+	struct virtio_pmem *vpmem = vq->vdev->priv;
+
+	spin_lock_irqsave(&vpmem->pmem_lock, flags);
+	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
+		req->done = true;
+		wake_up(&req->acked);
+	}
+	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
+}
+ /* Initialize virt queue */
+static int init_vq(struct virtio_pmem *vpmem)
+{
+	struct virtqueue *vq;
+
+	/* single vq */
+	vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
+				host_ack, "flush_queue");
+	if (IS_ERR(vq))
+		return PTR_ERR(vq);
+	spin_lock_init(&vpmem->pmem_lock);
+
+	return 0;
+};
+
+ /* The request submission function */
+static int virtio_pmem_flush(struct device *dev)
+{
+	int err;
+	unsigned long flags;
+	struct scatterlist *sgs[2], sg, ret;
+	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
+	struct virtio_pmem *vpmem = vdev->priv;
+	struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);
+
+	req->done = false;
+	init_waitqueue_head(&req->acked);
+	spin_lock_irqsave(&vpmem->pmem_lock, flags);
+
+	sg_init_one(&sg, req, sizeof(req));
+	sgs[0] = &sg;
+	sg_init_one(&ret, &req->ret, sizeof(req->ret));
+	sgs[1] = &ret;
+	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
+	if (err) {
+		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
+		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
+		return -ENOSPC;
+	}
+	virtqueue_kick(vpmem->req_vq);
+	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
+
+	/* When host has read buffer, this completes via host_ack */
+	wait_event(req->acked, req->done);
+	err = req->ret;
+	kfree(req);
+
+	return err;
+};
+
+static int virtio_pmem_probe(struct virtio_device *vdev)
+{
+	int err = 0;
+	struct resource res;
+	struct virtio_pmem *vpmem;
+	struct nvdimm_bus *nvdimm_bus;
+	struct nd_region_desc ndr_desc;
+	int nid = dev_to_node(&vdev->dev);
+	struct nd_region *nd_region;
+
+	if (!vdev->config->get) {
+		dev_err(&vdev->dev, "%s failure: config disabled\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
+			GFP_KERNEL);
+	if (!vpmem) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+
+	vpmem->vdev = vdev;
+	err = init_vq(vpmem);
+	if (err)
+		goto out_err;
+
+	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
+			start, &vpmem->start);
+	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
+			size, &vpmem->size);
+
+	res.start = vpmem->start;
+	res.end   = vpmem->start + vpmem->size-1;
+	vpmem->nd_desc.provider_name = "virtio-pmem";
+	vpmem->nd_desc.module = THIS_MODULE;
+
+	vpmem->nvdimm_bus = nvdimm_bus = nvdimm_bus_register(&vdev->dev,
+						&vpmem->nd_desc);
+	if (!nvdimm_bus)
+		goto out_vq;
+
+	dev_set_drvdata(&vdev->dev, nvdimm_bus);
+	memset(&ndr_desc, 0, sizeof(ndr_desc));
+
+	ndr_desc.res = &res;
+	ndr_desc.numa_node = nid;
+	ndr_desc.flush = virtio_pmem_flush;
+	set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
+	nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
+
+	if (!nd_region)
+		goto out_nd;
+
+	virtio_device_ready(vdev);
+	return 0;
+out_nd:
+	err = -ENXIO;
+	nvdimm_bus_unregister(nvdimm_bus);
+out_vq:
+	vdev->config->del_vqs(vdev);
+out_err:
+	dev_err(&vdev->dev, "failed to register virtio pmem memory\n");
+	return err;
+}
+
+static void virtio_pmem_remove(struct virtio_device *vdev)
+{
+	struct virtio_pmem *vpmem = vdev->priv;
+	struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
+
+	nvdimm_bus_unregister(nvdimm_bus);
+	vdev->config->del_vqs(vdev);
+	kfree(vpmem);
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int virtio_pmem_freeze(struct virtio_device *vdev)
+{
+	/* todo: handle freeze function */
+	return -EPERM;
+}
+
+static int virtio_pmem_restore(struct virtio_device *vdev)
+{
+	/* todo: handle restore function */
+	return -EPERM;
+}
+#endif
+
+
+static struct virtio_driver virtio_pmem_driver = {
+	.driver.name		= KBUILD_MODNAME,
+	.driver.owner		= THIS_MODULE,
+	.id_table		= id_table,
+	.probe			= virtio_pmem_probe,
+	.remove			= virtio_pmem_remove,
+#ifdef CONFIG_PM_SLEEP
+	.freeze                 = virtio_pmem_freeze,
+	.restore                = virtio_pmem_restore,
+#endif
+};
+
+module_virtio_driver(virtio_pmem_driver);
+MODULE_DEVICE_TABLE(virtio, id_table);
+MODULE_DESCRIPTION("Virtio pmem driver");
+MODULE_LICENSE("GPL");
diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
new file mode 100644
index 0000000..0f83d9c
--- /dev/null
+++ b/include/linux/virtio_pmem.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * virtio_pmem.h: virtio pmem Driver
+ *
+ * Discovers persistent memory range information
+ * from host and provides a virtio based flushing
+ * interface.
+ */
+#ifndef _LINUX_VIRTIO_PMEM_H
+#define _LINUX_VIRTIO_PMEM_H
+
+#include <linux/virtio_ids.h>
+#include <linux/virtio_config.h>
+#include <uapi/linux/virtio_pmem.h>
+#include <linux/libnvdimm.h>
+#include <linux/spinlock.h>
+
+struct virtio_pmem_request {
+	/* Host return status corresponding to flush request */
+	int ret;
+
+	/* Wait queue to process deferred work after ack from host */
+	wait_queue_head_t acked;
+	bool done;
+};
+
+struct virtio_pmem {
+	struct virtio_device *vdev;
+
+	/* Virtio pmem request queue */
+	struct virtqueue *req_vq;
+
+	/* nvdimm bus registers virtio pmem device */
+	struct nvdimm_bus *nvdimm_bus;
+	struct nvdimm_bus_descriptor nd_desc;
+
+	/* Synchronize virtqueue data */
+	spinlock_t pmem_lock;
+
+	/* Memory region information */
+	uint64_t start;
+	uint64_t size;
+};
+#endif
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 6d5c3b2..3463895 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_PMEM         25 /* virtio pmem */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_pmem.h b/include/uapi/linux/virtio_pmem.h
new file mode 100644
index 0000000..c7c22a5
--- /dev/null
+++ b/include/uapi/linux/virtio_pmem.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
+ * anyone can use the definitions to implement compatible drivers/servers:
+ *
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of IBM nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * Copyright (C) Red Hat, Inc., 2018-2019
+ * Copyright (C) Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, 2018
+ */
+#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
+#define _UAPI_LINUX_VIRTIO_PMEM_H
+
+struct virtio_pmem_config {
+	__le64 start;
+	__le64 size;
+};
+#endif
-- 
2.9.3


* [RFC v3] qemu: Add virtio pmem device
  2018-07-13  7:52 [RFC v3 0/2] kvm "fake DAX" device flushing Pankaj Gupta
  2018-07-13  7:52 ` [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem Pankaj Gupta
       [not found] ` <20180713075232.9575-1-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-13  7:52 ` Pankaj Gupta
       [not found]   ` <20180713075232.9575-4-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2 siblings, 1 reply; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-13  7:52 UTC (permalink / raw)
  To: linux-kernel, kvm, qemu-devel, linux-nvdimm
  Cc: jack, stefanha, dan.j.williams, riel, haozhong.zhang, nilal,
	kwolf, pbonzini, ross.zwisler, david, xiaoguangrong.eric, hch,
	mst, niteshnarayanlal, lcapitulino, imammedo, eblake, pagupta

This patch adds the virtio-pmem Qemu device.

This device presents a memory address range to the guest which is
backed by a file on the host. It acts as a persistent memory device
for the KVM guest. The guest can perform read and persistent write
operations on this memory range with the help of a DAX-capable
filesystem.

Persistent guest writes are guaranteed with the help of the virtio
based flushing interface: when guest userspace performs fsync on a
file fd on the pmem device, a flush command is sent to Qemu over
VIRTIO and a host-side flush/sync is done on the backing image file.
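The host-side contract implemented by worker_cb() in this patch is simple: fsync() the backing file descriptor and report 0 on success or the errno value on failure. That contract can be demonstrated in isolation with a userspace sketch (the temp-file path below is only for illustration):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Mirrors the error convention of worker_cb() in the patch: fsync
 * the backing fd and return 0 on success or the errno value on
 * failure. Qemu's done_cb() then copies this status into the
 * virtio response for the guest. */
static int sync_backing_fd(int fd)
{
    if (fsync(fd) != 0)
        return errno;
    return 0;
}
```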

Changes from RFC v2:
- Use aio_worker() to avoid Qemu from hanging with blocking fsync
  call - Stefan
- Use virtio_st*_p() for endianness - Stefan
- Correct indentation in qapi/misc.json - Eric

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
 hw/virtio/Makefile.objs                     |   3 +
 hw/virtio/virtio-pci.c                      |  44 +++++
 hw/virtio/virtio-pci.h                      |  14 ++
 hw/virtio/virtio-pmem.c                     | 241 ++++++++++++++++++++++++++++
 include/hw/pci/pci.h                        |   1 +
 include/hw/virtio/virtio-pmem.h             |  42 +++++
 include/standard-headers/linux/virtio_ids.h |   1 +
 qapi/misc.json                              |  26 ++-
 8 files changed, 371 insertions(+), 1 deletion(-)
 create mode 100644 hw/virtio/virtio-pmem.c
 create mode 100644 include/hw/virtio/virtio-pmem.h

diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 1b2799cfd8..7f914d45d0 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
 obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
 
 obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
+ifeq ($(CONFIG_MEM_HOTPLUG),y)
+obj-$(CONFIG_LINUX) += virtio-pmem.o
+endif
 obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
 endif
 
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 3a01fe90f0..93d3fc05c7 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
     .class_init    = virtio_rng_pci_class_init,
 };
 
+/* virtio-pmem-pci */
+
+static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
+    DeviceState *vdev = DEVICE(&vpmem->vdev);
+
+    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
+}
+
+static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
+    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
+    k->realize = virtio_pmem_pci_realize;
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
+    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
+    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
+    pcidev_k->class_id = PCI_CLASS_OTHERS;
+}
+
+static void virtio_pmem_pci_instance_init(Object *obj)
+{
+    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VIRTIO_PMEM);
+    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
+                              &error_abort);
+}
+
+static const TypeInfo virtio_pmem_pci_info = {
+    .name          = TYPE_VIRTIO_PMEM_PCI,
+    .parent        = TYPE_VIRTIO_PCI,
+    .instance_size = sizeof(VirtIOPMEMPCI),
+    .instance_init = virtio_pmem_pci_instance_init,
+    .class_init    = virtio_pmem_pci_class_init,
+};
+
+
 /* virtio-input-pci */
 
 static Property virtio_input_pci_properties[] = {
@@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
     type_register_static(&virtio_balloon_pci_info);
     type_register_static(&virtio_serial_pci_info);
     type_register_static(&virtio_net_pci_info);
+    type_register_static(&virtio_pmem_pci_info);
 #ifdef CONFIG_VHOST_SCSI
     type_register_static(&vhost_scsi_pci_info);
 #endif
diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
index 813082b0d7..fe74fcad3f 100644
--- a/hw/virtio/virtio-pci.h
+++ b/hw/virtio/virtio-pci.h
@@ -19,6 +19,7 @@
 #include "hw/virtio/virtio-blk.h"
 #include "hw/virtio/virtio-net.h"
 #include "hw/virtio/virtio-rng.h"
+#include "hw/virtio/virtio-pmem.h"
 #include "hw/virtio/virtio-serial.h"
 #include "hw/virtio/virtio-scsi.h"
 #include "hw/virtio/virtio-balloon.h"
@@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
 typedef struct VirtIOGPUPCI VirtIOGPUPCI;
 typedef struct VHostVSockPCI VHostVSockPCI;
 typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
+typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
 
 /* virtio-pci-bus */
 
@@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
     VirtIOBlock vdev;
 };
 
+/*
+ * virtio-pmem-pci: This extends VirtioPCIProxy.
+ */
+#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
+#define VIRTIO_PMEM_PCI(obj) \
+        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
+
+struct VirtIOPMEMPCI {
+    VirtIOPCIProxy parent_obj;
+    VirtIOPMEM vdev;
+};
+
 /*
  * virtio-balloon-pci: This extends VirtioPCIProxy.
  */
diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
new file mode 100644
index 0000000000..08c96d7e80
--- /dev/null
+++ b/hw/virtio/virtio-pmem.c
@@ -0,0 +1,241 @@
+/*
+ * Virtio pmem device
+ *
+ * Copyright (C) 2018 Red Hat, Inc.
+ * Copyright (C) 2018 Pankaj Gupta <pagupta@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+#include "hw/virtio/virtio-access.h"
+#include "hw/virtio/virtio-pmem.h"
+#include "hw/mem/memory-device.h"
+#include "block/aio.h"
+#include "block/thread-pool.h"
+
+typedef struct VirtIOPMEMresp {
+    int ret;
+} VirtIOPMEMResp;
+
+typedef struct VirtIODeviceRequest {
+    VirtQueueElement elem;
+    int fd;
+    VirtIOPMEM *pmem;
+    VirtIOPMEMResp resp;
+} VirtIODeviceRequest;
+
+static int worker_cb(void *opaque)
+{
+    VirtIODeviceRequest *req = opaque;
+    int err = 0;
+
+    /* flush raw backing image */
+    err = fsync(req->fd);
+    if (err != 0) {
+        err = errno;
+    }
+    req->resp.ret = err;
+
+    return 0;
+}
+
+static void done_cb(void *opaque, int ret)
+{
+    VirtIODeviceRequest *req = opaque;
+    int len = iov_from_buf(req->elem.in_sg, req->elem.in_num, 0,
+                              &req->resp, sizeof(VirtIOPMEMResp));
+
+    /* Callbacks are serialized, so no need to use atomic ops.  */
+    virtqueue_push(req->pmem->rq_vq, &req->elem, len);
+    virtio_notify((VirtIODevice *)req->pmem, req->pmem->rq_vq);
+    g_free(req);
+}
+
+static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIODeviceRequest *req;
+    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
+    HostMemoryBackend *backend = MEMORY_BACKEND(pmem->memdev);
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
+
+    req = virtqueue_pop(vq, sizeof(VirtIODeviceRequest));
+    if (!req) {
+        virtio_error(vdev, "virtio-pmem missing request data");
+        return;
+    }
+
+    if (req->elem.out_num < 1 || req->elem.in_num < 1) {
+        virtio_error(vdev, "virtio-pmem request not proper");
+        g_free(req);
+        return;
+    }
+    req->fd = memory_region_get_fd(&backend->mr);
+    req->pmem = pmem;
+    thread_pool_submit_aio(pool, worker_cb, req, done_cb, req);
+}
+
+static void virtio_pmem_get_config(VirtIODevice *vdev, uint8_t *config)
+{
+    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
+    struct virtio_pmem_config *pmemcfg = (struct virtio_pmem_config *) config;
+
+    virtio_stq_p(vdev, &pmemcfg->start, pmem->start);
+    virtio_stq_p(vdev, &pmemcfg->size, pmem->size);
+}
+
+static uint64_t virtio_pmem_get_features(VirtIODevice *vdev, uint64_t features,
+                                        Error **errp)
+{
+    return features;
+}
+
+static void virtio_pmem_realize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice   *vdev   = VIRTIO_DEVICE(dev);
+    VirtIOPMEM     *pmem   = VIRTIO_PMEM(dev);
+    MachineState   *ms     = MACHINE(qdev_get_machine());
+    uint64_t align;
+    Error *local_err = NULL;
+    MemoryRegion *mr;
+
+    if (!pmem->memdev) {
+        error_setg(errp, "virtio-pmem memdev not set");
+        return;
+    }
+
+    mr  = host_memory_backend_get_memory(pmem->memdev);
+    align = memory_region_get_alignment(mr);
+    pmem->size = QEMU_ALIGN_DOWN(memory_region_size(mr), align);
+    pmem->start = memory_device_get_free_addr(ms, NULL, align, pmem->size,
+                                                               &local_err);
+    if (local_err) {
+        error_setg(errp, "Can't get free address in mem device");
+        return;
+    }
+    memory_region_init_alias(&pmem->mr, OBJECT(pmem),
+                             "virtio_pmem-memory", mr, 0, pmem->size);
+    memory_device_plug_region(ms, &pmem->mr, pmem->start);
+
+    host_memory_backend_set_mapped(pmem->memdev, true);
+    virtio_init(vdev, TYPE_VIRTIO_PMEM, VIRTIO_ID_PMEM,
+                                          sizeof(struct virtio_pmem_config));
+    pmem->rq_vq = virtio_add_queue(vdev, 128, virtio_pmem_flush);
+}
+
+static void virtio_mem_check_memdev(Object *obj, const char *name, Object *val,
+                                    Error **errp)
+{
+    if (host_memory_backend_is_mapped(MEMORY_BACKEND(val))) {
+        char *path = object_get_canonical_path_component(val);
+        error_setg(errp, "Can't use already busy memdev: %s", path);
+        g_free(path);
+        return;
+    }
+
+    qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
+}
+
+static const char *virtio_pmem_get_device_id(VirtIOPMEM *vm)
+{
+    Object *obj = OBJECT(vm);
+    DeviceState *parent_dev;
+
+    /* always use the ID of the proxy device */
+    if (obj->parent && object_dynamic_cast(obj->parent, TYPE_DEVICE)) {
+        parent_dev = DEVICE(obj->parent);
+        return parent_dev->id;
+    }
+    return NULL;
+}
+
+static void virtio_pmem_md_fill_device_info(const MemoryDeviceState *md,
+                                           MemoryDeviceInfo *info)
+{
+    VirtioPMemDeviceInfo *vi = g_new0(VirtioPMemDeviceInfo, 1);
+    VirtIOPMEM *vm = VIRTIO_PMEM(md);
+    const char *id = virtio_pmem_get_device_id(vm);
+
+    if (id) {
+        vi->has_id = true;
+        vi->id = g_strdup(id);
+    }
+
+    vi->start = vm->start;
+    vi->size = vm->size;
+    vi->memdev = object_get_canonical_path(OBJECT(vm->memdev));
+
+    info->u.virtio_pmem.data = vi;
+    info->type = MEMORY_DEVICE_INFO_KIND_VIRTIO_PMEM;
+}
+
+static uint64_t virtio_pmem_md_get_addr(const MemoryDeviceState *md)
+{
+    VirtIOPMEM *vm = VIRTIO_PMEM(md);
+
+    return vm->start;
+}
+
+static uint64_t virtio_pmem_md_get_plugged_size(const MemoryDeviceState *md)
+{
+    VirtIOPMEM *vm = VIRTIO_PMEM(md);
+
+    return vm->size;
+}
+
+static uint64_t virtio_pmem_md_get_region_size(const MemoryDeviceState *md)
+{
+    VirtIOPMEM *vm = VIRTIO_PMEM(md);
+
+    return vm->size;
+}
+
+static void virtio_pmem_instance_init(Object *obj)
+{
+    VirtIOPMEM *vm = VIRTIO_PMEM(obj);
+    object_property_add_link(obj, "memdev", TYPE_MEMORY_BACKEND,
+                                (Object **)&vm->memdev,
+                                (void *) virtio_mem_check_memdev,
+                                OBJ_PROP_LINK_STRONG,
+                                &error_abort);
+}
+
+
+static void virtio_pmem_class_init(ObjectClass *klass, void *data)
+{
+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+    MemoryDeviceClass *mdc = MEMORY_DEVICE_CLASS(klass);
+
+    vdc->realize      =  virtio_pmem_realize;
+    vdc->get_config   =  virtio_pmem_get_config;
+    vdc->get_features =  virtio_pmem_get_features;
+
+    mdc->get_addr         = virtio_pmem_md_get_addr;
+    mdc->get_plugged_size = virtio_pmem_md_get_plugged_size;
+    mdc->get_region_size  = virtio_pmem_md_get_region_size;
+    mdc->fill_device_info = virtio_pmem_md_fill_device_info;
+}
+
+static TypeInfo virtio_pmem_info = {
+    .name          = TYPE_VIRTIO_PMEM,
+    .parent        = TYPE_VIRTIO_DEVICE,
+    .class_init    = virtio_pmem_class_init,
+    .instance_size = sizeof(VirtIOPMEM),
+    .instance_init = virtio_pmem_instance_init,
+    .interfaces = (InterfaceInfo[]) {
+        { TYPE_MEMORY_DEVICE },
+        { }
+  },
+};
+
+static void virtio_register_types(void)
+{
+    type_register_static(&virtio_pmem_info);
+}
+
+type_init(virtio_register_types)
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 990d6fcbde..28829b6437 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -85,6 +85,7 @@ extern bool pci_available;
 #define PCI_DEVICE_ID_VIRTIO_RNG         0x1005
 #define PCI_DEVICE_ID_VIRTIO_9P          0x1009
 #define PCI_DEVICE_ID_VIRTIO_VSOCK       0x1012
+#define PCI_DEVICE_ID_VIRTIO_PMEM        0x1013
 
 #define PCI_VENDOR_ID_REDHAT             0x1b36
 #define PCI_DEVICE_ID_REDHAT_BRIDGE      0x0001
diff --git a/include/hw/virtio/virtio-pmem.h b/include/hw/virtio/virtio-pmem.h
new file mode 100644
index 0000000000..fda3ee691c
--- /dev/null
+++ b/include/hw/virtio/virtio-pmem.h
@@ -0,0 +1,42 @@
+/*
+ * Virtio pmem Device
+ *
+ * Copyright Red Hat, Inc. 2018
+ * Copyright Pankaj Gupta <pagupta@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or
+ * (at your option) any later version.  See the COPYING file in the
+ * top-level directory.
+ */
+
+#ifndef QEMU_VIRTIO_PMEM_H
+#define QEMU_VIRTIO_PMEM_H
+
+#include "hw/virtio/virtio.h"
+#include "exec/memory.h"
+#include "sysemu/hostmem.h"
+#include "standard-headers/linux/virtio_ids.h"
+#include "hw/boards.h"
+#include "hw/i386/pc.h"
+
+#define TYPE_VIRTIO_PMEM "virtio-pmem"
+
+#define VIRTIO_PMEM(obj) \
+        OBJECT_CHECK(VirtIOPMEM, (obj), TYPE_VIRTIO_PMEM)
+
+/* VirtIOPMEM device structure */
+typedef struct VirtIOPMEM {
+    VirtIODevice parent_obj;
+
+    VirtQueue *rq_vq;
+    uint64_t start;
+    uint64_t size;
+    MemoryRegion mr;
+    HostMemoryBackend *memdev;
+} VirtIOPMEM;
+
+struct virtio_pmem_config {
+    uint64_t start;
+    uint64_t size;
+};
+#endif
diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
index 6d5c3b2d4f..346389565a 100644
--- a/include/standard-headers/linux/virtio_ids.h
+++ b/include/standard-headers/linux/virtio_ids.h
@@ -43,5 +43,6 @@
 #define VIRTIO_ID_INPUT        18 /* virtio input */
 #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
 #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
+#define VIRTIO_ID_PMEM         25 /* virtio pmem */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/qapi/misc.json b/qapi/misc.json
index 29da7856e3..fb85dd6f6c 100644
--- a/qapi/misc.json
+++ b/qapi/misc.json
@@ -2907,6 +2907,29 @@
           }
 }
 
+##
+# @VirtioPMemDeviceInfo:
+#
+# VirtioPMem state information
+#
+# @id: device's ID
+#
+# @start: physical address, where device is mapped
+#
+# @size: size of memory that the device provides
+#
+# @memdev: memory backend linked with device
+#
+# Since: 2.13
+##
+{ 'struct': 'VirtioPMemDeviceInfo',
+  'data': { '*id': 'str',
+            'start': 'size',
+            'size': 'size',
+            'memdev': 'str'
+          }
+}
+
 ##
 # @MemoryDeviceInfo:
 #
@@ -2916,7 +2939,8 @@
 ##
 { 'union': 'MemoryDeviceInfo',
   'data': { 'dimm': 'PCDIMMDeviceInfo',
-            'nvdimm': 'PCDIMMDeviceInfo'
+            'nvdimm': 'PCDIMMDeviceInfo',
+	    'virtio-pmem': 'VirtioPMemDeviceInfo'
           }
 }
 
-- 
2.14.3

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem
  2018-07-13  7:52 ` [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem Pankaj Gupta
@ 2018-07-13 20:35   ` Luiz Capitulino
  2018-07-16  8:13     ` Pankaj Gupta
  0 siblings, 1 reply; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-13 20:35 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: linux-kernel, kvm, qemu-devel, linux-nvdimm, jack, stefanha,
	dan.j.williams, riel, haozhong.zhang, nilal, kwolf, pbonzini,
	ross.zwisler, david, xiaoguangrong.eric, hch, mst,
	niteshnarayanlal, imammedo, eblake

On Fri, 13 Jul 2018 13:22:30 +0530
Pankaj Gupta <pagupta@redhat.com> wrote:

> This patch adds functionality to perform flush from guest to host
> over VIRTIO. We are registering a callback based on 'nd_region' type.
> As virtio_pmem driver requires this special flush interface, for rest
> of the region types we are registering existing flush function.
> Also report the error returned by virtio flush interface.

This patch doesn't apply against latest upstream. A few more comments
below.

> 
> Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> ---
>  drivers/nvdimm/nd.h          |  1 +
>  drivers/nvdimm/pmem.c        |  4 ++--
>  drivers/nvdimm/region_devs.c | 24 ++++++++++++++++++------
>  include/linux/libnvdimm.h    |  5 ++++-
>  4 files changed, 25 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> index 32e0364..1b62f79 100644
> --- a/drivers/nvdimm/nd.h
> +++ b/drivers/nvdimm/nd.h
> @@ -159,6 +159,7 @@ struct nd_region {
>  	struct badblocks bb;
>  	struct nd_interleave_set *nd_set;
>  	struct nd_percpu_lane __percpu *lane;
> +	int (*flush)(struct device *dev);
>  	struct nd_mapping mapping[0];
>  };
>  
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index 9d71492..29fd2cd 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -180,7 +180,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
>  	struct nd_region *nd_region = to_region(pmem);
>  
>  	if (bio->bi_opf & REQ_FLUSH)
> -		nvdimm_flush(nd_region);
> +		bio->bi_status = nvdimm_flush(nd_region);
>  
>  	do_acct = nd_iostat_start(bio, &start);
>  	bio_for_each_segment(bvec, bio, iter) {
> @@ -196,7 +196,7 @@ static blk_qc_t pmem_make_request(struct request_queue *q, struct bio *bio)
>  		nd_iostat_end(bio, start);
>  
>  	if (bio->bi_opf & REQ_FUA)
> -		nvdimm_flush(nd_region);
> +		bio->bi_status = nvdimm_flush(nd_region);
>  
>  	bio_endio(bio);
>  	return BLK_QC_T_NONE;
> diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> index a612be6..124aae7 100644
> --- a/drivers/nvdimm/region_devs.c
> +++ b/drivers/nvdimm/region_devs.c
> @@ -1025,6 +1025,7 @@ static struct nd_region *nd_region_create(struct nvdimm_bus *nvdimm_bus,
>  	dev->of_node = ndr_desc->of_node;
>  	nd_region->ndr_size = resource_size(ndr_desc->res);
>  	nd_region->ndr_start = ndr_desc->res->start;
> +	nd_region->flush = ndr_desc->flush;
>  	nd_device_register(dev);
>  
>  	return nd_region;
> @@ -1065,13 +1066,10 @@ struct nd_region *nvdimm_volatile_region_create(struct nvdimm_bus *nvdimm_bus,
>  }
>  EXPORT_SYMBOL_GPL(nvdimm_volatile_region_create);
>  
> -/**
> - * nvdimm_flush - flush any posted write queues between the cpu and pmem media
> - * @nd_region: blk or interleaved pmem region
> - */
> -void nvdimm_flush(struct nd_region *nd_region)
> +void pmem_flush(struct device *dev)
>  {
> -	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
> +	struct nd_region_data *ndrd = dev_get_drvdata(dev);
> +	struct nd_region *nd_region = to_nd_region(dev);
>  	int i, idx;
>  
>  	/*
> @@ -1094,6 +1092,20 @@ void nvdimm_flush(struct nd_region *nd_region)
>  			writeq(1, ndrd_get_flush_wpq(ndrd, i, idx));
>  	wmb();
>  }
> +
> +/**
> + * nvdimm_flush - flush any posted write queues between the cpu and pmem media
> + * @nd_region: blk or interleaved pmem region
> + */
> +int nvdimm_flush(struct nd_region *nd_region)
> +{
> +	if (nd_region->flush)
> +		return(nd_region->flush(&nd_region->dev));
> +
> +	pmem_flush(&nd_region->dev);

IMHO, a better way of doing this would be to allow nvdimm_flush() to
be overridden. That is, in nd_region_create() you set nd_region->flush
to the original nvdimm_flush() if ndr_desc->flush is NULL. And then
always call nd_region->flush() where nvdimm_flush() is called today.
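
Something along these lines (untested sketch; it also assumes pmem_flush()
is changed to return int, always 0 for the existing path):

	/* in nd_region_create(), after the other ndr_desc fields are
	 * copied: fall back to the generic flush when the caller does
	 * not provide its own callback */
	nd_region->flush = ndr_desc->flush ? ndr_desc->flush : pmem_flush;

	/* nvdimm_flush() then becomes a plain indirect call, with no
	 * special-casing at the call sites */
	int nvdimm_flush(struct nd_region *nd_region)
	{
		return nd_region->flush(&nd_region->dev);
	}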

> +
> +	return 0;
> +}
>  EXPORT_SYMBOL_GPL(nvdimm_flush);
>  
>  /**
> diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
> index 097072c..33b617f 100644
> --- a/include/linux/libnvdimm.h
> +++ b/include/linux/libnvdimm.h
> @@ -126,6 +126,7 @@ struct nd_region_desc {
>  	int numa_node;
>  	unsigned long flags;
>  	struct device_node *of_node;
> +	int (*flush)(struct device *dev);
>  };
>  
>  struct device;
> @@ -201,7 +202,9 @@ unsigned long nd_blk_memremap_flags(struct nd_blk_region *ndbr);
>  unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
>  void nd_region_release_lane(struct nd_region *nd_region, unsigned int lane);
>  u64 nd_fletcher64(void *addr, size_t len, bool le);
> -void nvdimm_flush(struct nd_region *nd_region);
> +int nvdimm_flush(struct nd_region *nd_region);
> +void pmem_set_flush(struct nd_region *nd_region, void (*flush)
> +					(struct device *));

It seems pmem_set_flush() doesn't exist.

>  int nvdimm_has_flush(struct nd_region *nd_region);
>  int nvdimm_has_cache(struct nd_region *nd_region);
>  


* Re: [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
       [not found]     ` <20180713075232.9575-3-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-13 20:38       ` Luiz Capitulino
  2018-07-16 11:46         ` [Qemu-devel] " Pankaj Gupta
  0 siblings, 1 reply; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-13 20:38 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w,
	kvm-u79uwXL29TY76Z2rM5mHXA, riel-ebMLmSuQjDVBDgjK7y7TUQ,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA,
	ross.zwisler-ral2JQCrhuEAvxtiuMwx3w,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	eblake-H+wXaHxf7aLQT0dZR+AlfA

On Fri, 13 Jul 2018 13:22:31 +0530
Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> This patch adds virtio-pmem driver for KVM guest.
> 
> Guest reads the persistent memory range information from Qemu over 
> VIRTIO and registers it on nvdimm_bus. It also creates a nd_region 
> object with the persistent memory range information so that existing 
> 'nvdimm/pmem' driver can reserve this into system memory map. This way 
> 'virtio-pmem' driver uses existing functionality of pmem driver to 
> register persistent memory compatible for DAX capable filesystems.
> 
> This also provides function to perform guest flush over VIRTIO from 
> 'pmem' driver when userspace performs flush on DAX memory range.
> 
> Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---
>  drivers/virtio/Kconfig           |   9 ++
>  drivers/virtio/Makefile          |   1 +
>  drivers/virtio/virtio_pmem.c     | 190 +++++++++++++++++++++++++++++++++++++++
>  include/linux/virtio_pmem.h      |  44 +++++++++
>  include/uapi/linux/virtio_ids.h  |   1 +
>  include/uapi/linux/virtio_pmem.h |  40 +++++++++
>  6 files changed, 285 insertions(+)
>  create mode 100644 drivers/virtio/virtio_pmem.c
>  create mode 100644 include/linux/virtio_pmem.h
>  create mode 100644 include/uapi/linux/virtio_pmem.h
> 
> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> index 3589764..a331e23 100644
> --- a/drivers/virtio/Kconfig
> +++ b/drivers/virtio/Kconfig
> @@ -42,6 +42,15 @@ config VIRTIO_PCI_LEGACY
>  
>  	  If unsure, say Y.
>  
> +config VIRTIO_PMEM
> +	tristate "Support for virtio pmem driver"
> +	depends on VIRTIO
> +	help
> +	This driver provides support for virtio based flushing interface
> +	for persistent memory range.
> +
> +	If unsure, say M.
> +
>  config VIRTIO_BALLOON
>  	tristate "Virtio balloon driver"
>  	depends on VIRTIO
> diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> index 3a2b5c5..cbe91c6 100644
> --- a/drivers/virtio/Makefile
> +++ b/drivers/virtio/Makefile
> @@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
>  virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
>  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
>  obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
> +obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
> diff --git a/drivers/virtio/virtio_pmem.c b/drivers/virtio/virtio_pmem.c
> new file mode 100644
> index 0000000..6200b5e
> --- /dev/null
> +++ b/drivers/virtio/virtio_pmem.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * virtio_pmem.c: Virtio pmem Driver
> + *
> + * Discovers persistent memory range information
> + * from host and provides a virtio based flushing
> + * interface.
> + */
> +#include <linux/virtio.h>
> +#include <linux/module.h>
> +#include <linux/virtio_pmem.h>
> +
> +static struct virtio_device_id id_table[] = {
> +	{ VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID },
> +	{ 0 },
> +};
> +
> + /* The interrupt handler */
> +static void host_ack(struct virtqueue *vq)
> +{
> +	unsigned int len;
> +	unsigned long flags;
> +	struct virtio_pmem_request *req;
> +	struct virtio_pmem *vpmem = vq->vdev->priv;
> +
> +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> +	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
> +		req->done = true;
> +		wake_up(&req->acked);
> +	}
> +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);

Honest question: why do you need to disable interrupts here?

> +}
> + /* Initialize virt queue */
> +static int init_vq(struct virtio_pmem *vpmem)
> +{
> +	struct virtqueue *vq;
> +
> +	/* single vq */
> +	vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
> +				host_ack, "flush_queue");
> +	if (IS_ERR(vq))
> +		return PTR_ERR(vq);
> +	spin_lock_init(&vpmem->pmem_lock);
> +
> +	return 0;
> +};
> +
> + /* The request submission function */
> +static int virtio_pmem_flush(struct device *dev)
> +{
> +	int err;
> +	unsigned long flags;
> +	struct scatterlist *sgs[2], sg, ret;
> +	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
> +	struct virtio_pmem *vpmem = vdev->priv;
> +	struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);

Not checking kmalloc() return.

> +
> +	req->done = false;
> +	init_waitqueue_head(&req->acked);
> +	spin_lock_irqsave(&vpmem->pmem_lock, flags);

Why do you need spin_lock_irqsave()? There are two points consider:

1. Will virtio_pmem_flush() ever be called with interrupts disabled?
   If yes, then it's broken, since you should be using GFP_ATOMIC in the
   kmalloc() call and you can't call wait_event().

2. If virtio_pmem_flush() is never called with interrupts disabled, do
   you really need to disable interrupts? If yes, why?

Another point to consider is whether or not virtio_pmem_flush()
can be called from atomic context. nvdimm_flush() itself is called
from a few atomic sites, but I can't tell if virtio_pmem_flush()
will ever be called from those sites. If it can be called from atomic
context, then item 1 applies here. If you're sure it can't, then
you should probably call might_sleep().
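
For the sleeping-only case, the start of the submission path could look
something like this (untested sketch, based on the code below):

	static int virtio_pmem_flush(struct device *dev)
	{
		struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
		struct virtio_pmem *vpmem = vdev->priv;
		struct virtio_pmem_request *req;

		might_sleep();	/* we may block in wait_event() below */

		req = kmalloc(sizeof(*req), GFP_KERNEL);
		if (!req)
			return -ENOMEM;
		...
	}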

> +
> +	sg_init_one(&sg, req, sizeof(req));
> +	sgs[0] = &sg;
> +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> +	sgs[1] = &ret;
> +	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> +	if (err) {
> +		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
> +		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> +		return -ENOSPC;
> +	}
> +	virtqueue_kick(vpmem->req_vq);
> +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> +
> +	/* When host has read buffer, this completes via host_ack */
> +	wait_event(req->acked, req->done);
> +	err = req->ret;
> +	kfree(req);
> +
> +	return err;
> +};
> +
> +static int virtio_pmem_probe(struct virtio_device *vdev)
> +{
> +	int err = 0;
> +	struct resource res;
> +	struct virtio_pmem *vpmem;
> +	struct nvdimm_bus *nvdimm_bus;
> +	struct nd_region_desc ndr_desc;
> +	int nid = dev_to_node(&vdev->dev);
> +	struct nd_region *nd_region;
> +
> +	if (!vdev->config->get) {
> +		dev_err(&vdev->dev, "%s failure: config disabled\n",
> +			__func__);
> +		return -EINVAL;
> +	}
> +
> +	vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
> +			GFP_KERNEL);
> +	if (!vpmem) {
> +		err = -ENOMEM;
> +		goto out_err;
> +	}
> +
> +	vpmem->vdev = vdev;
> +	err = init_vq(vpmem);
> +	if (err)
> +		goto out_err;
> +
> +	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> +			start, &vpmem->start);
> +	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> +			size, &vpmem->size);
> +
> +	res.start = vpmem->start;
> +	res.end   = vpmem->start + vpmem->size-1;
> +	vpmem->nd_desc.provider_name = "virtio-pmem";
> +	vpmem->nd_desc.module = THIS_MODULE;
> +
> +	vpmem->nvdimm_bus = nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> +						&vpmem->nd_desc);
> +	if (!nvdimm_bus)
> +		goto out_vq;
> +
> +	dev_set_drvdata(&vdev->dev, nvdimm_bus);
> +	memset(&ndr_desc, 0, sizeof(ndr_desc));
> +
> +	ndr_desc.res = &res;
> +	ndr_desc.numa_node = nid;
> +	ndr_desc.flush = virtio_pmem_flush;
> +	set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> +	nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
> +
> +	if (!nd_region)
> +		goto out_nd;
> +
> +	virtio_device_ready(vdev);
> +	return 0;
> +out_nd:
> +	err = -ENXIO;
> +	nvdimm_bus_unregister(nvdimm_bus);
> +out_vq:
> +	vdev->config->del_vqs(vdev);
> +out_err:
> +	dev_err(&vdev->dev, "failed to register virtio pmem memory\n");
> +	return err;
> +}
> +
> +static void virtio_pmem_remove(struct virtio_device *vdev)
> +{
> +	struct virtio_pmem *vpmem = vdev->priv;
> +	struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
> +
> +	nvdimm_bus_unregister(nvdimm_bus);
> +	vdev->config->del_vqs(vdev);
> +	kfree(vpmem);
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +static int virtio_pmem_freeze(struct virtio_device *vdev)
> +{
> +	/* todo: handle freeze function */
> +	return -EPERM;
> +}
> +
> +static int virtio_pmem_restore(struct virtio_device *vdev)
> +{
> +	/* todo: handle restore function */
> +	return -EPERM;
> +}
> +#endif
> +
> +
> +static struct virtio_driver virtio_pmem_driver = {
> +	.driver.name		= KBUILD_MODNAME,
> +	.driver.owner		= THIS_MODULE,
> +	.id_table		= id_table,
> +	.probe			= virtio_pmem_probe,
> +	.remove			= virtio_pmem_remove,
> +#ifdef CONFIG_PM_SLEEP
> +	.freeze                 = virtio_pmem_freeze,
> +	.restore                = virtio_pmem_restore,
> +#endif
> +};
> +
> +module_virtio_driver(virtio_pmem_driver);
> +MODULE_DEVICE_TABLE(virtio, id_table);
> +MODULE_DESCRIPTION("Virtio pmem driver");
> +MODULE_LICENSE("GPL");
> diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
> new file mode 100644
> index 0000000..0f83d9c
> --- /dev/null
> +++ b/include/linux/virtio_pmem.h
> @@ -0,0 +1,44 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * virtio_pmem.h: virtio pmem Driver
> + *
> + * Discovers persistent memory range information
> + * from host and provides a virtio based flushing
> + * interface.
> + */
> +#ifndef _LINUX_VIRTIO_PMEM_H
> +#define _LINUX_VIRTIO_PMEM_H
> +
> +#include <linux/virtio_ids.h>
> +#include <linux/virtio_config.h>
> +#include <uapi/linux/virtio_pmem.h>
> +#include <linux/libnvdimm.h>
> +#include <linux/spinlock.h>
> +
> +struct virtio_pmem_request {
> +	/* Host return status corresponding to flush request */
> +	int ret;
> +
> +	/* Wait queue to process deferred work after ack from host */
> +	wait_queue_head_t acked;
> +	bool done;
> +};
> +
> +struct virtio_pmem {
> +	struct virtio_device *vdev;
> +
> +	/* Virtio pmem request queue */
> +	struct virtqueue *req_vq;
> +
> +	/* nvdimm bus registers virtio pmem device */
> +	struct nvdimm_bus *nvdimm_bus;
> +	struct nvdimm_bus_descriptor nd_desc;
> +
> +	/* Synchronize virtqueue data */
> +	spinlock_t pmem_lock;
> +
> +	/* Memory region information */
> +	uint64_t start;
> +	uint64_t size;
> +};
> +#endif
> diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
> index 6d5c3b2..3463895 100644
> --- a/include/uapi/linux/virtio_ids.h
> +++ b/include/uapi/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/include/uapi/linux/virtio_pmem.h b/include/uapi/linux/virtio_pmem.h
> new file mode 100644
> index 0000000..c7c22a5
> --- /dev/null
> +++ b/include/uapi/linux/virtio_pmem.h
> @@ -0,0 +1,40 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
> + * anyone can use the definitions to implement compatible drivers/servers:
> + *
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + *    notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + *    notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + * 3. Neither the name of IBM nor the names of its contributors
> + *    may be used to endorse or promote products derived from this software
> + *    without specific prior written permission.
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS''
> + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + *
> + * Copyright (C) Red Hat, Inc., 2018-2019
> + * Copyright (C) Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, 2018
> + */
> +#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
> +#define _UAPI_LINUX_VIRTIO_PMEM_H
> +
> +struct virtio_pmem_config {
> +	__le64 start;
> +	__le64 size;
> +};
> +#endif


* Re: [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem
  2018-07-13 20:35   ` Luiz Capitulino
@ 2018-07-16  8:13     ` Pankaj Gupta
  0 siblings, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-16  8:13 UTC (permalink / raw)
  To: Luiz Capitulino
  Cc: linux-kernel, kvm, qemu-devel, linux-nvdimm, jack, stefanha,
	dan j williams, riel, nilal, kwolf, pbonzini, ross zwisler,
	david, xiaoguangrong eric, hch, mst, niteshnarayanlal, imammedo,
	eblake


Hi Luiz,

> 
> > This patch adds functionality to perform flush from guest to host
> > over VIRTIO. We are registering a callback based on 'nd_region' type.
> > As virtio_pmem driver requires this special flush interface, for rest
> > of the region types we are registering existing flush function.
> > Also report the error returned by virtio flush interface.
> 
> This patch doesn't apply against latest upstream. A few more comments
> below.

My bad, I tested it with 4.17-rc1. Will rebase it.

> 
> > 
> > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > ---
> >  drivers/nvdimm/nd.h          |  1 +
> >  drivers/nvdimm/pmem.c        |  4 ++--
> >  drivers/nvdimm/region_devs.c | 24 ++++++++++++++++++------
> >  include/linux/libnvdimm.h    |  5 ++++-
> >  4 files changed, 25 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
> > index 32e0364..1b62f79 100644
> > --- a/drivers/nvdimm/nd.h
> > +++ b/drivers/nvdimm/nd.h
> > @@ -159,6 +159,7 @@ struct nd_region {
> >  	struct badblocks bb;
> >  	struct nd_interleave_set *nd_set;
> >  	struct nd_percpu_lane __percpu *lane;
> > +	int (*flush)(struct device *dev);
> >  	struct nd_mapping mapping[0];
> >  };
> >  
> > diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> > index 9d71492..29fd2cd 100644
> > --- a/drivers/nvdimm/pmem.c
> > +++ b/drivers/nvdimm/pmem.c
> > @@ -180,7 +180,7 @@ static blk_qc_t pmem_make_request(struct request_queue
> > *q, struct bio *bio)
> >  	struct nd_region *nd_region = to_region(pmem);
> >  
> >  	if (bio->bi_opf & REQ_FLUSH)
> > -		nvdimm_flush(nd_region);
> > +		bio->bi_status = nvdimm_flush(nd_region);
> >  
> >  	do_acct = nd_iostat_start(bio, &start);
> >  	bio_for_each_segment(bvec, bio, iter) {
> > @@ -196,7 +196,7 @@ static blk_qc_t pmem_make_request(struct request_queue
> > *q, struct bio *bio)
> >  		nd_iostat_end(bio, start);
> >  
> >  	if (bio->bi_opf & REQ_FUA)
> > -		nvdimm_flush(nd_region);
> > +		bio->bi_status = nvdimm_flush(nd_region);
> >  
> >  	bio_endio(bio);
> >  	return BLK_QC_T_NONE;
> > diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
> > index a612be6..124aae7 100644
> > --- a/drivers/nvdimm/region_devs.c
> > +++ b/drivers/nvdimm/region_devs.c
> > @@ -1025,6 +1025,7 @@ static struct nd_region *nd_region_create(struct
> > nvdimm_bus *nvdimm_bus,
> >  	dev->of_node = ndr_desc->of_node;
> >  	nd_region->ndr_size = resource_size(ndr_desc->res);
> >  	nd_region->ndr_start = ndr_desc->res->start;
> > +	nd_region->flush = ndr_desc->flush;
> >  	nd_device_register(dev);
> >  
> >  	return nd_region;
> > @@ -1065,13 +1066,10 @@ struct nd_region
> > *nvdimm_volatile_region_create(struct nvdimm_bus *nvdimm_bus,
> >  }
> >  EXPORT_SYMBOL_GPL(nvdimm_volatile_region_create);
> >  
> > -/**
> > - * nvdimm_flush - flush any posted write queues between the cpu and pmem
> > media
> > - * @nd_region: blk or interleaved pmem region
> > - */
> > -void nvdimm_flush(struct nd_region *nd_region)
> > +void pmem_flush(struct device *dev)
> >  {
> > -	struct nd_region_data *ndrd = dev_get_drvdata(&nd_region->dev);
> > +	struct nd_region_data *ndrd = dev_get_drvdata(dev);
> > +	struct nd_region *nd_region = to_nd_region(dev);
> >  	int i, idx;
> >  
> >  	/*
> > @@ -1094,6 +1092,20 @@ void nvdimm_flush(struct nd_region *nd_region)
> >  			writeq(1, ndrd_get_flush_wpq(ndrd, i, idx));
> >  	wmb();
> >  }
> > +
> > +/**
> > + * nvdimm_flush - flush any posted write queues between the cpu and pmem
> > media
> > + * @nd_region: blk or interleaved pmem region
> > + */
> > +int nvdimm_flush(struct nd_region *nd_region)
> > +{
> > +	if (nd_region->flush)
> > +		return(nd_region->flush(&nd_region->dev));
> > +
> > +	pmem_flush(&nd_region->dev);
> 
> IMHO, a better way of doing this would be to allow nvdimm_flush() to
> be overridden. That is, in nd_region_create() you set nd_region->flush
> to the original nvdimm_flush() if ndr_desc->flush is NULL. And then
> always call nd_region->flush() where nvdimm_flush() is called today.

I wanted to keep the changes to the existing 'nvdimm_flush' function minimal,
because it does not return an error status that fsync could propagate. So I
needed to differentiate between 'fake DAX' & 'NVDIMM' at the time of calling
'flush'; otherwise I would have to change 'nvdimm_flush' to return zero for
all callers.

Looks like I am already doing this; will change as suggested.
 
> 
> > +
> > +	return 0;
> > +}
> >  EXPORT_SYMBOL_GPL(nvdimm_flush);
> >  
> >  /**
> > diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
> > index 097072c..33b617f 100644
> > --- a/include/linux/libnvdimm.h
> > +++ b/include/linux/libnvdimm.h
> > @@ -126,6 +126,7 @@ struct nd_region_desc {
> >  	int numa_node;
> >  	unsigned long flags;
> >  	struct device_node *of_node;
> > +	int (*flush)(struct device *dev);
> >  };
> >  
> >  struct device;
> > @@ -201,7 +202,9 @@ unsigned long nd_blk_memremap_flags(struct
> > nd_blk_region *ndbr);
> >  unsigned int nd_region_acquire_lane(struct nd_region *nd_region);
> >  void nd_region_release_lane(struct nd_region *nd_region, unsigned int
> >  lane);
> >  u64 nd_fletcher64(void *addr, size_t len, bool le);
> > -void nvdimm_flush(struct nd_region *nd_region);
> > +int nvdimm_flush(struct nd_region *nd_region);
> > +void pmem_set_flush(struct nd_region *nd_region, void (*flush)
> > +					(struct device *));
> 
> It seems pmem_set_flush() doesn't exist.

Sorry! Will remove it.
> 
> >  int nvdimm_has_flush(struct nd_region *nd_region);
> >  int nvdimm_has_cache(struct nd_region *nd_region);
> >  
> 
> 

Thanks,
Pankaj


* Re: [Qemu-devel] [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
  2018-07-13 20:38       ` Luiz Capitulino
@ 2018-07-16 11:46         ` Pankaj Gupta
       [not found]           ` <633297685.51039804.1531741590092.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-16 11:46 UTC (permalink / raw)
  To: Luiz Capitulino
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA



> 
> > This patch adds virtio-pmem driver for KVM guest.
> > 
> > Guest reads the persistent memory range information from Qemu over
> > VIRTIO and registers it on nvdimm_bus. It also creates a nd_region
> > object with the persistent memory range information so that existing
> > 'nvdimm/pmem' driver can reserve this into system memory map. This way
> > 'virtio-pmem' driver uses existing functionality of pmem driver to
> > register persistent memory compatible for DAX capable filesystems.
> > 
> > This also provides function to perform guest flush over VIRTIO from
> > 'pmem' driver when userspace performs flush on DAX memory range.
> > 
> > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > ---
> >  drivers/virtio/Kconfig           |   9 ++
> >  drivers/virtio/Makefile          |   1 +
> >  drivers/virtio/virtio_pmem.c     | 190
> >  +++++++++++++++++++++++++++++++++++++++
> >  include/linux/virtio_pmem.h      |  44 +++++++++
> >  include/uapi/linux/virtio_ids.h  |   1 +
> >  include/uapi/linux/virtio_pmem.h |  40 +++++++++
> >  6 files changed, 285 insertions(+)
> >  create mode 100644 drivers/virtio/virtio_pmem.c
> >  create mode 100644 include/linux/virtio_pmem.h
> >  create mode 100644 include/uapi/linux/virtio_pmem.h
> > 
> > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > index 3589764..a331e23 100644
> > --- a/drivers/virtio/Kconfig
> > +++ b/drivers/virtio/Kconfig
> > @@ -42,6 +42,15 @@ config VIRTIO_PCI_LEGACY
> >  
> >  	  If unsure, say Y.
> >  
> > +config VIRTIO_PMEM
> > +	tristate "Support for virtio pmem driver"
> > +	depends on VIRTIO
> > +	help
> > +	This driver provides support for virtio based flushing interface
> > +	for persistent memory range.
> > +
> > +	If unsure, say M.
> > +
> >  config VIRTIO_BALLOON
> >  	tristate "Virtio balloon driver"
> >  	depends on VIRTIO
> > diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> > index 3a2b5c5..cbe91c6 100644
> > --- a/drivers/virtio/Makefile
> > +++ b/drivers/virtio/Makefile
> > @@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
> >  virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
> >  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> >  obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
> > +obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
> > diff --git a/drivers/virtio/virtio_pmem.c b/drivers/virtio/virtio_pmem.c
> > new file mode 100644
> > index 0000000..6200b5e
> > --- /dev/null
> > +++ b/drivers/virtio/virtio_pmem.c
> > @@ -0,0 +1,190 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * virtio_pmem.c: Virtio pmem Driver
> > + *
> > + * Discovers persistent memory range information
> > + * from host and provides a virtio based flushing
> > + * interface.
> > + */
> > +#include <linux/virtio.h>
> > +#include <linux/module.h>
> > +#include <linux/virtio_pmem.h>
> > +
> > +static struct virtio_device_id id_table[] = {
> > +	{ VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID },
> > +	{ 0 },
> > +};
> > +
> > + /* The interrupt handler */
> > +static void host_ack(struct virtqueue *vq)
> > +{
> > +	unsigned int len;
> > +	unsigned long flags;
> > +	struct virtio_pmem_request *req;
> > +	struct virtio_pmem *vpmem = vq->vdev->priv;
> > +
> > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > +	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
> > +		req->done = true;
> > +		wake_up(&req->acked);
> > +	}
> > +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> 
> Honest question: why do you need to disable interrupts here?

To avoid the VQ interrupt handler trying to take the same spinlock already held in
process context, which would result in a deadlock. It looks like interrupts are already
disabled in the interrupt path, see the call chain in [1], but the irqsave variant still
protects against any future changes.

[1]
   vp_interrupt
       vp_vring_interrupt
           vring_interrupt
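
The hazard the irqsave variant guards against can be sketched as follows (illustrative
kernel-style pseudocode, not part of the patch): if the process context used a plain
spin_lock(), an interrupt delivered on the same CPU could self-deadlock.

```c
/* Illustrative only -- this is the hazard, not the patch's code. */

/* process context (virtio_pmem_flush) */
spin_lock(&vpmem->pmem_lock);        /* lock taken, IRQs still enabled */
	/* ... virtqueue_add_sgs() ... */
	/* <-- vring interrupt fires HERE, on the same CPU */

/* interrupt context (host_ack) */
spin_lock(&vpmem->pmem_lock);        /* spins forever: the lock holder is
                                      * the very context we interrupted,
                                      * so it can never release the lock */

/* spin_lock_irqsave() in the process context closes this window by
 * preventing the interrupt from being delivered while the lock is held. */
```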
> 
> > +}
> > + /* Initialize virt queue */
> > +static int init_vq(struct virtio_pmem *vpmem)
> > +{
> > +	struct virtqueue *vq;
> > +
> > +	/* single vq */
> > +	vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
> > +				host_ack, "flush_queue");
> > +	if (IS_ERR(vq))
> > +		return PTR_ERR(vq);
> > +	spin_lock_init(&vpmem->pmem_lock);
> > +
> > +	return 0;
> > +};
> > +
> > + /* The request submission function */
> > +static int virtio_pmem_flush(struct device *dev)
> > +{
> > +	int err;
> > +	unsigned long flags;
> > +	struct scatterlist *sgs[2], sg, ret;
> > +	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
> > +	struct virtio_pmem *vpmem = vdev->priv;
> > +	struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);
> 
> Not checking kmalloc() return.

Will add it.
> 
> > +
> > +	req->done = false;
> > +	init_waitqueue_head(&req->acked);
> > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> 
> Why do you need spin_lock_irqsave()? There are two points consider:
> 
> 1. Will virtio_pmem_flush() ever be called with interrupts disabled?
>    If yes, then it's broken since you should be using GFP_ATOMIC in the
>    kmalloc() call and you can't call wait_event()

Yes, GFP_ATOMIC would be the right thing.

> 
> 2. If virtio_pmem_flush() is never called with interrupts disabled, do
>    you really need to disable interrupts? If yes, why?

Same reason as discussed above: data is shared between the interrupt handler and the
process-context 'virtio_pmem_flush' function, so interrupts must be disabled to avoid
a deadlock between interrupt context and process context on the same spinlock.

> 
> Another point to consider is whether or not virtio_pmem_flush()
> can be called from atomic context. nvdimm_flush() itself is called
> from a few atomic sites, but I can't tell if virtio_pmem_flush()
> will ever be called from those sites. If it can be called atomic
> context, then item 1 applies here. If you're sure it can't, then
> you should probably call might_sleep().

I think 'virtio_pmem_flush' can be called from atomic context.

Thanks,
Pankaj

> 
> > +
> > +	sg_init_one(&sg, req, sizeof(req));
> > +	sgs[0] = &sg;
> > +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > +	sgs[1] = &ret;
> > +	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> > +	if (err) {
> > +		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
> > +		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > +		return -ENOSPC;
> > +	}
> > +	virtqueue_kick(vpmem->req_vq);
> > +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > +
> > +	/* When host has read buffer, this completes via host_ack */
> > +	wait_event(req->acked, req->done);
> > +	err = req->ret;
> > +	kfree(req);
> > +
> > +	return err;
> > +};
> > +
> > +static int virtio_pmem_probe(struct virtio_device *vdev)
> > +{
> > +	int err = 0;
> > +	struct resource res;
> > +	struct virtio_pmem *vpmem;
> > +	struct nvdimm_bus *nvdimm_bus;
> > +	struct nd_region_desc ndr_desc;
> > +	int nid = dev_to_node(&vdev->dev);
> > +	struct nd_region *nd_region;
> > +
> > +	if (!vdev->config->get) {
> > +		dev_err(&vdev->dev, "%s failure: config disabled\n",
> > +			__func__);
> > +		return -EINVAL;
> > +	}
> > +
> > +	vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
> > +			GFP_KERNEL);
> > +	if (!vpmem) {
> > +		err = -ENOMEM;
> > +		goto out_err;
> > +	}
> > +
> > +	vpmem->vdev = vdev;
> > +	err = init_vq(vpmem);
> > +	if (err)
> > +		goto out_err;
> > +
> > +	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > +			start, &vpmem->start);
> > +	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > +			size, &vpmem->size);
> > +
> > +	res.start = vpmem->start;
> > +	res.end   = vpmem->start + vpmem->size-1;
> > +	vpmem->nd_desc.provider_name = "virtio-pmem";
> > +	vpmem->nd_desc.module = THIS_MODULE;
> > +
> > +	vpmem->nvdimm_bus = nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> > +						&vpmem->nd_desc);
> > +	if (!nvdimm_bus)
> > +		goto out_vq;
> > +
> > +	dev_set_drvdata(&vdev->dev, nvdimm_bus);
> > +	memset(&ndr_desc, 0, sizeof(ndr_desc));
> > +
> > +	ndr_desc.res = &res;
> > +	ndr_desc.numa_node = nid;
> > +	ndr_desc.flush = virtio_pmem_flush;
> > +	set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> > +	nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
> > +
> > +	if (!nd_region)
> > +		goto out_nd;
> > +
> > +	virtio_device_ready(vdev);
> > +	return 0;
> > +out_nd:
> > +	err = -ENXIO;
> > +	nvdimm_bus_unregister(nvdimm_bus);
> > +out_vq:
> > +	vdev->config->del_vqs(vdev);
> > +out_err:
> > +	dev_err(&vdev->dev, "failed to register virtio pmem memory\n");
> > +	return err;
> > +}
> > +
> > +static void virtio_pmem_remove(struct virtio_device *vdev)
> > +{
> > +	struct virtio_pmem *vpmem = vdev->priv;
> > +	struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
> > +
> > +	nvdimm_bus_unregister(nvdimm_bus);
> > +	vdev->config->del_vqs(vdev);
> > +	kfree(vpmem);
> > +}
> > +
> > +#ifdef CONFIG_PM_SLEEP
> > +static int virtio_pmem_freeze(struct virtio_device *vdev)
> > +{
> > +	/* todo: handle freeze function */
> > +	return -EPERM;
> > +}
> > +
> > +static int virtio_pmem_restore(struct virtio_device *vdev)
> > +{
> > +	/* todo: handle restore function */
> > +	return -EPERM;
> > +}
> > +#endif
> > +
> > +
> > +static struct virtio_driver virtio_pmem_driver = {
> > +	.driver.name		= KBUILD_MODNAME,
> > +	.driver.owner		= THIS_MODULE,
> > +	.id_table		= id_table,
> > +	.probe			= virtio_pmem_probe,
> > +	.remove			= virtio_pmem_remove,
> > +#ifdef CONFIG_PM_SLEEP
> > +	.freeze                 = virtio_pmem_freeze,
> > +	.restore                = virtio_pmem_restore,
> > +#endif
> > +};
> > +
> > +module_virtio_driver(virtio_pmem_driver);
> > +MODULE_DEVICE_TABLE(virtio, id_table);
> > +MODULE_DESCRIPTION("Virtio pmem driver");
> > +MODULE_LICENSE("GPL");
> > diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
> > new file mode 100644
> > index 0000000..0f83d9c
> > --- /dev/null
> > +++ b/include/linux/virtio_pmem.h
> > @@ -0,0 +1,44 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * virtio_pmem.h: virtio pmem Driver
> > + *
> > + * Discovers persistent memory range information
> > + * from host and provides a virtio based flushing
> > + * interface.
> > + */
> > +#ifndef _LINUX_VIRTIO_PMEM_H
> > +#define _LINUX_VIRTIO_PMEM_H
> > +
> > +#include <linux/virtio_ids.h>
> > +#include <linux/virtio_config.h>
> > +#include <uapi/linux/virtio_pmem.h>
> > +#include <linux/libnvdimm.h>
> > +#include <linux/spinlock.h>
> > +
> > +struct virtio_pmem_request {
> > +	/* Host return status corresponding to flush request */
> > +	int ret;
> > +
> > +	/* Wait queue to process deferred work after ack from host */
> > +	wait_queue_head_t acked;
> > +	bool done;
> > +};
> > +
> > +struct virtio_pmem {
> > +	struct virtio_device *vdev;
> > +
> > +	/* Virtio pmem request queue */
> > +	struct virtqueue *req_vq;
> > +
> > +	/* nvdimm bus registers virtio pmem device */
> > +	struct nvdimm_bus *nvdimm_bus;
> > +	struct nvdimm_bus_descriptor nd_desc;
> > +
> > +	/* Synchronize virtqueue data */
> > +	spinlock_t pmem_lock;
> > +
> > +	/* Memory region information */
> > +	uint64_t start;
> > +	uint64_t size;
> > +};
> > +#endif
> > diff --git a/include/uapi/linux/virtio_ids.h
> > b/include/uapi/linux/virtio_ids.h
> > index 6d5c3b2..3463895 100644
> > --- a/include/uapi/linux/virtio_ids.h
> > +++ b/include/uapi/linux/virtio_ids.h
> > @@ -43,5 +43,6 @@
> >  #define VIRTIO_ID_INPUT        18 /* virtio input */
> >  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
> >  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> > +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
> >  
> >  #endif /* _LINUX_VIRTIO_IDS_H */
> > diff --git a/include/uapi/linux/virtio_pmem.h
> > b/include/uapi/linux/virtio_pmem.h
> > new file mode 100644
> > index 0000000..c7c22a5
> > --- /dev/null
> > +++ b/include/uapi/linux/virtio_pmem.h
> > @@ -0,0 +1,40 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
> > + * anyone can use the definitions to implement compatible drivers/servers:
> > + *
> > + *
> > + * Redistribution and use in source and binary forms, with or without
> > + * modification, are permitted provided that the following conditions
> > + * are met:
> > + * 1. Redistributions of source code must retain the above copyright
> > + *    notice, this list of conditions and the following disclaimer.
> > + * 2. Redistributions in binary form must reproduce the above copyright
> > + *    notice, this list of conditions and the following disclaimer in the
> > + *    documentation and/or other materials provided with the distribution.
> > + * 3. Neither the name of IBM nor the names of its contributors
> > + *    may be used to endorse or promote products derived from this
> > software
> > + *    without specific prior written permission.
> > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > ``AS IS''
> > + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
> > THE
> > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> > PURPOSE
> > + * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> > CONSEQUENTIAL
> > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> > STRICT
> > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
> > WAY
> > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> > + * SUCH DAMAGE.
> > + *
> > + * Copyright (C) Red Hat, Inc., 2018-2019
> > + * Copyright (C) Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, 2018
> > + */
> > +#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
> > +#define _UAPI_LINUX_VIRTIO_PMEM_H
> > +
> > +struct virtio_pmem_config {
> > +	__le64 start;
> > +	__le64 size;
> > +};
> > +#endif
> 
> 
> 


* Re: [Qemu-devel] [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
       [not found]           ` <633297685.51039804.1531741590092.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-16 14:03             ` Luiz Capitulino
  2018-07-16 15:11               ` Pankaj Gupta
  0 siblings, 1 reply; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-16 14:03 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA

On Mon, 16 Jul 2018 07:46:30 -0400 (EDT)
Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> >   
> > > This patch adds virtio-pmem driver for KVM guest.
> > > 
> > > Guest reads the persistent memory range information from Qemu over
> > > VIRTIO and registers it on nvdimm_bus. It also creates a nd_region
> > > object with the persistent memory range information so that existing
> > > 'nvdimm/pmem' driver can reserve this into system memory map. This way
> > > 'virtio-pmem' driver uses existing functionality of pmem driver to
> > > register persistent memory compatible for DAX capable filesystems.
> > > 
> > > This also provides function to perform guest flush over VIRTIO from
> > > 'pmem' driver when userspace performs flush on DAX memory range.
> > > 
> > > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > ---
> > >  drivers/virtio/Kconfig           |   9 ++
> > >  drivers/virtio/Makefile          |   1 +
> > >  drivers/virtio/virtio_pmem.c     | 190
> > >  +++++++++++++++++++++++++++++++++++++++
> > >  include/linux/virtio_pmem.h      |  44 +++++++++
> > >  include/uapi/linux/virtio_ids.h  |   1 +
> > >  include/uapi/linux/virtio_pmem.h |  40 +++++++++
> > >  6 files changed, 285 insertions(+)
> > >  create mode 100644 drivers/virtio/virtio_pmem.c
> > >  create mode 100644 include/linux/virtio_pmem.h
> > >  create mode 100644 include/uapi/linux/virtio_pmem.h
> > > 
> > > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > > index 3589764..a331e23 100644
> > > --- a/drivers/virtio/Kconfig
> > > +++ b/drivers/virtio/Kconfig
> > > @@ -42,6 +42,15 @@ config VIRTIO_PCI_LEGACY
> > >  
> > >  	  If unsure, say Y.
> > >  
> > > +config VIRTIO_PMEM
> > > +	tristate "Support for virtio pmem driver"
> > > +	depends on VIRTIO
> > > +	help
> > > +	This driver provides support for virtio based flushing interface
> > > +	for persistent memory range.
> > > +
> > > +	If unsure, say M.
> > > +
> > >  config VIRTIO_BALLOON
> > >  	tristate "Virtio balloon driver"
> > >  	depends on VIRTIO
> > > diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> > > index 3a2b5c5..cbe91c6 100644
> > > --- a/drivers/virtio/Makefile
> > > +++ b/drivers/virtio/Makefile
> > > @@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
> > >  virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
> > >  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> > >  obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
> > > +obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
> > > diff --git a/drivers/virtio/virtio_pmem.c b/drivers/virtio/virtio_pmem.c
> > > new file mode 100644
> > > index 0000000..6200b5e
> > > --- /dev/null
> > > +++ b/drivers/virtio/virtio_pmem.c
> > > @@ -0,0 +1,190 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * virtio_pmem.c: Virtio pmem Driver
> > > + *
> > > + * Discovers persistent memory range information
> > > + * from host and provides a virtio based flushing
> > > + * interface.
> > > + */
> > > +#include <linux/virtio.h>
> > > +#include <linux/module.h>
> > > +#include <linux/virtio_pmem.h>
> > > +
> > > +static struct virtio_device_id id_table[] = {
> > > +	{ VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID },
> > > +	{ 0 },
> > > +};
> > > +
> > > + /* The interrupt handler */
> > > +static void host_ack(struct virtqueue *vq)
> > > +{
> > > +	unsigned int len;
> > > +	unsigned long flags;
> > > +	struct virtio_pmem_request *req;
> > > +	struct virtio_pmem *vpmem = vq->vdev->priv;
> > > +
> > > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > +	while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
> > > +		req->done = true;
> > > +		wake_up(&req->acked);
> > > +	}
> > > +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);  
> > 
> > Honest question: why do you need to disable interrupts here?  
> 
> To avoid interrupt for VQ trying to take same spinlock already taken by process 
> context and resulting in deadlock. Looks like interrupts are already disabled in 
> function call, see [1]. But still to protect with any future work. 
> 
> [1]
>    vp_interrupt
>        vp_vring_interrupt
>            vring_interrupt

I think you're right, and I think I may have caused some confusion. See
below.

> >   
> > > +}
> > > + /* Initialize virt queue */
> > > +static int init_vq(struct virtio_pmem *vpmem)
> > > +{
> > > +	struct virtqueue *vq;
> > > +
> > > +	/* single vq */
> > > +	vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
> > > +				host_ack, "flush_queue");
> > > +	if (IS_ERR(vq))
> > > +		return PTR_ERR(vq);
> > > +	spin_lock_init(&vpmem->pmem_lock);
> > > +
> > > +	return 0;
> > > +};
> > > +
> > > + /* The request submission function */
> > > +static int virtio_pmem_flush(struct device *dev)
> > > +{
> > > +	int err;
> > > +	unsigned long flags;
> > > +	struct scatterlist *sgs[2], sg, ret;
> > > +	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
> > > +	struct virtio_pmem *vpmem = vdev->priv;
> > > +	struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);  
> > 
> > Not checking kmalloc() return.  
> 
> Will add it.
> >   
> > > +
> > > +	req->done = false;
> > > +	init_waitqueue_head(&req->acked);
> > > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);  
> > 
> > Why do you need spin_lock_irqsave()? There are two points consider:
> > 
> > 1. Will virtio_pmem_flush() ever be called with interrupts disabled?
> >    If yes, then it's broken since you should be using GFP_ATOMIC in the
> >    kmalloc() call and you can't call wait_event()  
> 
> Yes, GFP_ATOMIC should be right thing.
> 
> > 
> > 2. If virtio_pmem_flush() is never called with interrupts disabled, do
> >    you really need to disable interrupts? If yes, why?  
> 
> Same reason as discussed above. Data is shared between interrupt handler
> and process context 'virtio-pmem_flush' function. To avoid a deadlock resulting 
> between interrupt context and process context on same spinlock.
> 
> > 
> > Another point to consider is whether or not virtio_pmem_flush()
> > can be called from atomic context. nvdimm_flush() itself is called
> > from a few atomic sites, but I can't tell if virtio_pmem_flush()
> > will ever be called from those sites. If it can be called atomic
> > context, then item 1 applies here. If you're sure it can't, then
> > you should probably call might_sleep().  
> 
> I think 'virtio_pmem_flush' can be called from atomic context.

If you're certain of this, then everything I said in my previous
email should be correct (i.e. GFP_ATOMIC in kmalloc() and the fact
that you can't sleep).

Now, if for some reason virtio_pmem_flush() is not called from
atomic context (say, because it's never called from the ACPI code),
then my review was wrong and I think your code is correct, since
you're disabling irqs to protect the virtqueue against the interrupt
handler. In this case you can sleep outside the atomic context.
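
Putting the two review points together, a hypothetical reworked prologue for
virtio_pmem_flush() under the atomic-context assumption would look like this
(a sketch, not the submitted code):

```c
static int virtio_pmem_flush(struct device *dev)
{
	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
	struct virtio_pmem *vpmem = vdev->priv;
	struct virtio_pmem_request *req;
	unsigned long flags;

	/* GFP_ATOMIC: if callers may hold locks or run with interrupts
	 * disabled, the allocation must not sleep. */
	req = kmalloc(sizeof(*req), GFP_ATOMIC);
	if (!req)		/* the missing NULL check from v3 */
		return -ENOMEM;

	req->done = false;
	init_waitqueue_head(&req->acked);
	spin_lock_irqsave(&vpmem->pmem_lock, flags);
	/* ... queue the request and kick as before ... */

	/* Note: the wait_event() later in this function is still not
	 * legal from atomic context; that question remains open in
	 * this thread. */
```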

> 
> Thanks,
> Pankaj
> 
> >   
> > > +
> > > +	sg_init_one(&sg, req, sizeof(req));
> > > +	sgs[0] = &sg;
> > > +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > > +	sgs[1] = &ret;
> > > +	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> > > +	if (err) {
> > > +		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
> > > +		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > +		return -ENOSPC;
> > > +	}
> > > +	virtqueue_kick(vpmem->req_vq);
> > > +	spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > +
> > > +	/* When host has read buffer, this completes via host_ack */
> > > +	wait_event(req->acked, req->done);
> > > +	err = req->ret;
> > > +	kfree(req);
> > > +
> > > +	return err;
> > > +};
> > > +
> > > +static int virtio_pmem_probe(struct virtio_device *vdev)
> > > +{
> > > +	int err = 0;
> > > +	struct resource res;
> > > +	struct virtio_pmem *vpmem;
> > > +	struct nvdimm_bus *nvdimm_bus;
> > > +	struct nd_region_desc ndr_desc;
> > > +	int nid = dev_to_node(&vdev->dev);
> > > +	struct nd_region *nd_region;
> > > +
> > > +	if (!vdev->config->get) {
> > > +		dev_err(&vdev->dev, "%s failure: config disabled\n",
> > > +			__func__);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
> > > +			GFP_KERNEL);
> > > +	if (!vpmem) {
> > > +		err = -ENOMEM;
> > > +		goto out_err;
> > > +	}
> > > +
> > > +	vpmem->vdev = vdev;
> > > +	err = init_vq(vpmem);
> > > +	if (err)
> > > +		goto out_err;
> > > +
> > > +	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > +			start, &vpmem->start);
> > > +	virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > +			size, &vpmem->size);
> > > +
> > > +	res.start = vpmem->start;
> > > +	res.end   = vpmem->start + vpmem->size-1;
> > > +	vpmem->nd_desc.provider_name = "virtio-pmem";
> > > +	vpmem->nd_desc.module = THIS_MODULE;
> > > +
> > > +	vpmem->nvdimm_bus = nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> > > +						&vpmem->nd_desc);
> > > +	if (!nvdimm_bus)
> > > +		goto out_vq;
> > > +
> > > +	dev_set_drvdata(&vdev->dev, nvdimm_bus);
> > > +	memset(&ndr_desc, 0, sizeof(ndr_desc));
> > > +
> > > +	ndr_desc.res = &res;
> > > +	ndr_desc.numa_node = nid;
> > > +	ndr_desc.flush = virtio_pmem_flush;
> > > +	set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> > > +	nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
> > > +
> > > +	if (!nd_region)
> > > +		goto out_nd;
> > > +
> > > +	virtio_device_ready(vdev);
> > > +	return 0;
> > > +out_nd:
> > > +	err = -ENXIO;
> > > +	nvdimm_bus_unregister(nvdimm_bus);
> > > +out_vq:
> > > +	vdev->config->del_vqs(vdev);
> > > +out_err:
> > > +	dev_err(&vdev->dev, "failed to register virtio pmem memory\n");
> > > +	return err;
> > > +}
> > > +
> > > +static void virtio_pmem_remove(struct virtio_device *vdev)
> > > +{
> > > +	struct virtio_pmem *vpmem = vdev->priv;
> > > +	struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
> > > +
> > > +	nvdimm_bus_unregister(nvdimm_bus);
> > > +	vdev->config->del_vqs(vdev);
> > > +	kfree(vpmem);
> > > +}
> > > +
> > > +#ifdef CONFIG_PM_SLEEP
> > > +static int virtio_pmem_freeze(struct virtio_device *vdev)
> > > +{
> > > +	/* todo: handle freeze function */
> > > +	return -EPERM;
> > > +}
> > > +
> > > +static int virtio_pmem_restore(struct virtio_device *vdev)
> > > +{
> > > +	/* todo: handle restore function */
> > > +	return -EPERM;
> > > +}
> > > +#endif
> > > +
> > > +
> > > +static struct virtio_driver virtio_pmem_driver = {
> > > +	.driver.name		= KBUILD_MODNAME,
> > > +	.driver.owner		= THIS_MODULE,
> > > +	.id_table		= id_table,
> > > +	.probe			= virtio_pmem_probe,
> > > +	.remove			= virtio_pmem_remove,
> > > +#ifdef CONFIG_PM_SLEEP
> > > +	.freeze                 = virtio_pmem_freeze,
> > > +	.restore                = virtio_pmem_restore,
> > > +#endif
> > > +};
> > > +
> > > +module_virtio_driver(virtio_pmem_driver);
> > > +MODULE_DEVICE_TABLE(virtio, id_table);
> > > +MODULE_DESCRIPTION("Virtio pmem driver");
> > > +MODULE_LICENSE("GPL");
> > > diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
> > > new file mode 100644
> > > index 0000000..0f83d9c
> > > --- /dev/null
> > > +++ b/include/linux/virtio_pmem.h
> > > @@ -0,0 +1,44 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * virtio_pmem.h: virtio pmem Driver
> > > + *
> > > + * Discovers persistent memory range information
> > > + * from host and provides a virtio based flushing
> > > + * interface.
> > > + */
> > > +#ifndef _LINUX_VIRTIO_PMEM_H
> > > +#define _LINUX_VIRTIO_PMEM_H
> > > +
> > > +#include <linux/virtio_ids.h>
> > > +#include <linux/virtio_config.h>
> > > +#include <uapi/linux/virtio_pmem.h>
> > > +#include <linux/libnvdimm.h>
> > > +#include <linux/spinlock.h>
> > > +
> > > +struct virtio_pmem_request {
> > > +	/* Host return status corresponding to flush request */
> > > +	int ret;
> > > +
> > > +	/* Wait queue to process deferred work after ack from host */
> > > +	wait_queue_head_t acked;
> > > +	bool done;
> > > +};
> > > +
> > > +struct virtio_pmem {
> > > +	struct virtio_device *vdev;
> > > +
> > > +	/* Virtio pmem request queue */
> > > +	struct virtqueue *req_vq;
> > > +
> > > +	/* nvdimm bus registers virtio pmem device */
> > > +	struct nvdimm_bus *nvdimm_bus;
> > > +	struct nvdimm_bus_descriptor nd_desc;
> > > +
> > > +	/* Synchronize virtqueue data */
> > > +	spinlock_t pmem_lock;
> > > +
> > > +	/* Memory region information */
> > > +	uint64_t start;
> > > +	uint64_t size;
> > > +};
> > > +#endif
> > > diff --git a/include/uapi/linux/virtio_ids.h
> > > b/include/uapi/linux/virtio_ids.h
> > > index 6d5c3b2..3463895 100644
> > > --- a/include/uapi/linux/virtio_ids.h
> > > +++ b/include/uapi/linux/virtio_ids.h
> > > @@ -43,5 +43,6 @@
> > >  #define VIRTIO_ID_INPUT        18 /* virtio input */
> > >  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
> > >  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> > > +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
> > >  
> > >  #endif /* _LINUX_VIRTIO_IDS_H */
> > > diff --git a/include/uapi/linux/virtio_pmem.h
> > > b/include/uapi/linux/virtio_pmem.h
> > > new file mode 100644
> > > index 0000000..c7c22a5
> > > --- /dev/null
> > > +++ b/include/uapi/linux/virtio_pmem.h
> > > @@ -0,0 +1,40 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * This header, excluding the #ifdef __KERNEL__ part, is BSD licensed so
> > > + * anyone can use the definitions to implement compatible drivers/servers:
> > > + *
> > > + *
> > > + * Redistribution and use in source and binary forms, with or without
> > > + * modification, are permitted provided that the following conditions
> > > + * are met:
> > > + * 1. Redistributions of source code must retain the above copyright
> > > + *    notice, this list of conditions and the following disclaimer.
> > > + * 2. Redistributions in binary form must reproduce the above copyright
> > > + *    notice, this list of conditions and the following disclaimer in the
> > > + *    documentation and/or other materials provided with the distribution.
> > > + * 3. Neither the name of IBM nor the names of its contributors
> > > + *    may be used to endorse or promote products derived from this
> > > software
> > > + *    without specific prior written permission.
> > > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > > ``AS IS''
> > > + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
> > > THE
> > > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> > > PURPOSE
> > > + * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> > > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> > > CONSEQUENTIAL
> > > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> > > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> > > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> > > STRICT
> > > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
> > > WAY
> > > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> > > + * SUCH DAMAGE.
> > > + *
> > > + * Copyright (C) Red Hat, Inc., 2018-2019
> > > + * Copyright (C) Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, 2018
> > > + */
> > > +#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
> > > +#define _UAPI_LINUX_VIRTIO_PMEM_H
> > > +
> > > +struct virtio_pmem_config {
> > > +	__le64 start;
> > > +	__le64 size;
> > > +};
> > > +#endif  
> > 
> > 
> >   
> 


* Re: [Qemu-devel] [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
  2018-07-16 14:03             ` Luiz Capitulino
@ 2018-07-16 15:11               ` Pankaj Gupta
  0 siblings, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-16 15:11 UTC (permalink / raw)
  To: Luiz Capitulino
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA


> 
> > >   
> > > > This patch adds virtio-pmem driver for KVM guest.
> > > > 
> > > > Guest reads the persistent memory range information from Qemu over
> > > > VIRTIO and registers it on nvdimm_bus. It also creates a nd_region
> > > > object with the persistent memory range information so that existing
> > > > 'nvdimm/pmem' driver can reserve this into system memory map. This way
> > > > 'virtio-pmem' driver uses existing functionality of pmem driver to
> > > > register persistent memory compatible for DAX capable filesystems.
> > > > 
> > > > This also provides function to perform guest flush over VIRTIO from
> > > > 'pmem' driver when userspace performs flush on DAX memory range.
> > > > 
> > > > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > ---
> > > >  drivers/virtio/Kconfig           |   9 ++
> > > >  drivers/virtio/Makefile          |   1 +
> > > >  drivers/virtio/virtio_pmem.c     | 190
> > > >  +++++++++++++++++++++++++++++++++++++++
> > > >  include/linux/virtio_pmem.h      |  44 +++++++++
> > > >  include/uapi/linux/virtio_ids.h  |   1 +
> > > >  include/uapi/linux/virtio_pmem.h |  40 +++++++++
> > > >  6 files changed, 285 insertions(+)
> > > >  create mode 100644 drivers/virtio/virtio_pmem.c
> > > >  create mode 100644 include/linux/virtio_pmem.h
> > > >  create mode 100644 include/uapi/linux/virtio_pmem.h
> > > > 
> > > > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > > > index 3589764..a331e23 100644
> > > > --- a/drivers/virtio/Kconfig
> > > > +++ b/drivers/virtio/Kconfig
> > > > @@ -42,6 +42,15 @@ config VIRTIO_PCI_LEGACY
> > > >  
> > > >            If unsure, say Y.
> > > >  
> > > > +config VIRTIO_PMEM
> > > > +        tristate "Support for virtio pmem driver"
> > > > +        depends on VIRTIO
> > > > +        help
> > > > +        This driver provides support for virtio based flushing interface
> > > > +        for persistent memory range.
> > > > +
> > > > +        If unsure, say M.
> > > > +
> > > >  config VIRTIO_BALLOON
> > > >          tristate "Virtio balloon driver"
> > > >          depends on VIRTIO
> > > > diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
> > > > index 3a2b5c5..cbe91c6 100644
> > > > --- a/drivers/virtio/Makefile
> > > > +++ b/drivers/virtio/Makefile
> > > > @@ -6,3 +6,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
> > > >  virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
> > > >  obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
> > > >  obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
> > > > +obj-$(CONFIG_VIRTIO_PMEM) += virtio_pmem.o
> > > > diff --git a/drivers/virtio/virtio_pmem.c
> > > > b/drivers/virtio/virtio_pmem.c
> > > > new file mode 100644
> > > > index 0000000..6200b5e
> > > > --- /dev/null
> > > > +++ b/drivers/virtio/virtio_pmem.c
> > > > @@ -0,0 +1,190 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +/*
> > > > + * virtio_pmem.c: Virtio pmem Driver
> > > > + *
> > > > + * Discovers persistent memory range information
> > > > + * from host and provides a virtio based flushing
> > > > + * interface.
> > > > + */
> > > > +#include <linux/virtio.h>
> > > > +#include <linux/module.h>
> > > > +#include <linux/virtio_pmem.h>
> > > > +
> > > > +static struct virtio_device_id id_table[] = {
> > > > +        { VIRTIO_ID_PMEM, VIRTIO_DEV_ANY_ID },
> > > > +        { 0 },
> > > > +};
> > > > +
> > > > + /* The interrupt handler */
> > > > +static void host_ack(struct virtqueue *vq)
> > > > +{
> > > > +        unsigned int len;
> > > > +        unsigned long flags;
> > > > +        struct virtio_pmem_request *req;
> > > > +        struct virtio_pmem *vpmem = vq->vdev->priv;
> > > > +
> > > > +        spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > > +        while ((req = virtqueue_get_buf(vq, &len)) != NULL) {
> > > > +                req->done = true;
> > > > +                wake_up(&req->acked);
> > > > +        }
> > > > +        spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > 
> > > Honest question: why do you need to disable interrupts here?
> > 
> > To avoid the VQ interrupt handler trying to take the same spinlock already
> > held by process context, which would result in a deadlock. It looks like
> > interrupts are already disabled in the call path, see [1], but this still
> > protects against any future changes.
> > 
> > [1]
> >    vp_interrupt
> >        vp_vring_interrupt
> >            vring_interrupt
> 
> I think you're right, and I think I may have caused some confusion. See
> below.
> 
> > >   
> > > > +}
> > > > + /* Initialize virt queue */
> > > > +static int init_vq(struct virtio_pmem *vpmem)
> > > > +{
> > > > +        struct virtqueue *vq;
> > > > +
> > > > +        /* single vq */
> > > > +        vpmem->req_vq = vq = virtio_find_single_vq(vpmem->vdev,
> > > > +                                host_ack, "flush_queue");
> > > > +        if (IS_ERR(vq))
> > > > +                return PTR_ERR(vq);
> > > > +        spin_lock_init(&vpmem->pmem_lock);
> > > > +
> > > > +        return 0;
> > > > +};
> > > > +
> > > > + /* The request submission function */
> > > > +static int virtio_pmem_flush(struct device *dev)
> > > > +{
> > > > +        int err;
> > > > +        unsigned long flags;
> > > > +        struct scatterlist *sgs[2], sg, ret;
> > > > +        struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
> > > > +        struct virtio_pmem *vpmem = vdev->priv;
> > > > +        struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);
> > > 
> > > Not checking kmalloc() return.
> > 
> > Will add it.
> > >   
> > > > +
> > > > +        req->done = false;
> > > > +        init_waitqueue_head(&req->acked);
> > > > +        spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > > 
> > > Why do you need spin_lock_irqsave()? There are two points consider:
> > > 
> > > 1. Will virtio_pmem_flush() ever be called with interrupts disabled?
> > >    If yes, then it's broken since you should be using GFP_ATOMIC in the
> > >    kmalloc() call and you can't call wait_event()
> > 
> > Yes, GFP_ATOMIC should be right thing.
> > 
> > > 
> > > 2. If virtio_pmem_flush() is never called with interrupts disabled, do
> > >    you really need to disable interrupts? If yes, why?
> > 
> > Same reason as discussed above: data is shared between the interrupt
> > handler and the process-context 'virtio_pmem_flush' function, so we
> > disable interrupts to avoid a deadlock between the two contexts on the
> > same spinlock.
> > 
> > > 
> > > Another point to consider is whether or not virtio_pmem_flush()
> > > can be called from atomic context. nvdimm_flush() itself is called
> > > from a few atomic sites, but I can't tell if virtio_pmem_flush()
> > > will ever be called from those sites. If it can be called atomic
> > > context, then item 1 applies here. If you're sure it can't, then
> > > you should probably call might_sleep().
> > 
> > I think 'virtio_pmem_flush' can be called from atomic context.
> 
> If you're certain of this, then everything I said in my previous
> email should be correct (ie. GFP_ATOMIC in kmalloc() and the fact
> that you can't sleep).

A correction:
AFAICS there is no atomic path into virtio_pmem_flush(), so it can sleep.
wait_event() already does its sleeping outside the spinlock.

> 
> Now, if for some reason virtio_pmem_flush() is not called from
> atomic context (say, because it's never called from the ACPI code),
> then my review was wrong and I think your code is correct, since
> you're disabling irqs to protect the virtqueue against the interrupt
> handler. In this case you can sleep outside the atomic context.

yes, we are sleeping outside the spinlock.

Thanks,
Pankaj

> 
> > 
> > Thanks,
> > Pankaj
> > 
> > >   
> > > > +
> > > > +        sg_init_one(&sg, req, sizeof(req));
> > > > +        sgs[0] = &sg;
> > > > +        sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > > > +        sgs[1] = &ret;
> > > > +        err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> > > > +        if (err) {
> > > > +                dev_err(&vdev->dev, "failed to send command to virtio pmem
> > > > device\n");
> > > > +                spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > > +                return -ENOSPC;
> > > > +        }
> > > > +        virtqueue_kick(vpmem->req_vq);
> > > > +        spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > > > +
> > > > +        /* When host has read buffer, this completes via host_ack */
> > > > +        wait_event(req->acked, req->done);
> > > > +        err = req->ret;
> > > > +        kfree(req);
> > > > +
> > > > +        return err;
> > > > +};
> > > > +
> > > > +static int virtio_pmem_probe(struct virtio_device *vdev)
> > > > +{
> > > > +        int err = 0;
> > > > +        struct resource res;
> > > > +        struct virtio_pmem *vpmem;
> > > > +        struct nvdimm_bus *nvdimm_bus;
> > > > +        struct nd_region_desc ndr_desc;
> > > > +        int nid = dev_to_node(&vdev->dev);
> > > > +        struct nd_region *nd_region;
> > > > +
> > > > +        if (!vdev->config->get) {
> > > > +                dev_err(&vdev->dev, "%s failure: config disabled\n",
> > > > +                        __func__);
> > > > +                return -EINVAL;
> > > > +        }
> > > > +
> > > > +        vdev->priv = vpmem = devm_kzalloc(&vdev->dev, sizeof(*vpmem),
> > > > +                        GFP_KERNEL);
> > > > +        if (!vpmem) {
> > > > +                err = -ENOMEM;
> > > > +                goto out_err;
> > > > +        }
> > > > +
> > > > +        vpmem->vdev = vdev;
> > > > +        err = init_vq(vpmem);
> > > > +        if (err)
> > > > +                goto out_err;
> > > > +
> > > > +        virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > > +                        start, &vpmem->start);
> > > > +        virtio_cread(vpmem->vdev, struct virtio_pmem_config,
> > > > +                        size, &vpmem->size);
> > > > +
> > > > +        res.start = vpmem->start;
> > > > +        res.end   = vpmem->start + vpmem->size-1;
> > > > +        vpmem->nd_desc.provider_name = "virtio-pmem";
> > > > +        vpmem->nd_desc.module = THIS_MODULE;
> > > > +
> > > > +        vpmem->nvdimm_bus = nvdimm_bus = nvdimm_bus_register(&vdev->dev,
> > > > +                                                &vpmem->nd_desc);
> > > > +        if (!nvdimm_bus)
> > > > +                goto out_vq;
> > > > +
> > > > +        dev_set_drvdata(&vdev->dev, nvdimm_bus);
> > > > +        memset(&ndr_desc, 0, sizeof(ndr_desc));
> > > > +
> > > > +        ndr_desc.res = &res;
> > > > +        ndr_desc.numa_node = nid;
> > > > +        ndr_desc.flush = virtio_pmem_flush;
> > > > +        set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
> > > > +        nd_region = nvdimm_pmem_region_create(nvdimm_bus, &ndr_desc);
> > > > +
> > > > +        if (!nd_region)
> > > > +                goto out_nd;
> > > > +
> > > > +        virtio_device_ready(vdev);
> > > > +        return 0;
> > > > +out_nd:
> > > > +        err = -ENXIO;
> > > > +        nvdimm_bus_unregister(nvdimm_bus);
> > > > +out_vq:
> > > > +        vdev->config->del_vqs(vdev);
> > > > +out_err:
> > > > +        dev_err(&vdev->dev, "failed to register virtio pmem memory\n");
> > > > +        return err;
> > > > +}
> > > > +
> > > > +static void virtio_pmem_remove(struct virtio_device *vdev)
> > > > +{
> > > > +        struct virtio_pmem *vpmem = vdev->priv;
> > > > +        struct nvdimm_bus *nvdimm_bus = dev_get_drvdata(&vdev->dev);
> > > > +
> > > > +        nvdimm_bus_unregister(nvdimm_bus);
> > > > +        vdev->config->del_vqs(vdev);
> > > > +        kfree(vpmem);
> > > > +}
> > > > +
> > > > +#ifdef CONFIG_PM_SLEEP
> > > > +static int virtio_pmem_freeze(struct virtio_device *vdev)
> > > > +{
> > > > +        /* todo: handle freeze function */
> > > > +        return -EPERM;
> > > > +}
> > > > +
> > > > +static int virtio_pmem_restore(struct virtio_device *vdev)
> > > > +{
> > > > +        /* todo: handle restore function */
> > > > +        return -EPERM;
> > > > +}
> > > > +#endif
> > > > +
> > > > +
> > > > +static struct virtio_driver virtio_pmem_driver = {
> > > > +        .driver.name                = KBUILD_MODNAME,
> > > > +        .driver.owner                = THIS_MODULE,
> > > > +        .id_table                = id_table,
> > > > +        .probe                        = virtio_pmem_probe,
> > > > +        .remove                        = virtio_pmem_remove,
> > > > +#ifdef CONFIG_PM_SLEEP
> > > > +        .freeze                 = virtio_pmem_freeze,
> > > > +        .restore                = virtio_pmem_restore,
> > > > +#endif
> > > > +};
> > > > +
> > > > +module_virtio_driver(virtio_pmem_driver);
> > > > +MODULE_DEVICE_TABLE(virtio, id_table);
> > > > +MODULE_DESCRIPTION("Virtio pmem driver");
> > > > +MODULE_LICENSE("GPL");
> > > > diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
> > > > new file mode 100644
> > > > index 0000000..0f83d9c
> > > > --- /dev/null
> > > > +++ b/include/linux/virtio_pmem.h
> > > > @@ -0,0 +1,44 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * virtio_pmem.h: virtio pmem Driver
> > > > + *
> > > > + * Discovers persistent memory range information
> > > > + * from host and provides a virtio based flushing
> > > > + * interface.
> > > > + */
> > > > +#ifndef _LINUX_VIRTIO_PMEM_H
> > > > +#define _LINUX_VIRTIO_PMEM_H
> > > > +
> > > > +#include <linux/virtio_ids.h>
> > > > +#include <linux/virtio_config.h>
> > > > +#include <uapi/linux/virtio_pmem.h>
> > > > +#include <linux/libnvdimm.h>
> > > > +#include <linux/spinlock.h>
> > > > +
> > > > +struct virtio_pmem_request {
> > > > +        /* Host return status corresponding to flush request */
> > > > +        int ret;
> > > > +
> > > > +        /* Wait queue to process deferred work after ack from host */
> > > > +        wait_queue_head_t acked;
> > > > +        bool done;
> > > > +};
> > > > +
> > > > +struct virtio_pmem {
> > > > +        struct virtio_device *vdev;
> > > > +
> > > > +        /* Virtio pmem request queue */
> > > > +        struct virtqueue *req_vq;
> > > > +
> > > > +        /* nvdimm bus registers virtio pmem device */
> > > > +        struct nvdimm_bus *nvdimm_bus;
> > > > +        struct nvdimm_bus_descriptor nd_desc;
> > > > +
> > > > +        /* Synchronize virtqueue data */
> > > > +        spinlock_t pmem_lock;
> > > > +
> > > > +        /* Memory region information */
> > > > +        uint64_t start;
> > > > +        uint64_t size;
> > > > +};
> > > > +#endif
> > > > diff --git a/include/uapi/linux/virtio_ids.h
> > > > b/include/uapi/linux/virtio_ids.h
> > > > index 6d5c3b2..3463895 100644
> > > > --- a/include/uapi/linux/virtio_ids.h
> > > > +++ b/include/uapi/linux/virtio_ids.h
> > > > @@ -43,5 +43,6 @@
> > > >  #define VIRTIO_ID_INPUT        18 /* virtio input */
> > > >  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
> > > >  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> > > > +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
> > > >  
> > > >  #endif /* _LINUX_VIRTIO_IDS_H */
> > > > diff --git a/include/uapi/linux/virtio_pmem.h
> > > > b/include/uapi/linux/virtio_pmem.h
> > > > new file mode 100644
> > > > index 0000000..c7c22a5
> > > > --- /dev/null
> > > > +++ b/include/uapi/linux/virtio_pmem.h
> > > > @@ -0,0 +1,40 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * This header, excluding the #ifdef __KERNEL__ part, is BSD licensed
> > > > so
> > > > + * anyone can use the definitions to implement compatible
> > > > drivers/servers:
> > > > + *
> > > > + *
> > > > + * Redistribution and use in source and binary forms, with or without
> > > > + * modification, are permitted provided that the following conditions
> > > > + * are met:
> > > > + * 1. Redistributions of source code must retain the above copyright
> > > > + *    notice, this list of conditions and the following disclaimer.
> > > > + * 2. Redistributions in binary form must reproduce the above
> > > > copyright
> > > > + *    notice, this list of conditions and the following disclaimer in
> > > > the
> > > > + *    documentation and/or other materials provided with the
> > > > distribution.
> > > > + * 3. Neither the name of IBM nor the names of its contributors
> > > > + *    may be used to endorse or promote products derived from this
> > > > software
> > > > + *    without specific prior written permission.
> > > > + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > > > ``AS IS''
> > > > + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
> > > > TO,
> > > > THE
> > > > + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> > > > PURPOSE
> > > > + * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
> > > > + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> > > > CONSEQUENTIAL
> > > > + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
> > > > GOODS
> > > > + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
> > > > INTERRUPTION)
> > > > + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> > > > STRICT
> > > > + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
> > > > ANY
> > > > WAY
> > > > + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY
> > > > OF
> > > > + * SUCH DAMAGE.
> > > > + *
> > > > + * Copyright (C) Red Hat, Inc., 2018-2019
> > > > + * Copyright (C) Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>, 2018
> > > > + */
> > > > +#ifndef _UAPI_LINUX_VIRTIO_PMEM_H
> > > > +#define _UAPI_LINUX_VIRTIO_PMEM_H
> > > > +
> > > > +struct virtio_pmem_config {
> > > > +        __le64 start;
> > > > +        __le64 size;
> > > > +};
> > > > +#endif
> > > 
> > > 
> > >   
> > 
> 
> 
> 


* Re: [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
  2018-07-13  7:52   ` [RFC v3 2/2] virtio-pmem: Add virtio pmem driver Pankaj Gupta
       [not found]     ` <20180713075232.9575-3-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-17 13:11     ` Stefan Hajnoczi
       [not found]       ` <20180717131156.GA13498-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: Stefan Hajnoczi @ 2018-07-17 13:11 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: linux-kernel, kvm, qemu-devel, linux-nvdimm, jack,
	dan.j.williams, riel, haozhong.zhang, nilal, kwolf, pbonzini,
	ross.zwisler, david, xiaoguangrong.eric, hch, mst,
	niteshnarayanlal, lcapitulino, imammedo, eblake



On Fri, Jul 13, 2018 at 01:22:31PM +0530, Pankaj Gupta wrote:
> + /* The request submission function */
> +static int virtio_pmem_flush(struct device *dev)
> +{
> +	int err;
> +	unsigned long flags;
> +	struct scatterlist *sgs[2], sg, ret;
> +	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
> +	struct virtio_pmem *vpmem = vdev->priv;
> +	struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);
> +
> +	req->done = false;
> +	init_waitqueue_head(&req->acked);
> +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> +
> +	sg_init_one(&sg, req, sizeof(req));

What are you trying to do here?

sizeof(req) == sizeof(struct virtio_pmem_request *) == sizeof(void *)

Did you mean sizeof(*req)?

But why map struct virtio_pmem_request to the device?  struct
virtio_pmem_request is the driver-internal request state and is not part
of the hardware interface.

> +	sgs[0] = &sg;
> +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> +	sgs[1] = &ret;
> +	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> +	if (err) {
> +		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");

This can happen if the virtqueue is full.  Printing a message and
failing the flush isn't appropriate.  This thread needs to wait until
virtqueue space becomes available.

> +		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> +		return -ENOSPC;

req is leaked.

> +	virtio_device_ready(vdev);

This call isn't needed.  Drivers use it when they wish to submit buffers
on virtqueues before ->probe() returns.

> diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
> new file mode 100644
> index 0000000..0f83d9c
> --- /dev/null
> +++ b/include/linux/virtio_pmem.h

include/ is for declarations (e.g. kernel APIs) needed by other
compilation units.  The contents of this header are internal to the
virtio_pmem driver implementation and can therefore be in virtio_pmem.c.
include/linux/virtio_pmem.h isn't necessary since nothing besides
virtio_pmem.c will need to include it.



* Re: [RFC v3 2/2] virtio-pmem: Add virtio pmem driver
       [not found]       ` <20180717131156.GA13498-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
@ 2018-07-18  7:05         ` Pankaj Gupta
  0 siblings, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-18  7:05 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	eblake-H+wXaHxf7aLQT0dZR+AlfA


Hi Stefan,

> > + /* The request submission function */
> > +static int virtio_pmem_flush(struct device *dev)
> > +{
> > +	int err;
> > +	unsigned long flags;
> > +	struct scatterlist *sgs[2], sg, ret;
> > +	struct virtio_device *vdev = dev_to_virtio(dev->parent->parent);
> > +	struct virtio_pmem *vpmem = vdev->priv;
> > +	struct virtio_pmem_request *req = kmalloc(sizeof(*req), GFP_KERNEL);
> > +
> > +	req->done = false;
> > +	init_waitqueue_head(&req->acked);
> > +	spin_lock_irqsave(&vpmem->pmem_lock, flags);
> > +
> > +	sg_init_one(&sg, req, sizeof(req));
> 
> What are you trying to do here?
> 
> sizeof(req) == sizeof(struct virtio_pmem_request *) == sizeof(void *)
> 
> Did you mean sizeof(*req)?

Yes, I meant sizeof(struct virtio_pmem_request).

Thanks for catching this.

> 
> But why map struct virtio_pmem_request to the device?  struct
> virtio_pmem_request is the driver-internal request state and is not part
> of the hardware interface.

o.k. I will separate the device-visible fields out of the
'virtio_pmem_request' struct and map only those.

> 
> > +	sgs[0] = &sg;
> > +	sg_init_one(&ret, &req->ret, sizeof(req->ret));
> > +	sgs[1] = &ret;
> > +	err = virtqueue_add_sgs(vpmem->req_vq, sgs, 1, 1, req, GFP_ATOMIC);
> > +	if (err) {
> > +		dev_err(&vdev->dev, "failed to send command to virtio pmem device\n");
> 
> This can happen if the virtqueue is full.  Printing a message and
> failing the flush isn't appropriate.  This thread needs to wait until
> virtqueue space becomes available.

o.k. I will implement this.

> 
> > +		spin_unlock_irqrestore(&vpmem->pmem_lock, flags);
> > +		return -ENOSPC;
> 
> req is leaked.

will free req.

> 
> > +	virtio_device_ready(vdev);
> 
> This call isn't needed.  Drivers use it when they wish to submit buffers
> on virtqueues before ->probe() returns.

o.k. I will remove it.

> 
> > diff --git a/include/linux/virtio_pmem.h b/include/linux/virtio_pmem.h
> > new file mode 100644
> > index 0000000..0f83d9c
> > --- /dev/null
> > +++ b/include/linux/virtio_pmem.h
> 
> include/ is for declarations (e.g. kernel APIs) needed by other
> compilation units.  The contents of this header are internal to the
> virtio_pmem driver implementation and can therefore be in virtio_pmem.c.
> include/linux/virtio_pmem.h isn't necessary since nothing besides
> virtio_pmem.c will need to include it.

Agree. Will move declarations from virtio_pmem.h to virtio_pmem.c

Thanks,
Pankaj


* Re: [RFC v3] qemu: Add virtio pmem device
       [not found]   ` <20180713075232.9575-4-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-18 12:55     ` Luiz Capitulino
  2018-07-19  5:48       ` [Qemu-devel] " Pankaj Gupta
  2018-07-24 16:13     ` Eric Blake
  1 sibling, 1 reply; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-18 12:55 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w,
	kvm-u79uwXL29TY76Z2rM5mHXA, riel-ebMLmSuQjDVBDgjK7y7TUQ,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA,
	ross.zwisler-ral2JQCrhuEAvxtiuMwx3w,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	eblake-H+wXaHxf7aLQT0dZR+AlfA

On Fri, 13 Jul 2018 13:22:32 +0530
Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

>  This patch adds virtio-pmem Qemu device.
> 
>  This device presents a memory address range to the guest which is
>  backed by a file on the host. It acts like a persistent memory
>  device for the KVM guest. The guest can perform read and persistent
>  write operations on this memory range with the help of a DAX
>  capable filesystem.
> 
>  Persistent guest writes are assured with the help of a virtio based
>  flushing interface. When guest userspace performs fsync on a file fd
>  on the pmem device, a flush command is sent to Qemu over VIRTIO
>  and a host side flush/sync is done on the backing image file.
> 
> Changes from RFC v2:
> - Use aio_worker() to avoid Qemu from hanging with blocking fsync
>   call - Stefan
> - Use virtio_st*_p() for endianess - Stefan
> - Correct indentation in qapi/misc.json - Eric
> 
> Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---
>  hw/virtio/Makefile.objs                     |   3 +
>  hw/virtio/virtio-pci.c                      |  44 +++++
>  hw/virtio/virtio-pci.h                      |  14 ++
>  hw/virtio/virtio-pmem.c                     | 241 ++++++++++++++++++++++++++++
>  include/hw/pci/pci.h                        |   1 +
>  include/hw/virtio/virtio-pmem.h             |  42 +++++
>  include/standard-headers/linux/virtio_ids.h |   1 +
>  qapi/misc.json                              |  26 ++-
>  8 files changed, 371 insertions(+), 1 deletion(-)
>  create mode 100644 hw/virtio/virtio-pmem.c
>  create mode 100644 include/hw/virtio/virtio-pmem.h
> 
> diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> index 1b2799cfd8..7f914d45d0 100644
> --- a/hw/virtio/Makefile.objs
> +++ b/hw/virtio/Makefile.objs
> @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
>  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
>  
>  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
> +ifeq ($(CONFIG_MEM_HOTPLUG),y)
> +obj-$(CONFIG_LINUX) += virtio-pmem.o
> +endif
>  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
>  endif
>  
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 3a01fe90f0..93d3fc05c7 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
>      .class_init    = virtio_rng_pci_class_init,
>  };
>  
> +/* virtio-pmem-pci */
> +
> +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> +{
> +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
> +    DeviceState *vdev = DEVICE(&vpmem->vdev);
> +
> +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
> +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
> +}
> +
> +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
> +{
> +    DeviceClass *dc = DEVICE_CLASS(klass);
> +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
> +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
> +    k->realize = virtio_pmem_pci_realize;
> +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
> +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
> +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
> +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
> +    pcidev_k->class_id = PCI_CLASS_OTHERS;
> +}
> +
> +static void virtio_pmem_pci_instance_init(Object *obj)
> +{
> +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
> +
> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> +                                TYPE_VIRTIO_PMEM);
> +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
> +                              &error_abort);
> +}
> +
> +static const TypeInfo virtio_pmem_pci_info = {
> +    .name          = TYPE_VIRTIO_PMEM_PCI,
> +    .parent        = TYPE_VIRTIO_PCI,
> +    .instance_size = sizeof(VirtIOPMEMPCI),
> +    .instance_init = virtio_pmem_pci_instance_init,
> +    .class_init    = virtio_pmem_pci_class_init,
> +};
> +
> +
>  /* virtio-input-pci */
>  
>  static Property virtio_input_pci_properties[] = {
> @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
>      type_register_static(&virtio_balloon_pci_info);
>      type_register_static(&virtio_serial_pci_info);
>      type_register_static(&virtio_net_pci_info);
> +    type_register_static(&virtio_pmem_pci_info);
>  #ifdef CONFIG_VHOST_SCSI
>      type_register_static(&vhost_scsi_pci_info);
>  #endif
> diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> index 813082b0d7..fe74fcad3f 100644
> --- a/hw/virtio/virtio-pci.h
> +++ b/hw/virtio/virtio-pci.h
> @@ -19,6 +19,7 @@
>  #include "hw/virtio/virtio-blk.h"
>  #include "hw/virtio/virtio-net.h"
>  #include "hw/virtio/virtio-rng.h"
> +#include "hw/virtio/virtio-pmem.h"
>  #include "hw/virtio/virtio-serial.h"
>  #include "hw/virtio/virtio-scsi.h"
>  #include "hw/virtio/virtio-balloon.h"
> @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
>  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
>  typedef struct VHostVSockPCI VHostVSockPCI;
>  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
> +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
>  
>  /* virtio-pci-bus */
>  
> @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
>      VirtIOBlock vdev;
>  };
>  
> +/*
> + * virtio-pmem-pci: This extends VirtioPCIProxy.
> + */
> +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
> +#define VIRTIO_PMEM_PCI(obj) \
> +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
> +
> +struct VirtIOPMEMPCI {
> +    VirtIOPCIProxy parent_obj;
> +    VirtIOPMEM vdev;
> +};
> +
>  /*
>   * virtio-balloon-pci: This extends VirtioPCIProxy.
>   */
> diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> new file mode 100644
> index 0000000000..08c96d7e80
> --- /dev/null
> +++ b/hw/virtio/virtio-pmem.c
> @@ -0,0 +1,241 @@
> +/*
> + * Virtio pmem device
> + *
> + * Copyright (C) 2018 Red Hat, Inc.
> + * Copyright (C) 2018 Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +
> +#include "qemu/osdep.h"
> +#include "qapi/error.h"
> +#include "qemu-common.h"
> +#include "qemu/error-report.h"
> +#include "hw/virtio/virtio-access.h"
> +#include "hw/virtio/virtio-pmem.h"
> +#include "hw/mem/memory-device.h"
> +#include "block/aio.h"
> +#include "block/thread-pool.h"
> +
> +typedef struct VirtIOPMEMresp {
> +    int ret;
> +} VirtIOPMEMResp;
> +
> +typedef struct VirtIODeviceRequest {
> +    VirtQueueElement elem;
> +    int fd;
> +    VirtIOPMEM *pmem;
> +    VirtIOPMEMResp resp;
> +} VirtIODeviceRequest;
> +
> +static int worker_cb(void *opaque)
> +{
> +    VirtIODeviceRequest *req = opaque;
> +    int err = 0;
> +
> +    /* flush raw backing image */
> +    err = fsync(req->fd);
> +    if (err != 0) {
> +        err = errno;
> +    }
> +    req->resp.ret = err;

Honest question: are you returning the host errno code to the guest?

If yes, I don't think this is right. I think you probably want to
define a constant for the error and let the guest decide on the errno
code to return to the application.

> +
> +    return 0;
> +}
> +
> +static void done_cb(void *opaque, int ret)
> +{
> +    VirtIODeviceRequest *req = opaque;
> +    int len = iov_from_buf(req->elem.in_sg, req->elem.in_num, 0,
> +                              &req->resp, sizeof(VirtIOPMEMResp));
> +
> +    /* Callbacks are serialized, so no need to use atomic ops.  */
> +    virtqueue_push(req->pmem->rq_vq, &req->elem, len);
> +    virtio_notify((VirtIODevice *)req->pmem, req->pmem->rq_vq);
> +    g_free(req);
> +}
> +
> +static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
> +{
> +    VirtIODeviceRequest *req;
> +    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
> +    HostMemoryBackend *backend = MEMORY_BACKEND(pmem->memdev);
> +    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> +
> +    req = virtqueue_pop(vq, sizeof(VirtIODeviceRequest));
> +    if (!req) {
> +        virtio_error(vdev, "virtio-pmem missing request data");
> +        return;
> +    }
> +
> +    if (req->elem.out_num < 1 || req->elem.in_num < 1) {
> +        virtio_error(vdev, "virtio-pmem request not proper");
> +        g_free(req);
> +        return;
> +    }
> +    req->fd = memory_region_get_fd(&backend->mr);
> +    req->pmem = pmem;
> +    thread_pool_submit_aio(pool, worker_cb, req, done_cb, req);
> +}
> +
> +static void virtio_pmem_get_config(VirtIODevice *vdev, uint8_t *config)
> +{
> +    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
> +    struct virtio_pmem_config *pmemcfg = (struct virtio_pmem_config *) config;
> +
> +    virtio_stq_p(vdev, &pmemcfg->start, pmem->start);
> +    virtio_stq_p(vdev, &pmemcfg->size, pmem->size);
> +}
> +
> +static uint64_t virtio_pmem_get_features(VirtIODevice *vdev, uint64_t features,
> +                                        Error **errp)
> +{
> +    return features;
> +}
> +
> +static void virtio_pmem_realize(DeviceState *dev, Error **errp)
> +{
> +    VirtIODevice   *vdev   = VIRTIO_DEVICE(dev);
> +    VirtIOPMEM     *pmem   = VIRTIO_PMEM(dev);
> +    MachineState   *ms     = MACHINE(qdev_get_machine());
> +    uint64_t align;
> +    Error *local_err = NULL;
> +    MemoryRegion *mr;
> +
> +    if (!pmem->memdev) {
> +        error_setg(errp, "virtio-pmem memdev not set");
> +        return;
> +    }
> +
> +    mr  = host_memory_backend_get_memory(pmem->memdev);
> +    align = memory_region_get_alignment(mr);
> +    pmem->size = QEMU_ALIGN_DOWN(memory_region_size(mr), align);
> +    pmem->start = memory_device_get_free_addr(ms, NULL, align, pmem->size,
> +                                                               &local_err);
> +    if (local_err) {
> +        error_setg(errp, "Can't get free address in mem device");
> +        return;
> +    }
> +    memory_region_init_alias(&pmem->mr, OBJECT(pmem),
> +                             "virtio_pmem-memory", mr, 0, pmem->size);
> +    memory_device_plug_region(ms, &pmem->mr, pmem->start);
> +
> +    host_memory_backend_set_mapped(pmem->memdev, true);
> +    virtio_init(vdev, TYPE_VIRTIO_PMEM, VIRTIO_ID_PMEM,
> +                                          sizeof(struct virtio_pmem_config));
> +    pmem->rq_vq = virtio_add_queue(vdev, 128, virtio_pmem_flush);
> +}
> +
> +static void virtio_mem_check_memdev(Object *obj, const char *name, Object *val,
> +                                    Error **errp)
> +{
> +    if (host_memory_backend_is_mapped(MEMORY_BACKEND(val))) {
> +        char *path = object_get_canonical_path_component(val);
> +        error_setg(errp, "Can't use already busy memdev: %s", path);
> +        g_free(path);
> +        return;
> +    }
> +
> +    qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
> +}
> +
> +static const char *virtio_pmem_get_device_id(VirtIOPMEM *vm)
> +{
> +    Object *obj = OBJECT(vm);
> +    DeviceState *parent_dev;
> +
> +    /* always use the ID of the proxy device */
> +    if (obj->parent && object_dynamic_cast(obj->parent, TYPE_DEVICE)) {
> +        parent_dev = DEVICE(obj->parent);
> +        return parent_dev->id;
> +    }
> +    return NULL;
> +}
> +
> +static void virtio_pmem_md_fill_device_info(const MemoryDeviceState *md,
> +                                           MemoryDeviceInfo *info)
> +{
> +    VirtioPMemDeviceInfo *vi = g_new0(VirtioPMemDeviceInfo, 1);
> +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> +    const char *id = virtio_pmem_get_device_id(vm);
> +
> +    if (id) {
> +        vi->has_id = true;
> +        vi->id = g_strdup(id);
> +    }
> +
> +    vi->start = vm->start;
> +    vi->size = vm->size;
> +    vi->memdev = object_get_canonical_path(OBJECT(vm->memdev));
> +
> +    info->u.virtio_pmem.data = vi;
> +    info->type = MEMORY_DEVICE_INFO_KIND_VIRTIO_PMEM;
> +}
> +
> +static uint64_t virtio_pmem_md_get_addr(const MemoryDeviceState *md)
> +{
> +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> +
> +    return vm->start;
> +}
> +
> +static uint64_t virtio_pmem_md_get_plugged_size(const MemoryDeviceState *md)
> +{
> +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> +
> +    return vm->size;
> +}
> +
> +static uint64_t virtio_pmem_md_get_region_size(const MemoryDeviceState *md)
> +{
> +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> +
> +    return vm->size;
> +}
> +
> +static void virtio_pmem_instance_init(Object *obj)
> +{
> +    VirtIOPMEM *vm = VIRTIO_PMEM(obj);
> +    object_property_add_link(obj, "memdev", TYPE_MEMORY_BACKEND,
> +                                (Object **)&vm->memdev,
> +                                (void *) virtio_mem_check_memdev,
> +                                OBJ_PROP_LINK_STRONG,
> +                                &error_abort);
> +}
> +
> +
> +static void virtio_pmem_class_init(ObjectClass *klass, void *data)
> +{
> +    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> +    MemoryDeviceClass *mdc = MEMORY_DEVICE_CLASS(klass);
> +
> +    vdc->realize      =  virtio_pmem_realize;
> +    vdc->get_config   =  virtio_pmem_get_config;
> +    vdc->get_features =  virtio_pmem_get_features;
> +
> +    mdc->get_addr         = virtio_pmem_md_get_addr;
> +    mdc->get_plugged_size = virtio_pmem_md_get_plugged_size;
> +    mdc->get_region_size  = virtio_pmem_md_get_region_size;
> +    mdc->fill_device_info = virtio_pmem_md_fill_device_info;
> +}
> +
> +static TypeInfo virtio_pmem_info = {
> +    .name          = TYPE_VIRTIO_PMEM,
> +    .parent        = TYPE_VIRTIO_DEVICE,
> +    .class_init    = virtio_pmem_class_init,
> +    .instance_size = sizeof(VirtIOPMEM),
> +    .instance_init = virtio_pmem_instance_init,
> +    .interfaces = (InterfaceInfo[]) {
> +        { TYPE_MEMORY_DEVICE },
> +        { }
> +  },
> +};
> +
> +static void virtio_register_types(void)
> +{
> +    type_register_static(&virtio_pmem_info);
> +}
> +
> +type_init(virtio_register_types)
> diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> index 990d6fcbde..28829b6437 100644
> --- a/include/hw/pci/pci.h
> +++ b/include/hw/pci/pci.h
> @@ -85,6 +85,7 @@ extern bool pci_available;
>  #define PCI_DEVICE_ID_VIRTIO_RNG         0x1005
>  #define PCI_DEVICE_ID_VIRTIO_9P          0x1009
>  #define PCI_DEVICE_ID_VIRTIO_VSOCK       0x1012
> +#define PCI_DEVICE_ID_VIRTIO_PMEM        0x1013
>  
>  #define PCI_VENDOR_ID_REDHAT             0x1b36
>  #define PCI_DEVICE_ID_REDHAT_BRIDGE      0x0001
> diff --git a/include/hw/virtio/virtio-pmem.h b/include/hw/virtio/virtio-pmem.h
> new file mode 100644
> index 0000000000..fda3ee691c
> --- /dev/null
> +++ b/include/hw/virtio/virtio-pmem.h
> @@ -0,0 +1,42 @@
> +/*
> + * Virtio pmem Device
> + *
> + * Copyright Red Hat, Inc. 2018
> + * Copyright Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * (at your option) any later version.  See the COPYING file in the
> + * top-level directory.
> + */
> +
> +#ifndef QEMU_VIRTIO_PMEM_H
> +#define QEMU_VIRTIO_PMEM_H
> +
> +#include "hw/virtio/virtio.h"
> +#include "exec/memory.h"
> +#include "sysemu/hostmem.h"
> +#include "standard-headers/linux/virtio_ids.h"
> +#include "hw/boards.h"
> +#include "hw/i386/pc.h"
> +
> +#define TYPE_VIRTIO_PMEM "virtio-pmem"
> +
> +#define VIRTIO_PMEM(obj) \
> +        OBJECT_CHECK(VirtIOPMEM, (obj), TYPE_VIRTIO_PMEM)
> +
> +/* VirtIOPMEM device structure */
> +typedef struct VirtIOPMEM {
> +    VirtIODevice parent_obj;
> +
> +    VirtQueue *rq_vq;
> +    uint64_t start;
> +    uint64_t size;
> +    MemoryRegion mr;
> +    HostMemoryBackend *memdev;
> +} VirtIOPMEM;
> +
> +struct virtio_pmem_config {
> +    uint64_t start;
> +    uint64_t size;
> +};
> +#endif
> diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
> index 6d5c3b2d4f..346389565a 100644
> --- a/include/standard-headers/linux/virtio_ids.h
> +++ b/include/standard-headers/linux/virtio_ids.h
> @@ -43,5 +43,6 @@
>  #define VIRTIO_ID_INPUT        18 /* virtio input */
>  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
>  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
>  
>  #endif /* _LINUX_VIRTIO_IDS_H */
> diff --git a/qapi/misc.json b/qapi/misc.json
> index 29da7856e3..fb85dd6f6c 100644
> --- a/qapi/misc.json
> +++ b/qapi/misc.json
> @@ -2907,6 +2907,29 @@
>            }
>  }
>  
> +##
> +# @VirtioPMemDeviceInfo:
> +#
> +# VirtioPMem state information
> +#
> +# @id: device's ID
> +#
> +# @start: physical address, where device is mapped
> +#
> +# @size: size of memory that the device provides
> +#
> +# @memdev: memory backend linked with device
> +#
> +# Since: 2.13
> +##
> +{ 'struct': 'VirtioPMemDeviceInfo',
> +  'data': { '*id': 'str',
> +            'start': 'size',
> +            'size': 'size',
> +            'memdev': 'str'
> +          }
> +}
> +
>  ##
>  # @MemoryDeviceInfo:
>  #
> @@ -2916,7 +2939,8 @@
>  ##
>  { 'union': 'MemoryDeviceInfo',
>    'data': { 'dimm': 'PCDIMMDeviceInfo',
> -            'nvdimm': 'PCDIMMDeviceInfo'
> +            'nvdimm': 'PCDIMMDeviceInfo',
> +	    'virtio-pmem': 'VirtioPMemDeviceInfo'
>            }
>  }
>  

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
  2018-07-18 12:55     ` Luiz Capitulino
@ 2018-07-19  5:48       ` Pankaj Gupta
  2018-07-19 12:16         ` Stefan Hajnoczi
       [not found]         ` <367397176.52317488.1531979293251.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 2 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-19  5:48 UTC (permalink / raw)
  To: Luiz Capitulino
  Cc: kwolf, haozhong zhang, jack, xiaoguangrong eric, kvm, riel,
	linux-nvdimm, david, ross zwisler, linux-kernel, qemu-devel, hch,
	imammedo, mst, stefanha, niteshnarayanlal, pbonzini,
	dan j williams, nilal


> 
> >  This patch adds a virtio-pmem Qemu device.
> > 
> >  This device presents a memory address range to the guest, backed
> >  on the host by a file-backed memory backend. It acts as a
> >  persistent memory device for the KVM guest. The guest can perform
> >  read and persistent write operations on this memory range with
> >  the help of a DAX-capable filesystem.
> > 
> >  Persistent guest writes are assured with the help of a virtio
> >  based flushing interface. When guest userspace performs fsync on
> >  a file fd on the pmem device, a flush command is sent to Qemu
> >  over VIRTIO and a host-side flush/sync is done on the backing
> >  image file.
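For reference, the guest-side sequence that exercises this interface is an ordinary write-then-fsync; a minimal userspace sketch (run against a regular file here, since the syscall sequence is the same on a DAX-mounted virtio-pmem filesystem):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write data and make it persistent.  On a DAX filesystem backed by
 * virtio-pmem, the fsync() below is what the guest kernel turns into
 * a VIRTIO flush request handled by the Qemu device. */
int write_persistent(const char *path, const char *data)
{
    int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        return -1;
    }
    if (write(fd, data, strlen(data)) < 0 || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```

On DAX the write itself goes straight to the mapped host memory; only the fsync() crosses into Qemu as a flush request.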
> > 
> > Changes from RFC v2:
> > - Use aio_worker() to avoid Qemu hanging on a blocking fsync
> >   call - Stefan
> > - Use virtio_st*_p() for endianness - Stefan
> > - Correct indentation in qapi/misc.json - Eric
> > 
> > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > ---
> >  hw/virtio/Makefile.objs                     |   3 +
> >  hw/virtio/virtio-pci.c                      |  44 +++++
> >  hw/virtio/virtio-pci.h                      |  14 ++
> >  hw/virtio/virtio-pmem.c                     | 241 ++++++++++++++++++++++++++++
> >  include/hw/pci/pci.h                        |   1 +
> >  include/hw/virtio/virtio-pmem.h             |  42 +++++
> >  include/standard-headers/linux/virtio_ids.h |   1 +
> >  qapi/misc.json                              |  26 ++-
> >  8 files changed, 371 insertions(+), 1 deletion(-)
> >  create mode 100644 hw/virtio/virtio-pmem.c
> >  create mode 100644 include/hw/virtio/virtio-pmem.h
> > 
> > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > index 1b2799cfd8..7f914d45d0 100644
> > --- a/hw/virtio/Makefile.objs
> > +++ b/hw/virtio/Makefile.objs
> > @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> >  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
> >  
> >  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
> > +ifeq ($(CONFIG_MEM_HOTPLUG),y)
> > +obj-$(CONFIG_LINUX) += virtio-pmem.o
> > +endif
> >  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
> >  endif
> >  
> > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > index 3a01fe90f0..93d3fc05c7 100644
> > --- a/hw/virtio/virtio-pci.c
> > +++ b/hw/virtio/virtio-pci.c
> > @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
> >      .class_init    = virtio_rng_pci_class_init,
> >  };
> >  
> > +/* virtio-pmem-pci */
> > +
> > +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> > +{
> > +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
> > +    DeviceState *vdev = DEVICE(&vpmem->vdev);
> > +
> > +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
> > +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
> > +}
> > +
> > +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
> > +{
> > +    DeviceClass *dc = DEVICE_CLASS(klass);
> > +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
> > +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
> > +    k->realize = virtio_pmem_pci_realize;
> > +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
> > +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
> > +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
> > +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
> > +    pcidev_k->class_id = PCI_CLASS_OTHERS;
> > +}
> > +
> > +static void virtio_pmem_pci_instance_init(Object *obj)
> > +{
> > +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
> > +
> > +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> > +                                TYPE_VIRTIO_PMEM);
> > +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
> > +                              &error_abort);
> > +}
> > +
> > +static const TypeInfo virtio_pmem_pci_info = {
> > +    .name          = TYPE_VIRTIO_PMEM_PCI,
> > +    .parent        = TYPE_VIRTIO_PCI,
> > +    .instance_size = sizeof(VirtIOPMEMPCI),
> > +    .instance_init = virtio_pmem_pci_instance_init,
> > +    .class_init    = virtio_pmem_pci_class_init,
> > +};
> > +
> > +
> >  /* virtio-input-pci */
> >  
> >  static Property virtio_input_pci_properties[] = {
> > @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
> >      type_register_static(&virtio_balloon_pci_info);
> >      type_register_static(&virtio_serial_pci_info);
> >      type_register_static(&virtio_net_pci_info);
> > +    type_register_static(&virtio_pmem_pci_info);
> >  #ifdef CONFIG_VHOST_SCSI
> >      type_register_static(&vhost_scsi_pci_info);
> >  #endif
> > diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> > index 813082b0d7..fe74fcad3f 100644
> > --- a/hw/virtio/virtio-pci.h
> > +++ b/hw/virtio/virtio-pci.h
> > @@ -19,6 +19,7 @@
> >  #include "hw/virtio/virtio-blk.h"
> >  #include "hw/virtio/virtio-net.h"
> >  #include "hw/virtio/virtio-rng.h"
> > +#include "hw/virtio/virtio-pmem.h"
> >  #include "hw/virtio/virtio-serial.h"
> >  #include "hw/virtio/virtio-scsi.h"
> >  #include "hw/virtio/virtio-balloon.h"
> > @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
> >  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
> >  typedef struct VHostVSockPCI VHostVSockPCI;
> >  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
> > +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
> >  
> >  /* virtio-pci-bus */
> >  
> > @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
> >      VirtIOBlock vdev;
> >  };
> >  
> > +/*
> > + * virtio-pmem-pci: This extends VirtioPCIProxy.
> > + */
> > +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
> > +#define VIRTIO_PMEM_PCI(obj) \
> > +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
> > +
> > +struct VirtIOPMEMPCI {
> > +    VirtIOPCIProxy parent_obj;
> > +    VirtIOPMEM vdev;
> > +};
> > +
> >  /*
> >   * virtio-balloon-pci: This extends VirtioPCIProxy.
> >   */
> > diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> > new file mode 100644
> > index 0000000000..08c96d7e80
> > --- /dev/null
> > +++ b/hw/virtio/virtio-pmem.c
> > @@ -0,0 +1,241 @@
> > +/*
> > + * Virtio pmem device
> > + *
> > + * Copyright (C) 2018 Red Hat, Inc.
> > + * Copyright (C) 2018 Pankaj Gupta <pagupta@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2.
> > + * See the COPYING file in the top-level directory.
> > + *
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "qapi/error.h"
> > +#include "qemu-common.h"
> > +#include "qemu/error-report.h"
> > +#include "hw/virtio/virtio-access.h"
> > +#include "hw/virtio/virtio-pmem.h"
> > +#include "hw/mem/memory-device.h"
> > +#include "block/aio.h"
> > +#include "block/thread-pool.h"
> > +
> > +typedef struct VirtIOPMEMresp {
> > +    int ret;
> > +} VirtIOPMEMResp;
> > +
> > +typedef struct VirtIODeviceRequest {
> > +    VirtQueueElement elem;
> > +    int fd;
> > +    VirtIOPMEM *pmem;
> > +    VirtIOPMEMResp resp;
> > +} VirtIODeviceRequest;
> > +
> > +static int worker_cb(void *opaque)
> > +{
> > +    VirtIODeviceRequest *req = opaque;
> > +    int err = 0;
> > +
> > +    /* flush raw backing image */
> > +    err = fsync(req->fd);
> > +    if (err != 0) {
> > +        err = errno;
> > +    }
> > +    req->resp.ret = err;
> 
> Host question: are you returning the guest errno code to the host?

No. I am returning the error code from the host in case of a host
fsync failure; otherwise I return zero.

Thanks,
Pankaj
> 
> If yes, I don't think this is right. I think you probably want to
> define a constant for error and let the host decide on the errno
> code to return to the application.
> 
> > +
> > +    return 0;
> > +}
> > +
> > +static void done_cb(void *opaque, int ret)
> > +{
> > +    VirtIODeviceRequest *req = opaque;
> > +    int len = iov_from_buf(req->elem.in_sg, req->elem.in_num, 0,
> > +                              &req->resp, sizeof(VirtIOPMEMResp));
> > +
> > +    /* Callbacks are serialized, so no need to use atomic ops.  */
> > +    virtqueue_push(req->pmem->rq_vq, &req->elem, len);
> > +    virtio_notify((VirtIODevice *)req->pmem, req->pmem->rq_vq);
> > +    g_free(req);
> > +}
> > +
> > +static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > +    VirtIODeviceRequest *req;
> > +    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
> > +    HostMemoryBackend *backend = MEMORY_BACKEND(pmem->memdev);
> > +    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> > +
> > +    req = virtqueue_pop(vq, sizeof(VirtIODeviceRequest));
> > +    if (!req) {
> > +        virtio_error(vdev, "virtio-pmem missing request data");
> > +        return;
> > +    }
> > +
> > +    if (req->elem.out_num < 1 || req->elem.in_num < 1) {
> > +        virtio_error(vdev, "virtio-pmem request not proper");
> > +        g_free(req);
> > +        return;
> > +    }
> > +    req->fd = memory_region_get_fd(&backend->mr);
> > +    req->pmem = pmem;
> > +    thread_pool_submit_aio(pool, worker_cb, req, done_cb, req);
> > +}
> > +
> > +static void virtio_pmem_get_config(VirtIODevice *vdev, uint8_t *config)
> > +{
> > +    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
> > +    struct virtio_pmem_config *pmemcfg = (struct virtio_pmem_config *) config;
> > +
> > +    virtio_stq_p(vdev, &pmemcfg->start, pmem->start);
> > +    virtio_stq_p(vdev, &pmemcfg->size, pmem->size);
> > +}
> > +
> > +static uint64_t virtio_pmem_get_features(VirtIODevice *vdev, uint64_t features,
> > +                                        Error **errp)
> > +{
> > +    return features;
> > +}
> > +
> > +static void virtio_pmem_realize(DeviceState *dev, Error **errp)
> > +{
> > +    VirtIODevice   *vdev   = VIRTIO_DEVICE(dev);
> > +    VirtIOPMEM     *pmem   = VIRTIO_PMEM(dev);
> > +    MachineState   *ms     = MACHINE(qdev_get_machine());
> > +    uint64_t align;
> > +    Error *local_err = NULL;
> > +    MemoryRegion *mr;
> > +
> > +    if (!pmem->memdev) {
> > +        error_setg(errp, "virtio-pmem memdev not set");
> > +        return;
> > +    }
> > +
> > +    mr  = host_memory_backend_get_memory(pmem->memdev);
> > +    align = memory_region_get_alignment(mr);
> > +    pmem->size = QEMU_ALIGN_DOWN(memory_region_size(mr), align);
> > +    pmem->start = memory_device_get_free_addr(ms, NULL, align, pmem->size,
> > +                                                               &local_err);
> > +    if (local_err) {
> > +        error_setg(errp, "Can't get free address in mem device");
> > +        return;
> > +    }
> > +    memory_region_init_alias(&pmem->mr, OBJECT(pmem),
> > +                             "virtio_pmem-memory", mr, 0, pmem->size);
> > +    memory_device_plug_region(ms, &pmem->mr, pmem->start);
> > +
> > +    host_memory_backend_set_mapped(pmem->memdev, true);
> > +    virtio_init(vdev, TYPE_VIRTIO_PMEM, VIRTIO_ID_PMEM,
> > +                                          sizeof(struct virtio_pmem_config));
> > +    pmem->rq_vq = virtio_add_queue(vdev, 128, virtio_pmem_flush);
> > +}
> > +
> > +static void virtio_mem_check_memdev(Object *obj, const char *name, Object *val,
> > +                                    Error **errp)
> > +{
> > +    if (host_memory_backend_is_mapped(MEMORY_BACKEND(val))) {
> > +        char *path = object_get_canonical_path_component(val);
> > +        error_setg(errp, "Can't use already busy memdev: %s", path);
> > +        g_free(path);
> > +        return;
> > +    }
> > +
> > +    qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
> > +}
> > +
> > +static const char *virtio_pmem_get_device_id(VirtIOPMEM *vm)
> > +{
> > +    Object *obj = OBJECT(vm);
> > +    DeviceState *parent_dev;
> > +
> > +    /* always use the ID of the proxy device */
> > +    if (obj->parent && object_dynamic_cast(obj->parent, TYPE_DEVICE)) {
> > +        parent_dev = DEVICE(obj->parent);
> > +        return parent_dev->id;
> > +    }
> > +    return NULL;
> > +}
> > +
> > +static void virtio_pmem_md_fill_device_info(const MemoryDeviceState *md,
> > +                                           MemoryDeviceInfo *info)
> > +{
> > +    VirtioPMemDeviceInfo *vi = g_new0(VirtioPMemDeviceInfo, 1);
> > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > +    const char *id = virtio_pmem_get_device_id(vm);
> > +
> > +    if (id) {
> > +        vi->has_id = true;
> > +        vi->id = g_strdup(id);
> > +    }
> > +
> > +    vi->start = vm->start;
> > +    vi->size = vm->size;
> > +    vi->memdev = object_get_canonical_path(OBJECT(vm->memdev));
> > +
> > +    info->u.virtio_pmem.data = vi;
> > +    info->type = MEMORY_DEVICE_INFO_KIND_VIRTIO_PMEM;
> > +}
> > +
> > +static uint64_t virtio_pmem_md_get_addr(const MemoryDeviceState *md)
> > +{
> > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > +
> > +    return vm->start;
> > +}
> > +
> > +static uint64_t virtio_pmem_md_get_plugged_size(const MemoryDeviceState *md)
> > +{
> > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > +
> > +    return vm->size;
> > +}
> > +
> > +static uint64_t virtio_pmem_md_get_region_size(const MemoryDeviceState *md)
> > +{
> > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > +
> > +    return vm->size;
> > +}
> > +
> > +static void virtio_pmem_instance_init(Object *obj)
> > +{
> > +    VirtIOPMEM *vm = VIRTIO_PMEM(obj);
> > +    object_property_add_link(obj, "memdev", TYPE_MEMORY_BACKEND,
> > +                                (Object **)&vm->memdev,
> > +                                (void *) virtio_mem_check_memdev,
> > +                                OBJ_PROP_LINK_STRONG,
> > +                                &error_abort);
> > +}
> > +
> > +
> > +static void virtio_pmem_class_init(ObjectClass *klass, void *data)
> > +{
> > +    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> > +    MemoryDeviceClass *mdc = MEMORY_DEVICE_CLASS(klass);
> > +
> > +    vdc->realize      =  virtio_pmem_realize;
> > +    vdc->get_config   =  virtio_pmem_get_config;
> > +    vdc->get_features =  virtio_pmem_get_features;
> > +
> > +    mdc->get_addr         = virtio_pmem_md_get_addr;
> > +    mdc->get_plugged_size = virtio_pmem_md_get_plugged_size;
> > +    mdc->get_region_size  = virtio_pmem_md_get_region_size;
> > +    mdc->fill_device_info = virtio_pmem_md_fill_device_info;
> > +}
> > +
> > +static TypeInfo virtio_pmem_info = {
> > +    .name          = TYPE_VIRTIO_PMEM,
> > +    .parent        = TYPE_VIRTIO_DEVICE,
> > +    .class_init    = virtio_pmem_class_init,
> > +    .instance_size = sizeof(VirtIOPMEM),
> > +    .instance_init = virtio_pmem_instance_init,
> > +    .interfaces = (InterfaceInfo[]) {
> > +        { TYPE_MEMORY_DEVICE },
> > +        { }
> > +  },
> > +};
> > +
> > +static void virtio_register_types(void)
> > +{
> > +    type_register_static(&virtio_pmem_info);
> > +}
> > +
> > +type_init(virtio_register_types)
> > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > index 990d6fcbde..28829b6437 100644
> > --- a/include/hw/pci/pci.h
> > +++ b/include/hw/pci/pci.h
> > @@ -85,6 +85,7 @@ extern bool pci_available;
> >  #define PCI_DEVICE_ID_VIRTIO_RNG         0x1005
> >  #define PCI_DEVICE_ID_VIRTIO_9P          0x1009
> >  #define PCI_DEVICE_ID_VIRTIO_VSOCK       0x1012
> > +#define PCI_DEVICE_ID_VIRTIO_PMEM        0x1013
> >  
> >  #define PCI_VENDOR_ID_REDHAT             0x1b36
> >  #define PCI_DEVICE_ID_REDHAT_BRIDGE      0x0001
> > diff --git a/include/hw/virtio/virtio-pmem.h b/include/hw/virtio/virtio-pmem.h
> > new file mode 100644
> > index 0000000000..fda3ee691c
> > --- /dev/null
> > +++ b/include/hw/virtio/virtio-pmem.h
> > @@ -0,0 +1,42 @@
> > +/*
> > + * Virtio pmem Device
> > + *
> > + * Copyright Red Hat, Inc. 2018
> > + * Copyright Pankaj Gupta <pagupta@redhat.com>
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > + * (at your option) any later version.  See the COPYING file in the
> > + * top-level directory.
> > + */
> > +
> > +#ifndef QEMU_VIRTIO_PMEM_H
> > +#define QEMU_VIRTIO_PMEM_H
> > +
> > +#include "hw/virtio/virtio.h"
> > +#include "exec/memory.h"
> > +#include "sysemu/hostmem.h"
> > +#include "standard-headers/linux/virtio_ids.h"
> > +#include "hw/boards.h"
> > +#include "hw/i386/pc.h"
> > +
> > +#define TYPE_VIRTIO_PMEM "virtio-pmem"
> > +
> > +#define VIRTIO_PMEM(obj) \
> > +        OBJECT_CHECK(VirtIOPMEM, (obj), TYPE_VIRTIO_PMEM)
> > +
> > +/* VirtIOPMEM device structure */
> > +typedef struct VirtIOPMEM {
> > +    VirtIODevice parent_obj;
> > +
> > +    VirtQueue *rq_vq;
> > +    uint64_t start;
> > +    uint64_t size;
> > +    MemoryRegion mr;
> > +    HostMemoryBackend *memdev;
> > +} VirtIOPMEM;
> > +
> > +struct virtio_pmem_config {
> > +    uint64_t start;
> > +    uint64_t size;
> > +};
> > +#endif
> > diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
> > index 6d5c3b2d4f..346389565a 100644
> > --- a/include/standard-headers/linux/virtio_ids.h
> > +++ b/include/standard-headers/linux/virtio_ids.h
> > @@ -43,5 +43,6 @@
> >  #define VIRTIO_ID_INPUT        18 /* virtio input */
> >  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
> >  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> > +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
> >  
> >  #endif /* _LINUX_VIRTIO_IDS_H */
> > diff --git a/qapi/misc.json b/qapi/misc.json
> > index 29da7856e3..fb85dd6f6c 100644
> > --- a/qapi/misc.json
> > +++ b/qapi/misc.json
> > @@ -2907,6 +2907,29 @@
> >            }
> >  }
> >  
> > +##
> > +# @VirtioPMemDeviceInfo:
> > +#
> > +# VirtioPMem state information
> > +#
> > +# @id: device's ID
> > +#
> > +# @start: physical address, where device is mapped
> > +#
> > +# @size: size of memory that the device provides
> > +#
> > +# @memdev: memory backend linked with device
> > +#
> > +# Since: 2.13
> > +##
> > +{ 'struct': 'VirtioPMemDeviceInfo',
> > +  'data': { '*id': 'str',
> > +            'start': 'size',
> > +            'size': 'size',
> > +            'memdev': 'str'
> > +          }
> > +}
> > +
> >  ##
> >  # @MemoryDeviceInfo:
> >  #
> > @@ -2916,7 +2939,8 @@
> >  ##
> >  { 'union': 'MemoryDeviceInfo',
> >    'data': { 'dimm': 'PCDIMMDeviceInfo',
> > -            'nvdimm': 'PCDIMMDeviceInfo'
> > +            'nvdimm': 'PCDIMMDeviceInfo',
> > +	    'virtio-pmem': 'VirtioPMemDeviceInfo'
> >            }
> >  }
> >  
> 
> 
> 


* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
  2018-07-19  5:48       ` [Qemu-devel] " Pankaj Gupta
@ 2018-07-19 12:16         ` Stefan Hajnoczi
       [not found]           ` <20180719121635.GA28107-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
       [not found]         ` <367397176.52317488.1531979293251.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: Stefan Hajnoczi @ 2018-07-19 12:16 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: Luiz Capitulino, kwolf, haozhong zhang, jack, xiaoguangrong eric,
	kvm, riel, linux-nvdimm, david, ross zwisler, linux-kernel,
	qemu-devel, hch, imammedo, mst, niteshnarayanlal, pbonzini,
	dan j williams, nilal


On Thu, Jul 19, 2018 at 01:48:13AM -0400, Pankaj Gupta wrote:
> 
> > 
> > >  This patch adds a virtio-pmem Qemu device.
> > > 
> > >  This device presents a memory address range to the guest, backed
> > >  on the host by a file-backed memory backend. It acts as a
> > >  persistent memory device for the KVM guest. The guest can perform
> > >  read and persistent write operations on this memory range with
> > >  the help of a DAX-capable filesystem.
> > > 
> > >  Persistent guest writes are assured with the help of a virtio
> > >  based flushing interface. When guest userspace performs fsync on
> > >  a file fd on the pmem device, a flush command is sent to Qemu
> > >  over VIRTIO and a host-side flush/sync is done on the backing
> > >  image file.
> > > 
> > > Changes from RFC v2:
> > > - Use aio_worker() to avoid Qemu hanging on a blocking fsync
> > >   call - Stefan
> > > - Use virtio_st*_p() for endianness - Stefan
> > > - Correct indentation in qapi/misc.json - Eric
> > > 
> > > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > > ---
> > >  hw/virtio/Makefile.objs                     |   3 +
> > >  hw/virtio/virtio-pci.c                      |  44 +++++
> > >  hw/virtio/virtio-pci.h                      |  14 ++
> > >  hw/virtio/virtio-pmem.c                     | 241 ++++++++++++++++++++++++++++
> > >  include/hw/pci/pci.h                        |   1 +
> > >  include/hw/virtio/virtio-pmem.h             |  42 +++++
> > >  include/standard-headers/linux/virtio_ids.h |   1 +
> > >  qapi/misc.json                              |  26 ++-
> > >  8 files changed, 371 insertions(+), 1 deletion(-)
> > >  create mode 100644 hw/virtio/virtio-pmem.c
> > >  create mode 100644 include/hw/virtio/virtio-pmem.h
> > > 
> > > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > > index 1b2799cfd8..7f914d45d0 100644
> > > --- a/hw/virtio/Makefile.objs
> > > +++ b/hw/virtio/Makefile.objs
> > > @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> > >  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) += virtio-crypto-pci.o
> > >  
> > >  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
> > > +ifeq ($(CONFIG_MEM_HOTPLUG),y)
> > > +obj-$(CONFIG_LINUX) += virtio-pmem.o
> > > +endif
> > >  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
> > >  endif
> > >  
> > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > > index 3a01fe90f0..93d3fc05c7 100644
> > > --- a/hw/virtio/virtio-pci.c
> > > +++ b/hw/virtio/virtio-pci.c
> > > @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
> > >      .class_init    = virtio_rng_pci_class_init,
> > >  };
> > >  
> > > +/* virtio-pmem-pci */
> > > +
> > > +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> > > +{
> > > +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
> > > +    DeviceState *vdev = DEVICE(&vpmem->vdev);
> > > +
> > > +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
> > > +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
> > > +}
> > > +
> > > +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
> > > +{
> > > +    DeviceClass *dc = DEVICE_CLASS(klass);
> > > +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
> > > +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
> > > +    k->realize = virtio_pmem_pci_realize;
> > > +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
> > > +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
> > > +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
> > > +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
> > > +    pcidev_k->class_id = PCI_CLASS_OTHERS;
> > > +}
> > > +
> > > +static void virtio_pmem_pci_instance_init(Object *obj)
> > > +{
> > > +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
> > > +
> > > +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> > > +                                TYPE_VIRTIO_PMEM);
> > > +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
> > > +                              &error_abort);
> > > +}
> > > +
> > > +static const TypeInfo virtio_pmem_pci_info = {
> > > +    .name          = TYPE_VIRTIO_PMEM_PCI,
> > > +    .parent        = TYPE_VIRTIO_PCI,
> > > +    .instance_size = sizeof(VirtIOPMEMPCI),
> > > +    .instance_init = virtio_pmem_pci_instance_init,
> > > +    .class_init    = virtio_pmem_pci_class_init,
> > > +};
> > > +
> > > +
> > >  /* virtio-input-pci */
> > >  
> > >  static Property virtio_input_pci_properties[] = {
> > > @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
> > >      type_register_static(&virtio_balloon_pci_info);
> > >      type_register_static(&virtio_serial_pci_info);
> > >      type_register_static(&virtio_net_pci_info);
> > > +    type_register_static(&virtio_pmem_pci_info);
> > >  #ifdef CONFIG_VHOST_SCSI
> > >      type_register_static(&vhost_scsi_pci_info);
> > >  #endif
> > > diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> > > index 813082b0d7..fe74fcad3f 100644
> > > --- a/hw/virtio/virtio-pci.h
> > > +++ b/hw/virtio/virtio-pci.h
> > > @@ -19,6 +19,7 @@
> > >  #include "hw/virtio/virtio-blk.h"
> > >  #include "hw/virtio/virtio-net.h"
> > >  #include "hw/virtio/virtio-rng.h"
> > > +#include "hw/virtio/virtio-pmem.h"
> > >  #include "hw/virtio/virtio-serial.h"
> > >  #include "hw/virtio/virtio-scsi.h"
> > >  #include "hw/virtio/virtio-balloon.h"
> > > @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
> > >  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
> > >  typedef struct VHostVSockPCI VHostVSockPCI;
> > >  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
> > > +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
> > >  
> > >  /* virtio-pci-bus */
> > >  
> > > @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
> > >      VirtIOBlock vdev;
> > >  };
> > >  
> > > +/*
> > > + * virtio-pmem-pci: This extends VirtioPCIProxy.
> > > + */
> > > +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
> > > +#define VIRTIO_PMEM_PCI(obj) \
> > > +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
> > > +
> > > +struct VirtIOPMEMPCI {
> > > +    VirtIOPCIProxy parent_obj;
> > > +    VirtIOPMEM vdev;
> > > +};
> > > +
> > >  /*
> > >   * virtio-balloon-pci: This extends VirtioPCIProxy.
> > >   */
> > > diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> > > new file mode 100644
> > > index 0000000000..08c96d7e80
> > > --- /dev/null
> > > +++ b/hw/virtio/virtio-pmem.c
> > > @@ -0,0 +1,241 @@
> > > +/*
> > > + * Virtio pmem device
> > > + *
> > > + * Copyright (C) 2018 Red Hat, Inc.
> > > + * Copyright (C) 2018 Pankaj Gupta <pagupta@redhat.com>
> > > + *
> > > + * This work is licensed under the terms of the GNU GPL, version 2.
> > > + * See the COPYING file in the top-level directory.
> > > + *
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "qapi/error.h"
> > > +#include "qemu-common.h"
> > > +#include "qemu/error-report.h"
> > > +#include "hw/virtio/virtio-access.h"
> > > +#include "hw/virtio/virtio-pmem.h"
> > > +#include "hw/mem/memory-device.h"
> > > +#include "block/aio.h"
> > > +#include "block/thread-pool.h"
> > > +
> > > +typedef struct VirtIOPMEMresp {
> > > +    int ret;
> > > +} VirtIOPMEMResp;
> > > +
> > > +typedef struct VirtIODeviceRequest {
> > > +    VirtQueueElement elem;
> > > +    int fd;
> > > +    VirtIOPMEM *pmem;
> > > +    VirtIOPMEMResp resp;
> > > +} VirtIODeviceRequest;
> > > +
> > > +static int worker_cb(void *opaque)
> > > +{
> > > +    VirtIODeviceRequest *req = opaque;
> > > +    int err = 0;
> > > +
> > > +    /* flush raw backing image */
> > > +    err = fsync(req->fd);
> > > +    if (err != 0) {
> > > +        err = errno;
> > > +    }
> > > +    req->resp.ret = err;
> > 
> > Host question: are you returning the guest errno code to the host?
> 
> No. I am returning the error code from the host in case of host fsync
> failure, otherwise returning zero.

I think that's what Luiz meant.  errno constants are not portable
between operating systems and architectures.  Therefore they cannot be
used in external interfaces in software that expects to communicate with
other systems.

It will be necessary to define specific constants for virtio-pmem
instead of passing errno from the host to guest.

Stefan


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]         ` <367397176.52317488.1531979293251.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-19 12:39           ` Luiz Capitulino
  0 siblings, 0 replies; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-19 12:39 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA

On Thu, 19 Jul 2018 01:48:13 -0400 (EDT)
Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> >   
> > >  This patch adds virtio-pmem Qemu device.
> > > 
> > >  This device presents memory address range information to guest
> > >  which is backed by file backend type. It acts like persistent
> > >  memory device for KVM guest. Guest can perform read and persistent
> > >  write operations on this memory range with the help of DAX capable
> > >  filesystem.
> > > 
> > >  Persistent guest writes are assured with the help of virtio based
> > >  flushing interface. When guest userspace performs fsync on a
> > >  file fd on the pmem device, a flush command is sent to Qemu over
> > >  VIRTIO and a host-side flush/sync is done on the backing image file.
> > > 
> > > Changes from RFC v2:
> > > - Use aio_worker() to avoid Qemu from hanging with blocking fsync
> > >   call - Stefan
> > > - Use virtio_st*_p() for endianness - Stefan
> > > - Correct indentation in qapi/misc.json - Eric
> > > 
> > > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > ---
> > >  hw/virtio/Makefile.objs                     |   3 +
> > >  hw/virtio/virtio-pci.c                      |  44 +++++
> > >  hw/virtio/virtio-pci.h                      |  14 ++
> > >  hw/virtio/virtio-pmem.c                     | 241
> > >  ++++++++++++++++++++++++++++
> > >  include/hw/pci/pci.h                        |   1 +
> > >  include/hw/virtio/virtio-pmem.h             |  42 +++++
> > >  include/standard-headers/linux/virtio_ids.h |   1 +
> > >  qapi/misc.json                              |  26 ++-
> > >  8 files changed, 371 insertions(+), 1 deletion(-)
> > >  create mode 100644 hw/virtio/virtio-pmem.c
> > >  create mode 100644 include/hw/virtio/virtio-pmem.h
> > > 
> > > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > > index 1b2799cfd8..7f914d45d0 100644
> > > --- a/hw/virtio/Makefile.objs
> > > +++ b/hw/virtio/Makefile.objs
> > > @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> > >  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) +=
> > >  virtio-crypto-pci.o
> > >  
> > >  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
> > > +ifeq ($(CONFIG_MEM_HOTPLUG),y)
> > > +obj-$(CONFIG_LINUX) += virtio-pmem.o
> > > +endif
> > >  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
> > >  endif
> > >  
> > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > > index 3a01fe90f0..93d3fc05c7 100644
> > > --- a/hw/virtio/virtio-pci.c
> > > +++ b/hw/virtio/virtio-pci.c
> > > @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
> > >      .class_init    = virtio_rng_pci_class_init,
> > >  };
> > >  
> > > +/* virtio-pmem-pci */
> > > +
> > > +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error
> > > **errp)
> > > +{
> > > +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
> > > +    DeviceState *vdev = DEVICE(&vpmem->vdev);
> > > +
> > > +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
> > > +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
> > > +}
> > > +
> > > +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
> > > +{
> > > +    DeviceClass *dc = DEVICE_CLASS(klass);
> > > +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
> > > +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
> > > +    k->realize = virtio_pmem_pci_realize;
> > > +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
> > > +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
> > > +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
> > > +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
> > > +    pcidev_k->class_id = PCI_CLASS_OTHERS;
> > > +}
> > > +
> > > +static void virtio_pmem_pci_instance_init(Object *obj)
> > > +{
> > > +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
> > > +
> > > +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> > > +                                TYPE_VIRTIO_PMEM);
> > > +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
> > > +                              &error_abort);
> > > +}
> > > +
> > > +static const TypeInfo virtio_pmem_pci_info = {
> > > +    .name          = TYPE_VIRTIO_PMEM_PCI,
> > > +    .parent        = TYPE_VIRTIO_PCI,
> > > +    .instance_size = sizeof(VirtIOPMEMPCI),
> > > +    .instance_init = virtio_pmem_pci_instance_init,
> > > +    .class_init    = virtio_pmem_pci_class_init,
> > > +};
> > > +
> > > +
> > >  /* virtio-input-pci */
> > >  
> > >  static Property virtio_input_pci_properties[] = {
> > > @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
> > >      type_register_static(&virtio_balloon_pci_info);
> > >      type_register_static(&virtio_serial_pci_info);
> > >      type_register_static(&virtio_net_pci_info);
> > > +    type_register_static(&virtio_pmem_pci_info);
> > >  #ifdef CONFIG_VHOST_SCSI
> > >      type_register_static(&vhost_scsi_pci_info);
> > >  #endif
> > > diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> > > index 813082b0d7..fe74fcad3f 100644
> > > --- a/hw/virtio/virtio-pci.h
> > > +++ b/hw/virtio/virtio-pci.h
> > > @@ -19,6 +19,7 @@
> > >  #include "hw/virtio/virtio-blk.h"
> > >  #include "hw/virtio/virtio-net.h"
> > >  #include "hw/virtio/virtio-rng.h"
> > > +#include "hw/virtio/virtio-pmem.h"
> > >  #include "hw/virtio/virtio-serial.h"
> > >  #include "hw/virtio/virtio-scsi.h"
> > >  #include "hw/virtio/virtio-balloon.h"
> > > @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
> > >  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
> > >  typedef struct VHostVSockPCI VHostVSockPCI;
> > >  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
> > > +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
> > >  
> > >  /* virtio-pci-bus */
> > >  
> > > @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
> > >      VirtIOBlock vdev;
> > >  };
> > >  
> > > +/*
> > > + * virtio-pmem-pci: This extends VirtioPCIProxy.
> > > + */
> > > +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
> > > +#define VIRTIO_PMEM_PCI(obj) \
> > > +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
> > > +
> > > +struct VirtIOPMEMPCI {
> > > +    VirtIOPCIProxy parent_obj;
> > > +    VirtIOPMEM vdev;
> > > +};
> > > +
> > >  /*
> > >   * virtio-balloon-pci: This extends VirtioPCIProxy.
> > >   */
> > > diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> > > new file mode 100644
> > > index 0000000000..08c96d7e80
> > > --- /dev/null
> > > +++ b/hw/virtio/virtio-pmem.c
> > > @@ -0,0 +1,241 @@
> > > +/*
> > > + * Virtio pmem device
> > > + *
> > > + * Copyright (C) 2018 Red Hat, Inc.
> > > + * Copyright (C) 2018 Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > + *
> > > + * This work is licensed under the terms of the GNU GPL, version 2.
> > > + * See the COPYING file in the top-level directory.
> > > + *
> > > + */
> > > +
> > > +#include "qemu/osdep.h"
> > > +#include "qapi/error.h"
> > > +#include "qemu-common.h"
> > > +#include "qemu/error-report.h"
> > > +#include "hw/virtio/virtio-access.h"
> > > +#include "hw/virtio/virtio-pmem.h"
> > > +#include "hw/mem/memory-device.h"
> > > +#include "block/aio.h"
> > > +#include "block/thread-pool.h"
> > > +
> > > +typedef struct VirtIOPMEMresp {
> > > +    int ret;
> > > +} VirtIOPMEMResp;
> > > +
> > > +typedef struct VirtIODeviceRequest {
> > > +    VirtQueueElement elem;
> > > +    int fd;
> > > +    VirtIOPMEM *pmem;
> > > +    VirtIOPMEMResp resp;
> > > +} VirtIODeviceRequest;
> > > +
> > > +static int worker_cb(void *opaque)
> > > +{
> > > +    VirtIODeviceRequest *req = opaque;
> > > +    int err = 0;
> > > +
> > > +    /* flush raw backing image */
> > > +    err = fsync(req->fd);
> > > +    if (err != 0) {
> > > +        err = errno;
> > > +    }
> > > +    req->resp.ret = err;  
> > 
> > Host question: are you returning the guest errno code to the host?  
> 
> No. I am returning the error code from the host in case of host fsync
> failure, otherwise returning zero.

Sorry, that's what I meant. I exchanged host and guest, but what I
said still applies if you do s/host/guest.

> 
> Thanks,
> Pankaj
> > 
> > If yes, I don't think this is right. I think you probably want to
> > define a constant for error and let the host decide on the errno
> > code to return to the application.
> >   
> > > +
> > > +    return 0;
> > > +}
> > > +
> > > +static void done_cb(void *opaque, int ret)
> > > +{
> > > +    VirtIODeviceRequest *req = opaque;
> > > +    int len = iov_from_buf(req->elem.in_sg, req->elem.in_num, 0,
> > > +                              &req->resp, sizeof(VirtIOPMEMResp));
> > > +
> > > +    /* Callbacks are serialized, so no need to use atomic ops.  */
> > > +    virtqueue_push(req->pmem->rq_vq, &req->elem, len);
> > > +    virtio_notify((VirtIODevice *)req->pmem, req->pmem->rq_vq);
> > > +    g_free(req);
> > > +}
> > > +
> > > +static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
> > > +{
> > > +    VirtIODeviceRequest *req;
> > > +    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
> > > +    HostMemoryBackend *backend = MEMORY_BACKEND(pmem->memdev);
> > > +    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
> > > +
> > > +    req = virtqueue_pop(vq, sizeof(VirtIODeviceRequest));
> > > +    if (!req) {
> > > +        virtio_error(vdev, "virtio-pmem missing request data");
> > > +        return;
> > > +    }
> > > +
> > > +    if (req->elem.out_num < 1 || req->elem.in_num < 1) {
> > > +        virtio_error(vdev, "virtio-pmem request not proper");
> > > +        g_free(req);
> > > +        return;
> > > +    }
> > > +    req->fd = memory_region_get_fd(&backend->mr);
> > > +    req->pmem = pmem;
> > > +    thread_pool_submit_aio(pool, worker_cb, req, done_cb, req);
> > > +}
> > > +
> > > +static void virtio_pmem_get_config(VirtIODevice *vdev, uint8_t *config)
> > > +{
> > > +    VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
> > > +    struct virtio_pmem_config *pmemcfg = (struct virtio_pmem_config *)
> > > config;
> > > +
> > > +    virtio_stq_p(vdev, &pmemcfg->start, pmem->start);
> > > +    virtio_stq_p(vdev, &pmemcfg->size, pmem->size);
> > > +}
> > > +
> > > +static uint64_t virtio_pmem_get_features(VirtIODevice *vdev, uint64_t
> > > features,
> > > +                                        Error **errp)
> > > +{
> > > +    return features;
> > > +}
> > > +
> > > +static void virtio_pmem_realize(DeviceState *dev, Error **errp)
> > > +{
> > > +    VirtIODevice   *vdev   = VIRTIO_DEVICE(dev);
> > > +    VirtIOPMEM     *pmem   = VIRTIO_PMEM(dev);
> > > +    MachineState   *ms     = MACHINE(qdev_get_machine());
> > > +    uint64_t align;
> > > +    Error *local_err = NULL;
> > > +    MemoryRegion *mr;
> > > +
> > > +    if (!pmem->memdev) {
> > > +        error_setg(errp, "virtio-pmem memdev not set");
> > > +        return;
> > > +    }
> > > +
> > > +    mr  = host_memory_backend_get_memory(pmem->memdev);
> > > +    align = memory_region_get_alignment(mr);
> > > +    pmem->size = QEMU_ALIGN_DOWN(memory_region_size(mr), align);
> > > +    pmem->start = memory_device_get_free_addr(ms, NULL, align, pmem->size,
> > > +
> > > &local_err);
> > > +    if (local_err) {
> > > +        error_setg(errp, "Can't get free address in mem device");
> > > +        return;
> > > +    }
> > > +    memory_region_init_alias(&pmem->mr, OBJECT(pmem),
> > > +                             "virtio_pmem-memory", mr, 0, pmem->size);
> > > +    memory_device_plug_region(ms, &pmem->mr, pmem->start);
> > > +
> > > +    host_memory_backend_set_mapped(pmem->memdev, true);
> > > +    virtio_init(vdev, TYPE_VIRTIO_PMEM, VIRTIO_ID_PMEM,
> > > +                                          sizeof(struct
> > > virtio_pmem_config));
> > > +    pmem->rq_vq = virtio_add_queue(vdev, 128, virtio_pmem_flush);
> > > +}
> > > +
> > > +static void virtio_mem_check_memdev(Object *obj, const char *name, Object
> > > *val,
> > > +                                    Error **errp)
> > > +{
> > > +    if (host_memory_backend_is_mapped(MEMORY_BACKEND(val))) {
> > > +        char *path = object_get_canonical_path_component(val);
> > > +        error_setg(errp, "Can't use already busy memdev: %s", path);
> > > +        g_free(path);
> > > +        return;
> > > +    }
> > > +
> > > +    qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
> > > +}
> > > +
> > > +static const char *virtio_pmem_get_device_id(VirtIOPMEM *vm)
> > > +{
> > > +    Object *obj = OBJECT(vm);
> > > +    DeviceState *parent_dev;
> > > +
> > > +    /* always use the ID of the proxy device */
> > > +    if (obj->parent && object_dynamic_cast(obj->parent, TYPE_DEVICE)) {
> > > +        parent_dev = DEVICE(obj->parent);
> > > +        return parent_dev->id;
> > > +    }
> > > +    return NULL;
> > > +}
> > > +
> > > +static void virtio_pmem_md_fill_device_info(const MemoryDeviceState *md,
> > > +                                           MemoryDeviceInfo *info)
> > > +{
> > > +    VirtioPMemDeviceInfo *vi = g_new0(VirtioPMemDeviceInfo, 1);
> > > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > > +    const char *id = virtio_pmem_get_device_id(vm);
> > > +
> > > +    if (id) {
> > > +        vi->has_id = true;
> > > +        vi->id = g_strdup(id);
> > > +    }
> > > +
> > > +    vi->start = vm->start;
> > > +    vi->size = vm->size;
> > > +    vi->memdev = object_get_canonical_path(OBJECT(vm->memdev));
> > > +
> > > +    info->u.virtio_pmem.data = vi;
> > > +    info->type = MEMORY_DEVICE_INFO_KIND_VIRTIO_PMEM;
> > > +}
> > > +
> > > +static uint64_t virtio_pmem_md_get_addr(const MemoryDeviceState *md)
> > > +{
> > > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > > +
> > > +    return vm->start;
> > > +}
> > > +
> > > +static uint64_t virtio_pmem_md_get_plugged_size(const MemoryDeviceState
> > > *md)
> > > +{
> > > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > > +
> > > +    return vm->size;
> > > +}
> > > +
> > > +static uint64_t virtio_pmem_md_get_region_size(const MemoryDeviceState
> > > *md)
> > > +{
> > > +    VirtIOPMEM *vm = VIRTIO_PMEM(md);
> > > +
> > > +    return vm->size;
> > > +}
> > > +
> > > +static void virtio_pmem_instance_init(Object *obj)
> > > +{
> > > +    VirtIOPMEM *vm = VIRTIO_PMEM(obj);
> > > +    object_property_add_link(obj, "memdev", TYPE_MEMORY_BACKEND,
> > > +                                (Object **)&vm->memdev,
> > > +                                (void *) virtio_mem_check_memdev,
> > > +                                OBJ_PROP_LINK_STRONG,
> > > +                                &error_abort);
> > > +}
> > > +
> > > +
> > > +static void virtio_pmem_class_init(ObjectClass *klass, void *data)
> > > +{
> > > +    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> > > +    MemoryDeviceClass *mdc = MEMORY_DEVICE_CLASS(klass);
> > > +
> > > +    vdc->realize      =  virtio_pmem_realize;
> > > +    vdc->get_config   =  virtio_pmem_get_config;
> > > +    vdc->get_features =  virtio_pmem_get_features;
> > > +
> > > +    mdc->get_addr         = virtio_pmem_md_get_addr;
> > > +    mdc->get_plugged_size = virtio_pmem_md_get_plugged_size;
> > > +    mdc->get_region_size  = virtio_pmem_md_get_region_size;
> > > +    mdc->fill_device_info = virtio_pmem_md_fill_device_info;
> > > +}
> > > +
> > > +static TypeInfo virtio_pmem_info = {
> > > +    .name          = TYPE_VIRTIO_PMEM,
> > > +    .parent        = TYPE_VIRTIO_DEVICE,
> > > +    .class_init    = virtio_pmem_class_init,
> > > +    .instance_size = sizeof(VirtIOPMEM),
> > > +    .instance_init = virtio_pmem_instance_init,
> > > +    .interfaces = (InterfaceInfo[]) {
> > > +        { TYPE_MEMORY_DEVICE },
> > > +        { }
> > > +  },
> > > +};
> > > +
> > > +static void virtio_register_types(void)
> > > +{
> > > +    type_register_static(&virtio_pmem_info);
> > > +}
> > > +
> > > +type_init(virtio_register_types)
> > > diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
> > > index 990d6fcbde..28829b6437 100644
> > > --- a/include/hw/pci/pci.h
> > > +++ b/include/hw/pci/pci.h
> > > @@ -85,6 +85,7 @@ extern bool pci_available;
> > >  #define PCI_DEVICE_ID_VIRTIO_RNG         0x1005
> > >  #define PCI_DEVICE_ID_VIRTIO_9P          0x1009
> > >  #define PCI_DEVICE_ID_VIRTIO_VSOCK       0x1012
> > > +#define PCI_DEVICE_ID_VIRTIO_PMEM        0x1013
> > >  
> > >  #define PCI_VENDOR_ID_REDHAT             0x1b36
> > >  #define PCI_DEVICE_ID_REDHAT_BRIDGE      0x0001
> > > diff --git a/include/hw/virtio/virtio-pmem.h
> > > b/include/hw/virtio/virtio-pmem.h
> > > new file mode 100644
> > > index 0000000000..fda3ee691c
> > > --- /dev/null
> > > +++ b/include/hw/virtio/virtio-pmem.h
> > > @@ -0,0 +1,42 @@
> > > +/*
> > > + * Virtio pmem Device
> > > + *
> > > + * Copyright Red Hat, Inc. 2018
> > > + * Copyright Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > + *
> > > + * This work is licensed under the terms of the GNU GPL, version 2 or
> > > + * (at your option) any later version.  See the COPYING file in the
> > > + * top-level directory.
> > > + */
> > > +
> > > +#ifndef QEMU_VIRTIO_PMEM_H
> > > +#define QEMU_VIRTIO_PMEM_H
> > > +
> > > +#include "hw/virtio/virtio.h"
> > > +#include "exec/memory.h"
> > > +#include "sysemu/hostmem.h"
> > > +#include "standard-headers/linux/virtio_ids.h"
> > > +#include "hw/boards.h"
> > > +#include "hw/i386/pc.h"
> > > +
> > > +#define TYPE_VIRTIO_PMEM "virtio-pmem"
> > > +
> > > +#define VIRTIO_PMEM(obj) \
> > > +        OBJECT_CHECK(VirtIOPMEM, (obj), TYPE_VIRTIO_PMEM)
> > > +
> > > +/* VirtIOPMEM device structure */
> > > +typedef struct VirtIOPMEM {
> > > +    VirtIODevice parent_obj;
> > > +
> > > +    VirtQueue *rq_vq;
> > > +    uint64_t start;
> > > +    uint64_t size;
> > > +    MemoryRegion mr;
> > > +    HostMemoryBackend *memdev;
> > > +} VirtIOPMEM;
> > > +
> > > +struct virtio_pmem_config {
> > > +    uint64_t start;
> > > +    uint64_t size;
> > > +};
> > > +#endif
> > > diff --git a/include/standard-headers/linux/virtio_ids.h
> > > b/include/standard-headers/linux/virtio_ids.h
> > > index 6d5c3b2d4f..346389565a 100644
> > > --- a/include/standard-headers/linux/virtio_ids.h
> > > +++ b/include/standard-headers/linux/virtio_ids.h
> > > @@ -43,5 +43,6 @@
> > >  #define VIRTIO_ID_INPUT        18 /* virtio input */
> > >  #define VIRTIO_ID_VSOCK        19 /* virtio vsock transport */
> > >  #define VIRTIO_ID_CRYPTO       20 /* virtio crypto */
> > > +#define VIRTIO_ID_PMEM         25 /* virtio pmem */
> > >  
> > >  #endif /* _LINUX_VIRTIO_IDS_H */
> > > diff --git a/qapi/misc.json b/qapi/misc.json
> > > index 29da7856e3..fb85dd6f6c 100644
> > > --- a/qapi/misc.json
> > > +++ b/qapi/misc.json
> > > @@ -2907,6 +2907,29 @@
> > >            }
> > >  }
> > >  
> > > +##
> > > +# @VirtioPMemDeviceInfo:
> > > +#
> > > +# VirtioPMem state information
> > > +#
> > > +# @id: device's ID
> > > +#
> > > +# @start: physical address, where device is mapped
> > > +#
> > > +# @size: size of memory that the device provides
> > > +#
> > > +# @memdev: memory backend linked with device
> > > +#
> > > +# Since: 2.13
> > > +##
> > > +{ 'struct': 'VirtioPMemDeviceInfo',
> > > +  'data': { '*id': 'str',
> > > +            'start': 'size',
> > > +            'size': 'size',
> > > +            'memdev': 'str'
> > > +          }
> > > +}
> > > +
> > >  ##
> > >  # @MemoryDeviceInfo:
> > >  #
> > > @@ -2916,7 +2939,8 @@
> > >  ##
> > >  { 'union': 'MemoryDeviceInfo',
> > >    'data': { 'dimm': 'PCDIMMDeviceInfo',
> > > -            'nvdimm': 'PCDIMMDeviceInfo'
> > > +            'nvdimm': 'PCDIMMDeviceInfo',
> > > +	    'virtio-pmem': 'VirtioPMemDeviceInfo'
> > >            }
> > >  }
> > >    
> > 
> > 
> >   
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]           ` <20180719121635.GA28107-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
@ 2018-07-19 12:48             ` Luiz Capitulino
  2018-07-19 12:57               ` Luiz Capitulino
  2018-07-20 13:04               ` Pankaj Gupta
  2018-07-19 13:58             ` David Hildenbrand
  1 sibling, 2 replies; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-19 12:48 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w, riel-ebMLmSuQjDVBDgjK7y7TUQ,
	david-H+wXaHxf7aLQT0dZR+AlfA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA, ross zwisler,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA

On Thu, 19 Jul 2018 13:16:35 +0100
Stefan Hajnoczi <stefanha-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> On Thu, Jul 19, 2018 at 01:48:13AM -0400, Pankaj Gupta wrote:
> >   
> > >   
> > > >  This patch adds virtio-pmem Qemu device.
> > > > 
> > > >  This device presents memory address range information to guest
> > > >  which is backed by file backend type. It acts like persistent
> > > >  memory device for KVM guest. Guest can perform read and persistent
> > > >  write operations on this memory range with the help of DAX capable
> > > >  filesystem.
> > > > 
> > > >  Persistent guest writes are assured with the help of virtio based
> > > >  flushing interface. When guest userspace space performs fsync on
> > > >  file fd on pmem device, a flush command is send to Qemu over VIRTIO
> > > >  and host side flush/sync is done on backing image file.
> > > > 
> > > > Changes from RFC v2:
> > > > - Use aio_worker() to avoid Qemu from hanging with blocking fsync
> > > >   call - Stefan
> > > > - Use virtio_st*_p() for endianness - Stefan
> > > > - Correct indentation in qapi/misc.json - Eric
> > > > 
> > > > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > ---
> > > >  hw/virtio/Makefile.objs                     |   3 +
> > > >  hw/virtio/virtio-pci.c                      |  44 +++++
> > > >  hw/virtio/virtio-pci.h                      |  14 ++
> > > >  hw/virtio/virtio-pmem.c                     | 241
> > > >  ++++++++++++++++++++++++++++
> > > >  include/hw/pci/pci.h                        |   1 +
> > > >  include/hw/virtio/virtio-pmem.h             |  42 +++++
> > > >  include/standard-headers/linux/virtio_ids.h |   1 +
> > > >  qapi/misc.json                              |  26 ++-
> > > >  8 files changed, 371 insertions(+), 1 deletion(-)
> > > >  create mode 100644 hw/virtio/virtio-pmem.c
> > > >  create mode 100644 include/hw/virtio/virtio-pmem.h
> > > > 
> > > > diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> > > > index 1b2799cfd8..7f914d45d0 100644
> > > > --- a/hw/virtio/Makefile.objs
> > > > +++ b/hw/virtio/Makefile.objs
> > > > @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> > > >  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) +=
> > > >  virtio-crypto-pci.o
> > > >  
> > > >  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
> > > > +ifeq ($(CONFIG_MEM_HOTPLUG),y)
> > > > +obj-$(CONFIG_LINUX) += virtio-pmem.o
> > > > +endif
> > > >  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
> > > >  endif
> > > >  
> > > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > > > index 3a01fe90f0..93d3fc05c7 100644
> > > > --- a/hw/virtio/virtio-pci.c
> > > > +++ b/hw/virtio/virtio-pci.c
> > > > @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
> > > >      .class_init    = virtio_rng_pci_class_init,
> > > >  };
> > > >  
> > > > +/* virtio-pmem-pci */
> > > > +
> > > > +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error
> > > > **errp)
> > > > +{
> > > > +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
> > > > +    DeviceState *vdev = DEVICE(&vpmem->vdev);
> > > > +
> > > > +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
> > > > +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
> > > > +}
> > > > +
> > > > +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
> > > > +{
> > > > +    DeviceClass *dc = DEVICE_CLASS(klass);
> > > > +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
> > > > +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
> > > > +    k->realize = virtio_pmem_pci_realize;
> > > > +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
> > > > +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
> > > > +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
> > > > +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
> > > > +    pcidev_k->class_id = PCI_CLASS_OTHERS;
> > > > +}
> > > > +
> > > > +static void virtio_pmem_pci_instance_init(Object *obj)
> > > > +{
> > > > +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
> > > > +
> > > > +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> > > > +                                TYPE_VIRTIO_PMEM);
> > > > +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
> > > > +                              &error_abort);
> > > > +}
> > > > +
> > > > +static const TypeInfo virtio_pmem_pci_info = {
> > > > +    .name          = TYPE_VIRTIO_PMEM_PCI,
> > > > +    .parent        = TYPE_VIRTIO_PCI,
> > > > +    .instance_size = sizeof(VirtIOPMEMPCI),
> > > > +    .instance_init = virtio_pmem_pci_instance_init,
> > > > +    .class_init    = virtio_pmem_pci_class_init,
> > > > +};
> > > > +
> > > > +
> > > >  /* virtio-input-pci */
> > > >  
> > > >  static Property virtio_input_pci_properties[] = {
> > > > @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
> > > >      type_register_static(&virtio_balloon_pci_info);
> > > >      type_register_static(&virtio_serial_pci_info);
> > > >      type_register_static(&virtio_net_pci_info);
> > > > +    type_register_static(&virtio_pmem_pci_info);
> > > >  #ifdef CONFIG_VHOST_SCSI
> > > >      type_register_static(&vhost_scsi_pci_info);
> > > >  #endif
> > > > diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> > > > index 813082b0d7..fe74fcad3f 100644
> > > > --- a/hw/virtio/virtio-pci.h
> > > > +++ b/hw/virtio/virtio-pci.h
> > > > @@ -19,6 +19,7 @@
> > > >  #include "hw/virtio/virtio-blk.h"
> > > >  #include "hw/virtio/virtio-net.h"
> > > >  #include "hw/virtio/virtio-rng.h"
> > > > +#include "hw/virtio/virtio-pmem.h"
> > > >  #include "hw/virtio/virtio-serial.h"
> > > >  #include "hw/virtio/virtio-scsi.h"
> > > >  #include "hw/virtio/virtio-balloon.h"
> > > > @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
> > > >  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
> > > >  typedef struct VHostVSockPCI VHostVSockPCI;
> > > >  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
> > > > +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
> > > >  
> > > >  /* virtio-pci-bus */
> > > >  
> > > > @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
> > > >      VirtIOBlock vdev;
> > > >  };
> > > >  
> > > > +/*
> > > > + * virtio-pmem-pci: This extends VirtioPCIProxy.
> > > > + */
> > > > +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
> > > > +#define VIRTIO_PMEM_PCI(obj) \
> > > > +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
> > > > +
> > > > +struct VirtIOPMEMPCI {
> > > > +    VirtIOPCIProxy parent_obj;
> > > > +    VirtIOPMEM vdev;
> > > > +};
> > > > +
> > > >  /*
> > > >   * virtio-balloon-pci: This extends VirtioPCIProxy.
> > > >   */
> > > > diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> > > > new file mode 100644
> > > > index 0000000000..08c96d7e80
> > > > --- /dev/null
> > > > +++ b/hw/virtio/virtio-pmem.c
> > > > @@ -0,0 +1,241 @@
> > > > +/*
> > > > + * Virtio pmem device
> > > > + *
> > > > + * Copyright (C) 2018 Red Hat, Inc.
> > > > + * Copyright (C) 2018 Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > > > + *
> > > > + * This work is licensed under the terms of the GNU GPL, version 2.
> > > > + * See the COPYING file in the top-level directory.
> > > > + *
> > > > + */
> > > > +
> > > > +#include "qemu/osdep.h"
> > > > +#include "qapi/error.h"
> > > > +#include "qemu-common.h"
> > > > +#include "qemu/error-report.h"
> > > > +#include "hw/virtio/virtio-access.h"
> > > > +#include "hw/virtio/virtio-pmem.h"
> > > > +#include "hw/mem/memory-device.h"
> > > > +#include "block/aio.h"
> > > > +#include "block/thread-pool.h"
> > > > +
> > > > +typedef struct VirtIOPMEMresp {
> > > > +    int ret;
> > > > +} VirtIOPMEMResp;
> > > > +
> > > > +typedef struct VirtIODeviceRequest {
> > > > +    VirtQueueElement elem;
> > > > +    int fd;
> > > > +    VirtIOPMEM *pmem;
> > > > +    VirtIOPMEMResp resp;
> > > > +} VirtIODeviceRequest;
> > > > +
> > > > +static int worker_cb(void *opaque)
> > > > +{
> > > > +    VirtIODeviceRequest *req = opaque;
> > > > +    int err = 0;
> > > > +
> > > > +    /* flush raw backing image */
> > > > +    err = fsync(req->fd);
> > > > +    if (err != 0) {
> > > > +        err = errno;
> > > > +    }
> > > > +    req->resp.ret = err;  
> > > 
> > > Host question: are you returning the guest errno code to the host?  
> > 
> > No. I am returning the error code from the host in case of host fsync
> > failure, otherwise returning zero.  
> 
> I think that's what Luiz meant.  errno constants are not portable
> between operating systems and architectures.  Therefore they cannot be
> used in external interfaces in software that expects to communicate with
> other systems.

Oh, thanks. Only saw this email now.

> It will be necessary to define specific constants for virtio-pmem
> instead of passing errno from the host to guest.

Yes, defining your own constants works. But I think the only fsync()
error that will make sense for the guest is EIO. The other errors
only make sense for the host.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
  2018-07-19 12:48             ` Luiz Capitulino
@ 2018-07-19 12:57               ` Luiz Capitulino
  2018-07-20 13:04               ` Pankaj Gupta
  1 sibling, 0 replies; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-19 12:57 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Pankaj Gupta, kwolf, haozhong zhang, jack, xiaoguangrong eric,
	kvm, riel, linux-nvdimm, david, ross zwisler, linux-kernel,
	qemu-devel, hch, imammedo, mst, niteshnarayanlal, pbonzini,
	dan j williams, nilal

On Thu, 19 Jul 2018 08:48:19 -0400
Luiz Capitulino <lcapitulino@redhat.com> wrote:

> > It will be necessary to define specific constants for virtio-pmem
> > instead of passing errno from the host to guest.  
> 
> Yes, defining your own constants works. But I think the only fsync()
> error that will make sense for the guest is EIO. The other errors
> only make sense for the host.

Just to clarify: of course you'll return an error to the guest on any
fsync() error. But maybe you should always return EIO even if the error
was EBADF for example. Or just signal the error with some constant,
and let the guest implementation pick any errno it prefers (this was
my first suggestion).
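The suggestion above can be sketched in a few lines of C (an illustration only; the constant names and the helper are hypothetical, not taken from the posted patch or any virtio specification):

```c
#include <errno.h>

/* Hypothetical device-level response codes -- placeholders for whatever
 * the virtio-pmem spec would eventually define. */
enum {
    VIRTIO_PMEM_RESP_OK  = 0,
    VIRTIO_PMEM_RESP_EIO = 1,
};

/* Collapse any host fsync() errno into a portable response code so that
 * no raw host errno value ever crosses the host/guest boundary. */
static int virtio_pmem_host_err_to_resp(int host_errno)
{
    return host_errno == 0 ? VIRTIO_PMEM_RESP_OK : VIRTIO_PMEM_RESP_EIO;
}
```

The guest driver would then map VIRTIO_PMEM_RESP_EIO to its own local -EIO, keeping errno values private to each side of the interface.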

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]           ` <20180719121635.GA28107-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
  2018-07-19 12:48             ` Luiz Capitulino
@ 2018-07-19 13:58             ` David Hildenbrand
       [not found]               ` <b6ef19f3-7f16-5427-bfed-f352a76e48b7-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: David Hildenbrand @ 2018-07-19 13:58 UTC (permalink / raw)
  To: Stefan Hajnoczi, Pankaj Gupta
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, qemu-devel-qX2TKyscuCcdnm+yROfE0A,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w, mst-H+wXaHxf7aLQT0dZR+AlfA,
	ross zwisler, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	Luiz Capitulino, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA

On 19.07.2018 14:16, Stefan Hajnoczi wrote:
> On Thu, Jul 19, 2018 at 01:48:13AM -0400, Pankaj Gupta wrote:
>>
>>>
>>>>  This patch adds virtio-pmem Qemu device.
>>>>
>>>>  This device presents memory address range information to guest
>>>>  which is backed by file backend type. It acts like persistent
>>>>  memory device for KVM guest. Guest can perform read and persistent
>>>>  write operations on this memory range with the help of DAX capable
>>>>  filesystem.
>>>>
>>>>  Persistent guest writes are assured with the help of virtio based
>>>>  flushing interface. When guest userspace performs fsync on a
>>>>  file fd on the pmem device, a flush command is sent to Qemu over VIRTIO
>>>>  and host side flush/sync is done on backing image file.
>>>>
>>>> Changes from RFC v2:
>>>> - Use aio_worker() to avoid Qemu from hanging with blocking fsync
>>>>   call - Stefan
>>>> - Use virtio_st*_p() for endianess - Stefan
>>>> - Correct indentation in qapi/misc.json - Eric
>>>>
>>>> Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>>>> ---
>>>>  hw/virtio/Makefile.objs                     |   3 +
>>>>  hw/virtio/virtio-pci.c                      |  44 +++++
>>>>  hw/virtio/virtio-pci.h                      |  14 ++
>>>>  hw/virtio/virtio-pmem.c                     | 241
>>>>  ++++++++++++++++++++++++++++
>>>>  include/hw/pci/pci.h                        |   1 +
>>>>  include/hw/virtio/virtio-pmem.h             |  42 +++++
>>>>  include/standard-headers/linux/virtio_ids.h |   1 +
>>>>  qapi/misc.json                              |  26 ++-
>>>>  8 files changed, 371 insertions(+), 1 deletion(-)
>>>>  create mode 100644 hw/virtio/virtio-pmem.c
>>>>  create mode 100644 include/hw/virtio/virtio-pmem.h
>>>>
>>>> diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
>>>> index 1b2799cfd8..7f914d45d0 100644
>>>> --- a/hw/virtio/Makefile.objs
>>>> +++ b/hw/virtio/Makefile.objs
>>>> @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
>>>>  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) +=
>>>>  virtio-crypto-pci.o
>>>>  
>>>>  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
>>>> +ifeq ($(CONFIG_MEM_HOTPLUG),y)
>>>> +obj-$(CONFIG_LINUX) += virtio-pmem.o
>>>> +endif
>>>>  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
>>>>  endif
>>>>  
>>>> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>>>> index 3a01fe90f0..93d3fc05c7 100644
>>>> --- a/hw/virtio/virtio-pci.c
>>>> +++ b/hw/virtio/virtio-pci.c
>>>> @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
>>>>      .class_init    = virtio_rng_pci_class_init,
>>>>  };
>>>>  
>>>> +/* virtio-pmem-pci */
>>>> +
>>>> +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error
>>>> **errp)
>>>> +{
>>>> +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
>>>> +    DeviceState *vdev = DEVICE(&vpmem->vdev);
>>>> +
>>>> +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
>>>> +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
>>>> +}
>>>> +
>>>> +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
>>>> +{
>>>> +    DeviceClass *dc = DEVICE_CLASS(klass);
>>>> +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
>>>> +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
>>>> +    k->realize = virtio_pmem_pci_realize;
>>>> +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
>>>> +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
>>>> +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
>>>> +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
>>>> +    pcidev_k->class_id = PCI_CLASS_OTHERS;
>>>> +}
>>>> +
>>>> +static void virtio_pmem_pci_instance_init(Object *obj)
>>>> +{
>>>> +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
>>>> +
>>>> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>>>> +                                TYPE_VIRTIO_PMEM);
>>>> +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
>>>> +                              &error_abort);
>>>> +}
>>>> +
>>>> +static const TypeInfo virtio_pmem_pci_info = {
>>>> +    .name          = TYPE_VIRTIO_PMEM_PCI,
>>>> +    .parent        = TYPE_VIRTIO_PCI,
>>>> +    .instance_size = sizeof(VirtIOPMEMPCI),
>>>> +    .instance_init = virtio_pmem_pci_instance_init,
>>>> +    .class_init    = virtio_pmem_pci_class_init,
>>>> +};
>>>> +
>>>> +
>>>>  /* virtio-input-pci */
>>>>  
>>>>  static Property virtio_input_pci_properties[] = {
>>>> @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
>>>>      type_register_static(&virtio_balloon_pci_info);
>>>>      type_register_static(&virtio_serial_pci_info);
>>>>      type_register_static(&virtio_net_pci_info);
>>>> +    type_register_static(&virtio_pmem_pci_info);
>>>>  #ifdef CONFIG_VHOST_SCSI
>>>>      type_register_static(&vhost_scsi_pci_info);
>>>>  #endif
>>>> diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
>>>> index 813082b0d7..fe74fcad3f 100644
>>>> --- a/hw/virtio/virtio-pci.h
>>>> +++ b/hw/virtio/virtio-pci.h
>>>> @@ -19,6 +19,7 @@
>>>>  #include "hw/virtio/virtio-blk.h"
>>>>  #include "hw/virtio/virtio-net.h"
>>>>  #include "hw/virtio/virtio-rng.h"
>>>> +#include "hw/virtio/virtio-pmem.h"
>>>>  #include "hw/virtio/virtio-serial.h"
>>>>  #include "hw/virtio/virtio-scsi.h"
>>>>  #include "hw/virtio/virtio-balloon.h"
>>>> @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
>>>>  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
>>>>  typedef struct VHostVSockPCI VHostVSockPCI;
>>>>  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
>>>> +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
>>>>  
>>>>  /* virtio-pci-bus */
>>>>  
>>>> @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
>>>>      VirtIOBlock vdev;
>>>>  };
>>>>  
>>>> +/*
>>>> + * virtio-pmem-pci: This extends VirtioPCIProxy.
>>>> + */
>>>> +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
>>>> +#define VIRTIO_PMEM_PCI(obj) \
>>>> +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
>>>> +
>>>> +struct VirtIOPMEMPCI {
>>>> +    VirtIOPCIProxy parent_obj;
>>>> +    VirtIOPMEM vdev;
>>>> +};
>>>> +
>>>>  /*
>>>>   * virtio-balloon-pci: This extends VirtioPCIProxy.
>>>>   */
>>>> diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
>>>> new file mode 100644
>>>> index 0000000000..08c96d7e80
>>>> --- /dev/null
>>>> +++ b/hw/virtio/virtio-pmem.c
>>>> @@ -0,0 +1,241 @@
>>>> +/*
>>>> + * Virtio pmem device
>>>> + *
>>>> + * Copyright (C) 2018 Red Hat, Inc.
>>>> + * Copyright (C) 2018 Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>>>> + *
>>>> + * This work is licensed under the terms of the GNU GPL, version 2.
>>>> + * See the COPYING file in the top-level directory.
>>>> + *
>>>> + */
>>>> +
>>>> +#include "qemu/osdep.h"
>>>> +#include "qapi/error.h"
>>>> +#include "qemu-common.h"
>>>> +#include "qemu/error-report.h"
>>>> +#include "hw/virtio/virtio-access.h"
>>>> +#include "hw/virtio/virtio-pmem.h"
>>>> +#include "hw/mem/memory-device.h"
>>>> +#include "block/aio.h"
>>>> +#include "block/thread-pool.h"
>>>> +
>>>> +typedef struct VirtIOPMEMresp {
>>>> +    int ret;
>>>> +} VirtIOPMEMResp;
>>>> +
>>>> +typedef struct VirtIODeviceRequest {
>>>> +    VirtQueueElement elem;
>>>> +    int fd;
>>>> +    VirtIOPMEM *pmem;
>>>> +    VirtIOPMEMResp resp;
>>>> +} VirtIODeviceRequest;
>>>> +
>>>> +static int worker_cb(void *opaque)
>>>> +{
>>>> +    VirtIODeviceRequest *req = opaque;
>>>> +    int err = 0;
>>>> +
>>>> +    /* flush raw backing image */
>>>> +    err = fsync(req->fd);
>>>> +    if (err != 0) {
>>>> +        err = errno;
>>>> +    }
>>>> +    req->resp.ret = err;
>>>
>>> Host question: are you returning the guest errno code to the host?
>>
>> No. I am returning the error code from the host in case of host fsync
>> failure, otherwise returning zero.
> 
> I think that's what Luiz meant.  errno constants are not portable
> between operating systems and architectures.  Therefore they cannot be
> used in external interfaces in software that expects to communicate with
> other systems.
> 
> It will be necessary to define specific constants for virtio-pmem
> instead of passing errno from the host to guest.
> 

In general, I wonder if we should report errors at all or rather *kill*
the guest. That might sound harsh, but think about the following scenario:

fsync() fails due to some block that cannot be written (e.g. the
network connection failed). What happens if our guest tries to
read/write that mmapped block?

I assume we'll get a signal and get killed? So we are trying to optimize
one special case (fsync()) although every read/write is prone to killing
the guest. And as soon as the guest tries to access the block that
made fsync fail, we will crash the guest either way.

I assume the main problem is that we are trying to take a file (with all
the errors that can happen during read/write/fsync) and make it look
like memory (dax). On ordinary block access, we can forward errors, but
not if it's memory (maybe using MCE, but it's complicated and
architecture specific).

So I wonder if we should rather assume that our backend file is placed
on some stable storage that cannot easily fail.

(we might have the same problem with NVDIMM right now, at least the
memory reading/writing part)

It's complicated and I am not a block level expert :)

> Stefan
> 


-- 

Thanks,

David / dhildenb

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]               ` <b6ef19f3-7f16-5427-bfed-f352a76e48b7-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-19 15:48                 ` Luiz Capitulino
  2018-07-20 13:02                 ` Pankaj Gupta
  1 sibling, 0 replies; 28+ messages in thread
From: Luiz Capitulino @ 2018-07-19 15:48 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w, riel-ebMLmSuQjDVBDgjK7y7TUQ,
	ross zwisler, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	Stefan Hajnoczi, niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA

On Thu, 19 Jul 2018 15:58:20 +0200
David Hildenbrand <david-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

> On 19.07.2018 14:16, Stefan Hajnoczi wrote:
> > On Thu, Jul 19, 2018 at 01:48:13AM -0400, Pankaj Gupta wrote:  
> >>  
> >>>  
> >>>>  This patch adds virtio-pmem Qemu device.
> >>>>
> >>>>  This device presents memory address range information to guest
> >>>>  which is backed by file backend type. It acts like persistent
> >>>>  memory device for KVM guest. Guest can perform read and persistent
> >>>>  write operations on this memory range with the help of DAX capable
> >>>>  filesystem.
> >>>>
> >>>>  Persistent guest writes are assured with the help of virtio based
> >>>>  flushing interface. When guest userspace performs fsync on a
> >>>>  file fd on the pmem device, a flush command is sent to Qemu over VIRTIO
> >>>>  and host side flush/sync is done on backing image file.
> >>>>
> >>>> Changes from RFC v2:
> >>>> - Use aio_worker() to avoid Qemu from hanging with blocking fsync
> >>>>   call - Stefan
> >>>> - Use virtio_st*_p() for endianess - Stefan
> >>>> - Correct indentation in qapi/misc.json - Eric
> >>>>
> >>>> Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> >>>> ---
> >>>>  hw/virtio/Makefile.objs                     |   3 +
> >>>>  hw/virtio/virtio-pci.c                      |  44 +++++
> >>>>  hw/virtio/virtio-pci.h                      |  14 ++
> >>>>  hw/virtio/virtio-pmem.c                     | 241
> >>>>  ++++++++++++++++++++++++++++
> >>>>  include/hw/pci/pci.h                        |   1 +
> >>>>  include/hw/virtio/virtio-pmem.h             |  42 +++++
> >>>>  include/standard-headers/linux/virtio_ids.h |   1 +
> >>>>  qapi/misc.json                              |  26 ++-
> >>>>  8 files changed, 371 insertions(+), 1 deletion(-)
> >>>>  create mode 100644 hw/virtio/virtio-pmem.c
> >>>>  create mode 100644 include/hw/virtio/virtio-pmem.h
> >>>>
> >>>> diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
> >>>> index 1b2799cfd8..7f914d45d0 100644
> >>>> --- a/hw/virtio/Makefile.objs
> >>>> +++ b/hw/virtio/Makefile.objs
> >>>> @@ -10,6 +10,9 @@ obj-$(CONFIG_VIRTIO_CRYPTO) += virtio-crypto.o
> >>>>  obj-$(call land,$(CONFIG_VIRTIO_CRYPTO),$(CONFIG_VIRTIO_PCI)) +=
> >>>>  virtio-crypto-pci.o
> >>>>  
> >>>>  obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
> >>>> +ifeq ($(CONFIG_MEM_HOTPLUG),y)
> >>>> +obj-$(CONFIG_LINUX) += virtio-pmem.o
> >>>> +endif
> >>>>  obj-$(CONFIG_VHOST_VSOCK) += vhost-vsock.o
> >>>>  endif
> >>>>  
> >>>> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> >>>> index 3a01fe90f0..93d3fc05c7 100644
> >>>> --- a/hw/virtio/virtio-pci.c
> >>>> +++ b/hw/virtio/virtio-pci.c
> >>>> @@ -2521,6 +2521,49 @@ static const TypeInfo virtio_rng_pci_info = {
> >>>>      .class_init    = virtio_rng_pci_class_init,
> >>>>  };
> >>>>  
> >>>> +/* virtio-pmem-pci */
> >>>> +
> >>>> +static void virtio_pmem_pci_realize(VirtIOPCIProxy *vpci_dev, Error
> >>>> **errp)
> >>>> +{
> >>>> +    VirtIOPMEMPCI *vpmem = VIRTIO_PMEM_PCI(vpci_dev);
> >>>> +    DeviceState *vdev = DEVICE(&vpmem->vdev);
> >>>> +
> >>>> +    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
> >>>> +    object_property_set_bool(OBJECT(vdev), true, "realized", errp);
> >>>> +}
> >>>> +
> >>>> +static void virtio_pmem_pci_class_init(ObjectClass *klass, void *data)
> >>>> +{
> >>>> +    DeviceClass *dc = DEVICE_CLASS(klass);
> >>>> +    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
> >>>> +    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
> >>>> +    k->realize = virtio_pmem_pci_realize;
> >>>> +    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
> >>>> +    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
> >>>> +    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PMEM;
> >>>> +    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
> >>>> +    pcidev_k->class_id = PCI_CLASS_OTHERS;
> >>>> +}
> >>>> +
> >>>> +static void virtio_pmem_pci_instance_init(Object *obj)
> >>>> +{
> >>>> +    VirtIOPMEMPCI *dev = VIRTIO_PMEM_PCI(obj);
> >>>> +
> >>>> +    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> >>>> +                                TYPE_VIRTIO_PMEM);
> >>>> +    object_property_add_alias(obj, "memdev", OBJECT(&dev->vdev), "memdev",
> >>>> +                              &error_abort);
> >>>> +}
> >>>> +
> >>>> +static const TypeInfo virtio_pmem_pci_info = {
> >>>> +    .name          = TYPE_VIRTIO_PMEM_PCI,
> >>>> +    .parent        = TYPE_VIRTIO_PCI,
> >>>> +    .instance_size = sizeof(VirtIOPMEMPCI),
> >>>> +    .instance_init = virtio_pmem_pci_instance_init,
> >>>> +    .class_init    = virtio_pmem_pci_class_init,
> >>>> +};
> >>>> +
> >>>> +
> >>>>  /* virtio-input-pci */
> >>>>  
> >>>>  static Property virtio_input_pci_properties[] = {
> >>>> @@ -2714,6 +2757,7 @@ static void virtio_pci_register_types(void)
> >>>>      type_register_static(&virtio_balloon_pci_info);
> >>>>      type_register_static(&virtio_serial_pci_info);
> >>>>      type_register_static(&virtio_net_pci_info);
> >>>> +    type_register_static(&virtio_pmem_pci_info);
> >>>>  #ifdef CONFIG_VHOST_SCSI
> >>>>      type_register_static(&vhost_scsi_pci_info);
> >>>>  #endif
> >>>> diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
> >>>> index 813082b0d7..fe74fcad3f 100644
> >>>> --- a/hw/virtio/virtio-pci.h
> >>>> +++ b/hw/virtio/virtio-pci.h
> >>>> @@ -19,6 +19,7 @@
> >>>>  #include "hw/virtio/virtio-blk.h"
> >>>>  #include "hw/virtio/virtio-net.h"
> >>>>  #include "hw/virtio/virtio-rng.h"
> >>>> +#include "hw/virtio/virtio-pmem.h"
> >>>>  #include "hw/virtio/virtio-serial.h"
> >>>>  #include "hw/virtio/virtio-scsi.h"
> >>>>  #include "hw/virtio/virtio-balloon.h"
> >>>> @@ -57,6 +58,7 @@ typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
> >>>>  typedef struct VirtIOGPUPCI VirtIOGPUPCI;
> >>>>  typedef struct VHostVSockPCI VHostVSockPCI;
> >>>>  typedef struct VirtIOCryptoPCI VirtIOCryptoPCI;
> >>>> +typedef struct VirtIOPMEMPCI VirtIOPMEMPCI;
> >>>>  
> >>>>  /* virtio-pci-bus */
> >>>>  
> >>>> @@ -274,6 +276,18 @@ struct VirtIOBlkPCI {
> >>>>      VirtIOBlock vdev;
> >>>>  };
> >>>>  
> >>>> +/*
> >>>> + * virtio-pmem-pci: This extends VirtioPCIProxy.
> >>>> + */
> >>>> +#define TYPE_VIRTIO_PMEM_PCI "virtio-pmem-pci"
> >>>> +#define VIRTIO_PMEM_PCI(obj) \
> >>>> +        OBJECT_CHECK(VirtIOPMEMPCI, (obj), TYPE_VIRTIO_PMEM_PCI)
> >>>> +
> >>>> +struct VirtIOPMEMPCI {
> >>>> +    VirtIOPCIProxy parent_obj;
> >>>> +    VirtIOPMEM vdev;
> >>>> +};
> >>>> +
> >>>>  /*
> >>>>   * virtio-balloon-pci: This extends VirtioPCIProxy.
> >>>>   */
> >>>> diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> >>>> new file mode 100644
> >>>> index 0000000000..08c96d7e80
> >>>> --- /dev/null
> >>>> +++ b/hw/virtio/virtio-pmem.c
> >>>> @@ -0,0 +1,241 @@
> >>>> +/*
> >>>> + * Virtio pmem device
> >>>> + *
> >>>> + * Copyright (C) 2018 Red Hat, Inc.
> >>>> + * Copyright (C) 2018 Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> >>>> + *
> >>>> + * This work is licensed under the terms of the GNU GPL, version 2.
> >>>> + * See the COPYING file in the top-level directory.
> >>>> + *
> >>>> + */
> >>>> +
> >>>> +#include "qemu/osdep.h"
> >>>> +#include "qapi/error.h"
> >>>> +#include "qemu-common.h"
> >>>> +#include "qemu/error-report.h"
> >>>> +#include "hw/virtio/virtio-access.h"
> >>>> +#include "hw/virtio/virtio-pmem.h"
> >>>> +#include "hw/mem/memory-device.h"
> >>>> +#include "block/aio.h"
> >>>> +#include "block/thread-pool.h"
> >>>> +
> >>>> +typedef struct VirtIOPMEMresp {
> >>>> +    int ret;
> >>>> +} VirtIOPMEMResp;
> >>>> +
> >>>> +typedef struct VirtIODeviceRequest {
> >>>> +    VirtQueueElement elem;
> >>>> +    int fd;
> >>>> +    VirtIOPMEM *pmem;
> >>>> +    VirtIOPMEMResp resp;
> >>>> +} VirtIODeviceRequest;
> >>>> +
> >>>> +static int worker_cb(void *opaque)
> >>>> +{
> >>>> +    VirtIODeviceRequest *req = opaque;
> >>>> +    int err = 0;
> >>>> +
> >>>> +    /* flush raw backing image */
> >>>> +    err = fsync(req->fd);
> >>>> +    if (err != 0) {
> >>>> +        err = errno;
> >>>> +    }
> >>>> +    req->resp.ret = err;  
> >>>
> >>> Host question: are you returning the guest errno code to the host?  
> >>
> >> No. I am returning the error code from the host in case of host fsync
> >> failure, otherwise returning zero.  
> > 
> > I think that's what Luiz meant.  errno constants are not portable
> > between operating systems and architectures.  Therefore they cannot be
> > used in external interfaces in software that expects to communicate with
> > other systems.
> > 
> > It will be necessary to define specific constants for virtio-pmem
> > instead of passing errno from the host to guest.
> >   
> 
> In general, I wonder if we should report errors at all or rather *kill*
> the guest. That might sound harsh, but think about the following scenario:

I'm almost sure that I read in the NVDIMM spec that real hardware will
cause a memory error on a sync or read/write error. If we truly want
to emulate this, then I guess QEMU should be able to inject a memory
error for the entire region instead of returning the fsync() error
to the guest.

> fsync() fails due to some block that cannot be written (e.g. the
> network connection failed). What happens if our guest tries to
> read/write that mmapped block?

I think it gets a SIGBUS? Btw, I think that QEMU already has the
machinery to turn a SIGBUS into a memory error in the guest.

> I assume we'll get a signal and get killed? So we are trying to optimize
> one special case (fsync()) although every read/write is prone to killing
> the guest. And as soon as the guest tries to access the block that
> made fsync fail, we will crash the guest either way.

I think you have a point.

> 
> I assume the main problem is that we are trying to take a file (with all
> the errors that can happen during read/write/fsync) and make it look
> like memory (dax). On ordinary block access, we can forward errors, but
> not if it's memory (maybe using MCE, but it's complicated and
> architecture specific).
> 
> So I wonder if we should rather assume that our backend file is placed
> on some stable storage that cannot easily fail.
> 
> (we might have the same problem with NVDIMM right now, at least the
> memory reading/writing part)
> 
> It's complicated and I am not a block level expert :)
> 
> > Stefan
> >   
> 
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]               ` <b6ef19f3-7f16-5427-bfed-f352a76e48b7-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2018-07-19 15:48                 ` Luiz Capitulino
@ 2018-07-20 13:02                 ` Pankaj Gupta
  1 sibling, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-20 13:02 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, qemu-devel-qX2TKyscuCcdnm+yROfE0A,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w, mst-H+wXaHxf7aLQT0dZR+AlfA,
	ross zwisler, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	Luiz Capitulino, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, Stefan Hajnoczi,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA


> >>>>  /*
> >>>>   * virtio-balloon-pci: This extends VirtioPCIProxy.
> >>>>   */
> >>>> diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
> >>>> new file mode 100644
> >>>> index 0000000000..08c96d7e80
> >>>> --- /dev/null
> >>>> +++ b/hw/virtio/virtio-pmem.c
> >>>> @@ -0,0 +1,241 @@
> >>>> +/*
> >>>> + * Virtio pmem device
> >>>> + *
> >>>> + * Copyright (C) 2018 Red Hat, Inc.
> >>>> + * Copyright (C) 2018 Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> >>>> + *
> >>>> + * This work is licensed under the terms of the GNU GPL, version 2.
> >>>> + * See the COPYING file in the top-level directory.
> >>>> + *
> >>>> + */
> >>>> +
> >>>> +#include "qemu/osdep.h"
> >>>> +#include "qapi/error.h"
> >>>> +#include "qemu-common.h"
> >>>> +#include "qemu/error-report.h"
> >>>> +#include "hw/virtio/virtio-access.h"
> >>>> +#include "hw/virtio/virtio-pmem.h"
> >>>> +#include "hw/mem/memory-device.h"
> >>>> +#include "block/aio.h"
> >>>> +#include "block/thread-pool.h"
> >>>> +
> >>>> +typedef struct VirtIOPMEMresp {
> >>>> +    int ret;
> >>>> +} VirtIOPMEMResp;
> >>>> +
> >>>> +typedef struct VirtIODeviceRequest {
> >>>> +    VirtQueueElement elem;
> >>>> +    int fd;
> >>>> +    VirtIOPMEM *pmem;
> >>>> +    VirtIOPMEMResp resp;
> >>>> +} VirtIODeviceRequest;
> >>>> +
> >>>> +static int worker_cb(void *opaque)
> >>>> +{
> >>>> +    VirtIODeviceRequest *req = opaque;
> >>>> +    int err = 0;
> >>>> +
> >>>> +    /* flush raw backing image */
> >>>> +    err = fsync(req->fd);
> >>>> +    if (err != 0) {
> >>>> +        err = errno;
> >>>> +    }
> >>>> +    req->resp.ret = err;
> >>>
> >>> Host question: are you returning the guest errno code to the host?
> >>
> >> No. I am returning the error code from the host in case of host fsync
> >> failure, otherwise returning zero.
> > 
> > I think that's what Luiz meant.  errno constants are not portable
> > between operating systems and architectures.  Therefore they cannot be
> > used in external interfaces in software that expects to communicate with
> > other systems.
> > 
> > It will be necessary to define specific constants for virtio-pmem
> > instead of passing errno from the host to guest.
> > 
> 
> In general, I wonder if we should report errors at all or rather *kill*
> the guest. That might sound harsh, but think about the following scenario:
> 
> fsync() fails due to some block that cannot be written (e.g. the
> network connection failed). What happens if our guest tries to
> read/write that mmapped block?
> 
> I assume we'll get a signal and get killed? So we are trying to optimize
> one special case (fsync()) although every read/write is prone to killing
> the guest. And as soon as the guest tries to access the block that
> made fsync fail, we will crash the guest either way.
> 
> I assume the main problem is that we are trying to take a file (with all
> the errors that can happen during read/write/fsync) and make it look
> like memory (dax). On ordinary block access, we can forward errors, but
> not if it's memory (maybe using MCE, but it's complicated and
> architecture specific).

There are two points which you highlighted:

1] Memory hardware errors:
These types of errors are notified via MCA. If the MCE is non-recoverable,
KVM gets a SIGBUS when the hardware detects such an error and injects an MCE
into the guest vCPU. If the guest cannot recover, it can decide to kill the
user-space process.

The default mce setting is '1':
1: panic or SIGBUS on uncorrected errors, log corrected errors

2] read/write/fsync failure (e.g. because of a network connection failure):
I assume you are talking about something like an NFS mount, where the
read/write/fsync responsibility is taken care of by NFS. This scenario can
happen for any application accessing a network filesystem, which returns an
appropriate error or waits. Until 'fsync' is performed there is no guarantee
the RAM data is backed. I think it's the responsibility of the application to
perform fsync after a write operation or a transaction.
 
> 
> So I wonder if we should rather assume that our backend file is placed
> on some stable storage that cannot easily fail.
> 
> (we might have the same problem with NVDIMM right now, at least the
> memory reading/writing part)

NVDIMM NFIT registers an MCE notifier and checks whether any SPA range contains
mce->addr. It creates a list of bad blocks (corresponding to the nd_region),
which is then handled in 'pmem_do_bvec', used by 'pmem_make_request' &
'pmem_rw_page'.

/* drivers/acpi/nfit/mce.c */
void nfit_mce_register(void)
{
        mce_register_decode_chain(&nfit_mce_dec);
}

In 'fake DAX', we bypass NFIT ACPI and use the virtio & nvdimm_bus way of
registering the memory region. By default this should kill the userspace process
or, at worst, cause a guest reboot. I am thinking about how we can integrate the
NFIT bad-block handling with the MCE handler approach for fake DAX. I think we
can do this, but I would like input from the NVDIMM folks.

Thanks,
Pankaj

> 
> It's complicated and I am not a block level expert :)

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
  2018-07-19 12:48             ` Luiz Capitulino
  2018-07-19 12:57               ` Luiz Capitulino
@ 2018-07-20 13:04               ` Pankaj Gupta
  1 sibling, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-20 13:04 UTC (permalink / raw)
  To: Luiz Capitulino
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	Stefan Hajnoczi, niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA


> > > > > +
> > > > > +typedef struct VirtIOPMEMresp {
> > > > > +    int ret;
> > > > > +} VirtIOPMEMResp;
> > > > > +
> > > > > +typedef struct VirtIODeviceRequest {
> > > > > +    VirtQueueElement elem;
> > > > > +    int fd;
> > > > > +    VirtIOPMEM *pmem;
> > > > > +    VirtIOPMEMResp resp;
> > > > > +} VirtIODeviceRequest;
> > > > > +
> > > > > +static int worker_cb(void *opaque)
> > > > > +{
> > > > > +    VirtIODeviceRequest *req = opaque;
> > > > > +    int err = 0;
> > > > > +
> > > > > +    /* flush raw backing image */
> > > > > +    err = fsync(req->fd);
> > > > > +    if (err != 0) {
> > > > > +        err = errno;
> > > > > +    }
> > > > > +    req->resp.ret = err;
> > > > 
> > > > Host question: are you returning the guest errno code to the host?
> > > 
> > > No. I am returning the error code from the host in case of a host fsync
> > > failure, otherwise returning zero.
> > 
> > I think that's what Luiz meant.  errno constants are not portable
> > between operating systems and architectures.  Therefore they cannot be
> > used in external interfaces in software that expects to communicate with
> > other systems.
> 
> Oh, thanks. Only saw this email now.
> 
> > It will be necessary to define specific constants for virtio-pmem
> > instead of passing errno from the host to guest.
> 
> Yes, defining your own constants works. But I think the only fsync()
> error that will make sense for the guest is EIO. The other errors
> only make sense for the host.

Agree.

Thanks,
Pankaj

* Re: [RFC v3] qemu: Add virtio pmem device
       [not found]   ` <20180713075232.9575-4-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2018-07-18 12:55     ` Luiz Capitulino
@ 2018-07-24 16:13     ` Eric Blake
       [not found]       ` <783786ae-2e85-2376-448c-1e362c3d4d48-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: Eric Blake @ 2018-07-24 16:13 UTC (permalink / raw)
  To: Pankaj Gupta, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	kvm-u79uwXL29TY76Z2rM5mHXA, qemu-devel-qX2TKyscuCcdnm+yROfE0A,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w,
	riel-ebMLmSuQjDVBDgjK7y7TUQ,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	david-H+wXaHxf7aLQT0dZR+AlfA,
	ross.zwisler-ral2JQCrhuEAvxtiuMwx3w,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	mst-H+wXaHxf7aLQT0dZR+AlfA, stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, pbonzini-H+wXaHxf7aLQT0dZR+AlfA,
	nilal-H+wXaHxf7aLQT0dZR+AlfA

On 07/13/2018 02:52 AM, Pankaj Gupta wrote:
>   This patch adds virtio-pmem Qemu device.
> 
>   This device presents memory address range information to guest
>   which is backed by file backend type. It acts like persistent
>   memory device for KVM guest. Guest can perform read and persistent
>   write operations on this memory range with the help of DAX capable
>   filesystem.
> 
>   Persistent guest writes are assured with the help of virtio based
>   flushing interface. When guest userspace performs fsync on a
>   file fd on the pmem device, a flush command is sent to Qemu over VIRTIO
>   and a host-side flush/sync is done on the backing image file.
> 
> Changes from RFC v2:

This patch has no n/M in the subject line; but is included in a thread 
that also has a 0/2 cover letter, as well as 1/2 and 2/2 patches in 
separate mails.  Is that intentional?

When sending revision notes on a specific patch, it's best to place them...

> - Use aio_worker() to avoid Qemu from hanging with blocking fsync
>    call - Stefan
> - Use virtio_st*_p() for endianess - Stefan
> - Correct indentation in qapi/misc.json - Eric
> 
> Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---

...here, after the --- separator. They are useful to reviewers on the 
list, but are stripped by 'git am' as they don't need to be part of the 
git history (a year from now, we won't care how many iterations the 
patch went through during review, only what actually landed).


> +++ b/qapi/misc.json
> @@ -2907,6 +2907,29 @@
>             }
>   }
>   
> +##
> +# @VirtioPMemDeviceInfo:
> +#
> +# VirtioPMem state information
> +#
> +# @id: device's ID
> +#
> +# @start: physical address, where device is mapped
> +#
> +# @size: size of memory that the device provides
> +#
> +# @memdev: memory backend linked with device
> +#
> +# Since: 2.13

There is no 2.13 release, and you've missed the 3.0 window.  Please 
update this and any other version reference to 3.1.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]       ` <783786ae-2e85-2376-448c-1e362c3d4d48-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-25  5:01         ` Pankaj Gupta
       [not found]           ` <399916154.53931292.1532494899706.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-25  5:01 UTC (permalink / raw)
  To: Eric Blake
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	jack-AlSwsSmVLrQ, xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA


Hi Eric,

> 
> On 07/13/2018 02:52 AM, Pankaj Gupta wrote:
> >   This patch adds virtio-pmem Qemu device.
> > 
> >   This device presents memory address range information to guest
> >   which is backed by file backend type. It acts like persistent
> >   memory device for KVM guest. Guest can perform read and persistent
> >   write operations on this memory range with the help of DAX capable
> >   filesystem.
> > 
> >   Persistent guest writes are assured with the help of virtio based
> >   flushing interface. When guest userspace performs fsync on a
> >   file fd on the pmem device, a flush command is sent to Qemu over VIRTIO
> >   and a host-side flush/sync is done on the backing image file.
> > 
> > Changes from RFC v2:
> 
> This patch has no n/M in the subject line; but is included in a thread
> that also has a 0/2 cover letter, as well as 1/2 and 2/2 patches in
> separate mails.  Is that intentional?

Yes, the kernel series has patches 0-2 and Qemu has this one. I thought it's
good to keep separate numbering for both sets.
> 
> When sending revision notes on a specific patch, it's best to place them...

Sure.
> 
> > - Use aio_worker() to avoid Qemu from hanging with blocking fsync
> >    call - Stefan
> > - Use virtio_st*_p() for endianess - Stefan
> > - Correct indentation in qapi/misc.json - Eric
> > 
> > Signed-off-by: Pankaj Gupta <pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > ---
> 
> ...here, after the --- separator. They are useful to reviewers on the
> list, but are stripped by 'git am' as they don't need to be part of the
> git history (a year from now, we won't care how many iterations the
> patch went through during review, only what actually landed).
> 
> 
> > +++ b/qapi/misc.json
> > @@ -2907,6 +2907,29 @@
> >             }
> >   }
> >   
> > +##
> > +# @VirtioPMemDeviceInfo:
> > +#
> > +# VirtioPMem state information
> > +#
> > +# @id: device's ID
> > +#
> > +# @start: physical address, where device is mapped
> > +#
> > +# @size: size of memory that the device provides
> > +#
> > +# @memdev: memory backend linked with device
> > +#
> > +# Since: 2.13
> 
> There is no 2.13 release, and you've missed the 3.0 window.  Please
> update this and any other version reference to 3.1.

okay.

Thanks,
Pankaj

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]           ` <399916154.53931292.1532494899706.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-25 12:19             ` Eric Blake
       [not found]               ` <d3fe397a-024d-faf7-8854-bb8e9ea17f53-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Blake @ 2018-07-25 12:19 UTC (permalink / raw)
  To: Pankaj Gupta
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	jack-AlSwsSmVLrQ, xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA

On 07/25/2018 12:01 AM, Pankaj Gupta wrote:

>>
>> This patch has no n/M in the subject line; but is included in a thread
>> that also has a 0/2 cover letter, as well as 1/2 and 2/2 patches in
>> separate mails.  Is that intentional?
> 
> Yes, the kernel series has patches 0-2 and Qemu has this one. I thought it's
> good to keep separate numbering for both sets.

Ah, that makes sense. The cover letter didn't make it obvious to me that 
this was two separate series to two projects, but related enough that 
both series have to be incorporated for the feature to work and thus 
cross-posted under a single cover letter.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

* Re: [Qemu-devel] [RFC v3] qemu: Add virtio pmem device
       [not found]               ` <d3fe397a-024d-faf7-8854-bb8e9ea17f53-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-07-25 12:47                 ` Pankaj Gupta
  0 siblings, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-07-25 12:47 UTC (permalink / raw)
  To: Eric Blake
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	david-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA, mst-H+wXaHxf7aLQT0dZR+AlfA,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA




> 
> >>
> >> This patch has no n/M in the subject line; but is included in a thread
> >> that also has a 0/2 cover letter, as well as 1/2 and 2/2 patches in
> >> separate mails.  Is that intentional?
> > 
> > Yes, the kernel series has patches 0-2 and Qemu has this one. I thought it's
> > good to keep separate numbering for both sets.
> 
> Ah, that makes sense. The cover letter didn't make it obvious to me that
> this was two separate series to two projects, but related enough that
> both series have to be incorporated for the feature to work and thus
> cross-posted under a single cover letter.

Will try to make it more clear in next posting.

Thanks,
Pankaj

* Re: [RFC v3 0/2] kvm "fake DAX" device flushing
       [not found] ` <20180713075232.9575-1-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2018-07-13  7:52   ` [RFC v3 2/2] virtio-pmem: Add virtio pmem driver Pankaj Gupta
@ 2018-08-28 12:13   ` David Hildenbrand
       [not found]     ` <1328e543-0276-8f33-1744-8baa053023c4-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 28+ messages in thread
From: David Hildenbrand @ 2018-08-28 12:13 UTC (permalink / raw)
  To: Pankaj Gupta, linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	kvm-u79uwXL29TY76Z2rM5mHXA, qemu-devel-qX2TKyscuCcdnm+yROfE0A,
	linux-nvdimm-y27Ovi1pjclAfugRpC6u6w
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, jack-AlSwsSmVLrQ,
	xiaoguangrong.eric-Re5JQEeQqe8AvxtiuMwx3w,
	riel-ebMLmSuQjDVBDgjK7y7TUQ,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	mst-H+wXaHxf7aLQT0dZR+AlfA, ross.zwisler-ral2JQCrhuEAvxtiuMwx3w,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	stefanha-H+wXaHxf7aLQT0dZR+AlfA, imammedo-H+wXaHxf7aLQT0dZR+AlfA,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	eblake-H+wXaHxf7aLQT0dZR+AlfA

On 13.07.2018 09:52, Pankaj Gupta wrote:
> This is RFC V3 for 'fake DAX' flushing interface sharing
> for review. This patchset has two parts:
> 
> - Guest virtio-pmem driver
>   Guest driver reads persistent memory range from paravirt device 
>   and registers with 'nvdimm_bus'. 'nvdimm/pmem' driver uses this 
>   information to allocate persistent memory range. Also, we have 
>   implemented guest side of VIRTIO flushing interface.
> 
> - Qemu virtio-pmem device
>   It exposes a persistent memory range to KVM guest which at host 
>   side is file backed memory and works as persistent memory device. 
>   In addition to this it provides virtio device handling of flushing 
>   interface. KVM guest performs Qemu side asynchronous sync using 
>   this interface.
> 
> Changes from RFC v2:
> - Add flush function in the nd_region in place of switching
>   on a flag - Dan & Stefan
> - Add flush completion function with proper locking and wait
>   for host side flush completion - Stefan & Dan
> - Keep userspace API in uapi header file - Stefan, MST
> - Use LE fields & New device id - MST
> - Indentation & spacing suggestions - MST & Eric
> - Remove extra header files & add licensing - Stefan
> 
> Changes from RFC v1:
> - Reuse existing 'pmem' code for registering persistent 
>   memory and other operations instead of creating an entirely 
>   new block driver.
> - Use VIRTIO driver to register memory information with 
>   nvdimm_bus and create region_type accordingly. 
> - Call VIRTIO flush from existing pmem driver.
> 
> Details of project idea for 'fake DAX' flushing interface is 
> shared [2] & [3].
> 
> Pankaj Gupta (2):
>    Add virtio-pmem guest driver
>    pmem: device flush over VIRTIO
> 
> [1] https://marc.info/?l=linux-mm&m=150782346802290&w=2
> [2] https://www.spinics.net/lists/kvm/msg149761.html
> [3] https://www.spinics.net/lists/kvm/msg153095.html  
> 
>  drivers/nvdimm/nd.h              |    1 
>  drivers/nvdimm/pmem.c            |    4 
>  drivers/nvdimm/region_devs.c     |   24 +++-
>  drivers/virtio/Kconfig           |    9 +
>  drivers/virtio/Makefile          |    1 
>  drivers/virtio/virtio_pmem.c     |  190 +++++++++++++++++++++++++++++++++++++++
>  include/linux/libnvdimm.h        |    5 -
>  include/linux/virtio_pmem.h      |   44 +++++++++
>  include/uapi/linux/virtio_ids.h  |    1 
>  include/uapi/linux/virtio_pmem.h |   40 ++++++++
>  10 files changed, 310 insertions(+), 9 deletions(-)
> 

Hi Pankaj,

do you have a branch for the QEMU part somewhere available? I want to
see how this works with MemoryDevice changes.

-- 

Thanks,

David / dhildenb

* Re: [Qemu-devel] [RFC v3 0/2] kvm "fake DAX" device flushing
       [not found]     ` <1328e543-0276-8f33-1744-8baa053023c4-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2018-08-28 12:39       ` Pankaj Gupta
  0 siblings, 0 replies; 28+ messages in thread
From: Pankaj Gupta @ 2018-08-28 12:39 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: kwolf-H+wXaHxf7aLQT0dZR+AlfA, nilal-H+wXaHxf7aLQT0dZR+AlfA,
	jack-AlSwsSmVLrQ, xiaoguangrong eric, kvm-u79uwXL29TY76Z2rM5mHXA,
	riel-ebMLmSuQjDVBDgjK7y7TUQ, linux-nvdimm-y27Ovi1pjclAfugRpC6u6w,
	mst-H+wXaHxf7aLQT0dZR+AlfA, ross zwisler,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	qemu-devel-qX2TKyscuCcdnm+yROfE0A, hch-wEGCiKHe2LqWVfeAwA7xHQ,
	pbonzini-H+wXaHxf7aLQT0dZR+AlfA, stefanha-H+wXaHxf7aLQT0dZR+AlfA,
	niteshnarayanlal-PkbjNfxxIARBDgjK7y7TUQ,
	imammedo-H+wXaHxf7aLQT0dZR+AlfA,
	lcapitulino-H+wXaHxf7aLQT0dZR+AlfA


Hi David,

> > for review. This patchset has two parts:
> > 
> > - Guest virtio-pmem driver
> >   Guest driver reads persistent memory range from paravirt device
> >   and registers with 'nvdimm_bus'. 'nvdimm/pmem' driver uses this
> >   information to allocate persistent memory range. Also, we have
> >   implemented guest side of VIRTIO flushing interface.
> > 
> > - Qemu virtio-pmem device
> >   It exposes a persistent memory range to KVM guest which at host
> >   side is file backed memory and works as persistent memory device.
> >   In addition to this it provides virtio device handling of flushing
> >   interface. KVM guest performs Qemu side asynchronous sync using
> >   this interface.
> > 
> > Changes from RFC v2:
> > - Add flush function in the nd_region in place of switching
> >   on a flag - Dan & Stefan
> > - Add flush completion function with proper locking and wait
> >   for host side flush completion - Stefan & Dan
> > - Keep userspace API in uapi header file - Stefan, MST
> > - Use LE fields & New device id - MST
> > - Indentation & spacing suggestions - MST & Eric
> > - Remove extra header files & add licensing - Stefan
> > 
> > Changes from RFC v1:
> > - Reuse existing 'pmem' code for registering persistent
> >   memory and other operations instead of creating an entirely
> >   new block driver.
> > - Use VIRTIO driver to register memory information with
> >   nvdimm_bus and create region_type accordingly.
> > - Call VIRTIO flush from existing pmem driver.
> > 
> > Details of project idea for 'fake DAX' flushing interface is
> > shared [2] & [3].
> > 
> > Pankaj Gupta (2):
> >    Add virtio-pmem guest driver
> >    pmem: device flush over VIRTIO
> > 
> > [1] https://marc.info/?l=linux-mm&m=150782346802290&w=2
> > [2] https://www.spinics.net/lists/kvm/msg149761.html
> > [3] https://www.spinics.net/lists/kvm/msg153095.html
> > 
> >  drivers/nvdimm/nd.h              |    1
> >  drivers/nvdimm/pmem.c            |    4
> >  drivers/nvdimm/region_devs.c     |   24 +++-
> >  drivers/virtio/Kconfig           |    9 +
> >  drivers/virtio/Makefile          |    1
> >  drivers/virtio/virtio_pmem.c     |  190
> >  +++++++++++++++++++++++++++++++++++++++
> >  include/linux/libnvdimm.h        |    5 -
> >  include/linux/virtio_pmem.h      |   44 +++++++++
> >  include/uapi/linux/virtio_ids.h  |    1
> >  include/uapi/linux/virtio_pmem.h |   40 ++++++++
> >  10 files changed, 310 insertions(+), 9 deletions(-)
> > 
> 
> Hi Pankaj,
> 
> do you have a branch for the QEMU part somewhere available? I want to
> see how this works with MemoryDevice changes.

O.k., I will update the current guest virtio-pmem and Qemu changes
and share them with you. BTW the previous version is here:

https://marc.info/?l=kvm&m=153146839627143&w=2

Thanks,
Pankaj
 
> 
> --
> 
> Thanks,
> 
> David / dhildenb
> 
> 

end of thread, other threads:[~2018-08-28 12:39 UTC | newest]

Thread overview: 28+ messages
2018-07-13  7:52 [RFC v3 0/2] kvm "fake DAX" device flushing Pankaj Gupta
2018-07-13  7:52 ` [RFC v3 1/2] libnvdimm: Add flush callback for virtio pmem Pankaj Gupta
2018-07-13 20:35   ` Luiz Capitulino
2018-07-16  8:13     ` Pankaj Gupta
     [not found] ` <20180713075232.9575-1-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-13  7:52   ` [RFC v3 2/2] virtio-pmem: Add virtio pmem driver Pankaj Gupta
     [not found]     ` <20180713075232.9575-3-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-13 20:38       ` Luiz Capitulino
2018-07-16 11:46         ` [Qemu-devel] " Pankaj Gupta
     [not found]           ` <633297685.51039804.1531741590092.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-16 14:03             ` Luiz Capitulino
2018-07-16 15:11               ` Pankaj Gupta
2018-07-17 13:11     ` Stefan Hajnoczi
     [not found]       ` <20180717131156.GA13498-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
2018-07-18  7:05         ` Pankaj Gupta
2018-08-28 12:13   ` [RFC v3 0/2] kvm "fake DAX" device flushing David Hildenbrand
     [not found]     ` <1328e543-0276-8f33-1744-8baa053023c4-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-08-28 12:39       ` [Qemu-devel] " Pankaj Gupta
2018-07-13  7:52 ` [RFC v3] qemu: Add virtio pmem device Pankaj Gupta
     [not found]   ` <20180713075232.9575-4-pagupta-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-18 12:55     ` Luiz Capitulino
2018-07-19  5:48       ` [Qemu-devel] " Pankaj Gupta
2018-07-19 12:16         ` Stefan Hajnoczi
     [not found]           ` <20180719121635.GA28107-lxVrvc10SDRcolVlb+j0YCZi+YwRKgec@public.gmane.org>
2018-07-19 12:48             ` Luiz Capitulino
2018-07-19 12:57               ` Luiz Capitulino
2018-07-20 13:04               ` Pankaj Gupta
2018-07-19 13:58             ` David Hildenbrand
     [not found]               ` <b6ef19f3-7f16-5427-bfed-f352a76e48b7-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-19 15:48                 ` Luiz Capitulino
2018-07-20 13:02                 ` Pankaj Gupta
     [not found]         ` <367397176.52317488.1531979293251.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-19 12:39           ` Luiz Capitulino
2018-07-24 16:13     ` Eric Blake
     [not found]       ` <783786ae-2e85-2376-448c-1e362c3d4d48-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-25  5:01         ` [Qemu-devel] " Pankaj Gupta
     [not found]           ` <399916154.53931292.1532494899706.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-25 12:19             ` Eric Blake
     [not found]               ` <d3fe397a-024d-faf7-8854-bb8e9ea17f53-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2018-07-25 12:47                 ` Pankaj Gupta
