* [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
@ 2020-04-30 10:02 ` Srivatsa Vaddagiri
  0 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 10:02 UTC (permalink / raw)
  To: konrad.wilk, mst, jasowang, jan.kiszka, will, stefano.stabellini
  Cc: iommu, virtualization, virtio-dev, tsoni, pratikp, vatsa,
	christoffer.dall, alex.bennee, linux-kernel

The Type-1 hypervisor we are dealing with does not allow for MMIO transport.
[1] summarizes some of the problems we face in making virtio work on such
hypervisors. This patch proposes a solution to the transport problem, namely how
config space IO can be done on such a hypervisor. The hypervisor-specific
methods introduced allow config space IO to work seamlessly.

This patch is meant to seek comments. If it is considered to be in the right
direction, I will work on making it more complete and send the next version!

1. https://lkml.org/lkml/2020/4/28/427

Srivatsa Vaddagiri (1):
  virtio: Introduce MMIO ops

 drivers/virtio/virtio_mmio.c | 131 ++++++++++++++++++++++++++-----------------
 include/linux/virtio.h       |  14 +++++
 2 files changed, 94 insertions(+), 51 deletions(-)

-- 
2.7.4

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 10:02 ` Srivatsa Vaddagiri
@ 2020-04-30 10:02   ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 10:02 UTC (permalink / raw)
  To: konrad.wilk, mst, jasowang, jan.kiszka, will, stefano.stabellini
  Cc: iommu, virtualization, virtio-dev, tsoni, pratikp, vatsa,
	christoffer.dall, alex.bennee, linux-kernel

Some hypervisors may not support the MMIO transport, i.e. trap config
space accesses and have them handled by a backend driver. They may
allow other ways to interact with the backend, such as a message-queue
or doorbell API. This patch allows for hypervisor-specific
methods for config space IO.

Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
---
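As a rough illustration of how a backend is expected to hook into this, a
hypervisor support module could register its ops along the lines below. The
hyp_cfg_read()/hyp_cfg_write() hypercall wrappers are made-up placeholders for
whatever interface the hypervisor actually provides, and only the 32-bit
accessors are shown; the byte/word hooks would be wired up the same way:

extern u32 hyp_cfg_read(unsigned long addr);
extern void hyp_cfg_write(unsigned long addr, u32 val);

static u32 example_mmio_readl(const volatile void __iomem *addr)
{
	/* Forward the config space read to the backend via a hypercall */
	return hyp_cfg_read((unsigned long)addr);
}

static void example_mmio_writel(u32 val, volatile void __iomem *addr)
{
	/* Forward the config space write to the backend via a hypercall */
	hyp_cfg_write((unsigned long)addr, val);
}

static struct virtio_mmio_ops example_mmio_ops = {
	.name		= "example-hyp-mmio",
	.mmio_readl	= example_mmio_readl,
	.mmio_writel	= example_mmio_writel,
	/* .mmio_readb/.mmio_readw/.mmio_writeb/.mmio_writew filled in similarly */
};

static int __init example_mmio_ops_init(void)
{
	return register_virtio_mmio_ops(&example_mmio_ops);
}
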
 drivers/virtio/virtio_mmio.c | 131 ++++++++++++++++++++++++++-----------------
 include/linux/virtio.h       |  14 +++++
 2 files changed, 94 insertions(+), 51 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 97d5725..69bfa35 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -100,7 +100,35 @@ struct virtio_mmio_vq_info {
 	struct list_head node;
 };
 
+#ifdef CONFIG_VIRTIO_MMIO_OPS
 
+static struct virtio_mmio_ops *mmio_ops;
+
+#define virtio_readb(a)		mmio_ops->mmio_readb((a))
+#define virtio_readw(a)		mmio_ops->mmio_readw((a))
+#define virtio_readl(a)		mmio_ops->mmio_readl((a))
+#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
+#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
+#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))
+
+int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
+{
+	pr_info("Registered %s as mmio ops\n", ops->name);
+	mmio_ops = ops;
+
+	return 0;
+}
+
+#else	/* CONFIG_VIRTIO_MMIO_OPS */
+
+#define virtio_readb(a)		readb((a))
+#define virtio_readw(a)		readw((a))
+#define virtio_readl(a)		readl((a))
+#define virtio_writeb(val, a)	writeb((val), (a))
+#define virtio_writew(val, a)	writew((val), (a))
+#define virtio_writel(val, a)	writel((val), (a))
+
+#endif	/* CONFIG_VIRTIO_MMIO_OPS */
 
 /* Configuration interface */
 
@@ -109,12 +137,12 @@ static u64 vm_get_features(struct virtio_device *vdev)
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 	u64 features;
 
-	writel(1, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
-	features = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
+	virtio_writel(1, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
+	features = virtio_readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
 	features <<= 32;
 
-	writel(0, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
-	features |= readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
+	virtio_writel(0, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
+	features |= virtio_readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
 
 	return features;
 }
@@ -133,12 +161,12 @@ static int vm_finalize_features(struct virtio_device *vdev)
 		return -EINVAL;
 	}
 
-	writel(1, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
-	writel((u32)(vdev->features >> 32),
+	virtio_writel(1, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
+	virtio_writel((u32)(vdev->features >> 32),
 			vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES);
 
-	writel(0, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
-	writel((u32)vdev->features,
+	virtio_writel(0, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
+	virtio_writel((u32)vdev->features,
 			vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES);
 
 	return 0;
@@ -158,25 +186,25 @@ static void vm_get(struct virtio_device *vdev, unsigned offset,
 		int i;
 
 		for (i = 0; i < len; i++)
-			ptr[i] = readb(base + offset + i);
+			ptr[i] = virtio_readb(base + offset + i);
 		return;
 	}
 
 	switch (len) {
 	case 1:
-		b = readb(base + offset);
+		b = virtio_readb(base + offset);
 		memcpy(buf, &b, sizeof b);
 		break;
 	case 2:
-		w = cpu_to_le16(readw(base + offset));
+		w = cpu_to_le16(virtio_readw(base + offset));
 		memcpy(buf, &w, sizeof w);
 		break;
 	case 4:
-		l = cpu_to_le32(readl(base + offset));
+		l = cpu_to_le32(virtio_readl(base + offset));
 		memcpy(buf, &l, sizeof l);
 		break;
 	case 8:
-		l = cpu_to_le32(readl(base + offset));
+		l = cpu_to_le32(virtio_readl(base + offset));
 		memcpy(buf, &l, sizeof l);
 		l = cpu_to_le32(ioread32(base + offset + sizeof l));
 		memcpy(buf + sizeof l, &l, sizeof l);
@@ -200,7 +228,7 @@ static void vm_set(struct virtio_device *vdev, unsigned offset,
 		int i;
 
 		for (i = 0; i < len; i++)
-			writeb(ptr[i], base + offset + i);
+			virtio_writeb(ptr[i], base + offset + i);
 
 		return;
 	}
@@ -208,21 +236,21 @@ static void vm_set(struct virtio_device *vdev, unsigned offset,
 	switch (len) {
 	case 1:
 		memcpy(&b, buf, sizeof b);
-		writeb(b, base + offset);
+		virtio_writeb(b, base + offset);
 		break;
 	case 2:
 		memcpy(&w, buf, sizeof w);
-		writew(le16_to_cpu(w), base + offset);
+		virtio_writew(le16_to_cpu(w), base + offset);
 		break;
 	case 4:
 		memcpy(&l, buf, sizeof l);
-		writel(le32_to_cpu(l), base + offset);
+		virtio_writel(le32_to_cpu(l), base + offset);
 		break;
 	case 8:
 		memcpy(&l, buf, sizeof l);
-		writel(le32_to_cpu(l), base + offset);
+		virtio_writel(le32_to_cpu(l), base + offset);
 		memcpy(&l, buf + sizeof l, sizeof l);
-		writel(le32_to_cpu(l), base + offset + sizeof l);
+		virtio_writel(le32_to_cpu(l), base + offset + sizeof l);
 		break;
 	default:
 		BUG();
@@ -236,14 +264,14 @@ static u32 vm_generation(struct virtio_device *vdev)
 	if (vm_dev->version == 1)
 		return 0;
 	else
-		return readl(vm_dev->base + VIRTIO_MMIO_CONFIG_GENERATION);
+		return virtio_readl(vm_dev->base + VIRTIO_MMIO_CONFIG_GENERATION);
 }
 
 static u8 vm_get_status(struct virtio_device *vdev)
 {
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 
-	return readl(vm_dev->base + VIRTIO_MMIO_STATUS) & 0xff;
+	return virtio_readl(vm_dev->base + VIRTIO_MMIO_STATUS) & 0xff;
 }
 
 static void vm_set_status(struct virtio_device *vdev, u8 status)
@@ -253,7 +281,7 @@ static void vm_set_status(struct virtio_device *vdev, u8 status)
 	/* We should never be setting status to 0. */
 	BUG_ON(status == 0);
 
-	writel(status, vm_dev->base + VIRTIO_MMIO_STATUS);
+	virtio_writel(status, vm_dev->base + VIRTIO_MMIO_STATUS);
 }
 
 static void vm_reset(struct virtio_device *vdev)
@@ -261,7 +289,7 @@ static void vm_reset(struct virtio_device *vdev)
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 
 	/* 0 status means a reset. */
-	writel(0, vm_dev->base + VIRTIO_MMIO_STATUS);
+	virtio_writel(0, vm_dev->base + VIRTIO_MMIO_STATUS);
 }
 
 
@@ -275,7 +303,7 @@ static bool vm_notify(struct virtqueue *vq)
 
 	/* We write the queue's selector into the notification register to
 	 * signal the other end */
-	writel(vq->index, vm_dev->base + VIRTIO_MMIO_QUEUE_NOTIFY);
+	virtio_writel(vq->index, vm_dev->base + VIRTIO_MMIO_QUEUE_NOTIFY);
 	return true;
 }
 
@@ -289,8 +317,8 @@ static irqreturn_t vm_interrupt(int irq, void *opaque)
 	irqreturn_t ret = IRQ_NONE;
 
 	/* Read and acknowledge interrupts */
-	status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
-	writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
+	status = virtio_readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
+	virtio_writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
 
 	if (unlikely(status & VIRTIO_MMIO_INT_CONFIG)) {
 		virtio_config_changed(&vm_dev->vdev);
@@ -321,12 +349,12 @@ static void vm_del_vq(struct virtqueue *vq)
 	spin_unlock_irqrestore(&vm_dev->lock, flags);
 
 	/* Select and deactivate the queue */
-	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
+	virtio_writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
 	if (vm_dev->version == 1) {
-		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
+		virtio_writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
-		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
-		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
+		virtio_writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
+		WARN_ON(virtio_readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
 
 	vring_del_virtqueue(vq);
@@ -360,10 +388,10 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		return NULL;
 
 	/* Select the queue we're interested in */
-	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
+	virtio_writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
 
 	/* Queue shouldn't already be set up. */
-	if (readl(vm_dev->base + (vm_dev->version == 1 ?
+	if (virtio_readl(vm_dev->base + (vm_dev->version == 1 ?
 			VIRTIO_MMIO_QUEUE_PFN : VIRTIO_MMIO_QUEUE_READY))) {
 		err = -ENOENT;
 		goto error_available;
@@ -376,7 +404,7 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		goto error_kmalloc;
 	}
 
-	num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
+	num = virtio_readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
 	if (num == 0) {
 		err = -ENOENT;
 		goto error_new_virtqueue;
@@ -391,7 +419,7 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 	}
 
 	/* Activate the queue */
-	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
+	virtio_writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
 	if (vm_dev->version == 1) {
 		u64 q_pfn = virtqueue_get_desc_addr(vq) >> PAGE_SHIFT;
 
@@ -408,27 +436,27 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 			goto error_bad_pfn;
 		}
 
-		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
-		writel(q_pfn, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
+		virtio_writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
+		virtio_writel(q_pfn, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
 		u64 addr;
 
 		addr = virtqueue_get_desc_addr(vq);
-		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
-		writel((u32)(addr >> 32),
+		virtio_writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
+		virtio_writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_HIGH);
 
 		addr = virtqueue_get_avail_addr(vq);
-		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
-		writel((u32)(addr >> 32),
+		virtio_writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
+		virtio_writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_HIGH);
 
 		addr = virtqueue_get_used_addr(vq);
-		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
-		writel((u32)(addr >> 32),
+		virtio_writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
+		virtio_writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_USED_HIGH);
 
-		writel(1, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
+		virtio_writel(1, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
 	}
 
 	vq->priv = info;
@@ -444,10 +472,10 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 	vring_del_virtqueue(vq);
 error_new_virtqueue:
 	if (vm_dev->version == 1) {
-		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
+		virtio_writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
-		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
-		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
+		virtio_writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
+		WARN_ON(virtio_readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
 	kfree(info);
 error_kmalloc:
@@ -514,6 +542,7 @@ static const struct virtio_config_ops virtio_mmio_config_ops = {
 	.bus_name	= vm_bus_name,
 };
 
+static const struct virtio_mmio_ops virtio_mmio_default_ops;
 
 static void virtio_mmio_release_dev(struct device *_d)
 {
@@ -550,21 +579,21 @@ static int virtio_mmio_probe(struct platform_device *pdev)
 		return PTR_ERR(vm_dev->base);
 
 	/* Check magic value */
-	magic = readl(vm_dev->base + VIRTIO_MMIO_MAGIC_VALUE);
+	magic = virtio_readl(vm_dev->base + VIRTIO_MMIO_MAGIC_VALUE);
 	if (magic != ('v' | 'i' << 8 | 'r' << 16 | 't' << 24)) {
 		dev_warn(&pdev->dev, "Wrong magic value 0x%08lx!\n", magic);
 		return -ENODEV;
 	}
 
 	/* Check device version */
-	vm_dev->version = readl(vm_dev->base + VIRTIO_MMIO_VERSION);
+	vm_dev->version = virtio_readl(vm_dev->base + VIRTIO_MMIO_VERSION);
 	if (vm_dev->version < 1 || vm_dev->version > 2) {
 		dev_err(&pdev->dev, "Version %ld not supported!\n",
 				vm_dev->version);
 		return -ENXIO;
 	}
 
-	vm_dev->vdev.id.device = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_ID);
+	vm_dev->vdev.id.device = virtio_readl(vm_dev->base + VIRTIO_MMIO_DEVICE_ID);
 	if (vm_dev->vdev.id.device == 0) {
 		/*
 		 * virtio-mmio device with an ID 0 is a (dummy) placeholder
@@ -572,10 +601,10 @@ static int virtio_mmio_probe(struct platform_device *pdev)
 		 */
 		return -ENODEV;
 	}
-	vm_dev->vdev.id.vendor = readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID);
+	vm_dev->vdev.id.vendor = virtio_readl(vm_dev->base + VIRTIO_MMIO_VENDOR_ID);
 
 	if (vm_dev->version == 1) {
-		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);
+		virtio_writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE);
 
 		rc = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
 		/*
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index a493eac..652ccfb 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -119,6 +119,20 @@ struct virtio_device {
 	void *priv;
 };
 
+#define MAX_MMIO_NAME_SIZE 32
+
+struct virtio_mmio_ops {
+	char name[MAX_MMIO_NAME_SIZE];
+	u8 (*mmio_readb)(const volatile void __iomem *addr);
+	u16 (*mmio_readw)(const volatile void __iomem *addr);
+	u32 (*mmio_readl)(const volatile void __iomem *addr);
+	void (*mmio_writeb)(u8 val, volatile void __iomem *addr);
+	void (*mmio_writew)(u16 val, volatile void __iomem *addr);
+	void (*mmio_writel)(u32 val, volatile void __iomem *addr);
+};
+
+extern int register_virtio_mmio_ops(struct virtio_mmio_ops *ops);
+
 static inline struct virtio_device *dev_to_virtio(struct device *_dev)
 {
 	return container_of(_dev, struct virtio_device, dev);
-- 
2.7.4


-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:02 ` Srivatsa Vaddagiri
  (?)
@ 2020-04-30 10:07   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 38+ messages in thread
From: Michael S. Tsirkin @ 2020-04-30 10:07 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: konrad.wilk, jasowang, jan.kiszka, will, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> The Type-1 hypervisor we are dealing with does not allow for MMIO transport. 

How about PCI then?

-- 
MST


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:02 ` Srivatsa Vaddagiri
@ 2020-04-30 10:08   ` Will Deacon
  -1 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2020-04-30 10:08 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> The Type-1 hypervisor we are dealing with does not allow for MMIO transport. 
> [1] summarizes some of the problems we have in making virtio work on such
> hypervisors. This patch proposes a solution for transport problem viz how we can
> do config space IO on such a hypervisor. Hypervisor specific methods
> introduced allows for seamless IO of config space.

Seamless huh? You'd hope that might obviate the need for extra patches...

> This patch is meant to seek comments. If its considered to be in right
> direction, will work on making it more complete and send the next version!

What's stopping you from implementing the trapping support in the
hypervisor? Unlike the other patches you sent out, where the guest memory
is not accessible to the host, there doesn't seem to be any advantage to
not having trapping support, or am I missing something here?

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 10:02   ` Srivatsa Vaddagiri
@ 2020-04-30 10:14     ` Will Deacon
  -1 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2020-04-30 10:14 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

On Thu, Apr 30, 2020 at 03:32:56PM +0530, Srivatsa Vaddagiri wrote:
> Some hypervisors may not support MMIO transport i.e trap config
> space access and have it be handled by backend driver. They may
> allow other ways to interact with backend such as message-queue
> or doorbell API. This patch allows for hypervisor specific
> methods for config space IO.
> 
> Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
> ---
>  drivers/virtio/virtio_mmio.c | 131 ++++++++++++++++++++++++++-----------------
>  include/linux/virtio.h       |  14 +++++
>  2 files changed, 94 insertions(+), 51 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
> index 97d5725..69bfa35 100644
> --- a/drivers/virtio/virtio_mmio.c
> +++ b/drivers/virtio/virtio_mmio.c
> @@ -100,7 +100,35 @@ struct virtio_mmio_vq_info {
>  	struct list_head node;
>  };
>  
> +#ifdef CONFIG_VIRTIO_MMIO_OPS
>  
> +static struct virtio_mmio_ops *mmio_ops;
> +
> +#define virtio_readb(a)		mmio_ops->mmio_readl((a))
> +#define virtio_readw(a)		mmio_ops->mmio_readl((a))
> +#define virtio_readl(a)		mmio_ops->mmio_readl((a))
> +#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
> +#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
> +#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))

How exactly are these ops hooked up? I'm envisaging something like:

	ops = spec_compliant_ops;
	[...]
	if (firmware_says_hypervisor_is_buggy())
		ops = magic_qcom_ops;

am I wrong?

> +int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
> +{
> +	pr_info("Registered %s as mmio ops\n", ops->name);
> +	mmio_ops = ops;

Not looking good, and really defeats the point of standardising this stuff
imo.

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:08   ` Will Deacon
@ 2020-04-30 10:29     ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 10:29 UTC (permalink / raw)
  To: Will Deacon
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

* Will Deacon <will@kernel.org> [2020-04-30 11:08:22]:

> > This patch is meant to seek comments. If its considered to be in right
> > direction, will work on making it more complete and send the next version!
> 
> What's stopping you from implementing the trapping support in the
> hypervisor? Unlike the other patches you sent out, where the guest memory
> is not accessible to the host, there doesn't seem to be any advantage to
> not having trapping support, or am I missing something here?

Hi Will,
	I have had this discussion with the hypervisor folks. They seem to be
concerned about the complexity of having one VM's fault handled in another,
untrusted VM. They are not keen to add MMIO support.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 10:14     ` Will Deacon
@ 2020-04-30 10:34       ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 10:34 UTC (permalink / raw)
  To: Will Deacon
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

* Will Deacon <will@kernel.org> [2020-04-30 11:14:32]:

> > +#ifdef CONFIG_VIRTIO_MMIO_OPS
> >  
> > +static struct virtio_mmio_ops *mmio_ops;
> > +
> > +#define virtio_readb(a)		mmio_ops->mmio_readl((a))
> > +#define virtio_readw(a)		mmio_ops->mmio_readl((a))
> > +#define virtio_readl(a)		mmio_ops->mmio_readl((a))
> > +#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
> > +#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
> > +#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))
> 
> How exactly are these ops hooked up? I'm envisaging something like:
> 
> 	ops = spec_compliant_ops;
> 	[...]
> 	if (firmware_says_hypervisor_is_buggy())
> 		ops = magic_qcom_ops;
> 
> am I wrong?

If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
set to 'magic_qcom_ops', which uses a hypervisor-supported interface for IO (for
example: message_queue_send() and message_queue_receive() hypercalls).
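
To make that concrete, a rough sketch of what the readl hook could look like
(the CFG_SPACE_READ message ID and the hypercall signatures below are
placeholders, not the actual hypervisor ABI):

	static u32 qcom_mmio_readl(const volatile void __iomem *addr)
	{
		u32 val = 0;

		/* Fetch the config register contents from the backend VM */
		message_queue_send(CFG_SPACE_READ, (unsigned long)addr, sizeof(val));
		message_queue_receive(&val, sizeof(val));

		return val;
	}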

> > +int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
> > +{
> > +	pr_info("Registered %s as mmio ops\n", ops->name);
> > +	mmio_ops = ops;
> 
> Not looking good, and really defeats the point of standardising this stuff
> imo.

Ok. I guess the other option is to standardize on a new virtio transport (like
ivshmem2-virtio)?

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:29     ` Srivatsa Vaddagiri
@ 2020-04-30 10:39       ` Will Deacon
  -1 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2020-04-30 10:39 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

Hi Vatsa,

On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote:
> * Will Deacon <will@kernel.org> [2020-04-30 11:08:22]:
> 
> > > This patch is meant to seek comments. If its considered to be in right
> > > direction, will work on making it more complete and send the next version!
> > 
> > What's stopping you from implementing the trapping support in the
> > hypervisor? Unlike the other patches you sent out, where the guest memory
> > is not accessible to the host, there doesn't seem to be any advantage to
> > not having trapping support, or am I missing something here?
> 
> 	I have had this discussion with hypervisor folks. They seem to be
> concerned about complexity of having a VM's fault be handled in another
> untrusted VM. They are not keen to add MMIO support.

Right, but I'm concerned about forking the implementation from the spec
and I'm not keen to add these hooks ;)

What does your hook actually do? I'm assuming an HVC? If so, then where the
fault is handled seems to be unrelated and whether the guest exit is due to
an HVC or a stage-2 fault should be immaterial. In other words, I don't
follow why the trapping mechanism necessitates the way in which the fault is
handled.

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:07   ` Michael S. Tsirkin
@ 2020-04-30 10:40     ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 10:40 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: konrad.wilk, jasowang, jan.kiszka, will, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

* Michael S. Tsirkin <mst@redhat.com> [2020-04-30 06:07:56]:

> On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> > The Type-1 hypervisor we are dealing with does not allow for MMIO transport. 
> 
> How about PCI then?

Correct me if I am wrong, but virtio_pci basically uses the same low-level
primitives (readl/writel) on a platform such as ARM64? So the same issues
arise there as well.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 10:34       ` Srivatsa Vaddagiri
@ 2020-04-30 10:41         ` Will Deacon
  -1 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2020-04-30 10:41 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
> * Will Deacon <will@kernel.org> [2020-04-30 11:14:32]:
> 
> > > +#ifdef CONFIG_VIRTIO_MMIO_OPS
> > >  
> > > +static struct virtio_mmio_ops *mmio_ops;
> > > +
> > > +#define virtio_readb(a)		mmio_ops->mmio_readl((a))
> > > +#define virtio_readw(a)		mmio_ops->mmio_readl((a))
> > > +#define virtio_readl(a)		mmio_ops->mmio_readl((a))
> > > +#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
> > > +#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
> > > +#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))
> > 
> > How exactly are these ops hooked up? I'm envisaging something like:
> > 
> > 	ops = spec_compliant_ops;
> > 	[...]
> > 	if (firmware_says_hypervisor_is_buggy())
> > 		ops = magic_qcom_ops;
> > 
> > am I wrong?
> 
> If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
> set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
> example: message_queue_send() and message_queue_recevie() hypercalls).

Hmm, but then how would such a kernel work as a guest under all the
spec-compliant hypervisors out there?

> > > +int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
> > > +{
> > > +	pr_info("Registered %s as mmio ops\n", ops->name);
> > > +	mmio_ops = ops;
> > 
> > Not looking good, and really defeats the point of standardising this stuff
> > imo.
> 
> Ok. I guess the other option is to standardize on a new virtio transport (like
> ivshmem2-virtio)?

I haven't looked at that, but I suppose it depends on what your hypervisor
folks are willing to accommodate.

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:07   ` Michael S. Tsirkin
  (?)
@ 2020-04-30 10:56     ` Jason Wang
  -1 siblings, 0 replies; 38+ messages in thread
From: Jason Wang @ 2020-04-30 10:56 UTC (permalink / raw)
  To: Michael S. Tsirkin, Srivatsa Vaddagiri
  Cc: konrad.wilk, jan.kiszka, will, stefano.stabellini, iommu,
	virtualization, virtio-dev, tsoni, pratikp, christoffer.dall,
	alex.bennee, linux-kernel


On 2020/4/30 6:07 PM, Michael S. Tsirkin wrote:
> On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
>> The Type-1 hypervisor we are dealing with does not allow for MMIO transport.
> How about PCI then?


Or maybe you can use virtio-vdpa.

Thanks



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO
  2020-04-30 10:39       ` Will Deacon
  (?)
@ 2020-04-30 11:02         ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 11:02 UTC (permalink / raw)
  To: Will Deacon
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

* Will Deacon <will@kernel.org> [2020-04-30 11:39:19]:

> Hi Vatsa,
> 
> On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote:
> > > What's stopping you from implementing the trapping support in the
> > > hypervisor? Unlike the other patches you sent out, where the guest memory
> > > is not accessible to the host, there doesn't seem to be any advantage to
> > > not having trapping support, or am I missing something here?
> > 
> > 	I have had this discussion with hypervisor folks. They seem to be
> > concerned about complexity of having a VM's fault be handled in another
> > untrusted VM. They are not keen to add MMIO support.
> 
> Right, but I'm concerned about forking the implementation from the spec
> and I'm not keen to add these hooks ;)
> 
> What does your hook actually do? I'm assuming an HVC? 

Yes, it will issue message-queue related hypercalls.

> If so, then where the
> fault is handled seems to be unrelated and whether the guest exit is due to
> an HVC or a stage-2 fault should be immaterial. 

A stage-2 fault would normally be considered fatal and result in termination of
the guest. Modifying that behavior to allow resumption in the case of virtio
config space access, especially with the untrusted VM included in that flow, is
perhaps the concern. HVC calls, OTOH, are already well-vetted interfaces that
the hypervisor needs no additional plumbing to handle.

> In other words, I don't
> follow why the trapping mechanism necessitates the way in which the fault is
> handled.

Let me check with our hypervisor folks again. Thanks for your inputs.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 10:41         ` Will Deacon
@ 2020-04-30 11:11           ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 11:11 UTC (permalink / raw)
  To: Will Deacon
  Cc: konrad.wilk, mst, jasowang, jan.kiszka, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

* Will Deacon <will@kernel.org> [2020-04-30 11:41:50]:

> On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
> > If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
> > set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
> > example: message_queue_send() and message_queue_receive() hypercalls).
> 
> Hmm, but then how would such a kernel work as a guest under all the
> spec-compliant hypervisors out there?

Ok I see your point and yes for better binary compatibility, the ops have to be
set based on runtime detection of hypervisor capabilities.
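
Roughly what I have in mind is something like the below, selected at runtime
rather than at build time. This is only a sketch: qcom_hyp_msgq_send()/
qcom_hyp_msgq_recv() and the detection hook are placeholders for whatever the
hypervisor interface ends up being called, not an existing API.

/* Sketch only -- placeholder hypercall wrappers, not a real interface. */
static u32 qcom_msgq_readl(void __iomem *addr)
{
	u32 val = 0;

	/* ask the hypervisor to perform the config-space read for us */
	qcom_hyp_msgq_send(QCOM_VIRTIO_CFG_READ, (unsigned long)addr, sizeof(val));
	qcom_hyp_msgq_recv(&val, sizeof(val));
	return val;
}

static void qcom_msgq_writel(u32 val, void __iomem *addr)
{
	qcom_hyp_msgq_send(QCOM_VIRTIO_CFG_WRITE, (unsigned long)addr, val);
}

static struct virtio_mmio_ops qcom_msgq_ops = {
	.name		= "qcom_msgq",
	.mmio_readl	= qcom_msgq_readl,
	.mmio_writel	= qcom_msgq_writel,
	/* 8-bit and 16-bit accessors along the same lines */
};

static int __init qcom_virtio_mmio_ops_init(void)
{
	/* keep the spec-compliant readl/writel path on normal hypervisors */
	if (!qcom_hyp_needs_msgq_transport())	/* placeholder detection hook */
		return 0;

	return register_virtio_mmio_ops(&qcom_msgq_ops);
}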

> > Ok. I guess the other option is to standardize on a new virtio transport (like
> > ivshmem2-virtio)?
> 
> I haven't looked at that, but I suppose it depends on what your hypervisor
> folks are willing to accommodate.

I believe ivshmem2_virtio requires hypervisor to support PCI device emulation
(for life-cycle management of VMs), which our hypervisor may not support. A
simple shared memory and doorbell or message-queue based transport will work for
us.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 11:11           ` Srivatsa Vaddagiri
  (?)
@ 2020-04-30 12:59             ` Jan Kiszka
  -1 siblings, 0 replies; 38+ messages in thread
From: Jan Kiszka @ 2020-04-30 12:59 UTC (permalink / raw)
  To: Srivatsa Vaddagiri, Will Deacon
  Cc: konrad.wilk, mst, jasowang, stefano.stabellini, iommu,
	virtualization, virtio-dev, tsoni, pratikp, christoffer.dall,
	alex.bennee, linux-kernel

On 30.04.20 13:11, Srivatsa Vaddagiri wrote:
> * Will Deacon <will@kernel.org> [2020-04-30 11:41:50]:
> 
>> On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
>>> If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
>>> set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
>>> example: message_queue_send() and message_queue_receive() hypercalls).
>>
>> Hmm, but then how would such a kernel work as a guest under all the
>> spec-compliant hypervisors out there?
> 
> Ok I see your point and yes for better binary compatibility, the ops have to be
> set based on runtime detection of hypervisor capabilities.
> 
>>> Ok. I guess the other option is to standardize on a new virtio transport (like
>>> ivshmem2-virtio)?
>>
>> I haven't looked at that, but I suppose it depends on what your hypervisor
>> folks are willing to accommodate.
> 
> I believe ivshmem2_virtio requires hypervisor to support PCI device emulation
> (for life-cycle management of VMs), which our hypervisor may not support. A
> simple shared memory and doorbell or message-queue based transport will work for
> us.

As written in our private conversation, a mapping of the ivshmem2 device 
discovery to platform mechanism (device tree etc.) and maybe even the 
register access for doorbell and life-cycle management to something 
hypercall-like would be imaginable. What would count more from virtio 
perspective is a common mapping on a shared memory transport.

That said, I also warned about all the features that PCI already defined 
(such as message-based interrupts) which you may have to add when going 
a different way for the shared memory device.

Jan

-- 
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 12:59             ` Jan Kiszka
  (?)
@ 2020-04-30 13:33               ` Srivatsa Vaddagiri
  -1 siblings, 0 replies; 38+ messages in thread
From: Srivatsa Vaddagiri @ 2020-04-30 13:33 UTC (permalink / raw)
  To: Jan Kiszka
  Cc: Will Deacon, konrad.wilk, mst, jasowang, stefano.stabellini,
	iommu, virtualization, virtio-dev, tsoni, pratikp,
	christoffer.dall, alex.bennee, linux-kernel

* Jan Kiszka <jan.kiszka@siemens.com> [2020-04-30 14:59:50]:

> >I believe ivshmem2_virtio requires hypervisor to support PCI device emulation
> >(for life-cycle management of VMs), which our hypervisor may not support. A
> >simple shared memory and doorbell or message-queue based transport will work for
> >us.
> 
> As written in our private conversation, a mapping of the ivshmem2 device
> discovery to platform mechanism (device tree etc.) and maybe even the
> register access for doorbell and life-cycle management to something
> hypercall-like would be imaginable. What would count more from virtio
> perspective is a common mapping on a shared memory transport.

Yes that sounds simpler for us.

> That said, I also warned about all the features that PCI already defined
> (such as message-based interrupts) which you may have to add when going a
> different way for the shared memory device.

Is it really required to present this shared memory as belonging to a PCI
device? I would expect the device-tree to indicate the presence of this shared
memory region, which we should be able to present to ivshmem2 as shared memory
region to use (along with some handles for doorbell or message queue use).
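
On our side I would expect something along these lines to pick that up. Sketch
only: the compatible string and the way the region and doorbell are described
are made up for illustration, not a defined binding.

/* Sketch only: illustrative platform driver, not a defined DT binding. */
#include <linux/err.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int example_shm_probe(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *shm;
	int doorbell_irq;

	/* shared-memory window described by the hypervisor in the device tree */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	shm = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(shm))
		return PTR_ERR(shm);

	/* doorbell/message-queue notification from the peer */
	doorbell_irq = platform_get_irq(pdev, 0);
	if (doorbell_irq < 0)
		return doorbell_irq;

	/* hand shm + doorbell_irq to the shared-memory virtio transport here */
	return 0;
}

static const struct of_device_id example_shm_of_match[] = {
	{ .compatible = "example,virtio-shmem" },	/* placeholder name */
	{ }
};

static struct platform_driver example_shm_driver = {
	.probe	= example_shm_probe,
	.driver	= {
		.name		= "example-virtio-shmem",
		.of_match_table	= example_shm_of_match,
	},
};
module_platform_driver(example_shm_driver);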

I understand the usefulness of modeling the shared memory as part of device so
that hypervisor can send events related to peers going down or coming up. In our
case, there will be other means to discover those events and avoiding this
requirement on hypervisor (to emulate PCI) will simplify the solution for us.

Any idea when we can expect virtio over ivshmem2 to become available?!
 
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
  2020-04-30 13:33               ` Srivatsa Vaddagiri
  (?)
@ 2020-04-30 19:34                 ` Michael S. Tsirkin
  -1 siblings, 0 replies; 38+ messages in thread
From: Michael S. Tsirkin @ 2020-04-30 19:34 UTC (permalink / raw)
  To: Srivatsa Vaddagiri
  Cc: Jan Kiszka, Will Deacon, konrad.wilk, jasowang,
	stefano.stabellini, iommu, virtualization, virtio-dev, tsoni,
	pratikp, christoffer.dall, alex.bennee, linux-kernel

On Thu, Apr 30, 2020 at 07:03:21PM +0530, Srivatsa Vaddagiri wrote:
> * Jan Kiszka <jan.kiszka@siemens.com> [2020-04-30 14:59:50]:
> 
> > >I believe ivshmem2_virtio requires hypervisor to support PCI device emulation
> > >(for life-cycle management of VMs), which our hypervisor may not support.

PCI is mostly just 2 registers: one selects the affected device and register, the other carries the data to read/write.
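
(For reference, that is the legacy x86 configuration mechanism -- literally one
address register and one data register; the modern/ECAM path is a single
trapped memory region instead. A sketch of a config read through it, just to
show how small the surface is:)

/* Sketch: the legacy PCI config mechanism really is two I/O registers. */
#define PCI_CONF_ADDR	0xcf8	/* selects bus/device/function/offset */
#define PCI_CONF_DATA	0xcfc	/* carries the data to read or write */

static u32 legacy_pci_conf_read32(u8 bus, u8 dev, u8 fn, u8 off)
{
	u32 addr = (1u << 31) | (bus << 16) | (dev << 11) | (fn << 8) | (off & 0xfc);

	outl(addr, PCI_CONF_ADDR);	/* 1st register: which config dword */
	return inl(PCI_CONF_DATA);	/* 2nd register: the data itself */
}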

> A
> > >simple shared memory and doorbell or message-queue based transport will work for
> > >us.
> > 
> > As written in our private conversation, a mapping of the ivshmem2 device
> > discovery to platform mechanism (device tree etc.) and maybe even the
> > register access for doorbell and life-cycle management to something
> > hypercall-like would be imaginable. What would count more from virtio
> > perspective is a common mapping on a shared memory transport.
> 
> Yes that sounds simpler for us.
> 
> > That said, I also warned about all the features that PCI already defined
> > (such as message-based interrupts) which you may have to add when going a
> > different way for the shared memory device.
> 
> Is it really required to present this shared memory as belonging to a PCI
> device?

But then you will go on and add MSI, and NUMA, and security, and and and ...

> I would expect the device-tree to indicate the presence of this shared
> memory region, which we should be able to present to ivshmem2 as shared memory
> region to use (along with some handles for doorbell or message queue use).
> 
> I understand the usefulness of modeling the shared memory as part of device so
> that hypervisor can send events related to peers going down or coming up. In our
> case, there will be other means to discover those events and avoiding this
> requirement on hypervisor (to emulate PCI) will simplify the solution for us.
> 
> Any idea when we can expect virtio over ivshmem2 to become available?!

Check out the virtio spec. Right at the beginning it states:

	These devices are
	found in virtual environments, yet by design they look like physical devices to the guest within
	the virtual machine - and this document treats them as such. This similarity allows the guest to
	use standard drivers and discovery mechanisms


> -- 
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
> of Code Aurora Forum, hosted by The Linux Foundation


^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread, other threads:[~2020-04-30 19:34 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
2020-04-30 10:02 [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO Srivatsa Vaddagiri
2020-04-30 10:02 ` Srivatsa Vaddagiri
2020-04-30 10:02 ` [RFC/PATCH 1/1] virtio: Introduce MMIO ops Srivatsa Vaddagiri
2020-04-30 10:02   ` Srivatsa Vaddagiri
2020-04-30 10:14   ` Will Deacon
2020-04-30 10:14     ` Will Deacon
2020-04-30 10:34     ` Srivatsa Vaddagiri
2020-04-30 10:34       ` Srivatsa Vaddagiri
2020-04-30 10:41       ` Will Deacon
2020-04-30 10:41         ` Will Deacon
2020-04-30 11:11         ` Srivatsa Vaddagiri
2020-04-30 11:11           ` Srivatsa Vaddagiri
2020-04-30 12:59           ` Jan Kiszka
2020-04-30 12:59             ` [virtio-dev] " Jan Kiszka
2020-04-30 12:59             ` Jan Kiszka
2020-04-30 13:33             ` Srivatsa Vaddagiri
2020-04-30 13:33               ` Srivatsa Vaddagiri
2020-04-30 13:33               ` Srivatsa Vaddagiri
2020-04-30 19:34               ` Michael S. Tsirkin
2020-04-30 19:34                 ` [virtio-dev] " Michael S. Tsirkin
2020-04-30 19:34                 ` Michael S. Tsirkin
2020-04-30 10:07 ` [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO Michael S. Tsirkin
2020-04-30 10:07   ` [virtio-dev] " Michael S. Tsirkin
2020-04-30 10:07   ` Michael S. Tsirkin
2020-04-30 10:40   ` Srivatsa Vaddagiri
2020-04-30 10:40     ` Srivatsa Vaddagiri
2020-04-30 10:56   ` Jason Wang
2020-04-30 10:56     ` [virtio-dev] " Jason Wang
2020-04-30 10:56     ` Jason Wang
2020-04-30 10:08 ` Will Deacon
2020-04-30 10:08   ` Will Deacon
2020-04-30 10:29   ` Srivatsa Vaddagiri
2020-04-30 10:29     ` Srivatsa Vaddagiri
2020-04-30 10:39     ` Will Deacon
2020-04-30 10:39       ` Will Deacon
2020-04-30 11:02       ` Srivatsa Vaddagiri
2020-04-30 11:02         ` Srivatsa Vaddagiri
2020-04-30 11:02         ` Srivatsa Vaddagiri
