All of lore.kernel.org
* [PATCH v7 0/9] virtio DMA API, yet again
@ 2016-02-03  5:46 ` Andy Lutomirski
  0 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This switches virtio to use the DMA API on Xen and if requested by
module option.

This fixes virtio on Xen, and it shouldn't break anything else because
it's off by default on everything except Xen PV on x86.

To the Xen people: is this okay?  If it doesn't work on other Xen
variants (PVH? HVM?), can you submit follow-up patches to fix it?

To everyone else: we've waffled on this for way too long.  I think
we should get a DMA API implementation in with a conservative
policy like this rather than waiting until we achieve perfection.
I'm tired of carrying these patches around.

I changed queue allocation around a bit in this version.  Per Michael's
request, we no longer use dma_zalloc_coherent in the !dma_api case.
Instead we use alloc_pages_exact, just like the current code does.
This simplifies the ring address accessors, because they can always
load from the dma addr rather than depending on vring_use_dma_api
themselves.
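
As a sketch (approximate names, not necessarily the final code), an
accessor now just returns the stored handle:

	dma_addr_t virtqueue_get_desc_addr(struct virtqueue *_vq)
	{
		struct vring_virtqueue *vq = to_vvq(_vq);

		/* Valid whether or not the DMA API allocated the ring --
		 * in the legacy case this is just the physical address. */
		return vq->queue_dma_addr;
	}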

There's an odd warning in here if the ring's physical address
doesn't fit in a dma_addr_t.  This could only possibly happen on
really weird configurations in which phys_addr_t is wider than
dma_addr_t.  AFAICT this is only possible on i386 PAE systems and on
MIPS, and even there it only happens if highmem is off.  But that
means we're safe, since we should never end up with high allocations
on non-highmem systems unless we explicitly ask for them, which we
don't.
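
To illustrate the legacy allocation path and that warning (a sketch
only; the helper name and exact shape are approximate):

	static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
				       dma_addr_t *dma_handle, gfp_t gfp)
	{
		void *queue;
		phys_addr_t phys_addr;

		if (vring_use_dma_api(vdev))
			return dma_alloc_coherent(vdev->dev.parent, size,
						  dma_handle, gfp);

		/* Legacy behavior: plain pages, with the physical address
		 * used directly as the handle. */
		queue = alloc_pages_exact(PAGE_ALIGN(size), gfp);
		if (queue) {
			phys_addr = virt_to_phys(queue);
			*dma_handle = (dma_addr_t)phys_addr;

			/* Can only fire if phys_addr_t is wider than
			 * dma_addr_t and the ring landed above the range
			 * dma_addr_t can express. */
			WARN_ON_ONCE(*dma_handle != phys_addr);
		}
		return queue;
	}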

If this is too scary, I can add yet more cruft to avoid it, but
it seems harmless enough to me, and it means that the driver will
be totally clean once all the vring_use_dma_api calls go away.

Michael, if these survive review, can you stage these in your tree?
Can you also take a look at tools/virtio?  I probably broke it, but I
couldn't get it to build without these patches either, so I'm stuck.

Changes from v6:
 - Remove HAVE_DMA_ATTRS and add Acked-by (Cornelia)
 - Add some missing signed-off-by lines from me (whoops)
 - Rework queue allocation (Michael)

Changes from v5:
 - Typo fixes (David Woodhouse)
 - Use xen_domain() to detect Xen (David Vrabel)
 - Pass struct vring_virtqueue * into vring_use_dma_api for future proofing
 - Removed module parameter (Michael)

Changes from v4:
 - Bake vring_use_dma_api in from the beginning.
 - Automatically enable only on Xen.
 - Add module parameter.
 - Add s390 and alpha DMA API implementations.
 - Rebase to 4.5-rc1.

Changes from v3:
 - More big-endian fixes.
 - Added better virtio-ring APIs that handle allocation and use them in
   virtio-mmio and virtio-pci.
 - Switch to Michael's virtio-net patch.

Changes from v2:
 - Fix vring_mapping_error incorrect argument

Changes from v1:
 - Fix an endian conversion error causing a BUG to hit.
 - Fix a DMA ordering issue (swiotlb=force works now).
 - Minor cleanups.

Andy Lutomirski (6):
  vring: Introduce vring_use_dma_api()
  virtio_ring: Support DMA APIs
  virtio: Add improved queue allocation API
  virtio_mmio: Use the DMA API if enabled
  virtio_pci: Use the DMA API if enabled
  vring: Use the DMA API on Xen

Christian Borntraeger (3):
  dma: Provide simple noop dma ops
  alpha/dma: use common noop dma ops
  s390/dma: Allow per device dma ops

 arch/alpha/kernel/pci-noop.c        |  46 +---
 arch/s390/Kconfig                   |   5 +-
 arch/s390/include/asm/device.h      |   6 +-
 arch/s390/include/asm/dma-mapping.h |   6 +-
 arch/s390/pci/pci.c                 |   1 +
 arch/s390/pci/pci_dma.c             |   4 +-
 drivers/virtio/Kconfig              |   2 +-
 drivers/virtio/virtio_mmio.c        |  67 ++----
 drivers/virtio/virtio_pci_common.h  |   6 -
 drivers/virtio/virtio_pci_legacy.c  |  42 ++--
 drivers/virtio/virtio_pci_modern.c  |  61 ++---
 drivers/virtio/virtio_ring.c        | 439 +++++++++++++++++++++++++++++++-----
 include/linux/dma-mapping.h         |   2 +
 include/linux/virtio.h              |  23 +-
 include/linux/virtio_ring.h         |  35 +++
 lib/Makefile                        |   1 +
 lib/dma-noop.c                      |  75 ++++++
 tools/virtio/linux/dma-mapping.h    |  17 ++
 18 files changed, 594 insertions(+), 244 deletions(-)
 create mode 100644 lib/dma-noop.c
 create mode 100644 tools/virtio/linux/dma-mapping.h

-- 
2.5.0

^ permalink raw reply	[flat|nested] 73+ messages in thread


* [PATCH v7 1/9] dma: Provide simple noop dma ops
  2016-02-03  5:46 ` Andy Lutomirski
  (?)
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

From: Christian Borntraeger <borntraeger@de.ibm.com>

We are going to require dma_ops for several common drivers, even for
systems that do have an identity mapping. Let's provide some minimal
no-op dma_ops that can be used for that purpose.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 include/linux/dma-mapping.h |  2 ++
 lib/Makefile                |  1 +
 lib/dma-noop.c              | 75 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 78 insertions(+)
 create mode 100644 lib/dma-noop.c

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 75857cda38e9..c0b27ff2c784 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -70,6 +70,8 @@ struct dma_map_ops {
 	int is_phys;
 };
 
+extern struct dma_map_ops dma_noop_ops;
+
 #define DMA_BIT_MASK(n)	(((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
 
 #define DMA_MASK_NONE	0x0ULL
diff --git a/lib/Makefile b/lib/Makefile
index a7c26a41a738..a572b86a1b1d 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -18,6 +18,7 @@ lib-y := ctype.o string.o vsprintf.o cmdline.o \
 obj-$(CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS) += usercopy.o
 lib-$(CONFIG_MMU) += ioremap.o
 lib-$(CONFIG_SMP) += cpumask.o
+lib-$(CONFIG_HAS_DMA) += dma-noop.o
 
 lib-y	+= kobject.o klist.o
 obj-y	+= lockref.o
diff --git a/lib/dma-noop.c b/lib/dma-noop.c
new file mode 100644
index 000000000000..72145646857e
--- /dev/null
+++ b/lib/dma-noop.c
@@ -0,0 +1,75 @@
+/*
+ *	lib/dma-noop.c
+ *
+ * Simple DMA noop-ops that map 1:1 with memory
+ */
+#include <linux/export.h>
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/scatterlist.h>
+
+static void *dma_noop_alloc(struct device *dev, size_t size,
+			    dma_addr_t *dma_handle, gfp_t gfp,
+			    struct dma_attrs *attrs)
+{
+	void *ret;
+
+	ret = (void *)__get_free_pages(gfp, get_order(size));
+	if (ret)
+		*dma_handle = virt_to_phys(ret);
+	return ret;
+}
+
+static void dma_noop_free(struct device *dev, size_t size,
+			  void *cpu_addr, dma_addr_t dma_addr,
+			  struct dma_attrs *attrs)
+{
+	free_pages((unsigned long)cpu_addr, get_order(size));
+}
+
+static dma_addr_t dma_noop_map_page(struct device *dev, struct page *page,
+				      unsigned long offset, size_t size,
+				      enum dma_data_direction dir,
+				      struct dma_attrs *attrs)
+{
+	return page_to_phys(page) + offset;
+}
+
+static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
+			     enum dma_data_direction dir, struct dma_attrs *attrs)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sgl, sg, nents, i) {
+		void *va;
+
+		BUG_ON(!sg_page(sg));
+		va = sg_virt(sg);
+		sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va);
+		sg_dma_len(sg) = sg->length;
+	}
+
+	return nents;
+}
+
+static int dma_noop_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+	return 0;
+}
+
+static int dma_noop_supported(struct device *dev, u64 mask)
+{
+	return 1;
+}
+
+struct dma_map_ops dma_noop_ops = {
+	.alloc			= dma_noop_alloc,
+	.free			= dma_noop_free,
+	.map_page		= dma_noop_map_page,
+	.map_sg			= dma_noop_map_sg,
+	.mapping_error		= dma_noop_mapping_error,
+	.dma_supported		= dma_noop_supported,
+};
+
+EXPORT_SYMBOL(dma_noop_ops);
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread


* [PATCH v7 2/9] alpha/dma: use common noop dma ops
  2016-02-03  5:46 ` Andy Lutomirski
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

From: Christian Borntraeger <borntraeger@de.ibm.com>

Some of the alpha pci noop dma ops are identical to the common ones.
Use them.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/alpha/kernel/pci-noop.c | 46 ++++----------------------------------------
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/arch/alpha/kernel/pci-noop.c b/arch/alpha/kernel/pci-noop.c
index 2b1f4a1e9272..8e735b5e56bd 100644
--- a/arch/alpha/kernel/pci-noop.c
+++ b/arch/alpha/kernel/pci-noop.c
@@ -123,44 +123,6 @@ static void *alpha_noop_alloc_coherent(struct device *dev, size_t size,
 	return ret;
 }
 
-static void alpha_noop_free_coherent(struct device *dev, size_t size,
-				     void *cpu_addr, dma_addr_t dma_addr,
-				     struct dma_attrs *attrs)
-{
-	free_pages((unsigned long)cpu_addr, get_order(size));
-}
-
-static dma_addr_t alpha_noop_map_page(struct device *dev, struct page *page,
-				      unsigned long offset, size_t size,
-				      enum dma_data_direction dir,
-				      struct dma_attrs *attrs)
-{
-	return page_to_pa(page) + offset;
-}
-
-static int alpha_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
-			     enum dma_data_direction dir, struct dma_attrs *attrs)
-{
-	int i;
-	struct scatterlist *sg;
-
-	for_each_sg(sgl, sg, nents, i) {
-		void *va;
-
-		BUG_ON(!sg_page(sg));
-		va = sg_virt(sg);
-		sg_dma_address(sg) = (dma_addr_t)virt_to_phys(va);
-		sg_dma_len(sg) = sg->length;
-	}
-
-	return nents;
-}
-
-static int alpha_noop_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return 0;
-}
-
 static int alpha_noop_supported(struct device *dev, u64 mask)
 {
 	return mask < 0x00ffffffUL ? 0 : 1;
@@ -168,10 +130,10 @@ static int alpha_noop_supported(struct device *dev, u64 mask)
 
 struct dma_map_ops alpha_noop_ops = {
 	.alloc			= alpha_noop_alloc_coherent,
-	.free			= alpha_noop_free_coherent,
-	.map_page		= alpha_noop_map_page,
-	.map_sg			= alpha_noop_map_sg,
-	.mapping_error		= alpha_noop_mapping_error,
+	.free			= dma_noop_free_coherent,
+	.map_page		= dma_noop_map_page,
+	.map_sg			= dma_noop_map_sg,
+	.mapping_error		= dma_noop_mapping_error,
 	.dma_supported		= alpha_noop_supported,
 };
 
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread


* [PATCH v7 3/9] s390/dma: Allow per device dma ops
  2016-02-03  5:46 ` Andy Lutomirski
  (?)
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

From: Christian Borntraeger <borntraeger@de.ibm.com>

As virtio-ccw will have dma ops, we can no longer default to the
zPCI ones. Make use of dev_archdata to keep the dma_ops per device.
The pci devices now use that to override the default, and the
default is changed to use the noop ops for everything that does not
specify a device specific one.
To compile without PCI support we will enable HAS_DMA all the time,
via the default config in lib/Kconfig.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Acked-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/s390/Kconfig                   | 5 +----
 arch/s390/include/asm/device.h      | 6 +++++-
 arch/s390/include/asm/dma-mapping.h | 6 ++++--
 arch/s390/pci/pci.c                 | 1 +
 arch/s390/pci/pci_dma.c             | 4 ++--
 5 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 3be9c832dec1..b29c66e77e32 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -124,6 +124,7 @@ config S390
 	select HAVE_CMPXCHG_DOUBLE
 	select HAVE_CMPXCHG_LOCAL
 	select HAVE_DEBUG_KMEMLEAK
+	select HAVE_DMA_API_DEBUG
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
 	select HAVE_FTRACE_MCOUNT_RECORD
@@ -619,10 +620,6 @@ config HAS_IOMEM
 config IOMMU_HELPER
 	def_bool PCI
 
-config HAS_DMA
-	def_bool PCI
-	select HAVE_DMA_API_DEBUG
-
 config NEED_SG_DMA_LENGTH
 	def_bool PCI
 
diff --git a/arch/s390/include/asm/device.h b/arch/s390/include/asm/device.h
index d8f9872b0e2d..4a9f35e0973f 100644
--- a/arch/s390/include/asm/device.h
+++ b/arch/s390/include/asm/device.h
@@ -3,5 +3,9 @@
  *
  * This file is released under the GPLv2
  */
-#include <asm-generic/device.h>
+struct dev_archdata {
+	struct dma_map_ops *dma_ops;
+};
 
+struct pdev_archdata {
+};
diff --git a/arch/s390/include/asm/dma-mapping.h b/arch/s390/include/asm/dma-mapping.h
index e64bfcb9702f..3249b7464889 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -11,11 +11,13 @@
 
 #define DMA_ERROR_CODE		(~(dma_addr_t) 0x0)
 
-extern struct dma_map_ops s390_dma_ops;
+extern struct dma_map_ops s390_pci_dma_ops;
 
 static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	return &s390_dma_ops;
+	if (dev && dev->archdata.dma_ops)
+		return dev->archdata.dma_ops;
+	return &dma_noop_ops;
 }
 
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 11d4f277e9f6..f5931135b9ae 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -649,6 +649,7 @@ int pcibios_add_device(struct pci_dev *pdev)
 
 	zdev->pdev = pdev;
 	pdev->dev.groups = zpci_attr_groups;
+	pdev->dev.archdata.dma_ops = &s390_pci_dma_ops;
 	zpci_map_resources(pdev);
 
 	for (i = 0; i < PCI_BAR_COUNT; i++) {
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 4638b93c7632..a79173ec54b9 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -544,7 +544,7 @@ static int __init dma_debug_do_init(void)
 }
 fs_initcall(dma_debug_do_init);
 
-struct dma_map_ops s390_dma_ops = {
+struct dma_map_ops s390_pci_dma_ops = {
 	.alloc		= s390_dma_alloc,
 	.free		= s390_dma_free,
 	.map_sg		= s390_dma_map_sg,
@@ -555,7 +555,7 @@ struct dma_map_ops s390_dma_ops = {
 	.is_phys	= 0,
 	/* dma_supported is unconditionally true without a callback */
 };
-EXPORT_SYMBOL_GPL(s390_dma_ops);
+EXPORT_SYMBOL_GPL(s390_pci_dma_ops);
 
 static int __init s390_iommu_setup(char *str)
 {
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread


* [PATCH v7 4/9] vring: Introduce vring_use_dma_api()
  2016-02-03  5:46 ` Andy Lutomirski
  (?)
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This is a kludge, but no one has come up with a better idea yet.
We'll introduce DMA API support guarded by vring_use_dma_api().
Eventually we may be able to return true on more and more systems,
and hopefully we can get rid of vring_use_dma_api() entirely some
day.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_ring.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index e12e385f7ac3..ab0be6c084f6 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -104,6 +104,30 @@ struct vring_virtqueue {
 
 #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
 
+/*
+ * The interaction between virtio and a possible IOMMU is a mess.
+ *
+ * On most systems with virtio, physical addresses match bus addresses,
+ * and it doesn't particularly matter whether we use the DMA API.
+ *
+ * On some systems, including Xen and any system with a physical device
+ * that speaks virtio behind a physical IOMMU, we must use the DMA API
+ * for virtio DMA to work at all.
+ *
+ * On other systems, including SPARC and PPC64, virtio-pci devices are
+ * enumerated as though they are behind an IOMMU, but the virtio host
+ * ignores the IOMMU, so we must either pretend that the IOMMU isn't
+ * there or somehow map everything as the identity.
+ *
+ * For the time being, we preserve historic behavior and bypass the DMA
+ * API.
+ */
+
+static bool vring_use_dma_api(struct virtio_device *vdev)
+{
+	return false;
+}
+
 static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
 					 unsigned int total_sg, gfp_t gfp)
 {
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread
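Later in this series the gate is switched on for Xen guests using
xen_domain().  A minimal sketch of that shape (illustrative only, not
the exact follow-up patch):

#include <xen/xen.h>

static bool vring_use_dma_api(struct virtio_device *vdev)
{
	/*
	 * Xen guests need the DMA API so that swiotlb-xen / grant
	 * mappings can translate guest addresses for the backend.
	 */
	if (xen_domain())
		return true;

	return false;
}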

* [PATCH v7 5/9] virtio_ring: Support DMA APIs
  2016-02-03  5:46 ` Andy Lutomirski
  (?)
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

virtio_ring currently sends the device (usually a hypervisor)
physical addresses of its I/O buffers.  This is okay when DMA
addresses and physical addresses are the same thing, but this isn't
always the case.  For example, this never works on Xen guests, and
it is likely to fail if a physical "virtio" device ever ends up
behind an IOMMU or swiotlb.

The immediate use case for me is to enable virtio on Xen guests.
For that to work, we need vring to support DMA address translation
as well as a corresponding change to virtio_pci or to another
driver.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/Kconfig           |   2 +-
 drivers/virtio/virtio_ring.c     | 200 ++++++++++++++++++++++++++++++++-------
 tools/virtio/linux/dma-mapping.h |  17 ++++
 3 files changed, 183 insertions(+), 36 deletions(-)
 create mode 100644 tools/virtio/linux/dma-mapping.h

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index cab9f3f63a38..77590320d44c 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -60,7 +60,7 @@ config VIRTIO_INPUT
 
  config VIRTIO_MMIO
 	tristate "Platform bus driver for memory mapped virtio devices"
-	depends on HAS_IOMEM
+	depends on HAS_IOMEM && HAS_DMA
  	select VIRTIO
  	---help---
  	 This drivers provides support for memory mapped virtio
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index ab0be6c084f6..9abc008ff7ea 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -24,6 +24,7 @@
 #include <linux/module.h>
 #include <linux/hrtimer.h>
 #include <linux/kmemleak.h>
+#include <linux/dma-mapping.h>
 
 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
@@ -54,6 +55,11 @@
 #define END_USE(vq)
 #endif
 
+struct vring_desc_state {
+	void *data;			/* Data for callback. */
+	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
+};
+
 struct vring_virtqueue {
 	struct virtqueue vq;
 
@@ -98,8 +104,8 @@ struct vring_virtqueue {
 	ktime_t last_add_time;
 #endif
 
-	/* Tokens for callbacks. */
-	void *data[];
+	/* Per-descriptor state. */
+	struct vring_desc_state desc_state[];
 };
 
 #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
@@ -128,6 +134,79 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
 	return false;
 }
 
+/*
+ * The DMA ops on various arches are rather gnarly right now, and
+ * making all of the arch DMA ops work on the vring device itself
+ * is a mess.  For now, we use the parent device for DMA ops.
+ */
+struct device *vring_dma_dev(const struct vring_virtqueue *vq)
+{
+	return vq->vq.vdev->dev.parent;
+}
+
+/* Map one sg entry. */
+static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
+				   struct scatterlist *sg,
+				   enum dma_data_direction direction)
+{
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return (dma_addr_t)sg_phys(sg);
+
+	/*
+	 * We can't use dma_map_sg, because we don't use scatterlists in
+	 * the way it expects (we don't guarantee that the scatterlist
+	 * will exist for the lifetime of the mapping).
+	 */
+	return dma_map_page(vring_dma_dev(vq),
+			    sg_page(sg), sg->offset, sg->length,
+			    direction);
+}
+
+static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
+				   void *cpu_addr, size_t size,
+				   enum dma_data_direction direction)
+{
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return (dma_addr_t)virt_to_phys(cpu_addr);
+
+	return dma_map_single(vring_dma_dev(vq),
+			      cpu_addr, size, direction);
+}
+
+static void vring_unmap_one(const struct vring_virtqueue *vq,
+			    struct vring_desc *desc)
+{
+	u16 flags;
+
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return;
+
+	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
+
+	if (flags & VRING_DESC_F_INDIRECT) {
+		dma_unmap_single(vring_dma_dev(vq),
+				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
+				 virtio32_to_cpu(vq->vq.vdev, desc->len),
+				 (flags & VRING_DESC_F_WRITE) ?
+				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	} else {
+		dma_unmap_page(vring_dma_dev(vq),
+			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
+			       virtio32_to_cpu(vq->vq.vdev, desc->len),
+			       (flags & VRING_DESC_F_WRITE) ?
+			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	}
+}
+
+static int vring_mapping_error(const struct vring_virtqueue *vq,
+			       dma_addr_t addr)
+{
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return 0;
+
+	return dma_mapping_error(vring_dma_dev(vq), addr);
+}
+
 static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
 					 unsigned int total_sg, gfp_t gfp)
 {
@@ -161,7 +240,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	struct vring_virtqueue *vq = to_vvq(_vq);
 	struct scatterlist *sg;
 	struct vring_desc *desc;
-	unsigned int i, n, avail, descs_used, uninitialized_var(prev);
+	unsigned int i, n, avail, descs_used, uninitialized_var(prev), err_idx;
 	int head;
 	bool indirect;
 
@@ -201,21 +280,15 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 
 	if (desc) {
 		/* Use a single buffer which doesn't continue */
-		vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
-		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, virt_to_phys(desc));
-		/* avoid kmemleak false positive (hidden by virt_to_phys) */
-		kmemleak_ignore(desc);
-		vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc));
-
+		indirect = true;
 		/* Set up rest to use this indirect table. */
 		i = 0;
 		descs_used = 1;
-		indirect = true;
 	} else {
+		indirect = false;
 		desc = vq->vring.desc;
 		i = head;
 		descs_used = total_sg;
-		indirect = false;
 	}
 
 	if (vq->vq.num_free < descs_used) {
@@ -230,13 +303,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 		return -ENOSPC;
 	}
 
-	/* We're about to use some buffers from the free list. */
-	vq->vq.num_free -= descs_used;
-
 	for (n = 0; n < out_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
+			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
+			if (vring_mapping_error(vq, addr))
+				goto unmap_release;
+
 			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
-			desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
+			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
 			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
 			prev = i;
 			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
@@ -244,8 +318,12 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	}
 	for (; n < (out_sgs + in_sgs); n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
+			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
+			if (vring_mapping_error(vq, addr))
+				goto unmap_release;
+
 			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
-			desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
+			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
 			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
 			prev = i;
 			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
@@ -254,14 +332,33 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	/* Last one doesn't continue. */
 	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
 
+	if (indirect) {
+		/* Now that the indirect table is filled in, map it. */
+		dma_addr_t addr = vring_map_single(
+			vq, desc, total_sg * sizeof(struct vring_desc),
+			DMA_TO_DEVICE);
+		if (vring_mapping_error(vq, addr))
+			goto unmap_release;
+
+		vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
+		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, addr);
+
+		vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc));
+	}
+
+	/* We're using some buffers from the free list. */
+	vq->vq.num_free -= descs_used;
+
 	/* Update free pointer */
 	if (indirect)
 		vq->free_head = virtio16_to_cpu(_vq->vdev, vq->vring.desc[head].next);
 	else
 		vq->free_head = i;
 
-	/* Set token. */
-	vq->data[head] = data;
+	/* Store token and indirect buffer state. */
+	vq->desc_state[head].data = data;
+	if (indirect)
+		vq->desc_state[head].indir_desc = desc;
 
 	/* Put entry in available array (but don't update avail->idx until they
 	 * do sync). */
@@ -284,6 +381,24 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 		virtqueue_kick(_vq);
 
 	return 0;
+
+unmap_release:
+	err_idx = i;
+	i = head;
+
+	for (n = 0; n < total_sg; n++) {
+		if (i == err_idx)
+			break;
+		vring_unmap_one(vq, &desc[i]);
+		i = vq->vring.desc[i].next;
+	}
+
+	vq->vq.num_free += total_sg;
+
+	if (indirect)
+		kfree(desc);
+
+	return -EIO;
 }
 
 /**
@@ -454,27 +569,43 @@ EXPORT_SYMBOL_GPL(virtqueue_kick);
 
 static void detach_buf(struct vring_virtqueue *vq, unsigned int head)
 {
-	unsigned int i;
+	unsigned int i, j;
+	u16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
 
 	/* Clear data ptr. */
-	vq->data[head] = NULL;
+	vq->desc_state[head].data = NULL;
 
-	/* Put back on free list: find end */
+	/* Put back on free list: unmap first-level descriptors and find end */
 	i = head;
 
-	/* Free the indirect table */
-	if (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT))
-		kfree(phys_to_virt(virtio64_to_cpu(vq->vq.vdev, vq->vring.desc[i].addr)));
-
-	while (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT)) {
+	while (vq->vring.desc[i].flags & nextflag) {
+		vring_unmap_one(vq, &vq->vring.desc[i]);
 		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
 		vq->vq.num_free++;
 	}
 
+	vring_unmap_one(vq, &vq->vring.desc[i]);
 	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
 	vq->free_head = head;
+
 	/* Plus final descriptor */
 	vq->vq.num_free++;
+
+	/* Free the indirect table, if any, now that it's unmapped. */
+	if (vq->desc_state[head].indir_desc) {
+		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
+		u32 len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
+
+		BUG_ON(!(vq->vring.desc[head].flags &
+			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
+		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
+
+		for (j = 0; j < len / sizeof(struct vring_desc); j++)
+			vring_unmap_one(vq, &indir_desc[j]);
+
+		kfree(vq->desc_state[head].indir_desc);
+		vq->desc_state[head].indir_desc = NULL;
+	}
 }
 
 static inline bool more_used(const struct vring_virtqueue *vq)
@@ -529,13 +660,13 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 		BAD_RING(vq, "id %u out of range\n", i);
 		return NULL;
 	}
-	if (unlikely(!vq->data[i])) {
+	if (unlikely(!vq->desc_state[i].data)) {
 		BAD_RING(vq, "id %u is not a head!\n", i);
 		return NULL;
 	}
 
 	/* detach_buf clears data, so grab it now. */
-	ret = vq->data[i];
+	ret = vq->desc_state[i].data;
 	detach_buf(vq, i);
 	vq->last_used_idx++;
 	/* If we expect an interrupt for the next entry, tell host
@@ -709,10 +840,10 @@ void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
 	START_USE(vq);
 
 	for (i = 0; i < vq->vring.num; i++) {
-		if (!vq->data[i])
+		if (!vq->desc_state[i].data)
 			continue;
 		/* detach_buf clears data, so grab it now. */
-		buf = vq->data[i];
+		buf = vq->desc_state[i].data;
 		detach_buf(vq, i);
 		vq->avail_idx_shadow--;
 		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
@@ -766,7 +897,8 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
 		return NULL;
 	}
 
-	vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL);
+	vq = kmalloc(sizeof(*vq) + num * sizeof(struct vring_desc_state),
+		     GFP_KERNEL);
 	if (!vq)
 		return NULL;
 
@@ -800,11 +932,9 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
 
 	/* Put everything in free lists. */
 	vq->free_head = 0;
-	for (i = 0; i < num-1; i++) {
+	for (i = 0; i < num-1; i++)
 		vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
-		vq->data[i] = NULL;
-	}
-	vq->data[i] = NULL;
+	memset(vq->desc_state, 0, num * sizeof(struct vring_desc_state));
 
 	return &vq->vq;
 }
diff --git a/tools/virtio/linux/dma-mapping.h b/tools/virtio/linux/dma-mapping.h
new file mode 100644
index 000000000000..4f93af89ae16
--- /dev/null
+++ b/tools/virtio/linux/dma-mapping.h
@@ -0,0 +1,17 @@
+#ifndef _LINUX_DMA_MAPPING_H
+#define _LINUX_DMA_MAPPING_H
+
+#ifdef CONFIG_HAS_DMA
+# error Virtio userspace code does not support CONFIG_HAS_DMA
+#endif
+
+#define PCI_DMA_BUS_IS_PHYS 1
+
+enum dma_data_direction {
+	DMA_BIDIRECTIONAL = 0,
+	DMA_TO_DEVICE = 1,
+	DMA_FROM_DEVICE = 2,
+	DMA_NONE = 3,
+};
+
+#endif
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread
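The mapping helpers added above follow the standard streaming DMA
pattern: map, check dma_mapping_error(), and later unmap with the same
length and direction.  A self-contained sketch of that pattern outside
virtio (the example_* names are hypothetical):

#include <linux/dma-mapping.h>

/* Map one page for device reads (driver fills it, device consumes it). */
static int example_map_for_device(struct device *dev, struct page *page,
				  unsigned long offset, size_t len,
				  dma_addr_t *dma)
{
	dma_addr_t addr = dma_map_page(dev, page, offset, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	*dma = addr;
	return 0;
}

static void example_unmap(struct device *dev, dma_addr_t addr, size_t len)
{
	dma_unmap_page(dev, addr, len, DMA_TO_DEVICE);
}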

* [PATCH v7 6/9] virtio: Add improved queue allocation API
  2016-02-03  5:46 ` Andy Lutomirski
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This leaves vring_new_virtqueue alone for compatibility, but it
adds two new, improved APIs:

vring_create_virtqueue: Creates a virtqueue backed by automatically
allocated coherent memory.  (Some day this could be extended to
support non-coherent memory, too, if there ends up being a platform
on which it's worthwhile.)

__vring_new_virtqueue: Creates a virtqueue with a manually-specified
layout.  This should allow mic_virtio to work much more cleanly.
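
For illustration, a transport that adopts the new allocator would look
roughly like this (a sketch only: my_notify, my_callback and
program_queue_addrs are made-up placeholders, not real kernel symbols;
see the virtio_mmio conversion later in this series for a real user):

	static struct virtqueue *my_setup_vq(struct virtio_device *vdev,
					     unsigned int index)
	{
		struct virtqueue *vq;

		/* The core allocates the ring, via the DMA API if enabled. */
		vq = vring_create_virtqueue(index, 128, SMP_CACHE_BYTES, vdev,
					    true /* weak_barriers */,
					    true /* may_reduce_num */,
					    my_notify, my_callback, "requests");
		if (!vq)
			return NULL;

		/* Tell the device where the ring ended up. */
		program_queue_addrs(virtqueue_get_desc_addr(vq),
				    virtqueue_get_avail_addr(vq),
				    virtqueue_get_used_addr(vq));
		return vq;
	}

Callers that need to lay the ring out themselves can instead vring_init()
a struct vring and pass it to __vring_new_virtqueue(); that ring memory
stays theirs to free after vring_del_virtqueue().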

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_ring.c | 209 ++++++++++++++++++++++++++++++++++++-------
 include/linux/virtio.h       |  23 ++++-
 include/linux/virtio_ring.h  |  35 ++++++++
 3 files changed, 235 insertions(+), 32 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 9abc008ff7ea..e46d08107a50 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -95,6 +95,11 @@ struct vring_virtqueue {
 	/* How to notify other side. FIXME: commonalize hcalls! */
 	bool (*notify)(struct virtqueue *vq);
 
+	/* DMA, allocation, and size information */
+	bool we_own_ring;
+	size_t queue_size_in_bytes;
+	dma_addr_t queue_dma_addr;
+
 #ifdef DEBUG
 	/* They're supposed to lock for us. */
 	unsigned int in_use;
@@ -878,36 +883,31 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
 }
 EXPORT_SYMBOL_GPL(vring_interrupt);
 
-struct virtqueue *vring_new_virtqueue(unsigned int index,
-				      unsigned int num,
-				      unsigned int vring_align,
-				      struct virtio_device *vdev,
-				      bool weak_barriers,
-				      void *pages,
-				      bool (*notify)(struct virtqueue *),
-				      void (*callback)(struct virtqueue *),
-				      const char *name)
+struct virtqueue *__vring_new_virtqueue(unsigned int index,
+					struct vring vring,
+					struct virtio_device *vdev,
+					bool weak_barriers,
+					bool (*notify)(struct virtqueue *),
+					void (*callback)(struct virtqueue *),
+					const char *name)
 {
-	struct vring_virtqueue *vq;
 	unsigned int i;
+	struct vring_virtqueue *vq;
 
-	/* We assume num is a power of 2. */
-	if (num & (num - 1)) {
-		dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num);
-		return NULL;
-	}
-
-	vq = kmalloc(sizeof(*vq) + num * sizeof(struct vring_desc_state),
+	vq = kmalloc(sizeof(*vq) + vring.num * sizeof(struct vring_desc_state),
 		     GFP_KERNEL);
 	if (!vq)
 		return NULL;
 
-	vring_init(&vq->vring, num, pages, vring_align);
+	vq->vring = vring;
 	vq->vq.callback = callback;
 	vq->vq.vdev = vdev;
 	vq->vq.name = name;
-	vq->vq.num_free = num;
+	vq->vq.num_free = vring.num;
 	vq->vq.index = index;
+	vq->we_own_ring = false;
+	vq->queue_dma_addr = 0;
+	vq->queue_size_in_bytes = 0;
 	vq->notify = notify;
 	vq->weak_barriers = weak_barriers;
 	vq->broken = false;
@@ -932,18 +932,145 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
 
 	/* Put everything in free lists. */
 	vq->free_head = 0;
-	for (i = 0; i < num-1; i++)
+	for (i = 0; i < vring.num-1; i++)
 		vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
-	memset(vq->desc_state, 0, num * sizeof(struct vring_desc_state));
+	memset(vq->desc_state, 0, vring.num * sizeof(struct vring_desc_state));
 
 	return &vq->vq;
 }
+EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
+
+static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
+			      dma_addr_t *dma_handle, gfp_t flag)
+{
+	if (vring_use_dma_api(vdev)) {
+		return dma_alloc_coherent(vdev->dev.parent, size,
+					  dma_handle, flag);
+	} else {
+		void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
+		if (queue) {
+			phys_addr_t phys_addr = virt_to_phys(queue);
+			*dma_handle = (dma_addr_t)phys_addr;
+
+			/*
+			 * Sanity check: make sure we didn't truncate
+			 * the address.  The only arches I can find that
+			 * have 64-bit phys_addr_t but 32-bit dma_addr_t
+			 * are certain non-highmem MIPS and x86
+			 * configurations, but these configurations
+			 * should never allocate physical pages above 32
+			 * bits, so this is fine.  Just in case, throw a
+			 * warning and abort if we end up with an
+			 * unrepresentable address.
+			 */
+			if (WARN_ON_ONCE(*dma_handle != phys_addr)) {
+				free_pages_exact(queue, PAGE_ALIGN(size));
+				return NULL;
+			}
+		}
+		return queue;
+	}
+}
+
+static void vring_free_queue(struct virtio_device *vdev, size_t size,
+			     void *queue, dma_addr_t dma_handle)
+{
+	if (vring_use_dma_api(vdev)) {
+		dma_free_coherent(vdev->dev.parent, size, queue, dma_handle);
+	} else {
+		free_pages_exact(queue, PAGE_ALIGN(size));
+	}
+}
+
+struct virtqueue *vring_create_virtqueue(
+	unsigned int index,
+	unsigned int num,
+	unsigned int vring_align,
+	struct virtio_device *vdev,
+	bool weak_barriers,
+	bool may_reduce_num,
+	bool (*notify)(struct virtqueue *),
+	void (*callback)(struct virtqueue *),
+	const char *name)
+{
+	struct virtqueue *vq;
+	void *queue;
+	dma_addr_t dma_addr;
+	size_t queue_size_in_bytes;
+	struct vring vring;
+
+	/* We assume num is a power of 2. */
+	if (num & (num - 1)) {
+		dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num);
+		return NULL;
+	}
+
+	/* TODO: allocate each queue chunk individually */
+	for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) {
+		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
+					  &dma_addr,
+					  GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
+		if (queue)
+			break;
+	}
+
+	if (!num)
+		return NULL;
+
+	if (!queue) {
+		/* Try to get a single page. You are my only hope! */
+		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
+					  &dma_addr, GFP_KERNEL|__GFP_ZERO);
+	}
+	if (!queue)
+		return NULL;
+
+	queue_size_in_bytes = vring_size(num, vring_align);
+	vring_init(&vring, num, queue, vring_align);
+
+	vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers,
+				   notify, callback, name);
+	if (!vq) {
+		vring_free_queue(vdev, queue_size_in_bytes, queue,
+				 dma_addr);
+		return NULL;
+	}
+
+	to_vvq(vq)->queue_dma_addr = dma_addr;
+	to_vvq(vq)->queue_size_in_bytes = queue_size_in_bytes;
+	to_vvq(vq)->we_own_ring = true;
+
+	return vq;
+}
+EXPORT_SYMBOL_GPL(vring_create_virtqueue);
+
+struct virtqueue *vring_new_virtqueue(unsigned int index,
+				      unsigned int num,
+				      unsigned int vring_align,
+				      struct virtio_device *vdev,
+				      bool weak_barriers,
+				      void *pages,
+				      bool (*notify)(struct virtqueue *vq),
+				      void (*callback)(struct virtqueue *vq),
+				      const char *name)
+{
+	struct vring vring;
+	vring_init(&vring, num, pages, vring_align);
+	return __vring_new_virtqueue(index, vring, vdev, weak_barriers,
+				     notify, callback, name);
+}
 EXPORT_SYMBOL_GPL(vring_new_virtqueue);
 
-void vring_del_virtqueue(struct virtqueue *vq)
+void vring_del_virtqueue(struct virtqueue *_vq)
 {
-	list_del(&vq->list);
-	kfree(to_vvq(vq));
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (vq->we_own_ring) {
+		vring_free_queue(vq->vq.vdev, vq->queue_size_in_bytes,
+				 vq->vring.desc, vq->queue_dma_addr);
+	}
+	list_del(&_vq->list);
+	kfree(vq);
 }
 EXPORT_SYMBOL_GPL(vring_del_virtqueue);
 
@@ -1007,20 +1134,42 @@ void virtio_break_device(struct virtio_device *dev)
 }
 EXPORT_SYMBOL_GPL(virtio_break_device);
 
-void *virtqueue_get_avail(struct virtqueue *_vq)
+dma_addr_t virtqueue_get_desc_addr(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	return vq->vring.avail;
+	BUG_ON(!vq->we_own_ring);
+
+	return vq->queue_dma_addr;
 }
-EXPORT_SYMBOL_GPL(virtqueue_get_avail);
+EXPORT_SYMBOL_GPL(virtqueue_get_desc_addr);
 
-void *virtqueue_get_used(struct virtqueue *_vq)
+dma_addr_t virtqueue_get_avail_addr(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	return vq->vring.used;
+	BUG_ON(!vq->we_own_ring);
+
+	return vq->queue_dma_addr +
+		((char *)vq->vring.avail - (char *)vq->vring.desc);
+}
+EXPORT_SYMBOL_GPL(virtqueue_get_avail_addr);
+
+dma_addr_t virtqueue_get_used_addr(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	BUG_ON(!vq->we_own_ring);
+
+	return vq->queue_dma_addr +
+		((char *)vq->vring.used - (char *)vq->vring.desc);
+}
+EXPORT_SYMBOL_GPL(virtqueue_get_used_addr);
+
+const struct vring *virtqueue_get_vring(struct virtqueue *vq)
+{
+	return &to_vvq(vq)->vring;
 }
-EXPORT_SYMBOL_GPL(virtqueue_get_used);
+EXPORT_SYMBOL_GPL(virtqueue_get_vring);
 
 MODULE_LICENSE("GPL");
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 8f4d4bfa6d46..d5eb5479a425 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -75,8 +75,27 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *vq);
 
 bool virtqueue_is_broken(struct virtqueue *vq);
 
-void *virtqueue_get_avail(struct virtqueue *vq);
-void *virtqueue_get_used(struct virtqueue *vq);
+const struct vring *virtqueue_get_vring(struct virtqueue *vq);
+dma_addr_t virtqueue_get_desc_addr(struct virtqueue *vq);
+dma_addr_t virtqueue_get_avail_addr(struct virtqueue *vq);
+dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq);
+
+/*
+ * Legacy accessors -- in almost all cases, these are the wrong functions
+ * to use.
+ */
+static inline void *virtqueue_get_desc(struct virtqueue *vq)
+{
+	return virtqueue_get_vring(vq)->desc;
+}
+static inline void *virtqueue_get_avail(struct virtqueue *vq)
+{
+	return virtqueue_get_vring(vq)->avail;
+}
+static inline void *virtqueue_get_used(struct virtqueue *vq)
+{
+	return virtqueue_get_vring(vq)->used;
+}
 
 /**
  * virtio_device - representation of a device using virtio
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index a156e2b6ccfe..e8d36938f09a 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -59,6 +59,35 @@ static inline void virtio_store_mb(bool weak_barriers,
 struct virtio_device;
 struct virtqueue;
 
+/*
+ * Creates a virtqueue and allocates the descriptor ring.  If
+ * may_reduce_num is set, then this may allocate a smaller ring than
+ * expected.  The caller should query virtqueue_get_vring_size to learn
+ * the actual size of the ring.
+ */
+struct virtqueue *vring_create_virtqueue(unsigned int index,
+					 unsigned int num,
+					 unsigned int vring_align,
+					 struct virtio_device *vdev,
+					 bool weak_barriers,
+					 bool may_reduce_num,
+					 bool (*notify)(struct virtqueue *vq),
+					 void (*callback)(struct virtqueue *vq),
+					 const char *name);
+
+/* Creates a virtqueue with a custom layout. */
+struct virtqueue *__vring_new_virtqueue(unsigned int index,
+					struct vring vring,
+					struct virtio_device *vdev,
+					bool weak_barriers,
+					bool (*notify)(struct virtqueue *),
+					void (*callback)(struct virtqueue *),
+					const char *name);
+
+/*
+ * Creates a virtqueue with a standard layout but a caller-allocated
+ * ring.
+ */
 struct virtqueue *vring_new_virtqueue(unsigned int index,
 				      unsigned int num,
 				      unsigned int vring_align,
@@ -68,7 +97,13 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
 				      bool (*notify)(struct virtqueue *vq),
 				      void (*callback)(struct virtqueue *vq),
 				      const char *name);
+
+/*
+ * Destroys a virtqueue.  If created with vring_create_virtqueue, this
+ * also frees the ring.
+ */
 void vring_del_virtqueue(struct virtqueue *vq);
+
 /* Filter out transport-specific feature bits. */
 void vring_transport_features(struct virtio_device *vdev);
 
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 7/9] virtio_mmio: Use the DMA API if enabled
  2016-02-03  5:46 ` Andy Lutomirski
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.
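
In short (a simplified before/after sketch; error handling and the
version-2 register programming are omitted):

	/* Before: the driver allocated and owned the ring pages. */
	info->queue = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
	vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN,
				 vdev, true, info->queue, vm_notify,
				 callback, name);

	/* After: the core allocates the ring (via the DMA API when
	 * enabled) and the driver programs the addresses it reports. */
	vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN,
				    vdev, true, true, vm_notify,
				    callback, name);
	writel(virtqueue_get_vring_size(vq),
	       vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
	writel(virtqueue_get_desc_addr(vq) >> PAGE_SHIFT,
	       vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);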

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_mmio.c | 67 ++++++++++----------------------------------
 1 file changed, 15 insertions(+), 52 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 745c6ee1bb3e..48bfea91dbca 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -99,12 +99,6 @@ struct virtio_mmio_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	unsigned int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 };
@@ -322,15 +316,13 @@ static void vm_del_vq(struct virtqueue *vq)
 {
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vq->vdev);
 	struct virtio_mmio_vq_info *info = vq->priv;
-	unsigned long flags, size;
+	unsigned long flags;
 	unsigned int index = vq->index;
 
 	spin_lock_irqsave(&vm_dev->lock, flags);
 	list_del(&info->node);
 	spin_unlock_irqrestore(&vm_dev->lock, flags);
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
 	if (vm_dev->version == 1) {
@@ -340,8 +332,8 @@ static void vm_del_vq(struct virtqueue *vq)
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_MMIO_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
+
 	kfree(info);
 }
 
@@ -356,8 +348,6 @@ static void vm_del_vqs(struct virtio_device *vdev)
 	free_irq(platform_get_irq(vm_dev->pdev, 0), vm_dev);
 }
 
-
-
 static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 				  void (*callback)(struct virtqueue *vq),
 				  const char *name)
@@ -365,7 +355,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 	struct virtio_mmio_vq_info *info;
 	struct virtqueue *vq;
-	unsigned long flags, size;
+	unsigned long flags;
+	unsigned int num;
 	int err;
 
 	if (!name)
@@ -388,66 +379,40 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		goto error_kmalloc;
 	}
 
-	/* Allocate pages for the queue - start with a queue as big as
-	 * possible (limited by maximum size allowed by device), drop down
-	 * to a minimal size, just big enough to fit descriptor table
-	 * and two rings (which makes it "alignment_size * 2")
-	 */
-	info->num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
-
-	/* If the device reports a 0 entry queue, we won't be able to
-	 * use it to perform I/O, and vring_new_virtqueue() can't create
-	 * empty queues anyway, so don't bother to set up the device.
-	 */
-	if (info->num == 0) {
+	num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
+	if (num == 0) {
 		err = -ENOENT;
-		goto error_alloc_pages;
-	}
-
-	while (1) {
-		size = PAGE_ALIGN(vring_size(info->num,
-				VIRTIO_MMIO_VRING_ALIGN));
-		/* Did the last iter shrink the queue below minimum size? */
-		if (size < VIRTIO_MMIO_VRING_ALIGN * 2) {
-			err = -ENOMEM;
-			goto error_alloc_pages;
-		}
-
-		info->queue = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-		if (info->queue)
-			break;
-
-		info->num /= 2;
+		goto error_new_virtqueue;
 	}
 
 	/* Create the vring */
-	vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN, vdev,
-				 true, info->queue, vm_notify, callback, name);
+	vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN, vdev,
+				 true, true, vm_notify, callback, name);
 	if (!vq) {
 		err = -ENOMEM;
 		goto error_new_virtqueue;
 	}
 
 	/* Activate the queue */
-	writel(info->num, vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
+	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
 	if (vm_dev->version == 1) {
 		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
-		writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
+		writel(virtqueue_get_desc_addr(vq) >> PAGE_SHIFT,
 				vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
 		u64 addr;
 
-		addr = virt_to_phys(info->queue);
+		addr = virtqueue_get_desc_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_avail(vq));
+		addr = virtqueue_get_avail_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_used(vq));
+		addr = virtqueue_get_used_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_USED_HIGH);
@@ -471,8 +436,6 @@ error_new_virtqueue:
 		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
-	free_pages_exact(info->queue, size);
-error_alloc_pages:
 	kfree(info);
 error_kmalloc:
 error_available:
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 7/9] virtio_mmio: Use the DMA API if enabled
@ 2016-02-03  5:46   ` Andy Lutomirski
  0 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.
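
For reference, the shape of the new setup path, condensed from the hunks
below (the vm_dev->lock bookkeeping and the error-label unwinding are left
out, so treat this as a sketch rather than the exact driver code):

	unsigned int num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
	struct virtqueue *vq;

	if (num == 0)
		return ERR_PTR(-ENOENT);

	/* The core allocates (and, if enabled, DMA-maps) the ring itself now. */
	vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN, vdev,
				    true, true, vm_notify, callback, name);
	if (!vq)
		return ERR_PTR(-ENOMEM);

	/* Tell the device whatever size the core actually managed to allocate. */
	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);

	/* Ring addresses come from the virtqueue, not from virt_to_phys(). */
	writel(virtqueue_get_desc_addr(vq) >> PAGE_SHIFT,
	       vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);

The version 2 (non-legacy) branch does the same thing but programs the
descriptor, avail and used addresses separately via virtqueue_get_desc_addr(),
virtqueue_get_avail_addr() and virtqueue_get_used_addr().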

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_mmio.c | 67 ++++++++++----------------------------------
 1 file changed, 15 insertions(+), 52 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 745c6ee1bb3e..48bfea91dbca 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -99,12 +99,6 @@ struct virtio_mmio_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	unsigned int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 };
@@ -322,15 +316,13 @@ static void vm_del_vq(struct virtqueue *vq)
 {
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vq->vdev);
 	struct virtio_mmio_vq_info *info = vq->priv;
-	unsigned long flags, size;
+	unsigned long flags;
 	unsigned int index = vq->index;
 
 	spin_lock_irqsave(&vm_dev->lock, flags);
 	list_del(&info->node);
 	spin_unlock_irqrestore(&vm_dev->lock, flags);
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
 	if (vm_dev->version == 1) {
@@ -340,8 +332,8 @@ static void vm_del_vq(struct virtqueue *vq)
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_MMIO_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
+
 	kfree(info);
 }
 
@@ -356,8 +348,6 @@ static void vm_del_vqs(struct virtio_device *vdev)
 	free_irq(platform_get_irq(vm_dev->pdev, 0), vm_dev);
 }
 
-
-
 static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 				  void (*callback)(struct virtqueue *vq),
 				  const char *name)
@@ -365,7 +355,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 	struct virtio_mmio_vq_info *info;
 	struct virtqueue *vq;
-	unsigned long flags, size;
+	unsigned long flags;
+	unsigned int num;
 	int err;
 
 	if (!name)
@@ -388,66 +379,40 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		goto error_kmalloc;
 	}
 
-	/* Allocate pages for the queue - start with a queue as big as
-	 * possible (limited by maximum size allowed by device), drop down
-	 * to a minimal size, just big enough to fit descriptor table
-	 * and two rings (which makes it "alignment_size * 2")
-	 */
-	info->num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
-
-	/* If the device reports a 0 entry queue, we won't be able to
-	 * use it to perform I/O, and vring_new_virtqueue() can't create
-	 * empty queues anyway, so don't bother to set up the device.
-	 */
-	if (info->num == 0) {
+	num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
+	if (num == 0) {
 		err = -ENOENT;
-		goto error_alloc_pages;
-	}
-
-	while (1) {
-		size = PAGE_ALIGN(vring_size(info->num,
-				VIRTIO_MMIO_VRING_ALIGN));
-		/* Did the last iter shrink the queue below minimum size? */
-		if (size < VIRTIO_MMIO_VRING_ALIGN * 2) {
-			err = -ENOMEM;
-			goto error_alloc_pages;
-		}
-
-		info->queue = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-		if (info->queue)
-			break;
-
-		info->num /= 2;
+		goto error_new_virtqueue;
 	}
 
 	/* Create the vring */
-	vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN, vdev,
-				 true, info->queue, vm_notify, callback, name);
+	vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN, vdev,
+				 true, true, vm_notify, callback, name);
 	if (!vq) {
 		err = -ENOMEM;
 		goto error_new_virtqueue;
 	}
 
 	/* Activate the queue */
-	writel(info->num, vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
+	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
 	if (vm_dev->version == 1) {
 		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
-		writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
+		writel(virtqueue_get_desc_addr(vq) >> PAGE_SHIFT,
 				vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
 		u64 addr;
 
-		addr = virt_to_phys(info->queue);
+		addr = virtqueue_get_desc_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_avail(vq));
+		addr = virtqueue_get_avail_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_used(vq));
+		addr = virtqueue_get_used_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_USED_HIGH);
@@ -471,8 +436,6 @@ error_new_virtqueue:
 		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
-	free_pages_exact(info->queue, size);
-error_alloc_pages:
 	kfree(info);
 error_kmalloc:
 error_available:
-- 
2.5.0


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 7/9] virtio_mmio: Use the DMA API if enabled
  2016-02-03  5:46 ` Andy Lutomirski
                   ` (15 preceding siblings ...)
  (?)
@ 2016-02-03  5:46 ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, KVM, linux-s390, Benjamin Herrenschmidt,
	Stefano Stabellini, Sebastian Ott, linux-kernel,
	Christoph Hellwig, Christian Borntraeger, David Vrabel,
	Andy Lutomirski, xen-devel, sparclinux, Paolo Bonzini,
	Linux Virtualization, David Woodhouse, David S. Miller,
	Martin Schwidefsky

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_mmio.c | 67 ++++++++++----------------------------------
 1 file changed, 15 insertions(+), 52 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 745c6ee1bb3e..48bfea91dbca 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -99,12 +99,6 @@ struct virtio_mmio_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	unsigned int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 };
@@ -322,15 +316,13 @@ static void vm_del_vq(struct virtqueue *vq)
 {
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vq->vdev);
 	struct virtio_mmio_vq_info *info = vq->priv;
-	unsigned long flags, size;
+	unsigned long flags;
 	unsigned int index = vq->index;
 
 	spin_lock_irqsave(&vm_dev->lock, flags);
 	list_del(&info->node);
 	spin_unlock_irqrestore(&vm_dev->lock, flags);
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
 	if (vm_dev->version == 1) {
@@ -340,8 +332,8 @@ static void vm_del_vq(struct virtqueue *vq)
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_MMIO_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
+
 	kfree(info);
 }
 
@@ -356,8 +348,6 @@ static void vm_del_vqs(struct virtio_device *vdev)
 	free_irq(platform_get_irq(vm_dev->pdev, 0), vm_dev);
 }
 
-
-
 static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 				  void (*callback)(struct virtqueue *vq),
 				  const char *name)
@@ -365,7 +355,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 	struct virtio_mmio_vq_info *info;
 	struct virtqueue *vq;
-	unsigned long flags, size;
+	unsigned long flags;
+	unsigned int num;
 	int err;
 
 	if (!name)
@@ -388,66 +379,40 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		goto error_kmalloc;
 	}
 
-	/* Allocate pages for the queue - start with a queue as big as
-	 * possible (limited by maximum size allowed by device), drop down
-	 * to a minimal size, just big enough to fit descriptor table
-	 * and two rings (which makes it "alignment_size * 2")
-	 */
-	info->num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
-
-	/* If the device reports a 0 entry queue, we won't be able to
-	 * use it to perform I/O, and vring_new_virtqueue() can't create
-	 * empty queues anyway, so don't bother to set up the device.
-	 */
-	if (info->num == 0) {
+	num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
+	if (num == 0) {
 		err = -ENOENT;
-		goto error_alloc_pages;
-	}
-
-	while (1) {
-		size = PAGE_ALIGN(vring_size(info->num,
-				VIRTIO_MMIO_VRING_ALIGN));
-		/* Did the last iter shrink the queue below minimum size? */
-		if (size < VIRTIO_MMIO_VRING_ALIGN * 2) {
-			err = -ENOMEM;
-			goto error_alloc_pages;
-		}
-
-		info->queue = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-		if (info->queue)
-			break;
-
-		info->num /= 2;
+		goto error_new_virtqueue;
 	}
 
 	/* Create the vring */
-	vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN, vdev,
-				 true, info->queue, vm_notify, callback, name);
+	vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN, vdev,
+				 true, true, vm_notify, callback, name);
 	if (!vq) {
 		err = -ENOMEM;
 		goto error_new_virtqueue;
 	}
 
 	/* Activate the queue */
-	writel(info->num, vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
+	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
 	if (vm_dev->version == 1) {
 		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
-		writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
+		writel(virtqueue_get_desc_addr(vq) >> PAGE_SHIFT,
 				vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
 		u64 addr;
 
-		addr = virt_to_phys(info->queue);
+		addr = virtqueue_get_desc_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_avail(vq));
+		addr = virtqueue_get_avail_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_used(vq));
+		addr = virtqueue_get_used_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_USED_HIGH);
@@ -471,8 +436,6 @@ error_new_virtqueue:
 		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
-	free_pages_exact(info->queue, size);
-error_alloc_pages:
 	kfree(info);
 error_kmalloc:
 error_available:
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 7/9] virtio_mmio: Use the DMA API if enabled
  2016-02-03  5:46 ` Andy Lutomirski
                   ` (14 preceding siblings ...)
  (?)
@ 2016-02-03  5:46 ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Cornelia Huck, Joerg Roedel, KVM, linux-s390,
	Benjamin Herrenschmidt, Stefano Stabellini, Sebastian Ott,
	linux-kernel, Christoph Hellwig, Christian Borntraeger,
	David Vrabel, Andy Lutomirski, xen-devel, sparclinux,
	Paolo Bonzini, Linux Virtualization, David Woodhouse,
	David S. Miller, Martin Schwidefsky

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_mmio.c | 67 ++++++++++----------------------------------
 1 file changed, 15 insertions(+), 52 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 745c6ee1bb3e..48bfea91dbca 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -99,12 +99,6 @@ struct virtio_mmio_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	unsigned int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 };
@@ -322,15 +316,13 @@ static void vm_del_vq(struct virtqueue *vq)
 {
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vq->vdev);
 	struct virtio_mmio_vq_info *info = vq->priv;
-	unsigned long flags, size;
+	unsigned long flags;
 	unsigned int index = vq->index;
 
 	spin_lock_irqsave(&vm_dev->lock, flags);
 	list_del(&info->node);
 	spin_unlock_irqrestore(&vm_dev->lock, flags);
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	writel(index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
 	if (vm_dev->version == 1) {
@@ -340,8 +332,8 @@ static void vm_del_vq(struct virtqueue *vq)
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_MMIO_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
+
 	kfree(info);
 }
 
@@ -356,8 +348,6 @@ static void vm_del_vqs(struct virtio_device *vdev)
 	free_irq(platform_get_irq(vm_dev->pdev, 0), vm_dev);
 }
 
-
-
 static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 				  void (*callback)(struct virtqueue *vq),
 				  const char *name)
@@ -365,7 +355,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
 	struct virtio_mmio_vq_info *info;
 	struct virtqueue *vq;
-	unsigned long flags, size;
+	unsigned long flags;
+	unsigned int num;
 	int err;
 
 	if (!name)
@@ -388,66 +379,40 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		goto error_kmalloc;
 	}
 
-	/* Allocate pages for the queue - start with a queue as big as
-	 * possible (limited by maximum size allowed by device), drop down
-	 * to a minimal size, just big enough to fit descriptor table
-	 * and two rings (which makes it "alignment_size * 2")
-	 */
-	info->num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
-
-	/* If the device reports a 0 entry queue, we won't be able to
-	 * use it to perform I/O, and vring_new_virtqueue() can't create
-	 * empty queues anyway, so don't bother to set up the device.
-	 */
-	if (info->num == 0) {
+	num = readl(vm_dev->base + VIRTIO_MMIO_QUEUE_NUM_MAX);
+	if (num == 0) {
 		err = -ENOENT;
-		goto error_alloc_pages;
-	}
-
-	while (1) {
-		size = PAGE_ALIGN(vring_size(info->num,
-				VIRTIO_MMIO_VRING_ALIGN));
-		/* Did the last iter shrink the queue below minimum size? */
-		if (size < VIRTIO_MMIO_VRING_ALIGN * 2) {
-			err = -ENOMEM;
-			goto error_alloc_pages;
-		}
-
-		info->queue = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-		if (info->queue)
-			break;
-
-		info->num /= 2;
+		goto error_new_virtqueue;
 	}
 
 	/* Create the vring */
-	vq = vring_new_virtqueue(index, info->num, VIRTIO_MMIO_VRING_ALIGN, vdev,
-				 true, info->queue, vm_notify, callback, name);
+	vq = vring_create_virtqueue(index, num, VIRTIO_MMIO_VRING_ALIGN, vdev,
+				 true, true, vm_notify, callback, name);
 	if (!vq) {
 		err = -ENOMEM;
 		goto error_new_virtqueue;
 	}
 
 	/* Activate the queue */
-	writel(info->num, vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
+	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
 	if (vm_dev->version == 1) {
 		writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_QUEUE_ALIGN);
-		writel(virt_to_phys(info->queue) >> PAGE_SHIFT,
+		writel(virtqueue_get_desc_addr(vq) >> PAGE_SHIFT,
 				vm_dev->base + VIRTIO_MMIO_QUEUE_PFN);
 	} else {
 		u64 addr;
 
-		addr = virt_to_phys(info->queue);
+		addr = virtqueue_get_desc_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_DESC_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_avail(vq));
+		addr = virtqueue_get_avail_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_AVAIL_HIGH);
 
-		addr = virt_to_phys(virtqueue_get_used(vq));
+		addr = virtqueue_get_used_addr(vq);
 		writel((u32)addr, vm_dev->base + VIRTIO_MMIO_QUEUE_USED_LOW);
 		writel((u32)(addr >> 32),
 				vm_dev->base + VIRTIO_MMIO_QUEUE_USED_HIGH);
@@ -471,8 +436,6 @@ error_new_virtqueue:
 		writel(0, vm_dev->base + VIRTIO_MMIO_QUEUE_READY);
 		WARN_ON(readl(vm_dev->base + VIRTIO_MMIO_QUEUE_READY));
 	}
-	free_pages_exact(info->queue, size);
-error_alloc_pages:
 	kfree(info);
 error_kmalloc:
 error_available:
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 8/9] virtio_pci: Use the DMA API if enabled
  2016-02-03  5:46 ` Andy Lutomirski
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.

This fixes virtio-pci on platforms and busses that have IOMMUs.  This
will break the experimental QEMU Q35 IOMMU support until QEMU is
fixed.  In exchange, it fixes physical virtio hardware as well as
virtio-pci running under Xen.
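
Two details in the conversion are worth spelling out.  Both probe paths now
configure the DMA mask up front, using exactly the pattern in the hunks
below (try 64-bit, fall back to 32-bit, warn and keep going if neither
sticks):

	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
	if (rc)
		rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
	if (rc)
		dev_warn(&pci_dev->dev,
			 "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");

Also, the legacy path passes false for vring_create_virtqueue()'s
may_reduce_num argument while the modern path passes true, presumably because
the legacy interface has no way to tell the device about a ring smaller than
the advertised maximum, whereas the modern interface writes the actual size
into queue_size.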

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_pci_common.h |  6 ----
 drivers/virtio/virtio_pci_legacy.c | 42 +++++++++++---------------
 drivers/virtio/virtio_pci_modern.c | 61 ++++++++++----------------------------
 3 files changed, 33 insertions(+), 76 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 2cc252270b2d..28263200ed42 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -35,12 +35,6 @@ struct virtio_pci_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 
diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
index 48bc9797e530..8c4e61783441 100644
--- a/drivers/virtio/virtio_pci_legacy.c
+++ b/drivers/virtio/virtio_pci_legacy.c
@@ -119,7 +119,6 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  u16 msix_vec)
 {
 	struct virtqueue *vq;
-	unsigned long size;
 	u16 num;
 	int err;
 
@@ -131,27 +130,19 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	if (!num || ioread32(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN))
 		return ERR_PTR(-ENOENT);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	size = PAGE_ALIGN(vring_size(num, VIRTIO_PCI_VRING_ALIGN));
-	info->queue = alloc_pages_exact(size, GFP_KERNEL|__GFP_ZERO);
-	if (info->queue == NULL)
+	/* create the vring */
+	vq = vring_create_virtqueue(index, num,
+				    VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
+				    true, false, vp_notify, callback, name);
+	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	iowrite32(virt_to_phys(info->queue) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
 		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto out_activate_queue;
-	}
-
 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
 
 	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
@@ -159,17 +150,15 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 		msix_vec = ioread16(vp_dev->ioaddr + VIRTIO_MSI_QUEUE_VECTOR);
 		if (msix_vec == VIRTIO_MSI_NO_VECTOR) {
 			err = -EBUSY;
-			goto out_assign;
+			goto out_deactivate;
 		}
 	}
 
 	return vq;
 
-out_assign:
-	vring_del_virtqueue(vq);
-out_activate_queue:
+out_deactivate:
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 	return ERR_PTR(err);
 }
 
@@ -177,7 +166,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 {
 	struct virtqueue *vq = info->vq;
 	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
-	unsigned long size;
 
 	iowrite16(vq->index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
 
@@ -188,13 +176,10 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
 	}
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_PCI_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 }
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -227,6 +212,13 @@ int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
 		return -ENODEV;
 	}
 
+	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (rc)
+		rc = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (rc)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	rc = pci_request_region(pci_dev, 0, "virtio-pci-legacy");
 	if (rc)
 		return rc;
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index c0c11fad4611..0b4a4f440b85 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -287,31 +287,6 @@ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 	return vp_ioread16(&vp_dev->common->msix_config);
 }
 
-static size_t vring_pci_size(u16 num)
-{
-	/* We only need a cacheline separation. */
-	return PAGE_ALIGN(vring_size(num, SMP_CACHE_BYTES));
-}
-
-static void *alloc_virtqueue_pages(int *num)
-{
-	void *pages;
-
-	/* TODO: allocate each queue chunk individually */
-	for (; *num && vring_pci_size(*num) > PAGE_SIZE; *num /= 2) {
-		pages = alloc_pages_exact(vring_pci_size(*num),
-					  GFP_KERNEL|__GFP_ZERO|__GFP_NOWARN);
-		if (pages)
-			return pages;
-	}
-
-	if (!*num)
-		return NULL;
-
-	/* Try to get a single page. You are my only hope! */
-	return alloc_pages_exact(vring_pci_size(*num), GFP_KERNEL|__GFP_ZERO);
-}
-
 static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  struct virtio_pci_vq_info *info,
 				  unsigned index,
@@ -343,29 +318,22 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	/* get offset of notification word for this vq */
 	off = vp_ioread16(&cfg->queue_notify_off);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	info->queue = alloc_virtqueue_pages(&info->num);
-	if (info->queue == NULL)
-		return ERR_PTR(-ENOMEM);
-
 	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 SMP_CACHE_BYTES, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto err_new_queue;
-	}
+	vq = vring_create_virtqueue(index, num,
+				    SMP_CACHE_BYTES, &vp_dev->vdev,
+				    true, true, vp_notify, callback, name);
+	if (!vq)
+		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	vp_iowrite16(num, &cfg->queue_size);
-	vp_iowrite64_twopart(virt_to_phys(info->queue),
+	vp_iowrite16(virtqueue_get_vring_size(vq), &cfg->queue_size);
+	vp_iowrite64_twopart(virtqueue_get_desc_addr(vq),
 			     &cfg->queue_desc_lo, &cfg->queue_desc_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_avail(vq)),
+	vp_iowrite64_twopart(virtqueue_get_avail_addr(vq),
 			     &cfg->queue_avail_lo, &cfg->queue_avail_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_used(vq)),
+	vp_iowrite64_twopart(virtqueue_get_used_addr(vq),
 			     &cfg->queue_used_lo, &cfg->queue_used_hi);
 
 	if (vp_dev->notify_base) {
@@ -410,8 +378,6 @@ err_assign_vector:
 		pci_iounmap(vp_dev->pci_dev, (void __iomem __force *)vq->priv);
 err_map_notify:
 	vring_del_virtqueue(vq);
-err_new_queue:
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 	return ERR_PTR(err);
 }
 
@@ -456,8 +422,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		pci_iounmap(vp_dev->pci_dev, (void __force __iomem *)vq->priv);
 
 	vring_del_virtqueue(vq);
-
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 }
 
 static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
@@ -641,6 +605,13 @@ int virtio_pci_modern_probe(struct virtio_pci_device *vp_dev)
 		return -EINVAL;
 	}
 
+	err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (err)
+		err = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (err)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	/* Device capability is only mandatory for devices that have
 	 * device-specific configuration.
 	 */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 8/9] virtio_pci: Use the DMA API if enabled
@ 2016-02-03  5:46   ` Andy Lutomirski
  0 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.

This fixes virtio-pci on platforms and busses that have IOMMUs.  This
will break the experimental QEMU Q35 IOMMU support until QEMU is
fixed.  In exchange, it fixes physical virtio hardware as well as
virtio-pci running under Xen.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_pci_common.h |  6 ----
 drivers/virtio/virtio_pci_legacy.c | 42 +++++++++++---------------
 drivers/virtio/virtio_pci_modern.c | 61 ++++++++++----------------------------
 3 files changed, 33 insertions(+), 76 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 2cc252270b2d..28263200ed42 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -35,12 +35,6 @@ struct virtio_pci_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 
diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
index 48bc9797e530..8c4e61783441 100644
--- a/drivers/virtio/virtio_pci_legacy.c
+++ b/drivers/virtio/virtio_pci_legacy.c
@@ -119,7 +119,6 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  u16 msix_vec)
 {
 	struct virtqueue *vq;
-	unsigned long size;
 	u16 num;
 	int err;
 
@@ -131,27 +130,19 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	if (!num || ioread32(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN))
 		return ERR_PTR(-ENOENT);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	size = PAGE_ALIGN(vring_size(num, VIRTIO_PCI_VRING_ALIGN));
-	info->queue = alloc_pages_exact(size, GFP_KERNEL|__GFP_ZERO);
-	if (info->queue == NULL)
+	/* create the vring */
+	vq = vring_create_virtqueue(index, num,
+				    VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
+				    true, false, vp_notify, callback, name);
+	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	iowrite32(virt_to_phys(info->queue) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
 		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto out_activate_queue;
-	}
-
 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
 
 	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
@@ -159,17 +150,15 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 		msix_vec = ioread16(vp_dev->ioaddr + VIRTIO_MSI_QUEUE_VECTOR);
 		if (msix_vec == VIRTIO_MSI_NO_VECTOR) {
 			err = -EBUSY;
-			goto out_assign;
+			goto out_deactivate;
 		}
 	}
 
 	return vq;
 
-out_assign:
-	vring_del_virtqueue(vq);
-out_activate_queue:
+out_deactivate:
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 	return ERR_PTR(err);
 }
 
@@ -177,7 +166,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 {
 	struct virtqueue *vq = info->vq;
 	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
-	unsigned long size;
 
 	iowrite16(vq->index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
 
@@ -188,13 +176,10 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
 	}
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_PCI_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 }
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -227,6 +212,13 @@ int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
 		return -ENODEV;
 	}
 
+	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (rc)
+		rc = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (rc)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	rc = pci_request_region(pci_dev, 0, "virtio-pci-legacy");
 	if (rc)
 		return rc;
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index c0c11fad4611..0b4a4f440b85 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -287,31 +287,6 @@ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 	return vp_ioread16(&vp_dev->common->msix_config);
 }
 
-static size_t vring_pci_size(u16 num)
-{
-	/* We only need a cacheline separation. */
-	return PAGE_ALIGN(vring_size(num, SMP_CACHE_BYTES));
-}
-
-static void *alloc_virtqueue_pages(int *num)
-{
-	void *pages;
-
-	/* TODO: allocate each queue chunk individually */
-	for (; *num && vring_pci_size(*num) > PAGE_SIZE; *num /= 2) {
-		pages = alloc_pages_exact(vring_pci_size(*num),
-					  GFP_KERNEL|__GFP_ZERO|__GFP_NOWARN);
-		if (pages)
-			return pages;
-	}
-
-	if (!*num)
-		return NULL;
-
-	/* Try to get a single page. You are my only hope! */
-	return alloc_pages_exact(vring_pci_size(*num), GFP_KERNEL|__GFP_ZERO);
-}
-
 static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  struct virtio_pci_vq_info *info,
 				  unsigned index,
@@ -343,29 +318,22 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	/* get offset of notification word for this vq */
 	off = vp_ioread16(&cfg->queue_notify_off);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	info->queue = alloc_virtqueue_pages(&info->num);
-	if (info->queue == NULL)
-		return ERR_PTR(-ENOMEM);
-
 	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 SMP_CACHE_BYTES, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto err_new_queue;
-	}
+	vq = vring_create_virtqueue(index, num,
+				    SMP_CACHE_BYTES, &vp_dev->vdev,
+				    true, true, vp_notify, callback, name);
+	if (!vq)
+		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	vp_iowrite16(num, &cfg->queue_size);
-	vp_iowrite64_twopart(virt_to_phys(info->queue),
+	vp_iowrite16(virtqueue_get_vring_size(vq), &cfg->queue_size);
+	vp_iowrite64_twopart(virtqueue_get_desc_addr(vq),
 			     &cfg->queue_desc_lo, &cfg->queue_desc_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_avail(vq)),
+	vp_iowrite64_twopart(virtqueue_get_avail_addr(vq),
 			     &cfg->queue_avail_lo, &cfg->queue_avail_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_used(vq)),
+	vp_iowrite64_twopart(virtqueue_get_used_addr(vq),
 			     &cfg->queue_used_lo, &cfg->queue_used_hi);
 
 	if (vp_dev->notify_base) {
@@ -410,8 +378,6 @@ err_assign_vector:
 		pci_iounmap(vp_dev->pci_dev, (void __iomem __force *)vq->priv);
 err_map_notify:
 	vring_del_virtqueue(vq);
-err_new_queue:
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 	return ERR_PTR(err);
 }
 
@@ -456,8 +422,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		pci_iounmap(vp_dev->pci_dev, (void __force __iomem *)vq->priv);
 
 	vring_del_virtqueue(vq);
-
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 }
 
 static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
@@ -641,6 +605,13 @@ int virtio_pci_modern_probe(struct virtio_pci_device *vp_dev)
 		return -EINVAL;
 	}
 
+	err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (err)
+		err = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (err)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	/* Device capability is only mandatory for devices that have
 	 * device-specific configuration.
 	 */
-- 
2.5.0


^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 8/9] virtio_pci: Use the DMA API if enabled
  2016-02-03  5:46 ` Andy Lutomirski
                   ` (19 preceding siblings ...)
  (?)
@ 2016-02-03  5:46 ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, KVM, linux-s390, Benjamin Herrenschmidt,
	Stefano Stabellini, Sebastian Ott, linux-kernel,
	Christoph Hellwig, Christian Borntraeger, David Vrabel,
	Andy Lutomirski, xen-devel, sparclinux, Paolo Bonzini,
	Linux Virtualization, David Woodhouse, David S. Miller,
	Martin Schwidefsky

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.

This fixes virtio-pci on platforms and busses that have IOMMUs.  This
will break the experimental QEMU Q35 IOMMU support until QEMU is
fixed.  In exchange, it fixes physical virtio hardware as well as
virtio-pci running under Xen.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_pci_common.h |  6 ----
 drivers/virtio/virtio_pci_legacy.c | 42 +++++++++++---------------
 drivers/virtio/virtio_pci_modern.c | 61 ++++++++++----------------------------
 3 files changed, 33 insertions(+), 76 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 2cc252270b2d..28263200ed42 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -35,12 +35,6 @@ struct virtio_pci_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 
diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
index 48bc9797e530..8c4e61783441 100644
--- a/drivers/virtio/virtio_pci_legacy.c
+++ b/drivers/virtio/virtio_pci_legacy.c
@@ -119,7 +119,6 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  u16 msix_vec)
 {
 	struct virtqueue *vq;
-	unsigned long size;
 	u16 num;
 	int err;
 
@@ -131,27 +130,19 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	if (!num || ioread32(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN))
 		return ERR_PTR(-ENOENT);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	size = PAGE_ALIGN(vring_size(num, VIRTIO_PCI_VRING_ALIGN));
-	info->queue = alloc_pages_exact(size, GFP_KERNEL|__GFP_ZERO);
-	if (info->queue == NULL)
+	/* create the vring */
+	vq = vring_create_virtqueue(index, num,
+				    VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
+				    true, false, vp_notify, callback, name);
+	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	iowrite32(virt_to_phys(info->queue) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
 		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto out_activate_queue;
-	}
-
 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
 
 	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
@@ -159,17 +150,15 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 		msix_vec = ioread16(vp_dev->ioaddr + VIRTIO_MSI_QUEUE_VECTOR);
 		if (msix_vec == VIRTIO_MSI_NO_VECTOR) {
 			err = -EBUSY;
-			goto out_assign;
+			goto out_deactivate;
 		}
 	}
 
 	return vq;
 
-out_assign:
-	vring_del_virtqueue(vq);
-out_activate_queue:
+out_deactivate:
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 	return ERR_PTR(err);
 }
 
@@ -177,7 +166,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 {
 	struct virtqueue *vq = info->vq;
 	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
-	unsigned long size;
 
 	iowrite16(vq->index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
 
@@ -188,13 +176,10 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
 	}
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_PCI_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 }
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -227,6 +212,13 @@ int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
 		return -ENODEV;
 	}
 
+	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (rc)
+		rc = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (rc)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	rc = pci_request_region(pci_dev, 0, "virtio-pci-legacy");
 	if (rc)
 		return rc;
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index c0c11fad4611..0b4a4f440b85 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -287,31 +287,6 @@ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 	return vp_ioread16(&vp_dev->common->msix_config);
 }
 
-static size_t vring_pci_size(u16 num)
-{
-	/* We only need a cacheline separation. */
-	return PAGE_ALIGN(vring_size(num, SMP_CACHE_BYTES));
-}
-
-static void *alloc_virtqueue_pages(int *num)
-{
-	void *pages;
-
-	/* TODO: allocate each queue chunk individually */
-	for (; *num && vring_pci_size(*num) > PAGE_SIZE; *num /= 2) {
-		pages = alloc_pages_exact(vring_pci_size(*num),
-					  GFP_KERNEL|__GFP_ZERO|__GFP_NOWARN);
-		if (pages)
-			return pages;
-	}
-
-	if (!*num)
-		return NULL;
-
-	/* Try to get a single page. You are my only hope! */
-	return alloc_pages_exact(vring_pci_size(*num), GFP_KERNEL|__GFP_ZERO);
-}
-
 static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  struct virtio_pci_vq_info *info,
 				  unsigned index,
@@ -343,29 +318,22 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	/* get offset of notification word for this vq */
 	off = vp_ioread16(&cfg->queue_notify_off);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	info->queue = alloc_virtqueue_pages(&info->num);
-	if (info->queue == NULL)
-		return ERR_PTR(-ENOMEM);
-
 	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 SMP_CACHE_BYTES, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto err_new_queue;
-	}
+	vq = vring_create_virtqueue(index, num,
+				    SMP_CACHE_BYTES, &vp_dev->vdev,
+				    true, true, vp_notify, callback, name);
+	if (!vq)
+		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	vp_iowrite16(num, &cfg->queue_size);
-	vp_iowrite64_twopart(virt_to_phys(info->queue),
+	vp_iowrite16(virtqueue_get_vring_size(vq), &cfg->queue_size);
+	vp_iowrite64_twopart(virtqueue_get_desc_addr(vq),
 			     &cfg->queue_desc_lo, &cfg->queue_desc_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_avail(vq)),
+	vp_iowrite64_twopart(virtqueue_get_avail_addr(vq),
 			     &cfg->queue_avail_lo, &cfg->queue_avail_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_used(vq)),
+	vp_iowrite64_twopart(virtqueue_get_used_addr(vq),
 			     &cfg->queue_used_lo, &cfg->queue_used_hi);
 
 	if (vp_dev->notify_base) {
@@ -410,8 +378,6 @@ err_assign_vector:
 		pci_iounmap(vp_dev->pci_dev, (void __iomem __force *)vq->priv);
 err_map_notify:
 	vring_del_virtqueue(vq);
-err_new_queue:
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 	return ERR_PTR(err);
 }
 
@@ -456,8 +422,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		pci_iounmap(vp_dev->pci_dev, (void __force __iomem *)vq->priv);
 
 	vring_del_virtqueue(vq);
-
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 }
 
 static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
@@ -641,6 +605,13 @@ int virtio_pci_modern_probe(struct virtio_pci_device *vp_dev)
 		return -EINVAL;
 	}
 
+	err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (err)
+		err = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (err)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	/* Device capability is only mandatory for devices that have
 	 * device-specific configuration.
 	 */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 8/9] virtio_pci: Use the DMA API if enabled
  2016-02-03  5:46 ` Andy Lutomirski
                   ` (18 preceding siblings ...)
  (?)
@ 2016-02-03  5:46 ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Cornelia Huck, Joerg Roedel, KVM, linux-s390,
	Benjamin Herrenschmidt, Stefano Stabellini, Sebastian Ott,
	linux-kernel, Christoph Hellwig, Christian Borntraeger,
	David Vrabel, Andy Lutomirski, xen-devel, sparclinux,
	Paolo Bonzini, Linux Virtualization, David Woodhouse,
	David S. Miller, Martin Schwidefsky

This switches to vring_create_virtqueue, simplifying the driver and
adding DMA API support.

This fixes virtio-pci on platforms and busses that have IOMMUs.  This
will break the experimental QEMU Q35 IOMMU support until QEMU is
fixed.  In exchange, it fixes physical virtio hardware as well as
virtio-pci running under Xen.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_pci_common.h |  6 ----
 drivers/virtio/virtio_pci_legacy.c | 42 +++++++++++---------------
 drivers/virtio/virtio_pci_modern.c | 61 ++++++++++----------------------------
 3 files changed, 33 insertions(+), 76 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 2cc252270b2d..28263200ed42 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -35,12 +35,6 @@ struct virtio_pci_vq_info {
 	/* the actual virtqueue */
 	struct virtqueue *vq;
 
-	/* the number of entries in the queue */
-	int num;
-
-	/* the virtual address of the ring queue */
-	void *queue;
-
 	/* the list node for the virtqueues list */
 	struct list_head node;
 
diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
index 48bc9797e530..8c4e61783441 100644
--- a/drivers/virtio/virtio_pci_legacy.c
+++ b/drivers/virtio/virtio_pci_legacy.c
@@ -119,7 +119,6 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  u16 msix_vec)
 {
 	struct virtqueue *vq;
-	unsigned long size;
 	u16 num;
 	int err;
 
@@ -131,27 +130,19 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	if (!num || ioread32(vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN))
 		return ERR_PTR(-ENOENT);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	size = PAGE_ALIGN(vring_size(num, VIRTIO_PCI_VRING_ALIGN));
-	info->queue = alloc_pages_exact(size, GFP_KERNEL|__GFP_ZERO);
-	if (info->queue == NULL)
+	/* create the vring */
+	vq = vring_create_virtqueue(index, num,
+				    VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
+				    true, false, vp_notify, callback, name);
+	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	iowrite32(virt_to_phys(info->queue) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
+	iowrite32(virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT,
 		  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 VIRTIO_PCI_VRING_ALIGN, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto out_activate_queue;
-	}
-
 	vq->priv = (void __force *)vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY;
 
 	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
@@ -159,17 +150,15 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 		msix_vec = ioread16(vp_dev->ioaddr + VIRTIO_MSI_QUEUE_VECTOR);
 		if (msix_vec == VIRTIO_MSI_NO_VECTOR) {
 			err = -EBUSY;
-			goto out_assign;
+			goto out_deactivate;
 		}
 	}
 
 	return vq;
 
-out_assign:
-	vring_del_virtqueue(vq);
-out_activate_queue:
+out_deactivate:
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 	return ERR_PTR(err);
 }
 
@@ -177,7 +166,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 {
 	struct virtqueue *vq = info->vq;
 	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
-	unsigned long size;
 
 	iowrite16(vq->index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
 
@@ -188,13 +176,10 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		ioread8(vp_dev->ioaddr + VIRTIO_PCI_ISR);
 	}
 
-	vring_del_virtqueue(vq);
-
 	/* Select and deactivate the queue */
 	iowrite32(0, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_PFN);
 
-	size = PAGE_ALIGN(vring_size(info->num, VIRTIO_PCI_VRING_ALIGN));
-	free_pages_exact(info->queue, size);
+	vring_del_virtqueue(vq);
 }
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -227,6 +212,13 @@ int virtio_pci_legacy_probe(struct virtio_pci_device *vp_dev)
 		return -ENODEV;
 	}
 
+	rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (rc)
+		rc = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (rc)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	rc = pci_request_region(pci_dev, 0, "virtio-pci-legacy");
 	if (rc)
 		return rc;
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index c0c11fad4611..0b4a4f440b85 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -287,31 +287,6 @@ static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 	return vp_ioread16(&vp_dev->common->msix_config);
 }
 
-static size_t vring_pci_size(u16 num)
-{
-	/* We only need a cacheline separation. */
-	return PAGE_ALIGN(vring_size(num, SMP_CACHE_BYTES));
-}
-
-static void *alloc_virtqueue_pages(int *num)
-{
-	void *pages;
-
-	/* TODO: allocate each queue chunk individually */
-	for (; *num && vring_pci_size(*num) > PAGE_SIZE; *num /= 2) {
-		pages = alloc_pages_exact(vring_pci_size(*num),
-					  GFP_KERNEL|__GFP_ZERO|__GFP_NOWARN);
-		if (pages)
-			return pages;
-	}
-
-	if (!*num)
-		return NULL;
-
-	/* Try to get a single page. You are my only hope! */
-	return alloc_pages_exact(vring_pci_size(*num), GFP_KERNEL|__GFP_ZERO);
-}
-
 static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  struct virtio_pci_vq_info *info,
 				  unsigned index,
@@ -343,29 +318,22 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	/* get offset of notification word for this vq */
 	off = vp_ioread16(&cfg->queue_notify_off);
 
-	info->num = num;
 	info->msix_vector = msix_vec;
 
-	info->queue = alloc_virtqueue_pages(&info->num);
-	if (info->queue == NULL)
-		return ERR_PTR(-ENOMEM);
-
 	/* create the vring */
-	vq = vring_new_virtqueue(index, info->num,
-				 SMP_CACHE_BYTES, &vp_dev->vdev,
-				 true, info->queue, vp_notify, callback, name);
-	if (!vq) {
-		err = -ENOMEM;
-		goto err_new_queue;
-	}
+	vq = vring_create_virtqueue(index, num,
+				    SMP_CACHE_BYTES, &vp_dev->vdev,
+				    true, true, vp_notify, callback, name);
+	if (!vq)
+		return ERR_PTR(-ENOMEM);
 
 	/* activate the queue */
-	vp_iowrite16(num, &cfg->queue_size);
-	vp_iowrite64_twopart(virt_to_phys(info->queue),
+	vp_iowrite16(virtqueue_get_vring_size(vq), &cfg->queue_size);
+	vp_iowrite64_twopart(virtqueue_get_desc_addr(vq),
 			     &cfg->queue_desc_lo, &cfg->queue_desc_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_avail(vq)),
+	vp_iowrite64_twopart(virtqueue_get_avail_addr(vq),
 			     &cfg->queue_avail_lo, &cfg->queue_avail_hi);
-	vp_iowrite64_twopart(virt_to_phys(virtqueue_get_used(vq)),
+	vp_iowrite64_twopart(virtqueue_get_used_addr(vq),
 			     &cfg->queue_used_lo, &cfg->queue_used_hi);
 
 	if (vp_dev->notify_base) {
@@ -410,8 +378,6 @@ err_assign_vector:
 		pci_iounmap(vp_dev->pci_dev, (void __iomem __force *)vq->priv);
 err_map_notify:
 	vring_del_virtqueue(vq);
-err_new_queue:
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 	return ERR_PTR(err);
 }
 
@@ -456,8 +422,6 @@ static void del_vq(struct virtio_pci_vq_info *info)
 		pci_iounmap(vp_dev->pci_dev, (void __force __iomem *)vq->priv);
 
 	vring_del_virtqueue(vq);
-
-	free_pages_exact(info->queue, vring_pci_size(info->num));
 }
 
 static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
@@ -641,6 +605,13 @@ int virtio_pci_modern_probe(struct virtio_pci_device *vp_dev)
 		return -EINVAL;
 	}
 
+	err = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
+	if (err)
+		err = dma_set_mask_and_coherent(&pci_dev->dev,
+						DMA_BIT_MASK(32));
+	if (err)
+		dev_warn(&pci_dev->dev, "Failed to enable 64-bit or 32-bit DMA.  Trying to continue, but this might not work.\n");
+
 	/* Device capability is only mandatory for devices that have
 	 * device-specific configuration.
 	 */
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH v7 9/9] vring: Use the DMA API on Xen
  2016-02-03  5:46 ` Andy Lutomirski
@ 2016-02-03  5:46   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel, Andy Lutomirski

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 drivers/virtio/virtio_ring.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index e46d08107a50..5c802d47892c 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -25,6 +25,7 @@
 #include <linux/hrtimer.h>
 #include <linux/kmemleak.h>
 #include <linux/dma-mapping.h>
+#include <xen/xen.h>
 
 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
@@ -136,6 +137,17 @@ struct vring_virtqueue {
 
 static bool vring_use_dma_api(struct virtio_device *vdev)
 {
+	/*
+	 * In theory, it's possible to have a buggy QEMU-supposed
+	 * emulated Q35 IOMMU and Xen enabled at the same time.  On
+	 * such a configuration, virtio has never worked and will
+	 * not work without an even larger kludge.  Instead, enable
+	 * the DMA API if we're a Xen guest, which at least allows
+	 * all of the sensible Xen configurations to work correctly.
+	 */
+	if (xen_domain())
+		return true;
+
 	return false;
 }
 
-- 
2.5.0
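
The helper above only answers the policy question; the code that consumes it
was added by the virtio_ring DMA patch earlier in this series.  A rough
sketch of that consumer, written from memory rather than copied from the
patch (the real helper differs in detail), looks like:

	static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
				       dma_addr_t *dma_handle, gfp_t flag)
	{
		void *queue;

		if (vring_use_dma_api(vdev))
			/* vdev->dev.parent is the transport device doing the DMA */
			return dma_alloc_coherent(vdev->dev.parent, size,
						  dma_handle, flag);

		/*
		 * Historic behaviour: plain pages, with the "DMA address"
		 * simply being the physical address.
		 */
		queue = alloc_pages_exact(PAGE_ALIGN(size), flag | __GFP_ZERO);
		if (queue)
			*dma_handle = (dma_addr_t)virt_to_phys(queue);
		return queue;
	}

So on Xen the rings end up allocated through dma_alloc_coherent() and all
descriptor addresses go through the DMA API, which lets the Xen DMA ops do
their address translation and bounce buffering and is what actually fixes
virtio in Xen guests.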

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH v7 9/9] vring: Use the DMA API on Xen
  2016-02-03  5:46   ` Andy Lutomirski
  (?)
@ 2016-02-03  9:49     ` David Vrabel
  -1 siblings, 0 replies; 73+ messages in thread
From: David Vrabel @ 2016-02-03  9:49 UTC (permalink / raw)
  To: Andy Lutomirski, Michael S. Tsirkin
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	Stefano Stabellini, xen-devel

On 03/02/16 05:46, Andy Lutomirski wrote:
> Signed-off-by: Andy Lutomirski <luto@kernel.org>

You forgot the previous Reviewed-by tags.

David

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v7 5/9] virtio_ring: Support DMA APIs
  2016-02-03  5:46   ` Andy Lutomirski
  (?)
@ 2016-02-03 13:52     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 73+ messages in thread
From: Michael S. Tsirkin @ 2016-02-03 13:52 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Benjamin Herrenschmidt, David Woodhouse, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel

On Tue, Feb 02, 2016 at 09:46:36PM -0800, Andy Lutomirski wrote:
> virtio_ring currently sends the device (usually a hypervisor)
> physical addresses of its I/O buffers.  This is okay when DMA
> addresses and physical addresses are the same thing, but this isn't
> always the case.  For example, this never works on Xen guests, and
> it is likely to fail if a physical "virtio" device ever ends up
> behind an IOMMU or swiotlb.
> 
> The immediate use case for me is to enable virtio on Xen guests.
> For that to work, we need vring to support DMA address translation
> as well as a corresponding change to virtio_pci or to another
> driver.
> 
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  drivers/virtio/Kconfig           |   2 +-
>  drivers/virtio/virtio_ring.c     | 200 ++++++++++++++++++++++++++++++++-------
>  tools/virtio/linux/dma-mapping.h |  17 ++++
>  3 files changed, 183 insertions(+), 36 deletions(-)
>  create mode 100644 tools/virtio/linux/dma-mapping.h
> 
> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> index cab9f3f63a38..77590320d44c 100644
> --- a/drivers/virtio/Kconfig
> +++ b/drivers/virtio/Kconfig
> @@ -60,7 +60,7 @@ config VIRTIO_INPUT
>  
>   config VIRTIO_MMIO
>  	tristate "Platform bus driver for memory mapped virtio devices"
> -	depends on HAS_IOMEM
> +	depends on HAS_IOMEM && HAS_DMA
>   	select VIRTIO
>   	---help---
>   	 This drivers provides support for memory mapped virtio

What's this chunk doing here btw? Should be part of the mmio patch?

> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index ab0be6c084f6..9abc008ff7ea 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -24,6 +24,7 @@
>  #include <linux/module.h>
>  #include <linux/hrtimer.h>
>  #include <linux/kmemleak.h>
> +#include <linux/dma-mapping.h>
>  
>  #ifdef DEBUG
>  /* For development, we want to crash whenever the ring is screwed. */
> @@ -54,6 +55,11 @@
>  #define END_USE(vq)
>  #endif
>  
> +struct vring_desc_state {
> +	void *data;			/* Data for callback. */
> +	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
> +};
> +
>  struct vring_virtqueue {
>  	struct virtqueue vq;
>  
> @@ -98,8 +104,8 @@ struct vring_virtqueue {
>  	ktime_t last_add_time;
>  #endif
>  
> -	/* Tokens for callbacks. */
> -	void *data[];
> +	/* Per-descriptor state. */
> +	struct vring_desc_state desc_state[];
>  };
>  
>  #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
> @@ -128,6 +134,79 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
>  	return false;
>  }
>  
> +/*
> + * The DMA ops on various arches are rather gnarly right now, and
> + * making all of the arch DMA ops work on the vring device itself
> + * is a mess.  For now, we use the parent device for DMA ops.
> + */
> +struct device *vring_dma_dev(const struct vring_virtqueue *vq)
> +{
> +	return vq->vq.vdev->dev.parent;
> +}
> +
> +/* Map one sg entry. */
> +static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
> +				   struct scatterlist *sg,
> +				   enum dma_data_direction direction)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return (dma_addr_t)sg_phys(sg);
> +
> +	/*
> +	 * We can't use dma_map_sg, because we don't use scatterlists in
> +	 * the way it expects (we don't guarantee that the scatterlist
> +	 * will exist for the lifetime of the mapping).
> +	 */
> +	return dma_map_page(vring_dma_dev(vq),
> +			    sg_page(sg), sg->offset, sg->length,
> +			    direction);
> +}
> +
> +static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
> +				   void *cpu_addr, size_t size,
> +				   enum dma_data_direction direction)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return (dma_addr_t)virt_to_phys(cpu_addr);
> +
> +	return dma_map_single(vring_dma_dev(vq),
> +			      cpu_addr, size, direction);
> +}
> +
> +static void vring_unmap_one(const struct vring_virtqueue *vq,
> +			    struct vring_desc *desc)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> +			       (flags & VRING_DESC_F_WRITE) ?
> +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	}
> +}
> +
> +static int vring_mapping_error(const struct vring_virtqueue *vq,
> +			       dma_addr_t addr)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return 0;
> +
> +	return dma_mapping_error(vring_dma_dev(vq), addr);
> +}
> +
>  static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
>  					 unsigned int total_sg, gfp_t gfp)
>  {
> @@ -161,7 +240,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  	struct scatterlist *sg;
>  	struct vring_desc *desc;
> -	unsigned int i, n, avail, descs_used, uninitialized_var(prev);
> +	unsigned int i, n, avail, descs_used, uninitialized_var(prev), err_idx;
>  	int head;
>  	bool indirect;
>  
> @@ -201,21 +280,15 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  
>  	if (desc) {
>  		/* Use a single buffer which doesn't continue */
> -		vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
> -		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, virt_to_phys(desc));
> -		/* avoid kmemleak false positive (hidden by virt_to_phys) */
> -		kmemleak_ignore(desc);
> -		vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc));
> -
> +		indirect = true;
>  		/* Set up rest to use this indirect table. */
>  		i = 0;
>  		descs_used = 1;
> -		indirect = true;
>  	} else {
> +		indirect = false;
>  		desc = vq->vring.desc;
>  		i = head;
>  		descs_used = total_sg;
> -		indirect = false;
>  	}
>  
>  	if (vq->vq.num_free < descs_used) {
> @@ -230,13 +303,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  		return -ENOSPC;
>  	}
>  
> -	/* We're about to use some buffers from the free list. */
> -	vq->vq.num_free -= descs_used;
> -
>  	for (n = 0; n < out_sgs; n++) {
>  		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> +			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
> +			if (vring_mapping_error(vq, addr))
> +				goto unmap_release;
> +
>  			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
> -			desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
> +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
>  			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
>  			prev = i;
>  			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> @@ -244,8 +318,12 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	}
>  	for (; n < (out_sgs + in_sgs); n++) {
>  		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> +			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
> +			if (vring_mapping_error(vq, addr))
> +				goto unmap_release;
> +
>  			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
> -			desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
> +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
>  			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
>  			prev = i;
>  			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> @@ -254,14 +332,33 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	/* Last one doesn't continue. */
>  	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
>  
> +	if (indirect) {
> +		/* Now that the indirect table is filled in, map it. */
> +		dma_addr_t addr = vring_map_single(
> +			vq, desc, total_sg * sizeof(struct vring_desc),
> +			DMA_TO_DEVICE);
> +		if (vring_mapping_error(vq, addr))
> +			goto unmap_release;
> +
> +		vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
> +		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, addr);
> +
> +		vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc));
> +	}
> +
> +	/* We're using some buffers from the free list. */
> +	vq->vq.num_free -= descs_used;
> +
>  	/* Update free pointer */
>  	if (indirect)
>  		vq->free_head = virtio16_to_cpu(_vq->vdev, vq->vring.desc[head].next);
>  	else
>  		vq->free_head = i;
>  
> -	/* Set token. */
> -	vq->data[head] = data;
> +	/* Store token and indirect buffer state. */
> +	vq->desc_state[head].data = data;
> +	if (indirect)
> +		vq->desc_state[head].indir_desc = desc;
>  
>  	/* Put entry in available array (but don't update avail->idx until they
>  	 * do sync). */
> @@ -284,6 +381,24 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  		virtqueue_kick(_vq);
>  
>  	return 0;
> +
> +unmap_release:
> +	err_idx = i;
> +	i = head;
> +
> +	for (n = 0; n < total_sg; n++) {
> +		if (i == err_idx)
> +			break;
> +		vring_unmap_one(vq, &desc[i]);
> +		i = vq->vring.desc[i].next;
> +	}
> +
> +	vq->vq.num_free += total_sg;
> +
> +	if (indirect)
> +		kfree(desc);
> +
> +	return -EIO;
>  }
>  
>  /**
> @@ -454,27 +569,43 @@ EXPORT_SYMBOL_GPL(virtqueue_kick);
>  
>  static void detach_buf(struct vring_virtqueue *vq, unsigned int head)
>  {
> -	unsigned int i;
> +	unsigned int i, j;
> +	u16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
>  
>  	/* Clear data ptr. */
> -	vq->data[head] = NULL;
> +	vq->desc_state[head].data = NULL;
>  
> -	/* Put back on free list: find end */
> +	/* Put back on free list: unmap first-level descriptors and find end */
>  	i = head;
>  
> -	/* Free the indirect table */
> -	if (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT))
> -		kfree(phys_to_virt(virtio64_to_cpu(vq->vq.vdev, vq->vring.desc[i].addr)));
> -
> -	while (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT)) {
> +	while (vq->vring.desc[i].flags & nextflag) {
> +		vring_unmap_one(vq, &vq->vring.desc[i]);
>  		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
>  		vq->vq.num_free++;
>  	}
>  
> +	vring_unmap_one(vq, &vq->vring.desc[i]);
>  	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
>  	vq->free_head = head;
> +
>  	/* Plus final descriptor */
>  	vq->vq.num_free++;
> +
> +	/* Free the indirect table, if any, now that it's unmapped. */
> +	if (vq->desc_state[head].indir_desc) {
> +		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
> +		u32 len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
> +
> +		BUG_ON(!(vq->vring.desc[head].flags &
> +			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
> +		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
> +
> +		for (j = 0; j < len / sizeof(struct vring_desc); j++)
> +			vring_unmap_one(vq, &indir_desc[j]);
> +
> +		kfree(vq->desc_state[head].indir_desc);
> +		vq->desc_state[head].indir_desc = NULL;
> +	}
>  }
>  
>  static inline bool more_used(const struct vring_virtqueue *vq)
> @@ -529,13 +660,13 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
>  		BAD_RING(vq, "id %u out of range\n", i);
>  		return NULL;
>  	}
> -	if (unlikely(!vq->data[i])) {
> +	if (unlikely(!vq->desc_state[i].data)) {
>  		BAD_RING(vq, "id %u is not a head!\n", i);
>  		return NULL;
>  	}
>  
>  	/* detach_buf clears data, so grab it now. */
> -	ret = vq->data[i];
> +	ret = vq->desc_state[i].data;
>  	detach_buf(vq, i);
>  	vq->last_used_idx++;
>  	/* If we expect an interrupt for the next entry, tell host
> @@ -709,10 +840,10 @@ void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
>  	START_USE(vq);
>  
>  	for (i = 0; i < vq->vring.num; i++) {
> -		if (!vq->data[i])
> +		if (!vq->desc_state[i].data)
>  			continue;
>  		/* detach_buf clears data, so grab it now. */
> -		buf = vq->data[i];
> +		buf = vq->desc_state[i].data;
>  		detach_buf(vq, i);
>  		vq->avail_idx_shadow--;
>  		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
> @@ -766,7 +897,8 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
>  		return NULL;
>  	}
>  
> -	vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL);
> +	vq = kmalloc(sizeof(*vq) + num * sizeof(struct vring_desc_state),
> +		     GFP_KERNEL);
>  	if (!vq)
>  		return NULL;
>  
> @@ -800,11 +932,9 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
>  
>  	/* Put everything in free lists. */
>  	vq->free_head = 0;
> -	for (i = 0; i < num-1; i++) {
> +	for (i = 0; i < num-1; i++)
>  		vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
> -		vq->data[i] = NULL;
> -	}
> -	vq->data[i] = NULL;
> +	memset(vq->desc_state, 0, num * sizeof(struct vring_desc_state));
>  
>  	return &vq->vq;
>  }
> diff --git a/tools/virtio/linux/dma-mapping.h b/tools/virtio/linux/dma-mapping.h
> new file mode 100644
> index 000000000000..4f93af89ae16
> --- /dev/null
> +++ b/tools/virtio/linux/dma-mapping.h
> @@ -0,0 +1,17 @@
> +#ifndef _LINUX_DMA_MAPPING_H
> +#define _LINUX_DMA_MAPPING_H
> +
> +#ifdef CONFIG_HAS_DMA
> +# error Virtio userspace code does not support CONFIG_HAS_DMA
> +#endif
> +
> +#define PCI_DMA_BUS_IS_PHYS 1
> +
> +enum dma_data_direction {
> +	DMA_BIDIRECTIONAL = 0,
> +	DMA_TO_DEVICE = 1,
> +	DMA_FROM_DEVICE = 2,
> +	DMA_NONE = 3,
> +};
> +
> +#endif
> -- 
> 2.5.0

^ permalink raw reply	[flat|nested] 73+ messages in thread
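
As an illustration of the pattern under review, here is a minimal, self-contained userspace sketch of the "translate or pass through" decision that patch 5/9 adds around every descriptor address. The names use_dma_api, fake_iommu_map and map_one_buf are stand-ins invented for this sketch; the actual patch routes through vring_use_dma_api(), dma_map_page() and dma_map_single() on the ring's parent device, as shown in the diff above.

/*
 * Illustrative sketch only -- not part of the posted series.  Models the
 * conditional mapping done by vring_map_one_sg()/vring_map_single().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

/* Stand-in for vring_use_dma_api(): true when addresses need translation
 * (Xen, a real IOMMU, or swiotlb bouncing). */
static bool use_dma_api = true;

/* Stand-in for dma_map_page(): model an IOMMU by offsetting the address. */
static dma_addr_t fake_iommu_map(uint64_t phys)
{
	return phys + 0x100000000ull;	/* pretend bus address != physical */
}

/* Same shape as vring_map_one_sg(): hand the device either the raw
 * physical address or the translated DMA address. */
static dma_addr_t map_one_buf(uint64_t phys)
{
	if (!use_dma_api)
		return (dma_addr_t)phys;

	return fake_iommu_map(phys);
}

int main(void)
{
	uint64_t phys = 0x1000;

	printf("DMA API on:  device sees %#llx\n",
	       (unsigned long long)map_one_buf(phys));

	use_dma_api = false;
	printf("DMA API off: device sees %#llx\n",
	       (unsigned long long)map_one_buf(phys));

	return 0;
}

Running it prints a different device-visible address depending on whether translation is in effect, which is exactly the distinction the Xen, IOMMU and swiotlb cases discussed in this thread rely on.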

* Re: [PATCH v7 5/9] virtio_ring: Support DMA APIs
  2016-02-03  5:46   ` Andy Lutomirski
  (?)
  (?)
@ 2016-02-03 13:52   ` Michael S. Tsirkin
  -1 siblings, 0 replies; 73+ messages in thread
From: Michael S. Tsirkin @ 2016-02-03 13:52 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Cornelia Huck, Joerg Roedel, KVM, linux-s390,
	Benjamin Herrenschmidt, Stefano Stabellini, Sebastian Ott,
	linux-kernel, Christoph Hellwig, Christian Borntraeger,
	David Vrabel, xen-devel, sparclinux, Paolo Bonzini,
	Linux Virtualization, David Woodhouse, David S. Miller,
	Martin Schwidefsky

On Tue, Feb 02, 2016 at 09:46:36PM -0800, Andy Lutomirski wrote:
> virtio_ring currently sends the device (usually a hypervisor)
> physical addresses of its I/O buffers.  This is okay when DMA
> addresses and physical addresses are the same thing, but this isn't
> always the case.  For example, this never works on Xen guests, and
> it is likely to fail if a physical "virtio" device ever ends up
> behind an IOMMU or swiotlb.
> 
> The immediate use case for me is to enable virtio on Xen guests.
> For that to work, we need vring to support DMA address translation
> as well as a corresponding change to virtio_pci or to another
> driver.
> 
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  drivers/virtio/Kconfig           |   2 +-
>  drivers/virtio/virtio_ring.c     | 200 ++++++++++++++++++++++++++++++++-------
>  tools/virtio/linux/dma-mapping.h |  17 ++++
>  3 files changed, 183 insertions(+), 36 deletions(-)
>  create mode 100644 tools/virtio/linux/dma-mapping.h
> 
> diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> index cab9f3f63a38..77590320d44c 100644
> --- a/drivers/virtio/Kconfig
> +++ b/drivers/virtio/Kconfig
> @@ -60,7 +60,7 @@ config VIRTIO_INPUT
>  
>   config VIRTIO_MMIO
>  	tristate "Platform bus driver for memory mapped virtio devices"
> -	depends on HAS_IOMEM
> +	depends on HAS_IOMEM && HAS_DMA
>   	select VIRTIO
>   	---help---
>   	 This drivers provides support for memory mapped virtio

What's this chunk doing here btw? Should be part of the mmio patch?

> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index ab0be6c084f6..9abc008ff7ea 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -24,6 +24,7 @@
>  #include <linux/module.h>
>  #include <linux/hrtimer.h>
>  #include <linux/kmemleak.h>
> +#include <linux/dma-mapping.h>
>  
>  #ifdef DEBUG
>  /* For development, we want to crash whenever the ring is screwed. */
> @@ -54,6 +55,11 @@
>  #define END_USE(vq)
>  #endif
>  
> +struct vring_desc_state {
> +	void *data;			/* Data for callback. */
> +	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
> +};
> +
>  struct vring_virtqueue {
>  	struct virtqueue vq;
>  
> @@ -98,8 +104,8 @@ struct vring_virtqueue {
>  	ktime_t last_add_time;
>  #endif
>  
> -	/* Tokens for callbacks. */
> -	void *data[];
> +	/* Per-descriptor state. */
> +	struct vring_desc_state desc_state[];
>  };
>  
>  #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
> @@ -128,6 +134,79 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
>  	return false;
>  }
>  
> +/*
> + * The DMA ops on various arches are rather gnarly right now, and
> + * making all of the arch DMA ops work on the vring device itself
> + * is a mess.  For now, we use the parent device for DMA ops.
> + */
> +struct device *vring_dma_dev(const struct vring_virtqueue *vq)
> +{
> +	return vq->vq.vdev->dev.parent;
> +}
> +
> +/* Map one sg entry. */
> +static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
> +				   struct scatterlist *sg,
> +				   enum dma_data_direction direction)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return (dma_addr_t)sg_phys(sg);
> +
> +	/*
> +	 * We can't use dma_map_sg, because we don't use scatterlists in
> +	 * the way it expects (we don't guarantee that the scatterlist
> +	 * will exist for the lifetime of the mapping).
> +	 */
> +	return dma_map_page(vring_dma_dev(vq),
> +			    sg_page(sg), sg->offset, sg->length,
> +			    direction);
> +}
> +
> +static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
> +				   void *cpu_addr, size_t size,
> +				   enum dma_data_direction direction)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return (dma_addr_t)virt_to_phys(cpu_addr);
> +
> +	return dma_map_single(vring_dma_dev(vq),
> +			      cpu_addr, size, direction);
> +}
> +
> +static void vring_unmap_one(const struct vring_virtqueue *vq,
> +			    struct vring_desc *desc)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> +			       (flags & VRING_DESC_F_WRITE) ?
> +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	}
> +}
> +
> +static int vring_mapping_error(const struct vring_virtqueue *vq,
> +			       dma_addr_t addr)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return 0;
> +
> +	return dma_mapping_error(vring_dma_dev(vq), addr);
> +}
> +
>  static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
>  					 unsigned int total_sg, gfp_t gfp)
>  {
> @@ -161,7 +240,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  	struct scatterlist *sg;
>  	struct vring_desc *desc;
> -	unsigned int i, n, avail, descs_used, uninitialized_var(prev);
> +	unsigned int i, n, avail, descs_used, uninitialized_var(prev), err_idx;
>  	int head;
>  	bool indirect;
>  
> @@ -201,21 +280,15 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  
>  	if (desc) {
>  		/* Use a single buffer which doesn't continue */
> -		vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
> -		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, virt_to_phys(desc));
> -		/* avoid kmemleak false positive (hidden by virt_to_phys) */
> -		kmemleak_ignore(desc);
> -		vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc));
> -
> +		indirect = true;
>  		/* Set up rest to use this indirect table. */
>  		i = 0;
>  		descs_used = 1;
> -		indirect = true;
>  	} else {
> +		indirect = false;
>  		desc = vq->vring.desc;
>  		i = head;
>  		descs_used = total_sg;
> -		indirect = false;
>  	}
>  
>  	if (vq->vq.num_free < descs_used) {
> @@ -230,13 +303,14 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  		return -ENOSPC;
>  	}
>  
> -	/* We're about to use some buffers from the free list. */
> -	vq->vq.num_free -= descs_used;
> -
>  	for (n = 0; n < out_sgs; n++) {
>  		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> +			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
> +			if (vring_mapping_error(vq, addr))
> +				goto unmap_release;
> +
>  			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
> -			desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
> +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
>  			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
>  			prev = i;
>  			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> @@ -244,8 +318,12 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	}
>  	for (; n < (out_sgs + in_sgs); n++) {
>  		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> +			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
> +			if (vring_mapping_error(vq, addr))
> +				goto unmap_release;
> +
>  			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
> -			desc[i].addr = cpu_to_virtio64(_vq->vdev, sg_phys(sg));
> +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
>  			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
>  			prev = i;
>  			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> @@ -254,14 +332,33 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	/* Last one doesn't continue. */
>  	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
>  
> +	if (indirect) {
> +		/* Now that the indirect table is filled in, map it. */
> +		dma_addr_t addr = vring_map_single(
> +			vq, desc, total_sg * sizeof(struct vring_desc),
> +			DMA_TO_DEVICE);
> +		if (vring_mapping_error(vq, addr))
> +			goto unmap_release;
> +
> +		vq->vring.desc[head].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT);
> +		vq->vring.desc[head].addr = cpu_to_virtio64(_vq->vdev, addr);
> +
> +		vq->vring.desc[head].len = cpu_to_virtio32(_vq->vdev, total_sg * sizeof(struct vring_desc));
> +	}
> +
> +	/* We're using some buffers from the free list. */
> +	vq->vq.num_free -= descs_used;
> +
>  	/* Update free pointer */
>  	if (indirect)
>  		vq->free_head = virtio16_to_cpu(_vq->vdev, vq->vring.desc[head].next);
>  	else
>  		vq->free_head = i;
>  
> -	/* Set token. */
> -	vq->data[head] = data;
> +	/* Store token and indirect buffer state. */
> +	vq->desc_state[head].data = data;
> +	if (indirect)
> +		vq->desc_state[head].indir_desc = desc;
>  
>  	/* Put entry in available array (but don't update avail->idx until they
>  	 * do sync). */
> @@ -284,6 +381,24 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  		virtqueue_kick(_vq);
>  
>  	return 0;
> +
> +unmap_release:
> +	err_idx = i;
> +	i = head;
> +
> +	for (n = 0; n < total_sg; n++) {
> +		if (i == err_idx)
> +			break;
> +		vring_unmap_one(vq, &desc[i]);
> +		i = vq->vring.desc[i].next;
> +	}
> +
> +	vq->vq.num_free += total_sg;
> +
> +	if (indirect)
> +		kfree(desc);
> +
> +	return -EIO;
>  }
>  
>  /**
> @@ -454,27 +569,43 @@ EXPORT_SYMBOL_GPL(virtqueue_kick);
>  
>  static void detach_buf(struct vring_virtqueue *vq, unsigned int head)
>  {
> -	unsigned int i;
> +	unsigned int i, j;
> +	u16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
>  
>  	/* Clear data ptr. */
> -	vq->data[head] = NULL;
> +	vq->desc_state[head].data = NULL;
>  
> -	/* Put back on free list: find end */
> +	/* Put back on free list: unmap first-level descriptors and find end */
>  	i = head;
>  
> -	/* Free the indirect table */
> -	if (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT))
> -		kfree(phys_to_virt(virtio64_to_cpu(vq->vq.vdev, vq->vring.desc[i].addr)));
> -
> -	while (vq->vring.desc[i].flags & cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT)) {
> +	while (vq->vring.desc[i].flags & nextflag) {
> +		vring_unmap_one(vq, &vq->vring.desc[i]);
>  		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
>  		vq->vq.num_free++;
>  	}
>  
> +	vring_unmap_one(vq, &vq->vring.desc[i]);
>  	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
>  	vq->free_head = head;
> +
>  	/* Plus final descriptor */
>  	vq->vq.num_free++;
> +
> +	/* Free the indirect table, if any, now that it's unmapped. */
> +	if (vq->desc_state[head].indir_desc) {
> +		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
> +		u32 len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
> +
> +		BUG_ON(!(vq->vring.desc[head].flags &
> +			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
> +		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
> +
> +		for (j = 0; j < len / sizeof(struct vring_desc); j++)
> +			vring_unmap_one(vq, &indir_desc[j]);
> +
> +		kfree(vq->desc_state[head].indir_desc);
> +		vq->desc_state[head].indir_desc = NULL;
> +	}
>  }
>  
>  static inline bool more_used(const struct vring_virtqueue *vq)
> @@ -529,13 +660,13 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
>  		BAD_RING(vq, "id %u out of range\n", i);
>  		return NULL;
>  	}
> -	if (unlikely(!vq->data[i])) {
> +	if (unlikely(!vq->desc_state[i].data)) {
>  		BAD_RING(vq, "id %u is not a head!\n", i);
>  		return NULL;
>  	}
>  
>  	/* detach_buf clears data, so grab it now. */
> -	ret = vq->data[i];
> +	ret = vq->desc_state[i].data;
>  	detach_buf(vq, i);
>  	vq->last_used_idx++;
>  	/* If we expect an interrupt for the next entry, tell host
> @@ -709,10 +840,10 @@ void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
>  	START_USE(vq);
>  
>  	for (i = 0; i < vq->vring.num; i++) {
> -		if (!vq->data[i])
> +		if (!vq->desc_state[i].data)
>  			continue;
>  		/* detach_buf clears data, so grab it now. */
> -		buf = vq->data[i];
> +		buf = vq->desc_state[i].data;
>  		detach_buf(vq, i);
>  		vq->avail_idx_shadow--;
>  		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
> @@ -766,7 +897,8 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
>  		return NULL;
>  	}
>  
> -	vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL);
> +	vq = kmalloc(sizeof(*vq) + num * sizeof(struct vring_desc_state),
> +		     GFP_KERNEL);
>  	if (!vq)
>  		return NULL;
>  
> @@ -800,11 +932,9 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
>  
>  	/* Put everything in free lists. */
>  	vq->free_head = 0;
> -	for (i = 0; i < num-1; i++) {
> +	for (i = 0; i < num-1; i++)
>  		vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
> -		vq->data[i] = NULL;
> -	}
> -	vq->data[i] = NULL;
> +	memset(vq->desc_state, 0, num * sizeof(struct vring_desc_state));
>  
>  	return &vq->vq;
>  }
> diff --git a/tools/virtio/linux/dma-mapping.h b/tools/virtio/linux/dma-mapping.h
> new file mode 100644
> index 000000000000..4f93af89ae16
> --- /dev/null
> +++ b/tools/virtio/linux/dma-mapping.h
> @@ -0,0 +1,17 @@
> +#ifndef _LINUX_DMA_MAPPING_H
> +#define _LINUX_DMA_MAPPING_H
> +
> +#ifdef CONFIG_HAS_DMA
> +# error Virtio userspace code does not support CONFIG_HAS_DMA
> +#endif
> +
> +#define PCI_DMA_BUS_IS_PHYS 1
> +
> +enum dma_data_direction {
> +	DMA_BIDIRECTIONAL = 0,
> +	DMA_TO_DEVICE = 1,
> +	DMA_FROM_DEVICE = 2,
> +	DMA_NONE = 3,
> +};
> +
> +#endif
> -- 
> 2.5.0

^ permalink raw reply	[flat|nested] 73+ messages in thread
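
The desc_state[] accesses in the hunks above rely on a small per-descriptor
bookkeeping structure introduced earlier in the patch; a minimal sketch that
is consistent with those accesses (the comments are inferred from how the
fields are used here, not taken from the patch itself):

/* One entry per ring descriptor, allocated alongside the virtqueue. */
struct vring_desc_state {
	void *data;			/* Token handed back by virtqueue_get_buf(). */
	struct vring_desc *indir_desc;	/* Indirect table to unmap and kfree(), if any. */
};

detach_buf() clears both fields once the descriptors are unmapped and back on
the free list, which is why a NULL ->data is enough for virtqueue_get_buf()
and virtqueue_detach_unused_buf() to tell that a slot is not a buffer head.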

* Re: [PATCH v7 0/9] virtio DMA API, yet again
  2016-02-03  5:46 ` Andy Lutomirski
  (?)
@ 2016-02-03 17:52   ` Stefano Stabellini
  -1 siblings, 0 replies; 73+ messages in thread
From: Stefano Stabellini @ 2016-02-03 17:52 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Michael S. Tsirkin, Benjamin Herrenschmidt, David Woodhouse,
	linux-kernel, David S. Miller, sparclinux, Joerg Roedel,
	Christian Borntraeger, Cornelia Huck, Sebastian Ott,
	Paolo Bonzini, Christoph Hellwig, KVM, Martin Schwidefsky,
	linux-s390, Linux Virtualization, David Vrabel,
	Stefano Stabellini, xen-devel

On Tue, 2 Feb 2016, Andy Lutomirski wrote:
> This switches virtio to use the DMA API on Xen and if requested by
> module option.
> 
> This fixes virtio on Xen, and it shouldn't break anything because it's
> off by default on everything except Xen PV on x86.
> 
> To the Xen people: is this okay?  If it doesn't work on other Xen
> variants (PVH? HVM?), can you submit follow-up patches to fix it?

I have been waiting for something like this for a long time: up to now
it wasn't possible to use Xen inside a VM with virtio devices.

You can add my:

Tested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> To everyone else: we've waffled on this for way too long.  I think
> we should get the DMA API implementation in with a conservative
> policy like this rather than waiting until we achieve perfection.
> I'm tired of carrying these patches around.
> 
> I changed queue allocation around a bit in this version.  Per Michael's
> request, we no longer use dma_zalloc_coherent in the !dma_api case.
> Instead we use alloc_pages_exact, just like the current code does.
> This simplifies the ring address accessors, because they can always
> load from the dma addr rather than depending on vring_use_dma_api
> themselves.
> 
> There's an odd warning in here if the ring's physical address
> doesn't fit in a dma_addr_t.  This could only possibly happen on
> really weird configurations in which phys_addr_t is wider than
> dma_addr_t.  AFAICT this is only possible on i386 PAE systems and on
> MIPS, and even there it only happens if highmem is off.  But that
> means we're safe, since we should never end up with high allocations
> on non-highmem systems unless we explicitly ask for them, which we
> don't.
> 
> If this is too scary, I can add yet more cruft to avoid it, but
> it seems harmless enough to me, and it means that the driver will
> be totally clean once all the vring_use_dma_api calls go away.
> 
> Michael, if these survive review, can you stage these in your tree?
> Can you also take a look at tools/virtio?  I probably broke it, but I
> couldn't get it to build without these patches either, so I'm stuck.
> 
> Changes from v6:
>  - Remove HAVE_DMA_ATTRS and add Acked-by (Cornelia)
>  - Add some missing signed-off-by lines from me (whoops)
>  - Rework queue allocation (Michael)
> 
> Changes from v5:
>  - Typo fixes (David Woodhouse)
>  - Use xen_domain() to detect Xen (David Vrabel)
>  - Pass struct vring_virtqueue * into vring_use_dma_api for future proofing
>  - Removed module parameter (Michael)
> 
> Changes from v4:
>  - Bake vring_use_dma_api in from the beginning.
>  - Automatically enable only on Xen.
>  - Add module parameter.
>  - Add s390 and alpha DMA API implementations.
>  - Rebase to 4.5-rc1.
> 
> Changes from v3:
>  - More big-endian fixes.
>  - Added better virtio-ring APIs that handle allocation and use them in
>    virtio-mmio and virtio-pci.
>  - Switch to Michael's virtio-net patch.
> 
> Changes from v2:
>  - Fix vring_mapping_error incorrect argument
> 
> Changes from v1:
>  - Fix an endian conversion error causing a BUG to hit.
>  - Fix a DMA ordering issue (swiotlb=force works now).
>  - Minor cleanups.
> 
> Andy Lutomirski (6):
>   vring: Introduce vring_use_dma_api()
>   virtio_ring: Support DMA APIs
>   virtio: Add improved queue allocation API
>   virtio_mmio: Use the DMA API if enabled
>   virtio_pci: Use the DMA API if enabled
>   vring: Use the DMA API on Xen
> 
> Christian Borntraeger (3):
>   dma: Provide simple noop dma ops
>   alpha/dma: use common noop dma ops
>   s390/dma: Allow per device dma ops
> 
>  arch/alpha/kernel/pci-noop.c        |  46 +---
>  arch/s390/Kconfig                   |   5 +-
>  arch/s390/include/asm/device.h      |   6 +-
>  arch/s390/include/asm/dma-mapping.h |   6 +-
>  arch/s390/pci/pci.c                 |   1 +
>  arch/s390/pci/pci_dma.c             |   4 +-
>  drivers/virtio/Kconfig              |   2 +-
>  drivers/virtio/virtio_mmio.c        |  67 ++----
>  drivers/virtio/virtio_pci_common.h  |   6 -
>  drivers/virtio/virtio_pci_legacy.c  |  42 ++--
>  drivers/virtio/virtio_pci_modern.c  |  61 ++---
>  drivers/virtio/virtio_ring.c        | 439 +++++++++++++++++++++++++++++++-----
>  include/linux/dma-mapping.h         |   2 +
>  include/linux/virtio.h              |  23 +-
>  include/linux/virtio_ring.h         |  35 +++
>  lib/Makefile                        |   1 +
>  lib/dma-noop.c                      |  75 ++++++
>  tools/virtio/linux/dma-mapping.h    |  17 ++
>  18 files changed, 594 insertions(+), 244 deletions(-)
>  create mode 100644 lib/dma-noop.c
>  create mode 100644 tools/virtio/linux/dma-mapping.h
> 
> -- 
> 2.5.0
> 

^ permalink raw reply	[flat|nested] 73+ messages in thread
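
The "odd warning" in the cover letter quoted above is a truncation check on
the ring's address when the DMA API is not in use: the ring is still
allocated with alloc_pages_exact(), its physical address doubles as the DMA
address, and the code warns if that address cannot be represented in a
dma_addr_t.  A sketch of the idea (the helper name and error handling below
are illustrative, not the patch's exact code):

/* Illustrative only: allocate ring pages and check for dma_addr_t overflow. */
static void *alloc_ring_pages(size_t size, dma_addr_t *dma_handle, gfp_t flag)
{
	void *queue = alloc_pages_exact(PAGE_ALIGN(size), flag);

	if (queue) {
		phys_addr_t phys_addr = virt_to_phys(queue);

		*dma_handle = (dma_addr_t)phys_addr;
		/* Only possible if phys_addr_t is wider than dma_addr_t. */
		if (WARN_ON_ONCE(*dma_handle != phys_addr)) {
			free_pages_exact(queue, PAGE_ALIGN(size));
			return NULL;
		}
	}
	return queue;
}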

* Re: [PATCH v7 5/9] virtio_ring: Support DMA APIs
  2016-02-03 13:52     ` Michael S. Tsirkin
@ 2016-02-03 17:53       ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03 17:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Christian Borntraeger, Paolo Bonzini, David Woodhouse, xen-devel,
	Martin Schwidefsky, Sebastian Ott, David S. Miller, linux-s390,
	David Vrabel, Cornelia Huck, Joerg Roedel, KVM,
	Benjamin Herrenschmidt, Stefano Stabellini, Christoph Hellwig,
	Linux Virtualization, linux-kernel, sparclinux

On Feb 3, 2016 5:52 AM, "Michael S. Tsirkin" <mst@redhat.com> wrote:
>
> On Tue, Feb 02, 2016 at 09:46:36PM -0800, Andy Lutomirski wrote:
> > virtio_ring currently sends the device (usually a hypervisor)
> > physical addresses of its I/O buffers.  This is okay when DMA
> > addresses and physical addresses are the same thing, but this isn't
> > always the case.  For example, this never works on Xen guests, and
> > it is likely to fail if a physical "virtio" device ever ends up
> > behind an IOMMU or swiotlb.
> >
> > The immediate use case for me is to enable virtio on Xen guests.
> > For that to work, we need vring to support DMA address translation
> > as well as a corresponding change to virtio_pci or to another
> > driver.
> >
> > Signed-off-by: Andy Lutomirski <luto@kernel.org>
> > ---
> >  drivers/virtio/Kconfig           |   2 +-
> >  drivers/virtio/virtio_ring.c     | 200 ++++++++++++++++++++++++++++++++-------
> >  tools/virtio/linux/dma-mapping.h |  17 ++++
> >  3 files changed, 183 insertions(+), 36 deletions(-)
> >  create mode 100644 tools/virtio/linux/dma-mapping.h
> >
> > diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
> > index cab9f3f63a38..77590320d44c 100644
> > --- a/drivers/virtio/Kconfig
> > +++ b/drivers/virtio/Kconfig
> > @@ -60,7 +60,7 @@ config VIRTIO_INPUT
> >
> >   config VIRTIO_MMIO
> >       tristate "Platform bus driver for memory mapped virtio devices"
> > -     depends on HAS_IOMEM
> > +     depends on HAS_IOMEM && HAS_DMA
> >       select VIRTIO
> >       ---help---
> >        This drivers provides support for memory mapped virtio
>
> What's this chunk doing here btw? Should be part of the mmio patch?
>

IIRC it was deliberate.  Making virtio depend on HAS_DMA didn't work
right because kconfig doesn't propagate dependencies through select
intelligently.  This patch makes core virtio depend on HAS_DMA, so I
added the dependency here, too.

--Andy

^ permalink raw reply	[flat|nested] 73+ messages in thread
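
The commit message quoted above is the heart of patch 5/9: descriptor
addresses now come from the DMA API rather than being raw physical
addresses.  A sketch of the per-scatterlist mapping helper, matching the
vring_map_one_sg() calls visible in the virtqueue_add() hunks earlier in the
thread (the body is an approximation rather than the patch verbatim, and
vring_dma_dev() is assumed to return the struct device that performs DMA):

/* Sketch consistent with the calls in the diff; not the patch's exact body. */
static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
				   struct scatterlist *sg,
				   enum dma_data_direction direction)
{
	/* Old behaviour: hand the device the physical address unchanged. */
	if (!vring_use_dma_api(vq->vq.vdev))
		return (dma_addr_t)sg_phys(sg);

	/* New behaviour: translate through the device's DMA ops (IOMMU,
	 * swiotlb, swiotlb-xen, ...), which is what makes Xen guests work. */
	return dma_map_page(vring_dma_dev(vq), sg_page(sg),
			    sg->offset, sg->length, direction);
}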

* Re: [PATCH v7 9/9] vring: Use the DMA API on Xen
  2016-02-03  9:49     ` David Vrabel
  (?)
@ 2016-02-04 17:49       ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-04 17:49 UTC (permalink / raw)
  To: David Vrabel
  Cc: Andy Lutomirski, Michael S. Tsirkin, Benjamin Herrenschmidt,
	David Woodhouse, linux-kernel, David S. Miller, sparclinux,
	Joerg Roedel, Christian Borntraeger, Cornelia Huck,
	Sebastian Ott, Paolo Bonzini, Christoph Hellwig, KVM,
	Martin Schwidefsky, linux-s390, Linux Virtualization,
	Stefano Stabellini, xen-devel

On Wed, Feb 3, 2016 at 1:49 AM, David Vrabel <david.vrabel@citrix.com> wrote:
> On 03/02/16 05:46, Andy Lutomirski wrote:
>> Signed-off-by: Andy Lutomirski <luto@kernel.org>
>
> You forgot the previous Reviewed-by tags.

Whoops.  If I send another version, they'll be there.

>
> David



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v7 0/9] virtio DMA API, yet again
  2016-02-03  5:46 ` Andy Lutomirski
@ 2016-02-17  5:48   ` Andy Lutomirski
  -1 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-17  5:48 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Michael S. Tsirkin, Benjamin Herrenschmidt, David Woodhouse,
	linux-kernel, David S. Miller, sparclinux, Joerg Roedel,
	Christian Borntraeger, Cornelia Huck, Sebastian Ott,
	Paolo Bonzini, Christoph Hellwig, KVM, Martin Schwidefsky,
	linux-s390, Linux Virtualization, David Vrabel,
	Stefano Stabellini, xen-devel

On Tue, Feb 2, 2016 at 9:46 PM, Andy Lutomirski <luto@kernel.org> wrote:
> This switches virtio to use the DMA API on Xen and if requested by
> module option.

Michael, any update on this?

--Andy

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v7 0/9] virtio DMA API, yet again
  2016-02-17  5:48   ` Andy Lutomirski
  (?)
@ 2016-02-17  9:29     ` Michael S. Tsirkin
  -1 siblings, 0 replies; 73+ messages in thread
From: Michael S. Tsirkin @ 2016-02-17  9:29 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Andy Lutomirski, Benjamin Herrenschmidt, David Woodhouse,
	linux-kernel, David S. Miller, sparclinux, Joerg Roedel,
	Christian Borntraeger, Cornelia Huck, Sebastian Ott,
	Paolo Bonzini, Christoph Hellwig, KVM, Martin Schwidefsky,
	linux-s390, Linux Virtualization, David Vrabel,
	Stefano Stabellini, xen-devel

On Tue, Feb 16, 2016 at 09:48:58PM -0800, Andy Lutomirski wrote:
> On Tue, Feb 2, 2016 at 9:46 PM, Andy Lutomirski <luto@kernel.org> wrote:
> > This switches virtio to use the DMA API on Xen and if requested by
> > module option.
> 
> Michael, any update on this?
> 
> --Andy

I was hoping for an explicit ack from David Woodhouse,
but I guess the informal "let's not hold this up"
that was sent on v5 will do.

I've queued this up for 4.6, thanks everyone!

-- 
MST

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH v7 0/9] virtio DMA API, yet again
  2016-02-17  9:29     ` Michael S. Tsirkin
                       ` (3 preceding siblings ...)
  (?)
@ 2016-02-17  9:42     ` David Woodhouse
  -1 siblings, 0 replies; 73+ messages in thread
From: David Woodhouse @ 2016-02-17  9:42 UTC (permalink / raw)
  To: Michael S. Tsirkin, Andy Lutomirski
  Cc: Andy Lutomirski, Benjamin Herrenschmidt, linux-kernel,
	David S. Miller, sparclinux, Joerg Roedel, Christian Borntraeger,
	Cornelia Huck, Sebastian Ott, Paolo Bonzini, Christoph Hellwig,
	KVM, Martin Schwidefsky, linux-s390, Linux Virtualization,
	David Vrabel, Stefano Stabellini, xen-devel

On Wed, 2016-02-17 at 11:29 +0200, Michael S. Tsirkin wrote:
> 
> I was hoping for an explicit ack from David Woodhouse,
> but I guess the informal "let's not hold this up"
> that was sent on v5 will do.

Apologies; I was working under that assumption too.

-- 
dwmw2


^ permalink raw reply	[flat|nested] 73+ messages in thread


* [PATCH v7 0/9] virtio DMA API, yet again
@ 2016-02-03  5:46 Andy Lutomirski
  0 siblings, 0 replies; 73+ messages in thread
From: Andy Lutomirski @ 2016-02-03  5:46 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Joerg Roedel, KVM, linux-s390, Benjamin Herrenschmidt,
	Stefano Stabellini, Sebastian Ott, linux-kernel,
	Christoph Hellwig, Christian Borntraeger, David Vrabel,
	Andy Lutomirski, xen-devel, sparclinux, Paolo Bonzini,
	Linux Virtualization, David Woodhouse, David S. Miller,
	Martin Schwidefsky

This switches virtio to use the DMA API on Xen and if requested by
module option.

This fixes virtio on Xen, and it shouldn't break anything because it's
off by default on everything except Xen PV on x86.
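
(Illustration only, not code from the series: a minimal sketch of the
policy just described, assuming vring_use_dma_api() simply keys off
xen_domain() as the v5 changelog below suggests.  The virtqueue
argument is unused here, matching the "future proofing" note.)

#include <linux/types.h>        /* bool */
#include <xen/xen.h>            /* xen_domain() */

struct vring_virtqueue;         /* opaque here; defined in virtio_ring.c */

static bool vring_use_dma_api(struct vring_virtqueue *vq)
{
        /*
         * Sketch: Xen guests need real DMA translation (grant mappings,
         * swiotlb-xen), so use the DMA API there; everyone else keeps
         * the historical "DMA address == guest physical address" model.
         */
        if (xen_domain())
                return true;

        return false;
}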

To the Xen people: is this okay?  If it doesn't work on other Xen
variants (PVH? HVM?), can you submit follow-up patches to fix it?

To everyone else: we've waffled on this for way too long.  I think
we should get the DMA API implementation in with a conservative
policy like this rather than waiting until we achieve perfection.
I'm tired of carrying these patches around.

I changed queue allocation around a bit in this version.  Per Michael's
request, we no longer use dma_zalloc_coherent in the !dma_api case.
Instead we use alloc_pages_exact, just like the current code does.
This simplifies the ring address accessors, because they can always
load from the dma addr rather than depending on vring_use_dma_api
themselves.
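
(Again illustration only, not the code from the series: a rough sketch
of the allocation split just described, with a hypothetical
use_dma_api parameter standing in for the vring_use_dma_api()
decision.)

#include <linux/dma-mapping.h>  /* dma_zalloc_coherent() */
#include <linux/gfp.h>          /* alloc_pages_exact() */
#include <linux/mm.h>           /* PAGE_ALIGN() */
#include <linux/virtio.h>       /* struct virtio_device */
#include <asm/io.h>             /* virt_to_phys() */

static void *vring_alloc_queue(struct virtio_device *vdev, size_t size,
                               dma_addr_t *dma_handle, gfp_t flag,
                               bool use_dma_api)
{
        void *queue;

        if (use_dma_api)
                /* DMA API path: the platform's dma_ops allocate and map. */
                return dma_zalloc_coherent(vdev->dev.parent, size,
                                           dma_handle, flag);

        /*
         * Legacy path: plain pages, just like the current code.  The
         * "DMA address" is simply the physical address, so the ring
         * accessors can always load from *dma_handle no matter which
         * path was taken.
         */
        queue = alloc_pages_exact(PAGE_ALIGN(size), flag);
        if (queue)
                *dma_handle = (dma_addr_t)virt_to_phys(queue);
        return queue;
}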

There's an odd warning in here if the ring's physical address
doesn't fit in a dma_addr_t.  This could only possibly happen on
really weird configurations in which phys_addr_t is wider than
dma_addr_t.  AFAICT this is only possible on i386 PAE systems and on
MIPS, and even there it only happens if highmem is off.  But that
means we're safe, since we should never end up with high allocations
on non-highmem systems unless we explicitly ask for them, which we
don't.
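
(Sketch of the check being described; the helper name is made up, and
presumably the series places an equivalent check near the legacy
allocation rather than in a helper like this.)

#include <linux/bug.h>          /* WARN_ON_ONCE() */
#include <linux/types.h>        /* phys_addr_t, dma_addr_t */

/* Hypothetical helper, for illustration only. */
static dma_addr_t vring_phys_to_dma(phys_addr_t phys)
{
        /*
         * Can only fire when phys_addr_t is wider than dma_addr_t
         * (e.g. i386 PAE or some MIPS configs with highmem off) and
         * the ring really was allocated above the dma_addr_t range.
         */
        WARN_ON_ONCE((phys_addr_t)(dma_addr_t)phys != phys);

        return (dma_addr_t)phys;
}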

If this is too scary, I can add yet more cruft to avoid it, but
it seems harmless enough to me, and it means that the driver will
be totally clean once all the vring_use_dma_api calls go away.

Michael, if these survive review, can you stage these in your tree?
Can you also take a look at tools/virtio?  I probably broke it, but I
couldn't get it to build without these patches either, so I'm stuck.

Changes from v6:
 - Remove HAVE_DMA_ATTRS and add Acked-by (Cornelia)
 - Add some missing signed-off-by lines from me (whoops)
 - Rework queue allocation (Michael)

Changes from v5:
 - Typo fixes (David Woodhouse)
 - Use xen_domain() to detect Xen (David Vrabel)
 - Pass struct vring_virtqueue * into vring_use_dma_api for future proofing
 - Removed module parameter (Michael)

Changes from v4:
 - Bake vring_use_dma_api in from the beginning.
 - Automatically enable only on Xen.
 - Add module parameter.
 - Add s390 and alpha DMA API implementations.
 - Rebase to 4.5-rc1.

Changes from v3:
 - More big-endian fixes.
 - Added better virtio-ring APIs that handle allocation and use them in
   virtio-mmio and virtio-pci.
 - Switch to Michael's virtio-net patch.

Changes from v2:
 - Fix vring_mapping_error incorrect argument

Changes from v1:
 - Fix an endian conversion error causing a BUG to hit.
 - Fix a DMA ordering issue (swiotlb=force works now).
 - Minor cleanups.

Andy Lutomirski (6):
  vring: Introduce vring_use_dma_api()
  virtio_ring: Support DMA APIs
  virtio: Add improved queue allocation API
  virtio_mmio: Use the DMA API if enabled
  virtio_pci: Use the DMA API if enabled
  vring: Use the DMA API on Xen

Christian Borntraeger (3):
  dma: Provide simple noop dma ops
  alpha/dma: use common noop dma ops
  s390/dma: Allow per device dma ops

 arch/alpha/kernel/pci-noop.c        |  46 +---
 arch/s390/Kconfig                   |   5 +-
 arch/s390/include/asm/device.h      |   6 +-
 arch/s390/include/asm/dma-mapping.h |   6 +-
 arch/s390/pci/pci.c                 |   1 +
 arch/s390/pci/pci_dma.c             |   4 +-
 drivers/virtio/Kconfig              |   2 +-
 drivers/virtio/virtio_mmio.c        |  67 ++----
 drivers/virtio/virtio_pci_common.h  |   6 -
 drivers/virtio/virtio_pci_legacy.c  |  42 ++--
 drivers/virtio/virtio_pci_modern.c  |  61 ++---
 drivers/virtio/virtio_ring.c        | 439 +++++++++++++++++++++++++++++++-----
 include/linux/dma-mapping.h         |   2 +
 include/linux/virtio.h              |  23 +-
 include/linux/virtio_ring.h         |  35 +++
 lib/Makefile                        |   1 +
 lib/dma-noop.c                      |  75 ++++++
 tools/virtio/linux/dma-mapping.h    |  17 ++
 18 files changed, 594 insertions(+), 244 deletions(-)
 create mode 100644 lib/dma-noop.c
 create mode 100644 tools/virtio/linux/dma-mapping.h

-- 
2.5.0


end of thread, other threads:[~2016-02-17  9:42 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-03  5:46 [PATCH v7 0/9] virtio DMA API, yet again Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 1/9] dma: Provide simple noop dma ops Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 2/9] alpha/dma: use common " Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 3/9] s390/dma: Allow per device " Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 4/9] vring: Introduce vring_use_dma_api() Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 5/9] virtio_ring: Support DMA APIs Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03 13:52   ` Michael S. Tsirkin
2016-02-03 13:52   ` Michael S. Tsirkin
2016-02-03 13:52     ` Michael S. Tsirkin
2016-02-03 13:52     ` Michael S. Tsirkin
2016-02-03 17:53     ` Andy Lutomirski
2016-02-03 17:53     ` Andy Lutomirski
2016-02-03 17:53     ` Andy Lutomirski
2016-02-03 17:53       ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 6/9] virtio: Add improved queue allocation API Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 7/9] virtio_mmio: Use the DMA API if enabled Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 8/9] virtio_pci: " Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` [PATCH v7 9/9] vring: Use the DMA API on Xen Andy Lutomirski
2016-02-03  5:46   ` Andy Lutomirski
2016-02-03  9:49   ` David Vrabel
2016-02-03  9:49     ` David Vrabel
2016-02-03  9:49     ` David Vrabel
2016-02-04 17:49     ` Andy Lutomirski
2016-02-04 17:49     ` Andy Lutomirski
2016-02-04 17:49       ` Andy Lutomirski
2016-02-04 17:49       ` Andy Lutomirski
2016-02-03  9:49   ` David Vrabel
2016-02-03  9:49   ` David Vrabel
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03  5:46 ` Andy Lutomirski
2016-02-03 17:52 ` [PATCH v7 0/9] virtio DMA API, yet again Stefano Stabellini
2016-02-03 17:52 ` Stefano Stabellini
2016-02-03 17:52   ` Stefano Stabellini
2016-02-03 17:52   ` Stefano Stabellini
2016-02-03 17:52 ` Stefano Stabellini
2016-02-17  5:48 ` Andy Lutomirski
2016-02-17  5:48   ` Andy Lutomirski
2016-02-17  9:29   ` Michael S. Tsirkin
2016-02-17  9:29   ` Michael S. Tsirkin
2016-02-17  9:29     ` Michael S. Tsirkin
2016-02-17  9:29     ` Michael S. Tsirkin
2016-02-17  9:42     ` David Woodhouse
2016-02-17  9:42     ` David Woodhouse
2016-02-17  9:42     ` David Woodhouse
2016-02-17  5:48 ` Andy Lutomirski
2016-02-17  5:48 ` Andy Lutomirski
  -- strict thread matches above, loose matches on Subject: below --
2016-02-03  5:46 Andy Lutomirski
2016-02-03  5:46 Andy Lutomirski

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.