* [PATCH] swiotlb v0.6: separation of physical/virtual address translation
@ 2010-03-19 15:04 Konrad Rzeszutek Wilk
  2010-03-19 15:04 ` [PATCH 1/5] swiotlb: Make internal bookkeeping functions have 'swiotlb_bk' prefix Konrad Rzeszutek Wilk
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-19 15:04 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, jeremy, Ian.Campbell, dwmw2, alex.williamson

Fujita-san et al.

Attached is a set of patches that separate the address translation
(virt_to_phys, virt_to_bus, etc) from the SWIOTLB library.

Since the last posting I've:
 - Made the exported functions/variables have the 'swiotlb_bk' prefix instead
   of the 'do_[map|unmap]*' and 'io_tlb_*' combination.
 - Dropped the checkpatch and other rework patches.

I have not yet addressed the question of removing the 'overflow' buffer. There are
roughly 300 instances of DMA operations not being checked, which I plan on addressing
in a separate set of patches that will slowly roll out the checks and then
finally remove the 'overflow' buffer.

.. and the writeup for this set of patches:

The idea behind this set of patches is to make it possible to have separate
mechanisms for translating virtual to physical or virtual to DMA addresses
on platforms which need an SWIOTLB, and where physical != PCI bus address.

One customer of this is the pv-ops project, which can switch between
different modes of operation depending on the environment it is running in:
bare metal or virtualized (Xen for now).

On bare metal, SWIOTLB is used when there is no hardware IOMMU. In a virtualized
environment, it is used when PCI pass-through is enabled for the guest. The problem
with PCI pass-through is that the guest's idea of PFNs is not the real thing.
To fix that, there is a translation layer for PFN->machine frame number and vice versa.
To bubble that up to the SWIOTLB layer there are two possible solutions.

One solution has been to copy the SWIOTLB wholesale, stick it in
arch/x86/xen/swiotlb.c, and modify virt_to_phys, phys_to_virt and the others
to use the Xen address translation functions. Unfortunately, since the kernel can
run on bare metal, there would be a large code overlap with the real SWIOTLB.
(git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git xen/dom0/swiotlb-new)

Another approach, which this set of patches explores, is to abstract the
address translation and address determination functions away from the
SWIOTLB book-keeping functions. This way the core SWIOTLB library functions
are present in one place, while the address-related functions are in
a separate library that can be loaded when running on a non-bare-metal platform.

The set of patches is also accessible at (with a user of it on top):

git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb-2.6.git swiotlb-0.6


 include/linux/swiotlb.h |   33 +++++
 lib/swiotlb.c           |  321 +++++++++++++++++++++++++----------------------
 2 files changed, 202 insertions(+), 152 deletions(-)


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/5] swiotlb: Make internal bookkeeping functions have 'swiotlb_bk' prefix.
  2010-03-19 15:04 [PATCH] swiotlb v0.6: separation of physical/virtual address translation Konrad Rzeszutek Wilk
@ 2010-03-19 15:04 ` Konrad Rzeszutek Wilk
  2010-03-19 15:04   ` [PATCH 2/5] swiotlb: swiotlb_bk_map_single: abstract swiotlb_virt_to_bus calls out Konrad Rzeszutek Wilk
  2010-03-25 13:56 ` [PATCH] swiotlb v0.6: separation of physical/virtual address translation Konrad Rzeszutek Wilk
  2010-04-05  2:12 ` FUJITA Tomonori
  2 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-19 15:04 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, jeremy, Ian.Campbell, dwmw2, alex.williamson,
	Konrad Rzeszutek Wilk

The functions that operate on io_tlb_list/io_tlb_start/io_tlb_orig_addr
have the prefix 'swiotlb_bk' now.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 lib/swiotlb.c |   33 ++++++++++++++++++---------------
 1 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 437eedb..8530aa6 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -60,8 +60,8 @@ enum dma_sync_target {
 int swiotlb_force;
 
 /*
- * Used to do a quick range check in unmap_single and
- * sync_single_*, to see if the memory was in fact allocated by this
+ * Used to do a quick range check in swiotlb_bk_unmap_single and
+ * swiotlb_bk_sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
 static char *io_tlb_start, *io_tlb_end;
@@ -364,7 +364,8 @@ static void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
  * Allocates bounce buffer and returns its kernel virtual address.
  */
 static void *
-map_single(struct device *hwdev, phys_addr_t phys, size_t size, int dir)
+swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys, size_t size,
+		      int dir)
 {
 	unsigned long flags;
 	char *dma_addr;
@@ -470,7 +471,8 @@ found:
  * dma_addr is the kernel virtual address of the bounce buffer to unmap.
  */
 static void
-do_unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
+			int dir)
 {
 	unsigned long flags;
 	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
@@ -510,7 +512,7 @@ do_unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
 }
 
 static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size,
+swiotlb_bk_sync_single(struct device *hwdev, char *dma_addr, size_t size,
 	    int dir, int target)
 {
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
@@ -558,11 +560,11 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	}
 	if (!ret) {
 		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on map_single(), which
+		 * We are either out of memory or the device can't DMA to
+		 * GFP_DMA memory; fall back on swiotlb_bk_map_single(), which
 		 * will grab memory from the lowest available address range.
 		 */
-		ret = map_single(hwdev, 0, size, DMA_FROM_DEVICE);
+		ret = swiotlb_bk_map_single(hwdev, 0, size, DMA_FROM_DEVICE);
 		if (!ret)
 			return NULL;
 	}
@@ -577,7 +579,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 		       (unsigned long long)dev_addr);
 
 		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		do_unmap_single(hwdev, ret, size, DMA_TO_DEVICE);
+		swiotlb_bk_unmap_single(hwdev, ret, size, DMA_TO_DEVICE);
 		return NULL;
 	}
 	*dma_handle = dev_addr;
@@ -595,8 +597,8 @@ swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	if (!is_swiotlb_buffer(paddr))
 		free_pages((unsigned long)vaddr, get_order(size));
 	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		do_unmap_single(hwdev, vaddr, size, DMA_TO_DEVICE);
+		/* DMA_TO_DEVICE to avoid memcpy in swiotlb_bk_unmap_single */
+		swiotlb_bk_unmap_single(hwdev, vaddr, size, DMA_TO_DEVICE);
 }
 EXPORT_SYMBOL(swiotlb_free_coherent);
 
@@ -652,7 +654,7 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	/*
 	 * Oh well, have to allocate and map a bounce buffer.
 	 */
-	map = map_single(dev, phys, size, dir);
+	map = swiotlb_bk_map_single(dev, phys, size, dir);
 	if (!map) {
 		swiotlb_full(dev, size, dir, 1);
 		map = io_tlb_overflow_buffer;
@@ -686,7 +688,7 @@ static void unmap_single(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (is_swiotlb_buffer(paddr)) {
-		do_unmap_single(hwdev, phys_to_virt(paddr), size, dir);
+		swiotlb_bk_unmap_single(hwdev, phys_to_virt(paddr), size, dir);
 		return;
 	}
 
@@ -729,7 +731,8 @@ swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (is_swiotlb_buffer(paddr)) {
-		sync_single(hwdev, phys_to_virt(paddr), size, dir, target);
+		swiotlb_bk_sync_single(hwdev, phys_to_virt(paddr), size, dir,
+				       target);
 		return;
 	}
 
@@ -817,7 +820,7 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 
 		if (swiotlb_force ||
 		    !dma_capable(hwdev, dev_addr, sg->length)) {
-			void *map = map_single(hwdev, sg_phys(sg),
+			void *map = swiotlb_bk_map_single(hwdev, sg_phys(sg),
 					       sg->length, dir);
 			if (!map) {
 				/* Don't panic here, we expect map_sg users
-- 
1.6.2.5



* [PATCH 2/5] swiotlb: swiotlb_bk_map_single: abstract swiotlb_virt_to_bus calls out.
  2010-03-19 15:04 ` [PATCH 1/5] swiotlb: Make internal bookkeeping functions have 'swiotlb_bk' prefix Konrad Rzeszutek Wilk
@ 2010-03-19 15:04   ` Konrad Rzeszutek Wilk
  2010-03-19 15:04     ` [PATCH 3/5] swiotlb: Make all bookkeeping functions and variables have same prefix Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-19 15:04 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, jeremy, Ian.Campbell, dwmw2, alex.williamson,
	Konrad Rzeszutek Wilk

We want to move the swiotlb_virt_to_bus call out of swiotlb_bk_map_single so that
the caller of this function does the virt->phys->bus address translation.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 lib/swiotlb.c |   22 ++++++++++++++--------
 1 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 8530aa6..5a7d73b 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -364,21 +364,19 @@ static void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
  * Allocates bounce buffer and returns its kernel virtual address.
  */
 static void *
-swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys, size_t size,
-		      int dir)
+swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
+		      unsigned long start_dma_addr, size_t size, int dir)
 {
 	unsigned long flags;
 	char *dma_addr;
 	unsigned int nslots, stride, index, wrap;
 	int i;
-	unsigned long start_dma_addr;
 	unsigned long mask;
 	unsigned long offset_slots;
 	unsigned long max_slots;
 
 	mask = dma_get_seg_boundary(hwdev);
-	start_dma_addr = swiotlb_virt_to_bus(hwdev, io_tlb_start) & mask;
-
+	start_dma_addr = start_dma_addr & mask;
 	offset_slots = ALIGN(start_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
 
 	/*
@@ -546,6 +544,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	void *ret;
 	int order = get_order(size);
 	u64 dma_mask = DMA_BIT_MASK(32);
+	unsigned long start_dma_addr;
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -564,7 +563,9 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 		 * GFP_DMA memory; fall back on swiotlb_bk_map_single(), which
 		 * will grab memory from the lowest available address range.
 		 */
-		ret = swiotlb_bk_map_single(hwdev, 0, size, DMA_FROM_DEVICE);
+		start_dma_addr = swiotlb_virt_to_bus(hwdev, io_tlb_start);
+		ret = swiotlb_bk_map_single(hwdev, 0, start_dma_addr, size,
+					    DMA_FROM_DEVICE);
 		if (!ret)
 			return NULL;
 	}
@@ -638,6 +639,7 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 			    enum dma_data_direction dir,
 			    struct dma_attrs *attrs)
 {
+	unsigned long start_dma_addr;
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dev_addr = phys_to_dma(dev, phys);
 	void *map;
@@ -654,7 +656,8 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	/*
 	 * Oh well, have to allocate and map a bounce buffer.
 	 */
-	map = swiotlb_bk_map_single(dev, phys, size, dir);
+	start_dma_addr = swiotlb_virt_to_bus(dev, io_tlb_start);
+	map = swiotlb_bk_map_single(dev, phys, start_dma_addr, size, dir);
 	if (!map) {
 		swiotlb_full(dev, size, dir, 1);
 		map = io_tlb_overflow_buffer;
@@ -809,11 +812,13 @@ int
 swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 		     enum dma_data_direction dir, struct dma_attrs *attrs)
 {
+	unsigned long start_dma_addr;
 	struct scatterlist *sg;
 	int i;
 
 	BUG_ON(dir == DMA_NONE);
 
+	start_dma_addr = swiotlb_virt_to_bus(hwdev, io_tlb_start);
 	for_each_sg(sgl, sg, nelems, i) {
 		phys_addr_t paddr = sg_phys(sg);
 		dma_addr_t dev_addr = phys_to_dma(hwdev, paddr);
@@ -821,7 +826,8 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 		if (swiotlb_force ||
 		    !dma_capable(hwdev, dev_addr, sg->length)) {
 			void *map = swiotlb_bk_map_single(hwdev, sg_phys(sg),
-					       sg->length, dir);
+							  start_dma_addr,
+							  sg->length, dir);
 			if (!map) {
 				/* Don't panic here, we expect map_sg users
 				   to do proper error handling. */
-- 
1.6.2.5



* [PATCH 3/5] swiotlb: Make all bookkeeping functions and variables have same prefix.
  2010-03-19 15:04   ` [PATCH 2/5] swiotlb: swiotlb_bk_map_single: abstract swiotlb_virt_to_bus calls out Konrad Rzeszutek Wilk
@ 2010-03-19 15:04     ` Konrad Rzeszutek Wilk
  2010-03-19 15:04       ` [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible in the header file Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-19 15:04 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, jeremy, Ian.Campbell, dwmw2, alex.williamson,
	Konrad Rzeszutek Wilk

We give all bookkeeping functions and variables the 'swiotlb_bk'
prefix.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 lib/swiotlb.c |  247 +++++++++++++++++++++++++++++----------------------------
 1 files changed, 126 insertions(+), 121 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 5a7d73b..3926c14 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -64,38 +64,38 @@ int swiotlb_force;
  * swiotlb_bk_sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
-static char *io_tlb_start, *io_tlb_end;
+static char *swiotlb_bk_start, *swiotlb_bk_end;
 
 /*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ * The number of IO TLB blocks (in groups of 64) betweeen swiotlb_bk_start and
+ * swiotlb_bk_end.  This is command line adjustable via setup_io_tlb_npages.
  */
-static unsigned long io_tlb_nslabs;
+static unsigned long swiotlb_bk_nslabs;
 
 /*
  * When the IOMMU overflows we return a fallback buffer. This sets the size.
  */
-static unsigned long io_tlb_overflow = 32*1024;
+static unsigned long swiotlb_bk_overflow = 32*1024;
 
-void *io_tlb_overflow_buffer;
+void *swiotlb_bk_overflow_buffer;
 
 /*
  * This is a free list describing the number of free entries available from
  * each index
  */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
+static unsigned int *swiotlb_bk_list;
+static unsigned int swiotlb_bk_index;
 
 /*
  * We need to save away the original address corresponding to a mapped entry
  * for the sync operations.
  */
-static phys_addr_t *io_tlb_orig_addr;
+static phys_addr_t *swiotlb_bk_orig_addr;
 
 /*
  * Protect the above data structures in the map and unmap calls
  */
-static DEFINE_SPINLOCK(io_tlb_lock);
+static DEFINE_SPINLOCK(swiotlb_bk_lock);
 
 static int late_alloc;
 
@@ -103,9 +103,9 @@ static int __init
 setup_io_tlb_npages(char *str)
 {
 	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		swiotlb_bk_nslabs = simple_strtoul(str, &str, 0);
 		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+		swiotlb_bk_nslabs = ALIGN(swiotlb_bk_nslabs, IO_TLB_SEGSIZE);
 	}
 	if (*str == ',')
 		++str;
@@ -115,7 +115,7 @@ setup_io_tlb_npages(char *str)
 	return 1;
 }
 __setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
+/* make swiotlb_bk_overflow tunable too? */
 
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
@@ -126,14 +126,14 @@ static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
 
 void swiotlb_print_info(void)
 {
-	unsigned long bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	unsigned long bytes = swiotlb_bk_nslabs << IO_TLB_SHIFT;
 	phys_addr_t pstart, pend;
 
-	pstart = virt_to_phys(io_tlb_start);
-	pend = virt_to_phys(io_tlb_end);
+	pstart = virt_to_phys(swiotlb_bk_start);
+	pend = virt_to_phys(swiotlb_bk_end);
 
 	printk(KERN_INFO "Placing %luMB software IO TLB between %p - %p\n",
-	       bytes >> 20, io_tlb_start, io_tlb_end);
+	       bytes >> 20, swiotlb_bk_start, swiotlb_bk_end);
 	printk(KERN_INFO "software IO TLB at phys %#llx - %#llx\n",
 	       (unsigned long long)pstart,
 	       (unsigned long long)pend);
@@ -148,37 +148,38 @@ swiotlb_init_with_default_size(size_t default_size, int verbose)
 {
 	unsigned long i, bytes;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb_bk_nslabs) {
+		swiotlb_bk_nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb_bk_nslabs = ALIGN(swiotlb_bk_nslabs, IO_TLB_SEGSIZE);
 	}
 
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	bytes = swiotlb_bk_nslabs << IO_TLB_SHIFT;
 
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	io_tlb_start = alloc_bootmem_low_pages(bytes);
-	if (!io_tlb_start)
+	swiotlb_bk_start = alloc_bootmem_low_pages(bytes);
+	if (!swiotlb_bk_start)
 		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + bytes;
+	swiotlb_bk_end = swiotlb_bk_start + bytes;
 
 	/*
 	 * Allocate and initialize the free list array.  This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb_bk_start and swiotlb_bk_end.
 	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(phys_addr_t));
+	swiotlb_bk_list = alloc_bootmem(swiotlb_bk_nslabs * sizeof(int));
+	for (i = 0; i < swiotlb_bk_nslabs; i++)
+		swiotlb_bk_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	swiotlb_bk_index = 0;
+	swiotlb_bk_orig_addr = alloc_bootmem(swiotlb_bk_nslabs *
+					     sizeof(phys_addr_t));
 
 	/*
 	 * Get the overflow emergency buffer
 	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	if (!io_tlb_overflow_buffer)
+	swiotlb_bk_overflow_buffer = alloc_bootmem_low(swiotlb_bk_overflow);
+	if (!swiotlb_bk_overflow_buffer)
 		panic("Cannot allocate SWIOTLB overflow buffer!\n");
 	if (verbose)
 		swiotlb_print_info();
@@ -198,70 +199,71 @@ swiotlb_init(int verbose)
 int
 swiotlb_late_init_with_default_size(size_t default_size)
 {
-	unsigned long i, bytes, req_nslabs = io_tlb_nslabs;
+	unsigned long i, bytes, req_nslabs = swiotlb_bk_nslabs;
 	unsigned int order;
 
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	if (!swiotlb_bk_nslabs) {
+		swiotlb_bk_nslabs = (default_size >> IO_TLB_SHIFT);
+		swiotlb_bk_nslabs = ALIGN(swiotlb_bk_nslabs, IO_TLB_SEGSIZE);
 	}
 
 	/*
 	 * Get IO TLB memory from the low pages
 	 */
-	order = get_order(io_tlb_nslabs << IO_TLB_SHIFT);
-	io_tlb_nslabs = SLABS_PER_PAGE << order;
-	bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+	order = get_order(swiotlb_bk_nslabs << IO_TLB_SHIFT);
+	swiotlb_bk_nslabs = SLABS_PER_PAGE << order;
+	bytes = swiotlb_bk_nslabs << IO_TLB_SHIFT;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
-		io_tlb_start = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
+		swiotlb_bk_start = (void *)__get_free_pages(GFP_DMA |
+							__GFP_NOWARN,
 							order);
-		if (io_tlb_start)
+		if (swiotlb_bk_start)
 			break;
 		order--;
 	}
 
-	if (!io_tlb_start)
+	if (!swiotlb_bk_start)
 		goto cleanup1;
 
 	if (order != get_order(bytes)) {
 		printk(KERN_WARNING "Warning: only able to allocate %ld MB "
 		       "for software IO TLB\n", (PAGE_SIZE << order) >> 20);
-		io_tlb_nslabs = SLABS_PER_PAGE << order;
-		bytes = io_tlb_nslabs << IO_TLB_SHIFT;
+		swiotlb_bk_nslabs = SLABS_PER_PAGE << order;
+		bytes = swiotlb_bk_nslabs << IO_TLB_SHIFT;
 	}
-	io_tlb_end = io_tlb_start + bytes;
-	memset(io_tlb_start, 0, bytes);
+	swiotlb_bk_end = swiotlb_bk_start + bytes;
+	memset(swiotlb_bk_start, 0, bytes);
 
 	/*
 	 * Allocate and initialize the free list array.  This array is used
 	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
+	 * between swiotlb_bk_start and swiotlb_bk_end.
 	 */
-	io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
-	                              get_order(io_tlb_nslabs * sizeof(int)));
-	if (!io_tlb_list)
+	swiotlb_bk_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
+				get_order(swiotlb_bk_nslabs * sizeof(int)));
+	if (!swiotlb_bk_list)
 		goto cleanup2;
 
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
+	for (i = 0; i < swiotlb_bk_nslabs; i++)
+		swiotlb_bk_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	swiotlb_bk_index = 0;
 
-	io_tlb_orig_addr = (phys_addr_t *)
-		__get_free_pages(GFP_KERNEL,
-				 get_order(io_tlb_nslabs *
-					   sizeof(phys_addr_t)));
-	if (!io_tlb_orig_addr)
+	swiotlb_bk_orig_addr = (phys_addr_t *)__get_free_pages(GFP_KERNEL,
+						get_order(swiotlb_bk_nslabs *
+						  sizeof(phys_addr_t)));
+	if (!swiotlb_bk_orig_addr)
 		goto cleanup3;
 
-	memset(io_tlb_orig_addr, 0, io_tlb_nslabs * sizeof(phys_addr_t));
+	memset(swiotlb_bk_orig_addr, 0, swiotlb_bk_nslabs *
+					sizeof(phys_addr_t));
 
 	/*
 	 * Get the overflow emergency buffer
 	 */
-	io_tlb_overflow_buffer = (void *)__get_free_pages(GFP_DMA,
-	                                          get_order(io_tlb_overflow));
-	if (!io_tlb_overflow_buffer)
+	swiotlb_bk_overflow_buffer = (void *)__get_free_pages(GFP_DMA,
+					  get_order(swiotlb_bk_overflow));
+	if (!swiotlb_bk_overflow_buffer)
 		goto cleanup4;
 
 	swiotlb_print_info();
@@ -271,52 +273,52 @@ swiotlb_late_init_with_default_size(size_t default_size)
 	return 0;
 
 cleanup4:
-	free_pages((unsigned long)io_tlb_orig_addr,
-		   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
-	io_tlb_orig_addr = NULL;
+	free_pages((unsigned long)swiotlb_bk_orig_addr,
+		   get_order(swiotlb_bk_nslabs * sizeof(phys_addr_t)));
+	swiotlb_bk_orig_addr = NULL;
 cleanup3:
-	free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
+	free_pages((unsigned long)swiotlb_bk_list, get_order(swiotlb_bk_nslabs *
 	                                                 sizeof(int)));
-	io_tlb_list = NULL;
+	swiotlb_bk_list = NULL;
 cleanup2:
-	io_tlb_end = NULL;
-	free_pages((unsigned long)io_tlb_start, order);
-	io_tlb_start = NULL;
+	swiotlb_bk_end = NULL;
+	free_pages((unsigned long)swiotlb_bk_start, order);
+	swiotlb_bk_start = NULL;
 cleanup1:
-	io_tlb_nslabs = req_nslabs;
+	swiotlb_bk_nslabs = req_nslabs;
 	return -ENOMEM;
 }
 
 void __init swiotlb_free(void)
 {
-	if (!io_tlb_overflow_buffer)
+	if (!swiotlb_bk_overflow_buffer)
 		return;
 
 	if (late_alloc) {
-		free_pages((unsigned long)io_tlb_overflow_buffer,
-			   get_order(io_tlb_overflow));
-		free_pages((unsigned long)io_tlb_orig_addr,
-			   get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
-		free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs *
-								 sizeof(int)));
-		free_pages((unsigned long)io_tlb_start,
-			   get_order(io_tlb_nslabs << IO_TLB_SHIFT));
+		free_pages((unsigned long)swiotlb_bk_overflow_buffer,
+			   get_order(swiotlb_bk_overflow));
+		free_pages((unsigned long)swiotlb_bk_orig_addr,
+			   get_order(swiotlb_bk_nslabs * sizeof(phys_addr_t)));
+		free_pages((unsigned long)swiotlb_bk_list,
+			   get_order(swiotlb_bk_nslabs * sizeof(int)));
+		free_pages((unsigned long)swiotlb_bk_start,
+			   get_order(swiotlb_bk_nslabs << IO_TLB_SHIFT));
 	} else {
-		free_bootmem_late(__pa(io_tlb_overflow_buffer),
-				  io_tlb_overflow);
-		free_bootmem_late(__pa(io_tlb_orig_addr),
-				  io_tlb_nslabs * sizeof(phys_addr_t));
-		free_bootmem_late(__pa(io_tlb_list),
-				  io_tlb_nslabs * sizeof(int));
-		free_bootmem_late(__pa(io_tlb_start),
-				  io_tlb_nslabs << IO_TLB_SHIFT);
+		free_bootmem_late(__pa(swiotlb_bk_overflow_buffer),
+				  swiotlb_bk_overflow);
+		free_bootmem_late(__pa(swiotlb_bk_orig_addr),
+				  swiotlb_bk_nslabs * sizeof(phys_addr_t));
+		free_bootmem_late(__pa(swiotlb_bk_list),
+				  swiotlb_bk_nslabs * sizeof(int));
+		free_bootmem_late(__pa(swiotlb_bk_start),
+				  swiotlb_bk_nslabs << IO_TLB_SHIFT);
 	}
 }
 
 static int is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= virt_to_phys(io_tlb_start) &&
-		paddr < virt_to_phys(io_tlb_end);
+	return paddr >= virt_to_phys(swiotlb_bk_start) &&
+		paddr < virt_to_phys(swiotlb_bk_end);
 }
 
 /*
@@ -402,9 +404,9 @@ swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
 	 * Find suitable number of IO TLB entries size that will fit this
 	 * request and allocate a buffer from that IO TLB pool.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	index = ALIGN(io_tlb_index, stride);
-	if (index >= io_tlb_nslabs)
+	spin_lock_irqsave(&swiotlb_bk_lock, flags);
+	index = ALIGN(swiotlb_bk_index, stride);
+	if (index >= swiotlb_bk_nslabs)
 		index = 0;
 	wrap = index;
 
@@ -412,7 +414,7 @@ swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
 		while (iommu_is_span_boundary(index, nslots, offset_slots,
 					      max_slots)) {
 			index += stride;
-			if (index >= io_tlb_nslabs)
+			if (index >= swiotlb_bk_nslabs)
 				index = 0;
 			if (index == wrap)
 				goto not_found;
@@ -423,34 +425,35 @@ swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
 		 * contiguous buffers, we allocate the buffers from that slot
 		 * and mark the entries as '0' indicating unavailable.
 		 */
-		if (io_tlb_list[index] >= nslots) {
+		if (swiotlb_bk_list[index] >= nslots) {
 			int count = 0;
 
 			for (i = index; i < (int) (index + nslots); i++)
-				io_tlb_list[i] = 0;
-			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE - 1) && io_tlb_list[i]; i--)
-				io_tlb_list[i] = ++count;
-			dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+				swiotlb_bk_list[i] = 0;
+			for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE)
+			     != IO_TLB_SEGSIZE - 1) && swiotlb_bk_list[i]; i--)
+				swiotlb_bk_list[i] = ++count;
+			dma_addr = swiotlb_bk_start + (index << IO_TLB_SHIFT);
 
 			/*
 			 * Update the indices to avoid searching in the next
 			 * round.
 			 */
-			io_tlb_index = ((index + nslots) < io_tlb_nslabs
+			swiotlb_bk_index = ((index + nslots) < swiotlb_bk_nslabs
 					? (index + nslots) : 0);
 
 			goto found;
 		}
 		index += stride;
-		if (index >= io_tlb_nslabs)
+		if (index >= swiotlb_bk_nslabs)
 			index = 0;
 	} while (index != wrap);
 
 not_found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb_bk_lock, flags);
 	return NULL;
 found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb_bk_lock, flags);
 
 	/*
 	 * Save away the mapping from the original address to the DMA address.
@@ -458,7 +461,7 @@ found:
 	 * needed.
 	 */
 	for (i = 0; i < nslots; i++)
-		io_tlb_orig_addr[index+i] = phys + (i << IO_TLB_SHIFT);
+		swiotlb_bk_orig_addr[index+i] = phys + (i << IO_TLB_SHIFT);
 	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
 		swiotlb_bounce(phys, dma_addr, size, DMA_TO_DEVICE);
 
@@ -474,8 +477,8 @@ swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
 {
 	unsigned long flags;
 	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t phys = io_tlb_orig_addr[index];
+	int index = (dma_addr - swiotlb_bk_start) >> IO_TLB_SHIFT;
+	phys_addr_t phys = swiotlb_bk_orig_addr[index];
 
 	/*
 	 * First, sync the memory before unmapping the entry
@@ -489,32 +492,33 @@ swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
 	 * While returning the entries to the free list, we merge the entries
 	 * with slots below and above the pool being returned.
 	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
+	spin_lock_irqsave(&swiotlb_bk_lock, flags);
 	{
 		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
+			 swiotlb_bk_list[index + nslots] : 0);
 		/*
 		 * Step 1: return the slots to the free list, merging the
 		 * slots with superceeding slots
 		 */
 		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
+			swiotlb_bk_list[i] = ++count;
 		/*
 		 * Step 2: merge the returned slots with the preceding slots,
 		 * if available (non zero)
 		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE)
+		     != IO_TLB_SEGSIZE - 1) && swiotlb_bk_list[i]; i--)
+			swiotlb_bk_list[i] = ++count;
 	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
+	spin_unlock_irqrestore(&swiotlb_bk_lock, flags);
 }
 
 static void
 swiotlb_bk_sync_single(struct device *hwdev, char *dma_addr, size_t size,
 	    int dir, int target)
 {
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	phys_addr_t phys = io_tlb_orig_addr[index];
+	int index = (dma_addr - swiotlb_bk_start) >> IO_TLB_SHIFT;
+	phys_addr_t phys = swiotlb_bk_orig_addr[index];
 
 	phys += ((unsigned long)dma_addr & ((1 << IO_TLB_SHIFT) - 1));
 
@@ -563,7 +567,7 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 		 * GFP_DMA memory; fall back on swiotlb_bk_map_single(), which
 		 * will grab memory from the lowest available address range.
 		 */
-		start_dma_addr = swiotlb_virt_to_bus(hwdev, io_tlb_start);
+		start_dma_addr = swiotlb_virt_to_bus(hwdev, swiotlb_bk_start);
 		ret = swiotlb_bk_map_single(hwdev, 0, start_dma_addr, size,
 					    DMA_FROM_DEVICE);
 		if (!ret)
@@ -616,7 +620,7 @@ swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
 	printk(KERN_ERR "DMA: Out of SW-IOMMU space for %zu bytes at "
 	       "device %s\n", size, dev ? dev_name(dev) : "?");
 
-	if (size <= io_tlb_overflow || !do_panic)
+	if (size <= swiotlb_bk_overflow || !do_panic)
 		return;
 
 	if (dir == DMA_BIDIRECTIONAL)
@@ -656,11 +660,11 @@ dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
 	/*
 	 * Oh well, have to allocate and map a bounce buffer.
 	 */
-	start_dma_addr = swiotlb_virt_to_bus(dev, io_tlb_start);
+	start_dma_addr = swiotlb_virt_to_bus(dev, swiotlb_bk_start);
 	map = swiotlb_bk_map_single(dev, phys, start_dma_addr, size, dir);
 	if (!map) {
 		swiotlb_full(dev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
+		map = swiotlb_bk_overflow_buffer;
 	}
 
 	dev_addr = swiotlb_virt_to_bus(dev, map);
@@ -818,7 +822,7 @@ swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl, int nelems,
 
 	BUG_ON(dir == DMA_NONE);
 
-	start_dma_addr = swiotlb_virt_to_bus(hwdev, io_tlb_start);
+	start_dma_addr = swiotlb_virt_to_bus(hwdev, swiotlb_bk_start);
 	for_each_sg(sgl, sg, nelems, i) {
 		phys_addr_t paddr = sg_phys(sg);
 		dma_addr_t dev_addr = phys_to_dma(hwdev, paddr);
@@ -919,7 +923,8 @@ EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 int
 swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t dma_addr)
 {
-	return (dma_addr == swiotlb_virt_to_bus(hwdev, io_tlb_overflow_buffer));
+	return (dma_addr == swiotlb_virt_to_bus(hwdev,
+						swiotlb_bk_overflow_buffer));
 }
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
 
@@ -932,6 +937,6 @@ EXPORT_SYMBOL(swiotlb_dma_mapping_error);
 int
 swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return swiotlb_virt_to_bus(hwdev, io_tlb_end - 1) <= mask;
+	return swiotlb_virt_to_bus(hwdev, swiotlb_bk_end - 1) <= mask;
 }
 EXPORT_SYMBOL(swiotlb_dma_supported);
-- 
1.6.2.5


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible in the header file.
  2010-03-19 15:04     ` [PATCH 3/5] swiotlb: Make all bookkeeping functions and variables have same prefix Konrad Rzeszutek Wilk
@ 2010-03-19 15:04       ` Konrad Rzeszutek Wilk
  2010-03-19 15:04         ` [PATCH 5/5] swiotlb: EXPORT_SYMBOL_GPL functions + variables that are defined " Konrad Rzeszutek Wilk
  2010-04-05  2:13         ` [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible " FUJITA Tomonori
  0 siblings, 2 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-19 15:04 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, jeremy, Ian.Campbell, dwmw2, alex.williamson,
	Konrad Rzeszutek Wilk

We put the init, free, and the functions dealing with operations on the
SWIOTLB buffer at the top of the header. We also export some of the
variables that are used by the dma_ops functions.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 include/linux/swiotlb.h |   33 +++++++++++++++++++++++++++++++++
 lib/swiotlb.c           |   28 ++++++++++------------------
 2 files changed, 43 insertions(+), 18 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index febedcf..8550d6b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -24,6 +24,39 @@ extern int swiotlb_force;
 
 extern void swiotlb_init(int verbose);
 
+/* Internal book-keeping functions. Must be linked against the library
+ * to take advantage of them.*/
+#ifdef CONFIG_SWIOTLB
+/*
+ * Enumeration for sync targets
+ */
+enum dma_sync_target {
+	SYNC_FOR_CPU = 0,
+	SYNC_FOR_DEVICE = 1,
+};
+extern char *swiotlb_bk_start;
+extern char *swiotlb_bk_end;
+extern unsigned long swiotlb_bk_nslabs;
+extern void *swiotlb_bk_overflow_buffer;
+extern unsigned long swiotlb_bk_overflow;
+extern int is_swiotlb_buffer(phys_addr_t paddr);
+extern void *swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
+			    unsigned long start_dma_addr, size_t size, int dir);
+
+extern void swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
+			     int dir);
+
+extern void swiotlb_bk_sync_single(struct device *hwdev, char *dma_addr, size_t size,
+			   int dir, int target);
+
+/* Accessory functions. */
+extern void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
+			   enum dma_data_direction dir);
+extern void swiotlb_full(struct device *dev, size_t size, int dir, int do_panic);
+
+#endif
+
+/* swiotlb.c: dma_ops functions. */
 extern void
 *swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			dma_addr_t *dma_handle, gfp_t flags);
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 3926c14..b3eef1c 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -49,14 +49,6 @@
  */
 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
 
-/*
- * Enumeration for sync targets
- */
-enum dma_sync_target {
-	SYNC_FOR_CPU = 0,
-	SYNC_FOR_DEVICE = 1,
-};
-
 int swiotlb_force;
 
 /*
@@ -64,18 +56,18 @@ int swiotlb_force;
  * swiotlb_bk_sync_single_*, to see if the memory was in fact allocated by this
  * API.
  */
-static char *swiotlb_bk_start, *swiotlb_bk_end;
+char *swiotlb_bk_start, *swiotlb_bk_end;
 
 /*
 * The number of IO TLB blocks (in groups of 64) between swiotlb_bk_start and
  * swiotlb_bk_end.  This is command line adjustable via setup_io_tlb_npages.
  */
-static unsigned long swiotlb_bk_nslabs;
+unsigned long swiotlb_bk_nslabs;
 
 /*
  * When the IOMMU overflows we return a fallback buffer. This sets the size.
  */
-static unsigned long swiotlb_bk_overflow = 32*1024;
+unsigned long swiotlb_bk_overflow = 32*1024;
 
 void *swiotlb_bk_overflow_buffer;
 
@@ -315,7 +307,7 @@ void __init swiotlb_free(void)
 	}
 }
 
-static int is_swiotlb_buffer(phys_addr_t paddr)
+int is_swiotlb_buffer(phys_addr_t paddr)
 {
 	return paddr >= virt_to_phys(swiotlb_bk_start) &&
 		paddr < virt_to_phys(swiotlb_bk_end);
@@ -324,7 +316,7 @@ static int is_swiotlb_buffer(phys_addr_t paddr)
 /*
  * Bounce: copy the swiotlb buffer back to the original dma location
  */
-static void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
+void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
 			   enum dma_data_direction dir)
 {
 	unsigned long pfn = PFN_DOWN(phys);
@@ -365,7 +357,7 @@ static void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
 /*
  * Allocates bounce buffer and returns its kernel virtual address.
  */
-static void *
+void *
 swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
 		      unsigned long start_dma_addr, size_t size, int dir)
 {
@@ -471,7 +463,7 @@ found:
 /*
  * dma_addr is the kernel virtual address of the bounce buffer to unmap.
  */
-static void
+void
 swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
 			int dir)
 {
@@ -513,9 +505,9 @@ swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
 	spin_unlock_irqrestore(&swiotlb_bk_lock, flags);
 }
 
-static void
+void
 swiotlb_bk_sync_single(struct device *hwdev, char *dma_addr, size_t size,
-	    int dir, int target)
+		       int dir, int target)
 {
 	int index = (dma_addr - swiotlb_bk_start) >> IO_TLB_SHIFT;
 	phys_addr_t phys = swiotlb_bk_orig_addr[index];
@@ -607,7 +599,7 @@ swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 }
 EXPORT_SYMBOL(swiotlb_free_coherent);
 
-static void
+void
 swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
 {
 	/*
-- 
1.6.2.5



* [PATCH 5/5] swiotlb: EXPORT_SYMBOL_GPL functions + variables that are defined in the header file.
  2010-03-19 15:04       ` [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible in the header file Konrad Rzeszutek Wilk
@ 2010-03-19 15:04         ` Konrad Rzeszutek Wilk
  2010-04-05  2:13         ` [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible " FUJITA Tomonori
  1 sibling, 0 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-19 15:04 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, jeremy, Ian.Campbell, dwmw2, alex.williamson,
	Konrad Rzeszutek Wilk

Make the functions and variables that are now declared in the swiotlb.h
header file visible to the linker.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 lib/swiotlb.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index b3eef1c..c6bfa5d 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -57,19 +57,24 @@ int swiotlb_force;
  * API.
  */
 char *swiotlb_bk_start, *swiotlb_bk_end;
+EXPORT_SYMBOL_GPL(swiotlb_bk_start);
+EXPORT_SYMBOL_GPL(swiotlb_bk_end);
 
 /*
 * The number of IO TLB blocks (in groups of 64) between swiotlb_bk_start and
  * swiotlb_bk_end.  This is command line adjustable via setup_io_tlb_npages.
  */
 unsigned long swiotlb_bk_nslabs;
+EXPORT_SYMBOL_GPL(swiotlb_bk_nslabs);
 
 /*
  * When the IOMMU overflows we return a fallback buffer. This sets the size.
  */
 unsigned long swiotlb_bk_overflow = 32*1024;
+EXPORT_SYMBOL_GPL(swiotlb_bk_overflow);
 
 void *swiotlb_bk_overflow_buffer;
+EXPORT_SYMBOL_GPL(swiotlb_bk_overflow_buffer);
 
 /*
  * This is a free list describing the number of free entries available from
@@ -312,6 +317,7 @@ int is_swiotlb_buffer(phys_addr_t paddr)
 	return paddr >= virt_to_phys(swiotlb_bk_start) &&
 		paddr < virt_to_phys(swiotlb_bk_end);
 }
+EXPORT_SYMBOL_GPL(is_swiotlb_buffer);
 
 /*
  * Bounce: copy the swiotlb buffer back to the original dma location
@@ -353,6 +359,7 @@ void swiotlb_bounce(phys_addr_t phys, char *dma_addr, size_t size,
 			memcpy(phys_to_virt(phys), dma_addr, size);
 	}
 }
+EXPORT_SYMBOL_GPL(swiotlb_bounce);
 
 /*
  * Allocates bounce buffer and returns its kernel virtual address.
@@ -459,6 +466,7 @@ found:
 
 	return dma_addr;
 }
+EXPORT_SYMBOL_GPL(swiotlb_bk_map_single);
 
 /*
  * dma_addr is the kernel virtual address of the bounce buffer to unmap.
@@ -504,6 +512,7 @@ swiotlb_bk_unmap_single(struct device *hwdev, char *dma_addr, size_t size,
 	}
 	spin_unlock_irqrestore(&swiotlb_bk_lock, flags);
 }
+EXPORT_SYMBOL_GPL(swiotlb_bk_unmap_single);
 
 void
 swiotlb_bk_sync_single(struct device *hwdev, char *dma_addr, size_t size,
@@ -531,6 +540,7 @@ swiotlb_bk_sync_single(struct device *hwdev, char *dma_addr, size_t size,
 		BUG();
 	}
 }
+EXPORT_SYMBOL_GPL(swiotlb_bk_sync_single);
 
 void *
 swiotlb_alloc_coherent(struct device *hwdev, size_t size,
@@ -622,6 +632,7 @@ swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
 	if (dir == DMA_TO_DEVICE)
 		panic("DMA: Random memory could be DMA read\n");
 }
+EXPORT_SYMBOL_GPL(swiotlb_full);
 
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode.  The
-- 
1.6.2.5



* Re: [PATCH] swiotlb v0.6: seperation of physical/virtual address translation
  2010-03-19 15:04 [PATCH] swiotlb v0.6: seperation of physical/virtual address translation Konrad Rzeszutek Wilk
  2010-03-19 15:04 ` [PATCH 1/5] swiotlb: Make internal bookkeeping functions have 'swiotlb_bk' prefix Konrad Rzeszutek Wilk
@ 2010-03-25 13:56 ` Konrad Rzeszutek Wilk
  2010-03-25 23:01   ` Albert Herranz
  2010-04-05  2:12 ` FUJITA Tomonori
  2 siblings, 1 reply; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-03-25 13:56 UTC (permalink / raw)
  To: fujita.tomonori, linux-kernel, iommu, albert_herranz
  Cc: chrisw, Ian.Campbell, jeremy, dwmw2, alex.williamson

On Fri, Mar 19, 2010 at 11:04:17AM -0400, Konrad Rzeszutek Wilk wrote:
> Fujita-san et al.
> 
> Attached is a set of patches that separate the address translation
> (virt_to_phys, virt_to_bus, etc) from the SWIOTLB library.
> 
> Since the last posting I've:
>  - Made the exported functions/variables have the 'swiotlb_bk' prefix instead
>    of the 'do_[map|unmap]*' and 'io_tlb_*' combination.
>  - dropped the checkpatches/other reworks patches.
   - and testing, which warrants:
Tested-by:  Sander Eikelenboom <linux@eikelenboom.it>

To my happy surprise, I've found that Mr. Sander Eikelenboom and Mr. Albert Herranz
had been using these patches.

I asked Mr. Sander whether he would mind chiming in, and he said he
would gladly add 'Tested-by:  Sander Eikelenboom <linux@eikelenboom.it>'
to the patches. I haven't asked Mr. Albert, since he is busy getting his
set of patches for the Wii controller ready.

Mr. Sander's long summary (a bit of explanation here: these five patches form the
basis of a branch that carries the Xen PCI frontend driver allowing PCI
passthrough, so his testing encompassed these five and many more):

"I have placed the USB controller in another system now.

So it's tested with:

Intel system, usb 3.0 xhci PCIe host controller:
- Xen-4.0.0rc6, dom0 xen-next, domU your 2.6.33 tree i mentioned
- Baremetal on this system with the  2.6.33 from your tree

AMD system (running now, no IOMMU in this system): passthrough of a USB 2.0
PCI host controller, a USB 2.0 PCIe host controller, and a USB 3.0
PCIe host controller, with one USB video grabber per USB controller.
- Xen-4.0.0rc6, dom0 2.6.31.12 pvops kernel from jeremy's tree, domU
  your 2.6.33 tree i mentioned


All (with Xen and bare metal) have been tested by grabbing raw or MPEG-2
video streams from v4l USB capture devices.

So a 'Tested-by' seems to be justified, I would say"

The git branch in question is pv/merge.2.6.33 from GIT tree:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git



* Re: [PATCH] swiotlb v0.6: seperation of physical/virtual address translation
  2010-03-25 13:56 ` [PATCH] swiotlb v0.6: seperation of physical/virtual address translation Konrad Rzeszutek Wilk
@ 2010-03-25 23:01   ` Albert Herranz
  0 siblings, 0 replies; 12+ messages in thread
From: Albert Herranz @ 2010-03-25 23:01 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: fujita.tomonori, linux-kernel, iommu, chrisw, Ian.Campbell,
	jeremy, dwmw2, alex.williamson, stern

Konrad Rzeszutek Wilk wrote:
> To my happy surprise, I've found that Mr. Sander Eikelenboom and Mr. Albert Herranz
> had been using these patches.
> 
> I've asked whether Mr. Sander wouldn't mind chiming in and he said he
> would gladly add 'Tested-by:  Sander Eikelenboom <linux@eikelenboom.it>'
> to the patches. I haven't asked Mr. Albert since he is busy making his
> set of patches for the Wii controller ready.
> 

Actually, I'm now waiting for some USB core changes to settle down before sending a patch series again.
Alan Stern and I were touching the same USB code base, so it makes sense to coordinate our efforts.

> Mr. Sander's long summary (a bit of explanation here: these five patches form the
> basis of a branch that has Xen PCI frontend driver allowing PCI
> passthrough, so his testing encompassed these five and many more):
> 
> "If have placed the usb controller in another system now.
> 
> So it's tested with:
> 
> Intel system, usb 3.0 xhci PCIe host controller:
> - Xen-4.0.0rc6, dom0 xen-next, domU your 2.6.33 tree i mentioned
> - Baremetal on this system with the  2.6.33 from your tree
> 
> AMD system (running now, no iommu in this system): passthrough of USB2.0
> PCI host controller, USB2.0 PCIe hostcontroller, USB 3.0
> +PCIe host controller, 1 usb videograbber per usb controller.
> - Xen-4.0.0rc6, dom0 2.6.31.12 pvops kernel from jeremy's tree, domU
>   your 2.6.33 tree i mentioned
> 
> 
> All(with xen and baremetal) have been tested by grabbing raw or mpeg2
> video streams to v4l usb capture devices.
> 
> So a 'Tested-by' seems to be justified i would say"
> 
> The git branch in question is pv/merge.2.6.33 from GIT tree:
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> 

What I've tested (successfully) so far are these patches from your master branch in the tree:
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb-2.6.git

 swiotlb: EXPORT_SYMBOL_GPL functions + variables that are defined in the header file.
 swiotlb: Make swiotlb bookkeeping functions visible in the header file.
 swiotlb: Make all bookkeeping functions and variables have same prefix.
 swiotlb: swiotlb_bk_map_single: abstract out swiotlb_virt_to_bus calls out.
 swiotlb: Make internal bookkeeping functions have 'swiotlb_bk' prefix.

And these two add-ons (which I needed for supporting swiotlb on the Wii):

 swiotlb: make swiotlb_bounce() __weak
 swiotbl: add back swiotlb_alloc_boot()

The swiotlb bk code has been tested as part of the "MEM2" DMA ops code used to support the EHCI controller of the Wii.
That support code includes (as of last patch series) the following patches too:

 wii: hollywood ehci controller support
 wii: enable swiotlb
 wii: add mem2 dma mapping ops
 wii: have generic dma coherent
 USB: add HCD_NO_COHERENT_MEM host controller driver flag
 USB: refactor unmap_urb_for_dma/map_urb_for_dma
 powerpc: add min_direct_dma_addr
 powerpc: add per-device dma coherent support

The last iteration of the series (v5) is available at:
http://marc.info/?l=linux-usb&m=126902357306668

Concerns raised so far for v5 will be addressed in v6.

Thanks,
Albert



* Re: [PATCH] swiotlb v0.6: seperation of physical/virtual address translation
  2010-03-19 15:04 [PATCH] swiotlb v0.6: seperation of physical/virtual address translation Konrad Rzeszutek Wilk
  2010-03-19 15:04 ` [PATCH 1/5] swiotlb: Make internal bookkeeping functions have 'swiotlb_bk' prefix Konrad Rzeszutek Wilk
  2010-03-25 13:56 ` [PATCH] swiotlb v0.6: seperation of physical/virtual address translation Konrad Rzeszutek Wilk
@ 2010-04-05  2:12 ` FUJITA Tomonori
  2010-04-07 19:28   ` [LKML] " Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 12+ messages in thread
From: FUJITA Tomonori @ 2010-04-05  2:12 UTC (permalink / raw)
  To: konrad.wilk
  Cc: fujita.tomonori, linux-kernel, iommu, albert_herranz, chrisw,
	Ian.Campbell, jeremy, dwmw2, alex.williamson

On Fri, 19 Mar 2010 11:04:17 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> Fujita-san et al.
> 
> Attached is a set of patches that separate the address translation
> (virt_to_phys, virt_to_bus, etc) from the SWIOTLB library.
> 
> Since the last posting I've:
>  - Made the exported functions/variables have the 'swiotlb_bk' prefix instead
>    of the 'do_[map|unmap]*' and 'io_tlb_*' combination.

Why can't we use simpler names such as 'swiotlb_tbl_index'?

Why do we need to add the prefix to static things like
'swiotlb_bk_list', 'swiotlb_bk_index', etc.? Please leave them alone.


>  - dropped the checkpatches/other reworks patches.
> 
> I had not addressed the question of removing the 'overflow' buffer. There are over
> ~300 instances of the the DMA operations not being checked which plan on addressing
> in a seperate set of patches that will slowly roll out the checks and then
> finally the removal of the 'overflow' buffer.

Except for swiotlb, no IOMMU implementation has an overflow-buffer
mechanism, so drivers that don't check for a DMA mapping error are
broken anyway. Also, the size of the overflow buffer is 32K by default,
and we often see larger requests than that. Even with the overflow
mechanism, we see data corruption anyway.


* Re: [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible in the header file.
  2010-03-19 15:04       ` [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible in the header file Konrad Rzeszutek Wilk
  2010-03-19 15:04         ` [PATCH 5/5] swiotlb: EXPORT_SYMBOL_GPL functions + variables that are defined " Konrad Rzeszutek Wilk
@ 2010-04-05  2:13         ` FUJITA Tomonori
  2010-04-07 19:22           ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 12+ messages in thread
From: FUJITA Tomonori @ 2010-04-05  2:13 UTC (permalink / raw)
  To: konrad.wilk
  Cc: fujita.tomonori, linux-kernel, iommu, albert_herranz,
	Ian.Campbell, jeremy, chrisw, dwmw2, alex.williamson

On Fri, 19 Mar 2010 11:04:21 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> We put the init, free, and functions dealing with the operations on the
> SWIOTLB buffer at the top of the header. Also we export some of the variables
> that are used by the dma_ops functions.
> 
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  include/linux/swiotlb.h |   33 +++++++++++++++++++++++++++++++++
>  lib/swiotlb.c           |   28 ++++++++++------------------
>  2 files changed, 43 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index febedcf..8550d6b 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -24,6 +24,39 @@ extern int swiotlb_force;
>  
>  extern void swiotlb_init(int verbose);
>  
> +/* Internal book-keeping functions. Must be linked against the library
> + * to take advantage of them.*/
> +#ifdef CONFIG_SWIOTLB
> +/*
> + * Enumeration for sync targets
> + */
> +enum dma_sync_target {
> +	SYNC_FOR_CPU = 0,
> +	SYNC_FOR_DEVICE = 1,
> +};
> +extern char *swiotlb_bk_start;
> +extern char *swiotlb_bk_end;
> +extern unsigned long swiotlb_bk_nslabs;

exporting swiotlb_bk_start and swiotlb_bk_nslabs aren't enough?


> +extern void *swiotlb_bk_overflow_buffer;
> +extern unsigned long swiotlb_bk_overflow;
> +extern int is_swiotlb_buffer(phys_addr_t paddr);
> +extern void *swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
> +			    unsigned long start_dma_addr, size_t size, int dir);

enum dma_data_direction is better for 'dir'.


* Re: [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible in the header file.
  2010-04-05  2:13         ` [PATCH 4/5] swiotlb: Make swiotlb bookkeeping functions visible " FUJITA Tomonori
@ 2010-04-07 19:22           ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-04-07 19:22 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: linux-kernel, iommu, albert_herranz, Ian.Campbell, jeremy,
	chrisw, dwmw2, alex.williamson

> > +extern char *swiotlb_bk_start;
> > +extern char *swiotlb_bk_end;
> > +extern unsigned long swiotlb_bk_nslabs;
> 
> exporting swiotlb_bk_start and swiotlb_bk_nslabs aren't enough?

It is. 
> 
> 
> > +extern void *swiotlb_bk_overflow_buffer;
> > +extern unsigned long swiotlb_bk_overflow;

> > +extern int is_swiotlb_buffer(phys_addr_t paddr);
> > +extern void *swiotlb_bk_map_single(struct device *hwdev, phys_addr_t phys,
> > +			    unsigned long start_dma_addr, size_t size, int dir);
> 
> enum dma_data_direction is better for 'dir'.

Done. I had to make a bigger change that also converts other functions'
usage of 'int dir' to 'enum dma_data_direction'; otherwise we had
compiler warnings.



* Re: [LKML] Re: [PATCH] swiotlb v0.6: seperation of physical/virtual address translation
  2010-04-05  2:12 ` FUJITA Tomonori
@ 2010-04-07 19:28   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-04-07 19:28 UTC (permalink / raw)
  To: FUJITA Tomonori
  Cc: linux-kernel, iommu, albert_herranz, chrisw, Ian.Campbell,
	jeremy, dwmw2, alex.williamson

> > Since the last posting I've:
> >  - Made the exported functions/variables have the 'swiotlb_bk' prefix instead
> >    of the 'do_[map|unmap]*' and 'io_tlb_*' combination.
> 
> Why can't we use more simpler names such as 'swiotlb_tbl_index'?

Much better. I was trying to come up with a name, and the one I came up
with was 'bookkeeping', which I shortened to 'bk'. But 'tbl' sounds
better.

> Why do we need to add the prefix to static things like
> 'swiotlb_bk_list', 'swiotlb_bk_index', etc? Please let them alone.

Yup. Removed.
> > I had not addressed the question of removing the 'overflow' buffer. There are over
> > ~300 instances of the the DMA operations not being checked which plan on addressing
> > in a seperate set of patches that will slowly roll out the checks and then
> > finally the removal of the 'overflow' buffer.
> 
> Except for swiotlb, no IOMMU implementations has the mechanism of

I believe the GART one does it too. I think the overflow buffer points to the
first page of the GART address space and has logic to remind the user that
danger is imminent.

> overflow buffer. So drivers that don't check a DMA mapping error are
> broken anyway. Also the size of the overflow is 32K by default. We
> often see larger request than that. Even with the overflow mechanism,
> we see data corruption anyway.

