* [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
@ 2005-08-29 20:09 John W. Linville
  2005-08-29 20:54 ` Andi Kleen
  0 siblings, 1 reply; 99+ messages in thread
From: John W. Linville @ 2005-08-29 20:09 UTC (permalink / raw)
  To: linux-kernel; +Cc: ak, discuss

Implement dma_sync_single_range_for_{cpu,device}, based on current
implementations of dma_sync_single_for_{cpu,device}.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
It is hard to use this API if common platforms do not implement it. :-)
Hopefully I did not miss something obvious?

This is a naive implementation, so flame away...
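
For reference, the sort of usage I have in mind looks roughly like this
(a hypothetical sketch -- dev, buf, BUF_SIZE, DESC_OFFSET, and DESC_SIZE
are made-up names, not from any in-tree driver):

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, BUF_SIZE, DMA_FROM_DEVICE);

	/* ... device DMAs into the buffer ... */

	/* hand just the interesting sub-range back to the CPU */
	dma_sync_single_range_for_cpu(dev, handle, DESC_OFFSET, DESC_SIZE,
				      DMA_FROM_DEVICE);

	/* ... CPU examines that sub-range ... */

	/* give ownership of the sub-range back to the device */
	dma_sync_single_range_for_device(dev, handle, DESC_OFFSET, DESC_SIZE,
					 DMA_FROM_DEVICE);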

 include/asm-x86_64/dma-mapping.h |   28 ++++++++++++++++++++++++++++
 1 files changed, 28 insertions(+)

diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h
--- a/include/asm-x86_64/dma-mapping.h
+++ b/include/asm-x86_64/dma-mapping.h
@@ -85,6 +85,34 @@ static inline void dma_sync_single_for_d
 	flush_write_buffers();
 }
 
+static inline void dma_sync_single_range_for_cpu(struct device *hwdev,
+						 dma_addr_t dma_handle,
+						 unsigned long offset,
+						 size_t size, int direction)
+{
+	if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_for_cpu(hwdev,dma_handle+offset,size,direction);
+
+	flush_write_buffers();
+}
+
+static inline void dma_sync_single_range_for_device(struct device *hwdev,
+						    dma_addr_t dma_handle,
+						    unsigned long offset,
+						    size_t size, int direction)
+{
+        if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_for_device(hwdev,dma_handle+offset,size,direction);
+
+	flush_write_buffers();
+}
+
 static inline void dma_sync_sg_for_cpu(struct device *hwdev,
 				       struct scatterlist *sg,
 				       int nelems, int direction)
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-29 20:09 [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
@ 2005-08-29 20:54 ` Andi Kleen
  2005-08-29 21:48   ` John W. Linville
  0 siblings, 1 reply; 99+ messages in thread
From: Andi Kleen @ 2005-08-29 20:54 UTC (permalink / raw)
  To: John W. Linville; +Cc: linux-kernel, discuss

On Monday 29 August 2005 22:09, John W. Linville wrote:
> Implement dma_sync_single_range_for_{cpu,device}, based on current
> implementations of dma_sync_single_for_{cpu,device}.

Hmm, who or what needs that? It doesn't seem to be documented
in Documentation/DMA* and I also don't remember seeing any 
discussion of it.

If it's commonly used it might be better to add new swiotlb_* 
functions that only copy the requested range.

-Andi



^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-29 20:54 ` Andi Kleen
@ 2005-08-29 21:48   ` John W. Linville
  2005-08-30  1:14     ` [discuss] " Andi Kleen
  0 siblings, 1 reply; 99+ messages in thread
From: John W. Linville @ 2005-08-29 21:48 UTC (permalink / raw)
  To: Andi Kleen; +Cc: linux-kernel, discuss

On Mon, Aug 29, 2005 at 10:54:53PM +0200, Andi Kleen wrote:
> On Monday 29 August 2005 22:09, John W. Linville wrote:
> > Implement dma_sync_single_range_for_{cpu,device}, based on current
> > implementations of dma_sync_single_for_{cpu,device}.
> 
> Hmm, who or what needs that? It doesn't seem to be documented
> in Documentation/DMA* and I also don't remember seeing any 
> discussion of it.

In Documentation/DMA-API.txt it is still referred to as
dma_sync_single_range.  I imagine the *_for_{cpu,device} stuff got added
at about the same time as it did for dma_sync_single, dma_sync_sg,
and the like.

These calls are implemented for basically all the other arches.
And, except for the noted *_for_{cpu,device} discrepancies, these are
documented in Documentation/DMA-API.txt.  It definitely seems to be
an unfortunate omission from include/asm-x86_64/dma-mapping.h.

As for who needs it, well, I suppose I do.  I want to use that API
in a patch I'm working on.  No one will want to merge my patch if it
will not compile on x86_64... :-(

> If it's commonly used it might be better to add new swiotlb_* 
> functions that only copy the requested range.

Perhaps...but I think that sounds more like a discussion of _how_ to
implement the API, rather than _whether_ it should be implemented.
Using some new variant of the swiotlb_* API might be appropriate
for the x86_64 implementation.  But, since this is a portable API,
I don't think calling the (apparently Intel-specific) swiotlb_*
functions would be an appropriate replacement.

I'd be happy to do the implementation differently (or to have
someone else do so).  Do you have specific suggestions for how to
do so?

Thanks,

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [discuss] Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-29 21:48   ` John W. Linville
@ 2005-08-30  1:14     ` Andi Kleen
  2005-08-30 17:54       ` John W. Linville
  0 siblings, 1 reply; 99+ messages in thread
From: Andi Kleen @ 2005-08-30  1:14 UTC (permalink / raw)
  To: discuss; +Cc: John W. Linville, linux-kernel

On Monday 29 August 2005 23:48, John W. Linville wrote:

> Perhaps...but I think that sounds more like a discussion of _how_ to
> implement the API, rather than _whether_ it should be implemented.
> Using some new variant of the swiotlb_* API might be appropriate
> for the x86_64 implementation.  But, since this is a portable API,
> I don't think calling the (apparently Intel-specific) swiotlb_*
> functions would be an appropriate replacement.

What I meant is that instead of the dumb implementation you did
it would be better to implement it in swiotlb_* too and copy 
only the requested byte range there and then call these new
functions from the x86-64 wrapper.

-Andi

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [discuss] Re: [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-30  1:14     ` [discuss] " Andi Kleen
@ 2005-08-30 17:54       ` John W. Linville
  2005-08-30 17:58           ` John W. Linville
  0 siblings, 1 reply; 99+ messages in thread
From: John W. Linville @ 2005-08-30 17:54 UTC (permalink / raw)
  To: Andi Kleen; +Cc: discuss, linux-kernel

On Tue, Aug 30, 2005 at 03:14:34AM +0200, Andi Kleen wrote:
> On Monday 29 August 2005 23:48, John W. Linville wrote:

> > I don't think calling the (apparently Intel-specific) swiotlb_*
> > functions would be an appropriate replacement.
> 
> What I meant is that instead of the dumb implementation you did
> it would be better to implement it in swiotlb_* too and copy 
> only the requested byte range there and then call these new
> functions from the x86-64 wrapper.

Thanks.  That is more helpful than the previous message.

I was leery of disturbing the swiotlb_* API needlessly, especially
since that involves ia64 as well.  But if you think that would be
better, then I'll work in that direction.
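
Something along these lines, I suppose (a rough, untested sketch of the
shape of it, not an actual patch):

/* new swiotlb helper that syncs only the requested sub-range */
void
swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
				  unsigned long offset, size_t size, int dir);

/* x86-64 wrapper passes the offset down instead of fudging the handle */
static inline void dma_sync_single_range_for_cpu(struct device *hwdev,
						 dma_addr_t dma_handle,
						 unsigned long offset,
						 size_t size, int direction)
{
	if (direction == DMA_NONE)
		out_of_line_bug();

	if (swiotlb)
		return swiotlb_sync_single_range_for_cpu(hwdev, dma_handle,
							 offset, size,
							 direction);

	flush_write_buffers();
}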

Patches to follow...

John

P.S.  BTW, "dumb" is a term that is both subjective and pejorative.
IMHO, it is "dumb" to use the word "dumb" in public discourse...
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device}
  2005-08-30 17:54       ` John W. Linville
@ 2005-08-30 17:58           ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-08-30 17:58 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andi Kleen, discuss, tony.luck, linux-ia64

Add swiotlb_sync_single_range_for_{cpu,device} implementations.  This is
used to support the implementation of dma_sync_single_range_for_{cpu,device}
on x86_64.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/lib/swiotlb.c      |   33 +++++++++++++++++++++++++++++++++
 include/asm-x86_64/swiotlb.h |    8 ++++++++
 2 files changed, 41 insertions(+)

diff --git a/arch/ia64/lib/swiotlb.c b/arch/ia64/lib/swiotlb.c
--- a/arch/ia64/lib/swiotlb.c
+++ b/arch/ia64/lib/swiotlb.c
@@ -522,6 +522,37 @@ swiotlb_sync_single_for_device(struct de
 }
 
 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -650,6 +681,8 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					    dma_addr_t dev_addr,
 					    size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				     struct scatterlist *sg, int nelems,
 				     int dir);
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-08-30 17:58           ` John W. Linville
  (?)
@ 2005-08-30 18:00           ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-08-30 18:00 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andi Kleen, discuss

Implement dma_sync_single_range_for_{cpu,device} on x86_64.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
Makes use of swiotlb_sync_single_range_for_{cpu,device} implemented
in the preceding patch.

 include/asm-x86_64/dma-mapping.h |   28 ++++++++++++++++++++++++++++
 1 files changed, 28 insertions(+)

diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h
--- a/include/asm-x86_64/dma-mapping.h
+++ b/include/asm-x86_64/dma-mapping.h
@@ -85,6 +85,34 @@ static inline void dma_sync_single_for_d
 	flush_write_buffers();
 }
 
+static inline void dma_sync_single_range_for_cpu(struct device *hwdev,
+						 dma_addr_t dma_handle,
+						 unsigned long offset,
+						 size_t size, int direction)
+{
+	if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_range_for_cpu(hwdev,dma_handle,offset,size,direction);
+
+	flush_write_buffers();
+}
+
+static inline void dma_sync_single_range_for_device(struct device *hwdev,
+						    dma_addr_t dma_handle,
+						    unsigned long offset,
+						    size_t size, int direction)
+{
+        if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_range_for_device(hwdev,dma_handle,offset,size,direction);
+
+	flush_write_buffers();
+}
+
 static inline void dma_sync_sg_for_cpu(struct device *hwdev,
 				       struct scatterlist *sg,
 				       int nelems, int direction)
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* RE: [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device}
  2005-08-30 17:58           ` John W. Linville
@ 2005-08-30 18:03 ` Luck, Tony
  -1 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-08-30 18:03 UTC (permalink / raw)
  To: John W. Linville, linux-kernel; +Cc: Andi Kleen, discuss, linux-ia64


>+swiotlb_sync_single_range_for_cpu(struct device *hwdev, 
>+swiotlb_sync_single_range_for_device(struct device *hwdev, 

Huh?  These look identical ... same args, same code, just a
different name.

-Tony

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device}
  2005-08-30 18:03 ` Luck, Tony
@ 2005-08-30 18:09   ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-08-30 18:09 UTC (permalink / raw)
  To: Luck, Tony
  Cc: linux-kernel, Andi Kleen, discuss, linux-ia64, Asit.K.Mallick,
	goutham.rao, davidm

On Tue, Aug 30, 2005 at 11:03:35AM -0700, Luck, Tony wrote:
> 
> >+swiotlb_sync_single_range_for_cpu(struct device *hwdev, 
> >+swiotlb_sync_single_range_for_device(struct device *hwdev, 
> 
> Huh?  These look identical ... same args, same code, just a
> different name.

Have you looked at the implementations for swiotlb_sync_single_for_cpu
and swiotlb_sync_single_for_device?  Those are already identical.

I'm just following the existing style/practice in that file.  I could
do an additional patch to rectify the replication in those functions
if you'd like?

Who is responsible for the swiotlb code?

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [rfc patch] swiotlb: consolidate swiotlb_sync_single_* implementations
  2005-08-30 18:09   ` John W. Linville
@ 2005-08-30 18:33     ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-08-30 18:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andi Kleen, discuss, tony.luck, linux-ia64, Asit.K.Mallick

On Tue, Aug 30, 2005 at 02:09:14PM -0400, John W. Linville wrote:
> On Tue, Aug 30, 2005 at 11:03:35AM -0700, Luck, Tony wrote:
> > 
> > >+swiotlb_sync_single_range_for_cpu(struct device *hwdev, 
> > >+swiotlb_sync_single_range_for_device(struct device *hwdev, 
> > 
> > Huh?  These look identical ... same args, same code, just a
> > different name.
> 
> Have you looked at the implementations for swiotlb_sync_single_for_cpu
> and swiotlb_sync_single_for_device?  Those are already identical.

How about a patch like this?  Just for comment...I'll repost if people
want it...

John

P.S.  This is meant to apply on top of my previous swiotlb patch...

--- linux-8_29_2005/arch/ia64/lib/swiotlb.c.orig	2005-08-30 14:19:32.000000000 -0400
+++ linux-8_29_2005/arch/ia64/lib/swiotlb.c	2005-08-30 14:23:18.000000000 -0400
@@ -493,11 +493,11 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
@@ -508,17 +508,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, 0, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single_range(hwdev, dev_addr, 0, size, dir);
 }
 
 /*
@@ -528,28 +528,14 @@ void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr) + offset;
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr) + offset;
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
 }
 
 /*
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [rfc patch] swiotlb: consolidate swiotlb_sync_sg_* implementations
  2005-08-30 18:33     ` John W. Linville
@ 2005-08-30 18:40       ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-08-30 18:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andi Kleen, discuss, tony.luck, linux-ia64, Asit.K.Mallick

On Tue, Aug 30, 2005 at 02:33:39PM -0400, John W. Linville wrote:
> On Tue, Aug 30, 2005 at 02:09:14PM -0400, John W. Linville wrote:
> > On Tue, Aug 30, 2005 at 11:03:35AM -0700, Luck, Tony wrote:
> > > 
> > > >+swiotlb_sync_single_range_for_cpu(struct device *hwdev, 
> > > >+swiotlb_sync_single_range_for_device(struct device *hwdev, 
> > > 
> > > Huh?  These look identical ... same args, same code, just a
> > > different name.
> > 
> > Have you looked at the implementations for swiotlb_sync_single_for_cpu
> > and swiotlb_sync_single_for_device?  Those are already identical.
> 
> How about a patch like this?  Just for comment...I'll repost if people
> want it...

Probably should include the swiotlb_sync_sg_* variations too...

Whaddya think?  Again, I'll repost if this is viewed favorably.

John

--- linux-8_29_2005/arch/ia64/lib/swiotlb.c.orig	2005-08-30 14:35:35.000000000 -0400
+++ linux-8_29_2005/arch/ia64/lib/swiotlb.c	2005-08-30 14:37:05.000000000 -0400
@@ -612,9 +612,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -628,18 +628,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-08-30 18:09   ` John W. Linville
@ 2005-09-12 14:48     ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

Conduct some maintenance of the swiotlb code:

	-- Move the code from arch/ia64/lib to lib

	-- Clean up some cruft (code duplication)

	-- Add support for syncing sub-ranges of mappings

	-- Add support for syncing DMA_BIDIRECTIONAL mappings

	-- Comment fixup & change record

Also, tack on an x86_64 implementation of dma_sync_single_range_for_cpu
and dma_sync_single_range_for_device.  This makes use of the new
swiotlb sync sub-range support.
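
For the DMA_BIDIRECTIONAL item, the idea (a hypothetical sketch only, not
lifted verbatim from the series) is to pass the sync target down into
sync_single(), which today BUG()s on anything other than DMA_TO_DEVICE or
DMA_FROM_DEVICE, so that bidirectional mappings copy in whichever direction
the upcoming access needs:

enum dma_sync_target { SYNC_FOR_CPU, SYNC_FOR_DEVICE };

static void
sync_single(struct device *hwdev, char *dma_addr, size_t size,
	    int dir, int target)
{
	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
	char *buffer = io_tlb_orig_addr[index];

	if (target == SYNC_FOR_CPU) {
		/* CPU is about to look at the buffer: copy device data out */
		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
			memcpy(buffer, dma_addr, size);
	} else {
		/* device is about to use the buffer: copy CPU data in */
		if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
			memcpy(dma_addr, buffer, size);
	}
}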

Patches to follow...

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 1/6] swiotlb: move from arch/ia64/lib to lib
  2005-09-12 14:48     ` John W. Linville
@ 2005-09-12 14:48       ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to lib and fixes up the appropriate Makefile and
Kconfig files. No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 lib/Makefile                |    2 
 lib/swiotlb.c               |  657 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)

--- linux-swiotlb-9_9_2005/lib/swiotlb.c.orig	2005-09-09 16:18:36.000000000 -0400
+++ linux-swiotlb-9_9_2005/lib/swiotlb.c	2005-09-09 16:18:27.000000000 -0400
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	                   ( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+ 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contigous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with superceeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+                    && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);
--- linux-swiotlb-9_9_2005/lib/Makefile.orig	2005-09-09 14:27:39.000000000 -0400
+++ linux-swiotlb-9_9_2005/lib/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -43,6 +43,8 @@ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
 obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
 obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
 
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
+
 hostprogs-y	:= gen_crc32table
 clean-files	:= crc32table.h
 
--- linux-swiotlb-9_9_2005/arch/x86_64/kernel/Makefile.orig	2005-09-09 14:27:34.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/x86_64/kernel/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)		+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o		:= $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
--- linux-swiotlb-9_9_2005/arch/ia64/Kconfig.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/Kconfig	2005-09-09 16:17:44.000000000 -0400
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+       bool
+       default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
--- linux-swiotlb-9_9_2005/arch/ia64/lib/swiotlb.c.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/lib/swiotlb.c	2005-09-09 16:18:46.000000000 -0400
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	                   ( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
-static int __init
-setup_io_tlb_npages(char *str)
-{
-	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
-		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-	if (*str == ',')
-		++str;
-	if (!strcmp(str, "force"))
-		swiotlb_force = 1;
-	return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
- */
-void
-swiotlb_init_with_default_size (size_t default_size)
-{
-	unsigned long i;
-
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-
-	/*
-	 * Get IO TLB memory from the low pages
-	 */
-	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
-					       (1 << IO_TLB_SHIFT));
-	if (!io_tlb_start)
-		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
-	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
-	/*
-	 * Get the overflow emergency buffer
-	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
-	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
-}
-
-void
-swiotlb_init (void)
-{
-	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
-}
-
-static inline int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
-	dma_addr_t mask = 0xffffffff;
-	/* If the device has a mask, use it, otherwise default to 32 bits */
-	if (hwdev && hwdev->dma_mask)
-		mask = *hwdev->dma_mask;
-	return (addr & ~mask) != 0;
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-map_single(struct device *hwdev, char *buffer, size_t size, int dir)
-{
-	unsigned long flags;
-	char *dma_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-
-	/*
-	 * For mappings greater than a page, we limit the stride (and
-	 * hence alignment) to a page size.
-	 */
-	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	if (size > PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
-
-	if (!nslots)
-		BUG();
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		wrap = index = ALIGN(io_tlb_index, stride);
-
-		if (index >= io_tlb_nslabs)
-			wrap = index = 0;
-
-		do {
-			/*
-			 * If we find a slot that indicates we have 'nslots'
-			 * number of contiguous buffers, we allocate the
-			 * buffers from that slot and mark the entries as '0'
-			 * indicating unavailable.
-			 */
-			if (io_tlb_list[index] >= nslots) {
-				int count = 0;
-
-				for (i = index; i < (int) (index + nslots); i++)
-					io_tlb_list[i] = 0;
-				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-					io_tlb_list[i] = ++count;
-				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-				/*
-				 * Update the indices to avoid searching in
-				 * the next round.
-				 */
-				io_tlb_index = ((index + nslots) < io_tlb_nslabs
-						? (index + nslots) : 0);
-
-				goto found;
-			}
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-		} while (index != wrap);
-
-		spin_unlock_irqrestore(&io_tlb_lock, flags);
-		return NULL;
-	}
-  found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	io_tlb_orig_addr[index] = buffer;
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		memcpy(dma_addr, buffer, size);
-
-	return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	unsigned long flags;
-	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		/*
-		 * bounce... copy the data back into the original buffer * and
-		 * delete the bounce buffer.
-		 */
-		memcpy(buffer, dma_addr, size);
-
-	/*
-	 * Return the buffer to the free list by setting the corresponding
-	 * entries to indicate the number of contigous entries available.
-	 * While returning the entries to the free list, we merge the entries
-	 * with slots below and above the pool being returned.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
-		/*
-		 * Step 1: return the slots to the free list, merging the
-		 * slots with superceeding slots
-		 */
-		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
-		/*
-		 * Step 2: merge the returned slots with the preceding slots,
-		 * if available (non zero)
-		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
-	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
-		BUG();
-}
-
-void *
-swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-		       dma_addr_t *dma_handle, int flags)
-{
-	unsigned long dev_addr;
-	void *ret;
-	int order = get_order(size);
-
-	/*
-	 * XXX fix me: the DMA API should pass us an explicit DMA mask
-	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
-	 * bit range instead of a 16MB one).
-	 */
-	flags |= GFP_DMA;
-
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
-		/*
-		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
-		 */
-		free_pages((unsigned long) ret, order);
-		ret = NULL;
-	}
-	if (!ret) {
-		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
-		 */
-		dma_addr_t handle;
-		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
-		if (dma_mapping_error(handle))
-			return NULL;
-
-		ret = phys_to_virt(handle);
-	}
-
-	memset(ret, 0, size);
-	dev_addr = virt_to_phys(ret);
-
-	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr)) {
-		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
-		       (unsigned long long)*hwdev->dma_mask, dev_addr);
-		panic("swiotlb_alloc_coherent: allocated memory is out of "
-		      "range for device");
-	}
-	*dma_handle = dev_addr;
-	return ret;
-}
-
-void
-swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
-		      dma_addr_t dma_handle)
-{
-	if (!(vaddr >= (void *)io_tlb_start
-                    && vaddr < (void *)io_tlb_end))
-		free_pages((unsigned long) vaddr, get_order(size));
-	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void
-swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
-{
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
-	       "device %s\n", size, dev ? dev->bus_id : "?");
-
-	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
-	}
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
- */
-dma_addr_t
-swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
-{
-	unsigned long dev_addr = virt_to_phys(ptr);
-	void *map;
-
-	if (dir == DMA_NONE)
-		BUG();
-	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
-		return dev_addr;
-
-	/*
-	 * Oh well, have to allocate and map a bounce buffer.
-	 */
-	map = map_single(hwdev, ptr, size, dir);
-	if (!map) {
-		swiotlb_full(hwdev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
-	}
-
-	dev_addr = virt_to_phys(map);
-
-	/*
-	 * Ensure that the address returned is DMA'ble
-	 */
-	if (address_needs_mapping(hwdev, dev_addr))
-		panic("map_single: bounce buffer is not DMA'ble");
-
-	return dev_addr;
-}
-
-/*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
- * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
- * flush them when they get mapped into an executable vm-area.
- */
-static void
-mark_clean(void *addr, size_t size)
-{
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page(pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
-	}
-}
-
-/*
- * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
-		     int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		unmap_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Make physical memory consistent for a single streaming mode DMA translation
- * after a transfer.
- *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
- * address back to the card, you must first perform a
- * swiotlb_dma_sync_for_device, and then the device again owns the buffer
- */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-void
-swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-			       size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
- * interface.  Here the scatter gather list elements are each tagged with the
- * appropriate dma address and length.  They are obtained via
- * sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
- * same here.
- */
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-	       int dir)
-{
-	void *addr;
-	unsigned long dev_addr;
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++) {
-		addr = SG_ENT_VIRT_ADDRESS(sg);
-		dev_addr = virt_to_phys(addr);
-		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
-			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
-			if (!sg->dma_address) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
-				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
-				return 0;
-			}
-		} else
-			sg->dma_address = dev_addr;
-		sg->dma_length = sg->length;
-	}
-	return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
- */
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-		 int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
-		else if (dir == DMA_FROM_DEVICE)
-			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA translations
- * after a transfer.
- *
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
- * and usage.
- */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-void
-swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			   int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-int
-swiotlb_dma_mapping_error(dma_addr_t dma_addr)
-{
-	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
-}
-
-/*
- * Return whether the given PCI device DMA address mask can be supported
- * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function.
- */
-int
-swiotlb_dma_supported (struct device *hwdev, u64 mask)
-{
-	return (virt_to_phys (io_tlb_end) - 1) <= mask;
-}
-
-EXPORT_SYMBOL(swiotlb_init);
-EXPORT_SYMBOL(swiotlb_map_single);
-EXPORT_SYMBOL(swiotlb_unmap_single);
-EXPORT_SYMBOL(swiotlb_map_sg);
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_single_for_device);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
-EXPORT_SYMBOL(swiotlb_dma_mapping_error);
-EXPORT_SYMBOL(swiotlb_alloc_coherent);
-EXPORT_SYMBOL(swiotlb_free_coherent);
-EXPORT_SYMBOL(swiotlb_dma_supported);
--- linux-swiotlb-9_9_2005/arch/ia64/lib/Makefile.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/lib/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3
 	bitop.o checksum.o clear_page.o csum_partial_copy.o		\
 	clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o	\
 	flush.o ip_fast_csum.o do_csum.o				\
-	memset.o strlen.o swiotlb.o
+	memset.o strlen.o
 
 lib-$(CONFIG_ITANIUM)	+= copy_page.o copy_user.o memcpy.o
 lib-$(CONFIG_MCKINLEY)	+= copy_page_mck.o memcpy_mck.o

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 2/6] swiotlb: cleanup some code duplication cruft
  2005-09-12 14:48       ` John W. Linville
@ 2005-09-12 14:48         ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

The implementations of swiotlb_sync_single_for_{cpu,device} are
identical, as are those of swiotlb_sync_sg_for_{cpu,device}. This patch
moves the guts of each pair into a new static inline helper and calls
that helper from the bodies of the original functions.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
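For anyone skimming the diff below: this is the usual "shared static
inline helper plus thin exported wrappers" refactor. A minimal sketch of
the shape (illustrative only -- do_sync() and the foo_* names are made
up here, not the actual swiotlb symbols):

	#include <linux/device.h>	/* struct device */
	#include <linux/types.h>	/* dma_addr_t, size_t */

	/* common body shared by both exported variants */
	static inline void do_sync(struct device *hwdev, dma_addr_t addr,
				   size_t size, int dir)
	{
		/* range check, then bounce-buffer sync or mark clean */
	}

	void foo_sync_for_cpu(struct device *hwdev, dma_addr_t addr,
			      size_t size, int dir)
	{
		do_sync(hwdev, addr, size, dir);
	}

	void foo_sync_for_device(struct device *hwdev, dma_addr_t addr,
				 size_t size, int dir)
	{
		do_sync(hwdev, addr, size, dir);
	}

Since the helpers are static inline, the wrappers add no runtime cost
and the exported symbol names stay exactly as they were.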

 lib/swiotlb.c |   45 ++++++++++++++++++++++-----------------------
 1 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+		    size_t size, int dir)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
@@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
 }
 
 /*
@@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 1/6] swiotlb: move from arch/ia64/lib to lib
@ 2005-09-12 14:48       ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to lib and fixes up the appropriate Makefile and
Kconfig files. No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 lib/Makefile                |    2 
 lib/swiotlb.c               |  657 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)

--- linux-swiotlb-9_9_2005/lib/swiotlb.c.orig	2005-09-09 16:18:36.000000000 -0400
+++ linux-swiotlb-9_9_2005/lib/swiotlb.c	2005-09-09 16:18:27.000000000 -0400
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	                   ( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+ 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contigous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with superceeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+                    && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);
--- linux-swiotlb-9_9_2005/lib/Makefile.orig	2005-09-09 14:27:39.000000000 -0400
+++ linux-swiotlb-9_9_2005/lib/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -43,6 +43,8 @@ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
 obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
 obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
 
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
+
 hostprogs-y	:= gen_crc32table
 clean-files	:= crc32table.h
 
--- linux-swiotlb-9_9_2005/arch/x86_64/kernel/Makefile.orig	2005-09-09 14:27:34.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/x86_64/kernel/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)		+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o		:= $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
--- linux-swiotlb-9_9_2005/arch/ia64/Kconfig.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/Kconfig	2005-09-09 16:17:44.000000000 -0400
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+       bool
+       default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
--- linux-swiotlb-9_9_2005/arch/ia64/lib/swiotlb.c.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/lib/swiotlb.c	2005-09-09 16:18:46.000000000 -0400
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	                   ( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
-static int __init
-setup_io_tlb_npages(char *str)
-{
-	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
-		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-	if (*str == ',')
-		++str;
-	if (!strcmp(str, "force"))
-		swiotlb_force = 1;
-	return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
- */
-void
-swiotlb_init_with_default_size (size_t default_size)
-{
-	unsigned long i;
-
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-
-	/*
-	 * Get IO TLB memory from the low pages
-	 */
-	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
-					       (1 << IO_TLB_SHIFT));
-	if (!io_tlb_start)
-		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
-	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
-	/*
-	 * Get the overflow emergency buffer
-	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
-	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
-}
-
-void
-swiotlb_init (void)
-{
-	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
-}
-
-static inline int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
-	dma_addr_t mask = 0xffffffff;
-	/* If the device has a mask, use it, otherwise default to 32 bits */
-	if (hwdev && hwdev->dma_mask)
-		mask = *hwdev->dma_mask;
-	return (addr & ~mask) != 0;
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-map_single(struct device *hwdev, char *buffer, size_t size, int dir)
-{
-	unsigned long flags;
-	char *dma_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-
-	/*
-	 * For mappings greater than a page, we limit the stride (and
-	 * hence alignment) to a page size.
-	 */
-	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	if (size > PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
-
-	if (!nslots)
-		BUG();
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		wrap = index = ALIGN(io_tlb_index, stride);
-
-		if (index >= io_tlb_nslabs)
-			wrap = index = 0;
-
-		do {
-			/*
-			 * If we find a slot that indicates we have 'nslots'
-			 * number of contiguous buffers, we allocate the
-			 * buffers from that slot and mark the entries as '0'
-			 * indicating unavailable.
-			 */
-			if (io_tlb_list[index] >= nslots) {
-				int count = 0;
-
-				for (i = index; i < (int) (index + nslots); i++)
-					io_tlb_list[i] = 0;
-				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-					io_tlb_list[i] = ++count;
-				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-				/*
-				 * Update the indices to avoid searching in
-				 * the next round.
-				 */
-				io_tlb_index = ((index + nslots) < io_tlb_nslabs
-						? (index + nslots) : 0);
-
-				goto found;
-			}
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-		} while (index != wrap);
-
-		spin_unlock_irqrestore(&io_tlb_lock, flags);
-		return NULL;
-	}
-  found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	io_tlb_orig_addr[index] = buffer;
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		memcpy(dma_addr, buffer, size);
-
-	return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	unsigned long flags;
-	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		/*
-		 * bounce... copy the data back into the original buffer * and
-		 * delete the bounce buffer.
-		 */
-		memcpy(buffer, dma_addr, size);
-
-	/*
-	 * Return the buffer to the free list by setting the corresponding
-	 * entries to indicate the number of contigous entries available.
-	 * While returning the entries to the free list, we merge the entries
-	 * with slots below and above the pool being returned.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
-		/*
-		 * Step 1: return the slots to the free list, merging the
-		 * slots with superceeding slots
-		 */
-		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
-		/*
-		 * Step 2: merge the returned slots with the preceding slots,
-		 * if available (non zero)
-		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
-	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
-		BUG();
-}
-
-void *
-swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-		       dma_addr_t *dma_handle, int flags)
-{
-	unsigned long dev_addr;
-	void *ret;
-	int order = get_order(size);
-
-	/*
-	 * XXX fix me: the DMA API should pass us an explicit DMA mask
-	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
-	 * bit range instead of a 16MB one).
-	 */
-	flags |= GFP_DMA;
-
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
-		/*
-		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
-		 */
-		free_pages((unsigned long) ret, order);
-		ret = NULL;
-	}
-	if (!ret) {
-		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
-		 */
-		dma_addr_t handle;
-		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
-		if (dma_mapping_error(handle))
-			return NULL;
-
-		ret = phys_to_virt(handle);
-	}
-
-	memset(ret, 0, size);
-	dev_addr = virt_to_phys(ret);
-
-	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr)) {
-		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
-		       (unsigned long long)*hwdev->dma_mask, dev_addr);
-		panic("swiotlb_alloc_coherent: allocated memory is out of "
-		      "range for device");
-	}
-	*dma_handle = dev_addr;
-	return ret;
-}
-
-void
-swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
-		      dma_addr_t dma_handle)
-{
-	if (!(vaddr >= (void *)io_tlb_start
-                    && vaddr < (void *)io_tlb_end))
-		free_pages((unsigned long) vaddr, get_order(size));
-	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void
-swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
-{
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
-	       "device %s\n", size, dev ? dev->bus_id : "?");
-
-	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
-	}
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
- */
-dma_addr_t
-swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
-{
-	unsigned long dev_addr = virt_to_phys(ptr);
-	void *map;
-
-	if (dir == DMA_NONE)
-		BUG();
-	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
-		return dev_addr;
-
-	/*
-	 * Oh well, have to allocate and map a bounce buffer.
-	 */
-	map = map_single(hwdev, ptr, size, dir);
-	if (!map) {
-		swiotlb_full(hwdev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
-	}
-
-	dev_addr = virt_to_phys(map);
-
-	/*
-	 * Ensure that the address returned is DMA'ble
-	 */
-	if (address_needs_mapping(hwdev, dev_addr))
-		panic("map_single: bounce buffer is not DMA'ble");
-
-	return dev_addr;
-}
-
-/*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
- * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
- * flush them when they get mapped into an executable vm-area.
- */
-static void
-mark_clean(void *addr, size_t size)
-{
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page(pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
-	}
-}
-
-/*
- * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
-		     int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		unmap_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Make physical memory consistent for a single streaming mode DMA translation
- * after a transfer.
- *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
- * address back to the card, you must first perform a
- * swiotlb_dma_sync_for_device, and then the device again owns the buffer
- */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-void
-swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-			       size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
- * interface.  Here the scatter gather list elements are each tagged with the
- * appropriate dma address and length.  They are obtained via
- * sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
- * same here.
- */
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-	       int dir)
-{
-	void *addr;
-	unsigned long dev_addr;
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++) {
-		addr = SG_ENT_VIRT_ADDRESS(sg);
-		dev_addr = virt_to_phys(addr);
-		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
-			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
-			if (!sg->dma_address) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
-				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
-				return 0;
-			}
-		} else
-			sg->dma_address = dev_addr;
-		sg->dma_length = sg->length;
-	}
-	return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
- */
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-		 int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
-		else if (dir == DMA_FROM_DEVICE)
-			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA translations
- * after a transfer.
- *
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
- * and usage.
- */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-void
-swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			   int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-int
-swiotlb_dma_mapping_error(dma_addr_t dma_addr)
-{
-	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
-}
-
-/*
- * Return whether the given PCI device DMA address mask can be supported
- * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function.
- */
-int
-swiotlb_dma_supported (struct device *hwdev, u64 mask)
-{
-	return (virt_to_phys (io_tlb_end) - 1) <= mask;
-}
-
-EXPORT_SYMBOL(swiotlb_init);
-EXPORT_SYMBOL(swiotlb_map_single);
-EXPORT_SYMBOL(swiotlb_unmap_single);
-EXPORT_SYMBOL(swiotlb_map_sg);
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_single_for_device);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
-EXPORT_SYMBOL(swiotlb_dma_mapping_error);
-EXPORT_SYMBOL(swiotlb_alloc_coherent);
-EXPORT_SYMBOL(swiotlb_free_coherent);
-EXPORT_SYMBOL(swiotlb_dma_supported);
--- linux-swiotlb-9_9_2005/arch/ia64/lib/Makefile.orig	2005-09-09 14:27:33.000000000 -0400
+++ linux-swiotlb-9_9_2005/arch/ia64/lib/Makefile	2005-09-09 16:17:44.000000000 -0400
@@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3
 	bitop.o checksum.o clear_page.o csum_partial_copy.o		\
 	clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o	\
 	flush.o ip_fast_csum.o do_csum.o				\
-	memset.o strlen.o swiotlb.o
+	memset.o strlen.o
 
 lib-$(CONFIG_ITANIUM)	+= copy_page.o copy_user.o memcpy.o
 lib-$(CONFIG_MCKINLEY)	+= copy_page_mck.o memcpy_mck.o

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 2/6] swiotlb: cleanup some code duplication cruft
@ 2005-09-12 14:48         ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

The implementations of swiotlb_sync_single_for_{cpu,device} are
identical. Likewise for swiotlb_sync_sg_for_{cpu,device}. This patch
moves the guts of those functions into two new inline functions, and
calls the appropriate one from the bodies of those functions.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |   45 ++++++++++++++++++++++-----------------------
 1 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+		    size_t size, int dir)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
@@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
 }
 
 /*
@@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 5/6] swiotlb: file header comments
  2005-09-12 14:48             ` John W. Linville
@ 2005-09-12 14:48               ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

Change comment at top of swiotlb.c to reflect that the code is shared
with EM64T (i.e. Intel x86_64). Also add an entry for myself so that
if I "broke it", everyone knows who "bought it"... :-)

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -1,7 +1,7 @@
 /*
  * Dynamic DMA mapping support.
  *
- * This implementation is for IA-64 platforms that do not support
+ * This implementation is for IA-64 and EM64T platforms that do not support
  * I/O TLBs (aka DMA address translation hardware).
  * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
  * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
@@ -11,7 +11,9 @@
  * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
  * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
  *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ * 04/07/.. ak		Better overflow handling. Assorted fixes.
+ * 05/09/10 linville	Add support for syncing ranges, support syncing for
+ *			DMA_BIDIRECTIONAL mappings, miscellaneous cleanup.
  */
 
 #include <linux/cache.h>

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 3/6] swiotlb: support syncing sub-ranges of mappings
  2005-09-12 14:48         ` John W. Linville
@ 2005-09-12 14:48           ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

This patch implements swiotlb_sync_single_range_for_{cpu,device}. This
is intended to support an x86_64 implementation of
dma_sync_single_range_for_{cpu,device}.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
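
For illustration only (not part of this patch): a rough sketch of how a driver
might use the range variants, syncing just the descriptor the hardware touched
instead of the whole long-lived mapping. The ring structure and function names
below are invented, not taken from any existing driver.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Hypothetical descriptor ring mapped once with dma_map_single(). */
struct my_ring {
	void		*vaddr;		/* CPU view of the ring */
	dma_addr_t	 dma;		/* handle returned by dma_map_single() */
	size_t		 desc_size;	/* bytes per descriptor */
};

static void my_read_desc(struct device *dev, struct my_ring *ring,
			 int idx, void *out)
{
	unsigned long offset = idx * ring->desc_size;

	/* Claim just this descriptor for the CPU... */
	dma_sync_single_range_for_cpu(dev, ring->dma, offset,
				      ring->desc_size, DMA_FROM_DEVICE);
	memcpy(out, ring->vaddr + offset, ring->desc_size);

	/* ...and hand it back to the device afterwards. */
	dma_sync_single_range_for_device(dev, ring->dma, offset,
					 ring->desc_size, DMA_FROM_DEVICE);
}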

 include/asm-x86_64/swiotlb.h |    8 ++++++++
 lib/swiotlb.c                |   33 +++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					    dma_addr_t dev_addr,
 					    size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				     struct scatterlist *sg, int nelems,
 				     int dir);
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -521,6 +521,37 @@ swiotlb_sync_single_for_device(struct de
 }
 
 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -648,6 +679,8 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-09-12 14:48               ` John W. Linville
  (?)
@ 2005-09-12 14:48               ` John W. Linville
  2005-09-12 15:22                 ` Andi Kleen
  -1 siblings, 1 reply; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss; +Cc: ak

Implement dma_sync_single_range_for_{cpu,device} for x86_64. This
makes use of swiotlb_sync_single_range_for_{cpu,device}.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 include/asm-x86_64/dma-mapping.h |   28 ++++++++++++++++++++++++++++
 1 files changed, 28 insertions(+)

diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h
--- a/include/asm-x86_64/dma-mapping.h
+++ b/include/asm-x86_64/dma-mapping.h
@@ -85,6 +85,34 @@ static inline void dma_sync_single_for_d
 	flush_write_buffers();
 }
 
+static inline void dma_sync_single_range_for_cpu(struct device *hwdev,
+						 dma_addr_t dma_handle,
+						 unsigned long offset,
+						 size_t size, int direction)
+{
+	if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_range_for_cpu(hwdev,dma_handle,offset,size,direction);
+
+	flush_write_buffers();
+}
+
+static inline void dma_sync_single_range_for_device(struct device *hwdev,
+						    dma_addr_t dma_handle,
+						    unsigned long offset,
+						    size_t size, int direction)
+{
+        if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_range_for_device(hwdev,dma_handle,offset,size,direction);
+
+	flush_write_buffers();
+}
+
 static inline void dma_sync_sg_for_cpu(struct device *hwdev,
 				       struct scatterlist *sg,
 				       int nelems, int direction)

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-12 14:48           ` John W. Linville
@ 2005-09-12 14:48             ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 14:48 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64; +Cc: ak, tony.luck, Asit.K.Mallick

The current implementation of sync_single in swiotlb.c chokes on
DMA_BIDIRECTIONAL mappings. This patch adds the capability to sync
those mappings, and optimizes other syncs by accounting for the
sync target (i.e. cpu or device) in addition to the DMA direction of
the mapping.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
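
For illustration only (not part of this patch): a hypothetical driver-side
sequence for the DMA_BIDIRECTIONAL case that this change makes work. Under
swiotlb, the *_for_device sync copies the original buffer into the bounce
buffer and the *_for_cpu sync copies it back out, which is exactly the target
distinction introduced below. All names in the sketch are made up.

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* A command/status block that the device both reads and writes. */
static int run_command(struct device *dev, void *blk, size_t len)
{
	dma_addr_t handle;

	/* The CPU fills in the command first; mapping then hands ownership
	 * (and, under swiotlb, a copy into the bounce buffer) to the device. */
	handle = dma_map_single(dev, blk, len, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(handle))
		return -ENOMEM;

	/* ... point the hardware at 'handle' and wait for completion ... */

	/* Reclaim the block to read the status the device wrote back
	 * (SYNC_FOR_CPU: bounce buffer -> original buffer). */
	dma_sync_single_for_cpu(dev, handle, len, DMA_BIDIRECTIONAL);

	/* ... inspect or update the block, then hand it back for another pass
	 * (SYNC_FOR_DEVICE: original buffer -> bounce buffer). */
	dma_sync_single_for_device(dev, handle, len, DMA_BIDIRECTIONAL);

	/* ... second hardware pass ... */

	dma_unmap_single(dev, handle, len, DMA_BIDIRECTIONAL);
	return 0;
}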

 lib/swiotlb.c |   62 +++++++++++++++++++++++++++++++++++++---------------------
 1 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -49,6 +49,14 @@
  */
 #define IO_TLB_SHIFT 11
 
+/*
+ * Enumeration for sync targets
+ */
+enum dma_sync_target {
+	SYNC_FOR_CPU = 0,
+	SYNC_FOR_DEVICE = 1,
+};
+
 int swiotlb_force;
 
 /*
@@ -295,21 +303,28 @@ unmap_single(struct device *hwdev, char 
 }
 
 static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+sync_single(struct device *hwdev, char *dma_addr, size_t size,
+	    int dir, int target)
 {
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	char *buffer = io_tlb_orig_addr[index];
 
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
+	switch (target) {
+	case SYNC_FOR_CPU:
+		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(buffer, dma_addr, size);
+		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+			BUG();
+		break;
+	case SYNC_FOR_DEVICE:
+		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(dma_addr, buffer, size);
+		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+			BUG();
+		break;
+	default:
 		BUG();
+	}
 }
 
 void *
@@ -494,14 +509,14 @@ swiotlb_unmap_single(struct device *hwde
  */
 static inline void
 swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
-		    size_t size, int dir)
+		    size_t size, int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -510,14 +525,14 @@ void
 swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
 }
 
 /*
@@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
  */
 static inline void
 swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
-			  unsigned long offset, size_t size, int dir)
+			  unsigned long offset, size_t size,
+			  int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -541,14 +557,16 @@ void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_DEVICE);
 }
 
 /*
@@ -627,7 +645,7 @@ swiotlb_unmap_sg(struct device *hwdev, s
  */
 static inline void
 swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
-		int nelems, int dir)
+		int nelems, int dir, int target)
 {
 	int i;
 
@@ -637,21 +655,21 @@ swiotlb_sync_sg(struct device *hwdev, st
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+				    sg->dma_length, dir, target);
 }
 
 void
 swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 			int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-09-12 14:48               ` [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
@ 2005-09-12 15:22                 ` Andi Kleen
  0 siblings, 0 replies; 99+ messages in thread
From: Andi Kleen @ 2005-09-12 15:22 UTC (permalink / raw)
  To: John W. Linville; +Cc: linux-kernel, discuss

On Monday 12 September 2005 16:48, John W. Linville wrote:
> Implement dma_sync_single_range_for_{cpu,device} for x86_64. This
> makes use of swiotlb_sync_single_range_for_{cpu,device}.

I already have the simple patch that just used sync_single_range in my tree 
and it's scheduled to go to Linus ASAP. You can rebase on that later.

-Andi

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-12 14:48             ` John W. Linville
@ 2005-09-12 18:51               ` Grant Grundler
  -1 siblings, 0 replies; 99+ messages in thread
From: Grant Grundler @ 2005-09-12 18:51 UTC (permalink / raw)
  To: John W. Linville
  Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

On Mon, Sep 12, 2005 at 10:48:51AM -0400, John W. Linville wrote:
...
> +	switch (target) {
> +	case SYNC_FOR_CPU:
> +		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> +			memcpy(buffer, dma_addr, size);
> +		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
> +			BUG();
> +		break;
> +	case SYNC_FOR_DEVICE:
> +		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
> +			memcpy(dma_addr, buffer, size);
> +		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> +			BUG();
> +		break;
> +	default:
>  		BUG();
> +	}

Isn't "DMA_NONE" expected to generate a warning or panic?

Documentation/DMA-mapping.txt says:
	The value PCI_DMA_NONE is to be used for debugging.  One can
	hold this in a data structure before you come to know the
	precise direction, and this will help catch cases where your
	direction tracking logic has failed to set things up properly.


And it just seems wrong to sync a buffer if no DMA has taken place.
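
(Purely as a made-up illustration of the Documentation excerpt above: a driver
can keep the stored direction invalid until the real one is known, so a
premature map or sync call trips the DMA_NONE checks instead of silently
copying the wrong way. Structure and function names are hypothetical.)

#include <linux/dma-mapping.h>
#include <linux/types.h>

struct my_dma_region {
	dma_addr_t	handle;
	size_t		len;
	int		dir;	/* DMA_TO_DEVICE, DMA_FROM_DEVICE, ... */
};

static void my_dma_region_init(struct my_dma_region *r)
{
	/* Deliberately invalid; any map/sync issued before the real
	 * direction is chosen should hit the DMA_NONE checks. */
	r->dir = DMA_NONE;
}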

...
> @@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
>   */
>  static inline void
>  swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
> -			  unsigned long offset, size_t size, int dir)
> +			  unsigned long offset, size_t size,
> +			  int dir, int target)
>  {
>  	char *dma_addr = phys_to_virt(dev_addr) + offset;
>  
>  	if (dir == DMA_NONE)
>  		BUG();

This existing code seems to support the idea that DMA sync interfaces
require the direction be set to something other than DMA_NONE.

thanks,
grant

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-12 18:51               ` Grant Grundler
@ 2005-09-12 19:51                 ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 19:51 UTC (permalink / raw)
  To: Grant Grundler
  Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

On Mon, Sep 12, 2005 at 11:51:20AM -0700, Grant Grundler wrote:
> On Mon, Sep 12, 2005 at 10:48:51AM -0400, John W. Linville wrote:

> > +	case SYNC_FOR_DEVICE:
> > +		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
> > +			memcpy(dma_addr, buffer, size);
> > +		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> > +			BUG();
> > +		break;
> > +	default:

> Isn't "DMA_NONE" expected to generate a warning or panic?

True enough...I'll follow up w/ an additive patch to account for that.
As you pointed out, the higher-level functions in swiotlb filter that
out anyway, so this really isn't a big issue.

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 19:51                 ` John W. Linville
@ 2005-09-12 19:53                   ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 19:53 UTC (permalink / raw)
  To: Grant Grundler
  Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

Call BUG() if DMA_NONE is passed-in as direction for sync_single.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
 	case SYNC_FOR_CPU:
 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(buffer, dma_addr, size);
-		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_TO_DEVICE)
 			BUG();
 		break;
 	case SYNC_FOR_DEVICE:
 		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(dma_addr, buffer, size);
-		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_FROM_DEVICE)
 			BUG();
 		break;
 	default:
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 19:53                   ` John W. Linville
@ 2005-09-12 20:23                     ` Grant Grundler
  -1 siblings, 0 replies; 99+ messages in thread
From: Grant Grundler @ 2005-09-12 20:23 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

On Mon, Sep 12, 2005 at 03:53:56PM -0400, John W. Linville wrote:
> Call BUG() if DMA_NONE is passed-in as direction for sync_single.
> 
> Signed-off-by: John W. Linville <linville@tuxdriver.com>

Acked-by: Grant Grundler <iod00d@hp.com>

John,
Sorry - I didn't realize the tests for DMA_NONE I pointed
out were now redundant.  Can you respin this patch removing
the redundant checks for DMA_NONE as well?

thanks,
grant

> ---
> 
>  lib/swiotlb.c |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
>  	case SYNC_FOR_CPU:
>  		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(buffer, dma_addr, size);
> -		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_TO_DEVICE)
>  			BUG();
>  		break;
>  	case SYNC_FOR_DEVICE:
>  		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(dma_addr, buffer, size);
> -		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_FROM_DEVICE)
>  			BUG();
>  		break;
>  	default:
> -- 
> John W. Linville
> linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.13 (take #2)] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 20:23                     ` Grant Grundler
@ 2005-09-12 23:45                       ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-12 23:45 UTC (permalink / raw)
  To: Grant Grundler
  Cc: linux-kernel, discuss, linux-ia64, ak, tony.luck, Asit.K.Mallick

Call BUG() if DMA_NONE is passed-in as direction for sync_single.
Also remove unnecessary checks for DMA_NONE in callers of sync_single.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---
This patch replaces the previous patch with (almost) the same subject.

 lib/swiotlb.c |   11 ++---------
 1 files changed, 2 insertions(+), 9 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
 	case SYNC_FOR_CPU:
 		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(buffer, dma_addr, size);
-		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_TO_DEVICE)
 			BUG();
 		break;
 	case SYNC_FOR_DEVICE:
 		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
 			memcpy(dma_addr, buffer, size);
-		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+		else if (dir != DMA_FROM_DEVICE)
 			BUG();
 		break;
 	default:
@@ -515,8 +515,6 @@ swiotlb_sync_single(struct device *hwdev
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
-	if (dir == DMA_NONE)
-		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
 		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
@@ -547,8 +545,6 @@ swiotlb_sync_single_range(struct device 
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
-	if (dir == DMA_NONE)
-		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
 		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
@@ -651,9 +647,6 @@ swiotlb_sync_sg(struct device *hwdev, st
 {
 	int i;
 
-	if (dir == DMA_NONE)
-		BUG();
-
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 (take #2)] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 23:45                       ` John W. Linville
@ 2005-09-12 23:59                         ` Grant Grundler
  -1 siblings, 0 replies; 99+ messages in thread
From: Grant Grundler @ 2005-09-12 23:59 UTC (permalink / raw)
  To: Grant Grundler, linux-kernel, discuss, linux-ia64, ak, tony.luck,
	Asit.K.Mallick

On Mon, Sep 12, 2005 at 07:45:34PM -0400, John W. Linville wrote:
> Call BUG() if DMA_NONE is passed-in as direction for sync_single.
> Also remove unnecessary checks for DMA_NONE in callers of sync_single.

Looks good to me! :^)

> Signed-off-by: John W. Linville <linville@tuxdriver.com>

In case it matters:
	ACKed-by: Grant Grundler <iod00d@hp.com>

thanks
grant

> ---
> This patch replaces the previous patch with (almost) the same subject.
> 
>  lib/swiotlb.c |   11 ++---------
>  1 files changed, 2 insertions(+), 9 deletions(-)
> 
> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
> --- a/lib/swiotlb.c
> +++ b/lib/swiotlb.c
> @@ -315,13 +315,13 @@ sync_single(struct device *hwdev, char *
>  	case SYNC_FOR_CPU:
>  		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(buffer, dma_addr, size);
> -		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_TO_DEVICE)
>  			BUG();
>  		break;
>  	case SYNC_FOR_DEVICE:
>  		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
>  			memcpy(dma_addr, buffer, size);
> -		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
> +		else if (dir != DMA_FROM_DEVICE)
>  			BUG();
>  		break;
>  	default:
> @@ -515,8 +515,6 @@ swiotlb_sync_single(struct device *hwdev
>  {
>  	char *dma_addr = phys_to_virt(dev_addr);
>  
> -	if (dir == DMA_NONE)
> -		BUG();
>  	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
>  		sync_single(hwdev, dma_addr, size, dir, target);
>  	else if (dir == DMA_FROM_DEVICE)
> @@ -547,8 +545,6 @@ swiotlb_sync_single_range(struct device 
>  {
>  	char *dma_addr = phys_to_virt(dev_addr) + offset;
>  
> -	if (dir == DMA_NONE)
> -		BUG();
>  	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
>  		sync_single(hwdev, dma_addr, size, dir, target);
>  	else if (dir == DMA_FROM_DEVICE)
> @@ -651,9 +647,6 @@ swiotlb_sync_sg(struct device *hwdev, st
>  {
>  	int i;
>  
> -	if (dir == DMA_NONE)
> -		BUG();
> -
>  	for (i = 0; i < nelems; i++, sg++)
>  		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
>  			sync_single(hwdev, (void *) sg->dma_address,
> -- 
> John W. Linville
> linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [discuss] [patch 2.6.13 (take #2)] swiotlb: BUG() for DMA_NONE in sync_single
  2005-09-12 23:45                       ` John W. Linville
@ 2005-09-13  4:05                         ` Andi Kleen
  -1 siblings, 0 replies; 99+ messages in thread
From: Andi Kleen @ 2005-09-13  4:05 UTC (permalink / raw)
  To: discuss
  Cc: John W. Linville, Grant Grundler, linux-kernel, linux-ia64,
	tony.luck, Asit.K.Mallick

On Tuesday 13 September 2005 01:45, John W. Linville wrote:
> Call BUG() if DMA_NONE is passed-in as direction for sync_single.
> Also remove unnecessary checks for DMA_NONE in callers of sync_single.
>
> Signed-off-by: John W. Linville <linville@tuxdriver.com>

Hi - your changes look good, but you missed the 2.6.14 merge window now
so it'll be all 2.6.15 material. If you think there are any critical bug fixes
in there (I didn't think there were any), please extract them only.

-Andi

^ permalink raw reply	[flat|nested] 99+ messages in thread

* RE: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-12 14:48     ` John W. Linville
@ 2005-09-22 20:37 ` Luck, Tony
  -1 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-09-22 20:37 UTC (permalink / raw)
  To: John W. Linville, linux-kernel, discuss, linux-ia64; +Cc: ak, Mallick, Asit K

>Conduct some maintenance of the swiotlb code:
>
>	-- Move the code from arch/ia64/lib to lib

I agree that this code needs to move up out of arch/ia64; it is messy
that x86_64 needs to reach over and grab this from arch/ia64.

But is "lib" really the right place for it to move to?  Perhaps
a more logical place might be "drivers/pci/swiotlb/" since this
code is tightly coupled to pci?

-Tony

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-22 20:37 ` Luck, Tony
@ 2005-09-22 20:41   ` Christoph Hellwig
  -1 siblings, 0 replies; 99+ messages in thread
From: Christoph Hellwig @ 2005-09-22 20:41 UTC (permalink / raw)
  To: Luck, Tony
  Cc: John W. Linville, linux-kernel, discuss, linux-ia64, ak, Mallick, Asit K

On Thu, Sep 22, 2005 at 01:37:46PM -0700, Luck, Tony wrote:
> >Conduct some maintenance of the swiotlb code:
> >
> >	-- Move the code from arch/ia64/lib to lib
> 
> I agree that this code needs to move up out of arch/ia64, it is messy
> that x86_64 needs to reach over and grab this from arch/ia64.
> 
> But is "lib" really the right place for it to move to?  Perhaps
> a more logical place might be "drivers/pci/swiotlb/" since this
> code is tightly coupled to pci?

It should just go away once the GFP_DMA32 code is merged.

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-22 20:41   ` Christoph Hellwig
@ 2005-09-23 18:22     ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-23 18:22 UTC (permalink / raw)
  To: Christoph Hellwig, Luck, Tony, linux-kernel, discuss, linux-ia64,
	ak, Mallick, Asit K

On Thu, Sep 22, 2005 at 09:41:55PM +0100, Christoph Hellwig wrote:
> On Thu, Sep 22, 2005 at 01:37:46PM -0700, Luck, Tony wrote:
> > >Conduct some maintenance of the swiotlb code:
> > >
> > >	-- Move the code from arch/ia64/lib to lib
> > 
> > I agree that this code needs to move up out of arch/ia64, it is messy
> > that x86_64 needs to reach over and grab this from arch/ia64.
> > 
> > But is "lib" really the right place for it to move to?  Perhaps
> > a more logical place might be "drivers/pci/swiotlb/" since this
> > code is tightly coupled to pci?
> 
> It should just go away once the GFP_DMA32 code is merged.

Is that the plan?  I suppose it makes sense.

So, move it to driver/pci/swiotlb.c?  Or just leave it where it is?

Either way, I'll redo the other patches to reflect the correct
location.

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* RE: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-12 14:48     ` John W. Linville
@ 2005-09-23 18:27 ` Luck, Tony
  -1 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-09-23 18:27 UTC (permalink / raw)
  To: John W. Linville, Christoph Hellwig, linux-kernel, discuss,
	linux-ia64, ak, Mallick, Asit K

>> It should just go away once the GFP_DMA32 code is merged.
>
>Is that the plan?  I suppose it makes sense.
>
>So, move it to driver/pci/swiotlb.c?  Or just leave it where it is?
>
>Either way, I'll redo the other patches to reflect the correct
>location.

I don't have a good (or in fact any) understanding of the impact
of GFP_DMA32 on ia64.  People tell me it will all be good, but I'd
like to hear from someone running it.

If it is good, and if it is coming soon, then there is no point
moving swiotlb.  But I don't know the answers to either of those
questions.

-Tony

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-23 18:22     ` John W. Linville
@ 2005-09-23 18:31       ` Muli Ben-Yehuda
  -1 siblings, 0 replies; 99+ messages in thread
From: Muli Ben-Yehuda @ 2005-09-23 18:31 UTC (permalink / raw)
  To: Christoph Hellwig, Luck, Tony, linux-kernel, discuss, linux-ia64,
	ak, Mallick, Asit K

On Fri, Sep 23, 2005 at 02:22:35PM -0400, John W. Linville wrote:

> > It should just go away once the GFP_DMA32 code is merged.
> 
> Is that the plan?  I suppose it makes sense.

What about the odd devices that can do less than 32 bit DMA masks on
platforms without IOMMU?
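
(For illustration, such a device would set a sub-32-bit mask at probe
time and then depend on bounce buffering for anything above that mask.
A made-up sketch -- the 24-bit mask and the probe function are not from
any real driver:)

	static int odd_dev_probe(struct pci_dev *pdev,
				 const struct pci_device_id *id)
	{
		/* device can only address the low 16MB */
		if (pci_set_dma_mask(pdev, 0x00ffffffULL)) {
			dev_err(&pdev->dev, "no usable DMA configuration\n");
			return -EIO;
		}
		/* streaming mappings above 16MB now need bounce buffers */
		return 0;
	}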
 
> So, move it to driver/pci/swiotlb.c?  Or just leave it where it is?

drivers/pci/swiotlb.c makes sense. Xen has its own swiotlb.c these
days; moving it to drivers/pci/ should make it slightly easier to use
the generic one.

Cheers,
Muli
-- 
Muli Ben-Yehuda
http://www.mulix.org | http://mulix.livejournal.com/


^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-23 18:27 ` Luck, Tony
@ 2005-09-23 18:50   ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-23 18:50 UTC (permalink / raw)
  To: Luck, Tony
  Cc: Christoph Hellwig, linux-kernel, discuss, linux-ia64, ak,
	Mallick, Asit K

On Fri, Sep 23, 2005 at 11:27:26AM -0700, Luck, Tony wrote:
> >> It should just go away once the GFP_DMA32 code is merged.
> >
> >Is that the plan?  I suppose it makes sense.

> I don't have a good (or in fact any) understanding of the impact
> of GFP_DMA32 on ia64.  People tell me it will all be good, but I'd
> like to hear from someone running it.
 
All the patches I saw were for x86_64.  So, the impact on ia64 should
be minimal... :-)

> If it is good, and if it is coming soon, then there is no point
> moving swiotlb.  But I don't know the answers to either of those
> questions.

The xen guys have a swiotlb implementation, although theirs differs
somewhat.  Perhaps if we moved it out from under ia64, the two could
be consolidated?

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-23 18:50   ` John W. Linville
@ 2005-09-23 21:38     ` Grant Grundler
  -1 siblings, 0 replies; 99+ messages in thread
From: Grant Grundler @ 2005-09-23 21:38 UTC (permalink / raw)
  To: Luck, Tony, Christoph Hellwig, linux-kernel, discuss, linux-ia64,
	ak, Mallick, Asit K

On Fri, Sep 23, 2005 at 02:50:23PM -0400, John W. Linville wrote:
> On Fri, Sep 23, 2005 at 11:27:26AM -0700, Luck, Tony wrote:
> > >> It should just go away once the GFP_DMA32 code is merged.
> > >
> > >Is that the plan?  I suppose it makes sense.
> 
> > I don't have a good (or in fact any) understanding of the impact
> > of GFP_DMA32 on ia64.  People tell me it will all be good, but I'd
> > like to hear from someone running it.
>  
> All the patches I saw were for x86_64.  So, the impact on ia64 should
> be minimal... :-)

The impact on arch/ia64 might be minimal. My understanding was that
all the drivers that support 32-bit devices would need tweaks
to use the GFP_DMA32 flag.  Is this still the plan?
	http://www.ussg.iu.edu/hypermail/linux/kernel/9901.3/0323.html
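
(The sort of per-driver tweak I have in mind would be roughly the
sketch below -- purely hypothetical, just to show the flag change;
ring and RING_BYTES are made-up names:)

	void *ring;	/* placeholder buffer for this example */

	/* old: GFP_DMA means the low 16MB on x86_64, far more
	 * restrictive than a 32-bit capable device actually needs */
	ring = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA,
					get_order(RING_BYTES));

	/* new: GFP_DMA32 would ask for memory below 4GB instead */
	ring = (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32,
					get_order(RING_BYTES));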

Most of what I've read in Andi's patch otherwise makes sense:
	http://lwn.net/Articles/152337/

But I didn't see Andi suggest that swiotlb should go completely away.

thanks,
grant

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-23 18:27 ` Luck, Tony
@ 2005-09-26  7:01   ` Andi Kleen
  -1 siblings, 0 replies; 99+ messages in thread
From: Andi Kleen @ 2005-09-26  7:01 UTC (permalink / raw)
  To: Luck, Tony
  Cc: John W. Linville, Christoph Hellwig, linux-kernel, discuss,
	linux-ia64, Mallick, Asit K

On Friday 23 September 2005 20:27, Luck, Tony wrote:
> >> It should just go away once the GFP_DMA32 code is merged.
> >
> >Is that the plan?  I suppose it makes sense.
> >
> >So, move it to driver/pci/swiotlb.c?  Or just leave it where it is?
> >
> >Either way, I'll redo the other patches to reflect the correct
> >location.
>
> I don't have a good (or in fact any) understanding of the impact
> of GFP_DMA32 on ia64.  People tell me it will all be good, but I'd
> like to hear from someone running it.

It shouldn't change anything for IA64. GFP_DMA32 just becomes
an alias for your GFP_DMA.  One advantage is that drivers can
now be source-level compatible between x86-64 and ia64
for this (although they should really be using pci_alloc_consistent()
instead).
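
(i.e. roughly along these lines -- a made-up sketch, where pdev and
RING_BYTES are just placeholders for the example:)

	dma_addr_t ring_dma;
	void *ring;

	/* the DMA API picks memory the device's mask can reach, so the
	 * driver doesn't have to guess between GFP_DMA and GFP_DMA32 */
	ring = pci_alloc_consistent(pdev, RING_BYTES, &ring_dma);
	if (!ring)
		return -ENOMEM;

	/* ... use ring / ring_dma with the device ... */

	pci_free_consistent(pdev, RING_BYTES, ring, ring_dma);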

> If it is good, and if it is coming soon, then there is no point
> moving swiotlb.  But I don't know the answers to either of those
> questions.

swiotlb is still needed even with GFP_DMA32. Just move it.
2.6.15 won't have it either.

-Andi


^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-23 18:27 ` Luck, Tony
@ 2005-09-26 21:01   ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Conduct some maintenance of the swiotlb code:

	-- Move the code from arch/ia64/lib to drivers/pci

	-- Cleanup some cruft (code duplication)

	-- Add support for syncing sub-ranges of mappings

	-- Add support for syncing DMA_BIDIRECTIONAL mappings

	-- Comment fixup & change record

Also, tack on an x86_64 implementation of dma_sync_single_range_for_cpu
and dma_sync_single_range_for_device.  This makes use of the new
swiotlb sync sub-range support.
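
(Typical use would be a driver that maps one large buffer but only
needs to sync the piece a descriptor points at.  A rough, hypothetical
sketch -- dev, buf, BUF_BYTES, offset, len and process() are all
placeholders, not taken from a real driver:)

	dma_addr_t handle;

	handle = dma_map_single(dev, buf, BUF_BYTES, DMA_FROM_DEVICE);

	/* device has filled in 'len' bytes starting at 'offset' */
	dma_sync_single_range_for_cpu(dev, handle, offset, len,
				      DMA_FROM_DEVICE);
	process(buf + offset, len);

	/* hand just that piece of the mapping back to the device */
	dma_sync_single_range_for_device(dev, handle, offset, len,
					 DMA_FROM_DEVICE);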

In this round, the new location for swiotlb is drivers/pci/swiotlb.c.
This is the result of discussions on lkml pointing out that swiotlb is
closely related to PCI.

Patches to follow...

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 1/5] swiotlb: move from arch/ia64/lib to drivers/pci
  2005-09-26 21:01   ` John W. Linville
@ 2005-09-26 21:01     ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to drivers/pci and fixes up the appropriate Makefile
and Kconfig files. No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 drivers/pci/Makefile        |    2 
 drivers/pci/swiotlb.c       |  657 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)

--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	                   ( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+ 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contigous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with superceeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+                    && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -44,3 +44,5 @@ endif
 # Build PCI Express stuff if needed
 obj-$(CONFIG_PCIEPORTBUS) += pcie/
 
+# Build swiotlb stuff if needed
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
--- a/arch/x86_64/kernel/Makefile
+++ b/arch/x86_64/kernel/Makefile
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)		+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o		:= $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+       bool
+       default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
--- a/arch/ia64/lib/swiotlb.c
+++ b/arch/ia64/lib/swiotlb.c
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	                   ( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
-static int __init
-setup_io_tlb_npages(char *str)
-{
-	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
-		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-	if (*str == ',')
-		++str;
-	if (!strcmp(str, "force"))
-		swiotlb_force = 1;
-	return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
- */
-void
-swiotlb_init_with_default_size (size_t default_size)
-{
-	unsigned long i;
-
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-
-	/*
-	 * Get IO TLB memory from the low pages
-	 */
-	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
-					       (1 << IO_TLB_SHIFT));
-	if (!io_tlb_start)
-		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
-	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
-	/*
-	 * Get the overflow emergency buffer
-	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
-	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
-}
-
-void
-swiotlb_init (void)
-{
-	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
-}
-
-static inline int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
-	dma_addr_t mask = 0xffffffff;
-	/* If the device has a mask, use it, otherwise default to 32 bits */
-	if (hwdev && hwdev->dma_mask)
-		mask = *hwdev->dma_mask;
-	return (addr & ~mask) != 0;
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-map_single(struct device *hwdev, char *buffer, size_t size, int dir)
-{
-	unsigned long flags;
-	char *dma_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-
-	/*
-	 * For mappings greater than a page, we limit the stride (and
-	 * hence alignment) to a page size.
-	 */
-	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	if (size > PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
-
-	if (!nslots)
-		BUG();
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		wrap = index = ALIGN(io_tlb_index, stride);
-
-		if (index >= io_tlb_nslabs)
-			wrap = index = 0;
-
-		do {
-			/*
-			 * If we find a slot that indicates we have 'nslots'
-			 * number of contiguous buffers, we allocate the
-			 * buffers from that slot and mark the entries as '0'
-			 * indicating unavailable.
-			 */
-			if (io_tlb_list[index] >= nslots) {
-				int count = 0;
-
-				for (i = index; i < (int) (index + nslots); i++)
-					io_tlb_list[i] = 0;
-				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-					io_tlb_list[i] = ++count;
-				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-				/*
-				 * Update the indices to avoid searching in
-				 * the next round.
-				 */
-				io_tlb_index = ((index + nslots) < io_tlb_nslabs
-						? (index + nslots) : 0);
-
-				goto found;
-			}
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-		} while (index != wrap);
-
-		spin_unlock_irqrestore(&io_tlb_lock, flags);
-		return NULL;
-	}
-  found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	io_tlb_orig_addr[index] = buffer;
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		memcpy(dma_addr, buffer, size);
-
-	return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	unsigned long flags;
-	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		/*
-		 * bounce... copy the data back into the original buffer * and
-		 * delete the bounce buffer.
-		 */
-		memcpy(buffer, dma_addr, size);
-
-	/*
-	 * Return the buffer to the free list by setting the corresponding
-	 * entries to indicate the number of contigous entries available.
-	 * While returning the entries to the free list, we merge the entries
-	 * with slots below and above the pool being returned.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
-		/*
-		 * Step 1: return the slots to the free list, merging the
-		 * slots with superceeding slots
-		 */
-		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
-		/*
-		 * Step 2: merge the returned slots with the preceding slots,
-		 * if available (non zero)
-		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
-	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
-		BUG();
-}
-
-void *
-swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-		       dma_addr_t *dma_handle, int flags)
-{
-	unsigned long dev_addr;
-	void *ret;
-	int order = get_order(size);
-
-	/*
-	 * XXX fix me: the DMA API should pass us an explicit DMA mask
-	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
-	 * bit range instead of a 16MB one).
-	 */
-	flags |= GFP_DMA;
-
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
-		/*
-		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
-		 */
-		free_pages((unsigned long) ret, order);
-		ret = NULL;
-	}
-	if (!ret) {
-		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
-		 */
-		dma_addr_t handle;
-		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
-		if (dma_mapping_error(handle))
-			return NULL;
-
-		ret = phys_to_virt(handle);
-	}
-
-	memset(ret, 0, size);
-	dev_addr = virt_to_phys(ret);
-
-	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr)) {
-		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
-		       (unsigned long long)*hwdev->dma_mask, dev_addr);
-		panic("swiotlb_alloc_coherent: allocated memory is out of "
-		      "range for device");
-	}
-	*dma_handle = dev_addr;
-	return ret;
-}
-
-void
-swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
-		      dma_addr_t dma_handle)
-{
-	if (!(vaddr >= (void *)io_tlb_start
-                    && vaddr < (void *)io_tlb_end))
-		free_pages((unsigned long) vaddr, get_order(size));
-	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void
-swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
-{
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
-	       "device %s\n", size, dev ? dev->bus_id : "?");
-
-	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
-	}
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
- */
-dma_addr_t
-swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
-{
-	unsigned long dev_addr = virt_to_phys(ptr);
-	void *map;
-
-	if (dir == DMA_NONE)
-		BUG();
-	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
-		return dev_addr;
-
-	/*
-	 * Oh well, have to allocate and map a bounce buffer.
-	 */
-	map = map_single(hwdev, ptr, size, dir);
-	if (!map) {
-		swiotlb_full(hwdev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
-	}
-
-	dev_addr = virt_to_phys(map);
-
-	/*
-	 * Ensure that the address returned is DMA'ble
-	 */
-	if (address_needs_mapping(hwdev, dev_addr))
-		panic("map_single: bounce buffer is not DMA'ble");
-
-	return dev_addr;
-}
-
-/*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
- * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
- * flush them when they get mapped into an executable vm-area.
- */
-static void
-mark_clean(void *addr, size_t size)
-{
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page(pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
-	}
-}
-
-/*
- * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
-		     int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		unmap_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Make physical memory consistent for a single streaming mode DMA translation
- * after a transfer.
- *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
- * address back to the card, you must first perform a
- * swiotlb_dma_sync_for_device, and then the device again owns the buffer
- */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-void
-swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-			       size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
- * interface.  Here the scatter gather list elements are each tagged with the
- * appropriate dma address and length.  They are obtained via
- * sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
- * same here.
- */
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-	       int dir)
-{
-	void *addr;
-	unsigned long dev_addr;
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++) {
-		addr = SG_ENT_VIRT_ADDRESS(sg);
-		dev_addr = virt_to_phys(addr);
-		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
-			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
-			if (!sg->dma_address) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
-				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
-				return 0;
-			}
-		} else
-			sg->dma_address = dev_addr;
-		sg->dma_length = sg->length;
-	}
-	return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
- */
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-		 int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
-		else if (dir == DMA_FROM_DEVICE)
-			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA translations
- * after a transfer.
- *
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
- * and usage.
- */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-void
-swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			   int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-int
-swiotlb_dma_mapping_error(dma_addr_t dma_addr)
-{
-	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
-}
-
-/*
- * Return whether the given PCI device DMA address mask can be supported
- * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function.
- */
-int
-swiotlb_dma_supported (struct device *hwdev, u64 mask)
-{
-	return (virt_to_phys (io_tlb_end) - 1) <= mask;
-}
-
-EXPORT_SYMBOL(swiotlb_init);
-EXPORT_SYMBOL(swiotlb_map_single);
-EXPORT_SYMBOL(swiotlb_unmap_single);
-EXPORT_SYMBOL(swiotlb_map_sg);
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_single_for_device);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
-EXPORT_SYMBOL(swiotlb_dma_mapping_error);
-EXPORT_SYMBOL(swiotlb_alloc_coherent);
-EXPORT_SYMBOL(swiotlb_free_coherent);
-EXPORT_SYMBOL(swiotlb_dma_supported);
--- a/arch/ia64/lib/Makefile
+++ b/arch/ia64/lib/Makefile
@@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3
 	bitop.o checksum.o clear_page.o csum_partial_copy.o		\
 	clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o	\
 	flush.o ip_fast_csum.o do_csum.o				\
-	memset.o strlen.o swiotlb.o
+	memset.o strlen.o
 
 lib-$(CONFIG_ITANIUM)	+= copy_page.o copy_user.o memcpy.o
 lib-$(CONFIG_MCKINLEY)	+= copy_page_mck.o memcpy_mck.o

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-26 21:01   ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Conduct some maintenance of the swiotlb code:

	-- Move the code from arch/ia64/lib to drivers/pci

	-- Cleanup some cruft (code duplication)

	-- Add support for syncing sub-ranges of mappings

	-- Add support for syncing DMA_BIDIRECTIONAL mappings

	-- Comment fixup & change record

Also, tack on an x86_64 implementation of dma_sync_single_range_for_cpu
and dma_sync_single_range_for_device.  This makes use of the new
swiotlb sync sub-range support.

In this round, the new location for swiotlb is drivers/pci/swiotlb.c.
This is the result of discussions on lkml pointing out that swiotlb is
closely related to PCI.

Patches to follow...

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 1/5] swiotlb: move from arch/ia64/lib to drivers/pci
@ 2005-09-26 21:01     ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to drivers/pci and fixes up the appropriate Makefile
and Kconfig files. No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 drivers/pci/Makefile        |    2 
 drivers/pci/swiotlb.c       |  657 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)

--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	                   ( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+ 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contiguous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with succeeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+                    && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -44,3 +44,5 @@ endif
 # Build PCI Express stuff if needed
 obj-$(CONFIG_PCIEPORTBUS) += pcie/
 
+# Build swiotlb stuff if needed
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
--- a/arch/x86_64/kernel/Makefile
+++ b/arch/x86_64/kernel/Makefile
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)		+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o		:= $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+       bool
+       default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
--- a/arch/ia64/lib/swiotlb.c
+++ b/arch/ia64/lib/swiotlb.c
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	                   ( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
-static int __init
-setup_io_tlb_npages(char *str)
-{
-	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
-		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-	if (*str == ',')
-		++str;
-	if (!strcmp(str, "force"))
-		swiotlb_force = 1;
-	return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
- */
-void
-swiotlb_init_with_default_size (size_t default_size)
-{
-	unsigned long i;
-
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-
-	/*
-	 * Get IO TLB memory from the low pages
-	 */
-	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
-					       (1 << IO_TLB_SHIFT));
-	if (!io_tlb_start)
-		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
-	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
-	/*
-	 * Get the overflow emergency buffer
-	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
-	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
-}
-
-void
-swiotlb_init (void)
-{
-	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
-}
-
-static inline int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
-	dma_addr_t mask = 0xffffffff;
-	/* If the device has a mask, use it, otherwise default to 32 bits */
-	if (hwdev && hwdev->dma_mask)
-		mask = *hwdev->dma_mask;
-	return (addr & ~mask) != 0;
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-map_single(struct device *hwdev, char *buffer, size_t size, int dir)
-{
-	unsigned long flags;
-	char *dma_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-
-	/*
-	 * For mappings greater than a page, we limit the stride (and
-	 * hence alignment) to a page size.
-	 */
-	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	if (size > PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
-
-	if (!nslots)
-		BUG();
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		wrap = index = ALIGN(io_tlb_index, stride);
-
-		if (index >= io_tlb_nslabs)
-			wrap = index = 0;
-
-		do {
-			/*
-			 * If we find a slot that indicates we have 'nslots'
-			 * number of contiguous buffers, we allocate the
-			 * buffers from that slot and mark the entries as '0'
-			 * indicating unavailable.
-			 */
-			if (io_tlb_list[index] >= nslots) {
-				int count = 0;
-
-				for (i = index; i < (int) (index + nslots); i++)
-					io_tlb_list[i] = 0;
-				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-					io_tlb_list[i] = ++count;
-				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-				/*
-				 * Update the indices to avoid searching in
-				 * the next round.
-				 */
-				io_tlb_index = ((index + nslots) < io_tlb_nslabs
-						? (index + nslots) : 0);
-
-				goto found;
-			}
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-		} while (index != wrap);
-
-		spin_unlock_irqrestore(&io_tlb_lock, flags);
-		return NULL;
-	}
-  found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	io_tlb_orig_addr[index] = buffer;
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		memcpy(dma_addr, buffer, size);
-
-	return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	unsigned long flags;
-	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		/*
-		 * bounce... copy the data back into the original buffer * and
-		 * delete the bounce buffer.
-		 */
-		memcpy(buffer, dma_addr, size);
-
-	/*
-	 * Return the buffer to the free list by setting the corresponding
-	 * entries to indicate the number of contiguous entries available.
-	 * While returning the entries to the free list, we merge the entries
-	 * with slots below and above the pool being returned.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
-		/*
-		 * Step 1: return the slots to the free list, merging the
-		 * slots with succeeding slots
-		 */
-		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
-		/*
-		 * Step 2: merge the returned slots with the preceding slots,
-		 * if available (non zero)
-		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
-	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
-		BUG();
-}
-
-void *
-swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-		       dma_addr_t *dma_handle, int flags)
-{
-	unsigned long dev_addr;
-	void *ret;
-	int order = get_order(size);
-
-	/*
-	 * XXX fix me: the DMA API should pass us an explicit DMA mask
-	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
-	 * bit range instead of a 16MB one).
-	 */
-	flags |= GFP_DMA;
-
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
-		/*
-		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
-		 */
-		free_pages((unsigned long) ret, order);
-		ret = NULL;
-	}
-	if (!ret) {
-		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
-		 */
-		dma_addr_t handle;
-		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
-		if (dma_mapping_error(handle))
-			return NULL;
-
-		ret = phys_to_virt(handle);
-	}
-
-	memset(ret, 0, size);
-	dev_addr = virt_to_phys(ret);
-
-	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr)) {
-		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
-		       (unsigned long long)*hwdev->dma_mask, dev_addr);
-		panic("swiotlb_alloc_coherent: allocated memory is out of "
-		      "range for device");
-	}
-	*dma_handle = dev_addr;
-	return ret;
-}
-
-void
-swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
-		      dma_addr_t dma_handle)
-{
-	if (!(vaddr >= (void *)io_tlb_start
-                    && vaddr < (void *)io_tlb_end))
-		free_pages((unsigned long) vaddr, get_order(size));
-	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void
-swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
-{
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
-	       "device %s\n", size, dev ? dev->bus_id : "?");
-
-	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
-	}
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
- */
-dma_addr_t
-swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
-{
-	unsigned long dev_addr = virt_to_phys(ptr);
-	void *map;
-
-	if (dir == DMA_NONE)
-		BUG();
-	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
-		return dev_addr;
-
-	/*
-	 * Oh well, have to allocate and map a bounce buffer.
-	 */
-	map = map_single(hwdev, ptr, size, dir);
-	if (!map) {
-		swiotlb_full(hwdev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
-	}
-
-	dev_addr = virt_to_phys(map);
-
-	/*
-	 * Ensure that the address returned is DMA'ble
-	 */
-	if (address_needs_mapping(hwdev, dev_addr))
-		panic("map_single: bounce buffer is not DMA'ble");
-
-	return dev_addr;
-}
-
-/*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
- * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
- * flush them when they get mapped into an executable vm-area.
- */
-static void
-mark_clean(void *addr, size_t size)
-{
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page(pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
-	}
-}
-
-/*
- * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
-		     int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		unmap_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Make physical memory consistent for a single streaming mode DMA translation
- * after a transfer.
- *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
- * address back to the card, you must first perform a
- * swiotlb_dma_sync_for_device, and then the device again owns the buffer
- */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-void
-swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-			       size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
- * interface.  Here the scatter gather list elements are each tagged with the
- * appropriate dma address and length.  They are obtained via
- * sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
- * same here.
- */
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-	       int dir)
-{
-	void *addr;
-	unsigned long dev_addr;
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++) {
-		addr = SG_ENT_VIRT_ADDRESS(sg);
-		dev_addr = virt_to_phys(addr);
-		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
-			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
-			if (!sg->dma_address) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
-				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
-				return 0;
-			}
-		} else
-			sg->dma_address = dev_addr;
-		sg->dma_length = sg->length;
-	}
-	return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
- */
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-		 int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
-		else if (dir == DMA_FROM_DEVICE)
-			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA translations
- * after a transfer.
- *
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
- * and usage.
- */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-void
-swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			   int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-int
-swiotlb_dma_mapping_error(dma_addr_t dma_addr)
-{
-	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
-}
-
-/*
- * Return whether the given PCI device DMA address mask can be supported
- * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function.
- */
-int
-swiotlb_dma_supported (struct device *hwdev, u64 mask)
-{
-	return (virt_to_phys (io_tlb_end) - 1) <= mask;
-}
-
-EXPORT_SYMBOL(swiotlb_init);
-EXPORT_SYMBOL(swiotlb_map_single);
-EXPORT_SYMBOL(swiotlb_unmap_single);
-EXPORT_SYMBOL(swiotlb_map_sg);
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_single_for_device);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
-EXPORT_SYMBOL(swiotlb_dma_mapping_error);
-EXPORT_SYMBOL(swiotlb_alloc_coherent);
-EXPORT_SYMBOL(swiotlb_free_coherent);
-EXPORT_SYMBOL(swiotlb_dma_supported);
--- a/arch/ia64/lib/Makefile
+++ b/arch/ia64/lib/Makefile
@@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3
 	bitop.o checksum.o clear_page.o csum_partial_copy.o		\
 	clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o	\
 	flush.o ip_fast_csum.o do_csum.o				\
-	memset.o strlen.o swiotlb.o
+	memset.o strlen.o
 
 lib-$(CONFIG_ITANIUM)	+= copy_page.o copy_user.o memcpy.o
 lib-$(CONFIG_MCKINLEY)	+= copy_page_mck.o memcpy_mck.o

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 3/5] swiotlb: support syncing sub-ranges of mappings
  2005-09-26 21:01       ` John W. Linville
@ 2005-09-26 21:01         ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

This patch implements swiotlb_sync_single_range_for_{cpu,device}. This
is intended to support an x86_64 implementation of
dma_sync_single_range_for_{cpu,device}.
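
As a rough sketch of the intended driver-side usage (not part of this
patch -- every name below is made up for illustration, not taken from an
in-tree driver), a driver that maps a whole descriptor ring with
dma_map_single() could then sync only the descriptor it is about to
touch instead of the entire ring:

	/* Hypothetical fragment; needs <linux/dma-mapping.h>.  Assumes
	 * "ring_dma" came from dma_map_single() over the whole ring of
	 * "desc_size"-byte descriptors, mapped DMA_FROM_DEVICE. */
	static void example_poll_descriptor(struct device *dev,
					    dma_addr_t ring_dma,
					    unsigned long idx,
					    size_t desc_size)
	{
		unsigned long offset = idx * desc_size;

		dma_sync_single_range_for_cpu(dev, ring_dma, offset,
					      desc_size, DMA_FROM_DEVICE);
		/* ... cpu examines descriptor 'idx' here ... */
		dma_sync_single_range_for_device(dev, ring_dma, offset,
						 desc_size, DMA_FROM_DEVICE);
	}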

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c        |   33 +++++++++++++++++++++++++++++++++
 include/asm-x86_64/swiotlb.h |    8 ++++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -521,6 +521,37 @@ swiotlb_sync_single_for_device(struct de
 }
 
 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -648,6 +679,8 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					    dma_addr_t dev_addr,
 					    size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				     struct scatterlist *sg, int nelems,
 				     int dir);

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 5/5] swiotlb: file header comments
  2005-09-26 21:01           ` John W. Linville
@ 2005-09-26 21:01             ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Change comment at top of swiotlb.c to reflect that the code is shared
with EM64T (i.e. Intel x86_64). Also add an entry for myself so that
if I "broke it", everyone knows who "bought it"... :-)

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -1,7 +1,7 @@
 /*
  * Dynamic DMA mapping support.
  *
- * This implementation is for IA-64 platforms that do not support
+ * This implementation is for IA-64 and EM64T platforms that do not support
  * I/O TLBs (aka DMA address translation hardware).
  * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
  * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
@@ -11,7 +11,9 @@
  * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
  * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
  *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ * 04/07/.. ak		Better overflow handling. Assorted fixes.
+ * 05/09/10 linville	Add support for syncing ranges, support syncing for
+ *			DMA_BIDIRECTIONAL mappings, miscellaneous cleanup.
  */
 
 #include <linux/cache.h>

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 4/5] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-26 21:01         ` John W. Linville
@ 2005-09-26 21:01           ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The current implementation of sync_single in swiotlb.c chokes on
DMA_BIDIRECTIONAL mappings. This patch adds the capability to sync
those mappings, and optimizes other syncs by accounting for the
sync target (i.e. cpu or device) in addition to the DMA direction of
the mapping.
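
For background (again just a sketch, not code from this patch; dev, buf
and len are illustrative names), the case that previously tripped
sync_single is a buffer that the device and the cpu take turns touching,
mapped DMA_BIDIRECTIONAL:

	/* Hypothetical fragment; needs <linux/dma-mapping.h>. */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);

	/* ... device DMAs into and out of buf ... */

	dma_sync_single_for_cpu(dev, handle, len, DMA_BIDIRECTIONAL);
	/* cpu reads the device's results and writes new data into buf */
	dma_sync_single_for_device(dev, handle, len, DMA_BIDIRECTIONAL);
	/* device owns the buffer again */

With swiotlb bouncing, the for_cpu sync has to copy bounce buffer ->
original buffer while the for_device sync has to copy original buffer ->
bounce buffer, so the copy direction can no longer be derived from the
mapping direction alone; hence the new sync target argument.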

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c |   62 ++++++++++++++++++++++++++++++++------------------
 1 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -49,6 +49,14 @@
  */
 #define IO_TLB_SHIFT 11
 
+/*
+ * Enumeration for sync targets
+ */
+enum dma_sync_target {
+	SYNC_FOR_CPU = 0,
+	SYNC_FOR_DEVICE = 1,
+};
+
 int swiotlb_force;
 
 /*
@@ -295,21 +303,28 @@ unmap_single(struct device *hwdev, char 
 }
 
 static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+sync_single(struct device *hwdev, char *dma_addr, size_t size,
+	    int dir, int target)
 {
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	char *buffer = io_tlb_orig_addr[index];
 
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
+	switch (target) {
+	case SYNC_FOR_CPU:
+		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(buffer, dma_addr, size);
+		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+			BUG();
+		break;
+	case SYNC_FOR_DEVICE:
+		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(dma_addr, buffer, size);
+		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+			BUG();
+		break;
+	default:
 		BUG();
+	}
 }
 
 void *
@@ -494,14 +509,14 @@ swiotlb_unmap_single(struct device *hwde
  */
 static inline void
 swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
-		    size_t size, int dir)
+		    size_t size, int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -510,14 +525,14 @@ void
 swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
 }
 
 /*
@@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
  */
 static inline void
 swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
-			  unsigned long offset, size_t size, int dir)
+			  unsigned long offset, size_t size,
+			  int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -541,14 +557,16 @@ void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_DEVICE);
 }
 
 /*
@@ -627,7 +645,7 @@ swiotlb_unmap_sg(struct device *hwdev, s
  */
 static inline void
 swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
-		int nelems, int dir)
+		int nelems, int dir, int target)
 {
 	int i;
 
@@ -637,21 +655,21 @@ swiotlb_sync_sg(struct device *hwdev, st
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+				    sg->dma_length, dir, target);
 }
 
 void
 swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 			int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 2/5] swiotlb: cleanup some code duplication cruft
  2005-09-26 21:01     ` John W. Linville
@ 2005-09-26 21:01       ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The implementations of swiotlb_sync_single_for_{cpu,device} are
identical. Likewise for swiotlb_sync_sg_for_{cpu,device}. This patch
moves the guts of those functions to two new inline functions, and
calls the appropriate one from the bodies of those functions.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c |   45 ++++++++++++++++++++++-----------------------
 1 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+		    size_t size, int dir)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
@@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
 }
 
 /*
@@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 2/5] swiotlb: cleanup some code duplication cruft
@ 2005-09-26 21:01       ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The implementations of swiotlb_sync_single_for_{cpu,device} are
identical. Likewise for swiotlb_sync_sg_for_{cpu,device}. This patch
moves the guts of those functions to two new inline functions, and
calls the appropriate one from the bodies of those functions.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c |   45 ++++++++++++++++++++++-----------------------
 1 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+		    size_t size, int dir)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
@@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
 }
 
 /*
@@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 3/5] swiotlb: support syncing sub-ranges of mappings
@ 2005-09-26 21:01         ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

This patch implements swiotlb_sync_single_range_for_{cpu,device}. This
is intended to support an x86_64 implementation of
dma_sync_single_range_for_{cpu,device}.
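
[Editorial aside, not part of this patch: a sketch of how a driver might
use the sub-range syncs, with hypothetical ring/descriptor names. One
large streaming mapping holds many descriptors, and only the descriptor
being examined is bounced, rather than the whole buffer:]

	struct my_desc *desc = &ring->descs[idx];	/* hypothetical ring */
	unsigned long offset = idx * sizeof(*desc);

	/* give the CPU a coherent view of just this descriptor */
	dma_sync_single_range_for_cpu(dev, ring->dma_handle, offset,
				      sizeof(*desc), DMA_FROM_DEVICE);
	if (desc->status & MY_DESC_DONE)
		handle_completion(ring, idx);		/* hypothetical helper */
	/* hand ownership of the descriptor back to the device */
	dma_sync_single_range_for_device(dev, ring->dma_handle, offset,
					 sizeof(*desc), DMA_FROM_DEVICE);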

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c        |   33 +++++++++++++++++++++++++++++++++
 include/asm-x86_64/swiotlb.h |    8 ++++++++
 2 files changed, 41 insertions(+)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -521,6 +521,37 @@ swiotlb_sync_single_for_device(struct de
 }
 
 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -648,6 +679,8 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);
diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					    dma_addr_t dev_addr,
 					    size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				     struct scatterlist *sg, int nelems,
 				     int dir);

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 4/5] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
@ 2005-09-26 21:01           ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The current implementation of sync_single in swiotlb.c chokes on
DMA_BIDIRECTIONAL mappings. This patch adds the capability to sync
those mappings, and optimizes other syncs by accounting for the
sync target (i.e. cpu or device) in addition to the DMA direction of
the mapping.
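
[Editorial aside, not part of the patch: why the sync target matters.
With a DMA_BIDIRECTIONAL mapping the direction alone cannot tell
sync_single which way to bounce, so the caller's intent (cpu vs.
device) has to be passed down. A sketch with hypothetical names:]

	buf_dma = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);

	/* ... device DMAs results into buf ... */

	/* CPU takes ownership: bounce-buffer contents copied back to buf */
	dma_sync_single_for_cpu(dev, buf_dma, len, DMA_BIDIRECTIONAL);
	patch_up_payload(buf, len);	/* hypothetical CPU-side fixup */
	/* device takes ownership again: buf copied out to the bounce buffer */
	dma_sync_single_for_device(dev, buf_dma, len, DMA_BIDIRECTIONAL);

	/* ... device DMAs out of buf ... */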

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c |   62 ++++++++++++++++++++++++++++++++------------------
 1 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -49,6 +49,14 @@
  */
 #define IO_TLB_SHIFT 11
 
+/*
+ * Enumeration for sync targets
+ */
+enum dma_sync_target {
+	SYNC_FOR_CPU = 0,
+	SYNC_FOR_DEVICE = 1,
+};
+
 int swiotlb_force;
 
 /*
@@ -295,21 +303,28 @@ unmap_single(struct device *hwdev, char 
 }
 
 static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+sync_single(struct device *hwdev, char *dma_addr, size_t size,
+	    int dir, int target)
 {
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	char *buffer = io_tlb_orig_addr[index];
 
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
+	switch (target) {
+	case SYNC_FOR_CPU:
+		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(buffer, dma_addr, size);
+		else if (dir != DMA_TO_DEVICE && dir != DMA_NONE)
+			BUG();
+		break;
+	case SYNC_FOR_DEVICE:
+		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(dma_addr, buffer, size);
+		else if (dir != DMA_FROM_DEVICE && dir != DMA_NONE)
+			BUG();
+		break;
+	default:
 		BUG();
+	}
 }
 
 void *
@@ -494,14 +509,14 @@ swiotlb_unmap_single(struct device *hwde
  */
 static inline void
 swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
-		    size_t size, int dir)
+		    size_t size, int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -510,14 +525,14 @@ void
 swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
 }
 
 /*
@@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
  */
 static inline void
 swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
-			  unsigned long offset, size_t size, int dir)
+			  unsigned long offset, size_t size,
+			  int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -541,14 +557,16 @@ void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_DEVICE);
 }
 
 /*
@@ -627,7 +645,7 @@ swiotlb_unmap_sg(struct device *hwdev, s
  */
 static inline void
 swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
-		int nelems, int dir)
+		int nelems, int dir, int target)
 {
 	int i;
 
@@ -637,21 +655,21 @@ swiotlb_sync_sg(struct device *hwdev, st
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+				    sg->dma_length, dir, target);
 }
 
 void
 swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 			int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 5/5] swiotlb: file header comments
@ 2005-09-26 21:01             ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:01 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Change comment at top of swiotlb.c to reflect that the code is shared
with EM64T (i.e. Intel x86_64). Also add an entry for myself so that
if I "broke it", everyone knows who "bought it"... :-)

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 drivers/pci/swiotlb.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/swiotlb.c b/drivers/pci/swiotlb.c
--- a/drivers/pci/swiotlb.c
+++ b/drivers/pci/swiotlb.c
@@ -1,7 +1,7 @@
 /*
  * Dynamic DMA mapping support.
  *
- * This implementation is for IA-64 platforms that do not support
+ * This implementation is for IA-64 and EM64T platforms that do not support
  * I/O TLBs (aka DMA address translation hardware).
  * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
  * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
@@ -11,7 +11,9 @@
  * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
  * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
  *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ * 04/07/.. ak		Better overflow handling. Assorted fixes.
+ * 05/09/10 linville	Add support for syncing ranges, support syncing for
+ *			DMA_BIDIRECTIONAL mappings, miscellaneous cleanup.
  */
 
 #include <linux/cache.h>

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-26 21:01   ` John W. Linville
@ 2005-09-26 21:33     ` Matthew Wilcox
  -1 siblings, 0 replies; 99+ messages in thread
From: Matthew Wilcox @ 2005-09-26 21:33 UTC (permalink / raw)
  To: John W. Linville
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, tony.luck,
	Asit.K.Mallick, gregkh

On Mon, Sep 26, 2005 at 05:01:19PM -0400, John W. Linville wrote:
> In this round, the new location for swiotlb is driver/pci/swiotlb.c.
> This is the result of discussions on lkml pointing-out that swiotlb is
> closely related to PCI.

Uh?  It implements DMA services, which aren't limited to PCI at all.
Despite the file including <linux/pci.h> and <asm/pci.h> (which should
probably both be removed), there's not a single PCI-related function in
this file.  You originally moved it to lib/ which made much more sense.

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-26 21:33     ` Matthew Wilcox
  0 siblings, 0 replies; 99+ messages in thread
From: Matthew Wilcox @ 2005-09-26 21:33 UTC (permalink / raw)
  To: John W. Linville
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, tony.luck,
	Asit.K.Mallick, gregkh

On Mon, Sep 26, 2005 at 05:01:19PM -0400, John W. Linville wrote:
> In this round, the new location for swiotlb is driver/pci/swiotlb.c.
> This is the result of discussions on lkml pointing-out that swiotlb is
> closely related to PCI.

Uh?  It implements DMA services, which aren't limited to PCI at all.
Despite the file including <linux/pci.h> and <asm/pci.h> (which should
probably both be removed), there's not a single PCI-related function in
this file.  You originally moved it to lib/ which made much more sense.

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-26 21:33     ` Matthew Wilcox
@ 2005-09-26 21:54       ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, tony.luck,
	Asit.K.Mallick, gregkh

On Mon, Sep 26, 2005 at 03:33:26PM -0600, Matthew Wilcox wrote:
> On Mon, Sep 26, 2005 at 05:01:19PM -0400, John W. Linville wrote:
> > In this round, the new location for swiotlb is driver/pci/swiotlb.c.
> > This is the result of discussions on lkml pointing-out that swiotlb is
> > closely related to PCI.
> 
> Uh?  It implements DMA services, which aren't limited to PCI at all.
> Despite the file including <linux/pci.h> and <asm/pci.h> (which should
> probably both be removed), there's not a single PCI-related function in
> this file.  You originally moved it to lib/ which made much more sense.

Well, now, this is a quandary isn't it...  Actually, I'm inclined to
agree with you.

Tony, et al., care to restate your reasoning for moving it under
drivers/pci?

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-26 21:54       ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-26 21:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, tony.luck,
	Asit.K.Mallick, gregkh

On Mon, Sep 26, 2005 at 03:33:26PM -0600, Matthew Wilcox wrote:
> On Mon, Sep 26, 2005 at 05:01:19PM -0400, John W. Linville wrote:
> > In this round, the new location for swiotlb is driver/pci/swiotlb.c.
> > This is the result of discussions on lkml pointing-out that swiotlb is
> > closely related to PCI.
> 
> Uh?  It implements DMA services, which aren't limited to PCI at all.
> Despite the file including <linux/pci.h> and <asm/pci.h> (which should
> probably both be removed), there's not a single PCI-related function in
> this file.  You originally moved it to lib/ which made much more sense.

Well, now, this is a quandary isn't it...  Actually, I'm inclined to
agree with you.

Tony, et al., care to restate your reasoning for moving it under
drivers/pci?

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* RE: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-26 21:01   ` John W. Linville
@ 2005-09-26 22:08 ` Luck, Tony
  -1 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-09-26 22:08 UTC (permalink / raw)
  To: John W. Linville, Matthew Wilcox
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, Mallick,
	Asit K, gregkh

[-- Attachment #1: Type: text/plain, Size: 1040 bytes --]

>Well, now, this is a quandary isn't it...  Actually, I'm inclined to
>agree with you.
>
>Tony, et al., care to restate your reasoning for moving it under
>drivers/pci?

Historically swiotlb.c was written with just PCI in mind (hence
all the comments ("... implement the PCI DMA API",  "The PCI address
to use is returned", "teardown the PCI dma mapping") and a few
error messages ("PCI-DMA: Out of SW-IOMMU space ...", "PCI-DMA: Memory
would be corrupted", "PCI-DMA: Random memory would be DMAed").
Perhaps back then the only options were PCI and ISA????

Matthew is probably technically right in that this is a more
generic interface ... but is it actually being used for anything
other than PCI?  Will it ever be so used?

Ignoring the comments and error messages, the attached patch
removes the <linux/pci.h> and <asm/pci.h> includes (and adds a
couple of others that are then needed, and converts a few
PCI_DMA_* defines to DMA_* ones that sound like they mean the
same thing).  Compiles, but not tested.
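
[Editorial aside: assuming the 2.6-era headers are as I recall, the
PCI_DMA_* direction defines and enum dma_data_direction carry the same
numeric values, which is what makes this a purely mechanical
substitution. A hypothetical sanity check, not in the patch, that could
sit in e.g. swiotlb_init():]

	BUILD_BUG_ON(PCI_DMA_BIDIRECTIONAL != DMA_BIDIRECTIONAL);
	BUILD_BUG_ON(PCI_DMA_TODEVICE != DMA_TO_DEVICE);
	BUILD_BUG_ON(PCI_DMA_FROMDEVICE != DMA_FROM_DEVICE);
	BUILD_BUG_ON(PCI_DMA_NONE != DMA_NONE);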

-Tony

[-- Attachment #2: sw.patch --]
[-- Type: application/octet-stream, Size: 1031 bytes --]

--- a/arch/ia64/lib/swiotlb.c	2005-09-26 13:03:04.000000000 -0700
+++ b/arch/ia64/lib/swiotlb.c	2005-09-26 14:54:38.000000000 -0700
@@ -15,17 +15,17 @@
  */
 
 #include <linux/cache.h>
+#include <linux/dma-mapping.h>
 #include <linux/mm.h>
 #include <linux/module.h>
-#include <linux/pci.h>
 #include <linux/spinlock.h>
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/ctype.h>
 
 #include <asm/io.h>
-#include <asm/pci.h>
 #include <asm/dma.h>
+#include <asm/scatterlist.h>
 
 #include <linux/init.h>
 #include <linux/bootmem.h>
@@ -391,9 +391,9 @@
 	       "device %s\n", size, dev ? dev->bus_id : "?");
 
 	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
 			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+		if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
 			panic("PCI-DMA: Random memory would be DMAed\n");
 	}
 }

^ permalink raw reply	[flat|nested] 99+ messages in thread

* RE: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-26 22:08 ` Luck, Tony
  0 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-09-26 22:08 UTC (permalink / raw)
  To: John W. Linville, Matthew Wilcox
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, Mallick,
	Asit K, gregkh

[-- Attachment #1: Type: text/plain, Size: 1040 bytes --]

>Well, now, this is a quandary isn't it...  Actually, I'm inclined to
>agree with you.
>
>Tony, et al., care to restate your reasoning for moving it under
>drivers/pci?

Historically swiotlb.c was written with just PCI in mind (hence
all the comments ("... implement the PCI DMA API",  "The PCI address
to use is returned", "teardown the PCI dma mapping") and a few
error messages ("PCI-DMA: Out of SW-IOMMU space ...", "PCI-DMA: Memory
would be corrupted", "PCI-DMA: Random memory would be DMAed").
Perhaps back then the only options were PCI and ISA????

Matthew is probably technically right in that this is a more
generic interface ... but is it actually being used for anything
other than PCI?  Will it ever be so used?

Ignoring the comments and error messages, the attached patch
removes the <linux/pci.h> and <asm/pci.h> includes (and adds a
couple of others that are then needed, and converts a few
PCI_DMA_* defines to DMA_* ones that sound like they mean the
same thing).  Compiles, but not tested.

-Tony

[-- Attachment #2: sw.patch --]
[-- Type: application/octet-stream, Size: 1031 bytes --]

--- a/arch/ia64/lib/swiotlb.c	2005-09-26 13:03:04.000000000 -0700
+++ b/arch/ia64/lib/swiotlb.c	2005-09-26 14:54:38.000000000 -0700
@@ -15,17 +15,17 @@
  */
 
 #include <linux/cache.h>
+#include <linux/dma-mapping.h>
 #include <linux/mm.h>
 #include <linux/module.h>
-#include <linux/pci.h>
 #include <linux/spinlock.h>
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/ctype.h>
 
 #include <asm/io.h>
-#include <asm/pci.h>
 #include <asm/dma.h>
+#include <asm/scatterlist.h>
 
 #include <linux/init.h>
 #include <linux/bootmem.h>
@@ -391,9 +391,9 @@
 	       "device %s\n", size, dev ? dev->bus_id : "?");
 
 	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
 			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+		if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
 			panic("PCI-DMA: Random memory would be DMAed\n");
 	}
 }

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-26 22:08 ` Luck, Tony
@ 2005-09-26 22:46   ` Grant Grundler
  -1 siblings, 0 replies; 99+ messages in thread
From: Grant Grundler @ 2005-09-26 22:46 UTC (permalink / raw)
  To: Luck, Tony
  Cc: John W. Linville, Matthew Wilcox, linux-kernel, discuss,
	linux-ia64, linux-pci, ak, Mallick, Asit K, gregkh

On Mon, Sep 26, 2005 at 03:08:23PM -0700, Luck, Tony wrote:
> Historically swiotlb.c was written with just PCI in mind (hence
> all the comments ("... implement the PCI DMA API",  "The PCI address
> to use is returned", "teardown the PCI dma mapping") and a few
> error messages ("PCI-DMA: Out of SW-IOMMU space ...", "PCI-DMA: Memory
> would be corrupted", "PCI-DMA: Random memory would be DMAed").
> Perhaps back then the only options were PCI and ISA????

Yes. The DMA interface davem et al. introduced in linux-2.4 only
supported "PCI-like" busses, i.e. the API required a struct pci_dev.

> Matthew is probably technically right in that this is a more
> generic interface ... but is it actually being used for anything
> other than PCI?  Will it ever be so used?

Besides 32-bit PCI devices, I expect legacy 24-bit E/ISA DMA will
need it.  Is ISA ~= PCI? I never got a clear answer on that.
I'm inclined to say it's not.

But since swiotlb complies with DMA-API interface and is not related
to any particular type of bus, I'd rather it go into lib/ instead of
drivers/pci.

hth,
grant

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-26 22:46   ` Grant Grundler
  0 siblings, 0 replies; 99+ messages in thread
From: Grant Grundler @ 2005-09-26 22:46 UTC (permalink / raw)
  To: Luck, Tony
  Cc: John W. Linville, Matthew Wilcox, linux-kernel, discuss,
	linux-ia64, linux-pci, ak, Mallick, Asit K, gregkh

On Mon, Sep 26, 2005 at 03:08:23PM -0700, Luck, Tony wrote:
> Historically swiotlb.c was written with just PCI in mind (hence
> all the comments ("... implement the PCI DMA API",  "The PCI address
> to use is returned", "teardown the PCI dma mapping") and a few
> error messages ("PCI-DMA: Out of SW-IOMMU space ...", "PCI-DMA: Memory
> would be corrupted", "PCI-DMA: Random memory would be DMAed").
> Perhaps back then the only options were PCI and ISA????

Yes. The DMA interface davem et al. introduced in linux-2.4 only
supported "PCI-like" busses, i.e. the API required a struct pci_dev.

> Matthew is probably technically right in that this is a more
> generic interface ... but is it actually being used for anything
> other than PCI?  Will it ever be so used?

Besides 32-bit PCI devices, I expect legacy 24-bit E/ISA DMA will
need it.  Is ISA ~= PCI? I never got a clear answer on that.
I'm inclined to say it's not.

But since swiotlb complies with DMA-API interface and is not related
to any particular type of bus, I'd rather it go into lib/ instead of
drivers/pci.

hth,
grant

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-26 22:46   ` Grant Grundler
@ 2005-09-27  0:14     ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-27  0:14 UTC (permalink / raw)
  To: Grant Grundler
  Cc: Luck, Tony, Matthew Wilcox, linux-kernel, discuss, linux-ia64,
	linux-pci, ak, Mallick, Asit K, gregkh

On Mon, Sep 26, 2005 at 03:46:03PM -0700, Grant Grundler wrote:

> But since swiotlb complies with DMA-API interface and is not related
> to any particular type of bus, I'd rather it go into lib/ instead of
> drivers/pci.

OK...I read Tony's patch post to imply that he is consenting to lib/ as
the location as well.  Tony, please speak up if that is not the case.

If everyone agrees, I'll re-collect the patches moving everything to
lib/ (plus the one I forgot to repost) and Tony's patch and repost.

So, who is the proper committer for this?  Tony?  Or should I send
them directly to Andrew?

Thanks,

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-27  0:14     ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-27  0:14 UTC (permalink / raw)
  To: Grant Grundler
  Cc: Luck, Tony, Matthew Wilcox, linux-kernel, discuss, linux-ia64,
	linux-pci, ak, Mallick, Asit K, gregkh

On Mon, Sep 26, 2005 at 03:46:03PM -0700, Grant Grundler wrote:

> But since swiotlb complies with DMA-API interface and is not related
> to any particular type of bus, I'd rather it go into lib/ instead of
> drivers/pci.

OK...I read Tony's patch post to imply that he is consenting to lib/ as
the location as well.  Tony, please speak up if that is not the case.

If everyone agrees, I'll re-collect the patches moving everything to
lib/ (plus the one I forgot to repost) and Tony's patch and repost.

So, who is the proper committer for this?  Tony?  Or should I send
them directly to Andrew?

Thanks,

John
-- 
John W. Linville
linville@tuxdriver.com

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-27  0:14     ` John W. Linville
@ 2005-09-27  2:47       ` Tony Luck
  -1 siblings, 0 replies; 99+ messages in thread
From: Tony Luck @ 2005-09-27  2:47 UTC (permalink / raw)
  To: Grant Grundler, Luck, Tony, Matthew Wilcox, linux-kernel,
	discuss, linux-ia64, linux-pci, ak, Mallick, Asit K, gregkh

> OK...I read Tony's patch post to imply that he is consenting to lib/ as
> the location as well.  Tony, please speak up if that is not the case.

lib/ is certainly better than drivers/pci ... so unless anyone has a better
location, we'll go with that.

> If everyone agrees, I'll re-collect the patches moving everything to
> lib/ (plus the one I forgot to repost) and Tony's patch and repost.

My patch only removed the include for linux/pci.h and asm/pci.h
(as per Matthew Wilcox's suggestion) and then cleaned up the
resulting compilation fluff.  There are a few more PCI references in
comments and printk messages that should be cleaned up too.
Oh, I only compile tested for ia64 ... so it needs compile-testing
for x86-64, and boot testing for both archs.

> So, who is the proper committer for this?  Tony?  Or should I send
> them directly to Andrew?

I have one other change to swiotlb.c (from Alex Williamson) sitting
in my "test" tree, which needs to not get lost.  I'm happy to drive
this (target 2.6.15 ... this isn't 2.6.14 bug fix material). So long as
I don't get burned to a crisp for offloading this Intel specific[1] stuff
into the generic part of the tree.

-Tony

[1] Only ia64 and em64t use this right now.

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-27  2:47       ` Tony Luck
  0 siblings, 0 replies; 99+ messages in thread
From: Tony Luck @ 2005-09-27  2:47 UTC (permalink / raw)
  To: Grant Grundler, Luck, Tony, Matthew Wilcox, linux-kernel,
	discuss, linux-ia64, linux-pci, ak, Mallick, Asit K, gregkh

> OK...I read Tony's patch post to imply that he is consenting to lib/ as
> the location as well.  Tony, please speak up if that is not the case.

lib/ is certainly better than drivers/pci ... so unless anyone has a better
location, we'll go with that.

> If everyone agrees, I'll re-collect the patches moving everything to
> lib/ (plus the one I forgot to repost) and Tony's patch and repost.

My patch only removed the include for linux/pci.h and asm/pci.h
(as per Mathhew Wilcox's suggestion) and then cleaned  up the
resulting compilation fluff.  There are a few more PCI references in
comments and printk messages that should be cleaned up too.
Oh, I only compile tested for ia64 ... so it needs compile-testing
for x86-64, and boot testing for both archs.

> So, who is the proper committer for this?  Tony?  Or should I send
> them directly to Andrew?

I have one other change to swiotlb.c (from Alex Williamson) sitting
in my "test" tree, which needs to not get lost.  I'm happy to drive
this (target 2.6.15 ... this isn't 2.6.14 bug fix material). So long as
I don't get burned to a crisp for offloading this Intel specific[1] stuff
into the generic part of the tree.

-Tony

[1] Only ia64 and em64t use this right now.

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-27  2:47       ` Tony Luck
@ 2005-09-28 21:50         ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Conduct some maintenance of the swiotlb code:

	-- Move the code from arch/ia64/lib/ to lib/

	-- Cleanup some cruft (code duplication)

	-- Add support for syncing sub-ranges of mappings

	-- Add support for syncing DMA_BIDIRECTIONAL mappings

	-- Comment fixup & change record

Also, tack on an x86_64 implementation of dma_sync_single_range_for_cpu
and dma_sync_single_range_for_device.  This makes use of the new
swiotlb sync sub-range support.

In this (third) round, the new location for swiotlb is (again) lib/swiotlb.c.

This set does NOT include the PCI excision that Tony Luck started after the
last round of discussion.

Patches to follow...

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-28 21:50         ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Conduct some maintenance of the swiotlb code:

	-- Move the code from arch/ia64/lib/ to lib/

	-- Cleanup some cruft (code duplication)

	-- Add support for syncing sub-ranges of mappings

	-- Add support for syncing DMA_BIDIRECTIONAL mappings

	-- Comment fixup & change record

Also, tack on an x86_64 implementation of dma_sync_single_range_for_cpu
and dma_sync_single_range_for_device.  This makes use of the new
swiotlb sync sub-range support.

In this (third) round, the new location for swiotlb is (again) lib/swiotlb.c.

This set does NOT include the PCI excision that Tony Luck started after the
last round of discussion.

Patches to follow...

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 1/6] swiotlb: move from arch/ia64/lib/ to lib/
  2005-09-28 21:50         ` John W. Linville
@ 2005-09-28 21:50           ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to lib/ and fixes up the appropriate Makefile and
Kconfig files. No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 lib/Makefile                |    2 
 lib/swiotlb.c               |  657 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+       bool
+       default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
diff --git a/arch/ia64/lib/Makefile b/arch/ia64/lib/Makefile
--- a/arch/ia64/lib/Makefile
+++ b/arch/ia64/lib/Makefile
@@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3
 	bitop.o checksum.o clear_page.o csum_partial_copy.o		\
 	clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o	\
 	flush.o ip_fast_csum.o do_csum.o				\
-	memset.o strlen.o swiotlb.o
+	memset.o strlen.o
 
 lib-$(CONFIG_ITANIUM)	+= copy_page.o copy_user.o memcpy.o
 lib-$(CONFIG_MCKINLEY)	+= copy_page_mck.o memcpy_mck.o
diff --git a/arch/ia64/lib/swiotlb.c b/arch/ia64/lib/swiotlb.c
--- a/arch/ia64/lib/swiotlb.c
+++ b/arch/ia64/lib/swiotlb.c
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	                   ( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) betweeen io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
-static int __init
-setup_io_tlb_npages(char *str)
-{
-	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
-		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-	if (*str == ',')
-		++str;
-	if (!strcmp(str, "force"))
-		swiotlb_force = 1;
-	return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
- */
-void
-swiotlb_init_with_default_size (size_t default_size)
-{
-	unsigned long i;
-
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-
-	/*
-	 * Get IO TLB memory from the low pages
-	 */
-	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
-					       (1 << IO_TLB_SHIFT));
-	if (!io_tlb_start)
-		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
-	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
-	/*
-	 * Get the overflow emergency buffer
-	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
-	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
-}
-
-void
-swiotlb_init (void)
-{
-	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
-}
-
-static inline int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
-	dma_addr_t mask = 0xffffffff;
-	/* If the device has a mask, use it, otherwise default to 32 bits */
-	if (hwdev && hwdev->dma_mask)
-		mask = *hwdev->dma_mask;
-	return (addr & ~mask) != 0;
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-map_single(struct device *hwdev, char *buffer, size_t size, int dir)
-{
-	unsigned long flags;
-	char *dma_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-
-	/*
-	 * For mappings greater than a page, we limit the stride (and
-	 * hence alignment) to a page size.
-	 */
-	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	if (size > PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
-
-	if (!nslots)
-		BUG();
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		wrap = index = ALIGN(io_tlb_index, stride);
-
-		if (index >= io_tlb_nslabs)
-			wrap = index = 0;
-
-		do {
-			/*
-			 * If we find a slot that indicates we have 'nslots'
-			 * number of contiguous buffers, we allocate the
-			 * buffers from that slot and mark the entries as '0'
-			 * indicating unavailable.
-			 */
-			if (io_tlb_list[index] >= nslots) {
-				int count = 0;
-
-				for (i = index; i < (int) (index + nslots); i++)
-					io_tlb_list[i] = 0;
-				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-					io_tlb_list[i] = ++count;
-				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-				/*
-				 * Update the indices to avoid searching in
-				 * the next round.
-				 */
-				io_tlb_index = ((index + nslots) < io_tlb_nslabs
-						? (index + nslots) : 0);
-
-				goto found;
-			}
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-		} while (index != wrap);
-
-		spin_unlock_irqrestore(&io_tlb_lock, flags);
-		return NULL;
-	}
-  found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	io_tlb_orig_addr[index] = buffer;
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		memcpy(dma_addr, buffer, size);
-
-	return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	unsigned long flags;
-	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		/*
-		 * bounce... copy the data back into the original buffer * and
-		 * delete the bounce buffer.
-		 */
-		memcpy(buffer, dma_addr, size);
-
-	/*
-	 * Return the buffer to the free list by setting the corresponding
-	 * entries to indicate the number of contigous entries available.
-	 * While returning the entries to the free list, we merge the entries
-	 * with slots below and above the pool being returned.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
-		/*
-		 * Step 1: return the slots to the free list, merging the
-		 * slots with superceeding slots
-		 */
-		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
-		/*
-		 * Step 2: merge the returned slots with the preceding slots,
-		 * if available (non zero)
-		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
-	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
-		BUG();
-}
-
-void *
-swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-		       dma_addr_t *dma_handle, int flags)
-{
-	unsigned long dev_addr;
-	void *ret;
-	int order = get_order(size);
-
-	/*
-	 * XXX fix me: the DMA API should pass us an explicit DMA mask
-	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
-	 * bit range instead of a 16MB one).
-	 */
-	flags |= GFP_DMA;
-
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
-		/*
-		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
-		 */
-		free_pages((unsigned long) ret, order);
-		ret = NULL;
-	}
-	if (!ret) {
-		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
-		 */
-		dma_addr_t handle;
-		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
-		if (dma_mapping_error(handle))
-			return NULL;
-
-		ret = phys_to_virt(handle);
-	}
-
-	memset(ret, 0, size);
-	dev_addr = virt_to_phys(ret);
-
-	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr)) {
-		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
-		       (unsigned long long)*hwdev->dma_mask, dev_addr);
-		panic("swiotlb_alloc_coherent: allocated memory is out of "
-		      "range for device");
-	}
-	*dma_handle = dev_addr;
-	return ret;
-}
-
-void
-swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
-		      dma_addr_t dma_handle)
-{
-	if (!(vaddr >= (void *)io_tlb_start
-                    && vaddr < (void *)io_tlb_end))
-		free_pages((unsigned long) vaddr, get_order(size));
-	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void
-swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
-{
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
-	       "device %s\n", size, dev ? dev->bus_id : "?");
-
-	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
-	}
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
- */
-dma_addr_t
-swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
-{
-	unsigned long dev_addr = virt_to_phys(ptr);
-	void *map;
-
-	if (dir == DMA_NONE)
-		BUG();
-	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
-		return dev_addr;
-
-	/*
-	 * Oh well, have to allocate and map a bounce buffer.
-	 */
-	map = map_single(hwdev, ptr, size, dir);
-	if (!map) {
-		swiotlb_full(hwdev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
-	}
-
-	dev_addr = virt_to_phys(map);
-
-	/*
-	 * Ensure that the address returned is DMA'ble
-	 */
-	if (address_needs_mapping(hwdev, dev_addr))
-		panic("map_single: bounce buffer is not DMA'ble");
-
-	return dev_addr;
-}
-
-/*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
- * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
- * flush them when they get mapped into an executable vm-area.
- */
-static void
-mark_clean(void *addr, size_t size)
-{
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page(pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
-	}
-}
-
-/*
- * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
-		     int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		unmap_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Make physical memory consistent for a single streaming mode DMA translation
- * after a transfer.
- *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
- * address back to the card, you must first perform a
- * swiotlb_dma_sync_for_device, and then the device again owns the buffer
- */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-void
-swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-			       size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
- * interface.  Here the scatter gather list elements are each tagged with the
- * appropriate dma address and length.  They are obtained via
- * sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
- * same here.
- */
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-	       int dir)
-{
-	void *addr;
-	unsigned long dev_addr;
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++) {
-		addr = SG_ENT_VIRT_ADDRESS(sg);
-		dev_addr = virt_to_phys(addr);
-		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
-			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
-			if (!sg->dma_address) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
-				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
-				return 0;
-			}
-		} else
-			sg->dma_address = dev_addr;
-		sg->dma_length = sg->length;
-	}
-	return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
- */
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-		 int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
-		else if (dir == DMA_FROM_DEVICE)
-			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA translations
- * after a transfer.
- *
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
- * and usage.
- */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-void
-swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			   int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-int
-swiotlb_dma_mapping_error(dma_addr_t dma_addr)
-{
-	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
-}
-
-/*
- * Return whether the given PCI device DMA address mask can be supported
- * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function.
- */
-int
-swiotlb_dma_supported (struct device *hwdev, u64 mask)
-{
-	return (virt_to_phys (io_tlb_end) - 1) <= mask;
-}
-
-EXPORT_SYMBOL(swiotlb_init);
-EXPORT_SYMBOL(swiotlb_map_single);
-EXPORT_SYMBOL(swiotlb_unmap_single);
-EXPORT_SYMBOL(swiotlb_map_sg);
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_single_for_device);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
-EXPORT_SYMBOL(swiotlb_dma_mapping_error);
-EXPORT_SYMBOL(swiotlb_alloc_coherent);
-EXPORT_SYMBOL(swiotlb_free_coherent);
-EXPORT_SYMBOL(swiotlb_dma_supported);
diff --git a/arch/x86_64/kernel/Makefile b/arch/x86_64/kernel/Makefile
--- a/arch/x86_64/kernel/Makefile
+++ b/arch/x86_64/kernel/Makefile
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)		+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o		:= $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
diff --git a/lib/Makefile b/lib/Makefile
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -44,6 +44,8 @@ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
 obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
 obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
 
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
+
 hostprogs-y	:= gen_crc32table
 clean-files	:= crc32table.h
 
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	                   ( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+ 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contiguous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with succeeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+                    && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 1/6] swiotlb: move from arch/ia64/lib/ to lib/
@ 2005-09-28 21:50           ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The swiotlb implementation is shared by both IA-64 and EM64T. However,
the source itself lives under arch/ia64. This patch moves swiotlb.c
from arch/ia64/lib to lib/ and fixes up the appropriate Makefile and
Kconfig files. No actual changes are made to swiotlb.c.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 arch/ia64/Kconfig           |    4 
 arch/ia64/lib/Makefile      |    2 
 arch/ia64/lib/swiotlb.c     |  657 --------------------------------------------
 arch/x86_64/kernel/Makefile |    2 
 lib/Makefile                |    2 
 lib/swiotlb.c               |  657 ++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 664 insertions(+), 660 deletions(-)
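
Not part of the patch, just context for reviewers less familiar with what
swiotlb is backing: a minimal sketch of how a driver drives the streaming
DMA API that swiotlb transparently bounce-buffers when a buffer is not
addressable by the device.  The helper name example_rx and the dev/buf/len
parameters below are made up for illustration, and error handling is
abbreviated; the dma_* calls are the generic API of this era, where
dma_mapping_error() still takes only the dma_addr_t.

	/* needs <linux/dma-mapping.h> and <linux/errno.h> */
	static int example_rx(struct device *dev, void *buf, size_t len)
	{
		/* map a driver-owned buffer for a device-to-memory transfer */
		dma_addr_t bus = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

		if (dma_mapping_error(bus))
			return -ENOMEM;

		/* ... program the device and wait for the transfer ... */

		/* hand ownership back to the CPU before reading the data */
		dma_sync_single_for_cpu(dev, bus, len, DMA_FROM_DEVICE);

		/* ... inspect buf ...; sync back before reusing it for DMA */
		dma_sync_single_for_device(dev, bus, len, DMA_FROM_DEVICE);

		dma_unmap_single(dev, bus, len, DMA_FROM_DEVICE);
		return 0;
	}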

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -26,6 +26,10 @@ config MMU
 	bool
 	default y
 
+config SWIOTLB
+       bool
+       default y
+
 config RWSEM_XCHGADD_ALGORITHM
 	bool
 	default y
diff --git a/arch/ia64/lib/Makefile b/arch/ia64/lib/Makefile
--- a/arch/ia64/lib/Makefile
+++ b/arch/ia64/lib/Makefile
@@ -9,7 +9,7 @@ lib-y := __divsi3.o __udivsi3.o __modsi3
 	bitop.o checksum.o clear_page.o csum_partial_copy.o		\
 	clear_user.o strncpy_from_user.o strlen_user.o strnlen_user.o	\
 	flush.o ip_fast_csum.o do_csum.o				\
-	memset.o strlen.o swiotlb.o
+	memset.o strlen.o
 
 lib-$(CONFIG_ITANIUM)	+= copy_page.o copy_user.o memcpy.o
 lib-$(CONFIG_MCKINLEY)	+= copy_page_mck.o memcpy_mck.o
diff --git a/arch/ia64/lib/swiotlb.c b/arch/ia64/lib/swiotlb.c
--- a/arch/ia64/lib/swiotlb.c
+++ b/arch/ia64/lib/swiotlb.c
@@ -1,657 +0,0 @@
-/*
- * Dynamic DMA mapping support.
- *
- * This implementation is for IA-64 platforms that do not support
- * I/O TLBs (aka DMA address translation hardware).
- * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
- * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
- * Copyright (C) 2000, 2003 Hewlett-Packard Co
- *	David Mosberger-Tang <davidm@hpl.hp.com>
- *
- * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
- * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
- *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
- */
-
-#include <linux/cache.h>
-#include <linux/mm.h>
-#include <linux/module.h>
-#include <linux/pci.h>
-#include <linux/spinlock.h>
-#include <linux/string.h>
-#include <linux/types.h>
-#include <linux/ctype.h>
-
-#include <asm/io.h>
-#include <asm/pci.h>
-#include <asm/dma.h>
-
-#include <linux/init.h>
-#include <linux/bootmem.h>
-
-#define OFFSET(val,align) ((unsigned long)	\
-	                   ( (val) & ( (align) - 1)))
-
-#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
-#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
-
-/*
- * Maximum allowable number of contiguous slabs to map,
- * must be a power of 2.  What is the appropriate value ?
- * The complexity of {map,unmap}_single is linearly dependent on this value.
- */
-#define IO_TLB_SEGSIZE	128
-
-/*
- * log of the size of each IO TLB slab.  The number of slabs is command line
- * controllable.
- */
-#define IO_TLB_SHIFT 11
-
-int swiotlb_force;
-
-/*
- * Used to do a quick range check in swiotlb_unmap_single and
- * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
- * API.
- */
-static char *io_tlb_start, *io_tlb_end;
-
-/*
- * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
- * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
- */
-static unsigned long io_tlb_nslabs;
-
-/*
- * When the IOMMU overflows we return a fallback buffer. This sets the size.
- */
-static unsigned long io_tlb_overflow = 32*1024;
-
-void *io_tlb_overflow_buffer;
-
-/*
- * This is a free list describing the number of free entries available from
- * each index
- */
-static unsigned int *io_tlb_list;
-static unsigned int io_tlb_index;
-
-/*
- * We need to save away the original address corresponding to a mapped entry
- * for the sync operations.
- */
-static unsigned char **io_tlb_orig_addr;
-
-/*
- * Protect the above data structures in the map and unmap calls
- */
-static DEFINE_SPINLOCK(io_tlb_lock);
-
-static int __init
-setup_io_tlb_npages(char *str)
-{
-	if (isdigit(*str)) {
-		io_tlb_nslabs = simple_strtoul(str, &str, 0);
-		/* avoid tail segment of size < IO_TLB_SEGSIZE */
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-	if (*str == ',')
-		++str;
-	if (!strcmp(str, "force"))
-		swiotlb_force = 1;
-	return 1;
-}
-__setup("swiotlb=", setup_io_tlb_npages);
-/* make io_tlb_overflow tunable too? */
-
-/*
- * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
- */
-void
-swiotlb_init_with_default_size (size_t default_size)
-{
-	unsigned long i;
-
-	if (!io_tlb_nslabs) {
-		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
-		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
-	}
-
-	/*
-	 * Get IO TLB memory from the low pages
-	 */
-	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
-					       (1 << IO_TLB_SHIFT));
-	if (!io_tlb_start)
-		panic("Cannot allocate SWIOTLB buffer");
-	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
-
-	/*
-	 * Allocate and initialize the free list array.  This array is used
-	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
-	 * between io_tlb_start and io_tlb_end.
-	 */
-	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
-	for (i = 0; i < io_tlb_nslabs; i++)
- 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
-	io_tlb_index = 0;
-	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
-
-	/*
-	 * Get the overflow emergency buffer
-	 */
-	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
-	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
-	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
-}
-
-void
-swiotlb_init (void)
-{
-	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
-}
-
-static inline int
-address_needs_mapping(struct device *hwdev, dma_addr_t addr)
-{
-	dma_addr_t mask = 0xffffffff;
-	/* If the device has a mask, use it, otherwise default to 32 bits */
-	if (hwdev && hwdev->dma_mask)
-		mask = *hwdev->dma_mask;
-	return (addr & ~mask) != 0;
-}
-
-/*
- * Allocates bounce buffer and returns its kernel virtual address.
- */
-static void *
-map_single(struct device *hwdev, char *buffer, size_t size, int dir)
-{
-	unsigned long flags;
-	char *dma_addr;
-	unsigned int nslots, stride, index, wrap;
-	int i;
-
-	/*
-	 * For mappings greater than a page, we limit the stride (and
-	 * hence alignment) to a page size.
-	 */
-	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	if (size > PAGE_SIZE)
-		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
-	else
-		stride = 1;
-
-	if (!nslots)
-		BUG();
-
-	/*
-	 * Find suitable number of IO TLB entries size that will fit this
-	 * request and allocate a buffer from that IO TLB pool.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		wrap = index = ALIGN(io_tlb_index, stride);
-
-		if (index >= io_tlb_nslabs)
-			wrap = index = 0;
-
-		do {
-			/*
-			 * If we find a slot that indicates we have 'nslots'
-			 * number of contiguous buffers, we allocate the
-			 * buffers from that slot and mark the entries as '0'
-			 * indicating unavailable.
-			 */
-			if (io_tlb_list[index] >= nslots) {
-				int count = 0;
-
-				for (i = index; i < (int) (index + nslots); i++)
-					io_tlb_list[i] = 0;
-				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-					io_tlb_list[i] = ++count;
-				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
-
-				/*
-				 * Update the indices to avoid searching in
-				 * the next round.
-				 */
-				io_tlb_index = ((index + nslots) < io_tlb_nslabs
-						? (index + nslots) : 0);
-
-				goto found;
-			}
-			index += stride;
-			if (index >= io_tlb_nslabs)
-				index = 0;
-		} while (index != wrap);
-
-		spin_unlock_irqrestore(&io_tlb_lock, flags);
-		return NULL;
-	}
-  found:
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-
-	/*
-	 * Save away the mapping from the original address to the DMA address.
-	 * This is needed when we sync the memory.  Then we sync the buffer if
-	 * needed.
-	 */
-	io_tlb_orig_addr[index] = buffer;
-	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		memcpy(dma_addr, buffer, size);
-
-	return dma_addr;
-}
-
-/*
- * dma_addr is the kernel virtual address of the bounce buffer to unmap.
- */
-static void
-unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	unsigned long flags;
-	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
-		/*
-		 * bounce... copy the data back into the original buffer * and
-		 * delete the bounce buffer.
-		 */
-		memcpy(buffer, dma_addr, size);
-
-	/*
-	 * Return the buffer to the free list by setting the corresponding
-	 * entries to indicate the number of contiguous entries available.
-	 * While returning the entries to the free list, we merge the entries
-	 * with slots below and above the pool being returned.
-	 */
-	spin_lock_irqsave(&io_tlb_lock, flags);
-	{
-		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
-			 io_tlb_list[index + nslots] : 0);
-		/*
-		 * Step 1: return the slots to the free list, merging the
-		 * slots with succeeding slots
-		 */
-		for (i = index + nslots - 1; i >= index; i--)
-			io_tlb_list[i] = ++count;
-		/*
-		 * Step 2: merge the returned slots with the preceding slots,
-		 * if available (non zero)
-		 */
-		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
-			io_tlb_list[i] = ++count;
-	}
-	spin_unlock_irqrestore(&io_tlb_lock, flags);
-}
-
-static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
-{
-	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
-	char *buffer = io_tlb_orig_addr[index];
-
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
-		BUG();
-}
-
-void *
-swiotlb_alloc_coherent(struct device *hwdev, size_t size,
-		       dma_addr_t *dma_handle, int flags)
-{
-	unsigned long dev_addr;
-	void *ret;
-	int order = get_order(size);
-
-	/*
-	 * XXX fix me: the DMA API should pass us an explicit DMA mask
-	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
-	 * bit range instead of a 16MB one).
-	 */
-	flags |= GFP_DMA;
-
-	ret = (void *)__get_free_pages(flags, order);
-	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
-		/*
-		 * The allocated memory isn't reachable by the device.
-		 * Fall back on swiotlb_map_single().
-		 */
-		free_pages((unsigned long) ret, order);
-		ret = NULL;
-	}
-	if (!ret) {
-		/*
-		 * We are either out of memory or the device can't DMA
-		 * to GFP_DMA memory; fall back on
-		 * swiotlb_map_single(), which will grab memory from
-		 * the lowest available address range.
-		 */
-		dma_addr_t handle;
-		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
-		if (dma_mapping_error(handle))
-			return NULL;
-
-		ret = phys_to_virt(handle);
-	}
-
-	memset(ret, 0, size);
-	dev_addr = virt_to_phys(ret);
-
-	/* Confirm address can be DMA'd by device */
-	if (address_needs_mapping(hwdev, dev_addr)) {
-		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
-		       (unsigned long long)*hwdev->dma_mask, dev_addr);
-		panic("swiotlb_alloc_coherent: allocated memory is out of "
-		      "range for device");
-	}
-	*dma_handle = dev_addr;
-	return ret;
-}
-
-void
-swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
-		      dma_addr_t dma_handle)
-{
-	if (!(vaddr >= (void *)io_tlb_start
-                    && vaddr < (void *)io_tlb_end))
-		free_pages((unsigned long) vaddr, get_order(size));
-	else
-		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
-		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
-}
-
-static void
-swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
-{
-	/*
-	 * Ran out of IOMMU space for this operation. This is very bad.
-	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
-	 * When the mapping is small enough return a static buffer to limit
-	 * the damage, or panic when the transfer is too big.
-	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
-	       "device %s\n", size, dev ? dev->bus_id : "?");
-
-	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
-	}
-}
-
-/*
- * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
- *
- * Once the device is given the dma address, the device owns this memory until
- * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
- */
-dma_addr_t
-swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
-{
-	unsigned long dev_addr = virt_to_phys(ptr);
-	void *map;
-
-	if (dir == DMA_NONE)
-		BUG();
-	/*
-	 * If the pointer passed in happens to be in the device's DMA window,
-	 * we can safely return the device addr and not worry about bounce
-	 * buffering it.
-	 */
-	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
-		return dev_addr;
-
-	/*
-	 * Oh well, have to allocate and map a bounce buffer.
-	 */
-	map = map_single(hwdev, ptr, size, dir);
-	if (!map) {
-		swiotlb_full(hwdev, size, dir, 1);
-		map = io_tlb_overflow_buffer;
-	}
-
-	dev_addr = virt_to_phys(map);
-
-	/*
-	 * Ensure that the address returned is DMA'ble
-	 */
-	if (address_needs_mapping(hwdev, dev_addr))
-		panic("map_single: bounce buffer is not DMA'ble");
-
-	return dev_addr;
-}
-
-/*
- * Since DMA is i-cache coherent, any (complete) pages that were written via
- * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
- * flush them when they get mapped into an executable vm-area.
- */
-static void
-mark_clean(void *addr, size_t size)
-{
-	unsigned long pg_addr, end;
-
-	pg_addr = PAGE_ALIGN((unsigned long) addr);
-	end = (unsigned long) addr + size;
-	while (pg_addr + PAGE_SIZE <= end) {
-		struct page *page = virt_to_page(pg_addr);
-		set_bit(PG_arch_1, &page->flags);
-		pg_addr += PAGE_SIZE;
-	}
-}
-
-/*
- * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous swiotlb_map_single call.  All
- * other usages are undefined.
- *
- * After this call, reads by the cpu to the buffer are guaranteed to see
- * whatever the device wrote there.
- */
-void
-swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
-		     int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		unmap_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Make physical memory consistent for a single streaming mode DMA translation
- * after a transfer.
- *
- * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
- * address back to the card, you must first perform a
- * swiotlb_dma_sync_for_device, and then the device again owns the buffer
- */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-void
-swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
-			       size_t size, int dir)
-{
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
-}
-
-/*
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * This is the scatter-gather version of the above swiotlb_map_single
- * interface.  Here the scatter gather list elements are each tagged with the
- * appropriate dma address and length.  They are obtained via
- * sg_dma_{address,length}(SG).
- *
- * NOTE: An implementation may be able to use a smaller number of
- *       DMA address/length pairs than there are SG table elements.
- *       (for example via virtual mapping capabilities)
- *       The routine returns the number of addr/length pairs actually
- *       used, at most nents.
- *
- * Device ownership issues as mentioned above for swiotlb_map_single are the
- * same here.
- */
-int
-swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-	       int dir)
-{
-	void *addr;
-	unsigned long dev_addr;
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++) {
-		addr = SG_ENT_VIRT_ADDRESS(sg);
-		dev_addr = virt_to_phys(addr);
-		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
-			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
-			if (!sg->dma_address) {
-				/* Don't panic here, we expect map_sg users
-				   to do proper error handling. */
-				swiotlb_full(hwdev, sg->length, dir, 0);
-				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
-				sg[0].dma_length = 0;
-				return 0;
-			}
-		} else
-			sg->dma_address = dev_addr;
-		sg->dma_length = sg->length;
-	}
-	return nelems;
-}
-
-/*
- * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_single() above.
- */
-void
-swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
-		 int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
-		else if (dir == DMA_FROM_DEVICE)
-			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
-}
-
-/*
- * Make physical memory consistent for a set of streaming mode DMA translations
- * after a transfer.
- *
- * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
- * and usage.
- */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-void
-swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
-			   int nelems, int dir)
-{
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
-}
-
-int
-swiotlb_dma_mapping_error(dma_addr_t dma_addr)
-{
-	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
-}
-
-/*
- * Return whether the given PCI device DMA address mask can be supported
- * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
- * this function.
- */
-int
-swiotlb_dma_supported (struct device *hwdev, u64 mask)
-{
-	return (virt_to_phys (io_tlb_end) - 1) <= mask;
-}
-
-EXPORT_SYMBOL(swiotlb_init);
-EXPORT_SYMBOL(swiotlb_map_single);
-EXPORT_SYMBOL(swiotlb_unmap_single);
-EXPORT_SYMBOL(swiotlb_map_sg);
-EXPORT_SYMBOL(swiotlb_unmap_sg);
-EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_single_for_device);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
-EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
-EXPORT_SYMBOL(swiotlb_dma_mapping_error);
-EXPORT_SYMBOL(swiotlb_alloc_coherent);
-EXPORT_SYMBOL(swiotlb_free_coherent);
-EXPORT_SYMBOL(swiotlb_dma_supported);
diff --git a/arch/x86_64/kernel/Makefile b/arch/x86_64/kernel/Makefile
--- a/arch/x86_64/kernel/Makefile
+++ b/arch/x86_64/kernel/Makefile
@@ -27,7 +27,6 @@ obj-$(CONFIG_CPU_FREQ)		+= cpufreq/
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
 obj-$(CONFIG_GART_IOMMU)	+= pci-gart.o aperture.o
 obj-$(CONFIG_DUMMY_IOMMU)	+= pci-nommu.o pci-dma.o
-obj-$(CONFIG_SWIOTLB)		+= swiotlb.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_X86_PM_TIMER)	+= pmtimer.o
 
@@ -41,7 +40,6 @@ CFLAGS_vsyscall.o		:= $(PROFILING) -g0
 bootflag-y			+= ../../i386/kernel/bootflag.o
 cpuid-$(subst m,y,$(CONFIG_X86_CPUID))  += ../../i386/kernel/cpuid.o
 topology-y                     += ../../i386/mach-default/topology.o
-swiotlb-$(CONFIG_SWIOTLB)      += ../../ia64/lib/swiotlb.o
 microcode-$(subst m,y,$(CONFIG_MICROCODE))  += ../../i386/kernel/microcode.o
 intel_cacheinfo-y		+= ../../i386/kernel/cpu/intel_cacheinfo.o
 quirks-y			+= ../../i386/kernel/quirks.o
diff --git a/lib/Makefile b/lib/Makefile
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -44,6 +44,8 @@ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
 obj-$(CONFIG_TEXTSEARCH_BM) += ts_bm.o
 obj-$(CONFIG_TEXTSEARCH_FSM) += ts_fsm.o
 
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
+
 hostprogs-y	:= gen_crc32table
 clean-files	:= crc32table.h
 
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -0,0 +1,657 @@
+/*
+ * Dynamic DMA mapping support.
+ *
+ * This implementation is for IA-64 platforms that do not support
+ * I/O TLBs (aka DMA address translation hardware).
+ * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
+ * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
+ * Copyright (C) 2000, 2003 Hewlett-Packard Co
+ *	David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
+ * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
+ *			unnecessary i-cache flushing.
+ * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ */
+
+#include <linux/cache.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ctype.h>
+
+#include <asm/io.h>
+#include <asm/pci.h>
+#include <asm/dma.h>
+
+#include <linux/init.h>
+#include <linux/bootmem.h>
+
+#define OFFSET(val,align) ((unsigned long)	\
+	                   ( (val) & ( (align) - 1)))
+
+#define SG_ENT_VIRT_ADDRESS(sg)	(page_address((sg)->page) + (sg)->offset)
+#define SG_ENT_PHYS_ADDRESS(SG)	virt_to_phys(SG_ENT_VIRT_ADDRESS(SG))
+
+/*
+ * Maximum allowable number of contiguous slabs to map,
+ * must be a power of 2.  What is the appropriate value ?
+ * The complexity of {map,unmap}_single is linearly dependent on this value.
+ */
+#define IO_TLB_SEGSIZE	128
+
+/*
+ * log of the size of each IO TLB slab.  The number of slabs is command line
+ * controllable.
+ */
+#define IO_TLB_SHIFT 11
+
+int swiotlb_force;
+
+/*
+ * Used to do a quick range check in swiotlb_unmap_single and
+ * swiotlb_sync_single_*, to see if the memory was in fact allocated by this
+ * API.
+ */
+static char *io_tlb_start, *io_tlb_end;
+
+/*
+ * The number of IO TLB blocks (in groups of 64) between io_tlb_start and
+ * io_tlb_end.  This is command line adjustable via setup_io_tlb_npages.
+ */
+static unsigned long io_tlb_nslabs;
+
+/*
+ * When the IOMMU overflows we return a fallback buffer. This sets the size.
+ */
+static unsigned long io_tlb_overflow = 32*1024;
+
+void *io_tlb_overflow_buffer;
+
+/*
+ * This is a free list describing the number of free entries available from
+ * each index
+ */
+static unsigned int *io_tlb_list;
+static unsigned int io_tlb_index;
+
+/*
+ * We need to save away the original address corresponding to a mapped entry
+ * for the sync operations.
+ */
+static unsigned char **io_tlb_orig_addr;
+
+/*
+ * Protect the above data structures in the map and unmap calls
+ */
+static DEFINE_SPINLOCK(io_tlb_lock);
+
+static int __init
+setup_io_tlb_npages(char *str)
+{
+	if (isdigit(*str)) {
+		io_tlb_nslabs = simple_strtoul(str, &str, 0);
+		/* avoid tail segment of size < IO_TLB_SEGSIZE */
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+	if (*str == ',')
+		++str;
+	if (!strcmp(str, "force"))
+		swiotlb_force = 1;
+	return 1;
+}
+__setup("swiotlb=", setup_io_tlb_npages);
+/* make io_tlb_overflow tunable too? */
+
+/*
+ * Statically reserve bounce buffer space and initialize bounce buffer data
+ * structures for the software IO TLB used to implement the PCI DMA API.
+ */
+void
+swiotlb_init_with_default_size (size_t default_size)
+{
+	unsigned long i;
+
+	if (!io_tlb_nslabs) {
+		io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
+		io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
+	}
+
+	/*
+	 * Get IO TLB memory from the low pages
+	 */
+	io_tlb_start = alloc_bootmem_low_pages(io_tlb_nslabs *
+					       (1 << IO_TLB_SHIFT));
+	if (!io_tlb_start)
+		panic("Cannot allocate SWIOTLB buffer");
+	io_tlb_end = io_tlb_start + io_tlb_nslabs * (1 << IO_TLB_SHIFT);
+
+	/*
+	 * Allocate and initialize the free list array.  This array is used
+	 * to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
+	 * between io_tlb_start and io_tlb_end.
+	 */
+	io_tlb_list = alloc_bootmem(io_tlb_nslabs * sizeof(int));
+	for (i = 0; i < io_tlb_nslabs; i++)
+ 		io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
+	io_tlb_index = 0;
+	io_tlb_orig_addr = alloc_bootmem(io_tlb_nslabs * sizeof(char *));
+
+	/*
+	 * Get the overflow emergency buffer
+	 */
+	io_tlb_overflow_buffer = alloc_bootmem_low(io_tlb_overflow);
+	printk(KERN_INFO "Placing software IO TLB between 0x%lx - 0x%lx\n",
+	       virt_to_phys(io_tlb_start), virt_to_phys(io_tlb_end));
+}
+
+void
+swiotlb_init (void)
+{
+	swiotlb_init_with_default_size(64 * (1<<20));	/* default to 64MB */
+}
+
+static inline int
+address_needs_mapping(struct device *hwdev, dma_addr_t addr)
+{
+	dma_addr_t mask = 0xffffffff;
+	/* If the device has a mask, use it, otherwise default to 32 bits */
+	if (hwdev && hwdev->dma_mask)
+		mask = *hwdev->dma_mask;
+	return (addr & ~mask) != 0;
+}
+
+/*
+ * Allocates bounce buffer and returns its kernel virtual address.
+ */
+static void *
+map_single(struct device *hwdev, char *buffer, size_t size, int dir)
+{
+	unsigned long flags;
+	char *dma_addr;
+	unsigned int nslots, stride, index, wrap;
+	int i;
+
+	/*
+	 * For mappings greater than a page, we limit the stride (and
+	 * hence alignment) to a page size.
+	 */
+	nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	if (size > PAGE_SIZE)
+		stride = (1 << (PAGE_SHIFT - IO_TLB_SHIFT));
+	else
+		stride = 1;
+
+	if (!nslots)
+		BUG();
+
+	/*
+	 * Find suitable number of IO TLB entries size that will fit this
+	 * request and allocate a buffer from that IO TLB pool.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		wrap = index = ALIGN(io_tlb_index, stride);
+
+		if (index >= io_tlb_nslabs)
+			wrap = index = 0;
+
+		do {
+			/*
+			 * If we find a slot that indicates we have 'nslots'
+			 * number of contiguous buffers, we allocate the
+			 * buffers from that slot and mark the entries as '0'
+			 * indicating unavailable.
+			 */
+			if (io_tlb_list[index] >= nslots) {
+				int count = 0;
+
+				for (i = index; i < (int) (index + nslots); i++)
+					io_tlb_list[i] = 0;
+				for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+					io_tlb_list[i] = ++count;
+				dma_addr = io_tlb_start + (index << IO_TLB_SHIFT);
+
+				/*
+				 * Update the indices to avoid searching in
+				 * the next round.
+				 */
+				io_tlb_index = ((index + nslots) < io_tlb_nslabs
+						? (index + nslots) : 0);
+
+				goto found;
+			}
+			index += stride;
+			if (index >= io_tlb_nslabs)
+				index = 0;
+		} while (index != wrap);
+
+		spin_unlock_irqrestore(&io_tlb_lock, flags);
+		return NULL;
+	}
+  found:
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+
+	/*
+	 * Save away the mapping from the original address to the DMA address.
+	 * This is needed when we sync the memory.  Then we sync the buffer if
+	 * needed.
+	 */
+	io_tlb_orig_addr[index] = buffer;
+	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+		memcpy(dma_addr, buffer, size);
+
+	return dma_addr;
+}
+
+/*
+ * dma_addr is the kernel virtual address of the bounce buffer to unmap.
+ */
+static void
+unmap_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	unsigned long flags;
+	int i, count, nslots = ALIGN(size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT;
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (buffer && ((dir == DMA_FROM_DEVICE) || (dir == DMA_BIDIRECTIONAL)))
+		/*
+		 * bounce... copy the data back into the original buffer * and
+		 * delete the bounce buffer.
+		 */
+		memcpy(buffer, dma_addr, size);
+
+	/*
+	 * Return the buffer to the free list by setting the corresponding
+	 * entries to indicate the number of contiguous entries available.
+	 * While returning the entries to the free list, we merge the entries
+	 * with slots below and above the pool being returned.
+	 */
+	spin_lock_irqsave(&io_tlb_lock, flags);
+	{
+		count = ((index + nslots) < ALIGN(index + 1, IO_TLB_SEGSIZE) ?
+			 io_tlb_list[index + nslots] : 0);
+		/*
+		 * Step 1: return the slots to the free list, merging the
+		 * slots with succeeding slots
+		 */
+		for (i = index + nslots - 1; i >= index; i--)
+			io_tlb_list[i] = ++count;
+		/*
+		 * Step 2: merge the returned slots with the preceding slots,
+		 * if available (non zero)
+		 */
+		for (i = index - 1; (OFFSET(i, IO_TLB_SEGSIZE) != IO_TLB_SEGSIZE -1) && io_tlb_list[i]; i--)
+			io_tlb_list[i] = ++count;
+	}
+	spin_unlock_irqrestore(&io_tlb_lock, flags);
+}
+
+static void
+sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+{
+	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
+	char *buffer = io_tlb_orig_addr[index];
+
+	/*
+	 * bounce... copy the data back into/from the original buffer
+	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
+	 */
+	if (dir == DMA_FROM_DEVICE)
+		memcpy(buffer, dma_addr, size);
+	else if (dir == DMA_TO_DEVICE)
+		memcpy(dma_addr, buffer, size);
+	else
+		BUG();
+}
+
+void *
+swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+		       dma_addr_t *dma_handle, int flags)
+{
+	unsigned long dev_addr;
+	void *ret;
+	int order = get_order(size);
+
+	/*
+	 * XXX fix me: the DMA API should pass us an explicit DMA mask
+	 * instead, or use ZONE_DMA32 (ia64 overloads ZONE_DMA to be a ~32
+	 * bit range instead of a 16MB one).
+	 */
+	flags |= GFP_DMA;
+
+	ret = (void *)__get_free_pages(flags, order);
+	if (ret && address_needs_mapping(hwdev, virt_to_phys(ret))) {
+		/*
+		 * The allocated memory isn't reachable by the device.
+		 * Fall back on swiotlb_map_single().
+		 */
+		free_pages((unsigned long) ret, order);
+		ret = NULL;
+	}
+	if (!ret) {
+		/*
+		 * We are either out of memory or the device can't DMA
+		 * to GFP_DMA memory; fall back on
+		 * swiotlb_map_single(), which will grab memory from
+		 * the lowest available address range.
+		 */
+		dma_addr_t handle;
+		handle = swiotlb_map_single(NULL, NULL, size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(handle))
+			return NULL;
+
+		ret = phys_to_virt(handle);
+	}
+
+	memset(ret, 0, size);
+	dev_addr = virt_to_phys(ret);
+
+	/* Confirm address can be DMA'd by device */
+	if (address_needs_mapping(hwdev, dev_addr)) {
+		printk("hwdev DMA mask = 0x%016Lx, dev_addr = 0x%016lx\n",
+		       (unsigned long long)*hwdev->dma_mask, dev_addr);
+		panic("swiotlb_alloc_coherent: allocated memory is out of "
+		      "range for device");
+	}
+	*dma_handle = dev_addr;
+	return ret;
+}
+
+void
+swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+		      dma_addr_t dma_handle)
+{
+	if (!(vaddr >= (void *)io_tlb_start
+                    && vaddr < (void *)io_tlb_end))
+		free_pages((unsigned long) vaddr, get_order(size));
+	else
+		/* DMA_TO_DEVICE to avoid memcpy in unmap_single */
+		swiotlb_unmap_single (hwdev, dma_handle, size, DMA_TO_DEVICE);
+}
+
+static void
+swiotlb_full(struct device *dev, size_t size, int dir, int do_panic)
+{
+	/*
+	 * Ran out of IOMMU space for this operation. This is very bad.
+	 * Unfortunately the drivers cannot handle this operation properly.
+	 * unless they check for pci_dma_mapping_error (most don't)
+	 * When the mapping is small enough return a static buffer to limit
+	 * the damage, or panic when the transfer is too big.
+	 */
+	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	       "device %s\n", size, dev ? dev->bus_id : "?");
+
+	if (size > io_tlb_overflow && do_panic) {
+		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Memory would be corrupted\n");
+		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
+			panic("PCI-DMA: Random memory would be DMAed\n");
+	}
+}
+
+/*
+ * Map a single buffer of the indicated size for DMA in streaming mode.  The
+ * PCI address to use is returned.
+ *
+ * Once the device is given the dma address, the device owns this memory until
+ * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
+ */
+dma_addr_t
+swiotlb_map_single(struct device *hwdev, void *ptr, size_t size, int dir)
+{
+	unsigned long dev_addr = virt_to_phys(ptr);
+	void *map;
+
+	if (dir == DMA_NONE)
+		BUG();
+	/*
+	 * If the pointer passed in happens to be in the device's DMA window,
+	 * we can safely return the device addr and not worry about bounce
+	 * buffering it.
+	 */
+	if (!address_needs_mapping(hwdev, dev_addr) && !swiotlb_force)
+		return dev_addr;
+
+	/*
+	 * Oh well, have to allocate and map a bounce buffer.
+	 */
+	map = map_single(hwdev, ptr, size, dir);
+	if (!map) {
+		swiotlb_full(hwdev, size, dir, 1);
+		map = io_tlb_overflow_buffer;
+	}
+
+	dev_addr = virt_to_phys(map);
+
+	/*
+	 * Ensure that the address returned is DMA'ble
+	 */
+	if (address_needs_mapping(hwdev, dev_addr))
+		panic("map_single: bounce buffer is not DMA'ble");
+
+	return dev_addr;
+}
+
+/*
+ * Since DMA is i-cache coherent, any (complete) pages that were written via
+ * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
+ * flush them when they get mapped into an executable vm-area.
+ */
+static void
+mark_clean(void *addr, size_t size)
+{
+	unsigned long pg_addr, end;
+
+	pg_addr = PAGE_ALIGN((unsigned long) addr);
+	end = (unsigned long) addr + size;
+	while (pg_addr + PAGE_SIZE <= end) {
+		struct page *page = virt_to_page(pg_addr);
+		set_bit(PG_arch_1, &page->flags);
+		pg_addr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Unmap a single streaming mode DMA translation.  The dma_addr and size must
+ * match what was provided for in a previous swiotlb_map_single call.  All
+ * other usages are undefined.
+ *
+ * After this call, reads by the cpu to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ */
+void
+swiotlb_unmap_single(struct device *hwdev, dma_addr_t dev_addr, size_t size,
+		     int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		unmap_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Make physical memory consistent for a single streaming mode DMA translation
+ * after a transfer.
+ *
+ * If you perform a swiotlb_map_single() but wish to interrogate the buffer
+ * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
+ * call this function before doing so.  At the next point you give the PCI dma
+ * address back to the card, you must first perform a
+ * swiotlb_dma_sync_for_device, and then the device again owns the buffer
+ */
+void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
+			       size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr);
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+/*
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * This is the scatter-gather version of the above swiotlb_map_single
+ * interface.  Here the scatter gather list elements are each tagged with the
+ * appropriate dma address and length.  They are obtained via
+ * sg_dma_{address,length}(SG).
+ *
+ * NOTE: An implementation may be able to use a smaller number of
+ *       DMA address/length pairs than there are SG table elements.
+ *       (for example via virtual mapping capabilities)
+ *       The routine returns the number of addr/length pairs actually
+ *       used, at most nents.
+ *
+ * Device ownership issues as mentioned above for swiotlb_map_single are the
+ * same here.
+ */
+int
+swiotlb_map_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+	       int dir)
+{
+	void *addr;
+	unsigned long dev_addr;
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++) {
+		addr = SG_ENT_VIRT_ADDRESS(sg);
+		dev_addr = virt_to_phys(addr);
+		if (swiotlb_force || address_needs_mapping(hwdev, dev_addr)) {
+			sg->dma_address = (dma_addr_t) virt_to_phys(map_single(hwdev, addr, sg->length, dir));
+			if (!sg->dma_address) {
+				/* Don't panic here, we expect map_sg users
+				   to do proper error handling. */
+				swiotlb_full(hwdev, sg->length, dir, 0);
+				swiotlb_unmap_sg(hwdev, sg - i, i, dir);
+				sg[0].dma_length = 0;
+				return 0;
+			}
+		} else
+			sg->dma_address = dev_addr;
+		sg->dma_length = sg->length;
+	}
+	return nelems;
+}
+
+/*
+ * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_single() above.
+ */
+void
+swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sg, int nelems,
+		 int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			unmap_single(hwdev, (void *) phys_to_virt(sg->dma_address), sg->dma_length, dir);
+		else if (dir == DMA_FROM_DEVICE)
+			mark_clean(SG_ENT_VIRT_ADDRESS(sg), sg->dma_length);
+}
+
+/*
+ * Make physical memory consistent for a set of streaming mode DMA translations
+ * after a transfer.
+ *
+ * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
+ * and usage.
+ */
+void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+void
+swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
+			   int nelems, int dir)
+{
+	int i;
+
+	if (dir == DMA_NONE)
+		BUG();
+
+	for (i = 0; i < nelems; i++, sg++)
+		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
+			sync_single(hwdev, (void *) sg->dma_address,
+				    sg->dma_length, dir);
+}
+
+int
+swiotlb_dma_mapping_error(dma_addr_t dma_addr)
+{
+	return (dma_addr == virt_to_phys(io_tlb_overflow_buffer));
+}
+
+/*
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly.  For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function.
+ */
+int
+swiotlb_dma_supported (struct device *hwdev, u64 mask)
+{
+	return (virt_to_phys (io_tlb_end) - 1) <= mask;
+}
+
+EXPORT_SYMBOL(swiotlb_init);
+EXPORT_SYMBOL(swiotlb_map_single);
+EXPORT_SYMBOL(swiotlb_unmap_single);
+EXPORT_SYMBOL(swiotlb_map_sg);
+EXPORT_SYMBOL(swiotlb_unmap_sg);
+EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
+EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
+EXPORT_SYMBOL(swiotlb_dma_mapping_error);
+EXPORT_SYMBOL(swiotlb_alloc_coherent);
+EXPORT_SYMBOL(swiotlb_free_coherent);
+EXPORT_SYMBOL(swiotlb_dma_supported);
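
Since swiotlb_full() falls back to the static overflow buffer for small
mappings instead of failing the operation outright, a driver only notices
the failure if it checks for it.  A minimal sketch of the driver-side
pattern this assumes (dev, buf and len are illustrative names, not taken
from this patch):

	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(handle)) {
		/* mapping fell back to the overflow buffer; do not start DMA */
		return -ENOMEM;
	}
	/* ... program the device with 'handle', later dma_unmap_single() ... */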

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 3/6] swiotlb: support syncing sub-ranges of mappings
  2005-09-28 21:50             ` John W. Linville
@ 2005-09-28 21:50               ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

This patch implements swiotlb_sync_single_range_for_{cpu,device}. This
is intended to support an x86_64 implementation of
dma_sync_single_range_for_{cpu,device}.
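
As a rough sketch of the intended usage (dev, rx_buf, desc_offset and
desc_len below are illustrative names, not part of this patch): a driver
that maps one large buffer but transfers only a piece of it per operation
can sync just that piece instead of the whole mapping:

	dma_addr_t mapping;

	/* map the whole receive buffer once at setup time */
	mapping = dma_map_single(dev, rx_buf, RX_BUF_SIZE, DMA_FROM_DEVICE);

	/* per packet: make the bytes the device just wrote visible to the cpu */
	dma_sync_single_range_for_cpu(dev, mapping, desc_offset, desc_len,
				      DMA_FROM_DEVICE);
	/* ... cpu inspects rx_buf + desc_offset ... */

	/* hand that region back to the device for the next receive */
	dma_sync_single_range_for_device(dev, mapping, desc_offset, desc_len,
					 DMA_FROM_DEVICE);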

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 include/asm-x86_64/swiotlb.h |    8 ++++++++
 lib/swiotlb.c                |   33 +++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					    dma_addr_t dev_addr,
 					    size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				     struct scatterlist *sg, int nelems,
 				     int dir);
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -521,6 +521,37 @@ swiotlb_sync_single_for_device(struct de
 }
 
 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -648,6 +679,8 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
  2005-09-28 21:50               ` John W. Linville
@ 2005-09-28 21:50                 ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The current implementation of sync_single in swiotlb.c chokes on
DMA_BIDIRECTIONAL mappings. This patch adds the capability to sync
those mappings, and optimizes other syncs by accounting for the
sync target (i.e. cpu or device) in addition to the DMA direction of
the mapping.
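
To see why the target matters, consider a DMA_BIDIRECTIONAL mapping that
both the cpu and the device modify (a hypothetical driver fragment, not
part of this patch):

	dma_addr_t buf_dma;

	buf_dma = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);

	/* device has deposited results in the buffer */
	dma_sync_single_for_cpu(dev, buf_dma, len, DMA_BIDIRECTIONAL);
	/* target is the cpu, so swiotlb copies bounce buffer -> buf */

	/* cpu reads the results and writes the next request into buf */

	dma_sync_single_for_device(dev, buf_dma, len, DMA_BIDIRECTIONAL);
	/* target is the device, so swiotlb copies buf -> bounce buffer */

With only the direction to go on, sync_single() cannot tell which way to
copy for DMA_BIDIRECTIONAL; passing the sync target as well removes the
ambiguity.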

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |   62 +++++++++++++++++++++++++++++++++++++---------------------
 1 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -49,6 +49,14 @@
  */
 #define IO_TLB_SHIFT 11
 
+/*
+ * Enumeration for sync targets
+ */
+enum dma_sync_target {
+	SYNC_FOR_CPU = 0,
+	SYNC_FOR_DEVICE = 1,
+};
+
 int swiotlb_force;
 
 /*
@@ -295,21 +303,28 @@ unmap_single(struct device *hwdev, char 
 }
 
 static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+sync_single(struct device *hwdev, char *dma_addr, size_t size,
+	    int dir, int target)
 {
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	char *buffer = io_tlb_orig_addr[index];
 
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
+	switch (target) {
+	case SYNC_FOR_CPU:
+		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(buffer, dma_addr, size);
+		else if (dir != DMA_TO_DEVICE)
+			BUG();
+		break;
+	case SYNC_FOR_DEVICE:
+		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(dma_addr, buffer, size);
+		else if (dir != DMA_FROM_DEVICE)
+			BUG();
+		break;
+	default:
 		BUG();
+	}
 }
 
 void *
@@ -494,14 +509,14 @@ swiotlb_unmap_single(struct device *hwde
  */
 static inline void
 swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
-		    size_t size, int dir)
+		    size_t size, int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -510,14 +525,14 @@ void
 swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
 }
 
 /*
@@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
  */
 static inline void
 swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
-			  unsigned long offset, size_t size, int dir)
+			  unsigned long offset, size_t size,
+			  int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -541,14 +557,16 @@ void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_DEVICE);
 }
 
 /*
@@ -627,7 +645,7 @@ swiotlb_unmap_sg(struct device *hwdev, s
  */
 static inline void
 swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
-		int nelems, int dir)
+		int nelems, int dir, int target)
 {
 	int i;
 
@@ -637,21 +655,21 @@ swiotlb_sync_sg(struct device *hwdev, st
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+				    sg->dma_length, dir, target);
 }
 
 void
 swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 			int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 2/6] swiotlb: cleanup some code duplication cruft
  2005-09-28 21:50           ` John W. Linville
@ 2005-09-28 21:50             ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The implementations of swiotlb_sync_single_for_{cpu,device} are
identical. Likewise for swiotlb_sync_sg_for_{cpu,device}. This patch
moves the guts of those functions to two new inline functions, and
calls the appropriate one from the bodies of those functions.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |   45 ++++++++++++++++++++++-----------------------
 1 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+		    size_t size, int dir)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
@@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
 }
 
 /*
@@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings
@ 2005-09-28 21:50                 ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The current implementation of sync_single in swiotlb.c chokes on
DMA_BIDIRECTIONAL mappings. This patch adds the capability to sync
those mappings, and optimizes other syncs by accounting for the
sync target (i.e. cpu or device) in addition to the DMA direction of
the mapping.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |   62 +++++++++++++++++++++++++++++++++++++---------------------
 1 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -49,6 +49,14 @@
  */
 #define IO_TLB_SHIFT 11
 
+/*
+ * Enumeration for sync targets
+ */
+enum dma_sync_target {
+	SYNC_FOR_CPU = 0,
+	SYNC_FOR_DEVICE = 1,
+};
+
 int swiotlb_force;
 
 /*
@@ -295,21 +303,28 @@ unmap_single(struct device *hwdev, char 
 }
 
 static void
-sync_single(struct device *hwdev, char *dma_addr, size_t size, int dir)
+sync_single(struct device *hwdev, char *dma_addr, size_t size,
+	    int dir, int target)
 {
 	int index = (dma_addr - io_tlb_start) >> IO_TLB_SHIFT;
 	char *buffer = io_tlb_orig_addr[index];
 
-	/*
-	 * bounce... copy the data back into/from the original buffer
-	 * XXX How do you handle DMA_BIDIRECTIONAL here ?
-	 */
-	if (dir == DMA_FROM_DEVICE)
-		memcpy(buffer, dma_addr, size);
-	else if (dir == DMA_TO_DEVICE)
-		memcpy(dma_addr, buffer, size);
-	else
+	switch (target) {
+	case SYNC_FOR_CPU:
+		if (likely(dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(buffer, dma_addr, size);
+		else if (dir != DMA_TO_DEVICE)
+			BUG();
+		break;
+	case SYNC_FOR_DEVICE:
+		if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
+			memcpy(dma_addr, buffer, size);
+		else if (dir != DMA_FROM_DEVICE)
+			BUG();
+		break;
+	default:
 		BUG();
+	}
 }
 
 void *
@@ -494,14 +509,14 @@ swiotlb_unmap_single(struct device *hwde
  */
 static inline void
 swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
-		    size_t size, int dir)
+		    size_t size, int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -510,14 +525,14 @@ void
 swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 			    size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
 }
 
 /*
@@ -525,14 +540,15 @@ swiotlb_sync_single_for_device(struct de
  */
 static inline void
 swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
-			  unsigned long offset, size_t size, int dir)
+			  unsigned long offset, size_t size,
+			  int dir, int target)
 {
 	char *dma_addr = phys_to_virt(dev_addr) + offset;
 
 	if (dir == DMA_NONE)
 		BUG();
 	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
+		sync_single(hwdev, dma_addr, size, dir, target);
 	else if (dir == DMA_FROM_DEVICE)
 		mark_clean(dma_addr, size);
 }
@@ -541,14 +557,16 @@ void
 swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
 				  unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
 				     unsigned long offset, size_t size, int dir)
 {
-	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir,
+				  SYNC_FOR_DEVICE);
 }
 
 /*
@@ -627,7 +645,7 @@ swiotlb_unmap_sg(struct device *hwdev, s
  */
 static inline void
 swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
-		int nelems, int dir)
+		int nelems, int dir, int target)
 {
 	int i;
 
@@ -637,21 +655,21 @@ swiotlb_sync_sg(struct device *hwdev, st
 	for (i = 0; i < nelems; i++, sg++)
 		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
 			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+				    sg->dma_length, dir, target);
 }
 
 void
 swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
 			int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
 }
 
 void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 3/6] swiotlb: support syncing sub-ranges of mappings
@ 2005-09-28 21:50               ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

This patch implements swiotlb_sync_single_range_for_{cpu,device}. This
is intended to support an x86_64 implementation of
dma_sync_single_range_for_{cpu,device}.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 include/asm-x86_64/swiotlb.h |    8 ++++++++
 lib/swiotlb.c                |   33 +++++++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/include/asm-x86_64/swiotlb.h b/include/asm-x86_64/swiotlb.h
--- a/include/asm-x86_64/swiotlb.h
+++ b/include/asm-x86_64/swiotlb.h
@@ -15,6 +15,14 @@ extern void swiotlb_sync_single_for_cpu(
 extern void swiotlb_sync_single_for_device(struct device *hwdev,
 					    dma_addr_t dev_addr,
 					    size_t size, int dir);
+extern void swiotlb_sync_single_range_for_cpu(struct device *hwdev,
+					      dma_addr_t dev_addr,
+					      unsigned long offset,
+					      size_t size, int dir);
+extern void swiotlb_sync_single_range_for_device(struct device *hwdev,
+						 dma_addr_t dev_addr,
+						 unsigned long offset,
+						 size_t size, int dir);
 extern void swiotlb_sync_sg_for_cpu(struct device *hwdev,
 				     struct scatterlist *sg, int nelems,
 				     int dir);
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -521,6 +521,37 @@ swiotlb_sync_single_for_device(struct de
 }
 
 /*
+ * Same as above, but for a sub-range of the mapping.
+ */
+static inline void
+swiotlb_sync_single_range(struct device *hwdev, dma_addr_t dev_addr,
+			  unsigned long offset, size_t size, int dir)
+{
+	char *dma_addr = phys_to_virt(dev_addr) + offset;
+
+	if (dir == DMA_NONE)
+		BUG();
+	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
+		sync_single(hwdev, dma_addr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
+		mark_clean(dma_addr, size);
+}
+
+void
+swiotlb_sync_single_range_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+				  unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+void
+swiotlb_sync_single_range_for_device(struct device *hwdev, dma_addr_t dev_addr,
+				     unsigned long offset, size_t size, int dir)
+{
+	swiotlb_sync_single_range(hwdev, dev_addr, offset, size, dir);
+}
+
+/*
  * Map a set of buffers described by scatterlist in streaming mode for DMA.
  * This is the scatter-gather version of the above swiotlb_map_single
  * interface.  Here the scatter gather list elements are each tagged with the
@@ -648,6 +679,8 @@ EXPORT_SYMBOL(swiotlb_map_sg);
 EXPORT_SYMBOL(swiotlb_unmap_sg);
 EXPORT_SYMBOL(swiotlb_sync_single_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_single_for_device);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_cpu);
+EXPORT_SYMBOL_GPL(swiotlb_sync_single_range_for_device);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_cpu);
 EXPORT_SYMBOL(swiotlb_sync_sg_for_device);
 EXPORT_SYMBOL(swiotlb_dma_mapping_error);

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 2/6] swiotlb: cleanup some code duplication cruft
@ 2005-09-28 21:50             ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

The implementations of swiotlb_sync_single_for_{cpu,device} are
identical. Likewise for swiotlb_sync_sg_for_{cpu,device}. This patch
moves the guts of those functions to two new inline functions, and
calls the appropriate one from the bodies of those functions.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |   45 ++++++++++++++++++++++-----------------------
 1 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -492,9 +492,9 @@ swiotlb_unmap_single(struct device *hwde
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
-void
-swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
-			    size_t size, int dir)
+static inline void
+swiotlb_sync_single(struct device *hwdev, dma_addr_t dev_addr,
+		    size_t size, int dir)
 {
 	char *dma_addr = phys_to_virt(dev_addr);
 
@@ -507,17 +507,17 @@ swiotlb_sync_single_for_cpu(struct devic
 }
 
 void
+swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
+			    size_t size, int dir)
+{
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
+}
+
+void
 swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
 			       size_t size, int dir)
 {
-	char *dma_addr = phys_to_virt(dev_addr);
-
-	if (dir == DMA_NONE)
-		BUG();
-	if (dma_addr >= io_tlb_start && dma_addr < io_tlb_end)
-		sync_single(hwdev, dma_addr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
-		mark_clean(dma_addr, size);
+	swiotlb_sync_single(hwdev, dev_addr, size, dir);
 }
 
 /*
@@ -594,9 +594,9 @@ swiotlb_unmap_sg(struct device *hwdev, s
  * The same as swiotlb_sync_single_* but for a scatter-gather list, same rules
  * and usage.
  */
-void
-swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
-			int nelems, int dir)
+static inline void
+swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sg,
+		int nelems, int dir)
 {
 	int i;
 
@@ -610,18 +610,17 @@ swiotlb_sync_sg_for_cpu(struct device *h
 }
 
 void
+swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
+			int nelems, int dir)
+{
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
+}
+
+void
 swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
 			   int nelems, int dir)
 {
-	int i;
-
-	if (dir == DMA_NONE)
-		BUG();
-
-	for (i = 0; i < nelems; i++, sg++)
-		if (sg->dma_address != SG_ENT_PHYS_ADDRESS(sg))
-			sync_single(hwdev, (void *) sg->dma_address,
-				    sg->dma_length, dir);
+	swiotlb_sync_sg(hwdev, sg, nelems, dir);
 }
 
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 5/6] swiotlb: file header comments
  2005-09-28 21:50                 ` John W. Linville
@ 2005-09-28 21:50                   ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Change comment at top of swiotlb.c to reflect that the code is shared
with EM64T (i.e. Intel x86_64). Also add an entry for myself so that
if I "broke it", everyone knows who "bought it"... :-)

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -1,7 +1,7 @@
 /*
  * Dynamic DMA mapping support.
  *
- * This implementation is for IA-64 platforms that do not support
+ * This implementation is for IA-64 and EM64T platforms that do not support
  * I/O TLBs (aka DMA address translation hardware).
  * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
  * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
@@ -11,7 +11,9 @@
  * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
  * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
  *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ * 04/07/.. ak		Better overflow handling. Assorted fixes.
+ * 05/09/10 linville	Add support for syncing ranges, support syncing for
+ *			DMA_BIDIRECTIONAL mappings, miscellaneous cleanup.
  */
 
 #include <linux/cache.h>

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device}
  2005-09-28 21:50                   ` John W. Linville
  (?)
@ 2005-09-28 21:50                   ` John W. Linville
  -1 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss; +Cc: ak

Re-implement dma_sync_single_range_for_{cpu,device} for x86_64 using
swiotlb_sync_single_range_for_{cpu,device}.

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 include/asm-x86_64/dma-mapping.h |   31 +++++++++++++++++++++++++++----
 1 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/include/asm-x86_64/dma-mapping.h b/include/asm-x86_64/dma-mapping.h
--- a/include/asm-x86_64/dma-mapping.h
+++ b/include/asm-x86_64/dma-mapping.h
@@ -85,10 +85,33 @@ static inline void dma_sync_single_for_d
 	flush_write_buffers();
 }
 
-#define dma_sync_single_range_for_cpu(dev, dma_handle, offset, size, dir)       \
-        dma_sync_single_for_cpu(dev, dma_handle, size, dir)
-#define dma_sync_single_range_for_device(dev, dma_handle, offset, size, dir)    \
-        dma_sync_single_for_device(dev, dma_handle, size, dir)
+static inline void dma_sync_single_range_for_cpu(struct device *hwdev,
+						 dma_addr_t dma_handle,
+						 unsigned long offset,
+						 size_t size, int direction)
+{
+	if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_range_for_cpu(hwdev,dma_handle,offset,size,direction);
+
+	flush_write_buffers();
+}
+
+static inline void dma_sync_single_range_for_device(struct device *hwdev,
+						    dma_addr_t dma_handle,
+						    unsigned long offset,
+						    size_t size, int direction)
+{
+        if (direction == DMA_NONE)
+		out_of_line_bug();
+
+	if (swiotlb)
+		return swiotlb_sync_single_range_for_device(hwdev,dma_handle,offset,size,direction);
+
+	flush_write_buffers();
+}
 
 static inline void dma_sync_sg_for_cpu(struct device *hwdev,
 				       struct scatterlist *sg,

^ permalink raw reply	[flat|nested] 99+ messages in thread

* [patch 2.6.14-rc2 5/6] swiotlb: file header comments
@ 2005-09-28 21:50                   ` John W. Linville
  0 siblings, 0 replies; 99+ messages in thread
From: John W. Linville @ 2005-09-28 21:50 UTC (permalink / raw)
  To: linux-kernel, discuss, linux-ia64, linux-pci
  Cc: ak, tony.luck, Asit.K.Mallick, gregkh

Change comment at top of swiotlb.c to reflect that the code is shared
with EM64T (i.e. Intel x86_64). Also add an entry for myself so that
if I "broke it", everyone knows who "bought it"... :-)

Signed-off-by: John W. Linville <linville@tuxdriver.com>
---

 lib/swiotlb.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -1,7 +1,7 @@
 /*
  * Dynamic DMA mapping support.
  *
- * This implementation is for IA-64 platforms that do not support
+ * This implementation is for IA-64 and EM64T platforms that do not support
  * I/O TLBs (aka DMA address translation hardware).
  * Copyright (C) 2000 Asit Mallick <Asit.K.Mallick@intel.com>
  * Copyright (C) 2000 Goutham Rao <goutham.rao@intel.com>
@@ -11,7 +11,9 @@
  * 03/05/07 davidm	Switch from PCI-DMA to generic device DMA API.
  * 00/12/13 davidm	Rename to swiotlb.c and add mark_clean() to avoid
  *			unnecessary i-cache flushing.
- * 04/07/.. ak          Better overflow handling. Assorted fixes.
+ * 04/07/.. ak		Better overflow handling. Assorted fixes.
+ * 05/09/10 linville	Add support for syncing ranges, support syncing for
+ *			DMA_BIDIRECTIONAL mappings, miscellaneous cleanup.
  */
 
 #include <linux/cache.h>

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
  2005-09-28 21:50         ` John W. Linville
@ 2005-09-29 22:42           ` Luck, Tony
  -1 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-09-29 22:42 UTC (permalink / raw)
  To: John W. Linville
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, Asit.K.Mallick, gregkh

On Wed, Sep 28, 2005 at 05:50:47PM -0400, John W. Linville wrote:
> This set does NOT include the PCI excision that Tony Luck started after the
> last round of discussion.

Here's my attempt at part 7of6 to complete the removal of references
to PCI.  This time I fixed all the comments and printk() messages.
I even checked that it compiles on x86_64/em64t.
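
(For what it's worth, the PCI_DMA_* -> DMA_* substitution should be purely
cosmetic: assuming the 2.6.14-era headers, the two sets of direction
constants are numerically identical,

	PCI_DMA_BIDIRECTIONAL == DMA_BIDIRECTIONAL == 0
	PCI_DMA_TODEVICE      == DMA_TO_DEVICE     == 1
	PCI_DMA_FROMDEVICE    == DMA_FROM_DEVICE   == 2
	PCI_DMA_NONE          == DMA_NONE          == 3

so any callers still passing the PCI names keep working.)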

There will be some small amount of fuzz in this patch as I started
applying from a base that included a patch from Alex Williamson.


Andi: earlier you had made positive noises about moving
swiotlb.c up to lib/ ... now that we have a specific implementation,
can you confirm that you are still OK with this?  If you are,
then I'll put this whole series into my test tree so it will
get into the next -mm release.

Signed-off-by: Tony Luck <tony.luck@intel.com>

---

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -17,17 +17,17 @@
  */
 
 #include <linux/cache.h>
+#include <linux/dma-mapping.h>
 #include <linux/mm.h>
 #include <linux/module.h>
-#include <linux/pci.h>
 #include <linux/spinlock.h>
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/ctype.h>
 
 #include <asm/io.h>
-#include <asm/pci.h>
 #include <asm/dma.h>
+#include <asm/scatterlist.h>
 
 #include <linux/init.h>
 #include <linux/bootmem.h>
@@ -127,7 +127,7 @@ __setup("swiotlb=", setup_io_tlb_npages)
 
 /*
  * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
+ * structures for the software IO TLB used to implement the DMA API.
  */
 void
 swiotlb_init_with_default_size (size_t default_size)
@@ -502,24 +502,24 @@ swiotlb_full(struct device *dev, size_t 
 	/*
 	 * Ran out of IOMMU space for this operation. This is very bad.
 	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
+	 * unless they check for dma_mapping_error (most don't)
 	 * When the mapping is small enough return a static buffer to limit
 	 * the damage, or panic when the transfer is too big.
 	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	printk(KERN_ERR "DMA: Out of SW-IOMMU space for %lu bytes at "
 	       "device %s\n", size, dev ? dev->bus_id : "?");
 
 	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
+		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
+			panic("DMA: Memory would be corrupted\n");
+		if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+			panic("DMA: Random memory would be DMAed\n");
 	}
 }
 
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
+ * physical address to use is returned.
  *
  * Once the device is given the dma address, the device owns this memory until
  * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
@@ -606,8 +606,8 @@ swiotlb_unmap_single(struct device *hwde
  * after a transfer.
  *
  * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
+ * using the cpu, yet do not wish to teardown the dma mapping, you must
+ * call this function before doing so.  At the next point you give the dma
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
@@ -783,9 +783,9 @@ swiotlb_dma_mapping_error(dma_addr_t dma
 }
 
 /*
- * Return whether the given PCI device DMA address mask can be supported
+ * Return whether the given device DMA address mask can be supported
  * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * during bus mastering, then you would pass 0x00ffffff as the mask to
  * this function.
  */
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

* Re: [patch 2.6.14-rc2 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device}
@ 2005-09-29 22:42           ` Luck, Tony
  0 siblings, 0 replies; 99+ messages in thread
From: Luck, Tony @ 2005-09-29 22:42 UTC (permalink / raw)
  To: John W. Linville
  Cc: linux-kernel, discuss, linux-ia64, linux-pci, ak, Asit.K.Mallick, gregkh

On Wed, Sep 28, 2005 at 05:50:47PM -0400, John W. Linville wrote:
> This set does NOT include the PCI excision that Tony Luck started after the
> last round of discussion.

Here's my attempt at part 7of6 to complete the removal of references
to PCI.  This time I fixed all the comments and printk() messages.
I even checked that it compiles on x86_64/em64t.

There will be some small amount of fuzz in this patch as I started
applying from a base that included a patch from Alex Williamson.


Andi: earlier you had made positive noises about moving
swiotlb.c up to lib/ ... now that we have a specific implementation,
can you confirm that you are still OK with this?  If you are,
then I'll put this whole series into my test tree so it will
get into the next -mm release.

Signed-off-by: Tony Luck <tony.luck@intel.com>

---

diff --git a/lib/swiotlb.c b/lib/swiotlb.c
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -17,17 +17,17 @@
  */
 
 #include <linux/cache.h>
+#include <linux/dma-mapping.h>
 #include <linux/mm.h>
 #include <linux/module.h>
-#include <linux/pci.h>
 #include <linux/spinlock.h>
 #include <linux/string.h>
 #include <linux/types.h>
 #include <linux/ctype.h>
 
 #include <asm/io.h>
-#include <asm/pci.h>
 #include <asm/dma.h>
+#include <asm/scatterlist.h>
 
 #include <linux/init.h>
 #include <linux/bootmem.h>
@@ -127,7 +127,7 @@ __setup("swiotlb=", setup_io_tlb_npages)
 
 /*
  * Statically reserve bounce buffer space and initialize bounce buffer data
- * structures for the software IO TLB used to implement the PCI DMA API.
+ * structures for the software IO TLB used to implement the DMA API.
  */
 void
 swiotlb_init_with_default_size (size_t default_size)
@@ -502,24 +502,24 @@ swiotlb_full(struct device *dev, size_t 
 	/*
 	 * Ran out of IOMMU space for this operation. This is very bad.
 	 * Unfortunately the drivers cannot handle this operation properly.
-	 * unless they check for pci_dma_mapping_error (most don't)
+	 * unless they check for dma_mapping_error (most don't)
 	 * When the mapping is small enough return a static buffer to limit
 	 * the damage, or panic when the transfer is too big.
 	 */
-	printk(KERN_ERR "PCI-DMA: Out of SW-IOMMU space for %lu bytes at "
+	printk(KERN_ERR "DMA: Out of SW-IOMMU space for %lu bytes at "
 	       "device %s\n", size, dev ? dev->bus_id : "?");
 
 	if (size > io_tlb_overflow && do_panic) {
-		if (dir == PCI_DMA_FROMDEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Memory would be corrupted\n");
-		if (dir == PCI_DMA_TODEVICE || dir == PCI_DMA_BIDIRECTIONAL)
-			panic("PCI-DMA: Random memory would be DMAed\n");
+		if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
+			panic("DMA: Memory would be corrupted\n");
+		if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+			panic("DMA: Random memory would be DMAed\n");
 	}
 }
 
 /*
  * Map a single buffer of the indicated size for DMA in streaming mode.  The
- * PCI address to use is returned.
+ * physical address to use is returned.
  *
  * Once the device is given the dma address, the device owns this memory until
  * either swiotlb_unmap_single or swiotlb_dma_sync_single is performed.
@@ -606,8 +606,8 @@ swiotlb_unmap_single(struct device *hwde
  * after a transfer.
  *
  * If you perform a swiotlb_map_single() but wish to interrogate the buffer
- * using the cpu, yet do not wish to teardown the PCI dma mapping, you must
- * call this function before doing so.  At the next point you give the PCI dma
+ * using the cpu, yet do not wish to teardown the dma mapping, you must
+ * call this function before doing so.  At the next point you give the dma
  * address back to the card, you must first perform a
  * swiotlb_dma_sync_for_device, and then the device again owns the buffer
  */
@@ -783,9 +783,9 @@ swiotlb_dma_mapping_error(dma_addr_t dma
 }
 
 /*
- * Return whether the given PCI device DMA address mask can be supported
+ * Return whether the given device DMA address mask can be supported
  * properly.  For example, if your device can only drive the low 24-bits
- * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * during bus mastering, then you would pass 0x00ffffff as the mask to
  * this function.
  */
 int

^ permalink raw reply	[flat|nested] 99+ messages in thread

end of thread, other threads:[~2005-09-29 22:42 UTC | newest]

Thread overview: 99+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-09-23 18:27 [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device} Luck, Tony
2005-09-23 18:27 ` Luck, Tony
2005-09-23 18:50 ` John W. Linville
2005-09-23 18:50   ` John W. Linville
2005-09-23 21:38   ` Grant Grundler
2005-09-23 21:38     ` Grant Grundler
2005-09-26  7:01 ` Andi Kleen
2005-09-26  7:01   ` Andi Kleen
2005-09-26 21:01 ` [patch 2.6.14-rc2 0/5] " John W. Linville
2005-09-26 21:01   ` John W. Linville
2005-09-26 21:01   ` [patch 2.6.14-rc2 1/5] swiotlb: move from arch/ia64/lib to drivers/pci John W. Linville
2005-09-26 21:01     ` John W. Linville
2005-09-26 21:01     ` [patch 2.6.14-rc2 2/5] swiotlb: cleanup some code duplication cruft John W. Linville
2005-09-26 21:01       ` John W. Linville
2005-09-26 21:01       ` [patch 2.6.14-rc2 3/5] swiotlb: support syncing sub-ranges of mappings John W. Linville
2005-09-26 21:01         ` John W. Linville
2005-09-26 21:01         ` [patch 2.6.14-rc2 4/5] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville
2005-09-26 21:01           ` John W. Linville
2005-09-26 21:01           ` [patch 2.6.14-rc2 5/5] swiotlb: file header comments John W. Linville
2005-09-26 21:01             ` John W. Linville
2005-09-26 21:33   ` [patch 2.6.14-rc2 0/5] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device} Matthew Wilcox
2005-09-26 21:33     ` Matthew Wilcox
2005-09-26 21:54     ` John W. Linville
2005-09-26 21:54       ` John W. Linville
  -- strict thread matches above, loose matches on Subject: below --
2005-09-26 22:08 Luck, Tony
2005-09-26 22:08 ` Luck, Tony
2005-09-26 22:46 ` Grant Grundler
2005-09-26 22:46   ` Grant Grundler
2005-09-27  0:14   ` John W. Linville
2005-09-27  0:14     ` John W. Linville
2005-09-27  2:47     ` Tony Luck
2005-09-27  2:47       ` Tony Luck
2005-09-28 21:50       ` [patch 2.6.14-rc2 0/6] " John W. Linville
2005-09-28 21:50         ` John W. Linville
2005-09-28 21:50         ` [patch 2.6.14-rc2 1/6] swiotlb: move from arch/ia64/lib/ to lib/ John W. Linville
2005-09-28 21:50           ` John W. Linville
2005-09-28 21:50           ` [patch 2.6.14-rc2 2/6] swiotlb: cleanup some code duplication cruft John W. Linville
2005-09-28 21:50             ` John W. Linville
2005-09-28 21:50             ` [patch 2.6.14-rc2 3/6] swiotlb: support syncing sub-ranges of mappings John W. Linville
2005-09-28 21:50               ` John W. Linville
2005-09-28 21:50               ` [patch 2.6.14-rc2 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville
2005-09-28 21:50                 ` John W. Linville
2005-09-28 21:50                 ` [patch 2.6.14-rc2 5/6] swiotlb: file header comments John W. Linville
2005-09-28 21:50                   ` John W. Linville
2005-09-28 21:50                   ` [patch 2.6.14-rc2 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
2005-09-29 22:42         ` [patch 2.6.14-rc2 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device} Luck, Tony
2005-09-29 22:42           ` Luck, Tony
2005-09-22 20:37 [patch 2.6.13 " Luck, Tony
2005-09-22 20:37 ` Luck, Tony
2005-09-22 20:41 ` Christoph Hellwig
2005-09-22 20:41   ` Christoph Hellwig
2005-09-23 18:22   ` John W. Linville
2005-09-23 18:22     ` John W. Linville
2005-09-23 18:31     ` Muli Ben-Yehuda
2005-09-23 18:31       ` Muli Ben-Yehuda
2005-08-30 18:03 [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device} Luck, Tony
2005-08-30 18:03 ` Luck, Tony
2005-08-30 18:09 ` John W. Linville
2005-08-30 18:09   ` John W. Linville
2005-08-30 18:33   ` [rfc patch] swiotlb: consolidate swiotlb_sync_single_* implementations John W. Linville
2005-08-30 18:33     ` John W. Linville
2005-08-30 18:40     ` [rfc patch] swiotlb: consolidate swiotlb_sync_sg_* implementations John W. Linville
2005-08-30 18:40       ` John W. Linville
2005-09-12 14:48   ` [patch 2.6.13 0/6] swiotlb maintenance and x86_64 dma_sync_single_range_for_{cpu,device} John W. Linville
2005-09-12 14:48     ` John W. Linville
2005-09-12 14:48     ` [patch 2.6.13 1/6] swiotlb: move from arch/ia64/lib to lib John W. Linville
2005-09-12 14:48       ` John W. Linville
2005-09-12 14:48       ` [patch 2.6.13 2/6] swiotlb: cleanup some code duplication cruft John W. Linville
2005-09-12 14:48         ` John W. Linville
2005-09-12 14:48         ` [patch 2.6.13 3/6] swiotlb: support syncing sub-ranges of mappings John W. Linville
2005-09-12 14:48           ` John W. Linville
2005-09-12 14:48           ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings John W. Linville
2005-09-12 14:48             ` John W. Linville
2005-09-12 14:48             ` [patch 2.6.13 5/6] swiotlb: file header comments John W. Linville
2005-09-12 14:48               ` John W. Linville
2005-09-12 14:48               ` [patch 2.6.13 6/6] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
2005-09-12 15:22                 ` Andi Kleen
2005-09-12 18:51             ` [patch 2.6.13 4/6] swiotlb: support syncing DMA_BIDIRECTIONAL mappings Grant Grundler
2005-09-12 18:51               ` Grant Grundler
2005-09-12 19:51               ` John W. Linville
2005-09-12 19:51                 ` John W. Linville
2005-09-12 19:53                 ` [patch 2.6.13] swiotlb: BUG() for DMA_NONE in sync_single John W. Linville
2005-09-12 19:53                   ` John W. Linville
2005-09-12 20:23                   ` Grant Grundler
2005-09-12 20:23                     ` Grant Grundler
2005-09-12 23:45                     ` [patch 2.6.13 (take #2)] " John W. Linville
2005-09-12 23:45                       ` John W. Linville
2005-09-12 23:59                       ` Grant Grundler
2005-09-12 23:59                         ` Grant Grundler
2005-09-13  4:05                       ` [discuss] " Andi Kleen
2005-09-13  4:05                         ` Andi Kleen
2005-08-29 20:09 [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
2005-08-29 20:54 ` Andi Kleen
2005-08-29 21:48   ` John W. Linville
2005-08-30  1:14     ` [discuss] " Andi Kleen
2005-08-30 17:54       ` John W. Linville
2005-08-30 17:58         ` [patch 2.6.13] swiotlb: add swiotlb_sync_single_range_for_{cpu,device} John W. Linville
2005-08-30 17:58           ` John W. Linville
2005-08-30 18:00           ` [patch 2.6.13] x86_64: implement dma_sync_single_range_for_{cpu,device} John W. Linville
