* [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops
@ 2009-01-05 14:36 FUJITA Tomonori
  2009-01-05 14:36 ` [PATCH 01/13] add map/unmap_single_attr and map/unmap_sg_attr to struct dma_mapping_ops FUJITA Tomonori
  2009-01-07 19:56 ` [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops Sam Ravnborg
  0 siblings, 2 replies; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, fujita.tomonori

This patchset is the first part of the unification of the ways to
handle multiple sets of dma mapping API. The whole work consists of
three patchsets. This one is for IA64 and can be applied independently.

The dma_mapping_ops (or dma_ops) struct is used to handle multiple
sets of dma mapping API by X86, SPARC, and POWER. IA64 also handles
multiple sets of dma mapping API, but in a very different way (some
#define magic).

X86 and IA64 share the VT-d and SWIOTLB code. We need several
workarounds for it because of the difference in how multiple sets of
dma mapping API are handled (e.g., X86 people can't freely change
struct dma_mapping_ops in x86's dma-mapping.h now because it could
break IA64). It seems POWER will use the SWIOTLB code soon. I think
that it's time to unify the ways to handle multiple sets of dma
mapping API. After applying the whole work, we have struct
dma_map_ops in include/linux/dma-mapping.h (I also dream of changing
all the archs to use SWIOTLB in order to remove the bounce code in
the block and network stacks...).

This patchset changes IA64 to handle multiple sets of dma mapping API
in the common way (as X86, SPARC, and POWER do):

- removing dma operation hooks in struct ia64_machine_vector.

- adding a global pointer to struct dma_mapping_ops, which points to
the appropriate set of dma mapping operations (VT-d, SBA, SN2,
SWIOTLB, or HWSW).
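
In rough terms, the dispatch changes like this (a simplified sketch of
the before/after; the real patches keep the _attrs variants and the
wrappers shown later in the series):

/* before: per-platform #define magic in the machvec headers */
#define dma_map_single(dev, addr, size, dir) \
        platform_dma_map_single_attrs(dev, addr, size, dir, NULL)

/* after: one global ops pointer, installed at boot by the platform */
extern struct dma_mapping_ops *dma_ops;

static inline dma_addr_t dma_map_single(struct device *dev, void *addr,
                                        size_t size, int dir)
{
        return dma_ops->map_single_attrs(dev, addr, size, dir, NULL);
}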


I have access to an HP IA64 box and tested this patchset with
CONFIG_IA64_GENERIC, CONFIG_IA64_HP_ZX1, and CONFIG_IA64_HP_ZX1_SWIOTLB.

---
 arch/ia64/dig/dig_vtd_iommu.c                 |   18 +++
 arch/ia64/hp/common/hwsw_iommu.c              |  165 ++----------------------
 arch/ia64/hp/common/sba_iommu.c               |   57 ++++++---
 arch/ia64/include/asm/dma-mapping.h           |  144 +++++++++++++++------
 arch/ia64/include/asm/machvec.h               |   99 ++-------------
 arch/ia64/include/asm/machvec_dig_vtd.h       |   20 ---
 arch/ia64/include/asm/machvec_hpzx1.h         |   23 +---
 arch/ia64/include/asm/machvec_hpzx1_swiotlb.h |   27 +----
 arch/ia64/include/asm/machvec_sn2.h           |   27 +----
 arch/ia64/kernel/Makefile                     |    4 +-
 arch/ia64/kernel/dma-mapping.c                |   10 ++
 arch/ia64/kernel/pci-dma.c                    |   11 +-
 arch/ia64/kernel/pci-swiotlb.c                |   15 ++-
 arch/ia64/sn/pci/pci_dma.c                    |   83 +++++++------
 14 files changed, 262 insertions(+), 441 deletions(-)




* [PATCH 01/13] add map/unmap_single_attr and map/unmap_sg_attr to struct dma_mapping_ops
  2009-01-05 14:36 [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops FUJITA Tomonori
@ 2009-01-05 14:36 ` FUJITA Tomonori
  2009-01-05 14:36   ` [PATCH 02/13] add dma_mapping_ops for SBA IOMMU FUJITA Tomonori
  2009-01-07 19:56 ` [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops Sam Ravnborg
  1 sibling, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This adds map/unmap_single_attrs and map/unmap_sg_attrs to struct
dma_mapping_ops. This enables us to move the dma operations in struct
ia64_machine_vector into struct dma_mapping_ops.

Note that map/unmap_single and map/unmap_sg will be removed later.

This is preparation for the struct dma_mapping_ops unification.
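
For reference, a plain hook can be expressed via the new _attrs hook
with a NULL attrs argument, which is why the plain hooks can go away
later (an illustrative sketch only; example_dma_ops is hypothetical,
not code from this patch):

/* sketch: a plain map_single expressed via the new map_single_attrs hook */
static dma_addr_t example_map_single(struct device *dev, void *cpu_addr,
                                     size_t size, int direction)
{
        extern struct dma_mapping_ops example_dma_ops;  /* hypothetical */

        return example_dma_ops.map_single_attrs(dev, cpu_addr, size,
                                                direction, NULL);
}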

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/include/asm/dma-mapping.h |   14 ++++++++++++++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/ia64/include/asm/dma-mapping.h b/arch/ia64/include/asm/dma-mapping.h
index bbab7e2..eeb2aa3 100644
--- a/arch/ia64/include/asm/dma-mapping.h
+++ b/arch/ia64/include/asm/dma-mapping.h
@@ -20,6 +20,13 @@ struct dma_mapping_ops {
 				size_t size, int direction);
 	void            (*unmap_single)(struct device *dev, dma_addr_t addr,
 				size_t size, int direction);
+	dma_addr_t      (*map_single_attrs)(struct device *dev, void *cpu_addr,
+					    size_t size, int direction,
+					    struct dma_attrs *attrs);
+	void		(*unmap_single_attrs)(struct device *dev,
+					      dma_addr_t dma_addr,
+					      size_t size, int direction,
+					      struct dma_attrs *attrs);
 	void            (*sync_single_for_cpu)(struct device *hwdev,
 				dma_addr_t dma_handle, size_t size,
 				int direction);
@@ -43,6 +50,13 @@ struct dma_mapping_ops {
 	void            (*unmap_sg)(struct device *hwdev,
 				struct scatterlist *sg, int nents,
 				int direction);
+	int             (*map_sg_attrs)(struct device *dev,
+					struct scatterlist *sg, int nents,
+					int direction, struct dma_attrs *attrs);
+	void            (*unmap_sg_attrs)(struct device *dev,
+					  struct scatterlist *sg, int nents,
+					  int direction,
+					  struct dma_attrs *attrs);
 	int             (*dma_supported_op)(struct device *hwdev, u64 mask);
 	int		is_phys;
 };
-- 
1.6.0.6



* [PATCH 02/13] add dma_mapping_ops for SBA IOMMU
  2009-01-05 14:36 ` [PATCH 01/13] add map/unmap_single_attr and map/unmap_sg_attr to struct dma_mapping_ops FUJITA Tomonori
@ 2009-01-05 14:36   ` FUJITA Tomonori
  2009-01-05 14:36     ` [PATCH 03/13] add dma_mapping_ops for SWIOTLB and " FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This is for IA64_HP_ZX1.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/hp/common/sba_iommu.c |   16 ++++++++++++++++
 1 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index d98f0f4..655b9a1 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -36,6 +36,7 @@
 #include <linux/bitops.h>         /* hweight64() */
 #include <linux/crash_dump.h>
 #include <linux/iommu-helper.h>
+#include <linux/dma-mapping.h>
 
 #include <asm/delay.h>		/* ia64_get_itc() */
 #include <asm/io.h>
@@ -2180,3 +2181,18 @@ EXPORT_SYMBOL(sba_dma_mapping_error);
 EXPORT_SYMBOL(sba_dma_supported);
 EXPORT_SYMBOL(sba_alloc_coherent);
 EXPORT_SYMBOL(sba_free_coherent);
+
+struct dma_mapping_ops sba_dma_ops = {
+	.alloc_coherent		= sba_alloc_coherent,
+	.free_coherent		= sba_free_coherent,
+	.map_single_attrs	= sba_map_single_attrs,
+	.unmap_single_attrs	= sba_unmap_single_attrs,
+	.map_sg_attrs		= sba_map_sg_attrs,
+	.unmap_sg_attrs		= sba_unmap_sg_attrs,
+	.sync_single_for_cpu	= machvec_dma_sync_single,
+	.sync_sg_for_cpu	= machvec_dma_sync_sg,
+	.sync_single_for_device	= machvec_dma_sync_single,
+	.sync_sg_for_device	= machvec_dma_sync_sg,
+	.dma_supported_op	= sba_dma_supported,
+	.mapping_error		= sba_dma_mapping_error,
+};
-- 
1.6.0.6



* [PATCH 03/13] add dma_mapping_ops for SWIOTLB and SBA IOMMU
  2009-01-05 14:36   ` [PATCH 02/13] add dma_mapping_ops for SBA IOMMU FUJITA Tomonori
@ 2009-01-05 14:36     ` FUJITA Tomonori
  2009-01-05 14:36       ` [PATCH 04/13] add dma_mapping_ops for intel-iommu FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This is for IA64_HP_ZX1_SWIOTLB.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/hp/common/hwsw_iommu.c |   17 ++++++++++++++++-
 1 files changed, 16 insertions(+), 1 deletions(-)

diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index 2769dbf..a40dcdd 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -13,8 +13,8 @@
  */
 
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
 #include <linux/swiotlb.h>
-
 #include <asm/machvec.h>
 
 /* swiotlb declarations & definitions: */
@@ -193,3 +193,18 @@ EXPORT_SYMBOL(hwsw_sync_single_for_cpu);
 EXPORT_SYMBOL(hwsw_sync_single_for_device);
 EXPORT_SYMBOL(hwsw_sync_sg_for_cpu);
 EXPORT_SYMBOL(hwsw_sync_sg_for_device);
+
+struct dma_mapping_ops hwsw_dma_ops = {
+	.alloc_coherent		= hwsw_alloc_coherent,
+	.free_coherent		= hwsw_free_coherent,
+	.map_single_attrs	= hwsw_map_single_attrs,
+	.unmap_single_attrs	= hwsw_unmap_single_attrs,
+	.map_sg_attrs		= hwsw_map_sg_attrs,
+	.unmap_sg_attrs		= hwsw_unmap_sg_attrs,
+	.sync_single_for_cpu	= hwsw_sync_single_for_cpu,
+	.sync_sg_for_cpu	= hwsw_sync_sg_for_cpu,
+	.sync_single_for_device	= hwsw_sync_single_for_device,
+	.sync_sg_for_device	= hwsw_sync_sg_for_device,
+	.dma_supported_op	= hwsw_dma_supported,
+	.mapping_error		= hwsw_dma_mapping_error,
+};
-- 
1.6.0.6



* [PATCH 04/13] add dma_mapping_ops for intel-iommu
  2009-01-05 14:36     ` [PATCH 03/13] add dma_mapping_ops for SWIOTLB and " FUJITA Tomonori
@ 2009-01-05 14:36       ` FUJITA Tomonori
  2009-01-05 14:36         ` [PATCH 05/13] add dma_mapping_ops for SGI Altix FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This is for IA64_DIG_VTD.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/dig/dig_vtd_iommu.c |   18 ++++++++++++++++++
 1 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/arch/ia64/dig/dig_vtd_iommu.c b/arch/ia64/dig/dig_vtd_iommu.c
index 1c8a079..fdb8ba9 100644
--- a/arch/ia64/dig/dig_vtd_iommu.c
+++ b/arch/ia64/dig/dig_vtd_iommu.c
@@ -1,6 +1,7 @@
 #include <linux/types.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/dma-mapping.h>
 #include <linux/intel-iommu.h>
 
 void *
@@ -57,3 +58,20 @@ vtd_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 	return 0;
 }
 EXPORT_SYMBOL_GPL(vtd_dma_mapping_error);
+
+extern int iommu_dma_supported(struct device *dev, u64 mask);
+
+struct dma_mapping_ops vtd_dma_ops = {
+	.alloc_coherent		= vtd_alloc_coherent,
+	.free_coherent		= vtd_free_coherent,
+	.map_single_attrs	= vtd_map_single_attrs,
+	.unmap_single_attrs	= vtd_unmap_single_attrs,
+	.map_sg_attrs		= vtd_map_sg_attrs,
+	.unmap_sg_attrs		= vtd_unmap_sg_attrs,
+	.sync_single_for_cpu	= machvec_dma_sync_single,
+	.sync_sg_for_cpu	= machvec_dma_sync_sg,
+	.sync_single_for_device	= machvec_dma_sync_single,
+	.sync_sg_for_device	= machvec_dma_sync_sg,
+	.dma_supported_op	= iommu_dma_supported,
+	.mapping_error		= vtd_dma_mapping_error,
+};
-- 
1.6.0.6



* [PATCH 05/13] add dma_mapping_ops for SGI Altix
  2009-01-05 14:36       ` [PATCH 04/13] add dma_mapping_ops for intel-iommu FUJITA Tomonori
@ 2009-01-05 14:36         ` FUJITA Tomonori
  2009-01-05 14:36           ` [PATCH 06/13] add dma_mapping_ops for SWIOTLB FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This is for IA64_SGI_SN2.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/sn/pci/pci_dma.c |   16 ++++++++++++++++
 1 files changed, 16 insertions(+), 0 deletions(-)

diff --git a/arch/ia64/sn/pci/pci_dma.c b/arch/ia64/sn/pci/pci_dma.c
index 53ebb64..4ad13ff 100644
--- a/arch/ia64/sn/pci/pci_dma.c
+++ b/arch/ia64/sn/pci/pci_dma.c
@@ -11,6 +11,7 @@
 
 #include <linux/module.h>
 #include <linux/dma-attrs.h>
+#include <linux/dma-mapping.h>
 #include <asm/dma.h>
 #include <asm/sn/intr.h>
 #include <asm/sn/pcibus_provider_defs.h>
@@ -465,3 +466,18 @@ int sn_pci_legacy_write(struct pci_bus *bus, u16 port, u32 val, u8 size)
  out:
 	return ret;
 }
+
+struct dma_mapping_ops sn_dma_ops = {
+	.alloc_coherent		= sn_dma_alloc_coherent,
+	.free_coherent		= sn_dma_free_coherent,
+	.map_single_attrs	= sn_dma_map_single_attrs,
+	.unmap_single_attrs	= sn_dma_unmap_single_attrs,
+	.map_sg_attrs		= sn_dma_map_sg_attrs,
+	.unmap_sg_attrs		= sn_dma_unmap_sg_attrs,
+	.sync_single_for_cpu 	= sn_dma_sync_single_for_cpu,
+	.sync_sg_for_cpu	= sn_dma_sync_sg_for_cpu,
+	.sync_single_for_device = sn_dma_sync_single_for_device,
+	.sync_sg_for_device	= sn_dma_sync_sg_for_device,
+	.mapping_error		= sn_dma_mapping_error,
+	.dma_supported_op	= sn_dma_supported,
+};
-- 
1.6.0.6



* [PATCH 06/13] add dma_mapping_ops for SWIOTLB
  2009-01-05 14:36         ` [PATCH 05/13] add dma_mapping_ops for SGI Altix FUJITA Tomonori
@ 2009-01-05 14:36           ` FUJITA Tomonori
  2009-01-05 14:36             ` [PATCH 07/13] set up dma_ops appropriately FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

There is already a dma_mapping_ops for SWIOTLB, but some hooks are
missing.

This is for IA64_DIG_VTD, IA64_HP_ZX1_SWIOTLB, IA64_SGI_UV,
IA64_HP_SIM, IA64_XEN_GUEST and IA64_GENERIC.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/kernel/Makefile      |    2 --
 arch/ia64/kernel/pci-dma.c     |    3 ---
 arch/ia64/kernel/pci-swiotlb.c |    9 ++++++++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile
index c381ea9..bc1f62a 100644
--- a/arch/ia64/kernel/Makefile
+++ b/arch/ia64/kernel/Makefile
@@ -43,9 +43,7 @@ ifneq ($(CONFIG_IA64_ESI),)
 obj-y				+= esi_stub.o	# must be in kernel proper
 endif
 obj-$(CONFIG_DMAR)		+= pci-dma.o
-ifeq ($(CONFIG_DMAR), y)
 obj-$(CONFIG_SWIOTLB)		+= pci-swiotlb.o
-endif
 
 # The gate DSO image is built using a special linker script.
 targets += gate.so gate-syms.o
diff --git a/arch/ia64/kernel/pci-dma.c b/arch/ia64/kernel/pci-dma.c
index 2a92f63..f8c38bd 100644
--- a/arch/ia64/kernel/pci-dma.c
+++ b/arch/ia64/kernel/pci-dma.c
@@ -32,9 +32,6 @@ int force_iommu __read_mostly = 1;
 int force_iommu __read_mostly;
 #endif
 
-/* Set this to 1 if there is a HW IOMMU in the system */
-int iommu_detected __read_mostly;
-
 /* Dummy device used for NULL arguments (normally ISA). Better would
    be probably a smaller DMA mask, but this is bug-to-bug compatible
    to i386. */
diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
index 16c5051..b62fb93 100644
--- a/arch/ia64/kernel/pci-swiotlb.c
+++ b/arch/ia64/kernel/pci-swiotlb.c
@@ -13,12 +13,18 @@
 int swiotlb __read_mostly;
 EXPORT_SYMBOL(swiotlb);
 
+/* Set this to 1 if there is a HW IOMMU in the system */
+int iommu_detected __read_mostly;
+
 struct dma_mapping_ops swiotlb_dma_ops = {
-	.mapping_error = swiotlb_dma_mapping_error,
 	.alloc_coherent = swiotlb_alloc_coherent,
 	.free_coherent = swiotlb_free_coherent,
 	.map_single = swiotlb_map_single,
 	.unmap_single = swiotlb_unmap_single,
+	.map_single_attrs = swiotlb_map_single_attrs,
+	.unmap_single_attrs = swiotlb_unmap_single_attrs,
+	.map_sg_attrs = swiotlb_map_sg_attrs,
+	.unmap_sg_attrs	= swiotlb_unmap_sg_attrs,
 	.sync_single_for_cpu = swiotlb_sync_single_for_cpu,
 	.sync_single_for_device = swiotlb_sync_single_for_device,
 	.sync_single_range_for_cpu = swiotlb_sync_single_range_for_cpu,
@@ -28,6 +34,7 @@ struct dma_mapping_ops swiotlb_dma_ops = {
 	.map_sg = swiotlb_map_sg,
 	.unmap_sg = swiotlb_unmap_sg,
 	.dma_supported_op = swiotlb_dma_supported,
+	.mapping_error = swiotlb_dma_mapping_error,
 };
 
 void __init pci_swiotlb_init(void)
-- 
1.6.0.6



* [PATCH 07/13] set up dma_ops appropriately
  2009-01-05 14:36           ` [PATCH 06/13] add dma_mapping_ops for SWIOTLB FUJITA Tomonori
@ 2009-01-05 14:36             ` FUJITA Tomonori
  2009-01-05 14:36               ` [PATCH 08/13] convert the DMA API to use dma_ops FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This patch introduces a global pointer, dma_ops, which points to the
appropriate dma_mapping_ops that the kernel should use. This is the
common way to handle multiple dma_mapping_ops (as X86, POWER, and
SPARC do).

dma_ops is set in platform_dma_init. We also set it by hand where
machvec_init is called via subsys_initcall.

- IA64_DIG_VTD uses vtd_dma_ops.
- IA64_HP_ZX1 uses sba_dma_ops.
- IA64_HP_ZX1_SWIOTLB uses hwsw_dma_ops.
- IA64_SGI_SN2 uses sn_dma_ops.
- The rest use swiotlb_dma_ops.
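
The pattern for each platform's dma init hook is then simply the
following (a minimal sketch; the concrete hooks are in the hunks
below, and the names here are hypothetical stand-ins for sba_dma_ops,
sn_dma_ops, and friends):

/* minimal sketch of a platform dma_init hook after this patch */
void __init example_platform_dma_init(void)     /* hypothetical name */
{
        dma_ops = &example_platform_dma_ops;    /* e.g. sba_dma_ops */
        /* plus any platform-specific setup, e.g. swiotlb_init() */
}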

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/hp/common/hwsw_iommu.c      |    3 +++
 arch/ia64/hp/common/sba_iommu.c       |    9 +++++++++
 arch/ia64/include/asm/machvec.h       |    4 +++-
 arch/ia64/include/asm/machvec_hpzx1.h |    3 ++-
 arch/ia64/include/asm/machvec_sn2.h   |    3 ++-
 arch/ia64/kernel/Makefile             |    2 +-
 arch/ia64/kernel/dma-mapping.c        |    4 ++++
 arch/ia64/kernel/pci-dma.c            |    6 +++---
 arch/ia64/kernel/pci-swiotlb.c        |    6 ++++++
 arch/ia64/sn/pci/pci_dma.c            |    5 +++++
 10 files changed, 38 insertions(+), 7 deletions(-)
 create mode 100644 arch/ia64/kernel/dma-mapping.c

diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index a40dcdd..22145de 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -56,9 +56,12 @@ use_swiotlb (struct device *dev)
 	return dev && dev->dma_mask && !hwiommu_dma_supported(dev, *dev->dma_mask);
 }
 
+struct dma_mapping_ops hwsw_dma_ops;
+
 void __init
 hwsw_init (void)
 {
+	dma_ops = &hwsw_dma_ops;
 	/* default to a smallish 2MB sw I/O TLB */
 	if (swiotlb_late_init_with_default_size (2 * (1<<20)) != 0) {
 #ifdef CONFIG_IA64_GENERIC
diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index 655b9a1..e82870a 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -2065,6 +2065,8 @@ static struct acpi_driver acpi_sba_ioc_driver = {
 	},
 };
 
+extern struct dma_mapping_ops swiotlb_dma_ops;
+
 static int __init
 sba_init(void)
 {
@@ -2078,6 +2080,7 @@ sba_init(void)
 	 * a successful kdump kernel boot is to use the swiotlb.
 	 */
 	if (is_kdump_kernel()) {
+		dma_ops = &swiotlb_dma_ops;
 		if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0)
 			panic("Unable to initialize software I/O TLB:"
 				  " Try machvec=dig boot option");
@@ -2093,6 +2096,7 @@ sba_init(void)
 		 * If we didn't find something sba_iommu can claim, we
 		 * need to setup the swiotlb and switch to the dig machvec.
 		 */
+		dma_ops = &swiotlb_dma_ops;
 		if (swiotlb_late_init_with_default_size(64 * (1<<20)) != 0)
 			panic("Unable to find SBA IOMMU or initialize "
 			      "software I/O TLB: Try machvec=dig boot option");
@@ -2196,3 +2200,8 @@ struct dma_mapping_ops sba_dma_ops = {
 	.dma_supported_op	= sba_dma_supported,
 	.mapping_error		= sba_dma_mapping_error,
 };
+
+void sba_dma_init(void)
+{
+	dma_ops = &sba_dma_ops;
+}
diff --git a/arch/ia64/include/asm/machvec.h b/arch/ia64/include/asm/machvec.h
index 59c17e4..d40722c 100644
--- a/arch/ia64/include/asm/machvec.h
+++ b/arch/ia64/include/asm/machvec.h
@@ -298,6 +298,8 @@ extern void machvec_init_from_cmdline(const char *cmdline);
 #  error Unknown configuration.  Update arch/ia64/include/asm/machvec.h.
 # endif /* CONFIG_IA64_GENERIC */
 
+extern void swiotlb_dma_init(void);
+
 /*
  * Define default versions so we can extend machvec for new platforms without having
  * to update the machvec files for all existing platforms.
@@ -328,7 +330,7 @@ extern void machvec_init_from_cmdline(const char *cmdline);
 # define platform_kernel_launch_event	machvec_noop
 #endif
 #ifndef platform_dma_init
-# define platform_dma_init		swiotlb_init
+# define platform_dma_init		swiotlb_dma_init
 #endif
 #ifndef platform_dma_alloc_coherent
 # define platform_dma_alloc_coherent	swiotlb_alloc_coherent
diff --git a/arch/ia64/include/asm/machvec_hpzx1.h b/arch/ia64/include/asm/machvec_hpzx1.h
index 2f57f51..dd4140b 100644
--- a/arch/ia64/include/asm/machvec_hpzx1.h
+++ b/arch/ia64/include/asm/machvec_hpzx1.h
@@ -2,6 +2,7 @@
 #define _ASM_IA64_MACHVEC_HPZX1_h
 
 extern ia64_mv_setup_t			dig_setup;
+extern ia64_mv_dma_init			sba_dma_init;
 extern ia64_mv_dma_alloc_coherent	sba_alloc_coherent;
 extern ia64_mv_dma_free_coherent	sba_free_coherent;
 extern ia64_mv_dma_map_single_attrs	sba_map_single_attrs;
@@ -20,7 +21,7 @@ extern ia64_mv_dma_mapping_error	sba_dma_mapping_error;
  */
 #define platform_name				"hpzx1"
 #define platform_setup				dig_setup
-#define platform_dma_init			machvec_noop
+#define platform_dma_init			sba_dma_init
 #define platform_dma_alloc_coherent		sba_alloc_coherent
 #define platform_dma_free_coherent		sba_free_coherent
 #define platform_dma_map_single_attrs		sba_map_single_attrs
diff --git a/arch/ia64/include/asm/machvec_sn2.h b/arch/ia64/include/asm/machvec_sn2.h
index 781308e..c1f6f87 100644
--- a/arch/ia64/include/asm/machvec_sn2.h
+++ b/arch/ia64/include/asm/machvec_sn2.h
@@ -55,6 +55,7 @@ extern ia64_mv_readb_t __sn_readb_relaxed;
 extern ia64_mv_readw_t __sn_readw_relaxed;
 extern ia64_mv_readl_t __sn_readl_relaxed;
 extern ia64_mv_readq_t __sn_readq_relaxed;
+extern ia64_mv_dma_init			sn_dma_init;
 extern ia64_mv_dma_alloc_coherent	sn_dma_alloc_coherent;
 extern ia64_mv_dma_free_coherent	sn_dma_free_coherent;
 extern ia64_mv_dma_map_single_attrs	sn_dma_map_single_attrs;
@@ -110,7 +111,7 @@ extern ia64_mv_pci_fixup_bus_t		sn_pci_fixup_bus;
 #define platform_pci_get_legacy_mem	sn_pci_get_legacy_mem
 #define platform_pci_legacy_read	sn_pci_legacy_read
 #define platform_pci_legacy_write	sn_pci_legacy_write
-#define platform_dma_init		machvec_noop
+#define platform_dma_init		sn_dma_init
 #define platform_dma_alloc_coherent	sn_dma_alloc_coherent
 #define platform_dma_free_coherent	sn_dma_free_coherent
 #define platform_dma_map_single_attrs	sn_dma_map_single_attrs
diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile
index bc1f62a..f2778f2 100644
--- a/arch/ia64/kernel/Makefile
+++ b/arch/ia64/kernel/Makefile
@@ -7,7 +7,7 @@ extra-y	:= head.o init_task.o vmlinux.lds
 obj-y := acpi.o entry.o efi.o efi_stub.o gate-data.o fsys.o ia64_ksyms.o irq.o irq_ia64.o	\
 	 irq_lsapic.o ivt.o machvec.o pal.o patch.o process.o perfmon.o ptrace.o sal.o		\
 	 salinfo.o setup.o signal.o sys_ia64.o time.o traps.o unaligned.o \
-	 unwind.o mca.o mca_asm.o topology.o
+	 unwind.o mca.o mca_asm.o topology.o dma-mapping.o
 
 obj-$(CONFIG_IA64_BRL_EMU)	+= brl_emu.o
 obj-$(CONFIG_IA64_GENERIC)	+= acpi-ext.o
diff --git a/arch/ia64/kernel/dma-mapping.c b/arch/ia64/kernel/dma-mapping.c
new file mode 100644
index 0000000..876665a
--- /dev/null
+++ b/arch/ia64/kernel/dma-mapping.c
@@ -0,0 +1,4 @@
+#include <linux/dma-mapping.h>
+
+struct dma_mapping_ops *dma_ops;
+EXPORT_SYMBOL(dma_ops);
diff --git a/arch/ia64/kernel/pci-dma.c b/arch/ia64/kernel/pci-dma.c
index f8c38bd..1c1224b 100644
--- a/arch/ia64/kernel/pci-dma.c
+++ b/arch/ia64/kernel/pci-dma.c
@@ -41,8 +41,11 @@ struct device fallback_dev = {
 	.dma_mask = &fallback_dev.coherent_dma_mask,
 };
 
+extern struct dma_mapping_ops vtd_dma_ops;
+
 void __init pci_iommu_alloc(void)
 {
+	dma_ops = &vtd_dma_ops;
 	/*
 	 * The order of these functions is important for
 	 * fall-back/fail-over reasons
@@ -76,9 +79,6 @@ iommu_dma_init(void)
 	return;
 }
 
-struct dma_mapping_ops *dma_ops;
-EXPORT_SYMBOL(dma_ops);
-
 int iommu_dma_supported(struct device *dev, u64 mask)
 {
 	struct dma_mapping_ops *ops = get_dma_ops(dev);
diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
index b62fb93..9f172c8 100644
--- a/arch/ia64/kernel/pci-swiotlb.c
+++ b/arch/ia64/kernel/pci-swiotlb.c
@@ -37,6 +37,12 @@ struct dma_mapping_ops swiotlb_dma_ops = {
 	.mapping_error = swiotlb_dma_mapping_error,
 };
 
+void swiotlb_dma_init(void)
+{
+	dma_ops = &swiotlb_dma_ops;
+	swiotlb_init();
+}
+
 void __init pci_swiotlb_init(void)
 {
 	if (!iommu_detected) {
diff --git a/arch/ia64/sn/pci/pci_dma.c b/arch/ia64/sn/pci/pci_dma.c
index 4ad13ff..174a74e 100644
--- a/arch/ia64/sn/pci/pci_dma.c
+++ b/arch/ia64/sn/pci/pci_dma.c
@@ -481,3 +481,8 @@ struct dma_mapping_ops sn_dma_ops = {
 	.mapping_error		= sn_dma_mapping_error,
 	.dma_supported_op	= sn_dma_supported,
 };
+
+void sn_dma_init(void)
+{
+	dma_ops = &sn_dma_ops;
+}
-- 
1.6.0.6



* [PATCH 08/13] convert the DMA API to use dma_ops
  2009-01-05 14:36             ` [PATCH 07/13] set up dma_ops appropriately FUJITA Tomonori
@ 2009-01-05 14:36               ` FUJITA Tomonori
  2009-01-05 14:36                 ` [PATCH 09/13] remove dma operations in struct ia64_machine_vector FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This rewrites asm/dma-mapping.h so that the DMA API calls go through
dma_ops.
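
From a driver's point of view nothing changes; a typical mapping
sequence like the one below (illustrative usage only; dev, buf and
len are hypothetical) now resolves through the global dma_ops:

/* illustrative driver-side usage of the converted API */
dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

if (dma_mapping_error(dev, handle))
        return -EIO;
/* ... device performs DMA to/from buf ... */
dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);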

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/include/asm/dma-mapping.h |  113 ++++++++++++++++++++++++-----------
 1 files changed, 77 insertions(+), 36 deletions(-)

diff --git a/arch/ia64/include/asm/dma-mapping.h b/arch/ia64/include/asm/dma-mapping.h
index eeb2aa3..5298f40 100644
--- a/arch/ia64/include/asm/dma-mapping.h
+++ b/arch/ia64/include/asm/dma-mapping.h
@@ -65,52 +65,92 @@ extern struct dma_mapping_ops *dma_ops;
 extern struct ia64_machine_vector ia64_mv;
 extern void set_iommu_machvec(void);
 
-#define dma_alloc_coherent(dev, size, handle, gfp)	\
-	platform_dma_alloc_coherent(dev, size, handle, (gfp) | GFP_DMA)
+static inline void *dma_alloc_coherent(struct device *dev, size_t size,
+				       dma_addr_t *daddr, gfp_t gfp)
+{
+	return dma_ops->alloc_coherent(dev, size, daddr, gfp | GFP_DMA);
+}
 
-/* coherent mem. is cheap */
-static inline void *
-dma_alloc_noncoherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
-		      gfp_t flag)
+static inline void dma_free_coherent(struct device *dev, size_t size,
+				     void *caddr, dma_addr_t daddr)
 {
-	return dma_alloc_coherent(dev, size, dma_handle, flag);
+	dma_ops->free_coherent(dev, size, caddr, daddr);
 }
-#define dma_free_coherent	platform_dma_free_coherent
-static inline void
-dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
-		     dma_addr_t dma_handle)
+
+#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
+#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
+
+static inline dma_addr_t dma_map_single_attrs(struct device *dev,
+					      void *caddr, size_t size,
+					      enum dma_data_direction dir,
+					      struct dma_attrs *attrs)
+{
+	return dma_ops->map_single_attrs(dev, caddr, size, dir, attrs);
+}
+
+static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t daddr,
+					  size_t size,
+					  enum dma_data_direction dir,
+					  struct dma_attrs *attrs)
 {
-	dma_free_coherent(dev, size, cpu_addr, dma_handle);
+	dma_ops->unmap_single_attrs(dev, daddr, size, dir, attrs);
 }
-#define dma_map_single_attrs	platform_dma_map_single_attrs
-static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
-					size_t size, int dir)
+
+#define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, NULL)
+#define dma_unmap_single(d, a, s, r) dma_unmap_single_attrs(d, a, s, r, NULL)
+
+static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
+				   int nents, enum dma_data_direction dir,
+				   struct dma_attrs *attrs)
 {
-	return dma_map_single_attrs(dev, cpu_addr, size, dir, NULL);
+	return dma_ops->map_sg_attrs(dev, sgl, nents, dir, attrs);
 }
-#define dma_map_sg_attrs	platform_dma_map_sg_attrs
-static inline int dma_map_sg(struct device *dev, struct scatterlist *sgl,
-			     int nents, int dir)
+
+static inline void dma_unmap_sg_attrs(struct device *dev,
+				      struct scatterlist *sgl, int nents,
+				      enum dma_data_direction dir,
+				      struct dma_attrs *attrs)
 {
-	return dma_map_sg_attrs(dev, sgl, nents, dir, NULL);
+	dma_ops->unmap_sg_attrs(dev, sgl, nents, dir, attrs);
 }
-#define dma_unmap_single_attrs	platform_dma_unmap_single_attrs
-static inline void dma_unmap_single(struct device *dev, dma_addr_t cpu_addr,
-				    size_t size, int dir)
+
+#define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL)
+#define dma_unmap_sg(d, s, n, r) dma_unmap_sg_attrs(d, s, n, r, NULL)
+
+static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t daddr,
+					   size_t size,
+					   enum dma_data_direction dir)
 {
-	return dma_unmap_single_attrs(dev, cpu_addr, size, dir, NULL);
+	dma_ops->sync_single_for_cpu(dev, daddr, size, dir);
 }
-#define dma_unmap_sg_attrs	platform_dma_unmap_sg_attrs
-static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
-				int nents, int dir)
+
+static inline void dma_sync_sg_for_cpu(struct device *dev,
+				       struct scatterlist *sgl,
+				       int nents, enum dma_data_direction dir)
 {
-	return dma_unmap_sg_attrs(dev, sgl, nents, dir, NULL);
+	dma_ops->sync_sg_for_cpu(dev, sgl, nents, dir);
+}
+
+static inline void dma_sync_single_for_device(struct device *dev,
+					      dma_addr_t daddr,
+					      size_t size,
+					      enum dma_data_direction dir)
+{
+	dma_ops->sync_single_for_device(dev, daddr, size, dir);
+}
+
+static inline void dma_sync_sg_for_device(struct device *dev,
+					  struct scatterlist *sgl,
+					  int nents,
+					  enum dma_data_direction dir)
+{
+	dma_ops->sync_sg_for_device(dev, sgl, nents, dir);
+}
+
+static inline int dma_mapping_error(struct device *dev, dma_addr_t daddr)
+{
+	return dma_ops->mapping_error(dev, daddr);
 }
-#define dma_sync_single_for_cpu	platform_dma_sync_single_for_cpu
-#define dma_sync_sg_for_cpu	platform_dma_sync_sg_for_cpu
-#define dma_sync_single_for_device platform_dma_sync_single_for_device
-#define dma_sync_sg_for_device	platform_dma_sync_sg_for_device
-#define dma_mapping_error	platform_dma_mapping_error
 
 #define dma_map_page(dev, pg, off, size, dir)				\
 	dma_map_single(dev, page_address(pg) + (off), (size), (dir))
@@ -127,7 +167,10 @@ static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sgl,
 #define dma_sync_single_range_for_device(dev, dma_handle, offset, size, dir)	\
 	dma_sync_single_for_device(dev, dma_handle, size, dir)
 
-#define dma_supported		platform_dma_supported
+static inline int dma_supported(struct device *dev, u64 mask)
+{
+	return dma_ops->dma_supported_op(dev, mask);
+}
 
 static inline int
 dma_set_mask (struct device *dev, u64 mask)
@@ -158,6 +201,4 @@ static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
 	return dma_ops;
 }
 
-
-
 #endif /* _ASM_IA64_DMA_MAPPING_H */
-- 
1.6.0.6



* [PATCH 09/13] remove dma operations in struct ia64_machine_vector
  2009-01-05 14:36               ` [PATCH 08/13] convert the DMA API to use dma_ops FUJITA Tomonori
@ 2009-01-05 14:36                 ` FUJITA Tomonori
  2009-01-05 14:36                   ` [PATCH 10/13] make sn DMA mapping functions static FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

We no longer need the dma operation hooks in struct
ia64_machine_vector. This also removes the now-unused ia64_mv_dma_*
typedefs.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/hp/common/hwsw_iommu.c              |   20 ++++--
 arch/ia64/include/asm/machvec.h               |   89 -------------------------
 arch/ia64/include/asm/machvec_dig_vtd.h       |   20 ------
 arch/ia64/include/asm/machvec_hpzx1.h         |   20 ------
 arch/ia64/include/asm/machvec_hpzx1_swiotlb.h |   25 -------
 arch/ia64/include/asm/machvec_sn2.h           |   24 -------
 6 files changed, 12 insertions(+), 186 deletions(-)

diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index 22145de..5cf750e 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -22,14 +22,18 @@ extern int swiotlb_late_init_with_default_size (size_t size);
 
 /* hwiommu declarations & definitions: */
 
-extern ia64_mv_dma_alloc_coherent	sba_alloc_coherent;
-extern ia64_mv_dma_free_coherent	sba_free_coherent;
-extern ia64_mv_dma_map_single_attrs	sba_map_single_attrs;
-extern ia64_mv_dma_unmap_single_attrs	sba_unmap_single_attrs;
-extern ia64_mv_dma_map_sg_attrs		sba_map_sg_attrs;
-extern ia64_mv_dma_unmap_sg_attrs	sba_unmap_sg_attrs;
-extern ia64_mv_dma_supported		sba_dma_supported;
-extern ia64_mv_dma_mapping_error	sba_dma_mapping_error;
+extern void *sba_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t);
+extern void sba_free_coherent (struct device *, size_t, void *, dma_addr_t);
+extern dma_addr_t sba_map_single_attrs(struct device *, void *, size_t, int,
+				       struct dma_attrs *);
+extern void sba_unmap_single_attrs(struct device *, dma_addr_t, size_t, int,
+				   struct dma_attrs *);
+extern int sba_map_sg_attrs(struct device *, struct scatterlist *, int, int,
+			    struct dma_attrs *);
+extern void sba_unmap_sg_attrs(struct device *, struct scatterlist *, int, int,
+			       struct dma_attrs *);
+extern int sba_dma_supported (struct device *, u64);
+extern int sba_dma_mapping_error(struct device *, dma_addr_t);
 
 #define hwiommu_alloc_coherent		sba_alloc_coherent
 #define hwiommu_free_coherent		sba_free_coherent
diff --git a/arch/ia64/include/asm/machvec.h b/arch/ia64/include/asm/machvec.h
index d40722c..6be3010 100644
--- a/arch/ia64/include/asm/machvec.h
+++ b/arch/ia64/include/asm/machvec.h
@@ -45,23 +45,6 @@ typedef void ia64_mv_kernel_launch_event_t(void);
 
 /* DMA-mapping interface: */
 typedef void ia64_mv_dma_init (void);
-typedef void *ia64_mv_dma_alloc_coherent (struct device *, size_t, dma_addr_t *, gfp_t);
-typedef void ia64_mv_dma_free_coherent (struct device *, size_t, void *, dma_addr_t);
-typedef dma_addr_t ia64_mv_dma_map_single (struct device *, void *, size_t, int);
-typedef void ia64_mv_dma_unmap_single (struct device *, dma_addr_t, size_t, int);
-typedef int ia64_mv_dma_map_sg (struct device *, struct scatterlist *, int, int);
-typedef void ia64_mv_dma_unmap_sg (struct device *, struct scatterlist *, int, int);
-typedef void ia64_mv_dma_sync_single_for_cpu (struct device *, dma_addr_t, size_t, int);
-typedef void ia64_mv_dma_sync_sg_for_cpu (struct device *, struct scatterlist *, int, int);
-typedef void ia64_mv_dma_sync_single_for_device (struct device *, dma_addr_t, size_t, int);
-typedef void ia64_mv_dma_sync_sg_for_device (struct device *, struct scatterlist *, int, int);
-typedef int ia64_mv_dma_mapping_error(struct device *, dma_addr_t dma_addr);
-typedef int ia64_mv_dma_supported (struct device *, u64);
-
-typedef dma_addr_t ia64_mv_dma_map_single_attrs (struct device *, void *, size_t, int, struct dma_attrs *);
-typedef void ia64_mv_dma_unmap_single_attrs (struct device *, dma_addr_t, size_t, int, struct dma_attrs *);
-typedef int ia64_mv_dma_map_sg_attrs (struct device *, struct scatterlist *, int, int, struct dma_attrs *);
-typedef void ia64_mv_dma_unmap_sg_attrs (struct device *, struct scatterlist *, int, int, struct dma_attrs *);
 
 /*
  * WARNING: The legacy I/O space is _architected_.  Platforms are
@@ -147,18 +130,6 @@ extern void machvec_tlb_migrate_finish (struct mm_struct *);
 #  define platform_global_tlb_purge	ia64_mv.global_tlb_purge
 #  define platform_tlb_migrate_finish	ia64_mv.tlb_migrate_finish
 #  define platform_dma_init		ia64_mv.dma_init
-#  define platform_dma_alloc_coherent	ia64_mv.dma_alloc_coherent
-#  define platform_dma_free_coherent	ia64_mv.dma_free_coherent
-#  define platform_dma_map_single_attrs	ia64_mv.dma_map_single_attrs
-#  define platform_dma_unmap_single_attrs	ia64_mv.dma_unmap_single_attrs
-#  define platform_dma_map_sg_attrs	ia64_mv.dma_map_sg_attrs
-#  define platform_dma_unmap_sg_attrs	ia64_mv.dma_unmap_sg_attrs
-#  define platform_dma_sync_single_for_cpu ia64_mv.dma_sync_single_for_cpu
-#  define platform_dma_sync_sg_for_cpu	ia64_mv.dma_sync_sg_for_cpu
-#  define platform_dma_sync_single_for_device ia64_mv.dma_sync_single_for_device
-#  define platform_dma_sync_sg_for_device ia64_mv.dma_sync_sg_for_device
-#  define platform_dma_mapping_error		ia64_mv.dma_mapping_error
-#  define platform_dma_supported	ia64_mv.dma_supported
 #  define platform_irq_to_vector	ia64_mv.irq_to_vector
 #  define platform_local_vector_to_irq	ia64_mv.local_vector_to_irq
 #  define platform_pci_get_legacy_mem	ia64_mv.pci_get_legacy_mem
@@ -201,18 +172,6 @@ struct ia64_machine_vector {
 	ia64_mv_global_tlb_purge_t *global_tlb_purge;
 	ia64_mv_tlb_migrate_finish_t *tlb_migrate_finish;
 	ia64_mv_dma_init *dma_init;
-	ia64_mv_dma_alloc_coherent *dma_alloc_coherent;
-	ia64_mv_dma_free_coherent *dma_free_coherent;
-	ia64_mv_dma_map_single_attrs *dma_map_single_attrs;
-	ia64_mv_dma_unmap_single_attrs *dma_unmap_single_attrs;
-	ia64_mv_dma_map_sg_attrs *dma_map_sg_attrs;
-	ia64_mv_dma_unmap_sg_attrs *dma_unmap_sg_attrs;
-	ia64_mv_dma_sync_single_for_cpu *dma_sync_single_for_cpu;
-	ia64_mv_dma_sync_sg_for_cpu *dma_sync_sg_for_cpu;
-	ia64_mv_dma_sync_single_for_device *dma_sync_single_for_device;
-	ia64_mv_dma_sync_sg_for_device *dma_sync_sg_for_device;
-	ia64_mv_dma_mapping_error *dma_mapping_error;
-	ia64_mv_dma_supported *dma_supported;
 	ia64_mv_irq_to_vector *irq_to_vector;
 	ia64_mv_local_vector_to_irq *local_vector_to_irq;
 	ia64_mv_pci_get_legacy_mem_t *pci_get_legacy_mem;
@@ -251,18 +210,6 @@ struct ia64_machine_vector {
 	platform_global_tlb_purge,		\
 	platform_tlb_migrate_finish,		\
 	platform_dma_init,			\
-	platform_dma_alloc_coherent,		\
-	platform_dma_free_coherent,		\
-	platform_dma_map_single_attrs,		\
-	platform_dma_unmap_single_attrs,	\
-	platform_dma_map_sg_attrs,		\
-	platform_dma_unmap_sg_attrs,		\
-	platform_dma_sync_single_for_cpu,	\
-	platform_dma_sync_sg_for_cpu,		\
-	platform_dma_sync_single_for_device,	\
-	platform_dma_sync_sg_for_device,	\
-	platform_dma_mapping_error,			\
-	platform_dma_supported,			\
 	platform_irq_to_vector,			\
 	platform_local_vector_to_irq,		\
 	platform_pci_get_legacy_mem,		\
@@ -332,42 +279,6 @@ extern void swiotlb_dma_init(void);
 #ifndef platform_dma_init
 # define platform_dma_init		swiotlb_dma_init
 #endif
-#ifndef platform_dma_alloc_coherent
-# define platform_dma_alloc_coherent	swiotlb_alloc_coherent
-#endif
-#ifndef platform_dma_free_coherent
-# define platform_dma_free_coherent	swiotlb_free_coherent
-#endif
-#ifndef platform_dma_map_single_attrs
-# define platform_dma_map_single_attrs	swiotlb_map_single_attrs
-#endif
-#ifndef platform_dma_unmap_single_attrs
-# define platform_dma_unmap_single_attrs	swiotlb_unmap_single_attrs
-#endif
-#ifndef platform_dma_map_sg_attrs
-# define platform_dma_map_sg_attrs	swiotlb_map_sg_attrs
-#endif
-#ifndef platform_dma_unmap_sg_attrs
-# define platform_dma_unmap_sg_attrs	swiotlb_unmap_sg_attrs
-#endif
-#ifndef platform_dma_sync_single_for_cpu
-# define platform_dma_sync_single_for_cpu	swiotlb_sync_single_for_cpu
-#endif
-#ifndef platform_dma_sync_sg_for_cpu
-# define platform_dma_sync_sg_for_cpu		swiotlb_sync_sg_for_cpu
-#endif
-#ifndef platform_dma_sync_single_for_device
-# define platform_dma_sync_single_for_device	swiotlb_sync_single_for_device
-#endif
-#ifndef platform_dma_sync_sg_for_device
-# define platform_dma_sync_sg_for_device	swiotlb_sync_sg_for_device
-#endif
-#ifndef platform_dma_mapping_error
-# define platform_dma_mapping_error		swiotlb_dma_mapping_error
-#endif
-#ifndef platform_dma_supported
-# define  platform_dma_supported	swiotlb_dma_supported
-#endif
 #ifndef platform_irq_to_vector
 # define platform_irq_to_vector		__ia64_irq_to_vector
 #endif
diff --git a/arch/ia64/include/asm/machvec_dig_vtd.h b/arch/ia64/include/asm/machvec_dig_vtd.h
index 3400b56..6ab1de5 100644
--- a/arch/ia64/include/asm/machvec_dig_vtd.h
+++ b/arch/ia64/include/asm/machvec_dig_vtd.h
@@ -2,14 +2,6 @@
 #define _ASM_IA64_MACHVEC_DIG_VTD_h
 
 extern ia64_mv_setup_t			dig_setup;
-extern ia64_mv_dma_alloc_coherent	vtd_alloc_coherent;
-extern ia64_mv_dma_free_coherent	vtd_free_coherent;
-extern ia64_mv_dma_map_single_attrs	vtd_map_single_attrs;
-extern ia64_mv_dma_unmap_single_attrs	vtd_unmap_single_attrs;
-extern ia64_mv_dma_map_sg_attrs		vtd_map_sg_attrs;
-extern ia64_mv_dma_unmap_sg_attrs	vtd_unmap_sg_attrs;
-extern ia64_mv_dma_supported		iommu_dma_supported;
-extern ia64_mv_dma_mapping_error	vtd_dma_mapping_error;
 extern ia64_mv_dma_init			pci_iommu_alloc;
 
 /*
@@ -22,17 +14,5 @@ extern ia64_mv_dma_init			pci_iommu_alloc;
 #define platform_name				"dig_vtd"
 #define platform_setup				dig_setup
 #define platform_dma_init			pci_iommu_alloc
-#define platform_dma_alloc_coherent		vtd_alloc_coherent
-#define platform_dma_free_coherent		vtd_free_coherent
-#define platform_dma_map_single_attrs		vtd_map_single_attrs
-#define platform_dma_unmap_single_attrs		vtd_unmap_single_attrs
-#define platform_dma_map_sg_attrs		vtd_map_sg_attrs
-#define platform_dma_unmap_sg_attrs		vtd_unmap_sg_attrs
-#define platform_dma_sync_single_for_cpu	machvec_dma_sync_single
-#define platform_dma_sync_sg_for_cpu		machvec_dma_sync_sg
-#define platform_dma_sync_single_for_device	machvec_dma_sync_single
-#define platform_dma_sync_sg_for_device		machvec_dma_sync_sg
-#define platform_dma_supported			iommu_dma_supported
-#define platform_dma_mapping_error		vtd_dma_mapping_error
 
 #endif /* _ASM_IA64_MACHVEC_DIG_VTD_h */
diff --git a/arch/ia64/include/asm/machvec_hpzx1.h b/arch/ia64/include/asm/machvec_hpzx1.h
index dd4140b..3bd83d7 100644
--- a/arch/ia64/include/asm/machvec_hpzx1.h
+++ b/arch/ia64/include/asm/machvec_hpzx1.h
@@ -3,14 +3,6 @@
 
 extern ia64_mv_setup_t			dig_setup;
 extern ia64_mv_dma_init			sba_dma_init;
-extern ia64_mv_dma_alloc_coherent	sba_alloc_coherent;
-extern ia64_mv_dma_free_coherent	sba_free_coherent;
-extern ia64_mv_dma_map_single_attrs	sba_map_single_attrs;
-extern ia64_mv_dma_unmap_single_attrs	sba_unmap_single_attrs;
-extern ia64_mv_dma_map_sg_attrs		sba_map_sg_attrs;
-extern ia64_mv_dma_unmap_sg_attrs	sba_unmap_sg_attrs;
-extern ia64_mv_dma_supported		sba_dma_supported;
-extern ia64_mv_dma_mapping_error	sba_dma_mapping_error;
 
 /*
  * This stuff has dual use!
@@ -22,17 +14,5 @@ extern ia64_mv_dma_mapping_error	sba_dma_mapping_error;
 #define platform_name				"hpzx1"
 #define platform_setup				dig_setup
 #define platform_dma_init			sba_dma_init
-#define platform_dma_alloc_coherent		sba_alloc_coherent
-#define platform_dma_free_coherent		sba_free_coherent
-#define platform_dma_map_single_attrs		sba_map_single_attrs
-#define platform_dma_unmap_single_attrs		sba_unmap_single_attrs
-#define platform_dma_map_sg_attrs		sba_map_sg_attrs
-#define platform_dma_unmap_sg_attrs		sba_unmap_sg_attrs
-#define platform_dma_sync_single_for_cpu	machvec_dma_sync_single
-#define platform_dma_sync_sg_for_cpu		machvec_dma_sync_sg
-#define platform_dma_sync_single_for_device	machvec_dma_sync_single
-#define platform_dma_sync_sg_for_device		machvec_dma_sync_sg
-#define platform_dma_supported			sba_dma_supported
-#define platform_dma_mapping_error		sba_dma_mapping_error
 
 #endif /* _ASM_IA64_MACHVEC_HPZX1_h */
diff --git a/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h b/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h
index a842cdd..48c3a35 100644
--- a/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h
+++ b/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h
@@ -2,18 +2,6 @@
 #define _ASM_IA64_MACHVEC_HPZX1_SWIOTLB_h
 
 extern ia64_mv_setup_t				dig_setup;
-extern ia64_mv_dma_alloc_coherent		hwsw_alloc_coherent;
-extern ia64_mv_dma_free_coherent		hwsw_free_coherent;
-extern ia64_mv_dma_map_single_attrs		hwsw_map_single_attrs;
-extern ia64_mv_dma_unmap_single_attrs		hwsw_unmap_single_attrs;
-extern ia64_mv_dma_map_sg_attrs			hwsw_map_sg_attrs;
-extern ia64_mv_dma_unmap_sg_attrs		hwsw_unmap_sg_attrs;
-extern ia64_mv_dma_supported			hwsw_dma_supported;
-extern ia64_mv_dma_mapping_error		hwsw_dma_mapping_error;
-extern ia64_mv_dma_sync_single_for_cpu		hwsw_sync_single_for_cpu;
-extern ia64_mv_dma_sync_sg_for_cpu		hwsw_sync_sg_for_cpu;
-extern ia64_mv_dma_sync_single_for_device	hwsw_sync_single_for_device;
-extern ia64_mv_dma_sync_sg_for_device		hwsw_sync_sg_for_device;
 
 /*
  * This stuff has dual use!
@@ -23,20 +11,7 @@ extern ia64_mv_dma_sync_sg_for_device		hwsw_sync_sg_for_device;
  * the macros are used directly.
  */
 #define platform_name				"hpzx1_swiotlb"
-
 #define platform_setup				dig_setup
 #define platform_dma_init			machvec_noop
-#define platform_dma_alloc_coherent		hwsw_alloc_coherent
-#define platform_dma_free_coherent		hwsw_free_coherent
-#define platform_dma_map_single_attrs		hwsw_map_single_attrs
-#define platform_dma_unmap_single_attrs		hwsw_unmap_single_attrs
-#define platform_dma_map_sg_attrs		hwsw_map_sg_attrs
-#define platform_dma_unmap_sg_attrs		hwsw_unmap_sg_attrs
-#define platform_dma_supported			hwsw_dma_supported
-#define platform_dma_mapping_error		hwsw_dma_mapping_error
-#define platform_dma_sync_single_for_cpu	hwsw_sync_single_for_cpu
-#define platform_dma_sync_sg_for_cpu		hwsw_sync_sg_for_cpu
-#define platform_dma_sync_single_for_device	hwsw_sync_single_for_device
-#define platform_dma_sync_sg_for_device		hwsw_sync_sg_for_device
 
 #endif /* _ASM_IA64_MACHVEC_HPZX1_SWIOTLB_h */
diff --git a/arch/ia64/include/asm/machvec_sn2.h b/arch/ia64/include/asm/machvec_sn2.h
index c1f6f87..afd029b 100644
--- a/arch/ia64/include/asm/machvec_sn2.h
+++ b/arch/ia64/include/asm/machvec_sn2.h
@@ -56,18 +56,6 @@ extern ia64_mv_readw_t __sn_readw_relaxed;
 extern ia64_mv_readl_t __sn_readl_relaxed;
 extern ia64_mv_readq_t __sn_readq_relaxed;
 extern ia64_mv_dma_init			sn_dma_init;
-extern ia64_mv_dma_alloc_coherent	sn_dma_alloc_coherent;
-extern ia64_mv_dma_free_coherent	sn_dma_free_coherent;
-extern ia64_mv_dma_map_single_attrs	sn_dma_map_single_attrs;
-extern ia64_mv_dma_unmap_single_attrs	sn_dma_unmap_single_attrs;
-extern ia64_mv_dma_map_sg_attrs		sn_dma_map_sg_attrs;
-extern ia64_mv_dma_unmap_sg_attrs	sn_dma_unmap_sg_attrs;
-extern ia64_mv_dma_sync_single_for_cpu	sn_dma_sync_single_for_cpu;
-extern ia64_mv_dma_sync_sg_for_cpu	sn_dma_sync_sg_for_cpu;
-extern ia64_mv_dma_sync_single_for_device sn_dma_sync_single_for_device;
-extern ia64_mv_dma_sync_sg_for_device	sn_dma_sync_sg_for_device;
-extern ia64_mv_dma_mapping_error	sn_dma_mapping_error;
-extern ia64_mv_dma_supported		sn_dma_supported;
 extern ia64_mv_migrate_t		sn_migrate;
 extern ia64_mv_kernel_launch_event_t	sn_kernel_launch_event;
 extern ia64_mv_setup_msi_irq_t		sn_setup_msi_irq;
@@ -112,18 +100,6 @@ extern ia64_mv_pci_fixup_bus_t		sn_pci_fixup_bus;
 #define platform_pci_legacy_read	sn_pci_legacy_read
 #define platform_pci_legacy_write	sn_pci_legacy_write
 #define platform_dma_init		sn_dma_init
-#define platform_dma_alloc_coherent	sn_dma_alloc_coherent
-#define platform_dma_free_coherent	sn_dma_free_coherent
-#define platform_dma_map_single_attrs	sn_dma_map_single_attrs
-#define platform_dma_unmap_single_attrs	sn_dma_unmap_single_attrs
-#define platform_dma_map_sg_attrs	sn_dma_map_sg_attrs
-#define platform_dma_unmap_sg_attrs	sn_dma_unmap_sg_attrs
-#define platform_dma_sync_single_for_cpu sn_dma_sync_single_for_cpu
-#define platform_dma_sync_sg_for_cpu	sn_dma_sync_sg_for_cpu
-#define platform_dma_sync_single_for_device sn_dma_sync_single_for_device
-#define platform_dma_sync_sg_for_device	sn_dma_sync_sg_for_device
-#define platform_dma_mapping_error		sn_dma_mapping_error
-#define platform_dma_supported		sn_dma_supported
 #define platform_migrate		sn_migrate
 #define platform_kernel_launch_event    sn_kernel_launch_event
 #ifdef CONFIG_PCI_MSI
-- 
1.6.0.6



* [PATCH 10/13] make sn DMA mapping functions static
  2009-01-05 14:36                 ` [PATCH 09/13] remove dma operations in struct ia64_machine_vector FUJITA Tomonori
@ 2009-01-05 14:36                   ` FUJITA Tomonori
  2009-01-05 14:36                     ` [PATCH 11/13] add dma_get_ops to struct ia64_machine_vector FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

Now we don't need to export the sn DMA mapping functions.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/sn/pci/pci_dma.c |   64 ++++++++++++++++++--------------------------
 1 files changed, 26 insertions(+), 38 deletions(-)

diff --git a/arch/ia64/sn/pci/pci_dma.c b/arch/ia64/sn/pci/pci_dma.c
index 174a74e..efdd694 100644
--- a/arch/ia64/sn/pci/pci_dma.c
+++ b/arch/ia64/sn/pci/pci_dma.c
@@ -32,7 +32,7 @@
  * this function.  Of course, SN only supports devices that have 32 or more
  * address bits when using the PMU.
  */
-int sn_dma_supported(struct device *dev, u64 mask)
+static int sn_dma_supported(struct device *dev, u64 mask)
 {
 	BUG_ON(dev->bus != &pci_bus_type);
 
@@ -40,7 +40,6 @@ int sn_dma_supported(struct device *dev, u64 mask)
 		return 0;
 	return 1;
 }
-EXPORT_SYMBOL(sn_dma_supported);
 
 /**
  * sn_dma_set_mask - set the DMA mask
@@ -76,8 +75,8 @@ EXPORT_SYMBOL(sn_dma_set_mask);
  * queue for a SCSI controller).  See Documentation/DMA-API.txt for
  * more information.
  */
-void *sn_dma_alloc_coherent(struct device *dev, size_t size,
-			    dma_addr_t * dma_handle, gfp_t flags)
+static void *sn_dma_alloc_coherent(struct device *dev, size_t size,
+				   dma_addr_t * dma_handle, gfp_t flags)
 {
 	void *cpuaddr;
 	unsigned long phys_addr;
@@ -125,7 +124,6 @@ void *sn_dma_alloc_coherent(struct device *dev, size_t size,
 
 	return cpuaddr;
 }
-EXPORT_SYMBOL(sn_dma_alloc_coherent);
 
 /**
  * sn_pci_free_coherent - free memory associated with coherent DMAable region
@@ -137,8 +135,8 @@ EXPORT_SYMBOL(sn_dma_alloc_coherent);
  * Frees the memory allocated by dma_alloc_coherent(), potentially unmapping
  * any associated IOMMU mappings.
  */
-void sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
-			  dma_addr_t dma_handle)
+static void sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
+				 dma_addr_t dma_handle)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
@@ -148,7 +146,6 @@ void sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
 	provider->dma_unmap(pdev, dma_handle, 0);
 	free_pages((unsigned long)cpu_addr, get_order(size));
 }
-EXPORT_SYMBOL(sn_dma_free_coherent);
 
 /**
  * sn_dma_map_single_attrs - map a single page for DMA
@@ -174,9 +171,9 @@ EXPORT_SYMBOL(sn_dma_free_coherent);
  * TODO: simplify our interface;
  *       figure out how to save dmamap handle so can use two step.
  */
-dma_addr_t sn_dma_map_single_attrs(struct device *dev, void *cpu_addr,
-				   size_t size, int direction,
-				   struct dma_attrs *attrs)
+static dma_addr_t sn_dma_map_single_attrs(struct device *dev, void *cpu_addr,
+					  size_t size, int direction,
+					  struct dma_attrs *attrs)
 {
 	dma_addr_t dma_addr;
 	unsigned long phys_addr;
@@ -202,7 +199,6 @@ dma_addr_t sn_dma_map_single_attrs(struct device *dev, void *cpu_addr,
 	}
 	return dma_addr;
 }
-EXPORT_SYMBOL(sn_dma_map_single_attrs);
 
 /**
  * sn_dma_unmap_single_attrs - unamp a DMA mapped page
@@ -216,9 +212,9 @@ EXPORT_SYMBOL(sn_dma_map_single_attrs);
  * by @dma_handle into the coherence domain.  On SN, we're always cache
  * coherent, so we just need to free any ATEs associated with this mapping.
  */
-void sn_dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
-			       size_t size, int direction,
-			       struct dma_attrs *attrs)
+static void sn_dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
+				      size_t size, int direction,
+				      struct dma_attrs *attrs)
 {
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct sn_pcibus_provider *provider = SN_PCIDEV_BUSPROVIDER(pdev);
@@ -227,7 +223,6 @@ void sn_dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
 
 	provider->dma_unmap(pdev, dma_addr, direction);
 }
-EXPORT_SYMBOL(sn_dma_unmap_single_attrs);
 
 /**
  * sn_dma_unmap_sg_attrs - unmap a DMA scatterlist
@@ -239,9 +234,9 @@ EXPORT_SYMBOL(sn_dma_unmap_single_attrs);
  *
  * Unmap a set of streaming mode DMA translations.
  */
-void sn_dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
-			   int nhwentries, int direction,
-			   struct dma_attrs *attrs)
+static void sn_dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
+				  int nhwentries, int direction,
+				  struct dma_attrs *attrs)
 {
 	int i;
 	struct pci_dev *pdev = to_pci_dev(dev);
@@ -256,7 +251,6 @@ void sn_dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
 		sg->dma_length = 0;
 	}
 }
-EXPORT_SYMBOL(sn_dma_unmap_sg_attrs);
 
 /**
  * sn_dma_map_sg_attrs - map a scatterlist for DMA
@@ -273,8 +267,8 @@ EXPORT_SYMBOL(sn_dma_unmap_sg_attrs);
  *
  * Maps each entry of @sg for DMA.
  */
-int sn_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
-			int nhwentries, int direction, struct dma_attrs *attrs)
+static int sn_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
+			       int nhwentries, int direction, struct dma_attrs *attrs)
 {
 	unsigned long phys_addr;
 	struct scatterlist *saved_sg = sgl, *sg;
@@ -321,41 +315,35 @@ int sn_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
 
 	return nhwentries;
 }
-EXPORT_SYMBOL(sn_dma_map_sg_attrs);
 
-void sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-				size_t size, int direction)
+static void sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
+				       size_t size, int direction)
 {
 	BUG_ON(dev->bus != &pci_bus_type);
 }
-EXPORT_SYMBOL(sn_dma_sync_single_for_cpu);
 
-void sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
-				   size_t size, int direction)
+static void sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
+					  size_t size, int direction)
 {
 	BUG_ON(dev->bus != &pci_bus_type);
 }
-EXPORT_SYMBOL(sn_dma_sync_single_for_device);
 
-void sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-			    int nelems, int direction)
+static void sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
+				   int nelems, int direction)
 {
 	BUG_ON(dev->bus != &pci_bus_type);
 }
-EXPORT_SYMBOL(sn_dma_sync_sg_for_cpu);
 
-void sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-			       int nelems, int direction)
+static void sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+				      int nelems, int direction)
 {
 	BUG_ON(dev->bus != &pci_bus_type);
 }
-EXPORT_SYMBOL(sn_dma_sync_sg_for_device);
 
-int sn_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+static int sn_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
 	return 0;
 }
-EXPORT_SYMBOL(sn_dma_mapping_error);
 
 char *sn_pci_get_legacy_mem(struct pci_bus *bus)
 {
@@ -467,7 +455,7 @@ int sn_pci_legacy_write(struct pci_bus *bus, u16 port, u32 val, u8 size)
 	return ret;
 }
 
-struct dma_mapping_ops sn_dma_ops = {
+static struct dma_mapping_ops sn_dma_ops = {
 	.alloc_coherent		= sn_dma_alloc_coherent,
 	.free_coherent		= sn_dma_free_coherent,
 	.map_single_attrs	= sn_dma_map_single_attrs,
-- 
1.6.0.6



* [PATCH 11/13] add dma_get_ops to struct ia64_machine_vector
  2009-01-05 14:36                   ` [PATCH 10/13] make sn DMA mapping functions static FUJITA Tomonori
@ 2009-01-05 14:36                     ` FUJITA Tomonori
  2009-01-05 14:36                       ` [PATCH 12/13] remove hwsw_dma_ops FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This adds a dma_get_ops hook to struct ia64_machine_vector. The
default implementation, dma_get_ops() in arch/ia64/kernel/dma-mapping.c,
simply returns the global dma_ops. This is in preparation for removing
hwsw_dma_ops.
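
For illustration only (this sketch is not part of the patch, and
example_map() is a hypothetical caller), the lookup path after this
change is roughly:

/* Hypothetical caller, for illustration only. */
static dma_addr_t example_map(struct device *dev, void *addr, size_t size)
{
	/*
	 * Generic kernels dispatch through ia64_mv.dma_get_ops; platforms
	 * that don't override platform_dma_get_ops fall back to the default
	 * dma_get_ops(), which simply returns the global dma_ops.
	 */
	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);

	return ops->map_single_attrs(dev, addr, size, DMA_BIDIRECTIONAL, NULL);
}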

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/include/asm/dma-mapping.h |   41 ++++++++++++++++++++--------------
 arch/ia64/include/asm/machvec.h     |    8 ++++++
 arch/ia64/kernel/dma-mapping.c      |    6 +++++
 arch/ia64/kernel/pci-dma.c          |    2 +-
 4 files changed, 39 insertions(+), 18 deletions(-)

diff --git a/arch/ia64/include/asm/dma-mapping.h b/arch/ia64/include/asm/dma-mapping.h
index 5298f40..bac3159 100644
--- a/arch/ia64/include/asm/dma-mapping.h
+++ b/arch/ia64/include/asm/dma-mapping.h
@@ -68,13 +68,15 @@ extern void set_iommu_machvec(void);
 static inline void *dma_alloc_coherent(struct device *dev, size_t size,
 				       dma_addr_t *daddr, gfp_t gfp)
 {
-	return dma_ops->alloc_coherent(dev, size, daddr, gfp | GFP_DMA);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	return ops->alloc_coherent(dev, size, daddr, gfp | GFP_DMA);
 }
 
 static inline void dma_free_coherent(struct device *dev, size_t size,
 				     void *caddr, dma_addr_t daddr)
 {
-	dma_ops->free_coherent(dev, size, caddr, daddr);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->free_coherent(dev, size, caddr, daddr);
 }
 
 #define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
@@ -85,7 +87,8 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev,
 					      enum dma_data_direction dir,
 					      struct dma_attrs *attrs)
 {
-	return dma_ops->map_single_attrs(dev, caddr, size, dir, attrs);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	return ops->map_single_attrs(dev, caddr, size, dir, attrs);
 }
 
 static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t daddr,
@@ -93,7 +96,8 @@ static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t daddr,
 					  enum dma_data_direction dir,
 					  struct dma_attrs *attrs)
 {
-	dma_ops->unmap_single_attrs(dev, daddr, size, dir, attrs);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->unmap_single_attrs(dev, daddr, size, dir, attrs);
 }
 
 #define dma_map_single(d, a, s, r) dma_map_single_attrs(d, a, s, r, NULL)
@@ -103,7 +107,8 @@ static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
 				   int nents, enum dma_data_direction dir,
 				   struct dma_attrs *attrs)
 {
-	return dma_ops->map_sg_attrs(dev, sgl, nents, dir, attrs);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	return ops->map_sg_attrs(dev, sgl, nents, dir, attrs);
 }
 
 static inline void dma_unmap_sg_attrs(struct device *dev,
@@ -111,7 +116,8 @@ static inline void dma_unmap_sg_attrs(struct device *dev,
 				      enum dma_data_direction dir,
 				      struct dma_attrs *attrs)
 {
-	dma_ops->unmap_sg_attrs(dev, sgl, nents, dir, attrs);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->unmap_sg_attrs(dev, sgl, nents, dir, attrs);
 }
 
 #define dma_map_sg(d, s, n, r) dma_map_sg_attrs(d, s, n, r, NULL)
@@ -121,14 +127,16 @@ static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t daddr,
 					   size_t size,
 					   enum dma_data_direction dir)
 {
-	dma_ops->sync_single_for_cpu(dev, daddr, size, dir);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->sync_single_for_cpu(dev, daddr, size, dir);
 }
 
 static inline void dma_sync_sg_for_cpu(struct device *dev,
 				       struct scatterlist *sgl,
 				       int nents, enum dma_data_direction dir)
 {
-	dma_ops->sync_sg_for_cpu(dev, sgl, nents, dir);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->sync_sg_for_cpu(dev, sgl, nents, dir);
 }
 
 static inline void dma_sync_single_for_device(struct device *dev,
@@ -136,7 +144,8 @@ static inline void dma_sync_single_for_device(struct device *dev,
 					      size_t size,
 					      enum dma_data_direction dir)
 {
-	dma_ops->sync_single_for_device(dev, daddr, size, dir);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->sync_single_for_device(dev, daddr, size, dir);
 }
 
 static inline void dma_sync_sg_for_device(struct device *dev,
@@ -144,12 +153,14 @@ static inline void dma_sync_sg_for_device(struct device *dev,
 					  int nents,
 					  enum dma_data_direction dir)
 {
-	dma_ops->sync_sg_for_device(dev, sgl, nents, dir);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	ops->sync_sg_for_device(dev, sgl, nents, dir);
 }
 
 static inline int dma_mapping_error(struct device *dev, dma_addr_t daddr)
 {
-	return dma_ops->mapping_error(dev, daddr);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	return ops->mapping_error(dev, daddr);
 }
 
 #define dma_map_page(dev, pg, off, size, dir)				\
@@ -169,7 +180,8 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t daddr)
 
 static inline int dma_supported(struct device *dev, u64 mask)
 {
-	return dma_ops->dma_supported_op(dev, mask);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
+	return ops->dma_supported_op(dev, mask);
 }
 
 static inline int
@@ -196,9 +208,4 @@ dma_cache_sync (struct device *dev, void *vaddr, size_t size,
 
 #define dma_is_consistent(d, h)	(1)	/* all we do is coherent memory... */
 
-static inline struct dma_mapping_ops *get_dma_ops(struct device *dev)
-{
-	return dma_ops;
-}
-
 #endif /* _ASM_IA64_DMA_MAPPING_H */
diff --git a/arch/ia64/include/asm/machvec.h b/arch/ia64/include/asm/machvec.h
index 6be3010..95e1708 100644
--- a/arch/ia64/include/asm/machvec.h
+++ b/arch/ia64/include/asm/machvec.h
@@ -45,6 +45,7 @@ typedef void ia64_mv_kernel_launch_event_t(void);
 
 /* DMA-mapping interface: */
 typedef void ia64_mv_dma_init (void);
+typedef struct dma_mapping_ops *ia64_mv_dma_get_ops(struct device *);
 
 /*
  * WARNING: The legacy I/O space is _architected_.  Platforms are
@@ -130,6 +131,7 @@ extern void machvec_tlb_migrate_finish (struct mm_struct *);
 #  define platform_global_tlb_purge	ia64_mv.global_tlb_purge
 #  define platform_tlb_migrate_finish	ia64_mv.tlb_migrate_finish
 #  define platform_dma_init		ia64_mv.dma_init
+#  define platform_dma_get_ops		ia64_mv.dma_get_ops
 #  define platform_irq_to_vector	ia64_mv.irq_to_vector
 #  define platform_local_vector_to_irq	ia64_mv.local_vector_to_irq
 #  define platform_pci_get_legacy_mem	ia64_mv.pci_get_legacy_mem
@@ -172,6 +174,7 @@ struct ia64_machine_vector {
 	ia64_mv_global_tlb_purge_t *global_tlb_purge;
 	ia64_mv_tlb_migrate_finish_t *tlb_migrate_finish;
 	ia64_mv_dma_init *dma_init;
+	ia64_mv_dma_get_ops *dma_get_ops;
 	ia64_mv_irq_to_vector *irq_to_vector;
 	ia64_mv_local_vector_to_irq *local_vector_to_irq;
 	ia64_mv_pci_get_legacy_mem_t *pci_get_legacy_mem;
@@ -210,6 +213,7 @@ struct ia64_machine_vector {
 	platform_global_tlb_purge,		\
 	platform_tlb_migrate_finish,		\
 	platform_dma_init,			\
+	platform_dma_get_ops,			\
 	platform_irq_to_vector,			\
 	platform_local_vector_to_irq,		\
 	platform_pci_get_legacy_mem,		\
@@ -246,6 +250,7 @@ extern void machvec_init_from_cmdline(const char *cmdline);
 # endif /* CONFIG_IA64_GENERIC */
 
 extern void swiotlb_dma_init(void);
+extern struct dma_mapping_ops *dma_get_ops(struct device *);
 
 /*
  * Define default versions so we can extend machvec for new platforms without having
@@ -279,6 +284,9 @@ extern void swiotlb_dma_init(void);
 #ifndef platform_dma_init
 # define platform_dma_init		swiotlb_dma_init
 #endif
+#ifndef platform_dma_get_ops
+# define platform_dma_get_ops		dma_get_ops
+#endif
 #ifndef platform_irq_to_vector
 # define platform_irq_to_vector		__ia64_irq_to_vector
 #endif
diff --git a/arch/ia64/kernel/dma-mapping.c b/arch/ia64/kernel/dma-mapping.c
index 876665a..427f696 100644
--- a/arch/ia64/kernel/dma-mapping.c
+++ b/arch/ia64/kernel/dma-mapping.c
@@ -2,3 +2,9 @@
 
 struct dma_mapping_ops *dma_ops;
 EXPORT_SYMBOL(dma_ops);
+
+struct dma_mapping_ops *dma_get_ops(struct device *dev)
+{
+	return dma_ops;
+}
+EXPORT_SYMBOL(dma_get_ops);
diff --git a/arch/ia64/kernel/pci-dma.c b/arch/ia64/kernel/pci-dma.c
index 1c1224b..640669e 100644
--- a/arch/ia64/kernel/pci-dma.c
+++ b/arch/ia64/kernel/pci-dma.c
@@ -81,7 +81,7 @@ iommu_dma_init(void)
 
 int iommu_dma_supported(struct device *dev, u64 mask)
 {
-	struct dma_mapping_ops *ops = get_dma_ops(dev);
+	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);
 
 	if (ops->dma_supported_op)
 		return ops->dma_supported_op(dev, mask);
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 12/13] remove hwsw_dma_ops
  2009-01-05 14:36                     ` [PATCH 11/13] add dma_get_ops to struct ia64_machine_vector FUJITA Tomonori
@ 2009-01-05 14:36                       ` FUJITA Tomonori
  2009-01-05 14:36                         ` [PATCH 13/13] make sba DMA mapping functions static FUJITA Tomonori
  0 siblings, 1 reply; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

This removes hwsw_dma_ops (and the hwsw_* functions).
hwsw_dma_get_ops selects either swiotlb_dma_ops or sba_dma_ops, as
appropriate for the device.
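
As an illustration only (not part of the patch; example_alloc() is a
hypothetical caller), the effect is that the DMA API now picks the ops
per device: a device whose dma_mask cannot cover the full low 32 bits
is routed to swiotlb_dma_ops, everything else to sba_dma_ops:

/* Hypothetical caller, for illustration only. */
static void *example_alloc(struct device *dev, size_t size, dma_addr_t *handle)
{
	/* on hpzx1_swiotlb kernels this resolves to hwsw_dma_get_ops() */
	struct dma_mapping_ops *ops = platform_dma_get_ops(dev);

	return ops->alloc_coherent(dev, size, handle, GFP_KERNEL);
}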

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/hp/common/hwsw_iommu.c              |  183 ++-----------------------
 arch/ia64/include/asm/machvec_hpzx1_swiotlb.h |    2 +
 2 files changed, 14 insertions(+), 171 deletions(-)

diff --git a/arch/ia64/hp/common/hwsw_iommu.c b/arch/ia64/hp/common/hwsw_iommu.c
index 5cf750e..e5bbeba 100644
--- a/arch/ia64/hp/common/hwsw_iommu.c
+++ b/arch/ia64/hp/common/hwsw_iommu.c
@@ -17,55 +17,33 @@
 #include <linux/swiotlb.h>
 #include <asm/machvec.h>
 
+extern struct dma_mapping_ops sba_dma_ops, swiotlb_dma_ops;
+
 /* swiotlb declarations & definitions: */
 extern int swiotlb_late_init_with_default_size (size_t size);
 
-/* hwiommu declarations & definitions: */
-
-extern void *sba_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t);
-extern void sba_free_coherent (struct device *, size_t, void *, dma_addr_t);
-extern dma_addr_t sba_map_single_attrs(struct device *, void *, size_t, int,
-				       struct dma_attrs *);
-extern void sba_unmap_single_attrs(struct device *, dma_addr_t, size_t, int,
-				   struct dma_attrs *);
-extern int sba_map_sg_attrs(struct device *, struct scatterlist *, int, int,
-			    struct dma_attrs *);
-extern void sba_unmap_sg_attrs(struct device *, struct scatterlist *, int, int,
-			       struct dma_attrs *);
-extern int sba_dma_supported (struct device *, u64);
-extern int sba_dma_mapping_error(struct device *, dma_addr_t);
-
-#define hwiommu_alloc_coherent		sba_alloc_coherent
-#define hwiommu_free_coherent		sba_free_coherent
-#define hwiommu_map_single_attrs	sba_map_single_attrs
-#define hwiommu_unmap_single_attrs	sba_unmap_single_attrs
-#define hwiommu_map_sg_attrs		sba_map_sg_attrs
-#define hwiommu_unmap_sg_attrs		sba_unmap_sg_attrs
-#define hwiommu_dma_supported		sba_dma_supported
-#define hwiommu_dma_mapping_error	sba_dma_mapping_error
-#define hwiommu_sync_single_for_cpu	machvec_dma_sync_single
-#define hwiommu_sync_sg_for_cpu		machvec_dma_sync_sg
-#define hwiommu_sync_single_for_device	machvec_dma_sync_single
-#define hwiommu_sync_sg_for_device	machvec_dma_sync_sg
-
-
 /*
  * Note: we need to make the determination of whether or not to use
  * the sw I/O TLB based purely on the device structure.  Anything else
  * would be unreliable or would be too intrusive.
  */
-static inline int
-use_swiotlb (struct device *dev)
+static inline int use_swiotlb(struct device *dev)
 {
-	return dev && dev->dma_mask && !hwiommu_dma_supported(dev, *dev->dma_mask);
+	return dev && dev->dma_mask &&
+		!sba_dma_ops.dma_supported_op(dev, *dev->dma_mask);
 }
 
-struct dma_mapping_ops hwsw_dma_ops;
+struct dma_mapping_ops *hwsw_dma_get_ops(struct device *dev)
+{
+	if (use_swiotlb(dev))
+		return &swiotlb_dma_ops;
+	return &sba_dma_ops;
+}
+EXPORT_SYMBOL(hwsw_dma_get_ops);
 
 void __init
 hwsw_init (void)
 {
-	dma_ops = &hwsw_dma_ops;
 	/* default to a smallish 2MB sw I/O TLB */
 	if (swiotlb_late_init_with_default_size (2 * (1<<20)) != 0) {
 #ifdef CONFIG_IA64_GENERIC
@@ -78,140 +56,3 @@ hwsw_init (void)
 #endif
 	}
 }
-
-void *
-hwsw_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flags)
-{
-	if (use_swiotlb(dev))
-		return swiotlb_alloc_coherent(dev, size, dma_handle, flags);
-	else
-		return hwiommu_alloc_coherent(dev, size, dma_handle, flags);
-}
-
-void
-hwsw_free_coherent (struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle)
-{
-	if (use_swiotlb(dev))
-		swiotlb_free_coherent(dev, size, vaddr, dma_handle);
-	else
-		hwiommu_free_coherent(dev, size, vaddr, dma_handle);
-}
-
-dma_addr_t
-hwsw_map_single_attrs(struct device *dev, void *addr, size_t size, int dir,
-		       struct dma_attrs *attrs)
-{
-	if (use_swiotlb(dev))
-		return swiotlb_map_single_attrs(dev, addr, size, dir, attrs);
-	else
-		return hwiommu_map_single_attrs(dev, addr, size, dir, attrs);
-}
-EXPORT_SYMBOL(hwsw_map_single_attrs);
-
-void
-hwsw_unmap_single_attrs(struct device *dev, dma_addr_t iova, size_t size,
-			 int dir, struct dma_attrs *attrs)
-{
-	if (use_swiotlb(dev))
-		return swiotlb_unmap_single_attrs(dev, iova, size, dir, attrs);
-	else
-		return hwiommu_unmap_single_attrs(dev, iova, size, dir, attrs);
-}
-EXPORT_SYMBOL(hwsw_unmap_single_attrs);
-
-int
-hwsw_map_sg_attrs(struct device *dev, struct scatterlist *sglist, int nents,
-		   int dir, struct dma_attrs *attrs)
-{
-	if (use_swiotlb(dev))
-		return swiotlb_map_sg_attrs(dev, sglist, nents, dir, attrs);
-	else
-		return hwiommu_map_sg_attrs(dev, sglist, nents, dir, attrs);
-}
-EXPORT_SYMBOL(hwsw_map_sg_attrs);
-
-void
-hwsw_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist, int nents,
-		     int dir, struct dma_attrs *attrs)
-{
-	if (use_swiotlb(dev))
-		return swiotlb_unmap_sg_attrs(dev, sglist, nents, dir, attrs);
-	else
-		return hwiommu_unmap_sg_attrs(dev, sglist, nents, dir, attrs);
-}
-EXPORT_SYMBOL(hwsw_unmap_sg_attrs);
-
-void
-hwsw_sync_single_for_cpu (struct device *dev, dma_addr_t addr, size_t size, int dir)
-{
-	if (use_swiotlb(dev))
-		swiotlb_sync_single_for_cpu(dev, addr, size, dir);
-	else
-		hwiommu_sync_single_for_cpu(dev, addr, size, dir);
-}
-
-void
-hwsw_sync_sg_for_cpu (struct device *dev, struct scatterlist *sg, int nelems, int dir)
-{
-	if (use_swiotlb(dev))
-		swiotlb_sync_sg_for_cpu(dev, sg, nelems, dir);
-	else
-		hwiommu_sync_sg_for_cpu(dev, sg, nelems, dir);
-}
-
-void
-hwsw_sync_single_for_device (struct device *dev, dma_addr_t addr, size_t size, int dir)
-{
-	if (use_swiotlb(dev))
-		swiotlb_sync_single_for_device(dev, addr, size, dir);
-	else
-		hwiommu_sync_single_for_device(dev, addr, size, dir);
-}
-
-void
-hwsw_sync_sg_for_device (struct device *dev, struct scatterlist *sg, int nelems, int dir)
-{
-	if (use_swiotlb(dev))
-		swiotlb_sync_sg_for_device(dev, sg, nelems, dir);
-	else
-		hwiommu_sync_sg_for_device(dev, sg, nelems, dir);
-}
-
-int
-hwsw_dma_supported (struct device *dev, u64 mask)
-{
-	if (hwiommu_dma_supported(dev, mask))
-		return 1;
-	return swiotlb_dma_supported(dev, mask);
-}
-
-int
-hwsw_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return hwiommu_dma_mapping_error(dev, dma_addr) ||
-		swiotlb_dma_mapping_error(dev, dma_addr);
-}
-
-EXPORT_SYMBOL(hwsw_dma_mapping_error);
-EXPORT_SYMBOL(hwsw_dma_supported);
-EXPORT_SYMBOL(hwsw_alloc_coherent);
-EXPORT_SYMBOL(hwsw_free_coherent);
-EXPORT_SYMBOL(hwsw_sync_single_for_cpu);
-EXPORT_SYMBOL(hwsw_sync_single_for_device);
-EXPORT_SYMBOL(hwsw_sync_sg_for_cpu);
-EXPORT_SYMBOL(hwsw_sync_sg_for_device);
-
-struct dma_mapping_ops hwsw_dma_ops = {
-	.alloc_coherent		= hwsw_alloc_coherent,
-	.free_coherent		= hwsw_free_coherent,
-	.map_single_attrs	= hwsw_map_single_attrs,
-	.unmap_single_attrs	= hwsw_unmap_single_attrs,
-	.map_sg_attrs		= hwsw_map_sg_attrs,
-	.unmap_sg_attrs		= hwsw_unmap_sg_attrs,
-	.sync_single_for_cpu	= hwsw_sync_single_for_cpu,
-	.sync_sg_for_cpu	= hwsw_sync_sg_for_cpu,
-	.sync_single_for_device	= hwsw_sync_single_for_device,
-	.sync_sg_for_device	= hwsw_sync_sg_for_device,
-	.dma_supported_op	= hwsw_dma_supported,
-	.mapping_error		= hwsw_dma_mapping_error,
-};
diff --git a/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h b/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h
index 48c3a35..1091ac3 100644
--- a/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h
+++ b/arch/ia64/include/asm/machvec_hpzx1_swiotlb.h
@@ -2,6 +2,7 @@
 #define _ASM_IA64_MACHVEC_HPZX1_SWIOTLB_h
 
 extern ia64_mv_setup_t				dig_setup;
+extern ia64_mv_dma_get_ops			hwsw_dma_get_ops;
 
 /*
  * This stuff has dual use!
@@ -13,5 +14,6 @@ extern ia64_mv_setup_t				dig_setup;
 #define platform_name				"hpzx1_swiotlb"
 #define platform_setup				dig_setup
 #define platform_dma_init			machvec_noop
+#define platform_dma_get_ops			hwsw_dma_get_ops
 
 #endif /* _ASM_IA64_MACHVEC_HPZX1_SWIOTLB_h */
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 13/13] make sba DMA mapping functions static
  2009-01-05 14:36                       ` [PATCH 12/13] remove hwsw_dma_ops FUJITA Tomonori
@ 2009-01-05 14:36                         ` FUJITA Tomonori
  0 siblings, 0 replies; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-05 14:36 UTC (permalink / raw)
  To: tony.luck; +Cc: linux-ia64, linux-kernel, FUJITA Tomonori

Now that these DMA mapping functions are referenced only through
sba_dma_ops, we don't need to export them.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---
 arch/ia64/hp/common/sba_iommu.c |   34 ++++++++++++----------------------
 1 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
index e82870a..29e7206 100644
--- a/arch/ia64/hp/common/sba_iommu.c
+++ b/arch/ia64/hp/common/sba_iommu.c
@@ -909,7 +909,7 @@ sba_mark_invalid(struct ioc *ioc, dma_addr_t iova, size_t byte_cnt)
  *
  * See Documentation/DMA-mapping.txt
  */
-dma_addr_t
+static dma_addr_t
 sba_map_single_attrs(struct device *dev, void *addr, size_t size, int dir,
 		     struct dma_attrs *attrs)
 {
@@ -991,7 +991,6 @@ sba_map_single_attrs(struct device *dev, void *addr, size_t size, int dir,
 #endif
 	return SBA_IOVA(ioc, iovp, offset);
 }
-EXPORT_SYMBOL(sba_map_single_attrs);
 
 #ifdef ENABLE_MARK_CLEAN
 static SBA_INLINE void
@@ -1027,8 +1026,8 @@ sba_mark_clean(struct ioc *ioc, dma_addr_t iova, size_t size)
  *
  * See Documentation/DMA-mapping.txt
  */
-void sba_unmap_single_attrs(struct device *dev, dma_addr_t iova, size_t size,
-			    int dir, struct dma_attrs *attrs)
+static void sba_unmap_single_attrs(struct device *dev, dma_addr_t iova, size_t size,
+				   int dir, struct dma_attrs *attrs)
 {
 	struct ioc *ioc;
 #if DELAYED_RESOURCE_CNT > 0
@@ -1095,7 +1094,6 @@ void sba_unmap_single_attrs(struct device *dev, dma_addr_t iova, size_t size,
 	spin_unlock_irqrestore(&ioc->res_lock, flags);
 #endif /* DELAYED_RESOURCE_CNT == 0 */
 }
-EXPORT_SYMBOL(sba_unmap_single_attrs);
 
 /**
  * sba_alloc_coherent - allocate/map shared mem for DMA
@@ -1105,7 +1103,7 @@ EXPORT_SYMBOL(sba_unmap_single_attrs);
  *
  * See Documentation/DMA-mapping.txt
  */
-void *
+static void *
 sba_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flags)
 {
 	struct ioc *ioc;
@@ -1168,7 +1166,8 @@ sba_alloc_coherent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp
  *
  * See Documentation/DMA-mapping.txt
  */
-void sba_free_coherent (struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle)
+static void sba_free_coherent (struct device *dev, size_t size, void *vaddr,
+			       dma_addr_t dma_handle)
 {
 	sba_unmap_single_attrs(dev, dma_handle, size, 0, NULL);
 	free_pages((unsigned long) vaddr, get_order(size));
@@ -1423,8 +1422,8 @@ sba_coalesce_chunks(struct ioc *ioc, struct device *dev,
  *
  * See Documentation/DMA-mapping.txt
  */
-int sba_map_sg_attrs(struct device *dev, struct scatterlist *sglist, int nents,
-		     int dir, struct dma_attrs *attrs)
+static int sba_map_sg_attrs(struct device *dev, struct scatterlist *sglist,
+			    int nents, int dir, struct dma_attrs *attrs)
 {
 	struct ioc *ioc;
 	int coalesced, filled = 0;
@@ -1503,7 +1502,6 @@ int sba_map_sg_attrs(struct device *dev, struct scatterlist *sglist, int nents,
 
 	return filled;
 }
-EXPORT_SYMBOL(sba_map_sg_attrs);
 
 /**
  * sba_unmap_sg_attrs - unmap Scatter/Gather list
@@ -1515,8 +1513,8 @@ EXPORT_SYMBOL(sba_map_sg_attrs);
  *
  * See Documentation/DMA-mapping.txt
  */
-void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist,
-			int nents, int dir, struct dma_attrs *attrs)
+static void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist,
+			       int nents, int dir, struct dma_attrs *attrs)
 {
 #ifdef ASSERT_PDIR_SANITY
 	struct ioc *ioc;
@@ -1552,7 +1550,6 @@ void sba_unmap_sg_attrs(struct device *dev, struct scatterlist *sglist,
 #endif
 
 }
-EXPORT_SYMBOL(sba_unmap_sg_attrs);
 
 /**************************************************************
 *
@@ -2143,15 +2140,13 @@ nosbagart(char *str)
 	return 1;
 }
 
-int
-sba_dma_supported (struct device *dev, u64 mask)
+static int sba_dma_supported (struct device *dev, u64 mask)
 {
 	/* make sure it's at least 32bit capable */
 	return ((mask & 0xFFFFFFFFUL) == 0xFFFFFFFFUL);
 }
 
-int
-sba_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+static int sba_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 {
 	return 0;
 }
@@ -2181,11 +2176,6 @@ sba_page_override(char *str)
 
 __setup("sbapagesize=",sba_page_override);
 
-EXPORT_SYMBOL(sba_dma_mapping_error);
-EXPORT_SYMBOL(sba_dma_supported);
-EXPORT_SYMBOL(sba_alloc_coherent);
-EXPORT_SYMBOL(sba_free_coherent);
-
 struct dma_mapping_ops sba_dma_ops = {
 	.alloc_coherent		= sba_alloc_coherent,
 	.free_coherent		= sba_free_coherent,
-- 
1.6.0.6


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops
  2009-01-05 14:36 [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops FUJITA Tomonori
  2009-01-05 14:36 ` [PATCH 01/13] add map/unmap_single_attr and map/unmap_sg_attr to struct dma_mapping_ops FUJITA Tomonori
@ 2009-01-07 19:56 ` Sam Ravnborg
  2009-01-08  1:55   ` FUJITA Tomonori
  1 sibling, 1 reply; 16+ messages in thread
From: Sam Ravnborg @ 2009-01-07 19:56 UTC (permalink / raw)
  To: FUJITA Tomonori; +Cc: tony.luck, linux-ia64, linux-kernel, David S. Miller

On Mon, Jan 05, 2009 at 11:36:05PM +0900, FUJITA Tomonori wrote:
> This patchset is the first part of the unification of ways to handle
> multiple sets of dma mapping API. The whole work consists of three
> patchset. This is for IA64 and can be applied independently.
> 
> dma_mapping_ops (or dma_ops) struct is used to handle multiple sets of
> dma mapping API by X86, SPARC, and POWER. IA64 also handle multiple
> sets of dma mapping API but in a very different way (some define
> magic).
> 
> X86 and IA64 share VT-d and SWIOTLB code. We need several workarounds
> for it because of the deference of ways to handle multiple sets of dma
> mapping API (e.g., X86 people can't freely change struct
> dma_mapping_ops in x86's dma-mapping.h now because it could break
> IA64).  Seems POWER will use SWIOTLB code soon. I think that it's time
> to unify ways to handle multiple sets of dma mapping API. After
> applying the whole work, we have struct dma_map_ops
> include/linux/dma-mapping.h (I also dream of changing all the archs to
> use SWIOTLB in order to remove the bounce code in the block and
> network stacks...).
> 
> This patchset changes IA64 to handle multiple sets of dma mapping API
> in the common way (as X86, SPARC, and POWER do):

Do you have any plans to update sparc too?
Maybe it is not relevant.

	Sam

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops
  2009-01-07 19:56 ` [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops Sam Ravnborg
@ 2009-01-08  1:55   ` FUJITA Tomonori
  0 siblings, 0 replies; 16+ messages in thread
From: FUJITA Tomonori @ 2009-01-08  1:55 UTC (permalink / raw)
  To: sam; +Cc: fujita.tomonori, tony.luck, linux-ia64, linux-kernel, davem

On Wed, 7 Jan 2009 20:56:46 +0100
Sam Ravnborg <sam@ravnborg.org> wrote:

> On Mon, Jan 05, 2009 at 11:36:05PM +0900, FUJITA Tomonori wrote:
> > This patchset is the first part of the unification of ways to handle
> > multiple sets of dma mapping API. The whole work consists of three
> > patchset. This is for IA64 and can be applied independently.
> > 
> > dma_mapping_ops (or dma_ops) struct is used to handle multiple sets of
> > dma mapping API by X86, SPARC, and POWER. IA64 also handle multiple
> > sets of dma mapping API but in a very different way (some define
> > magic).
> > 
> > X86 and IA64 share VT-d and SWIOTLB code. We need several workarounds
> > for it because of the deference of ways to handle multiple sets of dma
> > mapping API (e.g., X86 people can't freely change struct
> > dma_mapping_ops in x86's dma-mapping.h now because it could break
> > IA64).  Seems POWER will use SWIOTLB code soon. I think that it's time
> > to unify ways to handle multiple sets of dma mapping API. After
> > applying the whole work, we have struct dma_map_ops
> > include/linux/dma-mapping.h (I also dream of changing all the archs to
> > use SWIOTLB in order to remove the bounce code in the block and
> > network stacks...).
> > 
> > This patchset changes IA64 to handle multiple sets of dma mapping API
> > in the common way (as X86, SPARC, and POWER do):
> 
> Do you have any plans to update sparc too?
> Maybe it is not relevant.

I'll do it after finishing IA64 and X86. It's not a huge gain since SPARC
doesn't share IOMMU code with other architectures, but I think it
would be nice to remove arch/sparc/include/asm/dma-mapping_64.h.

The long-term goal is to remove the bounce code from the network and
block stacks, and from some drivers (using something like SWIOTLB on
non-IOMMU systems instead). So I plan to convert all the architectures
to use this generic mechanism (though I'm not sure all the arch
maintainers agree with it).

Thanks,

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2009-01-08  1:55 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-01-05 14:36 [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops FUJITA Tomonori
2009-01-05 14:36 ` [PATCH 01/13] add map/unmap_single_attr and map/unmap_sg_attr to struct dma_mapping_ops FUJITA Tomonori
2009-01-05 14:36   ` [PATCH 02/13] add dma_mapping_ops for SBA IOMMU FUJITA Tomonori
2009-01-05 14:36     ` [PATCH 03/13] add dma_mapping_ops for SWIOTLB and " FUJITA Tomonori
2009-01-05 14:36       ` [PATCH 04/13] add dma_mapping_ops for intel-iommu FUJITA Tomonori
2009-01-05 14:36         ` [PATCH 05/13] add dma_mapping_ops for SGI Altix FUJITA Tomonori
2009-01-05 14:36           ` [PATCH 06/13] add dma_mapping_ops for SWIOTLB FUJITA Tomonori
2009-01-05 14:36             ` [PATCH 07/13] set up dma_ops appropriately FUJITA Tomonori
2009-01-05 14:36               ` [PATCH 08/13] convert the DMA API to use dma_ops FUJITA Tomonori
2009-01-05 14:36                 ` [PATCH 09/13] remove dma operations in struct ia64_machine_vector FUJITA Tomonori
2009-01-05 14:36                   ` [PATCH 10/13] make sn DMA mapping functions static FUJITA Tomonori
2009-01-05 14:36                     ` [PATCH 11/13] add dma_get_ops to struct ia64_machine_vector FUJITA Tomonori
2009-01-05 14:36                       ` [PATCH 12/13] remove hwsw_dma_ops FUJITA Tomonori
2009-01-05 14:36                         ` [PATCH 13/13] make sba DMA mapping functions static FUJITA Tomonori
2009-01-07 19:56 ` [PATCH 0/13] IA64: unifying ways to handle multiple sets of dma mapping ops Sam Ravnborg
2009-01-08  1:55   ` FUJITA Tomonori
