* [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage
@ 2014-05-06 22:48 Bjorn Helgaas
  2014-05-06 22:48 ` [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction Bjorn Helgaas
From: Bjorn Helgaas @ 2014-05-06 22:48 UTC (permalink / raw)
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci, David Woodhouse

I thought this was going to be a simple doc clarification patch, but
it's suffered a bit of feature creep.

The main point is that the DMA API uses "dma_addr_t" as the type for
addresses to be programmed into devices for doing DMA, but the
documentation sometimes refers to these as "bus addresses" and sometimes as
"physical addresses."  So I tried to use "bus address" consistently.

I added a few comments about what dma_addr_t is.  These are likely the most
controversial changes.  For example:

  A dma_addr_t can hold any valid DMA or bus address for the platform.
  It can be given to a device to use as a DMA source or target, or it
  may appear on the bus when a cpu performs programmed I/O.  A cpu
  cannot reference a dma_addr_t directly because there may be
  translation between its physical address space and the bus address
  space.

  This type can hold any valid DMA or bus address for the platform and
  should be used everywhere you hold a DMA address returned from the DMA
  mapping functions or a bus address read from a device register such as a
  PCI BAR.
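
As an illustration of what that comment is getting at, here is a toy userspace model (not kernel code; the fixed BRIDGE_OFFSET and the helper names are invented for illustration, and many platforms use an identity mapping or an IOMMU instead of a fixed offset):

```c
#include <stdint.h>

/* Toy model of a bridge that translates between the CPU physical
 * address space and the bus address space by a fixed offset. */
typedef uint64_t phys_addr_t;
typedef uint64_t dma_addr_t;

#define BRIDGE_OFFSET 0x80000000ULL	/* invented for illustration */

/* Conceptually what a DMA mapping function hands back: an address
 * the device can use, which need not equal the CPU physical address. */
static dma_addr_t phys_to_bus(phys_addr_t phys)
{
	return phys + BRIDGE_OFFSET;
}

/* Why a CPU cannot reference a dma_addr_t directly: it first has
 * to be translated back into the CPU physical address space. */
static phys_addr_t bus_to_phys(dma_addr_t bus)
{
	return bus - BRIDGE_OFFSET;
}
```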

I also made minor capitalization, punctuation, and wording changes for
consistency and clarity.

Secondly, dma_declare_coherent_memory() takes two addresses for a region of
memory: a "dma_addr_t bus_addr" and a "dma_addr_t device_addr".  I thought
this was misleading, so the second patch converts "bus_addr" to a
"phys_addr_t phys_addr" since it should be a CPU physical address suitable
for ioremapping.

The third patch fixes up one dma_declare_coherent_memory() caller to
convert a PCI bus address to a CPU physical address (only a theoretical
issue, since they are identical on the affected platform).
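
The conversion itself is a one-line address calculation.  Sketched standalone (the helper name and offset constant are invented; on the affected platform the offset is zero, which is why the bug was only theoretical there):

```c
#include <stdint.h>

typedef uint64_t phys_addr_t;
typedef uint64_t dma_addr_t;

/* Hypothetical fixed offset between PCI bus addresses and CPU
 * physical addresses.  On the platform patch 3 touches, the two
 * address spaces are identical, so the offset is zero. */
#define PCI_BUS_OFFSET 0x0ULL

/* dma_declare_coherent_memory() now wants a CPU physical address
 * (suitable for ioremapping), so a bus address read from a device
 * must be converted before being passed in. */
static phys_addr_t pci_bus_to_phys(dma_addr_t bus_addr)
{
	return bus_addr - PCI_BUS_OFFSET;
}
```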

Parts of the IOMMU API use dma_addr_t for bus addresses (also known as
IOVAs), but other parts use unsigned long, so the fourth patch uses
dma_addr_t more consistently.
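
A toy model of what an IOVA is (the single-level table and the toy_* names are invented; the real iommu_map() and iommu_iova_to_phys() also take a domain, a size, and protection flags):

```c
#include <stdint.h>

typedef uint64_t phys_addr_t;
typedef uint64_t dma_addr_t;

#define PAGE_SHIFT 12
#define NR_PAGES   16

/* Toy single-level IOMMU page table, indexed by IOVA page number;
 * an entry holds the physical page address (0 = unmapped). */
static phys_addr_t pt[NR_PAGES];

/* An IOVA is a bus address a device emits, so dma_addr_t (rather
 * than unsigned long) is the natural type for it. */
static int toy_iommu_map(dma_addr_t iova, phys_addr_t paddr)
{
	uint64_t idx = iova >> PAGE_SHIFT;

	if (idx >= NR_PAGES)
		return -1;
	pt[idx] = paddr;
	return 0;
}

static phys_addr_t toy_iova_to_phys(dma_addr_t iova)
{
	return pt[iova >> PAGE_SHIFT] + (iova & ((1ULL << PAGE_SHIFT) - 1));
}
```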

The last patch is a trivial cleanup of exynos_iommu_ops to be consistent
with other iommu_ops structures.
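
For reference, that cleanup relies on the fact that in C a function name decays to a pointer to the function, so the "&" is redundant (toy_iommu_ops and map_stub below are invented stand-ins for the real structures):

```c
/* Invented stand-in for the real struct iommu_ops. */
struct toy_iommu_ops {
	int (*map)(void);
};

static int map_stub(void)
{
	return 0;
}

/* Both initializers store the same pointer: a function designator
 * decays to its address, so "&map_stub" and "map_stub" are
 * equivalent.  Patch 5 just drops the "&" so the table reads the
 * same as the other iommu_ops initializers in the tree. */
static const struct toy_iommu_ops with_ampersand = { .map = &map_stub };
static const struct toy_iommu_ops without       = { .map = map_stub  };
```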

---

Bjorn Helgaas (5):
      DMA-API: Clarify physical/bus address distinction
      DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t
      sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory()
      iommu: Use dma_addr_t for IOVA arguments
      iommu/exynos: Remove unnecessary "&" from function pointers


 Documentation/DMA-API-HOWTO.txt          |  122 +++++++++++--------------
 Documentation/DMA-API.txt                |  149 ++++++++++++++++--------------
 Documentation/DMA-ISA-LPC.txt            |    4 -
 arch/sh/drivers/pci/fixups-dreamcast.c   |    8 +-
 drivers/base/dma-coherent.c              |   10 +-
 drivers/base/dma-mapping.c               |    6 +
 drivers/gpu/drm/msm/msm_iommu.c          |    2 
 drivers/infiniband/hw/usnic/usnic_uiom.c |    2 
 drivers/iommu/amd_iommu.c                |    4 -
 drivers/iommu/arm-smmu.c                 |    4 -
 drivers/iommu/exynos-iommu.c             |   18 ++--
 drivers/iommu/intel-iommu.c              |    4 -
 drivers/iommu/iommu.c                    |    6 +
 drivers/iommu/msm_iommu.c                |    6 +
 drivers/iommu/omap-iommu.c               |    4 -
 drivers/iommu/shmobile-iommu.c           |    4 -
 drivers/iommu/tegra-gart.c               |    4 -
 drivers/iommu/tegra-smmu.c               |    4 -
 drivers/remoteproc/remoteproc_core.c     |    2 
 include/asm-generic/dma-coherent.h       |   13 +--
 include/linux/dma-mapping.h              |   14 ++-
 include/linux/iommu.h                    |   16 ++-
 include/linux/types.h                    |    1 
 23 files changed, 208 insertions(+), 199 deletions(-)


* [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction
  2014-05-06 22:48 [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage Bjorn Helgaas
@ 2014-05-06 22:48 ` Bjorn Helgaas
  2014-05-07  7:37   ` Arnd Bergmann
  2014-05-08  9:24   ` Greg Kroah-Hartman
  2014-05-06 22:48 ` [PATCH v2 2/5] DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t Bjorn Helgaas
From: Bjorn Helgaas @ 2014-05-06 22:48 UTC (permalink / raw)
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci, David Woodhouse

The DMA-API documentation sometimes refers to "physical addresses" when it
really means "bus addresses."  Sometimes these are identical, but they may
be different if the bridge leading to the bus performs address translation.
Update the documentation to use "bus address" when appropriate.

Also, consistently capitalize "DMA", use parens with function names, use
dev_printk() in examples, and reword a few sections for clarity.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
---
 Documentation/DMA-API-HOWTO.txt |  122 ++++++++++++++++------------------
 Documentation/DMA-API.txt       |  140 ++++++++++++++++++++-------------------
 Documentation/DMA-ISA-LPC.txt   |    4 +
 include/linux/dma-mapping.h     |    7 ++
 include/linux/types.h           |    1 
 5 files changed, 140 insertions(+), 134 deletions(-)

diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
index 5e983031cc11..ad07f89974f9 100644
--- a/Documentation/DMA-API-HOWTO.txt
+++ b/Documentation/DMA-API-HOWTO.txt
@@ -9,16 +9,14 @@ This is a guide to device driver writers on how to use the DMA API
 with example pseudo-code.  For a concise description of the API, see
 DMA-API.txt.
 
-Most of the 64bit platforms have special hardware that translates bus
+Most 64-bit platforms have special IOMMU hardware that translates bus
 addresses (DMA addresses) into physical addresses.  This is similar to
 how page tables and/or a TLB translates virtual addresses to physical
-addresses on a CPU.  This is needed so that e.g. PCI devices can
-access with a Single Address Cycle (32bit DMA address) any page in the
-64bit physical address space.  Previously in Linux those 64bit
+addresses on a CPU.  This is needed so that, e.g., PCI devices can
+access with a Single Address Cycle (32-bit DMA address) any page in the
+64-bit physical address space.  Previously in Linux those 64-bit
 platforms had to set artificial limits on the maximum RAM size in the
-system, so that the virt_to_bus() static scheme works (the DMA address
-translation tables were simply filled on bootup to map each bus
-address to the physical page __pa(bus_to_virt())).
+system so devices could address all physical memory.
 
 So that Linux can use the dynamic DMA mapping, it needs some help from the
 drivers, namely it has to take into account that DMA addresses should be
@@ -30,16 +28,16 @@ hardware exists.
 
 Note that the DMA API works with any bus independent of the underlying
 microprocessor architecture. You should use the DMA API rather than
-the bus specific DMA API (e.g. pci_dma_*).
+the bus-specific DMA API (e.g. pci_dma_*).
 
 First of all, you should make sure
 
 #include <linux/dma-mapping.h>
 
-is in your driver. This file will obtain for you the definition of the
-dma_addr_t (which can hold any valid DMA address for the platform)
-type which should be used everywhere you hold a DMA (bus) address
-returned from the DMA mapping functions.
+is in your driver, which provides the definition of dma_addr_t.  This type
+can hold any valid DMA or bus address for the platform and should be used
+everywhere you hold a DMA address returned from the DMA mapping functions
+or a bus address read from a device register such as a PCI BAR.
 
 			 What memory is DMA'able?
 
@@ -123,9 +121,9 @@ Here, dev is a pointer to the device struct of your device, and mask
 is a bit mask describing which bits of an address your device
 supports.  It returns zero if your card can perform DMA properly on
 the machine given the address mask you provided.  In general, the
-device struct of your device is embedded in the bus specific device
-struct of your device.  For example, a pointer to the device struct of
-your PCI device is pdev->dev (pdev is a pointer to the PCI device
+device struct of your device is embedded in the bus-specific device
+struct of your device.  For example, &pdev->dev is a pointer to the
+device struct of a PCI device (pdev is a pointer to the PCI device
 struct of your device).
 
 If it returns non-zero, your device cannot perform DMA properly on
@@ -147,8 +145,7 @@ exactly why.
 The standard 32-bit addressing device would do something like this:
 
 	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
@@ -170,8 +167,7 @@ all 64-bits when accessing streaming DMA:
 	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
 		using_dac = 0;
 	} else {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
@@ -187,8 +183,7 @@ the case would look like this:
 		using_dac = 0;
 		consistent_using_dac = 0;
 	} else {
-		printk(KERN_WARNING
-		       "mydev: No suitable DMA available.\n");
+		dev_warn(dev, "mydev: No suitable DMA available\n");
 		goto ignore_this_device;
 	}
 
@@ -201,8 +196,7 @@ Finally, if your device can only drive the low 24-bits of
 address you might do something like:
 
 	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
-		printk(KERN_WARNING
-		       "mydev: 24-bit DMA addressing not available.\n");
+		dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
 		goto ignore_this_device;
 	}
 
@@ -232,14 +226,14 @@ Here is pseudo-code showing how this might be done:
 		card->playback_enabled = 1;
 	} else {
 		card->playback_enabled = 0;
-		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
+		dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
 		       card->name);
 	}
 	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
 		card->record_enabled = 1;
 	} else {
 		card->record_enabled = 0;
-		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
+		dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
 		       card->name);
 	}
 
@@ -331,7 +325,7 @@ context with the GFP_ATOMIC flag.
 Size is the length of the region you want to allocate, in bytes.
 
 This routine will allocate RAM for that region, so it acts similarly to
-__get_free_pages (but takes size instead of a page order).  If your
+__get_free_pages() (but takes size instead of a page order).  If your
 driver needs regions sized smaller than a page, you may prefer using
 the dma_pool interface, described below.
 
@@ -343,11 +337,11 @@ the consistent DMA mask has been explicitly changed via
 dma_set_coherent_mask().  This is true of the dma_pool interface as
 well.
 
-dma_alloc_coherent returns two values: the virtual address which you
+dma_alloc_coherent() returns two values: the virtual address which you
 can use to access it from the CPU and dma_handle which you pass to the
 card.
 
-The cpu return address and the DMA bus master address are both
+The CPU virtual address and the DMA bus address are both
 guaranteed to be aligned to the smallest PAGE_SIZE order which
 is greater than or equal to the requested size.  This invariant
 exists (for example) to guarantee that if you allocate a chunk
@@ -359,13 +353,13 @@ To unmap and free such a DMA region, you call:
 	dma_free_coherent(dev, size, cpu_addr, dma_handle);
 
 where dev, size are the same as in the above call and cpu_addr and
-dma_handle are the values dma_alloc_coherent returned to you.
+dma_handle are the values dma_alloc_coherent() returned to you.
 This function may not be called in interrupt context.
 
 If your driver needs lots of smaller memory regions, you can write
-custom code to subdivide pages returned by dma_alloc_coherent,
+custom code to subdivide pages returned by dma_alloc_coherent(),
 or you can use the dma_pool API to do that.  A dma_pool is like
-a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
+a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
 Also, it understands common hardware constraints for alignment,
 like queue heads needing to be aligned on N byte boundaries.
 
@@ -381,29 +375,29 @@ type of data is "align" (which is expressed in bytes, and must be a
 power of two).  If your device has no boundary crossing restrictions,
 pass 0 for alloc; passing 4096 says memory allocated from this pool
 must not cross 4KByte boundaries (but at that time it may be better to
-go for dma_alloc_coherent directly instead).
+use dma_alloc_coherent() directly instead).
 
-Allocate memory from a dma pool like this:
+Allocate memory from a DMA pool like this:
 
 	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
 
 flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
-holding SMP locks), SLAB_ATOMIC otherwise.  Like dma_alloc_coherent,
+holding SMP locks), SLAB_ATOMIC otherwise.  Like dma_alloc_coherent(),
 this returns two values, cpu_addr and dma_handle.
 
 Free memory that was allocated from a dma_pool like this:
 
 	dma_pool_free(pool, cpu_addr, dma_handle);
 
-where pool is what you passed to dma_pool_alloc, and cpu_addr and
-dma_handle are the values dma_pool_alloc returned. This function
+where pool is what you passed to dma_pool_alloc(), and cpu_addr and
+dma_handle are the values dma_pool_alloc() returned. This function
 may be called in interrupt context.
 
 Destroy a dma_pool by calling:
 
 	dma_pool_destroy(pool);
 
-Make sure you've called dma_pool_free for all memory allocated
+Make sure you've called dma_pool_free() for all memory allocated
 from a pool before you destroy the pool. This function may not
 be called in interrupt context.
 
@@ -418,7 +412,7 @@ one of the following values:
  DMA_FROM_DEVICE
  DMA_NONE
 
-One should provide the exact DMA direction if you know it.
+You should provide the exact DMA direction if you know it.
 
 DMA_TO_DEVICE means "from main memory to the device"
 DMA_FROM_DEVICE means "from the device to main memory"
@@ -489,14 +483,14 @@ and to unmap it:
 	dma_unmap_single(dev, dma_handle, size, direction);
 
 You should call dma_mapping_error() as dma_map_single() could fail and return
-error. Not all dma implementations support dma_mapping_error() interface.
+error. Not all DMA implementations support the dma_mapping_error() interface.
 However, it is a good practice to call dma_mapping_error() interface, which
 will invoke the generic mapping error check interface. Doing so will ensure
-that the mapping code will work correctly on all dma implementations without
+that the mapping code will work correctly on all DMA implementations without
 any dependency on the specifics of the underlying implementation. Using the
 returned address without checking for errors could result in failures ranging
 from panics to silent data corruption. A couple of examples of incorrect ways
-to check for errors that make assumptions about the underlying dma
+to check for errors that make assumptions about the underlying DMA
 implementation are as follows and these are applicable to dma_map_page() as
 well.
 
@@ -516,12 +510,12 @@ Incorrect example 2:
 		goto map_error;
 	}
 
-You should call dma_unmap_single when the DMA activity is finished, e.g.
+You should call dma_unmap_single() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.
 
-Using cpu pointers like this for single mappings has a disadvantage,
+Using cpu pointers like this for single mappings has a disadvantage:
 you cannot reference HIGHMEM memory in this way.  Thus, there is a
-map/unmap interface pair akin to dma_{map,unmap}_single.  These
+map/unmap interface pair akin to dma_{map,unmap}_single().  These
 interfaces deal with page/offset pairs instead of cpu pointers.
 Specifically:
 
@@ -550,7 +544,7 @@ Here, "offset" means byte offset within the given page.
 You should call dma_mapping_error() as dma_map_page() could fail and return
 error as outlined under the dma_map_single() discussion.
 
-You should call dma_unmap_page when the DMA activity is finished, e.g.
+You should call dma_unmap_page() when the DMA activity is finished, e.g.,
 from the interrupt which told you that the DMA transfer is done.
 
 With scatterlists, you map a region gathered from several regions by:
@@ -588,18 +582,16 @@ PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
 	      it should _NOT_ be the 'count' value _returned_ from the
               dma_map_sg call.
 
-Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
-counterpart, because the bus address space is a shared resource (although
-in some ports the mapping is per each BUS so less devices contend for the
-same bus address space) and you could render the machine unusable by eating
-all bus addresses.
+Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
+counterpart, because the bus address space is a shared resource and
+you could render the machine unusable by consuming all bus addresses.
 
 If you need to use the same streaming DMA region multiple times and touch
 the data in between the DMA transfers, the buffer needs to be synced
-properly in order for the cpu and device to see the most uptodate and
+properly in order for the cpu and device to see the most up-to-date and
 correct copy of the DMA buffer.
 
-So, firstly, just map it with dma_map_{single,sg}, and after each DMA
+So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
 transfer call either:
 
 	dma_sync_single_for_cpu(dev, dma_handle, size, direction);
@@ -623,9 +615,9 @@ or:
 as appropriate.
 
 After the last DMA transfer call one of the DMA unmap routines
-dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
-call till dma_unmap_*, then you don't have to call the dma_sync_*
-routines at all.
+dma_unmap_{single,sg}(). If you don't touch the data from the first
+dma_map_*() call till dma_unmap_*(), then you don't have to call the
+dma_sync_*() routines at all.
 
 Here is pseudo code which shows a situation in which you would need
 to use the dma_sync_*() interfaces.
@@ -690,12 +682,12 @@ to use the dma_sync_*() interfaces.
 		}
 	}
 
-Drivers converted fully to this interface should not use virt_to_bus any
-longer, nor should they use bus_to_virt. Some drivers have to be changed a
-little bit, because there is no longer an equivalent to bus_to_virt in the
+Drivers converted fully to this interface should not use virt_to_bus() any
+longer, nor should they use bus_to_virt(). Some drivers have to be changed a
+little bit, because there is no longer an equivalent to bus_to_virt() in the
 dynamic DMA mapping scheme - you have to always store the DMA addresses
-returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
-calls (dma_map_sg stores them in the scatterlist itself if the platform
+returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
+calls (dma_map_sg() stores them in the scatterlist itself if the platform
 supports dynamic DMA mapping in hardware) in your driver structures and/or
 in the card registers.
 
@@ -709,9 +701,9 @@ as it is impossible to correctly support them.
 DMA address space is limited on some architectures and an allocation
 failure can be determined by:
 
-- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0
+- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0
 
-- checking the returned dma_addr_t of dma_map_single and dma_map_page
+- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
   by using dma_mapping_error():
 
 	dma_addr_t dma_handle;
@@ -794,7 +786,7 @@ Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
 		dma_unmap_single(array[i].dma_addr);
 	}
 
-Networking drivers must call dev_kfree_skb to free the socket buffer
+Networking drivers must call dev_kfree_skb() to free the socket buffer
 and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
 (ndo_start_xmit). This means that the socket buffer is just dropped in
 the failure case.
@@ -831,7 +823,7 @@ transform some example code.
 		DEFINE_DMA_UNMAP_LEN(len);
 	};
 
-2) Use dma_unmap_{addr,len}_set to set these values.
+2) Use dma_unmap_{addr,len}_set() to set these values.
    Example, before:
 
 	ringp->mapping = FOO;
@@ -842,7 +834,7 @@ transform some example code.
 	dma_unmap_addr_set(ringp, mapping, FOO);
 	dma_unmap_len_set(ringp, len, BAR);
 
-3) Use dma_unmap_{addr,len} to access these values.
+3) Use dma_unmap_{addr,len}() to access these values.
    Example, before:
 
 	dma_unmap_single(dev, ringp->mapping, ringp->len,
diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index e865279cec58..1699e26c6284 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -4,22 +4,27 @@
         James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
 
 This document describes the DMA API.  For a more gentle introduction
-of the API (and actual examples) see
-Documentation/DMA-API-HOWTO.txt.
+of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.
 
-This API is split into two pieces.  Part I describes the API.  Part II
-describes the extensions to the API for supporting non-consistent
-memory machines.  Unless you know that your driver absolutely has to
-support non-consistent platforms (this is usually only legacy
-platforms) you should only use the API described in part I.
+This API is split into two pieces.  Part I describes the basic API.
+Part II describes extensions for supporting non-consistent memory
+machines.  Unless you know that your driver absolutely has to support
+non-consistent platforms (this is usually only legacy platforms) you
+should only use the API described in part I.
 
 Part I - dma_ API
 -------------------------------------
 
-To get the dma_ API, you must #include <linux/dma-mapping.h>
+To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
+provides dma_addr_t and the interfaces described below.
 
+A dma_addr_t can hold any valid DMA or bus address for the platform.  It
+can be given to a device to use as a DMA source or target, or it may appear
+on the bus when a cpu performs programmed I/O.  A cpu cannot reference a
+dma_addr_t directly because there may be translation between its physical
+address space and the bus address space.
 
-Part Ia - Using large dma-coherent buffers
+Part Ia - Using large DMA-coherent buffers
 ------------------------------------------
 
 void *
@@ -33,20 +38,21 @@ to make sure to flush the processor's write buffers before telling
 devices to read that memory.)
 
 This routine allocates a region of <size> bytes of consistent memory.
-It also returns a <dma_handle> which may be cast to an unsigned
-integer the same width as the bus and used as the physical address
-base of the region.
 
-Returns: a pointer to the allocated region (in the processor's virtual
+It returns a pointer to the allocated region (in the processor's virtual
 address space) or NULL if the allocation failed.
 
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the bus address base of
+the region.
+
 Note: consistent memory can be expensive on some platforms, and the
 minimum allocation length may be as big as a page, so you should
 consolidate your requests for consistent memory as much as possible.
 The simplest way to do that is to use the dma_pool calls (see below).
 
-The flag parameter (dma_alloc_coherent only) allows the caller to
-specify the GFP_ flags (see kmalloc) for the allocation (the
+The flag parameter (dma_alloc_coherent() only) allows the caller to
+specify the GFP_ flags (see kmalloc()) for the allocation (the
 implementation may choose to ignore flags that affect the location of
 the returned memory, like GFP_DMA).
 
@@ -61,24 +67,24 @@ void
 dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
 			   dma_addr_t dma_handle)
 
-Free the region of consistent memory you previously allocated.  dev,
-size and dma_handle must all be the same as those passed into the
-consistent allocate.  cpu_addr must be the virtual address returned by
-the consistent allocate.
+Free a region of consistent memory you previously allocated.  dev,
+size and dma_handle must all be the same as those passed into
+dma_alloc_coherent().  cpu_addr must be the virtual address returned by
+the dma_alloc_coherent().
 
 Note that unlike their sibling allocation calls, these routines
 may only be called with IRQs enabled.
 
 
-Part Ib - Using small dma-coherent buffers
+Part Ib - Using small DMA-coherent buffers
 ------------------------------------------
 
 To get this part of the dma_ API, you must #include <linux/dmapool.h>
 
-Many drivers need lots of small dma-coherent memory regions for DMA
+Many drivers need lots of small DMA-coherent memory regions for DMA
 descriptors or I/O buffers.  Rather than allocating in units of a page
 or more using dma_alloc_coherent(), you can use DMA pools.  These work
-much like a struct kmem_cache, except that they use the dma-coherent allocator,
+much like a struct kmem_cache, except that they use the DMA-coherent allocator,
 not __get_free_pages().  Also, they understand common hardware constraints
 for alignment, like queue heads needing to be aligned on N-byte boundaries.
 
@@ -87,7 +93,7 @@ for alignment, like queue heads needing to be aligned on N-byte boundaries.
 	dma_pool_create(const char *name, struct device *dev,
 			size_t size, size_t align, size_t alloc);
 
-The pool create() routines initialize a pool of dma-coherent buffers
+dma_pool_create() initializes a pool of DMA-coherent buffers
 for use with a given device.  It must be called in a context which
 can sleep.
 
@@ -102,25 +108,26 @@ from this pool must not cross 4KByte boundaries.
 	void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
 			dma_addr_t *dma_handle);
 
-This allocates memory from the pool; the returned memory will meet the size
-and alignment requirements specified at creation time.  Pass GFP_ATOMIC to
-prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
-pass GFP_KERNEL to allow blocking.  Like dma_alloc_coherent(), this returns
-two values:  an address usable by the cpu, and the dma address usable by the
-pool's device.
+This allocates memory from the pool; the returned memory will meet the
+size and alignment requirements specified at creation time.  Pass
+GFP_ATOMIC to prevent blocking, or if it's permitted (not
+in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
+blocking.  Like dma_alloc_coherent(), this returns two values:  an
+address usable by the cpu, and the DMA address usable by the pool's
+device.
 
 
 	void dma_pool_free(struct dma_pool *pool, void *vaddr,
 			dma_addr_t addr);
 
 This puts memory back into the pool.  The pool is what was passed to
-the pool allocation routine; the cpu (vaddr) and dma addresses are what
+dma_pool_alloc(); the cpu (vaddr) and DMA addresses are what
 were returned when that routine allocated the memory being freed.
 
 
 	void dma_pool_destroy(struct dma_pool *pool);
 
-The pool destroy() routines free the resources of the pool.  They must be
+dma_pool_destroy() frees the resources of the pool.  It must be
 called in a context which can sleep.  Make sure you've freed all allocated
 memory back to the pool before you destroy it.
 
@@ -187,9 +194,9 @@ dma_map_single(struct device *dev, void *cpu_addr, size_t size,
 		      enum dma_data_direction direction)
 
 Maps a piece of processor virtual memory so it can be accessed by the
-device and returns the physical handle of the memory.
+device and returns the bus address of the memory.
 
-The direction for both api's may be converted freely by casting.
+The direction for both APIs may be converted freely by casting.
 However the dma_ API uses a strongly typed enumerator for its
 direction:
 
@@ -198,31 +205,30 @@ DMA_TO_DEVICE		data is going from the memory to the device
 DMA_FROM_DEVICE		data is coming from the device to the memory
 DMA_BIDIRECTIONAL	direction isn't known
 
-Notes:  Not all memory regions in a machine can be mapped by this
-API.  Further, regions that appear to be physically contiguous in
-kernel virtual space may not be contiguous as physical memory.  Since
-this API does not provide any scatter/gather capability, it will fail
-if the user tries to map a non-physically contiguous piece of memory.
-For this reason, it is recommended that memory mapped by this API be
-obtained only from sources which guarantee it to be physically contiguous
-(like kmalloc).
-
-Further, the physical address of the memory must be within the
-dma_mask of the device (the dma_mask represents a bit mask of the
-addressable region for the device.  I.e., if the physical address of
-the memory anded with the dma_mask is still equal to the physical
-address, then the device can perform DMA to the memory).  In order to
+Notes:  Not all memory regions in a machine can be mapped by this API.
+Further, contiguous kernel virtual space may not be contiguous as
+physical memory.  Since this API does not provide any scatter/gather
+capability, it will fail if the user tries to map a non-physically
+contiguous piece of memory.  For this reason, memory to be mapped by
+this API should be obtained from sources which guarantee it to be
+physically contiguous (like kmalloc).
+
+Further, the bus address of the memory must be within the
+dma_mask of the device (the dma_mask is a bit mask of the
+addressable region for the device, i.e., if the bus address of
+the memory ANDed with the dma_mask is still equal to the bus
+address, then the device can perform DMA to the memory).  To
 ensure that the memory allocated by kmalloc is within the dma_mask,
 the driver may specify various platform-dependent flags to restrict
-the physical memory range of the allocation (e.g. on x86, GFP_DMA
-guarantees to be within the first 16Mb of available physical memory,
+the bus address range of the allocation (e.g., on x86, GFP_DMA
+guarantees to be within the first 16MB of available bus addresses,
 as required by ISA devices).
 
 Note also that the above constraints on physical contiguity and
 dma_mask may not apply if the platform has an IOMMU (a device which
-supplies a physical to virtual mapping between the I/O memory bus and
-the device).  However, to be portable, device driver writers may *not*
-assume that such an IOMMU exists.
+maps an I/O bus address to a physical memory address).  However, to be
+portable, device driver writers may *not* assume that such an IOMMU
+exists.
 
 Warnings:  Memory coherency operates at a granularity called the cache
 line width.  In order for memory mapped by this API to operate
@@ -281,9 +287,9 @@ cache width is.
 int
 dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 
-In some circumstances dma_map_single and dma_map_page will fail to create
+In some circumstances dma_map_single() and dma_map_page() will fail to create
 a mapping. A driver can check for these errors by testing the returned
-dma address with dma_mapping_error(). A non-zero return value means the mapping
+DMA address with dma_mapping_error(). A non-zero return value means the mapping
 could not be created and the driver should take appropriate action (e.g.
 reduce current DMA mapping usage or delay and try again later).
 
@@ -291,7 +297,7 @@ reduce current DMA mapping usage or delay and try again later).
 	dma_map_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction direction)
 
-Returns: the number of physical segments mapped (this may be shorter
+Returns: the number of bus address segments mapped (this may be shorter
 than <nents> passed in if some elements of the scatter/gather list are
 physically or virtually adjacent and an IOMMU maps them with a single
 entry).
@@ -299,7 +305,7 @@ entry).
 Please note that the sg cannot be mapped again if it has been mapped once.
 The mapping process is allowed to destroy information in the sg.
 
-As with the other mapping interfaces, dma_map_sg can fail. When it
+As with the other mapping interfaces, dma_map_sg() can fail. When it
 does, 0 is returned and a driver must take appropriate action. It is
 critical that the driver do something, in the case of a block driver
 aborting the request or even oopsing is better than doing nothing and
@@ -335,7 +341,7 @@ must be the same as those and passed in to the scatter/gather mapping
 API.
 
 Note: <nents> must be the number you passed in, *not* the number of
-physical entries returned.
+bus address entries returned.
 
 void
 dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
@@ -391,10 +397,10 @@ The four functions above are just like the counterpart functions
 without the _attrs suffixes, except that they pass an optional
 struct dma_attrs*.
 
-struct dma_attrs encapsulates a set of "dma attributes". For the
+struct dma_attrs encapsulates a set of "DMA attributes". For the
 definition of struct dma_attrs see linux/dma-attrs.h.
 
-The interpretation of dma attributes is architecture-specific, and
+The interpretation of DMA attributes is architecture-specific, and
 each attribute should be documented in Documentation/DMA-attributes.txt.
 
 If struct dma_attrs* is NULL, the semantics of each of these
@@ -458,7 +464,7 @@ Note: where the platform can return consistent memory, it will
 guarantee that the sync points become nops.
 
 Warning:  Handling non-consistent memory is a real pain.  You should
-only ever use this API if you positively know your driver will be
+only use this API if you positively know your driver will be
 required to work on one of the rare (usually non-PCI) architectures
 that simply cannot make consistent memory.
 
@@ -496,26 +502,26 @@ dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
 			    dma_addr_t device_addr, size_t size, int
 			    flags)
 
-Declare region of memory to be handed out by dma_alloc_coherent when
+Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.
 
 bus_addr is the physical address to which the memory is currently
 assigned in the bus responding region (this will be used by the
 platform to perform the mapping).
 
-device_addr is the physical address the device needs to be programmed
+device_addr is the bus address the device needs to be programmed
 with actually to address this memory (this will be handed out as the
 dma_addr_t in dma_alloc_coherent()).
 
 size is the size of the area (must be multiples of PAGE_SIZE).
 
-flags can be or'd together and are:
+flags can be ORed together and are:
 
 DMA_MEMORY_MAP - request that the memory returned from
 dma_alloc_coherent() be directly writable.
 
 DMA_MEMORY_IO - request that the memory returned from
-dma_alloc_coherent() be addressable using read/write/memcpy_toio etc.
+dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.
 
 One or both of these flags must be present.
 
@@ -572,7 +578,7 @@ region is occupied.
 Part III - Debug drivers use of the DMA-API
 -------------------------------------------
 
-The DMA-API as described above as some constraints. DMA addresses must be
+The DMA-API as described above has some constraints. DMA addresses must be
 released with the corresponding function with the same size for example. With
 the advent of hardware IOMMUs it becomes more and more important that drivers
 do not violate those constraints. In the worst case such a violation can
@@ -690,11 +696,11 @@ architectural default.
 void debug_dmap_mapping_error(struct device *dev, dma_addr_t dma_addr);
 
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
-to check dma mapping errors on addresses returned by dma_map_single() and
+to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
 debug_dma_map_page() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
-routines to enable dma mapping error check debugging.
+routines to enable DMA mapping error check debugging.
 
diff --git a/Documentation/DMA-ISA-LPC.txt b/Documentation/DMA-ISA-LPC.txt
index e767805b4182..b1a19835e907 100644
--- a/Documentation/DMA-ISA-LPC.txt
+++ b/Documentation/DMA-ISA-LPC.txt
@@ -16,7 +16,7 @@ To do ISA style DMA you need to include two headers:
 #include <asm/dma.h>
 
 The first is the generic DMA API used to convert virtual addresses to
-physical addresses (see Documentation/DMA-API.txt for details).
+bus addresses (see Documentation/DMA-API.txt for details).
 
 The second contains the routines specific to ISA DMA transfers. Since
 this is not present on all platforms make sure you construct your
@@ -50,7 +50,7 @@ early as possible and not release it until the driver is unloaded.)
 Part III - Address translation
 ------------------------------
 
-To translate the virtual address to a physical use the normal DMA
+To translate the virtual address to a bus address, use the normal DMA
 API. Do _not_ use isa_virt_to_phys() even though it does the same
 thing. The reason for this is that the function isa_virt_to_phys()
 will require a Kconfig dependency to ISA, not just ISA_DMA_API which
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index fd4aee29ad10..0808d3e7e218 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -8,6 +8,13 @@
 #include <linux/dma-direction.h>
 #include <linux/scatterlist.h>
 
+/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform.
+ * It can be given to a device to use as a DMA source or target, or it may
+ * appear on the bus when a CPU performs programmed I/O.  A CPU cannot
+ * reference a dma_addr_t directly because there may be translation between
+ * its physical address space and the bus address space.
+ */
 struct dma_map_ops {
 	void* (*alloc)(struct device *dev, size_t size,
 				dma_addr_t *dma_handle, gfp_t gfp,
diff --git a/include/linux/types.h b/include/linux/types.h
index 4d118ba11349..a0bb7048687f 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -142,6 +142,7 @@ typedef unsigned long blkcnt_t;
 #define pgoff_t unsigned long
 #endif
 
+/* A dma_addr_t can hold any valid DMA or bus address for the platform */
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
 typedef u64 dma_addr_t;
 #else


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 2/5] DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t
  2014-05-06 22:48 [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage Bjorn Helgaas
  2014-05-06 22:48 ` [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction Bjorn Helgaas
@ 2014-05-06 22:48 ` Bjorn Helgaas
  2014-05-07  7:38   ` Arnd Bergmann
  2014-05-06 22:48 ` [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory() Bjorn Helgaas
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-06 22:48 UTC (permalink / raw)
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci, David Woodhouse

dma_declare_coherent_memory() takes two addresses for a region of memory: a
"bus_addr" and a "device_addr".  I think the intent is that "bus_addr" is
the physical address a *CPU* would use to access the region, and
"device_addr" is the bus address the *device* would use to address the
region.

Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
Most callers already supply a phys_addr_t for this argument.  The others
supply a 32-bit integer (a constant, unsigned int, or __u32) and need no
change.

Use "unsigned long", not phys_addr_t, to hold PFNs.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
---
 Documentation/DMA-API.txt          |    9 ++++-----
 drivers/base/dma-coherent.c        |   10 +++++-----
 drivers/base/dma-mapping.c         |    6 +++---
 include/asm-generic/dma-coherent.h |   13 +++++--------
 include/linux/dma-mapping.h        |    7 ++++---
 5 files changed, 21 insertions(+), 24 deletions(-)

diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
index 1699e26c6284..de4671f81395 100644
--- a/Documentation/DMA-API.txt
+++ b/Documentation/DMA-API.txt
@@ -498,19 +498,18 @@ continuing on for size.  Again, you *must* observe the cache line
 boundaries when doing this.
 
 int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 			    dma_addr_t device_addr, size_t size, int
 			    flags)
 
 Declare region of memory to be handed out by dma_alloc_coherent() when
 it's asked for coherent memory for this device.
 
-bus_addr is the physical address to which the memory is currently
-assigned in the bus responding region (this will be used by the
-platform to perform the mapping).
+phys_addr is the cpu physical address to which the memory is currently
+assigned (this will be ioremapped so the cpu can access the region).
 
 device_addr is the bus address the device needs to be programmed
-with actually to address this memory (this will be handed out as the
+with to actually address this memory (this will be handed out as the
 dma_addr_t in dma_alloc_coherent()).
 
 size is the size of the area (must be multiples of PAGE_SIZE).
diff --git a/drivers/base/dma-coherent.c b/drivers/base/dma-coherent.c
index bc256b641027..7d6e84a51424 100644
--- a/drivers/base/dma-coherent.c
+++ b/drivers/base/dma-coherent.c
@@ -10,13 +10,13 @@
 struct dma_coherent_mem {
 	void		*virt_base;
 	dma_addr_t	device_base;
-	phys_addr_t	pfn_base;
+	unsigned long	pfn_base;
 	int		size;
 	int		flags;
 	unsigned long	*bitmap;
 };
 
-int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 				dma_addr_t device_addr, size_t size, int flags)
 {
 	void __iomem *mem_base = NULL;
@@ -32,7 +32,7 @@ int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
 
 	/* FIXME: this routine just ignores DMA_MEMORY_INCLUDES_CHILDREN */
 
-	mem_base = ioremap(bus_addr, size);
+	mem_base = ioremap(phys_addr, size);
 	if (!mem_base)
 		goto out;
 
@@ -45,7 +45,7 @@ int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
 
 	dev->dma_mem->virt_base = mem_base;
 	dev->dma_mem->device_base = device_addr;
-	dev->dma_mem->pfn_base = PFN_DOWN(bus_addr);
+	dev->dma_mem->pfn_base = PFN_DOWN(phys_addr);
 	dev->dma_mem->size = pages;
 	dev->dma_mem->flags = flags;
 
@@ -208,7 +208,7 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
 
 		*ret = -ENXIO;
 		if (off < count && user_count <= count - off) {
-			unsigned pfn = mem->pfn_base + start + off;
+			unsigned long pfn = mem->pfn_base + start + off;
 			*ret = remap_pfn_range(vma, vma->vm_start, pfn,
 					       user_count << PAGE_SHIFT,
 					       vma->vm_page_prot);
diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index 0ce39a33b3c2..6cd08e145bfa 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -175,7 +175,7 @@ static void dmam_coherent_decl_release(struct device *dev, void *res)
 /**
  * dmam_declare_coherent_memory - Managed dma_declare_coherent_memory()
  * @dev: Device to declare coherent memory for
- * @bus_addr: Bus address of coherent memory to be declared
+ * @phys_addr: Physical address of coherent memory to be declared
  * @device_addr: Device address of coherent memory to be declared
  * @size: Size of coherent memory to be declared
  * @flags: Flags
@@ -185,7 +185,7 @@ static void dmam_coherent_decl_release(struct device *dev, void *res)
  * RETURNS:
  * 0 on success, -errno on failure.
  */
-int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+int dmam_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 				 dma_addr_t device_addr, size_t size, int flags)
 {
 	void *res;
@@ -195,7 +195,7 @@ int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
 	if (!res)
 		return -ENOMEM;
 
-	rc = dma_declare_coherent_memory(dev, bus_addr, device_addr, size,
+	rc = dma_declare_coherent_memory(dev, phys_addr, device_addr, size,
 					 flags);
 	if (rc == 0)
 		devres_add(dev, res);
diff --git a/include/asm-generic/dma-coherent.h b/include/asm-generic/dma-coherent.h
index 2be8a2dbc868..0297e5875798 100644
--- a/include/asm-generic/dma-coherent.h
+++ b/include/asm-generic/dma-coherent.h
@@ -16,16 +16,13 @@ int dma_mmap_from_coherent(struct device *dev, struct vm_area_struct *vma,
  * Standard interface
  */
 #define ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
-			    dma_addr_t device_addr, size_t size, int flags);
+int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
+				dma_addr_t device_addr, size_t size, int flags);
 
-extern void
-dma_release_declared_memory(struct device *dev);
+void dma_release_declared_memory(struct device *dev);
 
-extern void *
-dma_mark_declared_memory_occupied(struct device *dev,
-				  dma_addr_t device_addr, size_t size);
+void *dma_mark_declared_memory_occupied(struct device *dev,
+					dma_addr_t device_addr, size_t size);
 #else
 #define dma_alloc_from_coherent(dev, size, handle, ret) (0)
 #define dma_release_from_coherent(dev, order, vaddr) (0)
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 0808d3e7e218..3ec7c831e16a 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -193,7 +193,7 @@ static inline int dma_get_cache_alignment(void)
 
 #ifndef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
 static inline int
-dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
 			    dma_addr_t device_addr, size_t size, int flags)
 {
 	return 0;
@@ -224,13 +224,14 @@ extern void *dmam_alloc_noncoherent(struct device *dev, size_t size,
 extern void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 				  dma_addr_t dma_handle);
 #ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
-extern int dmam_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
+extern int dmam_declare_coherent_memory(struct device *dev,
+					phys_addr_t phys_addr,
 					dma_addr_t device_addr, size_t size,
 					int flags);
 extern void dmam_release_declared_memory(struct device *dev);
 #else /* ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY */
 static inline int dmam_declare_coherent_memory(struct device *dev,
-				dma_addr_t bus_addr, dma_addr_t device_addr,
+				phys_addr_t phys_addr, dma_addr_t device_addr,
 				size_t size, gfp_t gfp)
 {
 	return 0;



* [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory()
  2014-05-06 22:48 [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage Bjorn Helgaas
  2014-05-06 22:48 ` [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction Bjorn Helgaas
  2014-05-06 22:48 ` [PATCH v2 2/5] DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t Bjorn Helgaas
@ 2014-05-06 22:48 ` Bjorn Helgaas
  2014-05-07  7:55   ` Arnd Bergmann
  2014-05-06 22:48 ` [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments Bjorn Helgaas
  2014-05-06 22:48 ` [PATCH v2 5/5] iommu/exynos: Remove unnecessary "&" from function pointers Bjorn Helgaas
  4 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-06 22:48 UTC (permalink / raw)
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci, David Woodhouse

dma_declare_coherent_memory() needs both the CPU physical address and the
bus address of the device memory.  They are likely the same on this
platform, but in general we should use pcibios_bus_to_resource() to account
for any address translation done by the PCI host bridge.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Magnus Damm <damm@opensource.se>
CC: linux-sh@vger.kernel.org
---
 arch/sh/drivers/pci/fixups-dreamcast.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/sh/drivers/pci/fixups-dreamcast.c b/arch/sh/drivers/pci/fixups-dreamcast.c
index d6cde700e316..987c788f5a4f 100644
--- a/arch/sh/drivers/pci/fixups-dreamcast.c
+++ b/arch/sh/drivers/pci/fixups-dreamcast.c
@@ -31,6 +31,8 @@
 static void gapspci_fixup_resources(struct pci_dev *dev)
 {
 	struct pci_channel *p = dev->sysdata;
+	struct pci_bus_region region;
+	struct resource res;
 
 	printk(KERN_NOTICE "PCI: Fixing up device %s\n", pci_name(dev));
 
@@ -51,8 +53,12 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
 		/*
 		 * Redirect dma memory allocations to special memory window.
 		 */
+		region.start = GAPSPCI_DMA_BASE;
+		region.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
+		res.flags = IORESOURCE_MEM;
+		pcibios_bus_to_resource(dev->bus, &res, &region);
 		BUG_ON(!dma_declare_coherent_memory(&dev->dev,
-						GAPSPCI_DMA_BASE,
+						res.start,
 						GAPSPCI_DMA_BASE,
 						GAPSPCI_DMA_SIZE,
 						DMA_MEMORY_MAP |



* [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-06 22:48 [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage Bjorn Helgaas
                   ` (2 preceding siblings ...)
  2014-05-06 22:48 ` [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory() Bjorn Helgaas
@ 2014-05-06 22:48 ` Bjorn Helgaas
  2014-05-07  7:58   ` Arnd Bergmann
  2014-05-06 22:48 ` [PATCH v2 5/5] iommu/exynos: Remove unnecessary "&" from function pointers Bjorn Helgaas
  4 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-06 22:48 UTC (permalink / raw)
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci, David Woodhouse

Convert the "iova" arguments of iommu_map(), iommu_unmap(), etc., from
"unsigned long" to dma_addr_t.

bb5547acfcd8 ("iommu/fsl: Make iova dma_addr_t in the iommu_iova_to_phys
API") did this for iommu_iova_to_phys(), but didn't fix the rest of the
IOMMU API.

This changes the following generic functions and function pointer types:

  iommu_map()
  iommu_unmap()
  struct iommu_ops.map
  struct iommu_ops.unmap
  iommu_fault_handler_t
  report_iommu_fault()

and the following implementations and users of them:

  amd_iommu_map(), amd_iommu_unmap()
  arm_smmu_map(), arm_smmu_unmap()
  exynos_iommu_map(), exynos_iommu_unmap()
  intel_iommu_map(), intel_iommu_unmap()
  msm_iommu_map(), msm_iommu_unmap()
  omap_iommu_map(), omap_iommu_unmap()
  shmobile_iommu_map(), shmobile_iommu_unmap()
  gart_iommu_map(), gart_iommu_unmap() (tegra)
  smmu_iommu_map(), smmu_iommu_unmap() (tegra)

  msm_fault_handler()
  usnic_uiom_dma_fault()
  rproc_iommu_fault()

There are many internal places that use "unsigned long", u32, u64, etc.  I
left those alone because they probably use types appropriate for their
hardware.  My intent is to make the generic interfaces consistent.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/gpu/drm/msm/msm_iommu.c          |    2 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c |    2 +-
 drivers/iommu/amd_iommu.c                |    4 ++--
 drivers/iommu/arm-smmu.c                 |    4 ++--
 drivers/iommu/exynos-iommu.c             |    4 ++--
 drivers/iommu/intel-iommu.c              |    4 ++--
 drivers/iommu/iommu.c                    |    6 +++---
 drivers/iommu/msm_iommu.c                |    6 +++---
 drivers/iommu/omap-iommu.c               |    4 ++--
 drivers/iommu/shmobile-iommu.c           |    4 ++--
 drivers/iommu/tegra-gart.c               |    4 ++--
 drivers/iommu/tegra-smmu.c               |    4 ++--
 drivers/remoteproc/remoteproc_core.c     |    2 +-
 include/linux/iommu.h                    |   16 ++++++++--------
 14 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index 92b745986231..efe4aaf3f5ce 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -25,7 +25,7 @@ struct msm_iommu {
 #define to_msm_iommu(x) container_of(x, struct msm_iommu, base)
 
 static int msm_fault_handler(struct iommu_domain *iommu, struct device *dev,
-		unsigned long iova, int flags, void *arg)
+		dma_addr_t iova, int flags, void *arg)
 {
 	DBG("*** fault: iova=%08lx, flags=%d", iova, flags);
 	return 0;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 801a1d6937e4..6c87970a359f 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -67,7 +67,7 @@ static void usnic_uiom_reg_account(struct work_struct *work)
 
 static int usnic_uiom_dma_fault(struct iommu_domain *domain,
 				struct device *dev,
-				unsigned long iova, int flags,
+				dma_addr_t iova, int flags,
 				void *token)
 {
 	usnic_err("Device %s iommu fault domain 0x%pK va 0x%lx flags 0x%x\n",
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index c949520bd196..d1bcf05e5577 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -3393,7 +3393,7 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
 	return ret;
 }
 
-static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
+static int amd_iommu_map(struct iommu_domain *dom, dma_addr_t iova,
 			 phys_addr_t paddr, size_t page_size, int iommu_prot)
 {
 	struct protection_domain *domain = dom->priv;
@@ -3415,7 +3415,7 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	return ret;
 }
 
-static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
+static size_t amd_iommu_unmap(struct iommu_domain *dom, dma_addr_t iova,
 			   size_t page_size)
 {
 	struct protection_domain *domain = dom->priv;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 8b89e33a89fe..69e8246b2241 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1476,7 +1476,7 @@ out_unlock:
 	return ret;
 }
 
-static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
+static int arm_smmu_map(struct iommu_domain *domain, dma_addr_t iova,
 			phys_addr_t paddr, size_t size, int prot)
 {
 	struct arm_smmu_domain *smmu_domain = domain->priv;
@@ -1491,7 +1491,7 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 	return arm_smmu_handle_mapping(smmu_domain, iova, paddr, size, prot);
 }
 
-static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
+static size_t arm_smmu_unmap(struct iommu_domain *domain, dma_addr_t iova,
 			     size_t size)
 {
 	int ret;
diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 074018979cdf..573247097715 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -878,7 +878,7 @@ static int lv2set_page(unsigned long *pent, phys_addr_t paddr, size_t size,
 	return 0;
 }
 
-static int exynos_iommu_map(struct iommu_domain *domain, unsigned long iova,
+static int exynos_iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 			 phys_addr_t paddr, size_t size, int prot)
 {
 	struct exynos_iommu_domain *priv = domain->priv;
@@ -919,7 +919,7 @@ static int exynos_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t exynos_iommu_unmap(struct iommu_domain *domain,
-					       unsigned long iova, size_t size)
+					       dma_addr_t iova, size_t size)
 {
 	struct exynos_iommu_domain *priv = domain->priv;
 	struct sysmmu_drvdata *data;
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 69fa7da5e48b..683cdab05d3d 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4242,7 +4242,7 @@ static void intel_iommu_detach_device(struct iommu_domain *domain,
 }
 
 static int intel_iommu_map(struct iommu_domain *domain,
-			   unsigned long iova, phys_addr_t hpa,
+			   dma_addr_t iova, phys_addr_t hpa,
 			   size_t size, int iommu_prot)
 {
 	struct dmar_domain *dmar_domain = domain->priv;
@@ -4280,7 +4280,7 @@ static int intel_iommu_map(struct iommu_domain *domain,
 }
 
 static size_t intel_iommu_unmap(struct iommu_domain *domain,
-				unsigned long iova, size_t size)
+				dma_addr_t iova, size_t size)
 {
 	struct dmar_domain *dmar_domain = domain->priv;
 	struct page *freelist = NULL;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index e5555fcfe703..5c4f2e07b579 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -796,10 +796,10 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
 	return pgsize;
 }
 
-int iommu_map(struct iommu_domain *domain, unsigned long iova,
+int iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 	      phys_addr_t paddr, size_t size, int prot)
 {
-	unsigned long orig_iova = iova;
+	dma_addr_t orig_iova = iova;
 	unsigned int min_pagesz;
 	size_t orig_size = size;
 	int ret = 0;
@@ -849,7 +849,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 EXPORT_SYMBOL_GPL(iommu_map);
 
-size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
+size_t iommu_unmap(struct iommu_domain *domain, dma_addr_t iova, size_t size)
 {
 	size_t unmapped_page, unmapped = 0;
 	unsigned int min_pagesz;
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index f5ff657f49fa..5159ad6b79e7 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -359,7 +359,7 @@ fail:
 	spin_unlock_irqrestore(&msm_iommu_lock, flags);
 }
 
-static int msm_iommu_map(struct iommu_domain *domain, unsigned long va,
+static int msm_iommu_map(struct iommu_domain *domain, dma_addr_t va,
 			 phys_addr_t pa, size_t len, int prot)
 {
 	struct msm_priv *priv;
@@ -470,8 +470,8 @@ fail:
 	return ret;
 }
 
-static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long va,
-			    size_t len)
+static size_t msm_iommu_unmap(struct iommu_domain *domain, dma_addr_t va,
+			      size_t len)
 {
 	struct msm_priv *priv;
 	unsigned long flags;
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index 7fcbfc498fa9..76a34af53190 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -1054,7 +1054,7 @@ static u32 iotlb_init_entry(struct iotlb_entry *e, u32 da, u32 pa,
 	return iopgsz_to_bytes(e->pgsz);
 }
 
-static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
+static int omap_iommu_map(struct iommu_domain *domain, dma_addr_t da,
 			 phys_addr_t pa, size_t bytes, int prot)
 {
 	struct omap_iommu_domain *omap_domain = domain->priv;
@@ -1084,7 +1084,7 @@ static int omap_iommu_map(struct iommu_domain *domain, unsigned long da,
 	return ret;
 }
 
-static size_t omap_iommu_unmap(struct iommu_domain *domain, unsigned long da,
+static size_t omap_iommu_unmap(struct iommu_domain *domain, dma_addr_t da,
 			    size_t size)
 {
 	struct omap_iommu_domain *omap_domain = domain->priv;
diff --git a/drivers/iommu/shmobile-iommu.c b/drivers/iommu/shmobile-iommu.c
index 464acda0bbc4..037bf65e8ac2 100644
--- a/drivers/iommu/shmobile-iommu.c
+++ b/drivers/iommu/shmobile-iommu.c
@@ -210,7 +210,7 @@ static void l2free(struct shmobile_iommu_domain *sh_domain,
 	}
 }
 
-static int shmobile_iommu_map(struct iommu_domain *domain, unsigned long iova,
+static int shmobile_iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 			      phys_addr_t paddr, size_t size, int prot)
 {
 	struct shmobile_iommu_domain_pgtable l2 = { .pgtable = NULL };
@@ -255,7 +255,7 @@ static int shmobile_iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 
 static size_t shmobile_iommu_unmap(struct iommu_domain *domain,
-				   unsigned long iova, size_t size)
+				   dma_addr_t iova, size_t size)
 {
 	struct shmobile_iommu_domain_pgtable l2 = { .pgtable = NULL };
 	struct shmobile_iommu_domain *sh_domain = domain->priv;
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index dba1a9fd5070..1ac5fa0d2f13 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -239,7 +239,7 @@ static void gart_iommu_domain_destroy(struct iommu_domain *domain)
 	domain->priv = NULL;
 }
 
-static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
+static int gart_iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 			  phys_addr_t pa, size_t bytes, int prot)
 {
 	struct gart_device *gart = domain->priv;
@@ -262,7 +262,7 @@ static int gart_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return 0;
 }
 
-static size_t gart_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
+static size_t gart_iommu_unmap(struct iommu_domain *domain, dma_addr_t iova,
 			       size_t bytes)
 {
 	struct gart_device *gart = domain->priv;
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 605b5b46a903..bdac538d8bef 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -724,7 +724,7 @@ static void __smmu_iommu_map_pfn(struct smmu_as *as, dma_addr_t iova,
 	put_signature(as, iova, pfn);
 }
 
-static int smmu_iommu_map(struct iommu_domain *domain, unsigned long iova,
+static int smmu_iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 			  phys_addr_t pa, size_t bytes, int prot)
 {
 	struct smmu_as *as = domain->priv;
@@ -742,7 +742,7 @@ static int smmu_iommu_map(struct iommu_domain *domain, unsigned long iova,
 	return 0;
 }
 
-static size_t smmu_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
+static size_t smmu_iommu_unmap(struct iommu_domain *domain, dma_addr_t iova,
 			       size_t bytes)
 {
 	struct smmu_as *as = domain->priv;
diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 3cd85a638afa..ed22d6c6f204 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -73,7 +73,7 @@ static const char *rproc_crash_to_string(enum rproc_crash_type type)
  * will try to access an unmapped device address.
  */
 static int rproc_iommu_fault(struct iommu_domain *domain, struct device *dev,
-		unsigned long iova, int flags, void *token)
+		dma_addr_t iova, int flags, void *token)
 {
 	struct rproc *rproc = token;
 
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b96a5b2136e4..5877aa7720b2 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -41,7 +41,7 @@ struct notifier_block;
 #define IOMMU_FAULT_WRITE	0x1
 
 typedef int (*iommu_fault_handler_t)(struct iommu_domain *,
-			struct device *, unsigned long, int, void *);
+			struct device *, dma_addr_t, int, void *);
 
 struct iommu_domain_geometry {
 	dma_addr_t aperture_start; /* First address that can be mapped    */
@@ -106,9 +106,9 @@ struct iommu_ops {
 	void (*domain_destroy)(struct iommu_domain *domain);
 	int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
 	void (*detach_dev)(struct iommu_domain *domain, struct device *dev);
-	int (*map)(struct iommu_domain *domain, unsigned long iova,
+	int (*map)(struct iommu_domain *domain, dma_addr_t iova,
 		   phys_addr_t paddr, size_t size, int prot);
-	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
+	size_t (*unmap)(struct iommu_domain *domain, dma_addr_t iova,
 		     size_t size);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
 	int (*domain_has_cap)(struct iommu_domain *domain,
@@ -149,9 +149,9 @@ extern int iommu_attach_device(struct iommu_domain *domain,
 			       struct device *dev);
 extern void iommu_detach_device(struct iommu_domain *domain,
 				struct device *dev);
-extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
+extern int iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 		     phys_addr_t paddr, size_t size, int prot);
-extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
+extern size_t iommu_unmap(struct iommu_domain *domain, dma_addr_t iova,
 		       size_t size);
 extern phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova);
 extern int iommu_domain_has_cap(struct iommu_domain *domain,
@@ -217,7 +217,7 @@ extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr)
  * elicit the default behavior of the IOMMU drivers).
  */
 static inline int report_iommu_fault(struct iommu_domain *domain,
-		struct device *dev, unsigned long iova, int flags)
+		struct device *dev, dma_addr_t iova, int flags)
 {
 	int ret = -ENOSYS;
 
@@ -268,13 +268,13 @@ static inline void iommu_detach_device(struct iommu_domain *domain,
 {
 }
 
-static inline int iommu_map(struct iommu_domain *domain, unsigned long iova,
+static inline int iommu_map(struct iommu_domain *domain, dma_addr_t iova,
 			    phys_addr_t paddr, int gfp_order, int prot)
 {
 	return -ENODEV;
 }
 
-static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova,
+static inline int iommu_unmap(struct iommu_domain *domain, dma_addr_t iova,
 			      int gfp_order)
 {
 	return -ENODEV;


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 5/5] iommu/exynos: Remove unnecessary "&" from function pointers
  2014-05-06 22:48 [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage Bjorn Helgaas
                   ` (3 preceding siblings ...)
  2014-05-06 22:48 ` [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments Bjorn Helgaas
@ 2014-05-06 22:48 ` Bjorn Helgaas
  2014-05-07  7:59   ` Arnd Bergmann
  4 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-06 22:48 UTC (permalink / raw)
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci, David Woodhouse

Remove the unnecessary "&" from the function pointers in exynos_iommu_ops.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/iommu/exynos-iommu.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 573247097715..7f5ffd737960 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -1011,13 +1011,13 @@ static phys_addr_t exynos_iommu_iova_to_phys(struct iommu_domain *domain,
 }
 
 static struct iommu_ops exynos_iommu_ops = {
-	.domain_init = &exynos_iommu_domain_init,
-	.domain_destroy = &exynos_iommu_domain_destroy,
-	.attach_dev = &exynos_iommu_attach_device,
-	.detach_dev = &exynos_iommu_detach_device,
-	.map = &exynos_iommu_map,
-	.unmap = &exynos_iommu_unmap,
-	.iova_to_phys = &exynos_iommu_iova_to_phys,
+	.domain_init = exynos_iommu_domain_init,
+	.domain_destroy = exynos_iommu_domain_destroy,
+	.attach_dev = exynos_iommu_attach_device,
+	.detach_dev = exynos_iommu_detach_device,
+	.map = exynos_iommu_map,
+	.unmap = exynos_iommu_unmap,
+	.iova_to_phys = exynos_iommu_iova_to_phys,
 	.pgsize_bitmap = SECT_SIZE | LPAGE_SIZE | SPAGE_SIZE,
 };
 



* Re: [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction
  2014-05-06 22:48 ` [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction Bjorn Helgaas
@ 2014-05-07  7:37   ` Arnd Bergmann
  2014-05-07 18:43     ` Bjorn Helgaas
  2014-05-08  9:24   ` Greg Kroah-Hartman
  1 sibling, 1 reply; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-07  7:37 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Tuesday 06 May 2014 16:48:19 Bjorn Helgaas wrote:
> The DMA-API documentation sometimes refers to "physical addresses" when it
> really means "bus addresses."  Sometimes these are identical, but they may
> be different if the bridge leading to the bus performs address translation.
> Update the documentation to use "bus address" when appropriate.
> 
> Also, consistently capitalize "DMA", use parens with function names, use
> dev_printk() in examples, and reword a few sections for clarity.
> 
> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

Looks great!

Acked-by: Arnd Bergmann <arnd@arndb.de>

Just some minor comments that you may include if you like (my Ack
holds if you don't as well).

> @@ -30,16 +28,16 @@ hardware exists.
>  
>  Note that the DMA API works with any bus independent of the underlying
>  microprocessor architecture. You should use the DMA API rather than
> -the bus specific DMA API (e.g. pci_dma_*).
> +the bus-specific DMA API (e.g. pci_dma_*).

It might make sense to change the example to dma_map_* rather than pci_dma_*,
which is rarely used these days. I think there was at one point a move
to remove the include/asm-generic/pci-dma-compat.h APIs.

>  First of all, you should make sure
>  
>  #include <linux/dma-mapping.h>
>  
> -is in your driver. This file will obtain for you the definition of the
> -dma_addr_t (which can hold any valid DMA address for the platform)
> -type which should be used everywhere you hold a DMA (bus) address
> -returned from the DMA mapping functions.
> +is in your driver, which provides the definition of dma_addr_t.  This type
> +can hold any valid DMA or bus address for the platform and should be used
> +everywhere you hold a DMA address returned from the DMA mapping functions
> +or a bus address read from a device register such as a PCI BAR.

The PCI BAR example is misleading I think: While the raw value of the
BAR would be a dma_addr_t that can be used for pci-pci DMA, we normally
only deal with translated BARs from pci_resource_*, which would be
a resource_size_t in the same space as phys_addr_t, which has the
PCI mem_offset added in.

> + * A dma_addr_t can hold any valid DMA or bus address for the platform.
> + * It can be given to a device to use as a DMA source or target, or it may
> + * appear on the bus when a CPU performs programmed I/O.  A CPU cannot
> + * reference a dma_addr_t directly because there may be translation between
> + * its physical address space and the bus address space.

On a similar note, I think the part 'or it may appear on the bus when a CPU
performs programmed I/O' is somewhat misleading: While true in theory, we
would never use a dma_addr_t to store an address to be used for PIO, because
the CPU needs to use either the phys_addr_t value associated with the physical
MMIO address or the __iomem pointer for the virtually mapped address.

	Arnd


* Re: [PATCH v2 2/5] DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t
  2014-05-06 22:48 ` [PATCH v2 2/5] DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t Bjorn Helgaas
@ 2014-05-07  7:38   ` Arnd Bergmann
  0 siblings, 0 replies; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-07  7:38 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Tuesday 06 May 2014 16:48:26 Bjorn Helgaas wrote:
> dma_declare_coherent_memory() takes two addresses for a region of memory: a
> "bus_addr" and a "device_addr".  I think the intent is that "bus_addr" is
> the physical address a *CPU* would use to access the region, and
> "device_addr" is the bus address the *device* would use to address the
> region.
> 
> Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
> Most callers already supply a phys_addr_t for this argument.  The others
> supply a 32-bit integer (a constant, unsigned int, or __u32) and need no
> change.
> 
> Use "unsigned long", not phys_addr_t, to hold PFNs.
> 
> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
> 

Acked-by: Arnd Bergmann <arnd@arndb.de>


* Re: [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory()
  2014-05-06 22:48 ` [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory() Bjorn Helgaas
@ 2014-05-07  7:55   ` Arnd Bergmann
  2014-05-07  8:15     ` Arnd Bergmann
  0 siblings, 1 reply; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-07  7:55 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Tuesday 06 May 2014 16:48:33 Bjorn Helgaas wrote:
> @@ -51,8 +53,12 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
>                 /*
>                  * Redirect dma memory allocations to special memory window.
>                  */
> +               region.start = GAPSPCI_DMA_BASE;
> +               region.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
> +               res.flags = IORESOURCE_MEM;
> +               pcibios_bus_to_resource(dev->bus, &res, &region);
>                 BUG_ON(!dma_declare_coherent_memory(&dev->dev,
> -                                               GAPSPCI_DMA_BASE,
> > +                                               res.start,
>                                                 GAPSPCI_DMA_BASE,
>                                                 GAPSPCI_DMA_SIZE,
>                                                 DMA_MEMORY_MAP |

Not sure if this is an improvement. I understand the intention, but
it's actually possible for the offset to be different both ways:

Your patch applies the outbound mem_offset that was provided when
registering the MMIO resource for the PCI host bridge. What the driver
needs instead is the inbound DMA offset that gets applied by some
host bridges that don't have a 1:1 mapping but also don't have an
IOMMU. We know that on this particular platform, both are zero, so
the original code works and it will keep working with your change,
but I think it's a mistake anyway. I have seen both kinds of offsets
in the past on real machines, but I am not aware of any where they
are both nonzero and have the same value.

	Arnd


* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-06 22:48 ` [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments Bjorn Helgaas
@ 2014-05-07  7:58   ` Arnd Bergmann
  2014-05-08  0:18     ` Bjorn Helgaas
  0 siblings, 1 reply; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-07  7:58 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Tuesday 06 May 2014 16:48:40 Bjorn Helgaas wrote:
> Convert the "iova" arguments of iommu_map(), iommu_unmap(), etc., from
> "unsigned long" to dma_addr_t.
> 
> bb5547acfcd8 ("iommu/fsl: Make iova dma_addr_t in the iommu_iova_to_phys
> API") did this for iommu_iova_to_phys(), but didn't fix the rest of the
> IOMMU API.

This patch looks 100% correct, but I'm not convinced it's a good idea:
On 32-bit platforms (i.e. most of the ones you change), doing 64-bit
arithmetic has a noticeable overhead. I am not aware of an IOMMU that
actually uses 64-bit DMA addresses on its slave side, usually they
are used to translate addresses from 32-bit masters into 64-bit
memory addresses, so using 'unsigned long' seems better from a practical
point of view as opposed to the strict type correctness.

	Arnd


* Re: [PATCH v2 5/5] iommu/exynos: Remove unnecessary "&" from function pointers
  2014-05-06 22:48 ` [PATCH v2 5/5] iommu/exynos: Remove unnecessary "&" from function pointers Bjorn Helgaas
@ 2014-05-07  7:59   ` Arnd Bergmann
  0 siblings, 0 replies; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-07  7:59 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Tuesday 06 May 2014 16:48:46 Bjorn Helgaas wrote:
> Remove the unnecessary "&" from the function pointers in exynos_iommu_ops.
> 
> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

Acked-by: Arnd Bergmann <arnd@arndb.de>


* Re: [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory()
  2014-05-07  7:55   ` Arnd Bergmann
@ 2014-05-07  8:15     ` Arnd Bergmann
  2014-05-07 23:18       ` Bjorn Helgaas
  0 siblings, 1 reply; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-07  8:15 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Wednesday 07 May 2014 09:55:16 Arnd Bergmann wrote:
> On Tuesday 06 May 2014 16:48:33 Bjorn Helgaas wrote:
> > @@ -51,8 +53,12 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
> >                 /*
> >                  * Redirect dma memory allocations to special memory window.
> >                  */
> > +               region.start = GAPSPCI_DMA_BASE;
> > +               region.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
> > +               res.flags = IORESOURCE_MEM;
> > +               pcibios_bus_to_resource(dev->bus, &res, &region);
> >                 BUG_ON(!dma_declare_coherent_memory(&dev->dev,
> > -                                               GAPSPCI_DMA_BASE,
> > +                                               res.start,
> >                                                 GAPSPCI_DMA_BASE,
> >                                                 GAPSPCI_DMA_SIZE,
> >                                                 DMA_MEMORY_MAP |
> 
> Not sure if this is an improvement. I understand the intention, but
> it's actually possible for the offset to be different both ways:
> 
> Your patch applies the outbound mem_offset that was provided when
> registering the MMIO resource for the PCI host bridge. What the driver
> needs instead is the inbound DMA offset that gets applied by some
> host bridges that don't have a 1:1 mapping but also don't have an
> IOMMU. We know that on this particular platform, both are zero, so
> the original code works and it will keep working with your change,
> but I think it's a mistake anyway. I have seen both kinds of offsets
> in the past on real machines, but I am not aware of any where they
> are both nonzero and have the same value.

Hmm, looking at it again, it seems even weirder: the address we register
for DMA allocations is the same that gets passed into the PCI host
for MMIO resources. Something is very strange here. Still, I'd rather
not touch it at all, whatever it does.

	Arnd


* Re: [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction
  2014-05-07  7:37   ` Arnd Bergmann
@ 2014-05-07 18:43     ` Bjorn Helgaas
  0 siblings, 0 replies; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-07 18:43 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Wed, May 07, 2014 at 09:37:04AM +0200, Arnd Bergmann wrote:
> On Tuesday 06 May 2014 16:48:19 Bjorn Helgaas wrote:
> > The DMA-API documentation sometimes refers to "physical addresses" when it
> > really means "bus addresses."  Sometimes these are identical, but they may
> > be different if the bridge leading to the bus performs address translation.
> > Update the documentation to use "bus address" when appropriate.
> > 
> > Also, consistently capitalize "DMA", use parens with function names, use
> > dev_printk() in examples, and reword a few sections for clarity.
> > 
> > Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
> 
> Looks great!
> 
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> 
> Just some minor comments that you may include if you like (my Ack
> holds if you don't as well).
> 
> > @@ -30,16 +28,16 @@ hardware exists.
> >  
> >  Note that the DMA API works with any bus independent of the underlying
> >  microprocessor architecture. You should use the DMA API rather than
> > -the bus specific DMA API (e.g. pci_dma_*).
> > +the bus-specific DMA API (e.g. pci_dma_*).
> 
> It might make sense to change the example to dma_map_* rather than pci_dma_*,
> which is rarely used these days. I think there was at one point a move
> to replace remove the include/asm-generic/pci-dma-compat.h APIs.

I reworded this as:

  You should use the DMA API rather than the bus-specific DMA API, i.e.,
  use the dma_map_*() interfaces rather than the pci_map_*() interfaces.

Does that clear it up?

> >  First of all, you should make sure
> >  
> >  #include <linux/dma-mapping.h>
> >  
> > -is in your driver. This file will obtain for you the definition of the
> > -dma_addr_t (which can hold any valid DMA address for the platform)
> > -type which should be used everywhere you hold a DMA (bus) address
> > -returned from the DMA mapping functions.
> > +is in your driver, which provides the definition of dma_addr_t.  This type
> > +can hold any valid DMA or bus address for the platform and should be used
> > +everywhere you hold a DMA address returned from the DMA mapping functions
> > +or a bus address read from a device register such as a PCI BAR.
> 
> The PCI BAR example is misleading I think: While the raw value of the
> BAR would be a dma_addr_t that can be used for pci-pci DMA, we normally
> only deal with translated BARs from pci_resource_*, which would be
> a resource_size_t in the same space as phys_addr_t, which has the
> PCI mem_offset added in.

I removed the last line ("or a bus address ...")

> > + * A dma_addr_t can hold any valid DMA or bus address for the platform.
> > + * It can be given to a device to use as a DMA source or target, or it may
> > + * appear on the bus when a CPU performs programmed I/O.  A CPU cannot
> > + * reference a dma_addr_t directly because there may be translation between
> > + * its physical address space and the bus address space.
> 
> On a similar note, I think the part 'or it may appear on the bus when a CPU
> performs programmed I/O' is somewhat misleading: While true in theory, we
> would never use a dma_addr_t to store an address to be used for PIO, because
> the CPU needs to use either the phys_addr_t value associated with the physical
> MMIO address or the __iomem pointer for the virtually mapped address.

Yep, makes sense, I removed that too, thanks!

I wrote the text below to give a little background.  Maybe it's overkill for
DMA-API-HOWTO.txt, but there really isn't much coverage of this elsewhere
in Documentation/.  If I did include this, I'd propose removing this text
at the same time (I think it's a bit over-specific now, and I still have a
brief IOMMU description):

  -Most of the 64bit platforms have special hardware that translates bus
  -addresses (DMA addresses) into physical addresses.  This is similar to
  -how page tables and/or a TLB translates virtual addresses to physical
  -addresses on a CPU.  This is needed so that e.g. PCI devices can
  -access with a Single Address Cycle (32bit DMA address) any page in the
  -64bit physical address space.  Previously in Linux those 64bit
  -platforms had to set artificial limits on the maximum RAM size in the
  -system, so that the virt_to_bus() static scheme works (the DMA address
  -translation tables were simply filled on bootup to map each bus
  -address to the physical page __pa(bus_to_virt())).

Bjorn


                       CPU and DMA addresses

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a "void *".

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address, a "bus address" or "DMA address".
If a device has registers at an MMIO address, or if it performs DMA to read  
or write system memory, the addresses used by the device are bus addresses.
In some systems, bus addresses are identical to CPU physical addresses, but
in general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

Here's a picture and some examples:

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then pass C to interfaces like 
ioread32() to perform MMIO accesses to device registers.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical
address Y.  But in many others, there is special IOMMU hardware that
translates bus addresses, e.g., Z, to physical addresses.  This is part of
the reason for the DMA API: the driver can give a virtual address X to an
interface like dma_map_single(), which sets up any required IOMMU mapping
and returns the bus address Z.  The driver then tells the device to do DMA
to Z, and the IOMMU maps it to the buffer in system RAM.


* Re: [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory()
  2014-05-07  8:15     ` Arnd Bergmann
@ 2014-05-07 23:18       ` Bjorn Helgaas
  2014-05-08 11:36         ` Arnd Bergmann
  0 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-07 23:18 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse, Magnus Damm, linux-sh

[+cc Magnus, linux-sh (sorry, I forgot to tell stgit to cc you)]

On Wed, May 07, 2014 at 10:15:10AM +0200, Arnd Bergmann wrote:
> On Wednesday 07 May 2014 09:55:16 Arnd Bergmann wrote:
> > On Tuesday 06 May 2014 16:48:33 Bjorn Helgaas wrote:
> > > @@ -51,8 +53,12 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
> > >                 /*
> > >                  * Redirect dma memory allocations to special memory window.
> > >                  */
> > > +               region.start = GAPSPCI_DMA_BASE;
> > > +               region.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
> > > +               res.flags = IORESOURCE_MEM;
> > > +               pcibios_bus_to_resource(dev->bus, &res, &region);
> > >                 BUG_ON(!dma_declare_coherent_memory(&dev->dev,
> > > -                                               GAPSPCI_DMA_BASE,
> > > +                                               res.start,
> > >                                                 GAPSPCI_DMA_BASE,
> > >                                                 GAPSPCI_DMA_SIZE,
> > >                                                 DMA_MEMORY_MAP |
> > 
> > Not sure if this is an improvement. I understand the intention, but
> > it's actually possible for the offset to be different both ways:
> > 
> > Your patch applies the outbound mem_offset that was provided when
> > registering the MMIO resource for the PCI host bridge. What the driver
> > needs instead is the inbound DMA offset that gets applied by some
> > host bridges that don't have a 1:1 mapping but also don't have an
> > IOMMU.

I don't understand where the inbound DMA offset comes into play.  As I
understand it, we're talking about a region of memory on the PCI bus,
not in system RAM, and there are two ways to access it: (1) the CPU
makes MMIO accesses to it, and (2) the device makes DMA accesses to
it.

The CPU accesses go through the host bridge MMIO aperture and will
have the outbound mem_offset applied to them.  The DMA accesses are
handled by the device itself and there's no host bridge or IOMMU or
inbound DMA offset involved.

I think there are two reasonable ways a PCI driver could use
dma_declare_coherent_memory():

  1) The bus address of the region might be in a BAR.  In that case,
  it should pass pci_resource_start() (the CPU phys_addr_t) as the
  first address, and pci_bus_address() on the same BAR (the bus
  dma_addr_t) as the second.

  2) The bus address of the region might be discovered from the device
  in some non-standard way.  In this case, the driver reads the bus
  address dma_addr_t directly from the device, and it should use
  something like the pcibios_bus_to_resource() code I proposed to find
  the CPU phys_addr_t.

For this platform, all the addresses are hard-coded and there is no
outbound MMIO offset, so we can't tell which to use; on this
system, it doesn't matter whether we do anything at all.

The only reason I would make a change here is because the current code
cannot be safely copied (another driver might be used in systems where
the host bridge does have an outbound MMIO offset).  If we make this
code use either strategy, it would be a clue to future users that they
should not assume the physical address is the same as the bus address.

I changed the patch (below) to add a comment and to use the first
strategy (though there isn't an actual BAR, so we can't do it exactly
that way).

> > We know that on this particular platform, both are zero, so
> > the original code works and it will keep working with your change,
> > but I think it's a mistake anyway. I have seen both kinds of offsets
> > in the past on real machines, but I am not aware of any where they
> > are both nonzero and have the same value.
> 
> Hmm, looking at it again, it seems even weirder: the address we register
> for DMA allocations is the same that gets passed into the PCI host
> for MMIO resources. Something is very strange here. Still, I'd rather
> not touch it at all, whatever it does.

It's definitely strange.  It looks to me like the memory on the device
takes up the entire 32KB host bridge MMIO aperture.  I don't know what
driver uses this, but the device must be programmed via I/O ports,
which sort of makes sense given the previous lines in this quirk that
set the BAR1 resource to be in the I/O aperture.

Bjorn



sh/PCI: Pass GAPSPCI_DMA_BASE CPU & bus address to dma_declare_coherent_memory()

From: Bjorn Helgaas <bhelgaas@google.com>

dma_declare_coherent_memory() needs both the CPU physical address and the
bus address of the device memory.  They are the same on this platform, but
in general we should use pcibios_resource_to_bus() to account for any
address translation done by the PCI host bridge.

This makes no difference on Dreamcast, but is safer if the usage is copied
to future drivers.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Magnus Damm <damm@opensource.se>
CC: linux-sh@vger.kernel.org
---
 arch/sh/drivers/pci/fixups-dreamcast.c |   18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/sh/drivers/pci/fixups-dreamcast.c b/arch/sh/drivers/pci/fixups-dreamcast.c
index d6cde700e316..f22127f966ce 100644
--- a/arch/sh/drivers/pci/fixups-dreamcast.c
+++ b/arch/sh/drivers/pci/fixups-dreamcast.c
@@ -31,6 +31,8 @@
 static void gapspci_fixup_resources(struct pci_dev *dev)
 {
 	struct pci_channel *p = dev->sysdata;
+	struct resource res;
+	struct pci_bus_region region;
 
 	printk(KERN_NOTICE "PCI: Fixing up device %s\n", pci_name(dev));
 
@@ -50,11 +52,21 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
 
 		/*
 		 * Redirect dma memory allocations to special memory window.
+		 *
+		 * If this GAPSPCI region were mapped by a BAR, the CPU
+		 * phys_addr_t would be pci_resource_start(dev, bar), and the
+		 * bus address would be pci_bus_address(dev, bar).
+		 * But apparently there's no BAR mapping it, so we just
+		 * "know" its CPU address is GAPSPCI_DMA_BASE.
 		 */
+		res.start = GAPSPCI_DMA_BASE;
+		res.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
+		res.flags = IORESOURCE_MEM;
+		pcibios_resource_to_bus(dev->bus, &region, &res);
 		BUG_ON(!dma_declare_coherent_memory(&dev->dev,
-						GAPSPCI_DMA_BASE,
-						GAPSPCI_DMA_BASE,
-						GAPSPCI_DMA_SIZE,
+						res.start,
+						region.start,
+						resource_size(&res),
 						DMA_MEMORY_MAP |
 						DMA_MEMORY_EXCLUSIVE));
 		break;


* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-07  7:58   ` Arnd Bergmann
@ 2014-05-08  0:18     ` Bjorn Helgaas
  2014-05-08 10:44       ` Arnd Bergmann
  0 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-08  0:18 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Wed, May 07, 2014 at 09:58:58AM +0200, Arnd Bergmann wrote:
> On Tuesday 06 May 2014 16:48:40 Bjorn Helgaas wrote:
> > Convert the "iova" arguments of iommu_map(), iommu_unmap(), etc., from
> > "unsigned long" to dma_addr_t.
> > 
> > bb5547acfcd8 ("iommu/fsl: Make iova dma_addr_t in the iommu_iova_to_phys
> > API") did this for iommu_iova_to_phys(), but didn't fix the rest of the
> > IOMMU API.
> 
> This patch looks 100% correct, but I'm not convinced it's a good idea:
> On 32-bit platforms (i.e. most of the ones you change), doing 64-bit
> arithmetic has a noticeable overhead. I am not aware of an IOMMU that
> actually uses 64-bit DMA addresses on its slave side, usually they
> are used to translate addresses from 32-bit masters into 64-bit
> memory addresses, so using 'unsigned long' seems better from a practical
> point of view as opposed to the strict type correctness.

The current x86 IOMMUs support DMA addresses larger than 32 bits, but
obviously those platforms usually run 64-bit kernels so "unsigned
long" is already 64 bits.

I guess you're thinking about cases where "unsigned long" is 32 bits,
the IOMMU only supports 32 bit DMA addresses, and dma_addr_t is 64
bits.  If the IOMMU only supports 32-bit DMA addresses, is there any
value in having a 64-bit dma_addr_t?

Bjorn


* Re: [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction
  2014-05-06 22:48 ` [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction Bjorn Helgaas
  2014-05-07  7:37   ` Arnd Bergmann
@ 2014-05-08  9:24   ` Greg Kroah-Hartman
  1 sibling, 0 replies; 24+ messages in thread
From: Greg Kroah-Hartman @ 2014-05-08  9:24 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Arnd Bergmann, linux-doc, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Tue, May 06, 2014 at 04:48:19PM -0600, Bjorn Helgaas wrote:
> The DMA-API documentation sometimes refers to "physical addresses" when it
> really means "bus addresses."  Sometimes these are identical, but they may
> be different if the bridge leading to the bus performs address translation.
> Update the documentation to use "bus address" when appropriate.
> 
> Also, consistently capitalize "DMA", use parens with function names, use
> dev_printk() in examples, and reword a few sections for clarity.
> 
> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Thanks for cleaning this up.



* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-08  0:18     ` Bjorn Helgaas
@ 2014-05-08 10:44       ` Arnd Bergmann
  2014-05-08 20:30         ` Bjorn Helgaas
  0 siblings, 1 reply; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-08 10:44 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Wednesday 07 May 2014 18:18:14 Bjorn Helgaas wrote:
> On Wed, May 07, 2014 at 09:58:58AM +0200, Arnd Bergmann wrote:
> > On Tuesday 06 May 2014 16:48:40 Bjorn Helgaas wrote:
> > > Convert the "iova" arguments of iommu_map(), iommu_unmap(), etc., from
> > > "unsigned long" to dma_addr_t.
> > > 
> > > bb5547acfcd8 ("iommu/fsl: Make iova dma_addr_t in the iommu_iova_to_phys
> > > API") did this for iommu_iova_to_phys(), but didn't fix the rest of the
> > > IOMMU API.
> > 
> > This patch looks 100% correct, but I'm not convinced it's a good idea:
> > On 32-bit platforms (i.e. most of the ones you change), doing 64-bit
> > arithmetic has a noticeable overhead. I am not aware of an IOMMU that
> > actually uses 64-bit DMA addresses on its slave side, usually they
> > are used to translate addresses from 32-bit masters into 64-bit
> > memory addresses, so using 'unsigned long' seems better from a practical
> > point of view as opposed to the strict type correctness.
> 
> The current x86 IOMMUs support DMA addresses larger than 32 bits, but
> obviously those platforms usually run 64-bit kernels so "unsigned
> long" is already 64 bits.
> 
> I guess you're thinking about cases where "unsigned long" is 32 bits,
> the IOMMU only supports 32 bit DMA addresses, and dma_addr_t is 64
> bits.  If the IOMMU only supports 32-bit DMA addresses, is there any
> value in having a 64-bit dma_addr_t?

Two cases:

a) You can have a system with some DMA masters that are 64-bit capable,
   and other masters that can only do 32-bit DMA and for this reason
   use an IOMMU. I expect to see more of these in the future.

b) We build kernels that run on a multitude of ARM32 platforms these
   days. We have some that require a 32-bit dma_addr_t and some that
   don't (because they always use swiotlb or an IOMMU).

	Arnd


* Re: [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory()
  2014-05-07 23:18       ` Bjorn Helgaas
@ 2014-05-08 11:36         ` Arnd Bergmann
  0 siblings, 0 replies; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-08 11:36 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse, Magnus Damm, linux-sh

On Wednesday 07 May 2014 17:18:22 Bjorn Helgaas wrote:
> [+cc Magnus, linux-sh (sorry, I forgot to tell stgit to cc you)]
> 
> On Wed, May 07, 2014 at 10:15:10AM +0200, Arnd Bergmann wrote:
> > On Wednesday 07 May 2014 09:55:16 Arnd Bergmann wrote:
> > > On Tuesday 06 May 2014 16:48:33 Bjorn Helgaas wrote:
> > > > @@ -51,8 +53,12 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
> > > >                 /*
> > > >                  * Redirect dma memory allocations to special memory window.
> > > >                  */
> > > > +               region.start = GAPSPCI_DMA_BASE;
> > > > +               region.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
> > > > +               res.flags = IORESOURCE_MEM;
> > > > +               pcibios_bus_to_resource(dev->bus, &res, &region);
> > > >                 BUG_ON(!dma_declare_coherent_memory(&dev->dev,
> > > > -                                               GAPSPCI_DMA_BASE,
> > > > +                                               res.start,
> > > >                                                 GAPSPCI_DMA_BASE,
> > > >                                                 GAPSPCI_DMA_SIZE,
> > > >                                                 DMA_MEMORY_MAP |
> > > 
> > > Not sure if this is an improvement. I understand the intention, but
> > > it's actually possible for the offset to be different both ways:
> > > 
> > > Your patch applies the outbound mem_offset that was provided when
> > > registering the MMIO resource for the PCI host bridge. What the driver
> > > needs instead is the inbound DMA offset that gets applied by some
> > > host bridges that don't have a 1:1 mapping but also don't have an
> > > IOMMU.
> 
> I don't understand where the inbound DMA offset comes into play.  As I
> understand it, we're talking about a region of memory on the PCI bus,
> not in system RAM, and there are two ways to access it: (1) the CPU
> makes MMIO accesses to it, and (2) the device makes DMA accesses to
> it.

I was actually assuming that it's in system RAM but happens to have
some property that lets PCI devices access it coherently.

Your explanation also seems plausible, I have no idea which one is
true.

> The CPU accesses go through the host bridge MMIO aperture and will
> have the outbound mem_offset applied to them.  The DMA accesses are
> handled by the device itself and there's no host bridge or IOMMU or
> inbound DMA offset involved.
> 
> I think there are two reasonable ways a PCI driver could use
> dma_declare_coherent_memory():
> 
>   1) The bus address of the region might be in a BAR.  In that case,
>   it should pass pci_resource_start() (the CPU phys_addr_t) as the
>   first address, and pci_bus_address(pci_resource_start()) (the bus
>   dma_addr_t) as the second.
> 
>   2) The bus address of the region might be discovered from the device
>   in some non-standard way.  In this case, the driver reads the bus
>   address dma_addr_t directly from the device, and it should use
>   something like the pcibios_bus_to_resource() code I proposed to find
>   the CPU phys_addr_t.
> 
> For this platform, all the addresses are hard-coded and there is no
> outbound MMIO offset, so we can't tell which to use and in this
> system, it doesn't matter if we do anything at all.

Based on your initial assumption, this all is correct.

> The only reason I would make a change here is because the current code
> cannot be safely copied (another driver might be used in systems where
> the host bridge does have an outbound MMIO offset).  If we make this
> code use either strategy, it would be a clue to future users that they
> should not assume the physical address is the same as the bus address.
> 
> I changed the patch (below) to add a comment and to use the first
> strategy (though there isn't an actual BAR, so we can't do it exactly
> that way).

Ok.

> > > We know that on this particular platform, both are zero, so
> > > the original code works and it will keep working with your change,
> > > but I think it's a mistake anyway. I have seen both kinds of offsets
> > > in the past on real machines, but I am not aware of any where they
> > > are both nonzero and have the same value.
> > 
> > Hmm, looking at it again, it seem even weirder: the address we register
> > for DMA allocations is the same that gets passed into the PCI host
> > for MMIO resources. Something is very strange here. Still, I'd rather
> > not touch it at all, whatever it does.
> 
> It's definitely strange.  It looks to me like the memory on the device
> takes up the entire 32KB host bridge MMIO aperture.  I don't know what
> driver uses this, but the device must be programmed via I/O ports,
> which sort of makes sense given the previous lines in this quirk that
> set the BAR1 resource to be in the I/O aperture.

From what I can tell, the only device on the PCI bus is an ethernet
adapter driven by 8139too configured into MMIO mode (both the driver
and the hardware support PIO and MMIO modes).

The gapspci_fixup_resources() in arch/sh/drivers/pci/fixups-dreamcast.c
seems to manually adjust the MMIO BAR to point to the correct place, which
is probably required because the MMIO aperture provided is fake.


> sh/PCI: Pass GAPSPCI_DMA_BASE CPU & bus address to dma_declare_coherent_memory()
> 
> From: Bjorn Helgaas <bhelgaas@google.com>
> 
> dma_declare_coherent_memory() needs both the CPU physical address and the
> bus address of the device memory.  They are the same on this platform, but
> in general we should use pcibios_resource_to_bus() to account for any
> address translation done by the PCI host bridge.
> 
> This makes no difference on Dreamcast, but is safer if the usage is copied
> to future drivers.
> 
> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
> CC: Magnus Damm <damm@opensource.se>
> CC: linux-sh@vger.kernel.org

Whether or not Magnus or someone else is able to clarify where
GAPSPCI_DMA_BASE actually is in the system (on the CPU bus or the PCI bus),
your patch does help anyone copying this code to a driver that needs it
for coherent memory on the PCI bus.

Acked-by: Arnd Bergmann <arnd@arndb.de>

> ---
>  arch/sh/drivers/pci/fixups-dreamcast.c |   18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/sh/drivers/pci/fixups-dreamcast.c b/arch/sh/drivers/pci/fixups-dreamcast.c
> index d6cde700e316..f22127f966ce 100644
> --- a/arch/sh/drivers/pci/fixups-dreamcast.c
> +++ b/arch/sh/drivers/pci/fixups-dreamcast.c
> @@ -31,6 +31,8 @@
>  static void gapspci_fixup_resources(struct pci_dev *dev)
>  {
>  	struct pci_channel *p = dev->sysdata;
> +	struct resource res;
> +	struct pci_bus_region region;
>  
>  	printk(KERN_NOTICE "PCI: Fixing up device %s\n", pci_name(dev));
>  
> @@ -50,11 +52,21 @@ static void gapspci_fixup_resources(struct pci_dev *dev)
>  
>  		/*
>  		 * Redirect dma memory allocations to special memory window.
> +		 *
> +		 * If this GAPSPCI region were mapped by a BAR, the CPU
> +		 * phys_addr_t would be pci_resource_start(), and the bus
> +		 * address would be pci_bus_address(pci_resource_start()).
> +		 * But apparently there's no BAR mapping it, so we just
> +		 * "know" its CPU address is GAPSPCI_DMA_BASE.
>  		 */
> +		res.start = GAPSPCI_DMA_BASE;
> +		res.end = GAPSPCI_DMA_BASE + GAPSPCI_DMA_SIZE - 1;
> +		res.flags = IORESOURCE_MEM;
> +		pcibios_resource_to_bus(dev->bus, &region, &res);
>  		BUG_ON(!dma_declare_coherent_memory(&dev->dev,
> -						GAPSPCI_DMA_BASE,
> -						GAPSPCI_DMA_BASE,
> -						GAPSPCI_DMA_SIZE,
> +						res.start,
> +						region.start,
> +						resource_size(&res),
>  						DMA_MEMORY_MAP |
>  						DMA_MEMORY_EXCLUSIVE));
>  		break;



* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-08 10:44       ` Arnd Bergmann
@ 2014-05-08 20:30         ` Bjorn Helgaas
  2014-05-09  9:58           ` David Woodhouse
  0 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-08 20:30 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: linux-doc, Greg Kroah-Hartman, Joerg Roedel, Randy Dunlap,
	Liviu Dudau, linux-kernel, James Bottomley, linux-pci,
	David Woodhouse

On Thu, May 08, 2014 at 12:44:12PM +0200, Arnd Bergmann wrote:
> On Wednesday 07 May 2014 18:18:14 Bjorn Helgaas wrote:
> > On Wed, May 07, 2014 at 09:58:58AM +0200, Arnd Bergmann wrote:
> > > On Tuesday 06 May 2014 16:48:40 Bjorn Helgaas wrote:
> > > > Convert the "iova" arguments of iommu_map(), iommu_unmap(), etc., from
> > > > "unsigned long" to dma_addr_t.
> > > > 
> > > > bb5547acfcd8 ("iommu/fsl: Make iova dma_addr_t in the iommu_iova_to_phys
> > > > API") did this for iommu_iova_to_phys(), but didn't fix the rest of the
> > > > IOMMU API.
> > > 
> > > This patch looks 100% correct, but I'm not convinced it's a good idea:
> > > On 32-bit platforms (i.e. most of the ones you change), doing 64-bit
> > > arithmetic has a noticeable overhead. I am not aware of an IOMMU that
> > > actually uses 64-bit DMA addresses on its slave side, usually they
> > > are used to translate addresses from 32-bit masters into 64-bit
> > > memory addresses, so using 'unsigned long' seems better from a practical
> > > point of view as opposed to the strict type correctness.
> > 
> > The current x86 IOMMUs support DMA addresses larger than 32 bits, but
> > obviously those platforms usually run 64-bit kernels so "unsigned
> > long" is already 64 bits.
> > 
> > I guess you're thinking about cases where "unsigned long" is 32 bits,
> > the IOMMU only supports 32 bit DMA addresses, and dma_addr_t is 64
> > bits.  If the IOMMU only supports 32-bit DMA addresses, is there any
> > value in having a 64-bit dma_addr_t?
> 
> Two cases:
> 
> a) You can have a system with some DMA masters that are 64-bit capable,
>    and other masters that can only do 32-bit DMA and for this reason
>    use an IOMMU. I expect to see more of these in the future.
> 
> b) We build kernels that run on a multitude of ARM32 platforms these
>    days. We have some that require a 32-bit dma_addr_t and some that
>    don't (because they always use swiotlb or an IOMMU).

Those both make sense (I assume you meant some ARM32 platforms require
a *64-bit* dma_addr_t, i.e., you might want to build an ILP32 kernel
with a 64-bit dma_addr_t).

I doubt there would be a noticeable performance effect since these are
relatively low-frequency interfaces (map, unmap, report_fault), but I
don't have any numbers, so I'll drop this for now.

Bjorn


* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-08 20:30         ` Bjorn Helgaas
@ 2014-05-09  9:58           ` David Woodhouse
  2014-05-09 15:32             ` Bjorn Helgaas
  0 siblings, 1 reply; 24+ messages in thread
From: David Woodhouse @ 2014-05-09  9:58 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci


On Thu, 2014-05-08 at 14:30 -0600, Bjorn Helgaas wrote:
> I doubt there would be a noticeable performance effect since these are
> relatively low-frequency interfaces (map, unmap, report_fault), 

That point of view makes me sad.

There are people who care deeply about the performance of IOMMU API
map/unmap. It isn't used *just* for virtual machines any more. See
drivers/infiniband/hw/usnic/usnic_uiom.c for example.

(Yes, they probably ought to be using SVM. But that's not going to
happen on the current generation of hardware.)

I also hold out *some* hope for consolidating the map/unmap functions
for the IOMMU and DMA APIs at some point. The main difference is that
the DMA API allocates an IOVA for itself, while the IOMMU API is given
the bus address too.

So we end up with duplicated map/unmap functions, *and* all the IOMMU
drivers implementing their own IOVA allocator.

I'd like to see if we can have a single IOVA allocator for the DMA API
to use, and let IOMMU drivers implement *just* the IOMMU API style of
map/unmap functions where they're *told* where to put it.

Which is another reason I'm not quite ready to shrug and say that IOMMU
API map/unmap performance is uninteresting. Although since it's
predicated here by "on 32-bit systems", I'm not actually throwing my
toys out of the pram... :)

-- 
David Woodhouse                            Open Source Technology Centre
David.Woodhouse@intel.com                              Intel Corporation



* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-09  9:58           ` David Woodhouse
@ 2014-05-09 15:32             ` Bjorn Helgaas
  2014-05-09 19:52               ` Arnd Bergmann
  0 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-09 15:32 UTC (permalink / raw)
  To: David Woodhouse
  Cc: Arnd Bergmann, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci

On Fri, May 09, 2014 at 10:58:50AM +0100, David Woodhouse wrote:
> On Thu, 2014-05-08 at 14:30 -0600, Bjorn Helgaas wrote:
> > I doubt there would be a noticeable performance effect since these are
> > relatively low-frequency interfaces (map, unmap, report_fault), 
> 
> That point of view makes me sad.

Don't be sad; Arnd successfully fended off the challenge :)

> There are people who care deeply about the performance of IOMMU API
> map/unmap. It isn't used *just* for virtual machines any more. See
> drivers/infiniband/hw/usnic/usnic_uiom.c for example.

Of course we should care about IOMMU API performance.  We should also
care about interface consistency, and it seems there's a tradeoff in
this case.  I said "relatively" because I expect map/unmap to be less
frequent than read/write operations that use the mapping.  I don't
know anything about infiniband, so maybe that assumption is false
there.

> I also hold out *some* hope for consolidating the map/unmap functions
> for the IOMMU and DMA APIs at some point. The main difference is that
> the DMA API allocates an IOVA for itself, while the IOMMU API is given
> the bus address too.

I find this aspect of these APIs confusing, so I agree that it would
be nice if these could be consolidated somehow.

Bjorn


* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-09 15:32             ` Bjorn Helgaas
@ 2014-05-09 19:52               ` Arnd Bergmann
  2014-05-09 20:19                 ` Bjorn Helgaas
  0 siblings, 1 reply; 24+ messages in thread
From: Arnd Bergmann @ 2014-05-09 19:52 UTC (permalink / raw)
  To: Bjorn Helgaas
  Cc: David Woodhouse, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci

On Friday 09 May 2014 09:32:26 Bjorn Helgaas wrote:
> > There are people who care deeply about the performance of IOMMU API
> > map/unmap. It isn't used *just* for virtual machines any more. See
> > drivers/infiniband/hw/usnic/usnic_uiom.c for example.
> 
> Of course we should care about IOMMU API performance.  We should also
> care about interface consistency, and it seems there's a tradeoff in
> this case.  I said "relatively" because I expect map/unmap to be less
> frequent than read/write operations that use the mapping.  I don't
> know anything about infiniband, so maybe that assumption is false
> there.

In most drivers using the streaming DMA API, every mapping is used
exactly once. Think of network or block drivers: they rarely send
the same data twice to the device, and it usually comes from or
goes to some user space buffer.

	Arnd


* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-09 19:52               ` Arnd Bergmann
@ 2014-05-09 20:19                 ` Bjorn Helgaas
  2014-05-09 20:25                   ` James Bottomley
  0 siblings, 1 reply; 24+ messages in thread
From: Bjorn Helgaas @ 2014-05-09 20:19 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: David Woodhouse, linux-doc, Greg Kroah-Hartman, Joerg Roedel,
	Randy Dunlap, Liviu Dudau, linux-kernel, James Bottomley,
	linux-pci

On Fri, May 9, 2014 at 1:52 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Friday 09 May 2014 09:32:26 Bjorn Helgaas wrote:
>> > There are people who care deeply about the performance of IOMMU API
>> > map/unmap. It isn't used *just* for virtual machines any more. See
>> > drivers/infiniband/hw/usnic/usnic_uiom.c for example.
>>
>> Of course we should care about IOMMU API performance.  We should also
>> care about interface consistency, and it seems there's a tradeoff in
>> this case.  I said "relatively" because I expect map/unmap to be less
>> frequent than read/write operations that use the mapping.  I don't
>> know anything about infiniband, so maybe that assumption is false
>> there.
>
> In most drivers using the streaming DMA API, every mapping is used
> exactly once. Think of network or block drivers: they rarely send
> the same data twice to the device, and it usually comes from or
> goes to some user space buffer.

Oh, good point.  I don't work that high up in the stack, so thanks for
reminding me of that.

Bjorn


* Re: [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments
  2014-05-09 20:19                 ` Bjorn Helgaas
@ 2014-05-09 20:25                   ` James Bottomley
  0 siblings, 0 replies; 24+ messages in thread
From: James Bottomley @ 2014-05-09 20:25 UTC (permalink / raw)
  To: bhelgaas
  Cc: linux-kernel, rdunlap, Liviu.Dudau, joro, gregkh, dwmw2,
	linux-pci, arnd, linux-doc

On Fri, 2014-05-09 at 14:19 -0600, Bjorn Helgaas wrote:
> On Fri, May 9, 2014 at 1:52 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> > On Friday 09 May 2014 09:32:26 Bjorn Helgaas wrote:
> >> > There are people who care deeply about the performance of IOMMU API
> >> > map/unmap. It isn't used *just* for virtual machines any more. See
> >> > drivers/infiniband/hw/usnic/usnic_uiom.c for example.
> >>
> >> Of course we should care about IOMMU API performance.  We should also
> >> care about interface consistency, and it seems there's a tradeoff in
> >> this case.  I said "relatively" because I expect map/unmap to be less
> >> frequent than read/write operations that use the mapping.  I don't
> >> know anything about infiniband, so maybe that assumption is false
> >> there.
> >
> > In most drivers using the streaming DMA API, every mapping is used
> > exactly once. Think of network or block drivers: they rarely send
> > the same data twice to the device, and it usually comes from or
> > goes to some user space buffer.
> 
> Oh, good point.  I don't work that high up in the stack, so thanks for
> reminding me of that.

To round this out, for most devices we have two types of mappings: the
mailbox ones, which designate regions of communication memory between
the kernel and the device which are usually permanent mappings, and the
transmission mappings: every bit of data we send to the device is
mapped, sent/received and then unmapped.  The setup and teardown costs
factor into the throughput.  Some high iops devices (like SSD or high
speed net) are peculiarly sensitive to this.

James




Thread overview: 24+ messages
2014-05-06 22:48 [PATCH v2 0/5] Clean up DMA API & IOMMU dma_addr_t usage Bjorn Helgaas
2014-05-06 22:48 ` [PATCH v2 1/5] DMA-API: Clarify physical/bus address distinction Bjorn Helgaas
2014-05-07  7:37   ` Arnd Bergmann
2014-05-07 18:43     ` Bjorn Helgaas
2014-05-08  9:24   ` Greg Kroah-Hartman
2014-05-06 22:48 ` [PATCH v2 2/5] DMA-API: Change dma_declare_coherent_memory() CPU address to phys_addr_t Bjorn Helgaas
2014-05-07  7:38   ` Arnd Bergmann
2014-05-06 22:48 ` [PATCH v2 3/5] sh/PCI: Pass GAPSPCI_DMA_BASE CPU address to dma_declare_coherent_memory() Bjorn Helgaas
2014-05-07  7:55   ` Arnd Bergmann
2014-05-07  8:15     ` Arnd Bergmann
2014-05-07 23:18       ` Bjorn Helgaas
2014-05-08 11:36         ` Arnd Bergmann
2014-05-06 22:48 ` [PATCH v2 4/5] iommu: Use dma_addr_t for IOVA arguments Bjorn Helgaas
2014-05-07  7:58   ` Arnd Bergmann
2014-05-08  0:18     ` Bjorn Helgaas
2014-05-08 10:44       ` Arnd Bergmann
2014-05-08 20:30         ` Bjorn Helgaas
2014-05-09  9:58           ` David Woodhouse
2014-05-09 15:32             ` Bjorn Helgaas
2014-05-09 19:52               ` Arnd Bergmann
2014-05-09 20:19                 ` Bjorn Helgaas
2014-05-09 20:25                   ` James Bottomley
2014-05-06 22:48 ` [PATCH v2 5/5] iommu/exynos: Remove unnecessary "&" from function pointers Bjorn Helgaas
2014-05-07  7:59   ` Arnd Bergmann
