* [PATCH 0/2 v2] Introduce IOMMU-API TLB Flushing Interface
From: Joerg Roedel @ 2017-08-23 13:50 UTC (permalink / raw)
  To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA; +Cc: Zhen Lei, Will Deacon

Hi,

here is the second version of the patch-set to introduce an
explicit TLB-flushing interface to the IOMMU-API. The
interface has changed quite a lot compared to the first
version.

This version no longer changes the semantics of the
iommu_map() and iommu_unmap() functions. The reason is that
the semantic change doesn't make sense for iommu_map(), as
that path requires no TLB flush in almost all cases, and
changing the semantics of only the iommu_unmap() path would
introduce an asymmetry into the IOMMU-API that makes it
harder to use.

This series introduces the new function iommu_unmap_fast()
instead, which is allowed to return with dirty TLBs. The
three functions for TLB-flushing have not changed compared
to the first version of this patch-set.
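
For illustration, the intended caller-side pattern would
look roughly like this (just a sketch; the helper name is
made up and error handling is omitted):

	static void teardown_mappings(struct iommu_domain *domain,
				      unsigned long iova, size_t size)
	{
		/* Unmap without flushing the hardware TLBs ... */
		iommu_unmap_fast(domain, iova, size);

		/* ... queue the unmapped range for flushing ... */
		iommu_tlb_range_add(domain, iova, size);

		/* ... and do a single flush for everything queued. */
		iommu_tlb_sync(domain);
	}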

Since the iommu_map()/unmap() functions no longer change
their semantics, I've also dropped all patches that convert
the existing users.

Please review.

Thanks,

	Joerg

Joerg Roedel (2):
  iommu/amd: Rename a few flush functions
  iommu: Introduce Interface for IOMMU TLB Flushing

 drivers/iommu/amd_iommu.c | 16 +++++------
 drivers/iommu/iommu.c     | 32 ++++++++++++++++++---
 include/linux/iommu.h     | 72 ++++++++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 107 insertions(+), 13 deletions(-)

-- 
2.7.4


* [PATCH 1/2] iommu/amd: Rename a few flush functions
From: Joerg Roedel @ 2017-08-23 13:50 UTC (permalink / raw)
  To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: Joerg Roedel, Zhen Lei, Will Deacon

From: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>

Rename a few IOMMU cache-flush functions whose names start
with iommu_ so that they start with amd_iommu_ instead.
This prevents name collisions with generic IOMMU code later
on.

Signed-off-by: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>
---
 drivers/iommu/amd_iommu.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 354cbd6..5469457 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1165,7 +1165,7 @@ static int iommu_flush_dte(struct amd_iommu *iommu, u16 devid)
 	return iommu_queue_command(iommu, &cmd);
 }
 
-static void iommu_flush_dte_all(struct amd_iommu *iommu)
+static void amd_iommu_flush_dte_all(struct amd_iommu *iommu)
 {
 	u32 devid;
 
@@ -1179,7 +1179,7 @@ static void iommu_flush_dte_all(struct amd_iommu *iommu)
  * This function uses heavy locking and may disable irqs for some time. But
  * this is no issue because it is only called during resume.
  */
-static void iommu_flush_tlb_all(struct amd_iommu *iommu)
+static void amd_iommu_flush_tlb_all(struct amd_iommu *iommu)
 {
 	u32 dom_id;
 
@@ -1193,7 +1193,7 @@ static void iommu_flush_tlb_all(struct amd_iommu *iommu)
 	iommu_completion_wait(iommu);
 }
 
-static void iommu_flush_all(struct amd_iommu *iommu)
+static void amd_iommu_flush_all(struct amd_iommu *iommu)
 {
 	struct iommu_cmd cmd;
 
@@ -1212,7 +1212,7 @@ static void iommu_flush_irt(struct amd_iommu *iommu, u16 devid)
 	iommu_queue_command(iommu, &cmd);
 }
 
-static void iommu_flush_irt_all(struct amd_iommu *iommu)
+static void amd_iommu_flush_irt_all(struct amd_iommu *iommu)
 {
 	u32 devid;
 
@@ -1225,11 +1225,11 @@ static void iommu_flush_irt_all(struct amd_iommu *iommu)
 void iommu_flush_all_caches(struct amd_iommu *iommu)
 {
 	if (iommu_feature(iommu, FEATURE_IA)) {
-		iommu_flush_all(iommu);
+		amd_iommu_flush_all(iommu);
 	} else {
-		iommu_flush_dte_all(iommu);
-		iommu_flush_irt_all(iommu);
-		iommu_flush_tlb_all(iommu);
+		amd_iommu_flush_dte_all(iommu);
+		amd_iommu_flush_irt_all(iommu);
+		amd_iommu_flush_tlb_all(iommu);
 	}
 }
 
-- 
2.7.4


* [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Joerg Roedel @ 2017-08-23 13:50 UTC (permalink / raw)
  To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: Joerg Roedel, Zhen Lei, Will Deacon

From: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>

With the current IOMMU-API the hardware TLBs have to be
flushed in every iommu_ops->unmap() call-back.

When unmapping large amounts of address space, as happens
when a KVM domain with assigned devices is destroyed, this
causes thousands of unnecessary TLB flushes in the IOMMU
hardware, because the unmap call-back runs for every
unmapped physical page.

With the TLB Flush Interface and the new iommu_unmap_fast()
function introduced here the need to clean the hardware TLBs
is removed from the unmapping code-path. Users of
iommu_unmap_fast() have to explicitly call the TLB-Flush
functions to sync the page-table changes to the hardware.

Three functions for TLB-Flushes are introduced:

	* iommu_flush_tlb_all() - Flushes all TLB entries
	                          associated with that
				  domain. TLB entries are
				  flushed when this function
				  returns.

	* iommu_tlb_range_add() - This will add a given
				  range to the flush queue
				  for this domain.

	* iommu_tlb_sync() - Flushes all queued ranges from
			     the hardware TLBs. Returns when
			     the flush is finished.

The semantics of this interface are intentionally similar
to those of the iommu_gather_ops from the io-pgtable code.

Cc: Alex Williamson <alex.williamson-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Cc: Will Deacon <will.deacon-5wv7dgnIgG8@public.gmane.org>
Cc: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
Signed-off-by: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>
---
 drivers/iommu/iommu.c | 32 ++++++++++++++++++++---
 include/linux/iommu.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 99 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 3f6ea16..0f68342 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -527,6 +527,8 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
 
 	}
 
+	iommu_flush_tlb_all(domain);
+
 out:
 	iommu_put_resv_regions(dev, &mappings);
 
@@ -1556,13 +1558,16 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
 }
 EXPORT_SYMBOL_GPL(iommu_map);
 
-size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
+static size_t __iommu_unmap(struct iommu_domain *domain,
+			    unsigned long iova, size_t size,
+			    bool sync)
 {
+	const struct iommu_ops *ops = domain->ops;
 	size_t unmapped_page, unmapped = 0;
-	unsigned int min_pagesz;
 	unsigned long orig_iova = iova;
+	unsigned int min_pagesz;
 
-	if (unlikely(domain->ops->unmap == NULL ||
+	if (unlikely(ops->unmap == NULL ||
 		     domain->pgsize_bitmap == 0UL))
 		return -ENODEV;
 
@@ -1592,10 +1597,13 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 	while (unmapped < size) {
 		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
 
-		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
+		unmapped_page = ops->unmap(domain, iova, pgsize);
 		if (!unmapped_page)
 			break;
 
+		if (sync && ops->iotlb_range_add)
+			ops->iotlb_range_add(domain, iova, pgsize);
+
 		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
 			 iova, unmapped_page);
 
@@ -1603,11 +1611,27 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 		unmapped += unmapped_page;
 	}
 
+	if (sync && ops->iotlb_sync)
+		ops->iotlb_sync(domain);
+
 	trace_unmap(orig_iova, size, unmapped);
 	return unmapped;
 }
+
+size_t iommu_unmap(struct iommu_domain *domain,
+		   unsigned long iova, size_t size)
+{
+	return __iommu_unmap(domain, iova, size, true);
+}
 EXPORT_SYMBOL_GPL(iommu_unmap);
 
+size_t iommu_unmap_fast(struct iommu_domain *domain,
+			unsigned long iova, size_t size)
+{
+	return __iommu_unmap(domain, iova, size, false);
+}
+EXPORT_SYMBOL_GPL(iommu_unmap_fast);
+
 size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 			 struct scatterlist *sg, unsigned int nents, int prot)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2cb54ad..67fa954 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -167,6 +167,10 @@ struct iommu_resv_region {
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
  * @map_sg: map a scatter-gather list of physically contiguous memory chunks
  * to an iommu domain
+ * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
+ * @iotlb_range_add: Add a given iova range to the flush queue for this domain
+ * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty the
+ *              flush queue
  * @iova_to_phys: translate iova to physical address
  * @add_device: add device to iommu grouping
@@ -199,6 +203,10 @@ struct iommu_ops {
 		     size_t size);
 	size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
 			 struct scatterlist *sg, unsigned int nents, int prot);
+	void (*flush_iotlb_all)(struct iommu_domain *domain);
+	void (*iotlb_range_add)(struct iommu_domain *domain,
+				unsigned long iova, size_t size);
+	void (*iotlb_sync)(struct iommu_domain *domain);
 	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
 	int (*add_device)(struct device *dev);
 	void (*remove_device)(struct device *dev);
@@ -286,7 +294,9 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
 extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 		     phys_addr_t paddr, size_t size, int prot);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
-		       size_t size);
+			  size_t size);
+extern size_t iommu_unmap_fast(struct iommu_domain *domain,
+			       unsigned long iova, size_t size);
 extern size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 				struct scatterlist *sg,unsigned int nents,
 				int prot);
@@ -343,6 +353,25 @@ extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr)
 extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
 			      unsigned long iova, int flags);
 
+static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
+{
+	if (domain->ops->flush_iotlb_all)
+		domain->ops->flush_iotlb_all(domain);
+}
+
+static inline void iommu_tlb_range_add(struct iommu_domain *domain,
+				       unsigned long iova, size_t size)
+{
+	if (domain->ops->iotlb_range_add)
+		domain->ops->iotlb_range_add(domain, iova, size);
+}
+
+static inline void iommu_tlb_sync(struct iommu_domain *domain)
+{
+	if (domain->ops->iotlb_sync)
+		domain->ops->iotlb_sync(domain);
+}
+
 static inline size_t iommu_map_sg(struct iommu_domain *domain,
 				  unsigned long iova, struct scatterlist *sg,
 				  unsigned int nents, int prot)
@@ -350,6 +379,20 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
 	return domain->ops->map_sg(domain, iova, sg, nents, prot);
 }
 
+static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
+				       unsigned long iova,
+				       struct scatterlist *sg,
+				       unsigned int nents, int prot)
+{
+	size_t size = domain->ops->map_sg(domain, iova, sg, nents, prot);
+	if (size > 0) {
+		iommu_tlb_range_add(domain, iova, size);
+		iommu_tlb_sync(domain);
+	}
+
+	return size;
+}
+
 /* PCI device grouping function */
 extern struct iommu_group *pci_device_group(struct device *dev);
 /* Generic device grouping function */
@@ -436,6 +479,12 @@ static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova,
 	return -ENODEV;
 }
 
+static inline int iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova,
+				   int gfp_order)
+{
+	return -ENODEV;
+}
+
 static inline size_t iommu_map_sg(struct iommu_domain *domain,
 				  unsigned long iova, struct scatterlist *sg,
 				  unsigned int nents, int prot)
@@ -443,6 +492,27 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
 	return -ENODEV;
 }
 
+static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
+				       unsigned long iova,
+				       struct scatterlist *sg,
+				       unsigned int nents, int prot)
+{
+	return -ENODEV;
+}
+
+static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
+{
+}
+
+static inline void iommu_tlb_range_add(struct iommu_domain *domain,
+				       unsigned long iova, size_t size)
+{
+}
+
+static inline void iommu_tlb_sync(struct iommu_domain *domain)
+{
+}
+
 static inline int iommu_domain_window_enable(struct iommu_domain *domain,
 					     u32 wnd_nr, phys_addr_t paddr,
 					     u64 size, int prot)
-- 
2.7.4


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Leizhen (ThunderTown) @ 2017-08-29  2:53 UTC (permalink / raw)
  To: Joerg Roedel, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: Joerg Roedel, Will Deacon



On 2017/8/23 21:50, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>
> 
> With the current IOMMU-API the hardware TLBs have to be
> flushed in every iommu_ops->unmap() call-back.
> 
> When unmapping large amounts of address space, as happens
> when a KVM domain with assigned devices is destroyed, this
> causes thousands of unnecessary TLB flushes in the IOMMU
> hardware, because the unmap call-back runs for every
> unmapped physical page.
> 
> With the TLB Flush Interface and the new iommu_unmap_fast()
> function introduced here the need to clean the hardware TLBs
> is removed from the unmapping code-path. Users of
> iommu_unmap_fast() have to explicitly call the TLB-Flush
> functions to sync the page-table changes to the hardware.
> 
> Three functions for TLB-Flushes are introduced:
> 
> 	* iommu_flush_tlb_all() - Flushes all TLB entries
> 	                          associated with that
> 				  domain. TLB entries are
> 				  flushed when this function
> 				  returns.
> 
> 	* iommu_tlb_range_add() - This will add a given
> 				  range to the flush queue
> 				  for this domain.
> 
> 	* iommu_tlb_sync() - Flushes all queued ranges from
> 			     the hardware TLBs. Returns when
> 			     the flush is finished.
> 
> The semantics of this interface are intentionally similar
> to those of the iommu_gather_ops from the io-pgtable code.
> 
> Cc: Alex Williamson <alex.williamson-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Will Deacon <will.deacon-5wv7dgnIgG8@public.gmane.org>
> Cc: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> Signed-off-by: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>
> ---
>  drivers/iommu/iommu.c | 32 ++++++++++++++++++++---
>  include/linux/iommu.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 99 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 3f6ea16..0f68342 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -527,6 +527,8 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
>  
>  	}
>  
> +	iommu_flush_tlb_all(domain);
> +
>  out:
>  	iommu_put_resv_regions(dev, &mappings);
>  
> @@ -1556,13 +1558,16 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
>  }
>  EXPORT_SYMBOL_GPL(iommu_map);
>  
> -size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
> +static size_t __iommu_unmap(struct iommu_domain *domain,
> +			    unsigned long iova, size_t size,
> +			    bool sync)
>  {
> +	const struct iommu_ops *ops = domain->ops;
>  	size_t unmapped_page, unmapped = 0;
> -	unsigned int min_pagesz;
>  	unsigned long orig_iova = iova;
> +	unsigned int min_pagesz;
>  
> -	if (unlikely(domain->ops->unmap == NULL ||
> +	if (unlikely(ops->unmap == NULL ||
>  		     domain->pgsize_bitmap == 0UL))
>  		return -ENODEV;
>  
> @@ -1592,10 +1597,13 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>  	while (unmapped < size) {
>  		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
>  
> -		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
> +		unmapped_page = ops->unmap(domain, iova, pgsize);
>  		if (!unmapped_page)
>  			break;
>  
> +		if (sync && ops->iotlb_range_add)
> +			ops->iotlb_range_add(domain, iova, pgsize);
> +
>  		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
>  			 iova, unmapped_page);
>  
> @@ -1603,11 +1611,27 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>  		unmapped += unmapped_page;
>  	}
>  
> +	if (sync && ops->iotlb_sync)
> +		ops->iotlb_sync(domain);
> +
>  	trace_unmap(orig_iova, size, unmapped);
>  	return unmapped;
>  }
> +
> +size_t iommu_unmap(struct iommu_domain *domain,
> +		   unsigned long iova, size_t size)
> +{
> +	return __iommu_unmap(domain, iova, size, true);
> +}
>  EXPORT_SYMBOL_GPL(iommu_unmap);
>  
> +size_t iommu_unmap_fast(struct iommu_domain *domain,
> +			unsigned long iova, size_t size)
> +{
Do we need to add a check "if (!domain->ops->iotlb_sync)"? If the three
newly added hooks are not registered, we should fall back to iommu_unmap().

> +	return __iommu_unmap(domain, iova, size, false);
> +}
> +EXPORT_SYMBOL_GPL(iommu_unmap_fast);
> +
>  size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>  			 struct scatterlist *sg, unsigned int nents, int prot)
>  {
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 2cb54ad..67fa954 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -167,6 +167,10 @@ struct iommu_resv_region {
>   * @map: map a physically contiguous memory region to an iommu domain
>   * @unmap: unmap a physically contiguous memory region from an iommu domain
>   * @map_sg: map a scatter-gather list of physically contiguous memory chunks
>   * to an iommu domain
> + * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
> + * @iotlb_range_add: Add a given iova range to the flush queue for this domain
> + * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty the
> + *              flush queue
>   * @iova_to_phys: translate iova to physical address
>   * @add_device: add device to iommu grouping
> @@ -199,6 +203,10 @@ struct iommu_ops {
>  		     size_t size);
>  	size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
>  			 struct scatterlist *sg, unsigned int nents, int prot);
> +	void (*flush_iotlb_all)(struct iommu_domain *domain);
> +	void (*iotlb_range_add)(struct iommu_domain *domain,
> +				unsigned long iova, size_t size);
> +	void (*iotlb_sync)(struct iommu_domain *domain);
I think we'd better make sure that either all three of these hooks are
registered or none of them are, in __iommu_domain_alloc() or some other
suitable place.
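
Just as a rough sketch of what I mean (where exactly this check lives,
and how to react to a mismatch, is of course up for discussion):

	/* e.g. in __iommu_domain_alloc(): the three new TLB
	 * hooks should be all-or-nothing */
	bool any = ops->flush_iotlb_all || ops->iotlb_range_add ||
		   ops->iotlb_sync;
	bool all = ops->flush_iotlb_all && ops->iotlb_range_add &&
		   ops->iotlb_sync;

	if (WARN_ON(any && !all))
		return NULL;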

>  	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
>  	int (*add_device)(struct device *dev);
>  	void (*remove_device)(struct device *dev);
> @@ -286,7 +294,9 @@ extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
>  extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
>  		     phys_addr_t paddr, size_t size, int prot);
>  extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
> -		       size_t size);
> +			  size_t size);
> +extern size_t iommu_unmap_fast(struct iommu_domain *domain,
> +			       unsigned long iova, size_t size);
>  extern size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>  				struct scatterlist *sg,unsigned int nents,
>  				int prot);
> @@ -343,6 +353,25 @@ extern void iommu_domain_window_disable(struct iommu_domain *domain, u32 wnd_nr)
>  extern int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
>  			      unsigned long iova, int flags);
>  
> +static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
> +{
> +	if (domain->ops->flush_iotlb_all)
> +		domain->ops->flush_iotlb_all(domain);
> +}
> +
> +static inline void iommu_tlb_range_add(struct iommu_domain *domain,
> +				       unsigned long iova, size_t size)
> +{
> +	if (domain->ops->iotlb_range_add)
> +		domain->ops->iotlb_range_add(domain, iova, size);
> +}
> +
> +static inline void iommu_tlb_sync(struct iommu_domain *domain)
> +{
> +	if (domain->ops->iotlb_sync)
> +		domain->ops->iotlb_sync(domain);
> +}
> +
>  static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  				  unsigned long iova, struct scatterlist *sg,
>  				  unsigned int nents, int prot)
> @@ -350,6 +379,20 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  	return domain->ops->map_sg(domain, iova, sg, nents, prot);
>  }
>  
> +static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
> +				       unsigned long iova,
> +				       struct scatterlist *sg,
> +				       unsigned int nents, int prot)
> +{
> +	size_t size = domain->ops->map_sg(domain, iova, sg, nents, prot);
> +	if (size > 0) {
> +		iommu_tlb_range_add(domain, iova, size);
> +		iommu_tlb_sync(domain);
> +	}
> +
> +	return size;
> +}
> +
>  /* PCI device grouping function */
>  extern struct iommu_group *pci_device_group(struct device *dev);
>  /* Generic device grouping function */
> @@ -436,6 +479,12 @@ static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova,
>  	return -ENODEV;
>  }
>  
> +static inline int iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova,
> +				   int gfp_order)
> +{
> +	return -ENODEV;
> +}
> +
>  static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  				  unsigned long iova, struct scatterlist *sg,
>  				  unsigned int nents, int prot)
> @@ -443,6 +492,27 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  	return -ENODEV;
>  }
>  
> +static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
> +				       unsigned long iova,
> +				       struct scatterlist *sg,
> +				       unsigned int nents, int prot)
> +{
> +	return -ENODEV;
> +}
> +
> +static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
> +{
> +}
> +
> +static inline void iommu_tlb_range_add(struct iommu_domain *domain,
> +				       unsigned long iova, size_t size)
> +{
> +}
> +
> +static inline void iommu_tlb_sync(struct iommu_domain *domain)
> +{
> +}
> +
>  static inline int iommu_domain_window_enable(struct iommu_domain *domain,
>  					     u32 wnd_nr, phys_addr_t paddr,
>  					     u64 size, int prot)
> 

-- 
Thanks!
Best Regards


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Robin Murphy @ 2017-08-29 11:19 UTC (permalink / raw)
  To: Leizhen (ThunderTown),
	Joerg Roedel, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: Joerg Roedel, Will Deacon

On 29/08/17 03:53, Leizhen (ThunderTown) wrote:
[...]
>> -size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>> +static size_t __iommu_unmap(struct iommu_domain *domain,
>> +			    unsigned long iova, size_t size,
>> +			    bool sync)
>>  {
>> +	const struct iommu_ops *ops = domain->ops;
>>  	size_t unmapped_page, unmapped = 0;
>> -	unsigned int min_pagesz;
>>  	unsigned long orig_iova = iova;
>> +	unsigned int min_pagesz;
>>  
>> -	if (unlikely(domain->ops->unmap == NULL ||
>> +	if (unlikely(ops->unmap == NULL ||
>>  		     domain->pgsize_bitmap == 0UL))
>>  		return -ENODEV;
>>  
>> @@ -1592,10 +1597,13 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>>  	while (unmapped < size) {
>>  		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
>>  
>> -		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
>> +		unmapped_page = ops->unmap(domain, iova, pgsize);
>>  		if (!unmapped_page)
>>  			break;
>>  
>> +		if (sync && ops->iotlb_range_add)
>> +			ops->iotlb_range_add(domain, iova, pgsize);
>> +
>>  		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
>>  			 iova, unmapped_page);
>>  
>> @@ -1603,11 +1611,27 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>>  		unmapped += unmapped_page;
>>  	}
>>  
>> +	if (sync && ops->iotlb_sync)
>> +		ops->iotlb_sync(domain);
>> +
>>  	trace_unmap(orig_iova, size, unmapped);
>>  	return unmapped;
>>  }
>> +
>> +size_t iommu_unmap(struct iommu_domain *domain,
>> +		   unsigned long iova, size_t size)
>> +{
>> +	return __iommu_unmap(domain, iova, size, true);
>> +}
>>  EXPORT_SYMBOL_GPL(iommu_unmap);
>>  
>> +size_t iommu_unmap_fast(struct iommu_domain *domain,
>> +			unsigned long iova, size_t size)
>> +{
> Do we need to add a check "if (!domain->ops->iotlb_sync)"? If the three
> newly added hooks are not registered, we should fall back to iommu_unmap().

If those callbacks don't exist, then iommu_unmap() isn't going to be
able to call them either, so I don't see what difference that makes :/

>> +	return __iommu_unmap(domain, iova, size, false);
>> +}
>> +EXPORT_SYMBOL_GPL(iommu_unmap_fast);
>> +
>>  size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>>  			 struct scatterlist *sg, unsigned int nents, int prot)
>>  {
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index 2cb54ad..67fa954 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -167,6 +167,10 @@ struct iommu_resv_region {
>>   * @map: map a physically contiguous memory region to an iommu domain
>>   * @unmap: unmap a physically contiguous memory region from an iommu domain
>>   * @map_sg: map a scatter-gather list of physically contiguous memory chunks
>> + * @flush_tlb_all: Synchronously flush all hardware TLBs for this domain
>> + * @tlb_range_add: Add a given iova range to the flush queue for this domain
>> + * @tlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
>> + *            queue
>>   * to an iommu domain
>>   * @iova_to_phys: translate iova to physical address
>>   * @add_device: add device to iommu grouping
>> @@ -199,6 +203,10 @@ struct iommu_ops {
>>  		     size_t size);
>>  	size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
>>  			 struct scatterlist *sg, unsigned int nents, int prot);
>> +	void (*flush_iotlb_all)(struct iommu_domain *domain);
>> +	void (*iotlb_range_add)(struct iommu_domain *domain,
>> +				unsigned long iova, size_t size);
>> +	void (*iotlb_sync)(struct iommu_domain *domain);
> I think we'd better make sure that either all three of these hooks are
> registered or none of them are, in __iommu_domain_alloc() or some other
> suitable place.

I'd prefer for them to be individually optional rather than for drivers
to have to implement no-op callbacks - e.g. for SMMUv2, where issuing
TLBIs is relatively cheap but latency-sensitive, we're probably better
off not bothering with .iotlb_range_add (leaving the TLBIs implicit in
.unmap) and only implementing .iotlb_sync.
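
Roughly like this, as a sketch only (driver names made up, the rest of
the ops struct omitted):

	static void my_smmu_iotlb_sync(struct iommu_domain *domain)
	{
		/* wait for the TLBIs already issued by ->unmap
		 * to complete */
	}

	static struct iommu_ops my_smmu_ops = {
		/* ... */
		.unmap		= my_smmu_unmap, /* issues TLBIs itself */
		.iotlb_sync	= my_smmu_iotlb_sync,
		/* no .iotlb_range_add, and no no-op stub needed */
	};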

Robin.


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Robin Murphy @ 2017-08-29 11:23 UTC (permalink / raw)
  To: Joerg Roedel, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: Joerg Roedel, Zhen Lei, Will Deacon

On 23/08/17 14:50, Joerg Roedel wrote:
[...]
> @@ -350,6 +379,20 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  	return domain->ops->map_sg(domain, iova, sg, nents, prot);
>  }
>  
> +static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
> +				       unsigned long iova,
> +				       struct scatterlist *sg,
> +				       unsigned int nents, int prot)
> +{
> +	size_t size = domain->ops->map_sg(domain, iova, sg, nents, prot);
> +	if (size > 0) {
> +		iommu_tlb_range_add(domain, iova, size);
> +		iommu_tlb_sync(domain);
> +	}
> +
> +	return size;
> +}

Do we still need this, or has it just slipped through from v1?

> +
>  /* PCI device grouping function */
>  extern struct iommu_group *pci_device_group(struct device *dev);
>  /* Generic device grouping function */
> @@ -436,6 +479,12 @@ static inline int iommu_unmap(struct iommu_domain *domain, unsigned long iova,
>  	return -ENODEV;
>  }
>  
> +static inline int iommu_unmap_fast(struct iommu_domain *domain, unsigned long iova,
> +				   int gfp_order)
> +{
> +	return -ENODEV;
> +}
> +
>  static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  				  unsigned long iova, struct scatterlist *sg,
>  				  unsigned int nents, int prot)
> @@ -443,6 +492,27 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
>  	return -ENODEV;
>  }
>  
> +static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
> +				       unsigned long iova,
> +				       struct scatterlist *sg,
> +				       unsigned int nents, int prot)
> +{
> +	return -ENODEV;
> +}

Ditto here.

Robin.

> +
> +static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
> +{
> +}
> +
> +static inline void iommu_tlb_range_add(struct iommu_domain *domain,
> +				       unsigned long iova, size_t size)
> +{
> +}
> +
> +static inline void iommu_tlb_sync(struct iommu_domain *domain)
> +{
> +}
> +
>  static inline int iommu_domain_window_enable(struct iommu_domain *domain,
>  					     u32 wnd_nr, phys_addr_t paddr,
>  					     u64 size, int prot)
> 


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Leizhen (ThunderTown) @ 2017-08-29 12:04 UTC (permalink / raw)
  To: Robin Murphy, Joerg Roedel,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
  Cc: Joerg Roedel, Will Deacon



On 2017/8/29 19:19, Robin Murphy wrote:
> On 29/08/17 03:53, Leizhen (ThunderTown) wrote:
> [...]
>>> -size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>>> +static size_t __iommu_unmap(struct iommu_domain *domain,
>>> +			    unsigned long iova, size_t size,
>>> +			    bool sync)
>>>  {
>>> +	const struct iommu_ops *ops = domain->ops;
>>>  	size_t unmapped_page, unmapped = 0;
>>> -	unsigned int min_pagesz;
>>>  	unsigned long orig_iova = iova;
>>> +	unsigned int min_pagesz;
>>>  
>>> -	if (unlikely(domain->ops->unmap == NULL ||
>>> +	if (unlikely(ops->unmap == NULL ||
>>>  		     domain->pgsize_bitmap == 0UL))
>>>  		return -ENODEV;
>>>  
>>> @@ -1592,10 +1597,13 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>>>  	while (unmapped < size) {
>>>  		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
>>>  
>>> -		unmapped_page = domain->ops->unmap(domain, iova, pgsize);
>>> +		unmapped_page = ops->unmap(domain, iova, pgsize);
>>>  		if (!unmapped_page)
>>>  			break;
>>>  
>>> +		if (sync && ops->iotlb_range_add)
>>> +			ops->iotlb_range_add(domain, iova, pgsize);
>>> +
>>>  		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
>>>  			 iova, unmapped_page);
>>>  
>>> @@ -1603,11 +1611,27 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
>>>  		unmapped += unmapped_page;
>>>  	}
>>>  
>>> +	if (sync && ops->iotlb_sync)
>>> +		ops->iotlb_sync(domain);
>>> +
>>>  	trace_unmap(orig_iova, size, unmapped);
>>>  	return unmapped;
>>>  }
>>> +
>>> +size_t iommu_unmap(struct iommu_domain *domain,
>>> +		   unsigned long iova, size_t size)
>>> +{
>>> +	return __iommu_unmap(domain, iova, size, true);
>>> +}
>>>  EXPORT_SYMBOL_GPL(iommu_unmap);
>>>  
>>> +size_t iommu_unmap_fast(struct iommu_domain *domain,
>>> +			unsigned long iova, size_t size)
>>> +{
>> Do we need to add a check "if (!domain->ops->iotlb_sync)"? If the three
>> newly added hooks are not registered, we should fall back to iommu_unmap().
> 
> If those callbacks don't exist, then iommu_unmap() isn't going to be
> able to call them either, so I don't see what difference that makes :/
Yes, you're right, I see.

> 
>>> +	return __iommu_unmap(domain, iova, size, false);
>>> +}
>>> +EXPORT_SYMBOL_GPL(iommu_unmap_fast);
>>> +
>>>  size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>>>  			 struct scatterlist *sg, unsigned int nents, int prot)
>>>  {
>>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>>> index 2cb54ad..67fa954 100644
>>> --- a/include/linux/iommu.h
>>> +++ b/include/linux/iommu.h
>>> @@ -167,6 +167,10 @@ struct iommu_resv_region {
>>>   * @map: map a physically contiguous memory region to an iommu domain
>>>   * @unmap: unmap a physically contiguous memory region from an iommu domain
>>>   * @map_sg: map a scatter-gather list of physically contiguous memory chunks
>>>   * to an iommu domain
>>> + * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
>>> + * @iotlb_range_add: Add a given iova range to the flush queue for this domain
>>> + * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty the
>>> + *              flush queue
>>>   * @iova_to_phys: translate iova to physical address
>>>   * @add_device: add device to iommu grouping
>>> @@ -199,6 +203,10 @@ struct iommu_ops {
>>>  		     size_t size);
>>>  	size_t (*map_sg)(struct iommu_domain *domain, unsigned long iova,
>>>  			 struct scatterlist *sg, unsigned int nents, int prot);
>>> +	void (*flush_iotlb_all)(struct iommu_domain *domain);
>>> +	void (*iotlb_range_add)(struct iommu_domain *domain,
>>> +				unsigned long iova, size_t size);
>>> +	void (*iotlb_sync)(struct iommu_domain *domain);
>> I think we'd better make sure that either all three of these hooks are
>> registered or none of them are, in __iommu_domain_alloc() or some other
>> suitable place.
> 
> I'd prefer for them to be individually optional rather than for drivers
> to have to implement no-op callbacks - e.g. for SMMUv2, where issuing
> TLBIs is relatively cheap but latency-sensitive, we're probably better
> off not bothering with .iotlb_range_add (leaving the TLBIs implicit in
> .unmap) and only implementing .iotlb_sync.
OK, so the arch IOMMU driver is free to do so.

> 
> Robin.
> 
> .
> 

-- 
Thanks!
Best Regards


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Joerg Roedel @ 2017-08-29 12:12 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Will Deacon, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA, Zhen Lei

On Tue, Aug 29, 2017 at 12:23:36PM +0100, Robin Murphy wrote:
> On 23/08/17 14:50, Joerg Roedel wrote:
> [...]
> > @@ -350,6 +379,20 @@ static inline size_t iommu_map_sg(struct iommu_domain *domain,
> >  	return domain->ops->map_sg(domain, iova, sg, nents, prot);
> >  }
> >  
> > +static inline size_t iommu_map_sg_sync(struct iommu_domain *domain,
> > +				       unsigned long iova,
> > +				       struct scatterlist *sg,
> > +				       unsigned int nents, int prot)
> > +{
> > +	size_t size = domain->ops->map_sg(domain, iova, sg, nents, prot);
> > +	if (size > 0) {
> > +		iommu_tlb_range_add(domain, iova, size);
> > +		iommu_tlb_sync(domain);
> > +	}
> > +
> > +	return size;
> > +}
> 
> Do we still need this, or has it just slipped through from v1?

Ah, this slipped through, thanks for noticing. I will remove it.


Thanks,

	Joerg


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Will Deacon @ 2017-09-01 17:20 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Joerg Roedel, Zhen Lei,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Hi Joerg,

On Wed, Aug 23, 2017 at 03:50:04PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>
> 
> With the current IOMMU-API the hardware TLBs have to be
> flushed in every iommu_ops->unmap() call-back.
> 
> When unmapping large amounts of address space, as happens
> when a KVM domain with assigned devices is destroyed, this
> causes thousands of unnecessary TLB flushes in the IOMMU
> hardware, because the unmap call-back runs for every
> unmapped physical page.
> 
> With the TLB Flush Interface and the new iommu_unmap_fast()
> function introduced here the need to clean the hardware TLBs
> is removed from the unmapping code-path. Users of
> iommu_unmap_fast() have to explicitly call the TLB-Flush
> functions to sync the page-table changes to the hardware.
> 
> Three functions for TLB-Flushes are introduced:
> 
> 	* iommu_flush_tlb_all() - Flushes all TLB entries
> 	                          associated with that
> 				  domain. TLB entries are
> 				  flushed when this function
> 				  returns.
> 
> 	* iommu_tlb_range_add() - This will add a given
> 				  range to the flush queue
> 				  for this domain.
> 
> 	* iommu_tlb_sync() - Flushes all queued ranges from
> 			     the hardware TLBs. Returns when
> 			     the flush is finished.
> 
> The semantics of this interface are intentionally similar
> to those of the iommu_gather_ops from the io-pgtable code.
> 
> Cc: Alex Williamson <alex.williamson-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Will Deacon <will.deacon-5wv7dgnIgG8@public.gmane.org>
> Cc: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
> Signed-off-by: Joerg Roedel <jroedel-l3A5Bk7waGM@public.gmane.org>
> ---
>  drivers/iommu/iommu.c | 32 ++++++++++++++++++++---
>  include/linux/iommu.h | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 99 insertions(+), 5 deletions(-)

[...]

> +size_t iommu_unmap(struct iommu_domain *domain,
> +		   unsigned long iova, size_t size)
> +{
> +	return __iommu_unmap(domain, iova, size, true);
> +}
>  EXPORT_SYMBOL_GPL(iommu_unmap);
>  
> +size_t iommu_unmap_fast(struct iommu_domain *domain,
> +			unsigned long iova, size_t size)
> +{
> +	return __iommu_unmap(domain, iova, size, false);
> +}
> +EXPORT_SYMBOL_GPL(iommu_unmap_fast);

Really minor nit, but I think that iommu_unmap_nosync is a more descriptive
name (who wouldn't want to use the _fast version?!).

But anyway, this looks like a really welcome change and it addresses most
of my concerns on v1, so, modulo Robin's comments about map_sg:

Reviewed-by: Will Deacon <will.deacon-5wv7dgnIgG8@public.gmane.org>

Will


* Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing
From: Joerg Roedel @ 2017-09-01 21:45 UTC (permalink / raw)
  To: Will Deacon
  Cc: Joerg Roedel, Zhen Lei,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Hi Will,

On Fri, Sep 01, 2017 at 06:20:45PM +0100, Will Deacon wrote:
> On Wed, Aug 23, 2017 at 03:50:04PM +0200, Joerg Roedel wrote:

> > +EXPORT_SYMBOL_GPL(iommu_unmap_fast);
> 
> Really minor nit, but I think that iommu_unmap_nosync is a more descriptive
> name (who wouldn't want to use the _fast version?!).

Yeah, but I wanted the name to motivate people to use it, or at least
to think about why they are not using the _fast variant. Therefore I
decided on that name rather than the more descriptive _nosync version.

> But anyway, this looks like a really welcome change and it addresses most
> of my concerns on v1, so, modulo Robin's comments about map_sg:

Thanks, I removed the stale map_sg_sync stuff before applying.


Regards,

	Joerg

