* [PATCH 0/1] Optimise IOVA allocations for PCI devices
@ 2017-09-18 10:56 ` Tomasz Nowicki
  0 siblings, 0 replies; 14+ messages in thread
From: Tomasz Nowicki @ 2017-09-18 10:56 UTC (permalink / raw)
  To: joro, robin.murphy, will.deacon
  Cc: lorenzo.pieralisi, Jayachandran.Nair, Ganapatrao.Kulkarni,
	ard.biesheuvel, linux-kernel, iommu, linux-arm-kernel,
	Tomasz Nowicki

Here is the test setup on which I started my performance measurements.

 ------------  PCIe  -------------   TX   -------------  PCIe  -----
| ThunderX2  |------| Intel XL710 | ---> | Intel XL710 |------| X86 |
| (128 cpus) |      |   40GbE     |      |    40GbE    |       -----
 ------------        -------------        -------------

As the reference, let's take a v4.13 host with SMMUv3 off and a single-threaded
iperf pinned (via taskset) to one CPU. The performance results I got:

SMMU off -> 100%
SMMU on -> 0.02%

I followed the DMA mapping path down and found that the 32-bit IOVA space was
full, so the kernel was flushing the rcaches for all CPUs in (1). With 128 CPUs,
this kills the performance. Furthermore, in my case the rcaches mostly contained
PFNs above the 32-bit boundary, so the second round of IOVA allocation failed
as well. As a consequence, the IOVA had to be allocated outside of the 32-bit
range (2) from scratch, since all rcaches had already been flushed in (1).

    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
(1)-->  iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);

    if (!iova)
(2)-->  iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
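
For context, the retry path inside alloc_iova_fast() (drivers/iommu/iova.c, as
of v4.13, abridged here) is where that per-CPU flush happens; the
for_each_online_cpu() loop below is what becomes painful with 128 CPUs:

retry:
	new_iova = alloc_iova(iovad, size, limit_pfn, true);
	if (!new_iova) {
		unsigned int cpu;

		/* Only one flush-and-retry pass is made. */
		if (flushed_rcache)
			return 0;

		/* Try replenishing IOVAs by flushing rcache. */
		flushed_rcache = true;
		for_each_online_cpu(cpu)
			free_cpu_cached_iovas(cpu, iovad);
		goto retry;
	}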

My fix simply introduces a parameter for alloc_iova_fast() to decide whether
the rcache flush has to be done or not. All users follow the scenario above,
so they should leave the flush as the last resort, avoiding the time-costly
iteration over all CPUs; the resulting call pattern is sketched below.
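
Taking the dma-iommu path as the example (the full change is in the patch that
follows), the call sites end up looking like:

	/* Try to get PCI devices a SAC address; do not flush rcaches yet */
	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
		iova = alloc_iova_fast(iovad, iova_len,
				       DMA_BIT_MASK(32) >> shift, false);

	/* Last-chance attempt: allow the rcache flush this time */
	if (!iova)
		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
				       true);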

This brings my iperf performance back to 100% with the SMMU on.

My one misgiving about this solution is that machines with relatively small
numbers of CPUs may get DAC addresses more frequently for PCI devices. Please
let me know your thoughts.

Tomasz Nowicki (1):
  iommu/iova: Make rcache flush optional on IOVA allocation failure

 drivers/iommu/amd_iommu.c   | 5 +++--
 drivers/iommu/dma-iommu.c   | 6 ++++--
 drivers/iommu/intel-iommu.c | 5 +++--
 drivers/iommu/iova.c        | 7 +++----
 include/linux/iova.h        | 5 +++--
 5 files changed, 16 insertions(+), 12 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure
  2017-09-18 10:56 ` Tomasz Nowicki
@ 2017-09-18 10:56   ` Tomasz Nowicki
  -1 siblings, 0 replies; 14+ messages in thread
From: Tomasz Nowicki @ 2017-09-18 10:56 UTC (permalink / raw)
  To: joro, robin.murphy, will.deacon
  Cc: lorenzo.pieralisi, Jayachandran.Nair, Ganapatrao.Kulkarni,
	ard.biesheuvel, linux-kernel, iommu, linux-arm-kernel,
	Tomasz Nowicki, Tomasz Nowicki

Since IOVA allocation failure is not unusual case we need to flush
CPUs' rcache in hope we will succeed in next round.

However, it is useful to decide whether we need rcache flush step because
of two reasons:
- Not scalability. On large system with ~100 CPUs iterating and flushing
  rcache for each CPU becomes serious bottleneck so we may want to deffer it.
- free_cpu_cached_iovas() does not care about max PFN we are interested in.
  Thus we may flush our rcaches and still get no new IOVA like in the
  commonly used scenario:

    if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
        iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);

    if (!iova)
        iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);

   1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get
      PCI devices a SAC address
   2. alloc_iova() fails due to full 32-bit space
   3. rcaches contain PFNs out of 32-bit space so free_cpu_cached_iovas()
      throws entries away for nothing and alloc_iova() fails again
   4. Next alloc_iova_fast() call cannot take advantage of rcache since we
      have just defeated caches. In this case we pick the slowest option
      to proceed.

This patch reworks flushed_rcache local flag to be additional function
argument instead and control rcache flush step. Also, it updates all users
to do the flush as the last chance.

Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
---
 drivers/iommu/amd_iommu.c   | 5 +++--
 drivers/iommu/dma-iommu.c   | 6 ++++--
 drivers/iommu/intel-iommu.c | 5 +++--
 drivers/iommu/iova.c        | 7 +++----
 include/linux/iova.h        | 5 +++--
 5 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 8d2ec60..ce68986 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1604,10 +1604,11 @@ static unsigned long dma_ops_alloc_iova(struct device *dev,
 
 	if (dma_mask > DMA_BIT_MASK(32))
 		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
-				      IOVA_PFN(DMA_BIT_MASK(32)));
+				      IOVA_PFN(DMA_BIT_MASK(32)), false);
 
 	if (!pfn)
-		pfn = alloc_iova_fast(&dma_dom->iovad, pages, IOVA_PFN(dma_mask));
+		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
+				      IOVA_PFN(dma_mask), true);
 
 	return (pfn << PAGE_SHIFT);
 }
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 191be9c..25914d3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -370,10 +370,12 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 
 	/* Try to get PCI devices a SAC address */
 	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
-		iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
+		iova = alloc_iova_fast(iovad, iova_len,
+				       DMA_BIT_MASK(32) >> shift, false);
 
 	if (!iova)
-		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
+		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
+				       true);
 
 	return (dma_addr_t)iova << shift;
 }
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 05c0c3a..75c8320 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3460,11 +3460,12 @@ static unsigned long intel_alloc_iova(struct device *dev,
 		 * from higher range
 		 */
 		iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
-					   IOVA_PFN(DMA_BIT_MASK(32)));
+					   IOVA_PFN(DMA_BIT_MASK(32)), false);
 		if (iova_pfn)
 			return iova_pfn;
 	}
-	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages, IOVA_PFN(dma_mask));
+	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
+				   IOVA_PFN(dma_mask), true);
 	if (unlikely(!iova_pfn)) {
 		pr_err("Allocating %ld-page iova for %s failed",
 		       nrpages, dev_name(dev));
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index f88acad..1a18b14 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -358,9 +358,8 @@ EXPORT_SYMBOL_GPL(free_iova);
 */
 unsigned long
 alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
-		unsigned long limit_pfn)
+		unsigned long limit_pfn, bool flush_rcache)
 {
-	bool flushed_rcache = false;
 	unsigned long iova_pfn;
 	struct iova *new_iova;
 
@@ -373,11 +372,11 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 	if (!new_iova) {
 		unsigned int cpu;
 
-		if (flushed_rcache)
+		if (!flush_rcache)
 			return 0;
 
 		/* Try replenishing IOVAs by flushing rcache. */
-		flushed_rcache = true;
+		flush_rcache = false;
 		for_each_online_cpu(cpu)
 			free_cpu_cached_iovas(cpu, iovad);
 		goto retry;
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 58c2a36..8fdcb66 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -97,7 +97,7 @@ struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
 void free_iova_fast(struct iova_domain *iovad, unsigned long pfn,
 		    unsigned long size);
 unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
-			      unsigned long limit_pfn);
+			      unsigned long limit_pfn, bool flush_rcache);
 struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 	unsigned long pfn_hi);
 void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
@@ -151,7 +151,8 @@ static inline void free_iova_fast(struct iova_domain *iovad,
 
 static inline unsigned long alloc_iova_fast(struct iova_domain *iovad,
 					    unsigned long size,
-					    unsigned long limit_pfn)
+					    unsigned long limit_pfn,
+					    bool flush_rcache)
 {
 	return 0;
 }
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure
  2017-09-18 10:56   ` Tomasz Nowicki
@ 2017-09-18 16:02     ` Robin Murphy
  -1 siblings, 0 replies; 14+ messages in thread
From: Robin Murphy @ 2017-09-18 16:02 UTC (permalink / raw)
  To: Tomasz Nowicki, joro, will.deacon
  Cc: lorenzo.pieralisi, Jayachandran.Nair, Ganapatrao.Kulkarni,
	ard.biesheuvel, linux-kernel, iommu, linux-arm-kernel,
	Nate Watterson

Hi Tomasz,

On 18/09/17 11:56, Tomasz Nowicki wrote:
> Since IOVA allocation failure is not unusual case we need to flush
> CPUs' rcache in hope we will succeed in next round.
> 
> However, it is useful to decide whether we need rcache flush step because
> of two reasons:
> - Not scalability. On large system with ~100 CPUs iterating and flushing
>   rcache for each CPU becomes serious bottleneck so we may want to deffer it.
> - free_cpu_cached_iovas() does not care about max PFN we are interested in.
>   Thus we may flush our rcaches and still get no new IOVA like in the
>   commonly used scenario:
> 
>     if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
>         iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
> 
>     if (!iova)
>         iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
> 
>    1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get
>       PCI devices a SAC address
>    2. alloc_iova() fails due to full 32-bit space
>    3. rcaches contain PFNs out of 32-bit space so free_cpu_cached_iovas()
>       throws entries away for nothing and alloc_iova() fails again
>    4. Next alloc_iova_fast() call cannot take advantage of rcache since we
>       have just defeated caches. In this case we pick the slowest option
>       to proceed.
> 
> This patch reworks flushed_rcache local flag to be additional function
> argument instead and control rcache flush step. Also, it updates all users
> to do the flush as the last chance.

Looks like you've run into the same thing Nate found[1] - I came up with
almost the exact same patch, only with separate alloc_iova_fast() and
alloc_iova_fast_noretry() wrapper functions, but on reflection, just
exposing the bool to callers is probably simpler. One nit, can you
document it in the kerneldoc comment too? With that:

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

Thanks,
Robin.

[1]:https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg19758.html

> 
> Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
> ---
>  drivers/iommu/amd_iommu.c   | 5 +++--
>  drivers/iommu/dma-iommu.c   | 6 ++++--
>  drivers/iommu/intel-iommu.c | 5 +++--
>  drivers/iommu/iova.c        | 7 +++----
>  include/linux/iova.h        | 5 +++--
>  5 files changed, 16 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 8d2ec60..ce68986 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -1604,10 +1604,11 @@ static unsigned long dma_ops_alloc_iova(struct device *dev,
>  
>  	if (dma_mask > DMA_BIT_MASK(32))
>  		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
> -				      IOVA_PFN(DMA_BIT_MASK(32)));
> +				      IOVA_PFN(DMA_BIT_MASK(32)), false);
>  
>  	if (!pfn)
> -		pfn = alloc_iova_fast(&dma_dom->iovad, pages, IOVA_PFN(dma_mask));
> +		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
> +				      IOVA_PFN(dma_mask), true);
>  
>  	return (pfn << PAGE_SHIFT);
>  }
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 191be9c..25914d3 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -370,10 +370,12 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>  
>  	/* Try to get PCI devices a SAC address */
>  	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
> -		iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
> +		iova = alloc_iova_fast(iovad, iova_len,
> +				       DMA_BIT_MASK(32) >> shift, false);
>  
>  	if (!iova)
> -		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
> +		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
> +				       true);
>  
>  	return (dma_addr_t)iova << shift;
>  }
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 05c0c3a..75c8320 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3460,11 +3460,12 @@ static unsigned long intel_alloc_iova(struct device *dev,
>  		 * from higher range
>  		 */
>  		iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
> -					   IOVA_PFN(DMA_BIT_MASK(32)));
> +					   IOVA_PFN(DMA_BIT_MASK(32)), false);
>  		if (iova_pfn)
>  			return iova_pfn;
>  	}
> -	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages, IOVA_PFN(dma_mask));
> +	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
> +				   IOVA_PFN(dma_mask), true);
>  	if (unlikely(!iova_pfn)) {
>  		pr_err("Allocating %ld-page iova for %s failed",
>  		       nrpages, dev_name(dev));
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index f88acad..1a18b14 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -358,9 +358,8 @@ EXPORT_SYMBOL_GPL(free_iova);
>  */
>  unsigned long
>  alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
> -		unsigned long limit_pfn)
> +		unsigned long limit_pfn, bool flush_rcache)
>  {
> -	bool flushed_rcache = false;
>  	unsigned long iova_pfn;
>  	struct iova *new_iova;
>  
> @@ -373,11 +372,11 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>  	if (!new_iova) {
>  		unsigned int cpu;
>  
> -		if (flushed_rcache)
> +		if (!flush_rcache)
>  			return 0;
>  
>  		/* Try replenishing IOVAs by flushing rcache. */
> -		flushed_rcache = true;
> +		flush_rcache = false;
>  		for_each_online_cpu(cpu)
>  			free_cpu_cached_iovas(cpu, iovad);
>  		goto retry;
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index 58c2a36..8fdcb66 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -97,7 +97,7 @@ struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
>  void free_iova_fast(struct iova_domain *iovad, unsigned long pfn,
>  		    unsigned long size);
>  unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
> -			      unsigned long limit_pfn);
> +			      unsigned long limit_pfn, bool flush_rcache);
>  struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
>  	unsigned long pfn_hi);
>  void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
> @@ -151,7 +151,8 @@ static inline void free_iova_fast(struct iova_domain *iovad,
>  
>  static inline unsigned long alloc_iova_fast(struct iova_domain *iovad,
>  					    unsigned long size,
> -					    unsigned long limit_pfn)
> +					    unsigned long limit_pfn,
> +					    bool flush_rcache)
>  {
>  	return 0;
>  }
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure
@ 2017-09-19  2:57       ` Nate Watterson
  0 siblings, 0 replies; 14+ messages in thread
From: Nate Watterson @ 2017-09-19  2:57 UTC (permalink / raw)
  To: Robin Murphy, Tomasz Nowicki, joro, will.deacon
  Cc: lorenzo.pieralisi, Jayachandran.Nair, Ganapatrao.Kulkarni,
	ard.biesheuvel, linux-kernel, iommu, linux-arm-kernel

Hi Tomasz,

On 9/18/2017 12:02 PM, Robin Murphy wrote:
> Hi Tomasz,
> 
> On 18/09/17 11:56, Tomasz Nowicki wrote:
>> Since IOVA allocation failure is not unusual case we need to flush
>> CPUs' rcache in hope we will succeed in next round.
>>
>> However, it is useful to decide whether we need rcache flush step because
>> of two reasons:
>> - Not scalability. On large system with ~100 CPUs iterating and flushing
>>    rcache for each CPU becomes serious bottleneck so we may want to deffer it.
s/deffer/defer

>> - free_cpu_cached_iovas() does not care about max PFN we are interested in.
>>    Thus we may flush our rcaches and still get no new IOVA like in the
>>    commonly used scenario:
>>
>>      if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
>>          iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
>>
>>      if (!iova)
>>          iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
>>
>>     1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get
>>        PCI devices a SAC address
>>     2. alloc_iova() fails due to full 32-bit space
>>     3. rcaches contain PFNs out of 32-bit space so free_cpu_cached_iovas()
>>        throws entries away for nothing and alloc_iova() fails again
>>     4. Next alloc_iova_fast() call cannot take advantage of rcache since we
>>        have just defeated caches. In this case we pick the slowest option
>>        to proceed.
>>
>> This patch reworks flushed_rcache local flag to be additional function
>> argument instead and control rcache flush step. Also, it updates all users
>> to do the flush as the last chance.
> 
> Looks like you've run into the same thing Nate found[1] - I came up with
> almost the exact same patch, only with separate alloc_iova_fast() and
> alloc_iova_fast_noretry() wrapper functions, but on reflection, just
> exposing the bool to callers is probably simpler. One nit, can you
> document it in the kerneldoc comment too? With that:
> 
> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
> 
> Thanks,
> Robin.
> 
> [1]:https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg19758.html
This patch completely resolves the issue I reported in [1]!!
Tested-by: Nate Watterson <nwatters@codeaurora.org>

Thanks,
Nate
> 
>>
>> Signed-off-by: Tomasz Nowicki <Tomasz.Nowicki@caviumnetworks.com>
>> ---
>>   drivers/iommu/amd_iommu.c   | 5 +++--
>>   drivers/iommu/dma-iommu.c   | 6 ++++--
>>   drivers/iommu/intel-iommu.c | 5 +++--
>>   drivers/iommu/iova.c        | 7 +++----
>>   include/linux/iova.h        | 5 +++--
>>   5 files changed, 16 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index 8d2ec60..ce68986 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -1604,10 +1604,11 @@ static unsigned long dma_ops_alloc_iova(struct device *dev,
>>   
>>   	if (dma_mask > DMA_BIT_MASK(32))
>>   		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
>> -				      IOVA_PFN(DMA_BIT_MASK(32)));
>> +				      IOVA_PFN(DMA_BIT_MASK(32)), false);
>>   
>>   	if (!pfn)
>> -		pfn = alloc_iova_fast(&dma_dom->iovad, pages, IOVA_PFN(dma_mask));
>> +		pfn = alloc_iova_fast(&dma_dom->iovad, pages,
>> +				      IOVA_PFN(dma_mask), true);
>>   
>>   	return (pfn << PAGE_SHIFT);
>>   }
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 191be9c..25914d3 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -370,10 +370,12 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>>   
>>   	/* Try to get PCI devices a SAC address */
>>   	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
>> -		iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
>> +		iova = alloc_iova_fast(iovad, iova_len,
>> +				       DMA_BIT_MASK(32) >> shift, false);
>>   
>>   	if (!iova)
>> -		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
>> +		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
>> +				       true);
>>   
>>   	return (dma_addr_t)iova << shift;
>>   }
>> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
>> index 05c0c3a..75c8320 100644
>> --- a/drivers/iommu/intel-iommu.c
>> +++ b/drivers/iommu/intel-iommu.c
>> @@ -3460,11 +3460,12 @@ static unsigned long intel_alloc_iova(struct device *dev,
>>   		 * from higher range
>>   		 */
>>   		iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
>> -					   IOVA_PFN(DMA_BIT_MASK(32)));
>> +					   IOVA_PFN(DMA_BIT_MASK(32)), false);
>>   		if (iova_pfn)
>>   			return iova_pfn;
>>   	}
>> -	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages, IOVA_PFN(dma_mask));
>> +	iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
>> +				   IOVA_PFN(dma_mask), true);
>>   	if (unlikely(!iova_pfn)) {
>>   		pr_err("Allocating %ld-page iova for %s failed",
>>   		       nrpages, dev_name(dev));
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index f88acad..1a18b14 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -358,9 +358,8 @@ EXPORT_SYMBOL_GPL(free_iova);
>>   */
>>   unsigned long
>>   alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>> -		unsigned long limit_pfn)
>> +		unsigned long limit_pfn, bool flush_rcache)
>>   {
>> -	bool flushed_rcache = false;
>>   	unsigned long iova_pfn;
>>   	struct iova *new_iova;
>>   
>> @@ -373,11 +372,11 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>>   	if (!new_iova) {
>>   		unsigned int cpu;
>>   
>> -		if (flushed_rcache)
>> +		if (!flush_rcache)
>>   			return 0;
>>   
>>   		/* Try replenishing IOVAs by flushing rcache. */
>> -		flushed_rcache = true;
>> +		flush_rcache = false;
>>   		for_each_online_cpu(cpu)
>>   			free_cpu_cached_iovas(cpu, iovad);
>>   		goto retry;
>> diff --git a/include/linux/iova.h b/include/linux/iova.h
>> index 58c2a36..8fdcb66 100644
>> --- a/include/linux/iova.h
>> +++ b/include/linux/iova.h
>> @@ -97,7 +97,7 @@ struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
>>   void free_iova_fast(struct iova_domain *iovad, unsigned long pfn,
>>   		    unsigned long size);
>>   unsigned long alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>> -			      unsigned long limit_pfn);
>> +			      unsigned long limit_pfn, bool flush_rcache);
>>   struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
>>   	unsigned long pfn_hi);
>>   void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
>> @@ -151,7 +151,8 @@ static inline void free_iova_fast(struct iova_domain *iovad,
>>   
>>   static inline unsigned long alloc_iova_fast(struct iova_domain *iovad,
>>   					    unsigned long size,
>> -					    unsigned long limit_pfn)
>> +					    unsigned long limit_pfn,
>> +					    bool flush_rcache)
>>   {
>>   	return 0;
>>   }
>>
> 

-- 
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a Linux Foundation Collaborative Project.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure
  2017-09-18 16:02     ` Robin Murphy
@ 2017-09-19  8:03       ` Tomasz Nowicki
  -1 siblings, 0 replies; 14+ messages in thread
From: Tomasz Nowicki @ 2017-09-19  8:03 UTC (permalink / raw)
  To: Robin Murphy, Tomasz Nowicki, joro, will.deacon
  Cc: lorenzo.pieralisi, Jayachandran.Nair, Ganapatrao.Kulkarni,
	ard.biesheuvel, linux-kernel, iommu, linux-arm-kernel,
	Nate Watterson

Hi Robin,

On 18.09.2017 18:02, Robin Murphy wrote:
> Hi Tomasz,
> 
> On 18/09/17 11:56, Tomasz Nowicki wrote:
>> Since IOVA allocation failure is not unusual case we need to flush
>> CPUs' rcache in hope we will succeed in next round.
>>
>> However, it is useful to decide whether we need rcache flush step because
>> of two reasons:
>> - Not scalability. On large system with ~100 CPUs iterating and flushing
>>    rcache for each CPU becomes serious bottleneck so we may want to deffer it.
>> - free_cpu_cached_iovas() does not care about max PFN we are interested in.
>>    Thus we may flush our rcaches and still get no new IOVA like in the
>>    commonly used scenario:
>>
>>      if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
>>          iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> shift);
>>
>>      if (!iova)
>>          iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
>>
>>     1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to get
>>        PCI devices a SAC address
>>     2. alloc_iova() fails due to full 32-bit space
>>     3. rcaches contain PFNs out of 32-bit space so free_cpu_cached_iovas()
>>        throws entries away for nothing and alloc_iova() fails again
>>     4. Next alloc_iova_fast() call cannot take advantage of rcache since we
>>        have just defeated caches. In this case we pick the slowest option
>>        to proceed.
>>
>> This patch reworks flushed_rcache local flag to be additional function
>> argument instead and control rcache flush step. Also, it updates all users
>> to do the flush as the last chance.
> 
> Looks like you've run into the same thing Nate found[1] - I came up with
> almost the exact same patch, only with separate alloc_iova_fast() and
> alloc_iova_fast_noretry() wrapper functions, but on reflection, just
> exposing the bool to callers is probably simpler. One nit, can you
> document it in the kerneldoc comment too? With that:
> 
> Reviewed-by: Robin Murphy <robin.murphy@arm.com>

Thanks! I will add the missing comment.
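
Something along these lines, following the existing kerneldoc style in iova.c
(exact wording to be settled in the respin):

/**
 * alloc_iova_fast - allocates an iova from rcache
 * @iovad: - iova domain in question
 * @size: - size of page frames to allocate
 * @limit_pfn: - max limit address
 * @flush_rcache: - set to flush rcache on regular allocation failure
 * This function tries to satisfy an iova allocation from the rcache,
 * and falls back to regular allocation on failure. If regular allocation
 * fails too and the flush_rcache flag is set then the rcache will be flushed.
*/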

Tomasz

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure
@ 2017-09-19  8:10         ` Tomasz Nowicki
  0 siblings, 0 replies; 14+ messages in thread
From: Tomasz Nowicki @ 2017-09-19  8:10 UTC (permalink / raw)
  To: Nate Watterson, Robin Murphy, Tomasz Nowicki, joro, will.deacon
  Cc: lorenzo.pieralisi, Jayachandran.Nair, Ganapatrao.Kulkarni,
	ard.biesheuvel, linux-kernel, iommu, linux-arm-kernel

Hi Nate,

On 19.09.2017 04:57, Nate Watterson wrote:
> Hi Tomasz,
> 
> On 9/18/2017 12:02 PM, Robin Murphy wrote:
>> Hi Tomasz,
>>
>> On 18/09/17 11:56, Tomasz Nowicki wrote:
>>> Since IOVA allocation failure is not unusual case we need to flush
>>> CPUs' rcache in hope we will succeed in next round.
>>>
>>> However, it is useful to decide whether we need rcache flush step 
>>> because
>>> of two reasons:
>>> - Not scalability. On large system with ~100 CPUs iterating and flushing
>>>    rcache for each CPU becomes serious bottleneck so we may want to 
>>> deffer it.
> s/deffer/defer
> 
>>> - free_cpu_cached_iovas() does not care about max PFN we are 
>>> interested in.
>>>    Thus we may flush our rcaches and still get no new IOVA like in the
>>>    commonly used scenario:
>>>
>>>      if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
>>>          iova = alloc_iova_fast(iovad, iova_len, DMA_BIT_MASK(32) >> 
>>> shift);
>>>
>>>      if (!iova)
>>>          iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift);
>>>
>>>     1. First alloc_iova_fast() call is limited to DMA_BIT_MASK(32) to 
>>> get
>>>        PCI devices a SAC address
>>>     2. alloc_iova() fails due to full 32-bit space
>>>     3. rcaches contain PFNs out of 32-bit space so 
>>> free_cpu_cached_iovas()
>>>        throws entries away for nothing and alloc_iova() fails again
>>>     4. Next alloc_iova_fast() call cannot take advantage of rcache 
>>> since we
>>>        have just defeated caches. In this case we pick the slowest 
>>> option
>>>        to proceed.
>>>
>>> This patch reworks flushed_rcache local flag to be additional function
>>> argument instead and control rcache flush step. Also, it updates all 
>>> users
>>> to do the flush as the last chance.
>>
>> Looks like you've run into the same thing Nate found[1] - I came up with
>> almost the exact same patch, only with separate alloc_iova_fast() and
>> alloc_iova_fast_noretry() wrapper functions, but on reflection, just
>> exposing the bool to callers is probably simpler. One nit, can you
>> document it in the kerneldoc comment too? With that:
>>
>> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
>>
>> Thanks,
>> Robin.
>>
>> [1]:https://www.mail-archive.com/iommu@lists.linux-foundation.org/msg19758.html 
>>
> This patch completely resolves the issue I reported in [1]!!

I somehow missed your observations in [1] :/
Anyway, it's great that it fixes performance for you too.

> Tested-by: Nate Watterson <nwatters@codeaurora.org>

Thanks!
Tomasz

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2017-09-19  8:10 UTC | newest]

Thread overview: 14+ messages
2017-09-18 10:56 [PATCH 0/1] Optimise IOVA allocations for PCI devices Tomasz Nowicki
2017-09-18 10:56 ` Tomasz Nowicki
2017-09-18 10:56 ` [PATCH 1/1] iommu/iova: Make rcache flush optional on IOVA allocation failure Tomasz Nowicki
2017-09-18 10:56   ` Tomasz Nowicki
2017-09-18 16:02   ` Robin Murphy
2017-09-18 16:02     ` Robin Murphy
2017-09-19  2:57     ` Nate Watterson
2017-09-19  2:57       ` Nate Watterson
2017-09-19  2:57       ` Nate Watterson
2017-09-19  8:10       ` Tomasz Nowicki
2017-09-19  8:10         ` Tomasz Nowicki
2017-09-19  8:10         ` Tomasz Nowicki
2017-09-19  8:03     ` Tomasz Nowicki
2017-09-19  8:03       ` Tomasz Nowicki
