iommu.lists.linux-foundation.org archive mirror
* [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
       [not found] <CGME20200504185042epcas5p11447ae722d33bd00c7d002a9d1b8d6c1@epcas5p1.samsung.com>
@ 2020-05-04 18:37 ` Ajay Kumar
  2020-05-07 13:37   ` Robin Murphy
  2020-05-13  8:33   ` Joerg Roedel
  0 siblings, 2 replies; 4+ messages in thread
From: Ajay Kumar @ 2020-05-04 18:37 UTC (permalink / raw)
  To: iommu, linux-mm; +Cc: Sathyam Panda, shaik.ameer, robin.murphy, Ajay Kumar

The current IOVA allocation code stores a cached copy of the
first allocated IOVA address node, and subsequent allocations
have no way to get past (higher than) the first allocated IOVA range.

This causes an issue when the dma_mask for the master device is changed.
Though the DMA window is increased, the allocation code, unaware of
the change, goes ahead and allocates IOVA addresses lower than the
first allocated IOVA address.

This patch adds a check for dma_mask changes in the IOVA allocation
function and resets the cached IOVA node to the anchor node every time
a dma_mask change is observed.

NOTE:
 This patch is needed to address the issue discussed in the thread below:
 https://www.spinics.net/lists/iommu/msg43586.html
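As an illustration only (not part of the patch, and not kernel code), the caching behaviour described above can be modelled with a toy allocator: the backwards walk always starts from the cached position, so widening the limit alone never moves a later allocation above the first, still-held range. The names (`toy_alloc`, `cached_pfn`) and the single-pfn arithmetic are simplifying assumptions.

```c
#include <assert.h>

/*
 * Toy model of the cached-node behaviour: every allocation walks
 * backwards from the cached position, so raising limit_pfn alone
 * cannot move an allocation above the first (still-held) range.
 */
static unsigned long cached_pfn = 1UL << 20;	/* models the anchor/cache */

static unsigned long toy_alloc(unsigned long size, unsigned long limit_pfn)
{
	/* start the backwards walk at the lower of cache and limit */
	unsigned long start = cached_pfn < limit_pfn ? cached_pfn : limit_pfn;

	cached_pfn = start - size;	/* the new node becomes the cache */
	return cached_pfn;
}
```

In this model a first low-limited allocation (e.g. a firmware buffer that is never freed) pins the cache low; a later request with a much larger limit_pfn still returns an address below it, which is the symptom this patch targets.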

Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
Signed-off-by: Sathyam Panda <sathya.panda@samsung.com>
---
 drivers/iommu/iova.c | 17 ++++++++++++++++-
 include/linux/iova.h |  1 +
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 41c605b0058f..0e99975036ae 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	iovad->granule = granule;
 	iovad->start_pfn = start_pfn;
 	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
+	iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
 	iovad->max32_alloc_size = iovad->dma_32bit_pfn;
 	iovad->flush_cb = NULL;
 	iovad->fq = NULL;
@@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
 static struct rb_node *
 __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
 {
-	if (limit_pfn <= iovad->dma_32bit_pfn)
+	if (limit_pfn <= iovad->dma_32bit_pfn) {
+		/* re-init cached node if DMA limit has changed */
+		if (limit_pfn != iovad->curr_limit_pfn) {
+			iovad->cached32_node = &iovad->anchor.node;
+			iovad->curr_limit_pfn = limit_pfn;
+		}
 		return iovad->cached32_node;
+	}
 
+	/* re-init cached node if DMA limit has changed */
+	if (limit_pfn != iovad->curr_limit_pfn) {
+		iovad->cached_node = &iovad->anchor.node;
+		iovad->curr_limit_pfn = limit_pfn;
+	}
 	return iovad->cached_node;
 }
 
@@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	if (size_aligned)
 		align_mask <<= fls_long(size - 1);
 
+	if (limit_pfn != iovad->curr_limit_pfn)
+		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
+
 	/* Walk the tree backwards */
 	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
 	if (limit_pfn <= iovad->dma_32bit_pfn &&
diff --git a/include/linux/iova.h b/include/linux/iova.h
index a0637abffee8..be2220c096ef 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -73,6 +73,7 @@ struct iova_domain {
 	unsigned long	granule;	/* pfn granularity for this domain */
 	unsigned long	start_pfn;	/* Lower limit for this domain */
 	unsigned long	dma_32bit_pfn;
+	unsigned long	curr_limit_pfn;	/* Current max limit for this domain */
 	unsigned long	max32_alloc_size; /* Size of last failed allocation */
 	struct iova_fq __percpu *fq;	/* Flush Queue */
 
-- 
2.17.1

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


* Re: [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
  2020-05-04 18:37 ` [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed Ajay Kumar
@ 2020-05-07 13:37   ` Robin Murphy
  2020-05-07 17:44     ` Ajay kumar
  2020-05-13  8:33   ` Joerg Roedel
  1 sibling, 1 reply; 4+ messages in thread
From: Robin Murphy @ 2020-05-07 13:37 UTC (permalink / raw)
  To: Ajay Kumar, iommu, linux-mm; +Cc: shaik.ameer, Sathyam Panda

On 2020-05-04 7:37 pm, Ajay Kumar wrote:
> The current IOVA allocation code stores a cached copy of the
> first allocated IOVA address node, and subsequent allocations
> have no way to get past (higher than) the first allocated IOVA range.

Strictly they do, after that first allocation gets freed, or if the 
first limit was <=32 bits and the subsequent limit >32 bits ;)

> This causes an issue when the dma_mask for the master device is changed.
> Though the DMA window is increased, the allocation code, unaware of
> the change, goes ahead and allocates IOVA addresses lower than the
> first allocated IOVA address.
> 
> This patch adds a check for dma_mask changes in the IOVA allocation
> function and resets the cached IOVA node to the anchor node every time
> a dma_mask change is observed.

This isn't the right approach, since limit_pfn is by design a transient 
per-allocation thing. Devices with different limits may well be 
allocating from the same IOVA domain concurrently, which is the whole 
reason for maintaining two cached nodes to serve the expected PCI case 
of mixing 32-bit and 64-bit limits. Trying to track a per-allocation 
property on a per-domain basis is just going to thrash and massively 
hurt such cases.
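To make the thrashing point above concrete, here is a hedged toy sketch (not kernel code) of the patch's reset-on-limit-change check under interleaved traffic: with one low-limited and one high-limited device allocating from the same domain, the cached node would be discarded on every single allocation. `count_resets` and the limit values are illustrative assumptions.

```c
#include <assert.h>

/* Toy model of the patch's "reset cache if limit_pfn changed" check. */
static unsigned long curr_limit_pfn;
static unsigned int resets;

static void get_cached_node(unsigned long limit_pfn)
{
	if (limit_pfn != curr_limit_pfn) {	/* the patch's new check */
		resets++;			/* cached node -> anchor */
		curr_limit_pfn = limit_pfn;
	}
}

/* Two devices with different limits alternate their allocations. */
static unsigned int count_resets(unsigned int allocs)
{
	unsigned long lim_small = 1UL << 20, lim_large = 1UL << 31;
	unsigned int i;

	curr_limit_pfn = 0;
	resets = 0;
	for (i = 0; i < allocs; i++)
		get_cached_node((i & 1) ? lim_large : lim_small);
	return resets;
}
```

In this model every allocation resets the cache, so the optimisation the cached nodes provide is lost precisely in the mixed-limit case they were added to serve.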

A somewhat more appropriate fix to the allocation loop itself has been 
proposed here:

https://lore.kernel.org/linux-iommu/1588795317-20879-1-git-send-email-vjitta@codeaurora.org/

Robin.

> NOTE:
>   This patch is needed to address the issue discussed in the thread below:
>   https://www.spinics.net/lists/iommu/msg43586.html
> 
> Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
> Signed-off-by: Sathyam Panda <sathya.panda@samsung.com>
> ---
>   drivers/iommu/iova.c | 17 ++++++++++++++++-
>   include/linux/iova.h |  1 +
>   2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 41c605b0058f..0e99975036ae 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>   	iovad->granule = granule;
>   	iovad->start_pfn = start_pfn;
>   	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
> +	iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
>   	iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>   	iovad->flush_cb = NULL;
>   	iovad->fq = NULL;
> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
>   static struct rb_node *
>   __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
>   {
> -	if (limit_pfn <= iovad->dma_32bit_pfn)
> +	if (limit_pfn <= iovad->dma_32bit_pfn) {
> +		/* re-init cached node if DMA limit has changed */
> +		if (limit_pfn != iovad->curr_limit_pfn) {
> +			iovad->cached32_node = &iovad->anchor.node;
> +			iovad->curr_limit_pfn = limit_pfn;
> +		}
>   		return iovad->cached32_node;
> +	}
>   
> +	/* re-init cached node if DMA limit has changed */
> +	if (limit_pfn != iovad->curr_limit_pfn) {
> +		iovad->cached_node = &iovad->anchor.node;
> +		iovad->curr_limit_pfn = limit_pfn;
> +	}
>   	return iovad->cached_node;
>   }
>   
> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   	if (size_aligned)
>   		align_mask <<= fls_long(size - 1);
>   
> +	if (limit_pfn != iovad->curr_limit_pfn)
> +		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
> +
>   	/* Walk the tree backwards */
>   	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>   	if (limit_pfn <= iovad->dma_32bit_pfn &&
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index a0637abffee8..be2220c096ef 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -73,6 +73,7 @@ struct iova_domain {
>   	unsigned long	granule;	/* pfn granularity for this domain */
>   	unsigned long	start_pfn;	/* Lower limit for this domain */
>   	unsigned long	dma_32bit_pfn;
> +	unsigned long	curr_limit_pfn;	/* Current max limit for this domain */
>   	unsigned long	max32_alloc_size; /* Size of last failed allocation */
>   	struct iova_fq __percpu *fq;	/* Flush Queue */
>   
> 


* Re: [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
  2020-05-07 13:37   ` Robin Murphy
@ 2020-05-07 17:44     ` Ajay kumar
  0 siblings, 0 replies; 4+ messages in thread
From: Ajay kumar @ 2020-05-07 17:44 UTC (permalink / raw)
  To: Robin Murphy; +Cc: linux-mm, iommu, shaik.ameer, Ajay Kumar, Sathyam Panda

Hi Robin,

On 5/7/20, Robin Murphy <robin.murphy@arm.com> wrote:
> On 2020-05-04 7:37 pm, Ajay Kumar wrote:
>> The current IOVA allocation code stores a cached copy of the
>> first allocated IOVA address node, and subsequent allocations
>> have no way to get past (higher than) the first allocated IOVA range.
>
> Strictly they do, after that first allocation gets freed, or if the
> first limit was <=32 bits and the subsequent limit >32 bits ;)
In my case, the first allocated buffer is the firmware buffer,
and it's not released.

>> This causes an issue when the dma_mask for the master device is changed.
>> Though the DMA window is increased, the allocation code, unaware of
>> the change, goes ahead and allocates IOVA addresses lower than the
>> first allocated IOVA address.
>>
>> This patch adds a check for dma_mask changes in the IOVA allocation
>> function and resets the cached IOVA node to the anchor node every time
>> a dma_mask change is observed.
>
> This isn't the right approach, since limit_pfn is by design a transient
> per-allocation thing. Devices with different limits may well be
> allocating from the same IOVA domain concurrently, which is the whole
> reason for maintaining two cached nodes to serve the expected PCI case
> of mixing 32-bit and 64-bit limits. Trying to track a per-allocation
> property on a per-domain basis is just going to thrash and massively
> hurt such cases.
Agreed. But if two or more non-PCI devices share the same domain,
then even though one of the devices has a higher addressable limit,
the current logic forces it to use the lower window.
> A somewhat more appropriate fix to the allocation loop itself has been
> proposed here:
>
> https://lore.kernel.org/linux-iommu/1588795317-20879-1-git-send-email-vjitta@codeaurora.org/
This patch addresses my problem indirectly. I shall add my concerns there.

Ajay
>> NOTE:
>>   This patch is needed to address the issue discussed in the thread below:
>>   https://www.spinics.net/lists/iommu/msg43586.html
>>
>> Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
>> Signed-off-by: Sathyam Panda <sathya.panda@samsung.com>
>> ---
>>   drivers/iommu/iova.c | 17 ++++++++++++++++-
>>   include/linux/iova.h |  1 +
>>   2 files changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index 41c605b0058f..0e99975036ae 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned
>> long granule,
>>   	iovad->granule = granule;
>>   	iovad->start_pfn = start_pfn;
>>   	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
>> +	iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
>>   	iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>>   	iovad->flush_cb = NULL;
>>   	iovad->fq = NULL;
>> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
>>   static struct rb_node *
>>   __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
>>   {
>> -	if (limit_pfn <= iovad->dma_32bit_pfn)
>> +	if (limit_pfn <= iovad->dma_32bit_pfn) {
>> +		/* re-init cached node if DMA limit has changed */
>> +		if (limit_pfn != iovad->curr_limit_pfn) {
>> +			iovad->cached32_node = &iovad->anchor.node;
>> +			iovad->curr_limit_pfn = limit_pfn;
>> +		}
>>   		return iovad->cached32_node;
>> +	}
>>
>> +	/* re-init cached node if DMA limit has changed */
>> +	if (limit_pfn != iovad->curr_limit_pfn) {
>> +		iovad->cached_node = &iovad->anchor.node;
>> +		iovad->curr_limit_pfn = limit_pfn;
>> +	}
>>   	return iovad->cached_node;
>>   }
>>
>> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct
>> iova_domain *iovad,
>>   	if (size_aligned)
>>   		align_mask <<= fls_long(size - 1);
>>
>> +	if (limit_pfn != iovad->curr_limit_pfn)
>> +		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>> +
>>   	/* Walk the tree backwards */
>>   	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>>   	if (limit_pfn <= iovad->dma_32bit_pfn &&
>> diff --git a/include/linux/iova.h b/include/linux/iova.h
>> index a0637abffee8..be2220c096ef 100644
>> --- a/include/linux/iova.h
>> +++ b/include/linux/iova.h
>> @@ -73,6 +73,7 @@ struct iova_domain {
>>   	unsigned long	granule;	/* pfn granularity for this domain */
>>   	unsigned long	start_pfn;	/* Lower limit for this domain */
>>   	unsigned long	dma_32bit_pfn;
>> +	unsigned long	curr_limit_pfn;	/* Current max limit for this domain */
>>   	unsigned long	max32_alloc_size; /* Size of last failed allocation */
>>   	struct iova_fq __percpu *fq;	/* Flush Queue */
>>
>>


* Re: [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed
  2020-05-04 18:37 ` [RFC PATCH] drivers: iommu: reset cached node if dma_mask is changed Ajay Kumar
  2020-05-07 13:37   ` Robin Murphy
@ 2020-05-13  8:33   ` Joerg Roedel
  1 sibling, 0 replies; 4+ messages in thread
From: Joerg Roedel @ 2020-05-13  8:33 UTC (permalink / raw)
  To: Ajay Kumar, Robin Murphy
  Cc: Sathyam Panda, linux-mm, iommu, shaik.ameer, robin.murphy

Adding Robin.

On Tue, May 05, 2020 at 12:07:59AM +0530, Ajay Kumar wrote:
> The current IOVA allocation code stores a cached copy of the
> first allocated IOVA address node, and subsequent allocations
> have no way to get past (higher than) the first allocated IOVA range.
> 
> This causes an issue when the dma_mask for the master device is changed.
> Though the DMA window is increased, the allocation code, unaware of
> the change, goes ahead and allocates IOVA addresses lower than the
> first allocated IOVA address.
> 
> This patch adds a check for dma_mask changes in the IOVA allocation
> function and resets the cached IOVA node to the anchor node every time
> a dma_mask change is observed.
> 
> NOTE:
>  This patch is needed to address the issue discussed in the thread below:
>  https://www.spinics.net/lists/iommu/msg43586.html
> 
> Signed-off-by: Ajay Kumar <ajaykumar.rs@samsung.com>
> Signed-off-by: Sathyam Panda <sathya.panda@samsung.com>
> ---
>  drivers/iommu/iova.c | 17 ++++++++++++++++-
>  include/linux/iova.h |  1 +
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 41c605b0058f..0e99975036ae 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -44,6 +44,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>  	iovad->granule = granule;
>  	iovad->start_pfn = start_pfn;
>  	iovad->dma_32bit_pfn = 1UL << (32 - iova_shift(iovad));
> +	iovad->curr_limit_pfn = iovad->dma_32bit_pfn;
>  	iovad->max32_alloc_size = iovad->dma_32bit_pfn;
>  	iovad->flush_cb = NULL;
>  	iovad->fq = NULL;
> @@ -116,9 +117,20 @@ EXPORT_SYMBOL_GPL(init_iova_flush_queue);
>  static struct rb_node *
>  __get_cached_rbnode(struct iova_domain *iovad, unsigned long limit_pfn)
>  {
> -	if (limit_pfn <= iovad->dma_32bit_pfn)
> +	if (limit_pfn <= iovad->dma_32bit_pfn) {
> +		/* re-init cached node if DMA limit has changed */
> +		if (limit_pfn != iovad->curr_limit_pfn) {
> +			iovad->cached32_node = &iovad->anchor.node;
> +			iovad->curr_limit_pfn = limit_pfn;
> +		}
>  		return iovad->cached32_node;
> +	}
>  
> +	/* re-init cached node if DMA limit has changed */
> +	if (limit_pfn != iovad->curr_limit_pfn) {
> +		iovad->cached_node = &iovad->anchor.node;
> +		iovad->curr_limit_pfn = limit_pfn;
> +	}
>  	return iovad->cached_node;
>  }
>  
> @@ -190,6 +202,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>  	if (size_aligned)
>  		align_mask <<= fls_long(size - 1);
>  
> +	if (limit_pfn != iovad->curr_limit_pfn)
> +		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
> +
>  	/* Walk the tree backwards */
>  	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
>  	if (limit_pfn <= iovad->dma_32bit_pfn &&
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index a0637abffee8..be2220c096ef 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -73,6 +73,7 @@ struct iova_domain {
>  	unsigned long	granule;	/* pfn granularity for this domain */
>  	unsigned long	start_pfn;	/* Lower limit for this domain */
>  	unsigned long	dma_32bit_pfn;
> +	unsigned long	curr_limit_pfn;	/* Current max limit for this domain */
>  	unsigned long	max32_alloc_size; /* Size of last failed allocation */
>  	struct iova_fq __percpu *fq;	/* Flush Queue */
>  
> -- 
> 2.17.1

