* [PATCH] iommu/iova: Fix tracking of recently failed iova address size
@ 2019-03-15 15:56 Robert Richter
  2019-03-18 15:19 ` Robin Murphy
  0 siblings, 1 reply; 4+ messages in thread
From: Robert Richter @ 2019-03-15 15:56 UTC (permalink / raw)
  To: Joerg Roedel
  Cc: Ganapatrao Kulkarni, Robin Murphy, Robert Richter, iommu, linux-kernel

We track the smallest size that failed for a 32 bit allocation. The
size only ever decreases, and only if we actually walked the tree and
noticed an allocation failure. The current code is broken and wrongly
updates the size value even if we did not try an allocation. This
leads to increased size values and we might take the slow path again
even if we have seen a failure before for the same or a smaller size.
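
For context, a condensed sketch of the pre-fix flow in
__alloc_and_insert_iova_range() (a sketch only: locking, alignment
handling and the rb-tree walk itself are left out; the first check is
the fast-path bail-out added by bee60e94a1e2):

	if (limit_pfn <= iovad->dma_32bit_pfn &&
	    size >= iovad->max32_alloc_size)
		goto iova32_full;	/* early exit, no tree walk at all */

	/* ... walk the rb-tree looking for a suitable hole ... */

	if (limit_pfn < size || new_pfn < iovad->start_pfn)
		goto iova32_full;	/* the walk really failed */

	/* ... insert the new iova and return 0 ... */

iova32_full:
	/*
	 * Runs for both goto paths above. The early exit only fires
	 * when size >= max32_alloc_size, so from that path this
	 * assignment can only grow the tracked value.
	 */
	iovad->max32_alloc_size = size;
	return -ENOMEM;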

Cc: <stable@vger.kernel.org> # 4.20+
Fixes: bee60e94a1e2 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")
Signed-off-by: Robert Richter <rrichter@marvell.com>
---
 drivers/iommu/iova.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index f8d3ba247523..2de8122e218f 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		curr_iova = rb_entry(curr, struct iova, node);
 	} while (curr && new_pfn <= curr_iova->pfn_hi);
 
-	if (limit_pfn < size || new_pfn < iovad->start_pfn)
+	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
+		iovad->max32_alloc_size = size;
 		goto iova32_full;
+	}
 
 	/* pfn_lo will point to size aligned address if size_aligned is set */
 	new->pfn_lo = new_pfn;
@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	return 0;
 
 iova32_full:
-	iovad->max32_alloc_size = size;
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return -ENOMEM;
 }
-- 
2.20.1



* Re: [PATCH] iommu/iova: Fix tracking of recently failed iova address size
  2019-03-15 15:56 [PATCH] iommu/iova: Fix tracking of recently failed iova address size Robert Richter
@ 2019-03-18 15:19 ` Robin Murphy
  2019-03-20 18:57   ` [PATCH v2] iommu/iova: Fix tracking of recently failed iova address Robert Richter
  0 siblings, 1 reply; 4+ messages in thread
From: Robin Murphy @ 2019-03-18 15:19 UTC (permalink / raw)
  To: Robert Richter, Joerg Roedel; +Cc: Ganapatrao Kulkarni, iommu, linux-kernel

On 15/03/2019 15:56, Robert Richter wrote:
> We track the smallest size that failed for a 32 bit allocation. The
> size only ever decreases, and only if we actually walked the tree and
> noticed an allocation failure. The current code is broken and wrongly
> updates the size value even if we did not try an allocation. This
> leads to increased size values and we might take the slow path again
> even if we have seen a failure before for the same or a smaller size.

That description wasn't too clear (since it rather contradicts itself by 
starting off with "XYZ happens" when the whole point is that XYZ doesn't 
actually happen properly), but having gone and looked at the code in 
context I think I understand it now - specifically, it's that the 
early-exit path for detecting that a 32-bit allocation request is too 
big to possibly succeed should never have gone via the route which 
assigns to max32_alloc_size.
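
Condensed, the fixed flow then looks roughly like this (sketch only,
rb-tree walk and locking omitted):

	if (limit_pfn <= iovad->dma_32bit_pfn &&
	    size >= iovad->max32_alloc_size)
		goto iova32_full;	/* too big: bail out, leave the limit alone */

	/* ... rb-tree walk ... */

	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
		/* only a genuine walk failure updates the tracked size */
		iovad->max32_alloc_size = size;
		goto iova32_full;
	}

	/* ... insert the new iova and return 0 ... */

iova32_full:
	return -ENOMEM;		/* after dropping the rbtree lock */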

In that respect, the diff looks correct, so modulo possibly tweaking the 
commit message,

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

Thanks,
Robin.

> Cc: <stable@vger.kernel.org> # 4.20+
> Fixes: bee60e94a1e2 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")
> Signed-off-by: Robert Richter <rrichter@marvell.com>
> ---
>   drivers/iommu/iova.c | 5 +++--
>   1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index f8d3ba247523..2de8122e218f 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   		curr_iova = rb_entry(curr, struct iova, node);
>   	} while (curr && new_pfn <= curr_iova->pfn_hi);
>   
> -	if (limit_pfn < size || new_pfn < iovad->start_pfn)
> +	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> +		iovad->max32_alloc_size = size;
>   		goto iova32_full;
> +	}
>   
>   	/* pfn_lo will point to size aligned address if size_aligned is set */
>   	new->pfn_lo = new_pfn;
> @@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>   	return 0;
>   
>   iova32_full:
> -	iovad->max32_alloc_size = size;
>   	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
>   	return -ENOMEM;
>   }
> 


* [PATCH v2] iommu/iova: Fix tracking of recently failed iova address
  2019-03-18 15:19 ` Robin Murphy
@ 2019-03-20 18:57   ` Robert Richter
  2019-03-22  9:31     ` Joerg Roedel
  0 siblings, 1 reply; 4+ messages in thread
From: Robert Richter @ 2019-03-20 18:57 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Joerg Roedel, Ganapatrao Kulkarni, iommu, linux-kernel

On 18.03.19 15:19:23, Robin Murphy wrote:
> On 15/03/2019 15:56, Robert Richter wrote:
> > We track the smallest size that failed for a 32 bit allocation. The
> > size only ever decreases, and only if we actually walked the tree and
> > noticed an allocation failure. The current code is broken and wrongly
> > updates the size value even if we did not try an allocation. This
> > leads to increased size values and we might take the slow path again
> > even if we have seen a failure before for the same or a smaller size.
> 
> That description wasn't too clear (since it rather contradicts itself by
> starting off with "XYZ happens" when the whole point is that XYZ doesn't
> actually happen properly), but having gone and looked at the code in context
> I think I understand it now - specifically, it's that the early-exit path
> for detecting that a 32-bit allocation request is too big to possibly
> succeed should never have gone via the route which assigns to
> max32_alloc_size.
> 
> In that respect, the diff looks correct, so modulo possibly tweaking the
> commit message,
> 
> Reviewed-by: Robin Murphy <robin.murphy@arm.com>

Robin, thanks for your review.

I hope the following description is better now.

Thanks,

-Robert

-- >8 --
From: Robert Richter <rrichter@marvell.com>
Subject: [PATCH v2] iommu/iova: Fix tracking of recently failed iova address
 size

If a 32 bit allocation request is too big to possibly succeed, it
early exits with a failure and should never update max32_alloc_size.
This patch fixes the current code so that the size is only updated if
the slow path failed while walking the tree. Without the fix, the
allocation may enter the slow path again even if a request with the
same or a smaller size has already failed before.
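
To make the failure mode concrete, below is a small stand-alone model
of the size gate (illustration only, not the kernel code: the initial
value of max32_alloc_size and its reset when iovas are freed are
ignored here, and the sizes are made up):

#include <stdio.h>
#include <stdbool.h>

/* smallest 32 bit request size known to fail; "huge" means none yet */
static unsigned long max32_alloc_size = ~0UL;

/* may we skip the rb-tree walk for this request size? */
static bool fast_fail(unsigned long size)
{
	return size >= max32_alloc_size;
}

/* called when the rb-tree walk found no suitable hole */
static void record_walk_failure(unsigned long size)
{
	max32_alloc_size = size;
}

int main(void)
{
	record_walk_failure(0x1000);	/* a 0x1000 request failed in the walk */
	printf("0x1000 fast-fails: %d\n", fast_fail(0x1000));	/* 1 */

	/*
	 * Pre-fix behaviour: an oversized request that took the early
	 * exit also ran the assignment, growing the tracked value ...
	 */
	record_walk_failure(0x100000);
	printf("0x1000 fast-fails: %d\n", fast_fail(0x1000));	/* 0: slow path again */

	return 0;
}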

Cc: <stable@vger.kernel.org> # 4.20+
Fixes: bee60e94a1e2 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")
Signed-off-by: Robert Richter <rrichter@marvell.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
---
 drivers/iommu/iova.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index f8d3ba247523..2de8122e218f 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 		curr_iova = rb_entry(curr, struct iova, node);
 	} while (curr && new_pfn <= curr_iova->pfn_hi);
 
-	if (limit_pfn < size || new_pfn < iovad->start_pfn)
+	if (limit_pfn < size || new_pfn < iovad->start_pfn) {
+		iovad->max32_alloc_size = size;
 		goto iova32_full;
+	}
 
 	/* pfn_lo will point to size aligned address if size_aligned is set */
 	new->pfn_lo = new_pfn;
@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
 	return 0;
 
 iova32_full:
-	iovad->max32_alloc_size = size;
 	spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
 	return -ENOMEM;
 }
-- 
2.20.1



* Re: [PATCH v2] iommu/iova: Fix tracking of recently failed iova address
  2019-03-20 18:57   ` [PATCH v2] iommu/iova: Fix tracking of recently failed iova address Robert Richter
@ 2019-03-22  9:31     ` Joerg Roedel
  0 siblings, 0 replies; 4+ messages in thread
From: Joerg Roedel @ 2019-03-22  9:31 UTC (permalink / raw)
  To: Robert Richter; +Cc: Robin Murphy, Ganapatrao Kulkarni, iommu, linux-kernel

On Wed, Mar 20, 2019 at 06:57:23PM +0000, Robert Richter wrote:
> From: Robert Richter <rrichter@marvell.com>
> Subject: [PATCH v2] iommu/iova: Fix tracking of recently failed iova address
>  size
> 
> If a 32 bit allocation request is too big to possibly succeed, it
> early exits with a failure and should never update max32_alloc_size.
> This patch fixes the current code so that the size is only updated if
> the slow path failed while walking the tree. Without the fix, the
> allocation may enter the slow path again even if a request with the
> same or a smaller size has already failed before.
> 
> Cc: <stable@vger.kernel.org> # 4.20+
> Fixes: bee60e94a1e2 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")
> Signed-off-by: Robert Richter <rrichter@marvell.com>
> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
> ---
>  drivers/iommu/iova.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Applied, thanks.
