From: Vijayanand Jitta <vjitta@codeaurora.org>
To: Robin Murphy <robin.murphy@arm.com>,
	joro@8bytes.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: vinmenon@codeaurora.org, kernel-team@android.com
Subject: Re: [PATCH] iommu/iova: Retry from last rb tree node if iova search fails
Date: Sat, 9 May 2020 00:25:41 +0530
Message-ID: <b80fdf37-e635-2d65-c523-8e1d0bd8085b@codeaurora.org>
In-Reply-To: <d9bfde9f-8f16-bf1b-311b-ea6c2b8ab93d@arm.com>



On 5/7/2020 6:54 PM, Robin Murphy wrote:
> On 2020-05-06 9:01 pm, vjitta@codeaurora.org wrote:
>> From: Vijayanand Jitta <vjitta@codeaurora.org>
>>
>> Whenever a new iova alloc request comes in, the iova is always searched
>> from the cached node and the nodes previous to the cached node. So,
>> even if there is free iova space available in the nodes next to the
>> cached node, iova allocation can still fail because of this approach.
>>
>> Consider the following sequence of iova alloc and frees on
>> 1GB of iova space
>>
>> 1) alloc - 500MB
>> 2) alloc - 12MB
>> 3) alloc - 499MB
>> 4) free -  12MB which was allocated in step 2
>> 5) alloc - 13MB
>>
>> After the above sequence we will have 12MB of free iova space, and the
>> cached node will be pointing to the iova pfn of the last 13MB alloc,
>> which is the lowest iova pfn of that iova space. Now if we get an
>> alloc request of 2MB, we only search from the cached node downwards
>> for lower iova pfns, and as there aren't any free, the iova alloc
>> fails even though there is 12MB of free iova space.
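
To make the failure mode above concrete, here is a rough user-space
simulation of it (this is not the iova code; the names and helpers are
made up purely for illustration): a top-down allocator that never
searches above its cached "ceiling", run through the exact sequence
described in the commit message.

/*
 * Top-down allocator with a cached search ceiling, in miniature.
 * Sizes and addresses are in MB for readability.
 */
#include <stdio.h>

#define SPACE_MB 1024

static int used[SPACE_MB];		/* 1 MB granularity usage map */
static int cached_hi = SPACE_MB;	/* search ceiling, like the cached node */

/* Allocate top-down, starting no higher than the cached ceiling. */
static int alloc_mb(int size)
{
	for (int hi = cached_hi; hi - size >= 0; hi--) {
		int busy = 0;

		for (int i = hi - size; i < hi; i++)
			busy |= used[i];
		if (!busy) {
			for (int i = hi - size; i < hi; i++)
				used[i] = 1;
			cached_hi = hi - size;	/* remember the new low point */
			return hi - size;
		}
	}
	return -1;	/* nothing free at or below the ceiling */
}

static void free_mb(int start, int size)
{
	for (int i = start; i < start + size; i++)
		used[i] = 0;
	/* the cached ceiling is deliberately left where it is */
}

int main(void)
{
	alloc_mb(500);			/* 1) */
	int twelve = alloc_mb(12);	/* 2) */
	alloc_mb(499);			/* 3) */
	free_mb(twelve, 12);		/* 4) */
	alloc_mb(13);			/* 5) ceiling is now at the bottom */

	/* prints -1: the 2MB alloc fails despite 12MB free above the ceiling */
	printf("2MB alloc -> %d\n", alloc_mb(2));
	return 0;
}
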
> 
> Yup, this could definitely do with improving. Unfortunately I think this
> particular implementation is slightly flawed...
> 
>> To avoid such iova search failures, retry from the last rb tree node
>> when the iova search fails; this will search the entire tree and get
>> an iova if it's available.
>>
>> Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
>> ---
>>   drivers/iommu/iova.c | 11 +++++++++++
>>   1 file changed, 11 insertions(+)
>>
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index 0e6a953..2985222 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -186,6 +186,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>>       unsigned long flags;
>>       unsigned long new_pfn;
>>       unsigned long align_mask = ~0UL;
>> +    bool retry = false;
>>         if (size_aligned)
>>           align_mask <<= fls_long(size - 1);
>> @@ -198,6 +199,8 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>>         curr = __get_cached_rbnode(iovad, limit_pfn);
>>       curr_iova = rb_entry(curr, struct iova, node);
>> +
>> +retry_search:
>>       do {
>>           limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
>>           new_pfn = (limit_pfn - size) & align_mask;
>> @@ -207,6 +210,14 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>>       } while (curr && new_pfn <= curr_iova->pfn_hi);
>>         if (limit_pfn < size || new_pfn < iovad->start_pfn) {
>> +        if (!retry) {
>> +            curr = rb_last(&iovad->rbroot);
> 
> Why walk when there's an anchor node there already? However...
> 
>> +            curr_iova = rb_entry(curr, struct iova, node);
>> +            limit_pfn = curr_iova->pfn_lo;
> 
> ...this doesn't look right, as by now we've lost the original limit_pfn
> supplied by the caller, so are highly likely to allocate beyond the
> range our caller asked for. In fact AFAICS we'd start allocating from
> directly below the anchor node, beyond the end of the entire
> address space.
> 
> The logic I was imagining we want here was something like the rapidly
> hacked up (and untested) diff below.
> 
> Thanks,
> Robin.
> 

Thanks for your comments. I have gone through the logic below and I see an
issue with the retry check: there could be a case where alloc_lo is set to
some pfn other than start_pfn, and in that case we don't retry even though
there can still be iova space available. I understand it's a hacked-up
version; I can work on this.

But how about we just store limit_pfn, get the node using that, and retry
once from that node? It would be similar to my patch, just correcting the
curr node and the limit_pfn update in the retry check. Do you see any
issue with this approach?
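
Roughly what I have in mind, as an untested sketch on top of
__alloc_and_insert_iova_range() (high_pfn and the restart from the anchor
node are only placeholders here; the real patch would look the starting
node up from the stored limit_pfn):

	unsigned long high_pfn = limit_pfn;	/* keep the caller's limit */
	bool retry = false;

	/* ... 32-bit limit / max32_alloc_size check unchanged ... */

	curr = __get_cached_rbnode(iovad, limit_pfn);
	curr_iova = rb_entry(curr, struct iova, node);
retry_search:
	do {
		high_pfn = min(high_pfn, curr_iova->pfn_lo);
		new_pfn = (high_pfn - size) & align_mask;
		prev = curr;
		curr = rb_prev(curr);
		curr_iova = rb_entry(curr, struct iova, node);
	} while (curr && new_pfn <= curr_iova->pfn_hi);

	if (high_pfn < size || new_pfn < iovad->start_pfn) {
		if (!retry) {
			/* restart once from the top, keeping the caller's limit_pfn */
			retry = true;
			high_pfn = limit_pfn;
			curr = &iovad->anchor.node;
			curr_iova = rb_entry(curr, struct iova, node);
			goto retry_search;
		}
		iovad->max32_alloc_size = size;
		goto iova32_full;
	}
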


Thanks,
Vijay.
> ----->8-----
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 0e6a9536eca6..3574c19272d6 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -186,6 +186,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>         unsigned long flags;
>         unsigned long new_pfn;
>         unsigned long align_mask = ~0UL;
> +       unsigned long alloc_hi, alloc_lo;
> 
>         if (size_aligned)
>                 align_mask <<= fls_long(size - 1);
> @@ -196,17 +197,27 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>                         size >= iovad->max32_alloc_size)
>                 goto iova32_full;
> 
> +       alloc_hi = IOVA_ANCHOR;
> +       alloc_lo = iovad->start_pfn;
> +retry:
>         curr = __get_cached_rbnode(iovad, limit_pfn);
>         curr_iova = rb_entry(curr, struct iova, node);
> +       if (alloc_hi < curr_iova->pfn_hi) {
> +               alloc_lo = curr_iova->pfn_hi;
> +               alloc_hi = limit_pfn;
> +       }
> +
>         do {
> -               limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
> -               new_pfn = (limit_pfn - size) & align_mask;
> +               alloc_hi = min(alloc_hi, curr_iova->pfn_lo);
> +               new_pfn = (alloc_hi - size) & align_mask;
>                 prev = curr;
>                 curr = rb_prev(curr);
>                 curr_iova = rb_entry(curr, struct iova, node);
>         } while (curr && new_pfn <= curr_iova->pfn_hi);
> 
> -       if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> +       if (limit_pfn < size || new_pfn < alloc_lo) {
> +               if (alloc_lo == iovad->start_pfn)
> +                       goto retry;
>                 iovad->max32_alloc_size = size;
>                 goto iova32_full;
>         }
