From: Vijayanand Jitta <vjitta@codeaurora.org>
To: Robin Murphy <robin.murphy@arm.com>,
	joro@8bytes.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: vinmenon@codeaurora.org, kernel-team@android.com
Subject: Re: [PATCH v2 2/2] iommu/iova: Free global iova rcache on iova alloc failure
Date: Mon, 28 Sep 2020 18:11:41 +0530
Message-ID: <9dac89a4-553a-efe2-08a1-6a3a5fbc97a8@codeaurora.org>
In-Reply-To: <2f20160a-b9da-4fa3-3796-ed90c6175ebe@arm.com>



On 9/18/2020 8:11 PM, Robin Murphy wrote:
> On 2020-08-20 13:49, vjitta@codeaurora.org wrote:
>> From: Vijayanand Jitta <vjitta@codeaurora.org>
>>
>> Whenever an iova alloc request fails, we free the iova ranges
>> present in the percpu iova rcaches and then retry, but the global
>> iova rcache is not freed. As a result, we can still see iova alloc
>> failures even after the retry, since the global rcache keeps
>> holding iovas, which can cause fragmentation. So, free the global
>> iova rcache as well and then retry.
>>
>> Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
>> ---
>>   drivers/iommu/iova.c | 23 +++++++++++++++++++++++
>>   include/linux/iova.h |  6 ++++++
>>   2 files changed, 29 insertions(+)
>>
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index 4e77116..5836c87 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -442,6 +442,7 @@ struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
>>           flush_rcache = false;
>>           for_each_online_cpu(cpu)
>>               free_cpu_cached_iovas(cpu, iovad);
>> +        free_global_cached_iovas(iovad);
>>           goto retry;
>>       }
>>
>> @@ -1055,5 +1056,27 @@ void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
>>       }
>>   }
>>
>> +/*
>> + * free all the IOVA ranges of global cache
>> + */
>> +void free_global_cached_iovas(struct iova_domain *iovad)
> 
> As John pointed out last time, this should be static and the header
> changes dropped.
> 
> (TBH we should probably register our own hotplug notifier instance for a
> flush queue, so that external code has no need to poke at the per-CPU
> caches either)
> 
> Robin.
> 

Right, I have made it static and dropped the header changes in v3.
Could you please review it?
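
On the hotplug notifier suggestion: if I understand it correctly, the
shape would be roughly the below (untested sketch only; the
CPUHP_IOMMU_IOVA_DEAD state, the iova_cpuhp_dead() callback and the
cpuhp_dead hlist_node in struct iova_domain would all be new
additions):

/* Untested sketch: per-domain CPU hotplug teardown callback */
static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
{
	struct iova_domain *iovad;

	iovad = hlist_entry_safe(node, struct iova_domain, cpuhp_dead);
	free_cpu_cached_iovas(cpu, iovad);
	return 0;
}

/*
 * Registered once, e.g. from iova_cache_get():
 *   cpuhp_setup_state_multi(CPUHP_IOMMU_IOVA_DEAD, "iommu/iova:dead",
 *                           NULL, iova_cpuhp_dead);
 * with each domain adding itself in init_iova_domain():
 *   cpuhp_state_add_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD,
 *                                    &iovad->cpuhp_dead);
 */

That way the iova code would flush its own per-CPU caches when a CPU
goes offline, and external code would not need to poke at them.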

Thanks,
Vijay
>> +{
>> +    struct iova_rcache *rcache;
>> +    unsigned long flags;
>> +    int i, j;
>> +
>> +    for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
>> +        rcache = &iovad->rcaches[i];
>> +        spin_lock_irqsave(&rcache->lock, flags);
>> +        for (j = 0; j < rcache->depot_size; ++j) {
>> +            iova_magazine_free_pfns(rcache->depot[j], iovad);
>> +            iova_magazine_free(rcache->depot[j]);
>> +            rcache->depot[j] = NULL;
>> +        }
>> +        rcache->depot_size = 0;
>> +        spin_unlock_irqrestore(&rcache->lock, flags);
>> +    }
>> +}
>> +
>>   MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>");
>>   MODULE_LICENSE("GPL");
>> diff --git a/include/linux/iova.h b/include/linux/iova.h
>> index a0637ab..a905726 100644
>> --- a/include/linux/iova.h
>> +++ b/include/linux/iova.h
>> @@ -163,6 +163,7 @@ int init_iova_flush_queue(struct iova_domain *iovad,
>>   struct iova *split_and_remove_iova(struct iova_domain *iovad,
>>       struct iova *iova, unsigned long pfn_lo, unsigned long pfn_hi);
>>   void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
>> +void free_global_cached_iovas(struct iova_domain *iovad);
>>   #else
>>   static inline int iova_cache_get(void)
>>   {
>> @@ -270,6 +271,11 @@ static inline void free_cpu_cached_iovas(unsigned int cpu,
>>                        struct iova_domain *iovad)
>>   {
>>   }
>> +
>> +static inline void free_global_cached_iovas(struct iova_domain *iovad)
>> +{
>> +}
>> +
>>   #endif
>>
>>   #endif
>>
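
For reference, the first hunk above lands in the retry path of
alloc_iova_fast() (the hunk header just happens to pick up find_iova()
as its context line). Roughly, the flow with this change applied is
(paraphrased sketch, not a verbatim copy of the code):

retry:
	new_iova = alloc_iova(iovad, size, limit_pfn, true);
	if (!new_iova) {
		unsigned int cpu;

		if (!flush_rcache)
			return 0;

		/* Flush the rcaches back to the rbtree, then retry once */
		flush_rcache = false;
		for_each_online_cpu(cpu)
			free_cpu_cached_iovas(cpu, iovad);
		free_global_cached_iovas(iovad);
		goto retry;
	}

	return new_iova->pfn_lo;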

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of Code Aurora Forum, hosted by The Linux Foundation