From: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
To: Will Deacon <will@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	Doug Anderson <dianders@chromium.org>,
	Krishna Reddy <vdumpa@nvidia.com>,
	Thierry Reding <treding@nvidia.com>,
	Tomasz Figa <tfiga@chromium.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>
Subject: Re: [PATCHv3] iommu/arm-smmu: Optimize ->tlb_flush_walk() for qcom implementation
Date: Wed, 21 Jul 2021 10:18:56 +0530	[thread overview]
Message-ID: <bf3f323c66eb8bd701a5f96933a08e11@codeaurora.org> (raw)
In-Reply-To: <ee8314e4752aa777131dda22c6f4e9ef@codeaurora.org>

Hi Robin, Will,

On 2021-07-12 09:39, Sai Prakash Ranjan wrote:
> Hi Robin,
> 
> On 2021-06-23 19:12, Sai Prakash Ranjan wrote:
>> Currently, for iommu_unmap() of a large scatter-gather list with
>> page-sized elements, the majority of the time is spent flushing
>> partial walks in __arm_lpae_unmap(), which is a VA-based TLB
>> invalidation that invalidates page-by-page on IOMMUs like arm-smmu-v2
>> (TLBIVA).
>> 
>> For example, to unmap a 32MB scatter-gather list with page-sized
>> elements (8192 entries), there are 16 2MB buffer unmaps based on the
>> pgsize (2MB for a 4K granule), and each 2MB unmap further results in
>> 512 TLBIVAs (2MB/4K), for a total of 8192 TLBIVAs (512*16) across the
>> 16 2MB unmaps, causing huge overhead.
>> 
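For reference, the arithmetic above works out as follows (an
illustrative userspace sketch of the calculation only, not kernel
code):

	#include <stdio.h>

	int main(void)
	{
		unsigned long sg_size = 32UL << 20;	/* 32MB scatter-gather list */
		unsigned long pgsize  = 2UL << 20;	/* 2MB block for a 4K granule */
		unsigned long granule = 4UL << 10;	/* 4K page granule */

		unsigned long unmaps  = sg_size / pgsize;		/* 16 */
		unsigned long tlbivas = unmaps * (pgsize / granule);	/* 16 * 512 = 8192 */

		printf("%lu unmaps -> %lu TLBIVAs\n", unmaps, tlbivas);
		return 0;
	}
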
>> On the qcom implementation, there are several performance improvements
>> for TLB cache invalidations in HW, such as wait-for-safe (for realtime
>> clients like camera and display) and a few others that allow cache
>> lookups/updates while a TLBI is in progress for the same context bank.
>> So the cost of over-invalidation is low compared to the unmap latency
>> in several usecases, such as camera, which deal with large buffers.
>> Therefore, ASID-based TLB invalidations (TLBIASID) can be used to
>> invalidate the entire context for a partial walk flush, thereby
>> improving the unmap latency.
>> 
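To make the trade-off concrete, the two invalidation strategies look
roughly like this (a pseudocode-style sketch; issue_tlbiva() and
issue_tlbiasid() are hypothetical helpers standing in for the actual
register writes in arm-smmu.c):

	/* VA-based: one TLBIVA per granule-sized chunk of the range */
	static void tlb_inv_walk_va(unsigned long iova, size_t size,
				    size_t granule)
	{
		unsigned long end = iova + size;

		for (; iova < end; iova += granule)
			issue_tlbiva(iova);	/* hypothetical helper */
	}

	/* ASID-based: a single TLBIASID invalidates the whole context */
	static void tlb_inv_walk_asid(u16 asid)
	{
		issue_tlbiasid(asid);		/* hypothetical helper */
	}
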
>> Non-strict mode can use this by default on all platforms, since it is
>> all about over-invalidation to save time on individual unmaps and is
>> generally non-deterministic anyway.
>> 
>> For this example of a 32MB scatter-gather list unmap, this change
>> results in just 16 ASID-based TLB invalidations (TLBIASIDs) as opposed
>> to 8192 TLBIVAs, thereby drastically improving unmap performance.
>> 
>> Tested on a QTI SM8150 SoC for 10 iterations of
>> iommu_map_sg()/iommu_unmap() (values are averages over 10 iterations):
>> 
>> Before this optimization:
>> 
>>     size        iommu_map_sg      iommu_unmap
>>       4K            2.067 us         1.854 us
>>      64K            9.598 us         8.802 us
>>       1M          148.890 us       130.718 us
>>       2M          305.864 us        67.291 us
>>      12M         1793.604 us       390.838 us
>>      16M         2386.848 us       518.187 us
>>      24M         3563.296 us       775.989 us
>>      32M         4747.171 us      1033.364 us
>> 
>> After this optimization:
>> 
>>     size        iommu_map_sg      iommu_unmap
>>       4K            1.723 us         1.765 us
>>      64K            9.880 us         8.869 us
>>       1M          155.364 us       135.223 us
>>       2M          303.906 us         5.385 us
>>      12M         1786.557 us        21.250 us
>>      16M         2391.890 us        27.437 us
>>      24M         3570.895 us        39.937 us
>>      32M         4755.234 us        51.797 us
>> 
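For reference, one way such numbers could be gathered is a simple
in-kernel timing loop. A minimal sketch, assuming an already-attached
domain and a prepared scatterlist (and not necessarily the harness
actually used here):

	/* Assumes <linux/iommu.h>, <linux/ktime.h>, <linux/scatterlist.h>. */
	static void time_map_unmap(struct iommu_domain *domain,
				   unsigned long iova, struct scatterlist *sg,
				   unsigned int nents)
	{
		ktime_t t0, t1, t2;
		size_t mapped;

		t0 = ktime_get();
		mapped = iommu_map_sg(domain, iova, sg, nents,
				      IOMMU_READ | IOMMU_WRITE);
		t1 = ktime_get();
		iommu_unmap(domain, iova, mapped);
		t2 = ktime_get();

		pr_info("map_sg: %lld us, unmap: %lld us\n",
			ktime_us_delta(t1, t0), ktime_us_delta(t2, t1));
	}
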
>> This is further reduced once map/unmap_pages() support lands: the
>> whole range is then unmapped in a single call rather than in 16
>> pgsize-sized calls, resulting in just 1 TLBIASID as compared to 16
>> TLBIASIDs.
>> 
>> Real-world data also shows a big difference in unmap performance, as
>> below:
>> 
>> Without this optimization, there were reports of camera frame drops
>> due to the high overhead of iommu unmap: frequent unmaps issued by the
>> camera at about 100MB/s were taking more than 100ms, thereby causing
>> frame drops.
>> 
>> Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
>> ---
>> 
>> Changes in v3:
>>  * Move the logic to arm-smmu driver from io-pgtable (Robin)
>>  * Use a new set of iommu_flush_ops (arm_smmu_s1_tlb_impl_ops) and
>>    use it for the qcom impl
>> 
>> Changes in v2:
>>  * Add a quirk to choose tlb_flush_all in partial walk flush
>>  * Set the quirk for QTI SoC implementation
>> 
>> ---
>>  drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 13 +++++++++++++
>>  drivers/iommu/arm/arm-smmu/arm-smmu.c      | 17 ++++++++++++++++-
>>  2 files changed, 29 insertions(+), 1 deletion(-)
>> 
>> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
>> index 7771d40176de..218c71465819 100644
>> --- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
>> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
>> @@ -10,6 +10,8 @@
>> 
>>  #include "arm-smmu.h"
>> 
>> +extern const struct iommu_flush_ops arm_smmu_s1_tlb_impl_ops;
>> +
>>  struct qcom_smmu {
>>  	struct arm_smmu_device smmu;
>>  	bool bypass_quirk;
>> @@ -146,6 +148,8 @@ static int qcom_adreno_smmu_init_context(struct arm_smmu_domain *smmu_domain,
>>  {
>>  	struct adreno_smmu_priv *priv;
>> 
>> +	pgtbl_cfg->tlb = &arm_smmu_s1_tlb_impl_ops;
>> +
>>  	/* Only enable split pagetables for the GPU device (SID 0) */
>>  	if (!qcom_adreno_smmu_is_gpu_device(dev))
>>  		return 0;
>> @@ -185,6 +189,14 @@ static const struct of_device_id qcom_smmu_client_of_match[] __maybe_unused = {
>>  	{ }
>>  };
>> 
>> +static int qcom_smmu_init_context(struct arm_smmu_domain *smmu_domain,
>> +		struct io_pgtable_cfg *pgtbl_cfg, struct device *dev)
>> +{
>> +	pgtbl_cfg->tlb = &arm_smmu_s1_tlb_impl_ops;
>> +
>> +	return 0;
>> +}
>> +
>>  static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu)
>>  {
>>  	unsigned int last_s2cr = ARM_SMMU_GR0_S2CR(smmu->num_mapping_groups - 1);
>> @@ -308,6 +320,7 @@ static int qcom_smmu500_reset(struct arm_smmu_device *smmu)
>>  }
>> 
>>  static const struct arm_smmu_impl qcom_smmu_impl = {
>> +	.init_context = qcom_smmu_init_context,
>>  	.cfg_probe = qcom_smmu_cfg_probe,
>>  	.def_domain_type = qcom_smmu_def_domain_type,
>>  	.reset = qcom_smmu500_reset,
>> diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
>> index d3c6f54110a5..f3845e822565 100644
>> --- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
>> +++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
>> @@ -341,6 +341,12 @@ static void arm_smmu_tlb_add_page_s1(struct iommu_iotlb_gather *gather,
>>  				  ARM_SMMU_CB_S1_TLBIVAL);
>>  }
>> 
>> +static void arm_smmu_tlb_inv_walk_impl_s1(unsigned long iova, size_t size,
>> +				     size_t granule, void *cookie)
>> +{
>> +	arm_smmu_tlb_inv_context_s1(cookie);
>> +}
>> +
>>  static void arm_smmu_tlb_inv_walk_s2(unsigned long iova, size_t size,
>>  				     size_t granule, void *cookie)
>>  {
>> @@ -388,6 +394,12 @@ static const struct iommu_flush_ops arm_smmu_s1_tlb_ops = {
>>  	.tlb_add_page	= arm_smmu_tlb_add_page_s1,
>>  };
>> 
>> +const struct iommu_flush_ops arm_smmu_s1_tlb_impl_ops = {
>> +	.tlb_flush_all	= arm_smmu_tlb_inv_context_s1,
>> +	.tlb_flush_walk	= arm_smmu_tlb_inv_walk_impl_s1,
>> +	.tlb_add_page	= arm_smmu_tlb_add_page_s1,
>> +};
>> +
>>  static const struct iommu_flush_ops arm_smmu_s2_tlb_ops_v2 = {
>>  	.tlb_flush_all	= arm_smmu_tlb_inv_context_s2,
>>  	.tlb_flush_walk	= arm_smmu_tlb_inv_walk_s2,
>> @@ -703,7 +715,10 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
>>  			ias = min(ias, 32UL);
>>  			oas = min(oas, 32UL);
>>  		}
>> -		smmu_domain->flush_ops = &arm_smmu_s1_tlb_ops;
>> +		if (!iommu_get_dma_strict(domain))
>> +			smmu_domain->flush_ops = &arm_smmu_s1_tlb_impl_ops;
>> +		else
>> +			smmu_domain->flush_ops = &arm_smmu_s1_tlb_ops;
>>  		break;
>>  	case ARM_SMMU_DOMAIN_NESTED:
>>  		/*
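For context, the io-pgtable core dispatches partial-walk flushes
through the ops table installed above, so with arm_smmu_s1_tlb_impl_ops
in place the non-strict S1 walk flush collapses into a single
full-context invalidation. Simplified from include/linux/io-pgtable.h
around this kernel version:

	static inline void
	io_pgtable_tlb_flush_walk(struct io_pgtable *iop, unsigned long iova,
				  size_t size, size_t granule)
	{
		if (iop->cfg.tlb && iop->cfg.tlb->tlb_flush_walk)
			iop->cfg.tlb->tlb_flush_walk(iova, size, granule,
						     iop->cookie);
	}
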
> 
> Any review comments on this version?
> 

I was hoping to get this into 5.15 if there are no concerns here.
Let me know if there are any.

Thanks,
Sai

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation


Thread overview: 7+ messages
2021-06-23 13:42 [PATCHv3] iommu/arm-smmu: Optimize ->tlb_flush_walk() for qcom implementation Sai Prakash Ranjan
2021-07-12  4:09 ` Sai Prakash Ranjan
2021-07-21  4:48   ` Sai Prakash Ranjan [this message]
2021-08-02 15:43 ` Will Deacon
2021-08-03  5:39   ` Sai Prakash Ranjan
2021-08-10 18:08     ` Will Deacon
2021-08-11  6:09 Sai Prakash Ranjan
