From: Robin Murphy <robin.murphy@arm.com>
To: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>,
	Will Deacon <will@kernel.org>, Joerg Roedel <joro@8bytes.org>
Cc: iommu@lists.linux-foundation.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org
Subject: Re: [PATCH] iommu/io-pgtable-arm: Optimize partial walk flush for large scatter-gather list
Date: Wed, 9 Jun 2021 19:44:24 +0100
Message-ID: <dbcd394a-4d85-316c-5dd0-033546a66132@arm.com>
In-Reply-To: <20210609145315.25750-1-saiprakash.ranjan@codeaurora.org>

On 2021-06-09 15:53, Sai Prakash Ranjan wrote:
> Currently, for an iommu_unmap() of a large scatter-gather list with
> page-size elements, the majority of the time is spent flushing the
> partial walks in __arm_lpae_unmap(), which is a VA-based TLB
> invalidation (TLBIVA for arm-smmu).
> 
> For example: to unmap a 32MB scatter-gather list with page-size
> elements (8192 entries), there are 16 unmaps of 2MB each, based on the
> pgsize (2MB for a 4K granule), and each 2MB unmap in turn results in
> 512 TLBIVAs (2MB/4K), for a total of 8192 TLBIVAs (512*16), causing a
> huge overhead.
> 
> So instead, use io_pgtable_tlb_flush_all() to invalidate the entire
> context if the size (pgsize) is greater than the granule size (4K,
> 16K, 64K). For this example of a 32MB scatter-gather list unmap, that
> results in just 16 ASID-based TLB invalidations via the
> tlb_flush_all() callback (TLBIASID in the case of arm-smmu), as
> opposed to 8192 TLBIVAs, thereby drastically improving unmap
> performance.
> 
> The condition (size > granule size) is chosen for
> io_pgtable_tlb_flush_all() because, for any granule with supported
> pgsizes, there will be at least 512 TLB invalidations, for which
> tlb_flush_all() is already recommended. For example, a 4K granule with
> a 2MB pgsize results in 512 TLBIVAs in the partial walk flush.
> 
> Tested on a QTI SM8150 SoC, with iommu_map_sg()/iommu_unmap() times
> averaged over 10 iterations:
> 
> Before this optimization:
> 
>      size        iommu_map_sg      iommu_unmap
>        4K            2.067 us         1.854 us
>       64K            9.598 us         8.802 us
>        1M          148.890 us       130.718 us
>        2M          305.864 us        67.291 us
>       12M         1793.604 us       390.838 us
>       16M         2386.848 us       518.187 us
>       24M         3563.296 us       775.989 us
>       32M         4747.171 us      1033.364 us
> 
> After this optimization:
> 
>      size        iommu_map_sg      iommu_unmap
>        4K            1.723 us         1.765 us
>       64K            9.880 us         8.869 us
>        1M          155.364 us       135.223 us
>        2M          303.906 us         5.385 us
>       12M         1786.557 us        21.250 us
>       16M         2391.890 us        27.437 us
>       24M         3570.895 us        39.937 us
>       32M         4755.234 us        51.797 us
> 
> This will be further reduced once map/unmap_pages() support lands,
> which will result in just one tlb_flush_all() call as opposed to 16.
> 
> Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
> ---
>   drivers/iommu/io-pgtable-arm.c | 7 +++++--
>   1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index 87def58e79b5..c3cb9add3179 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -589,8 +589,11 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
>   
>   		if (!iopte_leaf(pte, lvl, iop->fmt)) {
>   			/* Also flush any partial walks */
> -			io_pgtable_tlb_flush_walk(iop, iova, size,
> -						  ARM_LPAE_GRANULE(data));
> +			if (size > ARM_LPAE_GRANULE(data))
> +				io_pgtable_tlb_flush_all(iop);
> +			else

Erm, when will the above condition ever not be true? ;)
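
To spell out why (a standalone userspace sketch, not kernel code,
assuming the usual LPAE geometry of 8-byte descriptors): the size
reaching this path covers at least one full next-level table, i.e.
(granule / 8) granules, so it always exceeds ARM_LPAE_GRANULE(data):

	#include <stdio.h>

	int main(void)
	{
		/* the LPAE translation granules supported by io-pgtable-arm */
		unsigned long granules[] = { 4096, 16384, 65536 };

		for (int i = 0; i < 3; i++) {
			unsigned long g = granules[i];
			/* entries per table: granule / 8-byte descriptors */
			unsigned long entries = g / 8;
			/* region covered by one next-to-leaf table entry */
			unsigned long block = g * entries;

			printf("granule %3luK: entry covers %4luM = %4lu TLBIVAs\n",
			       g >> 10, block >> 20, entries);
		}
		return 0;
	}

That prints 2M/512 for 4K, 32M/2048 for 16K and 512M/8192 for 64K,
which is also where the "at least 512 invalidations" figure in the
commit message comes from.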

Taking a step back, though, what about the impact on drivers other than 
SMMUv2? In particular I'm thinking of SMMUv3.2, where the whole range can 
be invalidated by VA in a single command anyway, so the additional 
penalties of TLBIALL are undesirable.
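
If the heuristic is worth keeping at all, one way it could avoid
penalising range-capable implementations is to live in the SMMUv2
driver's own ->tlb_flush_walk callback rather than in the common
io-pgtable code. A hypothetical sketch, with entirely made-up names:

	#include <stddef.h>

	/* stand-ins for the driver's real invalidation primitives */
	static void example_tlb_flush_all(void *cookie)
	{
		/* would issue TLBIASID */
	}

	static void example_tlb_inv_range(unsigned long iova, size_t size,
					  size_t granule, void *cookie)
	{
		/* would issue one TLBIVA per granule */
	}

	/* hypothetical v2-only ->tlb_flush_walk with a local cutoff */
	static void example_tlb_flush_walk(unsigned long iova, size_t size,
					   size_t granule, void *cookie)
	{
		/* assumed cutoff: 512 or more per-granule invalidations */
		if (size >= 512 * granule)
			example_tlb_flush_all(cookie);
		else
			example_tlb_inv_range(iova, size, granule, cookie);
	}

That way SMMUv3.2 would keep its single range command while v2 gets
the cheaper ASID flush.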

Robin.

> +				io_pgtable_tlb_flush_walk(iop, iova, size,
> +							  ARM_LPAE_GRANULE(data));
>   			ptep = iopte_deref(pte, data);
>   			__arm_lpae_free_pgtable(data, lvl + 1, ptep);
>   		} else if (iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT) {
> 
