From: Nadav Amit <namit@vmware.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Will Deacon <will@kernel.org>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
Jiajun Cao <caojiajun@vmware.com>
Subject: Re: [PATCH 3/4] iommu/amd: Do not sync on page size changes
Date: Tue, 1 Jun 2021 18:56:57 +0000 [thread overview]
Message-ID: <46F62EF5-8463-468B-B9D5-30B4F0572491@vmware.com> (raw)
In-Reply-To: <7e0b4b12-c68a-ff90-5d86-4ab88ddd7991@arm.com>
> On Jun 1, 2021, at 10:27 AM, Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-06-01 17:39, Nadav Amit wrote:
>>> On Jun 1, 2021, at 8:59 AM, Robin Murphy <robin.murphy@arm.com> wrote:
>>>
>>> On 2021-05-02 07:59, Nadav Amit wrote:
>>>> From: Nadav Amit <namit@vmware.com>
>>>> Some IOMMU architectures perform invalidations regardless of the page
>>>> size. In such architectures there is no need to sync when the page size
>>>> changes or to regard pgsize when making interim flush in
>>>> iommu_iotlb_gather_add_page().
>>>> Add an "ignore_gather_pgsize" property to each iommu_ops to decide
>>>> whether the gather's pgsize is tracked and triggers a flush.
>>>
>>> I've objected before[1], and I'll readily object again ;)
>>>
>>> I still think it's very silly to add a bunch of indirection all over the place to make a helper function not do the main thing it's intended to help with. If you only need trivial address gathering then it's far simpler to just implement trivial address gathering. I suppose if you really want to you could factor out another helper to share the 5 lines of code which ended up in mtk-iommu (see commit f21ae3b10084).
>> Thanks, Robin.
>> I read your comments but I cannot fully understand the alternative that you propose, although I do understand your objection to the indirection “ignore_gather_pgsize”. Would it be ok if “ignore_gather_pgsize” were provided as an argument to iommu_iotlb_gather_add_page()?
>
> No, I mean if iommu_iotlb_gather_add_page() doesn't have the behaviour your driver wants, just don't call it. Write or factor out a suitable helper that *does* do what you want and call that, or just implement the logic directly inline. Indirect argument or not, it just doesn't make much sense to have a helper function call which says "do this except don't do most of it".
>
>> In general, I can live without this patch. It probably would have negligible impact on performance anyhow.
>
> As I implied, it sounds like your needs are the same as the Mediatek driver had, so you could readily factor out a new page-size-agnostic gather helper from that. I fully support making the functional change to amd-iommu *somehow* - nobody likes unnecessary syncs - just not with this particular implementation :)
Hm… to avoid code duplication I need to extract some common code into another function.
Does the following resemble what you had in mind (untested)?
-- >8 --
Subject: [PATCH] iommu: add iommu_iotlb_gather_add_page_ignore_pgsize()
---
drivers/iommu/mtk_iommu.c | 7 ++---
include/linux/iommu.h | 55 ++++++++++++++++++++++++++++++---------
2 files changed, 44 insertions(+), 18 deletions(-)
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index e168a682806a..5890e745bed3 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -520,12 +520,9 @@ static size_t mtk_iommu_unmap(struct iommu_domain *domain,
struct iommu_iotlb_gather *gather)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
- unsigned long end = iova + size - 1;
- if (gather->start > iova)
- gather->start = iova;
- if (gather->end < end)
- gather->end = end;
+ iommu_iotlb_gather_update_range(gather, iova, size);
+
return dom->iop->unmap(dom->iop, iova, size, gather);
}
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 9ca6e6b8084d..037434b6eb4c 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -535,29 +535,58 @@ static inline void iommu_iotlb_sync(struct iommu_domain *domain,
iommu_iotlb_gather_init(iotlb_gather);
}
-static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
+static inline
+void iommu_iotlb_gather_update_range(struct iommu_iotlb_gather *gather,
+ unsigned long iova, size_t size)
+{
+ unsigned long start = iova, end = start + size - 1;
+
+ if (gather->end < end)
+ gather->end = end;
+
+ if (gather->start > start)
+ gather->start = start;
+
+ gather->pgsize = size;
+}
+
+static inline
+bool iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
+ unsigned long iova, size_t size)
+{
+ return iova + size < gather->start || iova > gather->end + 1;
+}
+
+static inline
+void iommu_iotlb_gather_add_page_ignore_pgsize(struct iommu_domain *domain,
struct iommu_iotlb_gather *gather,
unsigned long iova, size_t size)
{
- unsigned long start = iova, end = start + size - 1;
+	/*
+	 * If the new page is disjoint from the current range, sync the TLB
+	 * so that the gather structure can be rewritten.
+	 */
+ if (iommu_iotlb_gather_is_disjoint(gather, iova, size) && gather->pgsize)
+ iommu_iotlb_sync(domain, gather);
+
+ iommu_iotlb_gather_update_range(gather, iova, size);
+}
+static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
+ struct iommu_iotlb_gather *gather,
+ unsigned long iova, size_t size)
+{
/*
* If the new page is disjoint from the current range or is mapped at
* a different granularity, then sync the TLB so that the gather
* structure can be rewritten.
*/
- if (gather->pgsize != size ||
- end + 1 < gather->start || start > gather->end + 1) {
- if (gather->pgsize)
- iommu_iotlb_sync(domain, gather);
- gather->pgsize = size;
- }
-
- if (gather->end < end)
- gather->end = end;
+ if ((gather->pgsize != size ||
+ iommu_iotlb_gather_is_disjoint(gather, iova, size)) &&
+ gather->pgsize)
+ iommu_iotlb_sync(domain, gather);
- if (gather->start > start)
- gather->start = start;
+ iommu_iotlb_gather_update_range(gather, iova, size);
}
/* PCI device grouping function */
--
2.25.1
Thread overview: 13+ messages
2021-05-02 6:59 [PATCH 0/4] iommu/amd: Enable page-selective flushes Nadav Amit
2021-05-02 6:59 ` [PATCH 1/4] iommu/amd: Fix wrong parentheses on page-specific invalidations Nadav Amit
2021-05-18 9:23 ` Joerg Roedel
2021-05-31 20:11 ` Nadav Amit
2021-05-02 6:59 ` [PATCH 2/4] iommu/amd: Do not sync on page size changes Nadav Amit
2021-05-02 6:59 ` [PATCH 2/4] iommu/amd: Selective flush on unmap Nadav Amit
2021-05-02 6:59 ` [PATCH 3/4] iommu/amd: Do not sync on page size changes Nadav Amit
2021-06-01 15:59 ` Robin Murphy
2021-06-01 16:39 ` Nadav Amit
2021-06-01 17:27 ` Robin Murphy
2021-06-01 18:56 ` Nadav Amit [this message]
2021-05-02 7:00 ` [PATCH 3/4] iommu/amd: Selective flush on unmap Nadav Amit
2021-05-02 7:00 ` [PATCH 4/4] iommu/amd: Do not use flush-queue when NpCache is on Nadav Amit