* [PATCH v2 0/4] iommu/amd: Enable page-selective flushes
@ 2021-05-24 22:41 ` Nadav Amit
  0 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-05-24 22:41 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Nadav Amit, Will Deacon, Jiajun Cao, iommu, linux-kernel

From: Nadav Amit <namit@vmware.com>

The previous patch, commit 268aa4548277 ("iommu/amd: Page-specific
invalidations for more than one page"), was supposed to enable
page-selective IOTLB flushes on AMD.

The patch had an embarrassing bug, and I apologize for it.

Analyzing why this bug did not result in failures revealed additional
issues that caused most, if not all, of the IOTLB flushes not to be
page-selective ones.

The first patch corrects the bug from the previous commit. The
following patches enable page-selective invalidations, which that
commit alone did not achieve.

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org

---

v1->v2:
* Rebase on v5.13-rc3

Nadav Amit (4):
  iommu/amd: Fix wrong parentheses on page-specific invalidations
  iommu/amd: Selective flush on unmap
  iommu/amd: Do not sync on page size changes
  iommu/amd: Do not use flush-queue when NpCache is on

 drivers/iommu/amd/init.c  |  7 ++++++-
 drivers/iommu/amd/iommu.c | 18 +++++++++++++++---
 include/linux/iommu.h     |  3 ++-
 3 files changed, 23 insertions(+), 5 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 1/4] iommu/amd: Fix wrong parentheses on page-specific invalidations
  2021-05-24 22:41 ` Nadav Amit
@ 2021-05-24 22:41   ` Nadav Amit
  -1 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-05-24 22:41 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Nadav Amit, Will Deacon, Jiajun Cao, iommu, linux-kernel

From: Nadav Amit <namit@vmware.com>

The logic to determine the mask of page-specific invalidations was
tested in userspace. When the code was copied into the kernel, the
parentheses were mistakenly placed in the wrong position, resulting in
the wrong mask.

Fix it.
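
To illustrate the difference, here is a standalone snippet (not part of
the patch); msb_diff is just a hypothetical value standing in for the
one computed in build_inv_address():

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical value: the most significant differing bit is bit 16. */
	unsigned int msb_diff = 16;
	uint64_t address = 0;

	uint64_t wrong = address | (1ull << (msb_diff - 1)); /* sets only bit 15 -> 0x8000 */
	uint64_t right = address | ((1ull << msb_diff) - 1); /* sets bits 15..0  -> 0xffff */

	printf("wrong: %#" PRIx64 "\n", wrong);
	printf("right: %#" PRIx64 "\n", right);
	return 0;
}

Only the corrected form sets all the lower bits, as the comment in
build_inv_address() intends.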

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Fixes: 268aa4548277 ("iommu/amd: Page-specific invalidations for more than one page")
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 drivers/iommu/amd/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 80e8e1916dd1..6723cbcf4030 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -884,7 +884,7 @@ static inline u64 build_inv_address(u64 address, size_t size)
 		 * The msb-bit must be clear on the address. Just set all the
 		 * lower bits.
 		 */
-		address |= 1ull << (msb_diff - 1);
+		address |= (1ull << msb_diff) - 1;
 	}
 
 	/* Clear bits 11:0 */
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 2/4] iommu/amd: Selective flush on unmap
  2021-05-24 22:41 ` Nadav Amit
@ 2021-05-24 22:41   ` Nadav Amit
  -1 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-05-24 22:41 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Nadav Amit, Will Deacon, Jiajun Cao, iommu, linux-kernel

From: Nadav Amit <namit@vmware.com>

A recent patch attempted to enable selective page flushes on the AMD
IOMMU but neglected to adapt amd_iommu_iotlb_sync() to use the
selective flushes.

Adapt amd_iommu_iotlb_sync() to use selective flushes and change
amd_iommu_unmap() to collect the flushes. As a defensive measure, to
avoid potential issues like those that the Intel IOMMU driver
encountered recently, flush the page-walk caches by always setting the
"pde" parameter. This can be removed later.
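
For context, a simplified sketch of how the IOMMU core drives the
gather that amd_iommu_unmap() now fills and amd_iommu_iotlb_sync() now
consumes (abridged from the v5.13-era iommu_unmap() path; details may
differ):

size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
{
	struct iommu_iotlb_gather iotlb_gather;
	size_t ret;

	iommu_iotlb_gather_init(&iotlb_gather);		/* reset start/end/pgsize */
	ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
							/* driver ->unmap() gathers pages */
	iommu_iotlb_sync(domain, &iotlb_gather);	/* driver ->iotlb_sync() flushes */

	return ret;
}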

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 drivers/iommu/amd/iommu.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 6723cbcf4030..b8cabbbeed71 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2057,12 +2057,17 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
+	size_t r;
 
 	if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
 	    (domain->iop.mode == PAGE_MODE_NONE))
 		return 0;
 
-	return (ops->unmap) ? ops->unmap(ops, iova, page_size, gather) : 0;
+	r = (ops->unmap) ? ops->unmap(ops, iova, page_size, gather) : 0;
+
+	iommu_iotlb_gather_add_page(dom, gather, iova, page_size);
+
+	return r;
 }
 
 static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
@@ -2165,7 +2170,13 @@ static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
 static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
 				 struct iommu_iotlb_gather *gather)
 {
-	amd_iommu_flush_iotlb_all(domain);
+	struct protection_domain *dom = to_pdomain(domain);
+	unsigned long flags;
+
+	spin_lock_irqsave(&dom->lock, flags);
+	__domain_flush_pages(dom, gather->start, gather->end - gather->start, 1);
+	amd_iommu_domain_flush_complete(dom);
+	spin_unlock_irqrestore(&dom->lock, flags);
 }
 
 static int amd_iommu_def_domain_type(struct device *dev)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 3/4] iommu/amd: Do not sync on page size changes
  2021-05-24 22:41 ` Nadav Amit
@ 2021-05-24 22:41   ` Nadav Amit
  -1 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-05-24 22:41 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Nadav Amit, Will Deacon, Jiajun Cao, iommu, linux-kernel

From: Nadav Amit <namit@vmware.com>

Some IOMMU architectures perform invalidations regardless of the page
size. On such architectures there is no need to sync when the page size
changes or to take pgsize into account when deciding on an interim
flush in iommu_iotlb_gather_add_page().

Add an "ignore_gather_pgsize" property to iommu_ops so that each driver
can decide whether the gather's pgsize is tracked and triggers a flush.
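
To make the flag's effect concrete, here is a small standalone model of
the decision (illustration only, not the kernel implementation; the
names and values are made up):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the interim-flush decision in iommu_iotlb_gather_add_page(). */
struct gather { unsigned long start, end, pgsize; };

static bool needs_interim_sync(const struct gather *g, unsigned long iova,
			       unsigned long size, bool ignore_pgsize)
{
	unsigned long end = iova + size - 1;

	if (!g->pgsize)		/* nothing gathered yet */
		return false;

	/* Sync when the page size changes (unless ignored) or when the new
	 * range is disjoint from the range gathered so far. */
	return (g->pgsize != size && !ignore_pgsize) ||
	       end + 1 < g->start || iova > g->end + 1;
}

int main(void)
{
	/* A 4 KiB page already gathered at 0x1ff000, then an adjacent 2 MiB page. */
	struct gather g = { .start = 0x1ff000, .end = 0x1fffff, .pgsize = 0x1000 };

	printf("pgsize tracked: %d\n", needs_interim_sync(&g, 0x200000, 0x200000, false)); /* 1 */
	printf("pgsize ignored: %d\n", needs_interim_sync(&g, 0x200000, 0x200000, true));  /* 0 */
	return 0;
}

With ignore_gather_pgsize set, a page-size change alone no longer
forces an intermediate sync; only a disjoint range does.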

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 drivers/iommu/amd/iommu.c | 1 +
 include/linux/iommu.h     | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index b8cabbbeed71..1849b53f2149 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2215,6 +2215,7 @@ const struct iommu_ops amd_iommu_ops = {
 	.put_resv_regions = generic_iommu_put_resv_regions,
 	.is_attach_deferred = amd_iommu_is_attach_deferred,
 	.pgsize_bitmap	= AMD_IOMMU_PGSIZES,
+	.ignore_gather_pgsize = true,
 	.flush_iotlb_all = amd_iommu_flush_iotlb_all,
 	.iotlb_sync = amd_iommu_iotlb_sync,
 	.def_domain_type = amd_iommu_def_domain_type,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..1fb2695418e9 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -284,6 +284,7 @@ struct iommu_ops {
 	int (*def_domain_type)(struct device *dev);
 
 	unsigned long pgsize_bitmap;
+	bool ignore_gather_pgsize;
 	struct module *owner;
 };
 
@@ -508,7 +509,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 	 * a different granularity, then sync the TLB so that the gather
 	 * structure can be rewritten.
 	 */
-	if (gather->pgsize != size ||
+	if ((gather->pgsize != size && !domain->ops->ignore_gather_pgsize) ||
 	    end + 1 < gather->start || start > gather->end + 1) {
 		if (gather->pgsize)
 			iommu_iotlb_sync(domain, gather);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v2 4/4] iommu/amd: Do not use flush-queue when NpCache is on
  2021-05-24 22:41 ` Nadav Amit
@ 2021-05-24 22:41   ` Nadav Amit
  -1 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-05-24 22:41 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Nadav Amit, Will Deacon, Jiajun Cao, iommu, linux-kernel

From: Nadav Amit <namit@vmware.com>

Do not use the flush-queue in virtualized environments, where the
NpCache capability of the IOMMU is set. This is required to reduce
virtualization overheads.

This change follows a similar change to Intel's VT-d; a detailed
explanation of the rationale is given in commit 29b32839725f
("iommu/vt-d: Do not use flush-queue when caching-mode is on").

Cc: Joerg Roedel <joro@8bytes.org>
Cc: Will Deacon <will@kernel.org>
Cc: Jiajun Cao <caojiajun@vmware.com>
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 drivers/iommu/amd/init.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index d006724f4dc2..ba3b76ed776d 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -1850,8 +1850,13 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
 	if (ret)
 		return ret;
 
-	if (iommu->cap & (1UL << IOMMU_CAP_NPCACHE))
+	if (iommu->cap & (1UL << IOMMU_CAP_NPCACHE)) {
+		if (!amd_iommu_unmap_flush)
+			pr_warn_once("IOMMU batching is disabled due to virtualization");
+
 		amd_iommu_np_cache = true;
+		amd_iommu_unmap_flush = true;
+	}
 
 	init_iommu_perf_ctr(iommu);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/4] iommu/amd: Enable page-selective flushes
  2021-05-24 22:41 ` Nadav Amit
@ 2021-06-04 15:38   ` Joerg Roedel
  -1 siblings, 0 replies; 18+ messages in thread
From: Joerg Roedel @ 2021-06-04 15:38 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Nadav Amit, Will Deacon, Jiajun Cao, iommu, linux-kernel, Robin Murphy

Hi Nadav,

[Adding Robin]

On Mon, May 24, 2021 at 03:41:55PM -0700, Nadav Amit wrote:
> Nadav Amit (4):
>   iommu/amd: Fix wrong parentheses on page-specific invalidations

This patch is already upstream in v5.13-rc4. Please rebase to that
version.

>   iommu/amd: Selective flush on unmap
>   iommu/amd: Do not sync on page size changes
>   iommu/amd: Do not use flush-queue when NpCache is on

And I think there have been objections from Robin Murphy on Patch 3,
have those been worked out?

Regards,

	Joerg

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/4] iommu/amd: Enable page-selective flushes
  2021-06-04 15:38   ` Joerg Roedel
@ 2021-06-04 17:10     ` Nadav Amit
  -1 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-06-04 17:10 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Will Deacon, Jiajun Cao, iommu, linux-kernel, Robin Murphy



> On Jun 4, 2021, at 8:38 AM, Joerg Roedel <joro@8bytes.org> wrote:
> 
> Hi Nadav,
> 
> [Adding Robin]
> 
> On Mon, May 24, 2021 at 03:41:55PM -0700, Nadav Amit wrote:
>> Nadav Amit (4):
>>  iommu/amd: Fix wrong parentheses on page-specific invalidations
> 
> This patch is already upstream in v5.13-rc4. Please rebase to that
> version.

I guess it would be rc5 by the time I send it.

> 
>>  iommu/amd: Selective flush on unmap
>>  iommu/amd: Do not sync on page size changes
>>  iommu/amd: Do not use flush-queue when NpCache is on
> 
> And I think there have been objections from Robin Murphy on Patch 3,
> have those been worked out?

I am still waiting for Robin’s feedback on my proposed changes. If he does not respond soon, I will drop this patch for now.

Thanks,
Nadav

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/4] iommu/amd: Enable page-selective flushes
  2021-06-04 17:10     ` Nadav Amit
@ 2021-06-04 18:53       ` Robin Murphy
  -1 siblings, 0 replies; 18+ messages in thread
From: Robin Murphy @ 2021-06-04 18:53 UTC (permalink / raw)
  To: Nadav Amit, Joerg Roedel; +Cc: Will Deacon, Jiajun Cao, iommu, linux-kernel

On 2021-06-04 18:10, Nadav Amit wrote:
> 
> 
>> On Jun 4, 2021, at 8:38 AM, Joerg Roedel <joro@8bytes.org> wrote:
>>
>> Hi Nadav,
>>
>> [Adding Robin]
>>
>> On Mon, May 24, 2021 at 03:41:55PM -0700, Nadav Amit wrote:
>>> Nadav Amit (4):
>>>   iommu/amd: Fix wrong parentheses on page-specific invalidations
>>
>> This patch is already upstream in v5.13-rc4. Please rebase to that
>> version.
> 
> I guess it would be rc5 by the time I send it.
> 
>>
>>>   iommu/amd: Selective flush on unmap
>>>   iommu/amd: Do not sync on page size changes
>>>   iommu/amd: Do not use flush-queue when NpCache is on
>>
>> And I think there have been objections from Robin Murphy on Patch 3,
>> have those been worked out?
> 
> I am still waiting for Robin’s feedback on my proposed changes. If he does not respond soon, I will drop this patch for now.

Apologies, it feels like I've spent most of this week fighting fires,
and a great deal of email got skimmed and mentally filed under "nothing
so wrong that I need to respond immediately"...

FWIW I would have written the simpler patch below, but beyond that I
think it might start descending into bikeshedding - if you still prefer
your more comprehensive refactoring, or something in between, then don't
let my personal preference in style/complexity trade-offs stand in the
way of getting a useful functional change into the AMD driver. Whichever
way, though, I *am* now sold on the idea of having some kerneldoc to
clarify these things.

Thanks,
Robin.

----->8-----
From: Robin Murphy <robin.murphy@arm.com>
Subject: [PATCH] iommu: Improve iommu_iotlb_gather helpers

The Mediatek driver is not the only one which might want a basic
address-based gathering behaviour, so although it's arguably simple
enough to open-code, let's factor it out for the sake of cleanliness.
Let's also take this opportunity to document the intent of these
helpers for clarity.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
  drivers/iommu/mtk_iommu.c |  6 +-----
  include/linux/iommu.h     | 32 ++++++++++++++++++++++++++++++++
  2 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index e06b8a0e2b56..cd457487ce81 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -521,12 +521,8 @@ static size_t mtk_iommu_unmap(struct iommu_domain *domain,
  			      struct iommu_iotlb_gather *gather)
  {
  	struct mtk_iommu_domain *dom = to_mtk_domain(domain);
-	unsigned long end = iova + size - 1;
  
-	if (gather->start > iova)
-		gather->start = iova;
-	if (gather->end < end)
-		gather->end = end;
+	iommu_iotlb_gather_add_range(gather, iova, size);
  	return dom->iop->unmap(dom->iop, iova, size, gather);
  }
  
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..5f036e991937 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -497,6 +497,38 @@ static inline void iommu_iotlb_sync(struct iommu_domain *domain,
  	iommu_iotlb_gather_init(iotlb_gather);
  }
  
+/**
+ * iommu_iotlb_gather_add_range - Gather for address-based TLB invalidation
+ * @gather: TLB gather data
+ * @iova: start of page to invalidate
+ * @size: size of page to invalidate
+ *
+ * Helper for IOMMU drivers to build arbitrarily-sized invalidation commands
+ * where only the address range matters, and simply minimising intermediate
+ * syncs is preferred.
+ */
+static inline void iommu_iotlb_gather_add_range(struct iommu_iotlb_gather *gather,
+						unsigned long iova, size_t size)
+{
+	unsigned long end = iova + size - 1;
+
+	if (gather->start > iova)
+		gather->start = iova;
+	if (gather->end < end)
+		gather->end = end;
+}
+
+/**
+ * iommu_iotlb_gather_add_page - Gather for page-based TLB invalidation
+ * @domain: IOMMU domain to be invalidated
+ * @gather: TLB gather data
+ * @iova: start of page to invalidate
+ * @size: size of page to invalidate
+ *
+ * Helper for IOMMU drivers to build invalidation commands based on individual
+ * pages, or with page size/table level hints which cannot be gathered if they
+ * differ.
+ */
  static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
  					       struct iommu_iotlb_gather *gather,
  					       unsigned long iova, size_t size)
-- 
2.25.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 0/4] iommu/amd: Enable page-selective flushes
  2021-06-04 18:53       ` Robin Murphy
@ 2021-06-04 19:57         ` Nadav Amit
  -1 siblings, 0 replies; 18+ messages in thread
From: Nadav Amit @ 2021-06-04 19:57 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Joerg Roedel, Will Deacon, Jiajun Cao, iommu, linux-kernel



> On Jun 4, 2021, at 11:53 AM, Robin Murphy <robin.murphy@arm.com> wrote:
> 
> On 2021-06-04 18:10, Nadav Amit wrote:
>>> On Jun 4, 2021, at 8:38 AM, Joerg Roedel <joro@8bytes.org> wrote:
>>> 
>>> Hi Nadav,
>>> 
>>> [Adding Robin]
>>> 
>>> On Mon, May 24, 2021 at 03:41:55PM -0700, Nadav Amit wrote:
>>>> Nadav Amit (4):
>>>>  iommu/amd: Fix wrong parentheses on page-specific invalidations
>>> 
>>> This patch is already upstream in v5.13-rc4. Please rebase to that
>>> version.
>> I guess it would be rc5 by the time I send it.
>>> 
>>>>  iommu/amd: Selective flush on unmap
>>>>  iommu/amd: Do not sync on page size changes
>>>>  iommu/amd: Do not use flush-queue when NpCache is on
>>> 
>>> And I think there have been objections from Robin Murphy on Patch 3,
>>> have those been worked out?
>> I am still waiting for Robin’s feedback on my proposed changes. If he does not respond soon, I will drop this patch for now.
> 
> Apologies, it feels like I've spent most of this week fighting fires,
> and a great deal of email got skimmed and mentally filed under "nothing
> so wrong that I need to respond immediately"...
> 
> FWIW I would have written the simpler patch below, but beyond that I
> think it might start descending into bikeshedding - if you still prefer
> your more comprehensive refactoring, or something in between, then don't
> let my personal preference in style/complexity trade-offs stand in the
> way of getting a useful functional change into the AMD driver. Whichever
> way, though, I *am* now sold on the idea of having some kerneldoc to
> clarify these things.

Thanks, I appreciate your feedback.

I will add kerneldoc as you indicated.

I see you took some parts of the patch I did for MediaTek, but I think this is not good enough for AMD, since AMD's behavior should be different from MediaTek's - they have different needs:

MediaTek wants as few IOTLB flushes as possible, even if that results in flushing many irrelevant (unmodified) entries between start and end. That’s the reason it can just use iommu_iotlb_gather_update_range().

In contrast, for AMD we do not want to flush too many irrelevant entries, specifically when the IOMMU is virtualized. When an IOTLB flush is initiated by the VM, the hypervisor needs to scan the IOMMU page-tables for changes and synchronize them with the physical IOMMU. You don’t want this range to be too big, and that is the reason I needed iommu_iotlb_gather_is_disjoint().
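
A minimal sketch of what such a helper could look like, assuming it
sits next to the other gather helpers in include/linux/iommu.h and
follows the same adjacency convention as iommu_iotlb_gather_add_page()
(the actual proposal may differ in signature and details):

static inline bool
iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
			       unsigned long iova, size_t size)
{
	unsigned long start = iova, end = start + size - 1;

	/*
	 * An empty gather (nothing accumulated yet) is never disjoint;
	 * otherwise the new range is disjoint if it neither overlaps
	 * nor is adjacent to the accumulated range.
	 */
	return gather->end != 0 &&
	       (end + 1 < gather->start || start > gather->end + 1);
}

This keeps each flushed range tight while still merging overlapping or
adjacent unmaps.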

I will add documentation, since clearly this information was not conveyed well enough.

Thanks again,
Nadav

^ permalink raw reply	[flat|nested] 18+ messages in thread
