* [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
@ 2022-08-25  6:39 Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 1/9] iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback Vasant Hegde
                   ` (10 more replies)
  0 siblings, 11 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

This series introduces a new usage model for the v2 page table, in which
it can be used to implement DMA-API support by adopting the generic
IO page table framework.

One of the target use cases is nested IO page tables, where the guest
uses the guest IO page table (v2) to translate GVA to GPA, and the
hypervisor uses the host I/O page table (v1) to translate GPA to SPA.
This is a prerequisite for supporting the new HW-assisted vIOMMU
presented at KVM Forum 2020.

  https://static.sched.com/hosted_files/kvmforum2020/26/vIOMMU%20KVM%20Forum%202020.pdf
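The two-stage translation can be sketched as a toy model (purely
illustrative: the flat tables and function names below are invented for
this sketch and are not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Toy single-level "page tables": index = page number, 4K pages. */
#define TOY_ENTRIES 16

static uint64_t translate(const uint64_t *table, uint64_t addr)
{
	uint64_t idx = (addr >> 12) % TOY_ENTRIES;

	return table[idx] | (addr & 0xfff);	/* keep the page offset */
}

static uint64_t nested_translate(const uint64_t *s1, const uint64_t *s2,
				 uint64_t gva)
{
	uint64_t gpa = translate(s1, gva);	/* guest v2 table: GVA -> GPA */

	return translate(s2, gpa);		/* host v1 table: GPA -> SPA */
}
```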

The following components are introduced in this series:

- Part 1 (patch 1-3)
  Move the AMD v1 page table to the [un]map_pages interfaces, with minor
  fixes to the unmap path.

- Part 2 (patch 4-5 and 8)
  Refactor the current IOMMU page table code to adopt the generic IO page
  table framework, and add AMD IOMMU Guest (v2) page table management code.

- Part 3 (patch 7)
  Add support for the AMD IOMMU Guest IO Protection feature (GIOV),
  where requests from an I/O device without a PASID are treated as
  if they have a PASID of 0.

- Part 4 (patch 9)
  Introduce a new "amd_iommu_pgtable" command-line option to allow users
  to select the mode of operation (v1 or v2).
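A minimal sketch of how such an option string could map to a mode,
assuming "v1"/"v2" as the accepted values (the enum and helper names
below are invented for illustration, not the driver's):

```c
#include <assert.h>
#include <string.h>

enum amd_pgtable_mode { AMD_PGTABLE_V1 = 1, AMD_PGTABLE_V2 = 2 };

/* Hypothetical parser for amd_iommu_pgtable=v1|v2; -1 on bad input. */
static int parse_amd_iommu_pgtable(const char *arg)
{
	if (strcmp(arg, "v1") == 0)
		return AMD_PGTABLE_V1;
	if (strcmp(arg, "v2") == 0)
		return AMD_PGTABLE_V2;
	return -1;
}
```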

See the AMD I/O Virtualization Technology Specification for more details.

  http://www.amd.com/system/files/TechDocs/48882_IOMMU_3.05_PUB.pdf


Thanks,
Vasant

Changes from v2 -> v3:
  - As Robin suggested, added support for the [un]map_pages interface (patches 1-3)
  - Removed the [un]map interface from the AMD IOMMU driver
  - Updated the v2 page table to use the [un]map_pages interface

V2 patchset: https://lore.kernel.org/linux-iommu/20220713053034.12061-1-vasant.hegde@amd.com/

Changes from v1 -> v2:
  - Allow the v2 page table only when FEATURE_GT is enabled
  - The v2 page table doesn't support IOMMU passthrough mode, so a check
    was added to fall back to v1 mode if the system is booted with iommu=pt
  - Overloaded the amd_iommu command-line option (removed the
    amd_iommu_pgtable option)
  - Override supported page sizes dynamically instead of selecting them at
    boot time.

V1 patchset : https://lore.kernel.org/linux-iommu/20220603112107.8603-1-vasant.hegde@amd.com/T/#t

Changes from RFC -> v1:
  - Addressed review comments from Joerg
  - Reimplemented v2 page table

RFC patchset : https://lore.kernel.org/linux-iommu/20210312090411.6030-1-suravee.suthikulpanit@amd.com/T/#t


Suravee Suthikulpanit (4):
  iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
  iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
  iommu/amd: Add support for Guest IO protection
  iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API

Vasant Hegde (5):
  iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback
  iommu/amd/io-pgtable: Implement unmap_pages io_pgtable_ops callback
  iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support
  iommu/amd: Initial support for AMD IOMMU v2 page table
  iommu/amd: Add command-line option to enable different page table

 .../admin-guide/kernel-parameters.txt         |   2 +
 drivers/iommu/amd/Makefile                    |   2 +-
 drivers/iommu/amd/amd_iommu_types.h           |   8 +-
 drivers/iommu/amd/init.c                      |  36 +-
 drivers/iommu/amd/io_pgtable.c                |  76 ++--
 drivers/iommu/amd/io_pgtable_v2.c             | 415 ++++++++++++++++++
 drivers/iommu/amd/iommu.c                     | 117 +++--
 drivers/iommu/io-pgtable.c                    |   1 +
 include/linux/io-pgtable.h                    |   2 +
 9 files changed, 584 insertions(+), 75 deletions(-)
 create mode 100644 drivers/iommu/amd/io_pgtable_v2.c

-- 
2.31.1



* [PATCH v3 1/9] iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 2/9] iommu/amd/io-pgtable: Implement unmap_pages " Vasant Hegde
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

Implement the io_pgtable_ops->map_pages() callback for the AMD driver
and deprecate the io_pgtable_ops->map callback.
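The general shape of a map_pages() callback — mapping pgcount pages of
pgsize each and reporting progress through *mapped — can be sketched
outside the driver with a toy PTE array (all names below are
illustrative, not the driver's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_PTES 64

/* Toy map_pages(): one PTE per 4K page, returns 0 on success. */
static int toy_map_pages(uint64_t *ptes, unsigned long iova, uint64_t paddr,
			 size_t pgsize, size_t pgcount, size_t *mapped)
{
	while (pgcount--) {
		size_t idx = (iova / pgsize) % TOY_PTES;

		ptes[idx] = paddr | 1;	/* set the "present" bit */
		iova  += pgsize;
		paddr += pgsize;
		if (mapped)
			*mapped += pgsize;	/* report progress to caller */
	}
	return 0;
}
```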

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
 drivers/iommu/amd/io_pgtable.c | 59 ++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 25 deletions(-)

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index 7d4b61e5db47..df7799317f66 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -360,8 +360,9 @@ static void free_clear_pte(u64 *pte, u64 pteval, struct list_head *freelist)
  * supporting all features of AMD IOMMU page tables like level skipping
  * and full 64 bit address spaces.
  */
-static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
-			  phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			      int prot, gfp_t gfp, size_t *mapped)
 {
 	struct protection_domain *dom = io_pgtable_ops_to_domain(ops);
 	LIST_HEAD(freelist);
@@ -369,39 +370,47 @@ static int iommu_v1_map_page(struct io_pgtable_ops *ops, unsigned long iova,
 	u64 __pte, *pte;
 	int ret, i, count;
 
-	BUG_ON(!IS_ALIGNED(iova, size));
-	BUG_ON(!IS_ALIGNED(paddr, size));
+	BUG_ON(!IS_ALIGNED(iova, pgsize));
+	BUG_ON(!IS_ALIGNED(paddr, pgsize));
 
 	ret = -EINVAL;
 	if (!(prot & IOMMU_PROT_MASK))
 		goto out;
 
-	count = PAGE_SIZE_PTE_COUNT(size);
-	pte   = alloc_pte(dom, iova, size, NULL, gfp, &updated);
+	while (pgcount > 0) {
+		count = PAGE_SIZE_PTE_COUNT(pgsize);
+		pte   = alloc_pte(dom, iova, pgsize, NULL, gfp, &updated);
 
-	ret = -ENOMEM;
-	if (!pte)
-		goto out;
+		ret = -ENOMEM;
+		if (!pte)
+			goto out;
 
-	for (i = 0; i < count; ++i)
-		free_clear_pte(&pte[i], pte[i], &freelist);
+		for (i = 0; i < count; ++i)
+			free_clear_pte(&pte[i], pte[i], &freelist);
 
-	if (!list_empty(&freelist))
-		updated = true;
+		if (!list_empty(&freelist))
+			updated = true;
 
-	if (count > 1) {
-		__pte = PAGE_SIZE_PTE(__sme_set(paddr), size);
-		__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_PR | IOMMU_PTE_FC;
-	} else
-		__pte = __sme_set(paddr) | IOMMU_PTE_PR | IOMMU_PTE_FC;
+		if (count > 1) {
+			__pte = PAGE_SIZE_PTE(__sme_set(paddr), pgsize);
+			__pte |= PM_LEVEL_ENC(7) | IOMMU_PTE_PR | IOMMU_PTE_FC;
+		} else
+			__pte = __sme_set(paddr) | IOMMU_PTE_PR | IOMMU_PTE_FC;
 
-	if (prot & IOMMU_PROT_IR)
-		__pte |= IOMMU_PTE_IR;
-	if (prot & IOMMU_PROT_IW)
-		__pte |= IOMMU_PTE_IW;
+		if (prot & IOMMU_PROT_IR)
+			__pte |= IOMMU_PTE_IR;
+		if (prot & IOMMU_PROT_IW)
+			__pte |= IOMMU_PTE_IW;
 
-	for (i = 0; i < count; ++i)
-		pte[i] = __pte;
+		for (i = 0; i < count; ++i)
+			pte[i] = __pte;
+
+		iova  += pgsize;
+		paddr += pgsize;
+		pgcount--;
+		if (mapped)
+			*mapped += pgsize;
+	}
 
 	ret = 0;
 
@@ -514,7 +523,7 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
 	cfg->oas            = IOMMU_OUT_ADDR_BIT_SIZE,
 	cfg->tlb            = &v1_flush_ops;
 
-	pgtable->iop.ops.map          = iommu_v1_map_page;
+	pgtable->iop.ops.map_pages    = iommu_v1_map_pages;
 	pgtable->iop.ops.unmap        = iommu_v1_unmap_page;
 	pgtable->iop.ops.iova_to_phys = iommu_v1_iova_to_phys;
 
-- 
2.31.1



* [PATCH v3 2/9] iommu/amd/io-pgtable: Implement unmap_pages io_pgtable_ops callback
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 1/9] iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 3/9] iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support Vasant Hegde
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

Implement the io_pgtable_ops->unmap_pages() callback for the AMD driver
and deprecate the io_pgtable_ops->unmap callback.

Also, if fetch_pte() returns NULL, return from unmap_pages() instead of
trying to continue unmapping the remaining pages.
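That early-return behaviour can be sketched standalone (a toy lookup
that fails past a limit stands in for fetch_pte(); all names here are
illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Toy unmap: each "PTE" covers 4K; lookups at or past max_iova fail,
 * as if fetch_pte() returned NULL. */
static size_t toy_unmap_pages(unsigned long iova, size_t pgsize,
			      size_t pgcount, unsigned long max_iova)
{
	size_t unmapped = 0;

	while (pgcount--) {
		if (iova >= max_iova)
			return unmapped;	/* stop, report partial progress */
		iova += pgsize;
		unmapped += pgsize;
	}
	return unmapped;
}
```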

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
 drivers/iommu/amd/io_pgtable.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
index df7799317f66..ace0e9b8b913 100644
--- a/drivers/iommu/amd/io_pgtable.c
+++ b/drivers/iommu/amd/io_pgtable.c
@@ -435,17 +435,18 @@ static int iommu_v1_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
 	return ret;
 }
 
-static unsigned long iommu_v1_unmap_page(struct io_pgtable_ops *ops,
-				      unsigned long iova,
-				      size_t size,
-				      struct iommu_iotlb_gather *gather)
+static unsigned long iommu_v1_unmap_pages(struct io_pgtable_ops *ops,
+					  unsigned long iova,
+					  size_t pgsize, size_t pgcount,
+					  struct iommu_iotlb_gather *gather)
 {
 	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
 	unsigned long long unmapped;
 	unsigned long unmap_size;
 	u64 *pte;
+	size_t size = pgcount << __ffs(pgsize);
 
-	BUG_ON(!is_power_of_2(size));
+	BUG_ON(!is_power_of_2(pgsize));
 
 	unmapped = 0;
 
@@ -457,14 +458,14 @@ static unsigned long iommu_v1_unmap_page(struct io_pgtable_ops *ops,
 			count = PAGE_SIZE_PTE_COUNT(unmap_size);
 			for (i = 0; i < count; i++)
 				pte[i] = 0ULL;
+		} else {
+			return unmapped;
 		}
 
 		iova = (iova & ~(unmap_size - 1)) + unmap_size;
 		unmapped += unmap_size;
 	}
 
-	BUG_ON(unmapped && !is_power_of_2(unmapped));
-
 	return unmapped;
 }
 
@@ -524,7 +525,7 @@ static struct io_pgtable *v1_alloc_pgtable(struct io_pgtable_cfg *cfg, void *coo
 	cfg->tlb            = &v1_flush_ops;
 
 	pgtable->iop.ops.map_pages    = iommu_v1_map_pages;
-	pgtable->iop.ops.unmap        = iommu_v1_unmap_page;
+	pgtable->iop.ops.unmap_pages  = iommu_v1_unmap_pages;
 	pgtable->iop.ops.iova_to_phys = iommu_v1_iova_to_phys;
 
 	return &pgtable->iop;
-- 
2.31.1



* [PATCH v3 3/9] iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 1/9] iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 2/9] iommu/amd/io-pgtable: Implement unmap_pages " Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 4/9] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking Vasant Hegde
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

Implement the map_pages() and unmap_pages() callbacks for the AMD IOMMU
driver to allow the IOMMU core to map and unmap multiple pages in one
call. Also deprecate the map()/unmap() callbacks.

Finally, since the gather is not updated by iommu_v1_unmap_pages(), pass
NULL instead of the gather pointer to iommu_v1_unmap_pages().

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
 drivers/iommu/amd/iommu.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index f287dca85990..c3733884377b 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2174,13 +2174,13 @@ static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 
-	if (ops->map)
+	if (ops->map_pages)
 		domain_flush_np_cache(domain, iova, size);
 }
 
-static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
-			 phys_addr_t paddr, size_t page_size, int iommu_prot,
-			 gfp_t gfp)
+static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
+			       phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			       int iommu_prot, gfp_t gfp, size_t *mapped)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
@@ -2196,8 +2196,10 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	if (ops->map)
-		ret = ops->map(ops, iova, paddr, page_size, prot, gfp);
+	if (ops->map_pages) {
+		ret = ops->map_pages(ops, iova, paddr, pgsize,
+				     pgcount, prot, gfp, mapped);
+	}
 
 	return ret;
 }
@@ -2223,9 +2225,9 @@ static void amd_iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 	iommu_iotlb_gather_add_range(gather, iova, size);
 }
 
-static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
-			      size_t page_size,
-			      struct iommu_iotlb_gather *gather)
+static size_t amd_iommu_unmap_pages(struct iommu_domain *dom, unsigned long iova,
+				    size_t pgsize, size_t pgcount,
+				    struct iommu_iotlb_gather *gather)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
@@ -2235,9 +2237,10 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 	    (domain->iop.mode == PAGE_MODE_NONE))
 		return 0;
 
-	r = (ops->unmap) ? ops->unmap(ops, iova, page_size, gather) : 0;
+	r = (ops->unmap_pages) ? ops->unmap_pages(ops, iova, pgsize, pgcount, NULL) : 0;
 
-	amd_iommu_iotlb_gather_add_page(dom, gather, iova, page_size);
+	if (r)
+		amd_iommu_iotlb_gather_add_page(dom, gather, iova, r);
 
 	return r;
 }
@@ -2400,8 +2403,8 @@ const struct iommu_ops amd_iommu_ops = {
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= amd_iommu_attach_device,
 		.detach_dev	= amd_iommu_detach_device,
-		.map		= amd_iommu_map,
-		.unmap		= amd_iommu_unmap,
+		.map_pages	= amd_iommu_map_pages,
+		.unmap_pages	= amd_iommu_unmap_pages,
 		.iotlb_sync_map	= amd_iommu_iotlb_sync_map,
 		.iova_to_phys	= amd_iommu_iova_to_phys,
 		.flush_iotlb_all = amd_iommu_flush_iotlb_all,
-- 
2.31.1



* [PATCH v3 4/9] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (2 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 3/9] iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 5/9] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table Vasant Hegde
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

The current function to enable IOMMU v2 also locks the domain.
In order to reuse the same code in code paths where the domain has
already been locked, refactor the function to separate the locking
from the enabling logic.
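The GCR3 level computation used by this patch (each level resolves 9
PASID bits, i.e. 512 table entries per level) can be exercised
standalone:

```c
#include <assert.h>

/* Same loop as domain_enable_v2(): number of extra GCR3 table levels
 * needed beyond the first 512-entry table for a given PASID count. */
static int gcr3_levels(int pasids)
{
	int levels;

	for (levels = 0; (pasids - 1) & ~0x1ff; pasids >>= 9)
		levels += 1;

	return levels;
}
```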

Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
Changes in v2:
  - Remove unused has_ppr function param.

 drivers/iommu/amd/iommu.c | 46 +++++++++++++++++++++++----------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index c3733884377b..1f8067615386 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -85,6 +85,7 @@ struct iommu_cmd {
 struct kmem_cache *amd_iommu_irq_cache;
 
 static void detach_device(struct device *dev);
+static int domain_enable_v2(struct protection_domain *domain, int pasids);
 
 /****************************************************************************
  *
@@ -2451,11 +2452,10 @@ void amd_iommu_domain_direct_map(struct iommu_domain *dom)
 }
 EXPORT_SYMBOL(amd_iommu_domain_direct_map);
 
-int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
+/* Note: This function expects iommu_domain->lock to be held prior calling the function. */
+static int domain_enable_v2(struct protection_domain *domain, int pasids)
 {
-	struct protection_domain *domain = to_pdomain(dom);
-	unsigned long flags;
-	int levels, ret;
+	int levels;
 
 	/* Number of GCR3 table levels required */
 	for (levels = 0; (pasids - 1) & ~0x1ff; pasids >>= 9)
@@ -2464,7 +2464,25 @@ int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
 	if (levels > amd_iommu_max_glx_val)
 		return -EINVAL;
 
-	spin_lock_irqsave(&domain->lock, flags);
+	domain->gcr3_tbl = (void *)get_zeroed_page(GFP_ATOMIC);
+	if (domain->gcr3_tbl == NULL)
+		return -ENOMEM;
+
+	domain->glx      = levels;
+	domain->flags   |= PD_IOMMUV2_MASK;
+
+	amd_iommu_domain_update(domain);
+
+	return 0;
+}
+
+int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
+{
+	struct protection_domain *pdom = to_pdomain(dom);
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&pdom->lock, flags);
 
 	/*
 	 * Save us all sanity checks whether devices already in the
@@ -2472,24 +2490,14 @@ int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
 	 * devices attached when it is switched into IOMMUv2 mode.
 	 */
 	ret = -EBUSY;
-	if (domain->dev_cnt > 0 || domain->flags & PD_IOMMUV2_MASK)
-		goto out;
-
-	ret = -ENOMEM;
-	domain->gcr3_tbl = (void *)get_zeroed_page(GFP_ATOMIC);
-	if (domain->gcr3_tbl == NULL)
+	if (pdom->dev_cnt > 0 || pdom->flags & PD_IOMMUV2_MASK)
 		goto out;
 
-	domain->glx      = levels;
-	domain->flags   |= PD_IOMMUV2_MASK;
-
-	amd_iommu_domain_update(domain);
-
-	ret = 0;
+	if (!pdom->gcr3_tbl)
+		ret = domain_enable_v2(pdom, pasids);
 
 out:
-	spin_unlock_irqrestore(&domain->lock, flags);
-
+	spin_unlock_irqrestore(&pdom->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(amd_iommu_domain_enable_v2);
-- 
2.31.1



* [PATCH v3 5/9] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (3 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 4/9] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 6/9] iommu/amd: Initial support for AMD IOMMU v2 page table Vasant Hegde
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Currently, PPR/ATS can be enabled only if the domain type is identity
mapping. However, when allowing the IOMMU v2 page table to be used for
DMA-API, this check is no longer valid.

Update the sanity check so that it applies only when using the
AMD_IOMMU_V1 page table mode.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
 drivers/iommu/amd/iommu.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 1f8067615386..7d52ecea0298 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1694,7 +1694,7 @@ static void pdev_iommuv2_disable(struct pci_dev *pdev)
 	pci_disable_pasid(pdev);
 }
 
-static int pdev_iommuv2_enable(struct pci_dev *pdev)
+static int pdev_pri_ats_enable(struct pci_dev *pdev)
 {
 	int ret;
 
@@ -1757,11 +1757,19 @@ static int attach_device(struct device *dev,
 		struct iommu_domain *def_domain = iommu_get_dma_domain(dev);
 
 		ret = -EINVAL;
-		if (def_domain->type != IOMMU_DOMAIN_IDENTITY)
+
+		/*
+		 * In case of using AMD_IOMMU_V1 page table mode and the device
+		 * is enabling for PPR/ATS support (using v2 table),
+		 * we need to make sure that the domain type is identity map.
+		 */
+		if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
+		    def_domain->type != IOMMU_DOMAIN_IDENTITY) {
 			goto out;
+		}
 
 		if (dev_data->iommu_v2) {
-			if (pdev_iommuv2_enable(pdev) != 0)
+			if (pdev_pri_ats_enable(pdev) != 0)
 				goto out;
 
 			dev_data->ats.enabled = true;
-- 
2.31.1



* [PATCH v3 6/9] iommu/amd: Initial support for AMD IOMMU v2 page table
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (4 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 5/9] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 7/9] iommu/amd: Add support for Guest IO protection Vasant Hegde
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

Introduce IO page table framework support for the AMD IOMMU v2 page
table. This patch implements a 4-level page table within the AMD IOMMU
driver and supports 4K/2M/1G page sizes.
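Assuming 9 index bits per level on top of a 4K (12-bit) page offset, as
the driver's PM_LEVEL_INDEX() does, the per-level index extraction and
the leaf level for each page size can be sketched as follows. Note this
sketch numbers levels from 0 at the leaf, unlike the driver's
PAGE_MODE_* constants:

```c
#include <assert.h>
#include <stdint.h>

/* 9 translation bits per level on top of a 4K page offset. */
static unsigned int level_index(uint64_t iova, int level)
{
	return (iova >> (12 + 9 * level)) & 0x1ff;
}

/* Leaf level for each supported page size: 4K, 2M, 1G. */
static int leaf_level(uint64_t pgsize)
{
	if (pgsize == (1ULL << 30))	/* 1G: leaf two levels up */
		return 2;
	if (pgsize == (1ULL << 21))	/* 2M: leaf one level up */
		return 1;
	return 0;			/* 4K */
}
```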

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
Changes in v3:
  - Replaced the [un]map interface with the [un]map_pages interface
  - Added error handling support in unmap_pages()

 drivers/iommu/amd/Makefile          |   2 +-
 drivers/iommu/amd/amd_iommu_types.h |   5 +-
 drivers/iommu/amd/io_pgtable_v2.c   | 415 ++++++++++++++++++++++++++++
 drivers/iommu/io-pgtable.c          |   1 +
 include/linux/io-pgtable.h          |   2 +
 5 files changed, 423 insertions(+), 2 deletions(-)
 create mode 100644 drivers/iommu/amd/io_pgtable_v2.c

diff --git a/drivers/iommu/amd/Makefile b/drivers/iommu/amd/Makefile
index a935f8f4b974..773d8aa00283 100644
--- a/drivers/iommu/amd/Makefile
+++ b/drivers/iommu/amd/Makefile
@@ -1,4 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_AMD_IOMMU) += iommu.o init.o quirks.o io_pgtable.o
+obj-$(CONFIG_AMD_IOMMU) += iommu.o init.o quirks.o io_pgtable.o io_pgtable_v2.o
 obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += debugfs.o
 obj-$(CONFIG_AMD_IOMMU_V2) += iommu_v2.o
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 3c1205ba636a..4f94682c1350 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -269,6 +269,8 @@
  * 512GB Pages are not supported due to a hardware bug
  */
 #define AMD_IOMMU_PGSIZES	((~0xFFFUL) & ~(2ULL << 38))
+/* 4K, 2MB, 1G page sizes are supported */
+#define AMD_IOMMU_PGSIZES_V2	(PAGE_SIZE | (1ULL << 21) | (1ULL << 30))
 
 /* Bit value definition for dte irq remapping fields*/
 #define DTE_IRQ_PHYS_ADDR_MASK	(((1ULL << 45)-1) << 6)
@@ -519,7 +521,8 @@ struct amd_io_pgtable {
 	struct io_pgtable	iop;
 	int			mode;
 	u64			*root;
-	atomic64_t		pt_root;    /* pgtable root and pgtable mode */
+	atomic64_t		pt_root;	/* pgtable root and pgtable mode */
+	u64			*pgd;		/* v2 pgtable pgd pointer */
 };
 
 /*
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
new file mode 100644
index 000000000000..8638ddf6fb3b
--- /dev/null
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -0,0 +1,415 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * CPU-agnostic AMD IO page table v2 allocator.
+ *
+ * Copyright (C) 2022 Advanced Micro Devices, Inc.
+ * Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
+ * Author: Vasant Hegde <vasant.hegde@amd.com>
+ */
+
+#define pr_fmt(fmt)	"AMD-Vi: " fmt
+#define dev_fmt(fmt)	pr_fmt(fmt)
+
+#include <linux/bitops.h>
+#include <linux/io-pgtable.h>
+#include <linux/kernel.h>
+
+#include <asm/barrier.h>
+
+#include "amd_iommu_types.h"
+#include "amd_iommu.h"
+
+#define IOMMU_PAGE_PRESENT	BIT_ULL(0)	/* Is present */
+#define IOMMU_PAGE_RW		BIT_ULL(1)	/* Writeable */
+#define IOMMU_PAGE_USER		BIT_ULL(2)	/* Userspace addressable */
+#define IOMMU_PAGE_PWT		BIT_ULL(3)	/* Page write through */
+#define IOMMU_PAGE_PCD		BIT_ULL(4)	/* Page cache disabled */
+#define IOMMU_PAGE_ACCESS	BIT_ULL(5)	/* Was accessed (updated by IOMMU) */
+#define IOMMU_PAGE_DIRTY	BIT_ULL(6)	/* Was written to (updated by IOMMU) */
+#define IOMMU_PAGE_PSE		BIT_ULL(7)	/* Page Size Extensions */
+#define IOMMU_PAGE_NX		BIT_ULL(63)	/* No execute */
+
+#define MAX_PTRS_PER_PAGE	512
+
+#define IOMMU_PAGE_SIZE_2M	BIT_ULL(21)
+#define IOMMU_PAGE_SIZE_1G	BIT_ULL(30)
+
+
+static inline int get_pgtable_level(void)
+{
+	/* 5 level page table is not supported */
+	return PAGE_MODE_4_LEVEL;
+}
+
+static inline bool is_large_pte(u64 pte)
+{
+	return (pte & IOMMU_PAGE_PSE);
+}
+
+static inline void *alloc_pgtable_page(void)
+{
+	return (void *)get_zeroed_page(GFP_KERNEL);
+}
+
+static inline u64 set_pgtable_attr(u64 *page)
+{
+	u64 prot;
+
+	prot = IOMMU_PAGE_PRESENT | IOMMU_PAGE_RW | IOMMU_PAGE_USER;
+	prot |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
+
+	return (iommu_virt_to_phys(page) | prot);
+}
+
+static inline void *get_pgtable_pte(u64 pte)
+{
+	return iommu_phys_to_virt(pte & PM_ADDR_MASK);
+}
+
+static u64 set_pte_attr(u64 paddr, u64 pg_size, int prot)
+{
+	u64 pte;
+
+	pte = __sme_set(paddr & PM_ADDR_MASK);
+	pte |= IOMMU_PAGE_PRESENT | IOMMU_PAGE_USER;
+	pte |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
+
+	if (prot & IOMMU_PROT_IW)
+		pte |= IOMMU_PAGE_RW;
+
+	/* Large page */
+	if (pg_size == IOMMU_PAGE_SIZE_1G || pg_size == IOMMU_PAGE_SIZE_2M)
+		pte |= IOMMU_PAGE_PSE;
+
+	return pte;
+}
+
+static inline u64 get_alloc_page_size(u64 size)
+{
+	if (size >= IOMMU_PAGE_SIZE_1G)
+		return IOMMU_PAGE_SIZE_1G;
+
+	if (size >= IOMMU_PAGE_SIZE_2M)
+		return IOMMU_PAGE_SIZE_2M;
+
+	return PAGE_SIZE;
+}
+
+static inline int page_size_to_level(u64 pg_size)
+{
+	if (pg_size == IOMMU_PAGE_SIZE_1G)
+		return PAGE_MODE_3_LEVEL;
+	if (pg_size == IOMMU_PAGE_SIZE_2M)
+		return PAGE_MODE_2_LEVEL;
+
+	return PAGE_MODE_1_LEVEL;
+}
+
+static inline void free_pgtable_page(u64 *pt)
+{
+	free_page((unsigned long)pt);
+}
+
+static void free_pgtable(u64 *pt, int level)
+{
+	u64 *p;
+	int i;
+
+	for (i = 0; i < MAX_PTRS_PER_PAGE; i++) {
+		/* PTE present? */
+		if (!IOMMU_PTE_PRESENT(pt[i]))
+			continue;
+
+		if (is_large_pte(pt[i]))
+			continue;
+
+		/*
+		 * Free the next level. No need to look at l1 tables here since
+		 * they can only contain leaf PTEs; just free them directly.
+		 */
+		p = get_pgtable_pte(pt[i]);
+		if (level > 2)
+			free_pgtable(p, level - 1);
+		else
+			free_pgtable_page(p);
+	}
+
+	free_pgtable_page(pt);
+}
+
+/* Allocate page table */
+static u64 *v2_alloc_pte(u64 *pgd, unsigned long iova,
+			 unsigned long pg_size, bool *updated)
+{
+	u64 *pte, *page;
+	int level, end_level;
+
+	level = get_pgtable_level() - 1;
+	end_level = page_size_to_level(pg_size);
+	pte = &pgd[PM_LEVEL_INDEX(level, iova)];
+	iova = PAGE_SIZE_ALIGN(iova, PAGE_SIZE);
+
+	while (level >= end_level) {
+		u64 __pte, __npte;
+
+		__pte = *pte;
+
+		if (IOMMU_PTE_PRESENT(__pte) && is_large_pte(__pte)) {
+			/* Unmap large pte */
+			cmpxchg64(pte, *pte, 0ULL);
+			*updated = true;
+			continue;
+		}
+
+		if (!IOMMU_PTE_PRESENT(__pte)) {
+			page = alloc_pgtable_page();
+			if (!page)
+				return NULL;
+
+			__npte = set_pgtable_attr(page);
+			/* pte could have been changed somewhere. */
+			if (cmpxchg64(pte, __pte, __npte) != __pte)
+				free_pgtable_page(page);
+			else if (IOMMU_PTE_PRESENT(__pte))
+				*updated = true;
+
+			continue;
+		}
+
+		level -= 1;
+		pte = get_pgtable_pte(__pte);
+		pte = &pte[PM_LEVEL_INDEX(level, iova)];
+	}
+
+	/* Tear down existing pte entries */
+	if (IOMMU_PTE_PRESENT(*pte)) {
+		u64 *__pte;
+
+		*updated = true;
+		__pte = get_pgtable_pte(*pte);
+		cmpxchg64(pte, *pte, 0ULL);
+		if (pg_size == IOMMU_PAGE_SIZE_1G)
+			free_pgtable(__pte, end_level - 1);
+		else if (pg_size == IOMMU_PAGE_SIZE_2M)
+			free_pgtable_page(__pte);
+	}
+
+	return pte;
+}
+
+/*
+ * This function checks if there is a PTE for a given dma address.
+ * If there is one, it returns the pointer to it.
+ */
+static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
+		      unsigned long iova, unsigned long *page_size)
+{
+	u64 *pte;
+	int level;
+
+	level = get_pgtable_level() - 1;
+	pte = &pgtable->pgd[PM_LEVEL_INDEX(level, iova)];
+	/* Default page size is 4K */
+	*page_size = PAGE_SIZE;
+
+	while (level) {
+		/* Not present */
+		if (!IOMMU_PTE_PRESENT(*pte))
+			return NULL;
+
+		/* Walk to the next level */
+		pte = get_pgtable_pte(*pte);
+		pte = &pte[PM_LEVEL_INDEX(level - 1, iova)];
+
+		/* Large page */
+		if (is_large_pte(*pte)) {
+			if (level == PAGE_MODE_3_LEVEL)
+				*page_size = IOMMU_PAGE_SIZE_1G;
+			else if (level == PAGE_MODE_2_LEVEL)
+				*page_size = IOMMU_PAGE_SIZE_2M;
+			else
+				return NULL;	/* Wrongly set PSE bit in PTE */
+
+			break;
+		}
+
+		level -= 1;
+	}
+
+	return pte;
+}
+
+static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+			      phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			      int prot, gfp_t gfp, size_t *mapped)
+{
+	struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
+	struct io_pgtable_cfg *cfg = &pdom->iop.iop.cfg;
+	u64 *pte;
+	unsigned long map_size;
+	unsigned long mapped_size = 0;
+	unsigned long o_iova = iova;
+	size_t size = pgcount << __ffs(pgsize);
+	int count = 0;
+	int ret = 0;
+	bool updated = false;
+
+	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize) || !pgcount)
+		return -EINVAL;
+
+	if (!(prot & IOMMU_PROT_MASK))
+		return -EINVAL;
+
+	while (mapped_size < size) {
+		map_size = get_alloc_page_size(pgsize);
+		pte = v2_alloc_pte(pdom->iop.pgd, iova, map_size, &updated);
+		if (!pte) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		*pte = set_pte_attr(paddr, map_size, prot);
+
+		count++;
+		iova += map_size;
+		paddr += map_size;
+		mapped_size += map_size;
+	}
+
+out:
+	if (updated) {
+		if (count > 1)
+			amd_iommu_flush_tlb(&pdom->domain, 0);
+		else
+			amd_iommu_flush_page(&pdom->domain, 0, o_iova);
+	}
+
+	if (mapped)
+		*mapped += mapped_size;
+
+	return ret;
+}
+
+static unsigned long iommu_v2_unmap_pages(struct io_pgtable_ops *ops,
+					  unsigned long iova,
+					  size_t pgsize, size_t pgcount,
+					  struct iommu_iotlb_gather *gather)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	struct io_pgtable_cfg *cfg = &pgtable->iop.cfg;
+	unsigned long unmap_size;
+	unsigned long unmapped = 0;
+	size_t size = pgcount << __ffs(pgsize);
+	u64 *pte;
+
+	if (WARN_ON(!pgsize || (pgsize & cfg->pgsize_bitmap) != pgsize || !pgcount))
+		return 0;
+
+	while (unmapped < size) {
+		pte = fetch_pte(pgtable, iova, &unmap_size);
+		if (!pte)
+			return unmapped;
+
+		*pte = 0ULL;
+
+		iova = (iova & ~(unmap_size - 1)) + unmap_size;
+		unmapped += unmap_size;
+	}
+
+	return unmapped;
+}
+
+static phys_addr_t iommu_v2_iova_to_phys(struct io_pgtable_ops *ops, unsigned long iova)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	unsigned long offset_mask, pte_pgsize;
+	u64 *pte, __pte;
+
+	pte = fetch_pte(pgtable, iova, &pte_pgsize);
+	if (!pte || !IOMMU_PTE_PRESENT(*pte))
+		return 0;
+
+	offset_mask = pte_pgsize - 1;
+	__pte = __sme_clr(*pte & PM_ADDR_MASK);
+
+	return (__pte & ~offset_mask) | (iova & offset_mask);
+}
+
+/*
+ * ----------------------------------------------------
+ */
+static void v2_tlb_flush_all(void *cookie)
+{
+}
+
+static void v2_tlb_flush_walk(unsigned long iova, size_t size,
+			      size_t granule, void *cookie)
+{
+}
+
+static void v2_tlb_add_page(struct iommu_iotlb_gather *gather,
+			    unsigned long iova, size_t granule,
+			    void *cookie)
+{
+}
+
+static const struct iommu_flush_ops v2_flush_ops = {
+	.tlb_flush_all	= v2_tlb_flush_all,
+	.tlb_flush_walk = v2_tlb_flush_walk,
+	.tlb_add_page	= v2_tlb_add_page,
+};
+
+static void v2_free_pgtable(struct io_pgtable *iop)
+{
+	struct protection_domain *pdom;
+	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+
+	pdom = container_of(pgtable, struct protection_domain, iop);
+	if (!(pdom->flags & PD_IOMMUV2_MASK))
+		return;
+
+	/*
+	 * Make changes visible to IOMMUs. No need to clear gcr3 entry
+	 * as gcr3 table is already freed.
+	 */
+	amd_iommu_domain_update(pdom);
+
+	/* Free page table */
+	free_pgtable(pgtable->pgd, get_pgtable_level());
+}
+
+static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+	struct protection_domain *pdom = (struct protection_domain *)cookie;
+	int ret;
+
+	pgtable->pgd = alloc_pgtable_page();
+	if (!pgtable->pgd)
+		return NULL;
+
+	ret = amd_iommu_domain_set_gcr3(&pdom->domain, 0, iommu_virt_to_phys(pgtable->pgd));
+	if (ret)
+		goto err_free_pgd;
+
+	pgtable->iop.ops.map_pages    = iommu_v2_map_pages;
+	pgtable->iop.ops.unmap_pages  = iommu_v2_unmap_pages;
+	pgtable->iop.ops.iova_to_phys = iommu_v2_iova_to_phys;
+
+	cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2,
+	cfg->ias           = IOMMU_IN_ADDR_BIT_SIZE,
+	cfg->oas           = IOMMU_OUT_ADDR_BIT_SIZE,
+	cfg->tlb           = &v2_flush_ops;
+
+	return &pgtable->iop;
+
+err_free_pgd:
+	free_pgtable_page(pgtable->pgd);
+
+	return NULL;
+}
+
+struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns = {
+	.alloc	= v2_alloc_pgtable,
+	.free	= v2_free_pgtable,
+};
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index f4bfcef98297..ac02a45ed798 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -27,6 +27,7 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
 #endif
 #ifdef CONFIG_AMD_IOMMU
 	[AMD_IOMMU_V1] = &io_pgtable_amd_iommu_v1_init_fns,
+	[AMD_IOMMU_V2] = &io_pgtable_amd_iommu_v2_init_fns,
 #endif
 };
 
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 86af6f0a00a2..dc0f50bbdfa9 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -16,6 +16,7 @@ enum io_pgtable_fmt {
 	ARM_V7S,
 	ARM_MALI_LPAE,
 	AMD_IOMMU_V1,
+	AMD_IOMMU_V2,
 	APPLE_DART,
 	IO_PGTABLE_NUM_FMTS,
 };
@@ -255,6 +256,7 @@ extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_mali_lpae_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_apple_dart_init_fns;
 
 #endif /* __IO_PGTABLE_H */
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 7/9] iommu/amd: Add support for Guest IO protection
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (5 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 6/9] iommu/amd: Initial support for AMD IOMMU v2 page table Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 8/9] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API Vasant Hegde
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

AMD IOMMU introduces support for Guest I/O protection, where requests
from an I/O device without a PASID are treated as if they have PASID 0.
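
As a rough sketch of the behavior described above (the helper name and the
NO_PASID sentinel are invented for illustration, not taken from the driver):

```c
#include <assert.h>
#include <stdbool.h>

#define NO_PASID (-1)

/*
 * Toy model: with GIOV enabled in the DTE, a request that carries no
 * PASID is translated as if it carried PASID 0; without GIOV such a
 * request stays outside the guest (v2) translation path.
 */
static int effective_pasid(bool giov, int req_pasid)
{
	if (req_pasid == NO_PASID)
		return giov ? 0 : NO_PASID;
	return req_pasid;
}
```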

Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
Changes in v2:
  - Added passthrough mode check
  - Added FEATURE_GT check

 drivers/iommu/amd/amd_iommu_types.h |  3 +++
 drivers/iommu/amd/init.c            | 13 +++++++++++++
 drivers/iommu/amd/iommu.c           |  3 +++
 3 files changed, 19 insertions(+)

diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 4f94682c1350..3a6051f2e42e 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -94,6 +94,7 @@
 #define FEATURE_HE		(1ULL<<8)
 #define FEATURE_PC		(1ULL<<9)
 #define FEATURE_GAM_VAPIC	(1ULL<<21)
+#define FEATURE_GIOSUP		(1ULL<<48)
 #define FEATURE_EPHSUP		(1ULL<<50)
 #define FEATURE_SNP		(1ULL<<63)
 
@@ -371,6 +372,7 @@
 #define DTE_FLAG_IW (1ULL << 62)
 
 #define DTE_FLAG_IOTLB	(1ULL << 32)
+#define DTE_FLAG_GIOV	(1ULL << 54)
 #define DTE_FLAG_GV	(1ULL << 55)
 #define DTE_FLAG_MASK	(0x3ffULL << 32)
 #define DTE_GLX_SHIFT	(56)
@@ -429,6 +431,7 @@
 #define PD_PASSTHROUGH_MASK	(1UL << 2) /* domain has no page
 					      translation */
 #define PD_IOMMUV2_MASK		(1UL << 3) /* domain has gcr3 table */
+#define PD_GIOV_MASK		(1UL << 4) /* domain enable GIOV support */
 
 extern bool amd_iommu_dump;
 #define DUMP_printk(format, arg...)				\
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index d86496114ca5..2ed23fd16014 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -2080,6 +2080,17 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
 
 	init_iommu_perf_ctr(iommu);
 
+	if (amd_iommu_pgtable == AMD_IOMMU_V2) {
+		if (!iommu_feature(iommu, FEATURE_GIOSUP) ||
+		    !iommu_feature(iommu, FEATURE_GT)) {
+			pr_warn("Cannot enable v2 page table for DMA-API. Fallback to v1.\n");
+			amd_iommu_pgtable = AMD_IOMMU_V1;
+		} else if (iommu_default_passthrough()) {
+			pr_warn("V2 page table doesn't support passthrough mode. Fallback to v1.\n");
+			amd_iommu_pgtable = AMD_IOMMU_V1;
+		}
+	}
+
 	if (is_rd890_iommu(iommu->dev)) {
 		int i, j;
 
@@ -2160,6 +2171,8 @@ static void print_iommu_info(void)
 		if (amd_iommu_xt_mode == IRQ_REMAP_X2APIC_MODE)
 			pr_info("X2APIC enabled\n");
 	}
+	if (amd_iommu_pgtable == AMD_IOMMU_V2)
+		pr_info("V2 page table enabled\n");
 }
 
 static int __init amd_iommu_init_pci(void)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 7d52ecea0298..95081a30ffe3 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1597,6 +1597,9 @@ static void set_dte_entry(struct amd_iommu *iommu, u16 devid,
 
 		tmp = DTE_GCR3_VAL_C(gcr3) << DTE_GCR3_SHIFT_C;
 		flags    |= tmp;
+
+		if (domain->flags & PD_GIOV_MASK)
+			pte_root |= DTE_FLAG_GIOV;
 	}
 
 	flags &= ~DEV_DOMID_MASK;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 8/9] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (6 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 7/9] iommu/amd: Add support for Guest IO protection Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-08-25  6:39 ` [PATCH v3 9/9] iommu/amd: Add command-line option to enable different page table Vasant Hegde
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Introduce an init function for setting up a DMA domain for DMA-API use
with the IOMMU v2 page table.

Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
Changes in v2:
  - Added support to change supported page sizes based on domain type.

 drivers/iommu/amd/iommu.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 95081a30ffe3..d057189de2ac 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1653,6 +1653,10 @@ static void do_attach(struct iommu_dev_data *dev_data,
 	domain->dev_iommu[iommu->index] += 1;
 	domain->dev_cnt                 += 1;
 
+	/* Override supported page sizes */
+	if (domain->flags & PD_GIOV_MASK)
+		domain->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+
 	/* Update device table */
 	set_dte_entry(iommu, dev_data->devid, domain,
 		      ats, dev_data->iommu_v2);
@@ -2032,6 +2036,24 @@ static int protection_domain_init_v1(struct protection_domain *domain, int mode)
 	return 0;
 }
 
+static int protection_domain_init_v2(struct protection_domain *domain)
+{
+	spin_lock_init(&domain->lock);
+	domain->id = domain_id_alloc();
+	if (!domain->id)
+		return -ENOMEM;
+	INIT_LIST_HEAD(&domain->dev_list);
+
+	domain->flags |= PD_GIOV_MASK;
+
+	if (domain_enable_v2(domain, 1)) {
+		domain_id_free(domain->id);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
 static struct protection_domain *protection_domain_alloc(unsigned int type)
 {
 	struct io_pgtable_ops *pgtbl_ops;
@@ -2059,6 +2081,9 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
 	case AMD_IOMMU_V1:
 		ret = protection_domain_init_v1(domain, mode);
 		break;
+	case AMD_IOMMU_V2:
+		ret = protection_domain_init_v2(domain);
+		break;
 	default:
 		ret = -EINVAL;
 	}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v3 9/9] iommu/amd: Add command-line option to enable different page table
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (7 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 8/9] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API Vasant Hegde
@ 2022-08-25  6:39 ` Vasant Hegde
  2022-09-05 11:39 ` [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
  2022-09-07 14:14 ` Joerg Roedel
  10 siblings, 0 replies; 26+ messages in thread
From: Vasant Hegde @ 2022-08-25  6:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit, Vasant Hegde

Enhance the amd_iommu command-line option to allow selecting the v1 or
v2 page table. By default the system boots in v1 page table mode.
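
A standalone sketch of the comma-separated option walk this patch adds
(the flag variables are stand-ins; the real parser sets the
amd_iommu_pgtable and related globals instead):

```c
#include <assert.h>
#include <string.h>

/* Stand-in flags; the real parser sets amd_iommu_* globals instead. */
static int v2_selected, force_enabled, unknown_opts;

/* Minimal model of the comma-separated option loop added below. */
static void parse_options(const char *str)
{
	while (*str) {
		if (strncmp(str, "pgtbl_v2", 8) == 0)
			v2_selected = 1;
		else if (strncmp(str, "force_enable", 12) == 0)
			force_enabled = 1;
		else
			unknown_opts++;

		str += strcspn(str, ",");	/* jump to next separator */
		while (*str == ',')		/* skip empty entries */
			str++;
	}
}
```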

Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
Changes in v2:
  - Removed separate command line option and enhace amd_iommu option

 .../admin-guide/kernel-parameters.txt         |  2 ++
 drivers/iommu/amd/init.c                      | 23 +++++++++++++++----
 2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index d45e58328ce6..d1919bf4f37a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -321,6 +321,8 @@
 			force_enable - Force enable the IOMMU on platforms known
 				       to be buggy with IOMMU enabled. Use this
 				       option with care.
+			pgtbl_v1     - Use v1 page table for DMA-API (Default).
+			pgtbl_v2     - Use v2 page table for DMA-API.
 
 	amd_iommu_dump=	[HW,X86-64]
 			Enable AMD IOMMU driver option to dump the ACPI table
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 2ed23fd16014..621815e73771 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -3348,17 +3348,30 @@ static int __init parse_amd_iommu_intr(char *str)
 
 static int __init parse_amd_iommu_options(char *str)
 {
-	for (; *str; ++str) {
+	if (!str)
+		return -EINVAL;
+
+	while (*str) {
 		if (strncmp(str, "fullflush", 9) == 0) {
 			pr_warn("amd_iommu=fullflush deprecated; use iommu.strict=1 instead\n");
 			iommu_set_dma_strict();
-		}
-		if (strncmp(str, "force_enable", 12) == 0)
+		} else if (strncmp(str, "force_enable", 12) == 0) {
 			amd_iommu_force_enable = true;
-		if (strncmp(str, "off", 3) == 0)
+		} else if (strncmp(str, "off", 3) == 0) {
 			amd_iommu_disabled = true;
-		if (strncmp(str, "force_isolation", 15) == 0)
+		} else if (strncmp(str, "force_isolation", 15) == 0) {
 			amd_iommu_force_isolation = true;
+		} else if (strncmp(str, "pgtbl_v1", 8) == 0) {
+			amd_iommu_pgtable = AMD_IOMMU_V1;
+		} else if (strncmp(str, "pgtbl_v2", 8) == 0) {
+			amd_iommu_pgtable = AMD_IOMMU_V2;
+		} else {
+			pr_notice("Unknown option - '%s'\n", str);
+		}
+
+		str += strcspn(str, ",");
+		while (*str == ',')
+			str++;
 	}
 
 	return 1;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (8 preceding siblings ...)
  2022-08-25  6:39 ` [PATCH v3 9/9] iommu/amd: Add command-line option to enable different page table Vasant Hegde
@ 2022-09-05 11:39 ` Vasant Hegde
  2022-09-06 16:35   ` Robin Murphy
  2022-09-07 14:14 ` Joerg Roedel
  10 siblings, 1 reply; 26+ messages in thread
From: Vasant Hegde @ 2022-09-05 11:39 UTC (permalink / raw)
  To: iommu, joro, robin.murphy; +Cc: suravee.suthikulpanit

Hi Joerg, Robin,

Ping. Did you get a chance to look into this series?

-Vasant


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-05 11:39 ` [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
@ 2022-09-06 16:35   ` Robin Murphy
  2022-09-07 14:16     ` Joerg Roedel
  0 siblings, 1 reply; 26+ messages in thread
From: Robin Murphy @ 2022-09-06 16:35 UTC (permalink / raw)
  To: Vasant Hegde, iommu, joro; +Cc: suravee.suthikulpanit, Jason Gunthorpe

On 2022-09-05 12:39, Vasant Hegde wrote:
> Hi Joerg, Robin,
> 
> Ping. Did you get a chance to look into this series?

The io-pgtable patches look OK to me at a glance (I'd be inclined to 
squash #2 into #1, but it's your choice). One thing that's not so 
obvious, though, is how this all interacts with 
amd_iommu_def_domain_type() that currently tries to force identity 
domains for IOMMUv2 devices - does that mean that this support won't 
actually be used a lot of the time, and/or is that override still necessary?

There's also a thread going on elsewhere that I suspect might tie in 
here as well:

https://lore.kernel.org/regressions/874jy4cqok.wl-tiwai@suse.de/

Will this series also mean that the domain shenanigans in 
amd_iommu_init_device() can be replaced by just making sure the GPU gets 
the proper type of v2 default domain in the first place, so KFD can use 
its PASIDs on top of that directly, and the audio driver problem goes 
away naturally?

Thanks,
Robin.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (9 preceding siblings ...)
  2022-09-05 11:39 ` [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
@ 2022-09-07 14:14 ` Joerg Roedel
  10 siblings, 0 replies; 26+ messages in thread
From: Joerg Roedel @ 2022-09-07 14:14 UTC (permalink / raw)
  To: Vasant Hegde; +Cc: iommu, robin.murphy, suravee.suthikulpanit

On Thu, Aug 25, 2022 at 06:39:30AM +0000, Vasant Hegde wrote:
> Suravee Suthikulpanit (4):
>   iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
>   iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
>   iommu/amd: Add support for Guest IO protection
>   iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API
> 
> Vasant Hegde (5):
>   iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback
>   iommu/amd/io-pgtable: Implement unmap_pages io_pgtable_ops callback
>   iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support
>   iommu/amd: Initial support for AMD IOMMU v2 page table
>   iommu/amd: Add command-line option to enable different page table

Applied, thanks.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-06 16:35   ` Robin Murphy
@ 2022-09-07 14:16     ` Joerg Roedel
  2022-09-07 16:52       ` Jason Gunthorpe
  0 siblings, 1 reply; 26+ messages in thread
From: Joerg Roedel @ 2022-09-07 14:16 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Vasant Hegde, iommu, suravee.suthikulpanit, Jason Gunthorpe

On Tue, Sep 06, 2022 at 05:35:13PM +0100, Robin Murphy wrote:
> Will this series also mean that the domain shenanigans in
> amd_iommu_init_device() can be replaced by just making sure the GPU gets the
> proper type of v2 default domain in the first place, so KFD can use its
> PASIDs on top of that directly, and the audio driver problem goes away
> naturally?

Yes, on IOMMUs supporting v2 page-tables and declaring a default PASID
this hack can go away. Unfortunately a lot of AMD IOMMUs in the field do
not, so the identity mapping hack needs to stay around.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-07 14:16     ` Joerg Roedel
@ 2022-09-07 16:52       ` Jason Gunthorpe
  2022-09-07 18:16         ` Robin Murphy
  2022-09-08 12:20         ` Joerg Roedel
  0 siblings, 2 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2022-09-07 16:52 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Robin Murphy, Vasant Hegde, iommu, suravee.suthikulpanit

On Wed, Sep 07, 2022 at 04:16:20PM +0200, Joerg Roedel wrote:
> On Tue, Sep 06, 2022 at 05:35:13PM +0100, Robin Murphy wrote:
> > Will this series also mean that the domain shenanigans in
> > amd_iommu_init_device() can be replaced by just making sure the GPU gets the
> > proper type of v2 default domain in the first place, so KFD can use its
> > PASIDs on top of that directly, and the audio driver problem goes away
> > naturally?
> 
> Yes, on IOMMUs supporting v2 page-tables and declaring a default PASID
> this hack can go away. Unfortunately a lot of AMD IOMMUs in the field do
> not, so the identity mapping hack needs to stay around.

Why can an identity map be attached to the RID in v2, but not a full
translation? It seems like a very strange design that entering PASID
mode completely breaks RID support.

Jason

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-07 16:52       ` Jason Gunthorpe
@ 2022-09-07 18:16         ` Robin Murphy
  2022-09-08  0:12           ` Jason Gunthorpe
  2022-09-08 12:20         ` Joerg Roedel
  1 sibling, 1 reply; 26+ messages in thread
From: Robin Murphy @ 2022-09-07 18:16 UTC (permalink / raw)
  To: Jason Gunthorpe, Joerg Roedel; +Cc: Vasant Hegde, iommu, suravee.suthikulpanit

On 2022-09-07 17:52, Jason Gunthorpe wrote:
> On Wed, Sep 07, 2022 at 04:16:20PM +0200, Joerg Roedel wrote:
>> On Tue, Sep 06, 2022 at 05:35:13PM +0100, Robin Murphy wrote:
>>> Will this series also mean that the domain shenanigans in
>>> amd_iommu_init_device() can be replaced by just making sure the GPU gets the
>>> proper type of v2 default domain in the first place, so KFD can use its
>>> PASIDs on top of that directly, and the audio driver problem goes away
>>> naturally?
>>
>> Yes, on IOMMUs supporting v2 page-tables and declaring a default PASID
>> this hack can go away. Unfortunately a lot of AMD IOMMUs in the field do
>> not, so the identity mapping hack needs to stay around.
> 
> Why can an identity map be attached to the RID in v2, but not a full
> translation? It seems like a very strange design that entering PASID
> mode completely breaks RID support..

FWIW we have the opposite problem on SMMUv3 - if we're using PASIDs then 
we can't bypass the RID, it *has* to map to a translation context (as 
PASID 0). I can only guess that nobody's yet tried the combination of 
SVA plus IOMMU_DEFAULT_PASSTHROUGH plus the device sending MSIs (and not 
on the dodgy machine where MSIs bypass the SMMU in hardware)... :/
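
A toy model of the constraint described here (enum and helpers are
invented): once the STE must point at a context-descriptor table to
support PASIDs, hardware RID bypass is no longer available and has to be
emulated by an identity context at PASID 0.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented model of an SMMUv3 stream table entry configuration. */
enum ste_cfg { STE_BYPASS, STE_S1_TRANSLATE };

static bool pasids_usable(enum ste_cfg cfg)
{
	/* PASIDs (substreams) require a CD table, i.e. S1 translation. */
	return cfg == STE_S1_TRANSLATE;
}

static bool rid_bypass_in_hw(enum ste_cfg cfg)
{
	/* Hardware bypass of the RID excludes any CD table. */
	return cfg == STE_BYPASS;
}
```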

Anyway, fair enough if def_domain_type still has to force 
IOMMU_DOMAIN_IDENTITY in some cases here, but presumably that still 
wouldn't prevent us from enabling v2 on those from the outset and 
avoiding the mysterious and problematic "concoct an identity domain to 
replace the identity domain" dance?

Thanks,
Robin.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-07 18:16         ` Robin Murphy
@ 2022-09-08  0:12           ` Jason Gunthorpe
  0 siblings, 0 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2022-09-08  0:12 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Joerg Roedel, Vasant Hegde, iommu, suravee.suthikulpanit

On Wed, Sep 07, 2022 at 07:16:32PM +0100, Robin Murphy wrote:
> On 2022-09-07 17:52, Jason Gunthorpe wrote:
> > On Wed, Sep 07, 2022 at 04:16:20PM +0200, Joerg Roedel wrote:
> > > On Tue, Sep 06, 2022 at 05:35:13PM +0100, Robin Murphy wrote:
> > > > Will this series also mean that the domain shenanigans in
> > > > amd_iommu_init_device() can be replaced by just making sure the GPU gets the
> > > > proper type of v2 default domain in the first place, so KFD can use its
> > > > PASIDs on top of that directly, and the audio driver problem goes away
> > > > naturally?
> > > 
> > > Yes, on IOMMUs supporting v2 page-tables and declaring a default PASID
> > > this hack can go away. Unfortunately a lot of AMD IOMMUs in the field do
> > > not, so the identity mapping hack needs to stay around.
> > 
> > Why can an identity map be attached to the RID in v2, but not a full
> > translation? It seems like a very strange design that entering PASID
> > mode completely breaks RID support..
> 
> FWIW we have the opposite problem on SMMUv3 - if we're using PASIDs then we
> can't bypass the RID, it *has* to map to a translation context (as PASID 0).
> I can only guess that nobody's yet tried the combination of SVA plus
> IOMMU_DEFAULT_PASSTHROUGH plus the device sending MSIs (and not on the dodgy
> machine where MSIs bypass the SMMU in hardware)... :/

This at least makes more sense to me, SW has the ability to apply a
translation and if the requirement is an identity translation it can
at least make one, at the cost of performance and memory..

So, looking at bit at the smmuv3 code, if arm_smmu_write_strtab_ent()
was called with stage == BYPASS on the RID then it sets a STE up for
bypass operation and there is no s1contextptr, but then
arm_smmu_write_ctx_desc() will come along later and try to access the
cdtab?

It seems like it needs to upgrade the STE to have an s1contextptr and
point substream 0 at an iopte setting up an identity mapping?

But doesn't this mean that the SVA is the thing that doesn't work if
the RID is set to identity?

Jason

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-07 16:52       ` Jason Gunthorpe
  2022-09-07 18:16         ` Robin Murphy
@ 2022-09-08 12:20         ` Joerg Roedel
  2022-09-08 12:53           ` Robin Murphy
  1 sibling, 1 reply; 26+ messages in thread
From: Joerg Roedel @ 2022-09-08 12:20 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Robin Murphy, Vasant Hegde, iommu, suravee.suthikulpanit

On Wed, Sep 07, 2022 at 01:52:18PM -0300, Jason Gunthorpe wrote:
> Why can an identity map be attached to the RID in v2, but not a full
> translation? It seems like a very strange design that entering PASID
> mode completely breaks RID support.

The reason is that AMD IOMMUs do two-level translation, which means that
the addresses in the PASID page-tables are translated via the v1
page-table again. In order to be able to use Linux page-tables for PASID
mappings the v1 page-table needs to be identity mapped.
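
The composition described above can be sketched in a few lines of C
(purely illustrative; the names are invented and bear no relation to the
driver's real structures):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: a PASID-tagged access is translated by the per-PASID
 * v2/stage-1 table, and its output is then translated *again* by the
 * RID-wide v1/stage-2 table. Host CPU page tables already emit physical
 * addresses, so reusing them as stage 1 only gives correct results when
 * stage 2 is the identity map. */

typedef uint64_t (*xlate_fn)(uint64_t);

static uint64_t host_cpu_pgtable(uint64_t gva) { return gva ^ 0xf000; }  /* stand-in: emits PAs  */
static uint64_t identity_v1(uint64_t gpa)      { return gpa; }           /* identity-mapped v1   */
static uint64_t remapping_v1(uint64_t gpa)     { return gpa + 0x10000; } /* a real GPA->PA table */

/* The IOMMU applies both stages to every PASID-tagged request. */
static uint64_t pasid_translate(xlate_fn s1, xlate_fn s2, uint64_t gva)
{
	return s2(s1(gva));
}
```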

Regards,

	Joerg

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 12:20         ` Joerg Roedel
@ 2022-09-08 12:53           ` Robin Murphy
  2022-09-08 13:19             ` Jason Gunthorpe
  0 siblings, 1 reply; 26+ messages in thread
From: Robin Murphy @ 2022-09-08 12:53 UTC (permalink / raw)
  To: Joerg Roedel, Jason Gunthorpe; +Cc: Vasant Hegde, iommu, suravee.suthikulpanit

On 2022-09-08 13:20, Joerg Roedel wrote:
> On Wed, Sep 07, 2022 at 01:52:18PM -0300, Jason Gunthorpe wrote:
>> Why can an identity map be attached to the RID in v2, but not a full
>> translation? It seems like a very strange design that entering PASID
>> mode completely breaks RID support.
> 
> The reason is that AMD IOMMUs do two-level translation, which means that
> the addresses in the PASID page-tables are translated via the v1
> page-table again. In order to be able to use Linux page-tables for PASID
> mappings the v1 page-table needs to be identity mapped.

Ah, and without a default PASID, RID traffic goes straight into the 
second level? That actually sounds much the same as what SMMUv3's S1DSS 
can do - seems I misremembered yesterday, we *could* in fact twiddle 
that to handle identity default domains, it's attaching a PASID to an 
identity domain that remains impossible without installing a full 1:1 
pagetable, my bad.

Cheers,
Robin.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 12:53           ` Robin Murphy
@ 2022-09-08 13:19             ` Jason Gunthorpe
  2022-09-08 13:30               ` Joerg Roedel
  0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-09-08 13:19 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Joerg Roedel, Vasant Hegde, iommu, suravee.suthikulpanit

On Thu, Sep 08, 2022 at 01:53:50PM +0100, Robin Murphy wrote:
> On 2022-09-08 13:20, Joerg Roedel wrote:
> > On Wed, Sep 07, 2022 at 01:52:18PM -0300, Jason Gunthorpe wrote:
> > > Why can an identity map be attached to the RID in v2, but not a full
> > > translation? It seems like a very strange design that entering PASID
> > > mode completely breaks RID support.
> > 
> > The reason is that AMD IOMMUs do two-level translation, which means that
> > the addresses in the PASID page-tables are translated via the v1
> > page-table again. In order to be able to use Linux page-tables for PASID
> > mappings the v1 page-table needs to be identity mapped.
> 
> Ah, and without a default PASID, RID traffic goes straight into the second
> level? That actually sounds much the same as what SMMUv3's S1DSS can do -
> seems I misremembered yesterday, we *could* in fact twiddle that to handle
> identity default domains, it's attaching a PASID to an identity domain that
> remains impossible without installing a full 1:1 pagetable, my bad.

What is the use case for having PASID nested under another level of
translation?

Jason

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 13:19             ` Jason Gunthorpe
@ 2022-09-08 13:30               ` Joerg Roedel
  2022-09-08 13:47                 ` Robin Murphy
  0 siblings, 1 reply; 26+ messages in thread
From: Joerg Roedel @ 2022-09-08 13:30 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Robin Murphy, Vasant Hegde, iommu, suravee.suthikulpanit

On Thu, Sep 08, 2022 at 10:19:18AM -0300, Jason Gunthorpe wrote:
> What is the use case for having PASID nested under another level of
> translation?

Virtualization. Basically having a device access process address spaces
inside a virtual machine. That never got put into code, but it was the
rationale behind it.

Regards,

	Joerg

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 13:30               ` Joerg Roedel
@ 2022-09-08 13:47                 ` Robin Murphy
  2022-09-08 13:58                   ` Jason Gunthorpe
  0 siblings, 1 reply; 26+ messages in thread
From: Robin Murphy @ 2022-09-08 13:47 UTC (permalink / raw)
  To: Joerg Roedel, Jason Gunthorpe; +Cc: Vasant Hegde, iommu, suravee.suthikulpanit

On 2022-09-08 14:30, Joerg Roedel wrote:
> On Thu, Sep 08, 2022 at 10:19:18AM -0300, Jason Gunthorpe wrote:
>> What is the use case for having PASID nested under another level of
>> translation?
> 
> Virtualization. Basically having a device access process address spaces
> inside a virtual machine. That never got put into code, but it was the
> rationale behind it.

For SMMUv3 it did actually get as far as a prototype[1] before the 
iommufd work interjected. I believe there are cloud vendors using those 
patches in their deployments :O

Cheers,
Robin.

[1] 
https://lore.kernel.org/linux-iommu/20210423095147.27922-1-vivek.gautam@arm.com/

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 13:47                 ` Robin Murphy
@ 2022-09-08 13:58                   ` Jason Gunthorpe
  2022-09-08 15:23                     ` Robin Murphy
  0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-09-08 13:58 UTC (permalink / raw)
  To: Robin Murphy; +Cc: Joerg Roedel, Vasant Hegde, iommu, suravee.suthikulpanit

On Thu, Sep 08, 2022 at 02:47:11PM +0100, Robin Murphy wrote:
> On 2022-09-08 14:30, Joerg Roedel wrote:
> > On Thu, Sep 08, 2022 at 10:19:18AM -0300, Jason Gunthorpe wrote:
> > > What is the use case for having PASID nested under another level of
> > > translation?
> > 
> > Virtualization. Basically having a device access process address spaces
> > inside a virtual machine. That got never put into code, but it was the
> > rationale behind it.
> 
> For SMMUv3 it did actually get as far as a prototype[1] before the iommufd
> work interjected. I believe there are cloud vendors using those patches in
> their deployments :O

This is a bit different, this is just normal nested translation stuff.

AMD's design effectively nests the PASID under the RID, while nested
translation typically nests the PASID and RID equally under a parent
table.

I can't think of a use case for nesting the PASID under the RID; in
every scenario this seems to be problematic. Glad to see it is undone
with the new HW.

Jason

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 13:58                   ` Jason Gunthorpe
@ 2022-09-08 15:23                     ` Robin Murphy
  2022-09-09  1:24                       ` Baolu Lu
  0 siblings, 1 reply; 26+ messages in thread
From: Robin Murphy @ 2022-09-08 15:23 UTC (permalink / raw)
  To: Jason Gunthorpe; +Cc: Joerg Roedel, Vasant Hegde, iommu, suravee.suthikulpanit

On 2022-09-08 14:58, Jason Gunthorpe wrote:
> On Thu, Sep 08, 2022 at 02:47:11PM +0100, Robin Murphy wrote:
>> On 2022-09-08 14:30, Joerg Roedel wrote:
>>> On Thu, Sep 08, 2022 at 10:19:18AM -0300, Jason Gunthorpe wrote:
>>>> What is the use case for having PASID nested under another level of
>>>> translation?
>>>
>>> Virtualization. Basically having a device access process address spaces
>>> inside a virtual machine. That never got put into code, but it was the
>>> rationale behind it.
>>
>> For SMMUv3 it did actually get as far as a prototype[1] before the iommufd
>> work interjected. I believe there are cloud vendors using those patches in
>> their deployments :O
> 
> This is a bit different, this is just normal nested translation stuff.
> 
> AMD's design effectively nests the PASID under the RID, while nested
> translation typically nests the PASID and RID equally under a parent
> table.

As far as I can tell from a quick scan of the AMD spec, it seems 
effectively identical to SMMUv3's S1DSS=1 mode, while the "Guest I/O 
protection" feature is the same as S1DSS=2, treating the RID as PASID 0. 
SMMU has just always had both options.

> I can't think of a use case for nesting the PASID under the RID, in
> every scenario this seems to be problematic. Glad to see it is undone
> with the new HW.

I assume you mean you can't think of a use-case for having the RID 
bypass stage 1/guest translation, since "nesting the PASID under the 
RID" is basically just how PASIDs work. AMD phrases it as "An upstream 
packet with a valid PASID in the PASID TLP prefix contains a canonical 
GVA; an upstream packet without a valid PASID in the PASID TLP prefix or 
with no PASID TLP prefix and an untranslated address contains a GPA". So 
GVA->GPA is per-PASID, then GPA->PA is for the entire RID, which is 
exactly how SMMU stage 1/2 work as well. As far as I'm aware that's 
standard for Intel too, until SIOV where they can carry the PASID 
through to the second level also.

The use-case is that you assume people care about performance, and thus 
are only going to use translation *inside* a VM for PASID-based SVA 
which can't be done any other way, and let guest-kernel-owned DMA 
(without PASID) operate in GPA space to avoid the extra translation 
latency, invalidation trap overheads, etc.
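
The dispatch in that quoted spec language can be written out as a short
C sketch (hypothetical names, not any real driver structure):

```c
#include <assert.h>
#include <stdint.h>

/* A request with a valid PASID is first translated by that PASID's
 * GVA->GPA stage-1 table; a request without one is taken to already
 * carry a GPA. The single RID-wide GPA->PA stage 2 applies to both. */

#define MAX_PASID 4
#define NO_PASID  (-1)

typedef uint64_t (*pgtable_fn)(uint64_t);

struct rid_ctx {
	pgtable_fn gva_to_gpa[MAX_PASID]; /* per-PASID stage 1 (NULL = none) */
	pgtable_fn gpa_to_pa;             /* one stage 2 for the whole RID   */
};

static uint64_t upstream_access(const struct rid_ctx *ctx, int pasid, uint64_t addr)
{
	if (pasid != NO_PASID && ctx->gva_to_gpa[pasid])
		addr = ctx->gva_to_gpa[pasid](addr); /* GVA -> GPA, per PASID */
	return ctx->gpa_to_pa(addr);                 /* GPA -> PA, per RID    */
}

static uint64_t sva_table(uint64_t gva) { return gva + 0x100; }
static uint64_t s2_table(uint64_t gpa)  { return gpa + 0x1000; }
```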

Cheers,
Robin.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-08 15:23                     ` Robin Murphy
@ 2022-09-09  1:24                       ` Baolu Lu
  2022-09-09  7:51                         ` Tian, Kevin
  0 siblings, 1 reply; 26+ messages in thread
From: Baolu Lu @ 2022-09-09  1:24 UTC (permalink / raw)
  To: Robin Murphy, Jason Gunthorpe
  Cc: baolu.lu, Joerg Roedel, Vasant Hegde, iommu, suravee.suthikulpanit

On 2022/9/8 23:23, Robin Murphy wrote:
> 
>> I can't think of a use case for nesting the PASID under the RID, in
>> every scenario this seems to be problematic. Glad to see it is undone
>> with the new HW.
> 
> I assume you mean you can't think of a use-case for having the RID 
> bypass stage 1/guest translation, since "nesting the PASID under the 
> RID" is basically just how PASIDs work. AMD phrases it as "An upstream 
> packet with a valid PASID in the PASID TLP prefix contains a canonical 
> GVA; an upstream packet without a valid PASID in the PASID TLP prefix or 
> with no PASID TLP prefix and an untranslated address contains a GPA". So 
> GVA->GPA is per-PASID, then GPA->PA is for the entire RID, which is 
> exactly how SMMU stage 1/2 work as well. As far as I'm aware that's 
> standard for Intel too, until SIOV where they can carry the PASID 
> through to the second level also.

Yes. That's something called ECS mode in the VT-d 2.x spec. Scalable
Mode then appeared in 3.x and replaced the previous ECS. Intel has no
real ECS hardware implementation in shipping products.

Best regards,
baolu

^ permalink raw reply	[flat|nested] 26+ messages in thread

* RE: [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-09-09  1:24                       ` Baolu Lu
@ 2022-09-09  7:51                         ` Tian, Kevin
  0 siblings, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-09-09  7:51 UTC (permalink / raw)
  To: Baolu Lu, Robin Murphy, Jason Gunthorpe
  Cc: Joerg Roedel, Vasant Hegde, iommu, suravee.suthikulpanit

> From: Baolu Lu <baolu.lu@linux.intel.com>
> Sent: Friday, September 9, 2022 9:24 AM
> 
> On 2022/9/8 23:23, Robin Murphy wrote:
> >
> >> I can't think of a use case for nesting the PASID under the RID, in
> >> every scenario this seems to be problematic. Glad to see it is undone
> >> with the new HW.
> >
> > I assume you mean you can't think of a use-case for having the RID
> > bypass stage 1/guest translation, since "nesting the PASID under the
> > RID" is basically just how PASIDs work. AMD phrases it as "An upstream
> > packet with a valid PASID in the PASID TLP prefix contains a canonical
> > GVA; an upstream packet without a valid PASID in the PASID TLP prefix or
> > with no PASID TLP prefix and an untranslated address contains a GPA". So
> > GVA->GPA is per-PASID, then GPA->PA is for the entire RID, which is
> > exactly how SMMU stage 1/2 work as well. As far as I'm aware that's
> > standard for Intel too, until SIOV where they can carry the PASID
> > through to the second level also.
> 
> Yes. That's something called ECS mode in VT-d 2.x spec. Then Scalable
> Mode began to appear in 3.x and replaced the previous ECS. Intel has no
> real ECS hardware implementation in the shipping products.
> 

They are slightly different.

From the AMD spec, it supports three configurations:

  1) stage-2 only;
  2) nested with stage-2 identity mapped so stage-1 can be host page tables;
  3) nested with stage-2 in translation so stage-1 points to guest page tables;

Without the pasid0 trick (treating the RID as PASID 0), this means a DMA
domain and SVA cannot be enabled simultaneously (both in host and guest).

The old ECS mode in VT-d supports four configurations:

  1) stage-2 only;
  2) stage-1 only;
  3) stage-1 + stage-2 (w/o nesting);
  4) stage-1 nested on stage-2;

This allows the host to support a DMA domain and SVA at the same time. But
it doesn't support the pasid0 trick, so the restriction in the guest still
remains.

In the end the scalable mode supports RID2PASID (similar to pasid0) and
provides full translation configurability per pasid.
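
A hypothetical sketch of what the pasid0/RID2PASID trick buys (invented
names, not modelled on any of the real drivers):

```c
#include <assert.h>
#include <stdint.h>

/* Without the trick, requests carrying no PASID skip stage 1 entirely,
 * so the RID cannot have a translated DMA domain while other PASIDs do
 * SVA. With it, no-PASID requests simply index entry 0 of the same
 * PASID table, so every stream gets its own independently configured
 * translation. */

#define MAX_PASID 4
#define NO_PASID  (-1)

typedef uint64_t (*pgtable_fn)(uint64_t);

static uint64_t lookup_no_trick(pgtable_fn *tbl, int pasid, uint64_t addr)
{
	if (pasid == NO_PASID)
		return addr;          /* RID traffic: untranslated GPA */
	return tbl[pasid](addr);
}

static uint64_t lookup_rid2pasid(pgtable_fn *tbl, int pasid, uint64_t addr)
{
	int idx = (pasid == NO_PASID) ? 0 : pasid; /* RID becomes PASID 0 */
	return tbl[idx](addr);
}

static uint64_t dma_domain_table(uint64_t iova) { return iova + 0x4000; } /* kernel DMA domain  */
static uint64_t sva_mm_table(uint64_t va)       { return va + 0x8000; }   /* process page table */
```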

Thanks
Kevin

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2022-09-09  7:51 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-25  6:39 [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 1/9] iommu/amd/io-pgtable: Implement map_pages io_pgtable_ops callback Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 2/9] iommu/amd/io-pgtable: Implement unmap_pages " Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 3/9] iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 4/9] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 5/9] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 6/9] iommu/amd: Initial support for AMD IOMMU v2 page table Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 7/9] iommu/amd: Add support for Guest IO protection Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 8/9] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API Vasant Hegde
2022-08-25  6:39 ` [PATCH v3 9/9] iommu/amd: Add command-line option to enable different page table Vasant Hegde
2022-09-05 11:39 ` [PATCH v3 0/9] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
2022-09-06 16:35   ` Robin Murphy
2022-09-07 14:16     ` Joerg Roedel
2022-09-07 16:52       ` Jason Gunthorpe
2022-09-07 18:16         ` Robin Murphy
2022-09-08  0:12           ` Jason Gunthorpe
2022-09-08 12:20         ` Joerg Roedel
2022-09-08 12:53           ` Robin Murphy
2022-09-08 13:19             ` Jason Gunthorpe
2022-09-08 13:30               ` Joerg Roedel
2022-09-08 13:47                 ` Robin Murphy
2022-09-08 13:58                   ` Jason Gunthorpe
2022-09-08 15:23                     ` Robin Murphy
2022-09-09  1:24                       ` Baolu Lu
2022-09-09  7:51                         ` Tian, Kevin
2022-09-07 14:14 ` Joerg Roedel
