* [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
@ 2022-07-13  5:30 Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 1/6] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking Vasant Hegde
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

This series introduces a new usage model for the v2 page table, where it
can be used to implement support for DMA-API by adopting the generic
IO page table framework.

One of the target use cases is to support nested IO page tables,
where the guest uses the guest IO page table (v2) for translating
GVA to GPA, and the hypervisor uses the host IO page table (v1) for
translating GPA to SPA. This is a prerequisite for supporting the new
HW-assisted vIOMMU presented at KVM Forum 2020.

  https://static.sched.com/hosted_files/kvmforum2020/26/vIOMMU%20KVM%20Forum%202020.pdf

The following components are introduced in this series:

- Part 1 (patch 1-3 and 5)
  Refactor the current IOMMU page table code to adopt the generic IO page
  table framework, and add AMD IOMMU Guest (v2) page table management code.

- Part 2 (patch 4)
  Add support for the AMD IOMMU Guest IO Protection feature (GIOV),
  where requests from an I/O device without a PASID are treated as
  if they have a PASID of 0.

- Part 3 (patch 6)
  Enhance the "amd_iommu" command-line option to allow users
  to select the page table mode of operation (v1 or v2).
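
  With patch 6 applied, the selection looks like this on the kernel
  command line (a usage sketch based on the option names documented in
  that patch):

  ```
  amd_iommu=pgtbl_v2
  ```

  Note that booting with iommu=pt instead causes a fallback to v1, since
  the v2 table does not support passthrough mode.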

See the AMD I/O Virtualization Technology Specification for more details.

  http://www.amd.com/system/files/TechDocs/48882_IOMMU_3.05_PUB.pdf

@Robin,
   We are not addressing your comment about cleaning up pgsize_bitmap in
   this series. As mentioned in the other thread [1], we can work on that
   as a separate cleanup series.

   [1] https://lore.kernel.org/linux-iommu/15337b9f-48d7-d956-5d6c-b40319ba38ad@amd.com/


Thanks,
Vasant

Changes from v1 -> v2:
  - Allow the v2 page table only when FEATURE_GT is enabled
  - The v2 page table doesn't support IOMMU passthrough mode, so a check
    was added to fall back to v1 mode if the system is booted with iommu=pt
  - Overloaded the amd_iommu command-line option (removed the
    amd_iommu_pgtable option)
  - Override supported page sizes dynamically instead of selecting them
    at boot time

V1 patchset : https://lore.kernel.org/linux-iommu/20220603112107.8603-1-vasant.hegde@amd.com/T/#t

Changes from RFC -> v1:
  - Addressed review comments from Joerg
  - Reimplemented v2 page table

RFC patchset : https://lore.kernel.org/linux-iommu/20210312090411.6030-1-suravee.suthikulpanit@amd.com/T/#t

Suravee Suthikulpanit (4):
  iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
  iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
  iommu/amd: Add support for Guest IO protection
  iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API

Vasant Hegde (2):
  iommu/amd: Initial support for AMD IOMMU v2 page table
  iommu/amd: Add command-line option to enable different page table

 .../admin-guide/kernel-parameters.txt         |   2 +
 drivers/iommu/amd/Makefile                    |   2 +-
 drivers/iommu/amd/amd_iommu_types.h           |   8 +-
 drivers/iommu/amd/init.c                      |  36 +-
 drivers/iommu/amd/io_pgtable_v2.c             | 407 ++++++++++++++++++
 drivers/iommu/amd/iommu.c                     |  89 +++-
 drivers/iommu/io-pgtable.c                    |   1 +
 include/linux/io-pgtable.h                    |   2 +
 8 files changed, 518 insertions(+), 29 deletions(-)
 create mode 100644 drivers/iommu/amd/io_pgtable_v2.c

-- 
2.31.1

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 1/6] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
@ 2022-07-13  5:30 ` Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 2/6] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table Vasant Hegde
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

The current function to enable IOMMU v2 also locks the domain.
In order to reuse the same code in a different code path, in which
the domain has already been locked, refactor the function to separate
the locking from the enabling logic.

Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
Changes in v2:
  - Removed the unused has_ppr function parameter.

 drivers/iommu/amd/iommu.c | 46 +++++++++++++++++++++++----------------
 1 file changed, 27 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index a56a9ad3273e..fd5dc6541569 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -85,6 +85,7 @@ struct iommu_cmd {
 struct kmem_cache *amd_iommu_irq_cache;
 
 static void detach_device(struct device *dev);
+static int domain_enable_v2(struct protection_domain *domain, int pasids);
 
 /****************************************************************************
  *
@@ -2429,11 +2430,10 @@ void amd_iommu_domain_direct_map(struct iommu_domain *dom)
 }
 EXPORT_SYMBOL(amd_iommu_domain_direct_map);
 
-int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
+/* Note: This function expects iommu_domain->lock to be held prior to calling the function. */
+static int domain_enable_v2(struct protection_domain *domain, int pasids)
 {
-	struct protection_domain *domain = to_pdomain(dom);
-	unsigned long flags;
-	int levels, ret;
+	int levels;
 
 	/* Number of GCR3 table levels required */
 	for (levels = 0; (pasids - 1) & ~0x1ff; pasids >>= 9)
@@ -2442,7 +2442,25 @@ int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
 	if (levels > amd_iommu_max_glx_val)
 		return -EINVAL;
 
-	spin_lock_irqsave(&domain->lock, flags);
+	domain->gcr3_tbl = (void *)get_zeroed_page(GFP_ATOMIC);
+	if (domain->gcr3_tbl == NULL)
+		return -ENOMEM;
+
+	domain->glx      = levels;
+	domain->flags   |= PD_IOMMUV2_MASK;
+
+	amd_iommu_domain_update(domain);
+
+	return 0;
+}
+
+int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
+{
+	struct protection_domain *pdom = to_pdomain(dom);
+	unsigned long flags;
+	int ret;
+
+	spin_lock_irqsave(&pdom->lock, flags);
 
 	/*
 	 * Save us all sanity checks whether devices already in the
@@ -2450,24 +2468,14 @@ int amd_iommu_domain_enable_v2(struct iommu_domain *dom, int pasids)
 	 * devices attached when it is switched into IOMMUv2 mode.
 	 */
 	ret = -EBUSY;
-	if (domain->dev_cnt > 0 || domain->flags & PD_IOMMUV2_MASK)
-		goto out;
-
-	ret = -ENOMEM;
-	domain->gcr3_tbl = (void *)get_zeroed_page(GFP_ATOMIC);
-	if (domain->gcr3_tbl == NULL)
+	if (pdom->dev_cnt > 0 || pdom->flags & PD_IOMMUV2_MASK)
 		goto out;
 
-	domain->glx      = levels;
-	domain->flags   |= PD_IOMMUV2_MASK;
-
-	amd_iommu_domain_update(domain);
-
-	ret = 0;
+	if (!pdom->gcr3_tbl)
+		ret = domain_enable_v2(pdom, pasids);
 
 out:
-	spin_unlock_irqrestore(&domain->lock, flags);
-
+	spin_unlock_irqrestore(&pdom->lock, flags);
 	return ret;
 }
 EXPORT_SYMBOL(amd_iommu_domain_enable_v2);
-- 
2.31.1
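
As an aside, the GCR3 level calculation that this refactoring moves into
domain_enable_v2() can be illustrated standalone (a Python sketch of the
loop, not part of the patch):

```python
def gcr3_levels(pasids):
    """Extra GCR3 table levels needed for a given PASID count.

    Python rendering of the kernel loop:
        for (levels = 0; (pasids - 1) & ~0x1ff; pasids >>= 9)
            levels += 1;
    Each GCR3 table level indexes 9 bits (512 entries).
    """
    levels = 0
    while (pasids - 1) & ~0x1FF:
        levels += 1
        pasids >>= 9
    return levels

# Up to 512 PASIDs fit in a single-level table; each extra level adds 9 bits.
```

The result is then range-checked against amd_iommu_max_glx_val before any
allocation happens, which is why the calculation sits at the top of the
function.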


* [PATCH v2 2/6] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 1/6] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking Vasant Hegde
@ 2022-07-13  5:30 ` Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 3/6] iommu/amd: Initial support for AMD IOMMU v2 page table Vasant Hegde
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Currently, PPR/ATS can be enabled only if the domain type is
identity mapping. However, when allowing the IOMMU v2 page table
to be used for DMA-API, the check is no longer valid.

Update the sanity check to apply only when using the AMD_IOMMU_V1
page table mode.

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
 drivers/iommu/amd/iommu.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index fd5dc6541569..027ecd5fb4b2 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1682,7 +1682,7 @@ static void pdev_iommuv2_disable(struct pci_dev *pdev)
 	pci_disable_pasid(pdev);
 }
 
-static int pdev_iommuv2_enable(struct pci_dev *pdev)
+static int pdev_pri_ats_enable(struct pci_dev *pdev)
 {
 	int ret;
 
@@ -1745,11 +1745,19 @@ static int attach_device(struct device *dev,
 		struct iommu_domain *def_domain = iommu_get_dma_domain(dev);
 
 		ret = -EINVAL;
-		if (def_domain->type != IOMMU_DOMAIN_IDENTITY)
+
+		/*
+		 * When using the AMD_IOMMU_V1 page table mode and the device
+		 * is being enabled for PPR/ATS support (using the v2 table),
+		 * we need to make sure that the domain type is identity map.
+		 */
+		if ((amd_iommu_pgtable == AMD_IOMMU_V1) &&
+		    def_domain->type != IOMMU_DOMAIN_IDENTITY) {
 			goto out;
+		}
 
 		if (dev_data->iommu_v2) {
-			if (pdev_iommuv2_enable(pdev) != 0)
+			if (pdev_pri_ats_enable(pdev) != 0)
 				goto out;
 
 			dev_data->ats.enabled = true;
-- 
2.31.1


* [PATCH v2 3/6] iommu/amd: Initial support for AMD IOMMU v2 page table
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 1/6] iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 2/6] iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table Vasant Hegde
@ 2022-07-13  5:30 ` Vasant Hegde
  2022-07-15 11:55   ` Robin Murphy
  2022-07-13  5:30 ` [PATCH v2 4/6] iommu/amd: Add support for Guest IO protection Vasant Hegde
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

Introduce IO page table framework support for the AMD IOMMU v2 page
table. This implements a 4-level page table within the AMD IOMMU driver
and supports 4K/2M/1G page sizes.

Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
 drivers/iommu/amd/Makefile          |   2 +-
 drivers/iommu/amd/amd_iommu_types.h |   5 +-
 drivers/iommu/amd/io_pgtable_v2.c   | 407 ++++++++++++++++++++++++++++
 drivers/iommu/io-pgtable.c          |   1 +
 include/linux/io-pgtable.h          |   2 +
 5 files changed, 415 insertions(+), 2 deletions(-)
 create mode 100644 drivers/iommu/amd/io_pgtable_v2.c

diff --git a/drivers/iommu/amd/Makefile b/drivers/iommu/amd/Makefile
index a935f8f4b974..773d8aa00283 100644
--- a/drivers/iommu/amd/Makefile
+++ b/drivers/iommu/amd/Makefile
@@ -1,4 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_AMD_IOMMU) += iommu.o init.o quirks.o io_pgtable.o
+obj-$(CONFIG_AMD_IOMMU) += iommu.o init.o quirks.o io_pgtable.o io_pgtable_v2.o
 obj-$(CONFIG_AMD_IOMMU_DEBUGFS) += debugfs.o
 obj-$(CONFIG_AMD_IOMMU_V2) += iommu_v2.o
diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 40f52d02c5b9..a6d6a3c7c5f3 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -268,6 +268,8 @@
  * 512GB Pages are not supported due to a hardware bug
  */
 #define AMD_IOMMU_PGSIZES	((~0xFFFUL) & ~(2ULL << 38))
+/* 4K, 2MB, 1G page sizes are supported */
+#define AMD_IOMMU_PGSIZES_V2	(PAGE_SIZE | (1ULL << 21) | (1ULL << 30))
 
 /* Bit value definition for dte irq remapping fields*/
 #define DTE_IRQ_PHYS_ADDR_MASK	(((1ULL << 45)-1) << 6)
@@ -518,7 +520,8 @@ struct amd_io_pgtable {
 	struct io_pgtable	iop;
 	int			mode;
 	u64			*root;
-	atomic64_t		pt_root;    /* pgtable root and pgtable mode */
+	atomic64_t		pt_root;	/* pgtable root and pgtable mode */
+	u64			*pgd;		/* v2 pgtable pgd pointer */
 };
 
 /*
diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
new file mode 100644
index 000000000000..4248a182a780
--- /dev/null
+++ b/drivers/iommu/amd/io_pgtable_v2.c
@@ -0,0 +1,407 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * CPU-agnostic AMD IO page table v2 allocator.
+ *
+ * Copyright (C) 2022 Advanced Micro Devices, Inc.
+ * Author: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
+ * Author: Vasant Hegde <vasant.hegde@amd.com>
+ */
+
+#define pr_fmt(fmt)	"AMD-Vi: " fmt
+#define dev_fmt(fmt)	pr_fmt(fmt)
+
+#include <linux/bitops.h>
+#include <linux/io-pgtable.h>
+#include <linux/kernel.h>
+
+#include <asm/barrier.h>
+
+#include "amd_iommu_types.h"
+#include "amd_iommu.h"
+
+#define IOMMU_PAGE_PRESENT	BIT_ULL(0)	/* Is present */
+#define IOMMU_PAGE_RW		BIT_ULL(1)	/* Writeable */
+#define IOMMU_PAGE_USER		BIT_ULL(2)	/* Userspace addressable */
+#define IOMMU_PAGE_PWT		BIT_ULL(3)	/* Page write through */
+#define IOMMU_PAGE_PCD		BIT_ULL(4)	/* Page cache disabled */
+#define IOMMU_PAGE_ACCESS	BIT_ULL(5)	/* Was accessed (updated by IOMMU) */
+#define IOMMU_PAGE_DIRTY	BIT_ULL(6)	/* Was written to (updated by IOMMU) */
+#define IOMMU_PAGE_PSE		BIT_ULL(7)	/* Page Size Extensions */
+#define IOMMU_PAGE_NX		BIT_ULL(63)	/* No execute */
+
+#define MAX_PTRS_PER_PAGE	512
+
+#define IOMMU_PAGE_SIZE_2M	BIT_ULL(21)
+#define IOMMU_PAGE_SIZE_1G	BIT_ULL(30)
+
+
+static inline int get_pgtable_level(void)
+{
+	/* 5 level page table is not supported */
+	return PAGE_MODE_4_LEVEL;
+}
+
+static inline bool is_large_pte(u64 pte)
+{
+	return (pte & IOMMU_PAGE_PSE);
+}
+
+static inline void *alloc_pgtable_page(void)
+{
+	return (void *)get_zeroed_page(GFP_KERNEL);
+}
+
+static inline u64 set_pgtable_attr(u64 *page)
+{
+	u64 prot;
+
+	prot = IOMMU_PAGE_PRESENT | IOMMU_PAGE_RW | IOMMU_PAGE_USER;
+	prot |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
+
+	return (iommu_virt_to_phys(page) | prot);
+}
+
+static inline void *get_pgtable_pte(u64 pte)
+{
+	return iommu_phys_to_virt(pte & PM_ADDR_MASK);
+}
+
+static u64 set_pte_attr(u64 paddr, u64 pg_size, int prot)
+{
+	u64 pte;
+
+	pte = __sme_set(paddr & PM_ADDR_MASK);
+	pte |= IOMMU_PAGE_PRESENT | IOMMU_PAGE_USER;
+	pte |= IOMMU_PAGE_ACCESS | IOMMU_PAGE_DIRTY;
+
+	if (prot & IOMMU_PROT_IW)
+		pte |= IOMMU_PAGE_RW;
+
+	/* Large page */
+	if (pg_size == IOMMU_PAGE_SIZE_1G || pg_size == IOMMU_PAGE_SIZE_2M)
+		pte |= IOMMU_PAGE_PSE;
+
+	return pte;
+}
+
+static inline u64 get_alloc_page_size(u64 size)
+{
+	if (size >= IOMMU_PAGE_SIZE_1G)
+		return IOMMU_PAGE_SIZE_1G;
+
+	if (size >= IOMMU_PAGE_SIZE_2M)
+		return IOMMU_PAGE_SIZE_2M;
+
+	return PAGE_SIZE;
+}
+
+static inline int page_size_to_level(u64 pg_size)
+{
+	if (pg_size == IOMMU_PAGE_SIZE_1G)
+		return PAGE_MODE_3_LEVEL;
+	if (pg_size == IOMMU_PAGE_SIZE_2M)
+		return PAGE_MODE_2_LEVEL;
+
+	return PAGE_MODE_1_LEVEL;
+}
+
+static inline void free_pgtable_page(u64 *pt)
+{
+	free_page((unsigned long)pt);
+}
+
+static void free_pgtable(u64 *pt, int level)
+{
+	u64 *p;
+	int i;
+
+	for (i = 0; i < MAX_PTRS_PER_PAGE; i++) {
+		/* PTE present? */
+		if (!IOMMU_PTE_PRESENT(pt[i]))
+			continue;
+
+		if (is_large_pte(pt[i]))
+			continue;
+
+		/*
+		 * Free the next level. No need to look at l1 tables here since
+		 * they can only contain leaf PTEs; just free them directly.
+		 */
+		p = get_pgtable_pte(pt[i]);
+		if (level > 2)
+			free_pgtable(p, level - 1);
+		else
+			free_pgtable_page(p);
+	}
+
+	free_pgtable_page(pt);
+}
+
+/* Allocate page table */
+static u64 *v2_alloc_pte(u64 *pgd, unsigned long iova,
+			 unsigned long pg_size, bool *updated)
+{
+	u64 *pte, *page;
+	int level, end_level;
+
+	BUG_ON(!is_power_of_2(pg_size));
+
+	level = get_pgtable_level() - 1;
+	end_level = page_size_to_level(pg_size);
+	pte = &pgd[PM_LEVEL_INDEX(level, iova)];
+	iova = PAGE_SIZE_ALIGN(iova, PAGE_SIZE);
+
+	while (level >= end_level) {
+		u64 __pte, __npte;
+
+		__pte = *pte;
+
+		if (IOMMU_PTE_PRESENT(__pte) && is_large_pte(__pte)) {
+			/* Unmap large pte */
+			cmpxchg64(pte, *pte, 0ULL);
+			*updated = true;
+			continue;
+		}
+
+		if (!IOMMU_PTE_PRESENT(__pte)) {
+			page = alloc_pgtable_page();
+			if (!page)
+				return NULL;
+
+			__npte = set_pgtable_attr(page);
+			/* pte could have been changed somewhere. */
+			if (cmpxchg64(pte, __pte, __npte) != __pte)
+				free_pgtable_page(page);
+			else if (IOMMU_PTE_PRESENT(__pte))
+				*updated = true;
+
+			continue;
+		}
+
+		level -= 1;
+		pte = get_pgtable_pte(__pte);
+		pte = &pte[PM_LEVEL_INDEX(level, iova)];
+	}
+
+	/* Tear down existing pte entries */
+	if (IOMMU_PTE_PRESENT(*pte)) {
+		u64 *__pte;
+
+		*updated = true;
+		__pte = get_pgtable_pte(*pte);
+		cmpxchg64(pte, *pte, 0ULL);
+		if (pg_size == IOMMU_PAGE_SIZE_1G)
+			free_pgtable(__pte, end_level - 1);
+		else if (pg_size == IOMMU_PAGE_SIZE_2M)
+			free_pgtable_page(__pte);
+	}
+
+	return pte;
+}
+
+/*
+ * This function checks if there is a PTE for a given dma address.
+ * If there is one, it returns the pointer to it.
+ */
+static u64 *fetch_pte(struct amd_io_pgtable *pgtable,
+		      unsigned long iova, unsigned long *page_size)
+{
+	u64 *pte;
+	int level;
+
+	level = get_pgtable_level() - 1;
+	pte = &pgtable->pgd[PM_LEVEL_INDEX(level, iova)];
+	/* Default page size is 4K */
+	*page_size = PAGE_SIZE;
+
+	while (level) {
+		/* Not present */
+		if (!IOMMU_PTE_PRESENT(*pte))
+			return NULL;
+
+		/* Walk to the next level */
+		pte = get_pgtable_pte(*pte);
+		pte = &pte[PM_LEVEL_INDEX(level - 1, iova)];
+
+		/* Large page */
+		if (is_large_pte(*pte)) {
+			if (level == PAGE_MODE_3_LEVEL)
+				*page_size = IOMMU_PAGE_SIZE_1G;
+			else if (level == PAGE_MODE_2_LEVEL)
+				*page_size = IOMMU_PAGE_SIZE_2M;
+			else
+				return NULL;	/* Wrongly set PSE bit in PTE */
+
+			break;
+		}
+
+		level -= 1;
+	}
+
+	return pte;
+}
+
+static int iommu_v2_map_page(struct io_pgtable_ops *ops, unsigned long iova,
+			     phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+{
+	struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
+	u64 *pte;
+	unsigned long map_size;
+	unsigned long mapped_size = 0;
+	unsigned long o_iova = iova;
+	int count = 0;
+	int ret = 0;
+	bool updated = false;
+
+	BUG_ON(!IS_ALIGNED(iova, size) || !IS_ALIGNED(paddr, size));
+
+	if (!(prot & IOMMU_PROT_MASK))
+		return -EINVAL;
+
+	while (mapped_size < size) {
+		map_size = get_alloc_page_size(size - mapped_size);
+		pte = v2_alloc_pte(pdom->iop.pgd, iova, map_size, &updated);
+		if (!pte) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		*pte = set_pte_attr(paddr, map_size, prot);
+
+		count++;
+		iova += map_size;
+		paddr += map_size;
+		mapped_size += map_size;
+	}
+
+out:
+	if (updated) {
+		if (count > 1)
+			amd_iommu_flush_tlb(&pdom->domain, 0);
+		else
+			amd_iommu_flush_page(&pdom->domain, 0, o_iova);
+	}
+
+	return ret;
+}
+
+static unsigned long iommu_v2_unmap_page(struct io_pgtable_ops *ops,
+					 unsigned long iova, size_t size,
+					 struct iommu_iotlb_gather *gather)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	unsigned long unmap_size;
+	unsigned long unmapped = 0;
+	u64 *pte;
+
+	BUG_ON(!is_power_of_2(size));
+
+	while (unmapped < size) {
+		pte = fetch_pte(pgtable, iova, &unmap_size);
+		if (pte) {
+			*pte = 0ULL;
+		}
+
+		iova = (iova & ~(unmap_size - 1)) + unmap_size;
+		unmapped += unmap_size;
+	}
+
+	BUG_ON(unmapped && !is_power_of_2(unmapped));
+
+	return unmapped;
+}
+
+static phys_addr_t iommu_v2_iova_to_phys(struct io_pgtable_ops *ops, unsigned long iova)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_ops_to_data(ops);
+	unsigned long offset_mask, pte_pgsize;
+	u64 *pte, __pte;
+
+	pte = fetch_pte(pgtable, iova, &pte_pgsize);
+	if (!pte || !IOMMU_PTE_PRESENT(*pte))
+		return 0;
+
+	offset_mask = pte_pgsize - 1;
+	__pte = __sme_clr(*pte & PM_ADDR_MASK);
+
+	return (__pte & ~offset_mask) | (iova & offset_mask);
+}
+
+/*
+ * ----------------------------------------------------
+ */
+static void v2_tlb_flush_all(void *cookie)
+{
+}
+
+static void v2_tlb_flush_walk(unsigned long iova, size_t size,
+			      size_t granule, void *cookie)
+{
+}
+
+static void v2_tlb_add_page(struct iommu_iotlb_gather *gather,
+			    unsigned long iova, size_t granule,
+			    void *cookie)
+{
+}
+
+static const struct iommu_flush_ops v2_flush_ops = {
+	.tlb_flush_all	= v2_tlb_flush_all,
+	.tlb_flush_walk = v2_tlb_flush_walk,
+	.tlb_add_page	= v2_tlb_add_page,
+};
+
+static void v2_free_pgtable(struct io_pgtable *iop)
+{
+	struct protection_domain *pdom;
+	struct amd_io_pgtable *pgtable = container_of(iop, struct amd_io_pgtable, iop);
+
+	pdom = container_of(pgtable, struct protection_domain, iop);
+	if (!(pdom->flags & PD_IOMMUV2_MASK))
+		return;
+
+	/*
+	 * Make changes visible to IOMMUs. No need to clear gcr3 entry
+	 * as gcr3 table is already freed.
+	 */
+	amd_iommu_domain_update(pdom);
+
+	/* Free page table */
+	free_pgtable(pgtable->pgd, get_pgtable_level());
+}
+
+static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
+{
+	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
+	struct protection_domain *pdom = (struct protection_domain *)cookie;
+	int ret;
+
+	pgtable->pgd = alloc_pgtable_page();
+	if (!pgtable->pgd)
+		return NULL;
+
+	ret = amd_iommu_domain_set_gcr3(&pdom->domain, 0, iommu_virt_to_phys(pgtable->pgd));
+	if (ret)
+		goto err_free_pgd;
+
+	pgtable->iop.ops.map          = iommu_v2_map_page;
+	pgtable->iop.ops.unmap        = iommu_v2_unmap_page;
+	pgtable->iop.ops.iova_to_phys = iommu_v2_iova_to_phys;
+
+	cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+	cfg->ias           = IOMMU_IN_ADDR_BIT_SIZE;
+	cfg->oas           = IOMMU_OUT_ADDR_BIT_SIZE;
+	cfg->tlb           = &v2_flush_ops;
+
+	return &pgtable->iop;
+
+err_free_pgd:
+	free_pgtable_page(pgtable->pgd);
+
+	return NULL;
+}
+
+struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns = {
+	.alloc	= v2_alloc_pgtable,
+	.free	= v2_free_pgtable,
+};
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index f4bfcef98297..ac02a45ed798 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -27,6 +27,7 @@ io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
 #endif
 #ifdef CONFIG_AMD_IOMMU
 	[AMD_IOMMU_V1] = &io_pgtable_amd_iommu_v1_init_fns,
+	[AMD_IOMMU_V2] = &io_pgtable_amd_iommu_v2_init_fns,
 #endif
 };
 
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index 86af6f0a00a2..dc0f50bbdfa9 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -16,6 +16,7 @@ enum io_pgtable_fmt {
 	ARM_V7S,
 	ARM_MALI_LPAE,
 	AMD_IOMMU_V1,
+	AMD_IOMMU_V2,
 	APPLE_DART,
 	IO_PGTABLE_NUM_FMTS,
 };
@@ -255,6 +256,7 @@ extern struct io_pgtable_init_fns io_pgtable_arm_64_lpae_s2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_v7s_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_arm_mali_lpae_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_amd_iommu_v1_init_fns;
+extern struct io_pgtable_init_fns io_pgtable_amd_iommu_v2_init_fns;
 extern struct io_pgtable_init_fns io_pgtable_apple_dart_init_fns;
 
 #endif /* __IO_PGTABLE_H */
-- 
2.31.1
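
For orientation, the indexing scheme behind the walk in v2_alloc_pte()
and fetch_pte() can be sketched in Python (an illustration assuming the
usual x86-style layout of 9 index bits per level over a 12-bit page
offset, matching PM_LEVEL_INDEX(); not part of the patch):

```python
def level_index(iova, level):
    """Table index at 'level' (3 = top of a 4-level table) for an IOVA.

    Each level consumes 9 bits; the low 12 bits are the 4K page offset.
    """
    return (iova >> (12 + 9 * level)) & 0x1FF

def leaf_size(level):
    """Mapping size if the walk ends with a leaf at 'level'."""
    return 1 << (12 + 9 * level)  # 4K at level 0, 2M at level 1, 1G at level 2
```

A 2M mapping is therefore a PSE leaf one level above the bottom, and a
1G mapping two levels above, which is what page_size_to_level() encodes.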


* [PATCH v2 4/6] iommu/amd: Add support for Guest IO protection
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (2 preceding siblings ...)
  2022-07-13  5:30 ` [PATCH v2 3/6] iommu/amd: Initial support for AMD IOMMU v2 page table Vasant Hegde
@ 2022-07-13  5:30 ` Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 5/6] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API Vasant Hegde
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

AMD IOMMU introduces support for Guest I/O protection, where requests
from an I/O device without a PASID are treated as if they have PASID 0.

Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
Changes in v2:
  - Added passthrough mode check
  - Added FEATURE_GT check

 drivers/iommu/amd/amd_iommu_types.h |  3 +++
 drivers/iommu/amd/init.c            | 13 +++++++++++++
 drivers/iommu/amd/iommu.c           |  4 ++++
 3 files changed, 20 insertions(+)

diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index a6d6a3c7c5f3..63bf521ba20c 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -93,6 +93,7 @@
 #define FEATURE_HE		(1ULL<<8)
 #define FEATURE_PC		(1ULL<<9)
 #define FEATURE_GAM_VAPIC	(1ULL<<21)
+#define FEATURE_GIOSUP		(1ULL<<48)
 #define FEATURE_EPHSUP		(1ULL<<50)
 #define FEATURE_SNP		(1ULL<<63)
 
@@ -370,6 +371,7 @@
 #define DTE_FLAG_IW (1ULL << 62)
 
 #define DTE_FLAG_IOTLB	(1ULL << 32)
+#define DTE_FLAG_GIOV	(1ULL << 54)
 #define DTE_FLAG_GV	(1ULL << 55)
 #define DTE_FLAG_MASK	(0x3ffULL << 32)
 #define DTE_GLX_SHIFT	(56)
@@ -428,6 +430,7 @@
 #define PD_PASSTHROUGH_MASK	(1UL << 2) /* domain has no page
 					      translation */
 #define PD_IOMMUV2_MASK		(1UL << 3) /* domain has gcr3 table */
+#define PD_GIOV_MASK		(1UL << 4) /* domain enable GIOV support */
 
 extern bool amd_iommu_dump;
 #define DUMP_printk(format, arg...)				\
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 3c82d9c5f1c0..a2929d8b44f4 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -2026,6 +2026,17 @@ static int __init iommu_init_pci(struct amd_iommu *iommu)
 
 	init_iommu_perf_ctr(iommu);
 
+	if (amd_iommu_pgtable == AMD_IOMMU_V2) {
+		if (!iommu_feature(iommu, FEATURE_GIOSUP) ||
+		    !iommu_feature(iommu, FEATURE_GT)) {
+			pr_warn("Cannot enable v2 page table for DMA-API. Fallback to v1.\n");
+			amd_iommu_pgtable = AMD_IOMMU_V1;
+		} else if (iommu_default_passthrough()) {
+			pr_warn("V2 page table doesn't support passthrough mode. Fallback to v1.\n");
+			amd_iommu_pgtable = AMD_IOMMU_V1;
+		}
+	}
+
 	if (is_rd890_iommu(iommu->dev)) {
 		int i, j;
 
@@ -2100,6 +2111,8 @@ static void print_iommu_info(void)
 		if (amd_iommu_xt_mode == IRQ_REMAP_X2APIC_MODE)
 			pr_info("X2APIC enabled\n");
 	}
+	if (amd_iommu_pgtable == AMD_IOMMU_V2)
+		pr_info("V2 page table enabled\n");
 }
 
 static int __init amd_iommu_init_pci(void)
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 027ecd5fb4b2..2e67e6eecc31 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1553,6 +1553,7 @@ static void set_dte_entry(struct amd_iommu *iommu, u16 devid,
 
 	pte_root |= (domain->iop.mode & DEV_ENTRY_MODE_MASK)
 		    << DEV_ENTRY_MODE_SHIFT;
+
 	pte_root |= DTE_FLAG_IR | DTE_FLAG_IW | DTE_FLAG_V | DTE_FLAG_TV;
 
 	flags = dev_table[devid].data[1];
@@ -1589,6 +1590,9 @@ static void set_dte_entry(struct amd_iommu *iommu, u16 devid,
 
 		tmp = DTE_GCR3_VAL_C(gcr3) << DTE_GCR3_SHIFT_C;
 		flags    |= tmp;
+
+		if (domain->flags & PD_GIOV_MASK)
+			pte_root |= DTE_FLAG_GIOV;
 	}
 
 	flags &= ~DEV_DOMID_MASK;
-- 
2.31.1
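
To make the DTE change above concrete, the bit arithmetic can be
sketched as follows (a Python illustration using the DTE_FLAG_* values
defined in this patch; the helper function is hypothetical, not kernel
code):

```python
DTE_FLAG_GV   = 1 << 55  # guest translation valid
DTE_FLAG_GIOV = 1 << 54  # treat PASID-less requests as PASID 0

def dte_guest_bits(giov_enabled):
    """Bits OR'd into the DTE root pointer quadword when the domain has
    a gcr3 table (i.e. guest translation is in use)."""
    bits = DTE_FLAG_GV
    if giov_enabled:
        bits |= DTE_FLAG_GIOV
    return bits
```

This mirrors the flow in set_dte_entry(): GIOV is only set inside the
gcr3-table branch, so it never appears without GV.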


* [PATCH v2 5/6] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (3 preceding siblings ...)
  2022-07-13  5:30 ` [PATCH v2 4/6] iommu/amd: Add support for Guest IO protection Vasant Hegde
@ 2022-07-13  5:30 ` Vasant Hegde
  2022-07-13  5:30 ` [PATCH v2 6/6] iommu/amd: Add command-line option to enable different page table Vasant Hegde
  2022-07-15  8:49 ` [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Joerg Roedel
  6 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Introduce an init function for setting up the DMA domain for DMA-API
with the IOMMU v2 page table.

Co-developed-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
Changes in v2:
  - Added support to change supported page sizes dynamically based on domain type.

 drivers/iommu/amd/iommu.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 2e67e6eecc31..372621dfc526 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1642,6 +1642,10 @@ static void do_attach(struct iommu_dev_data *dev_data,
 	domain->dev_iommu[iommu->index] += 1;
 	domain->dev_cnt                 += 1;
 
+	/* Override supported page sizes */
+	if (domain->flags & PD_GIOV_MASK)
+		domain->domain.pgsize_bitmap = AMD_IOMMU_PGSIZES_V2;
+
 	/* Update device table */
 	set_dte_entry(iommu, dev_data->devid, domain,
 		      ats, dev_data->iommu_v2);
@@ -2021,6 +2025,24 @@ static int protection_domain_init_v1(struct protection_domain *domain, int mode)
 	return 0;
 }
 
+static int protection_domain_init_v2(struct protection_domain *domain)
+{
+	spin_lock_init(&domain->lock);
+	domain->id = domain_id_alloc();
+	if (!domain->id)
+		return -ENOMEM;
+	INIT_LIST_HEAD(&domain->dev_list);
+
+	domain->flags |= PD_GIOV_MASK;
+
+	if (domain_enable_v2(domain, 1)) {
+		domain_id_free(domain->id);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
 static struct protection_domain *protection_domain_alloc(unsigned int type)
 {
 	struct io_pgtable_ops *pgtbl_ops;
@@ -2048,6 +2070,9 @@ static struct protection_domain *protection_domain_alloc(unsigned int type)
 	case AMD_IOMMU_V1:
 		ret = protection_domain_init_v1(domain, mode);
 		break;
+	case AMD_IOMMU_V2:
+		ret = protection_domain_init_v2(domain);
+		break;
 	default:
 		ret = -EINVAL;
 	}
-- 
2.31.1
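
The dynamic page-size override above boils down to a bitmap in which
each set bit is a supported mapping size (a Python sketch of
AMD_IOMMU_PGSIZES_V2 from patch 3, assuming a 4K PAGE_SIZE):

```python
PAGE_SIZE = 1 << 12
AMD_IOMMU_PGSIZES_V2 = PAGE_SIZE | (1 << 21) | (1 << 30)  # 4K, 2M, 1G

def size_supported(size):
    """A mapping size is usable iff its bit is set in the pgsize bitmap."""
    return bool(AMD_IOMMU_PGSIZES_V2 & size)
```

Swapping domain->domain.pgsize_bitmap at attach time, as do_attach() now
does for GIOV domains, is what lets the IOMMU core split requests into
only the sizes the v2 table can map.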


* [PATCH v2 6/6] iommu/amd: Add command-line option to enable different page table
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (4 preceding siblings ...)
  2022-07-13  5:30 ` [PATCH v2 5/6] iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API Vasant Hegde
@ 2022-07-13  5:30 ` Vasant Hegde
  2022-07-15  8:49 ` [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Joerg Roedel
  6 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-13  5:30 UTC (permalink / raw)
  To: joro, iommu; +Cc: suravee.suthikulpanit, robin.murphy, Vasant Hegde

Enhance the amd_iommu command-line option to allow specifying the v1 or
v2 page table. By default, the system will boot in v1 page table mode.

Co-developed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Vasant Hegde <vasant.hegde@amd.com>
---
Changes in v2:
  - Removed the separate command-line option and enhanced the amd_iommu option instead
  - Updated the documentation

 .../admin-guide/kernel-parameters.txt         |  2 ++
 drivers/iommu/amd/init.c                      | 23 +++++++++++++++----
 2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index d45e58328ce6..d1919bf4f37a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -321,6 +321,8 @@
 			force_enable - Force enable the IOMMU on platforms known
 				       to be buggy with IOMMU enabled. Use this
 				       option with care.
+			pgtbl_v1     - Use v1 page table for DMA-API (Default).
+			pgtbl_v2     - Use v2 page table for DMA-API.
 
 	amd_iommu_dump=	[HW,X86-64]
 			Enable AMD IOMMU driver option to dump the ACPI table
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index a2929d8b44f4..673c263fea8d 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -3287,17 +3287,30 @@ static int __init parse_amd_iommu_intr(char *str)
 
 static int __init parse_amd_iommu_options(char *str)
 {
-	for (; *str; ++str) {
+	if (!str)
+		return -EINVAL;
+
+	while (*str) {
 		if (strncmp(str, "fullflush", 9) == 0) {
 			pr_warn("amd_iommu=fullflush deprecated; use iommu.strict=1 instead\n");
 			iommu_set_dma_strict();
-		}
-		if (strncmp(str, "force_enable", 12) == 0)
+		} else if (strncmp(str, "force_enable", 12) == 0) {
 			amd_iommu_force_enable = true;
-		if (strncmp(str, "off", 3) == 0)
+		} else if (strncmp(str, "off", 3) == 0) {
 			amd_iommu_disabled = true;
-		if (strncmp(str, "force_isolation", 15) == 0)
+		} else if (strncmp(str, "force_isolation", 15) == 0) {
 			amd_iommu_force_isolation = true;
+		} else if (strncmp(str, "pgtbl_v1", 8) == 0) {
+			amd_iommu_pgtable = AMD_IOMMU_V1;
+		} else if (strncmp(str, "pgtbl_v2", 8) == 0) {
+			amd_iommu_pgtable = AMD_IOMMU_V2;
+		} else {
+			pr_notice("Unknown option - '%s'\n", str);
+		}
+
+		str += strcspn(str, ",");
+		while (*str == ',')
+			str++;
 	}
 
 	return 1;
-- 
2.31.1



* Re: [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-07-13  5:30 [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Vasant Hegde
                   ` (5 preceding siblings ...)
  2022-07-13  5:30 ` [PATCH v2 6/6] iommu/amd: Add command-line option to enable different page table Vasant Hegde
@ 2022-07-15  8:49 ` Joerg Roedel
  2022-07-21 10:48   ` Vasant Hegde
  6 siblings, 1 reply; 11+ messages in thread
From: Joerg Roedel @ 2022-07-15  8:49 UTC (permalink / raw)
  To: Vasant Hegde; +Cc: iommu, suravee.suthikulpanit, robin.murphy

Hi Vasant,

On Wed, Jul 13, 2022 at 11:00:28AM +0530, Vasant Hegde wrote:
> Suravee Suthikulpanit (4):
>   iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
>   iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
>   iommu/amd: Add support for Guest IO protection
>   iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API
> 
> Vasant Hegde (2):
>   iommu/amd: Initial support for AMD IOMMU v2 page table
>   iommu/amd: Add command-line option to enable different page table

Thanks for the submission, but I have to defer it until the next cycle.
With the PCI multi-domain support and the SNP-enablement there are
already some intrusive patches queued up for the AMD driver, and I want
to let the dust settle around those first.

Please remind me after the next merge window about this series.

Thanks,

	Joerg


* Re: [PATCH v2 3/6] iommu/amd: Initial support for AMD IOMMU v2 page table
  2022-07-13  5:30 ` [PATCH v2 3/6] iommu/amd: Initial support for AMD IOMMU v2 page table Vasant Hegde
@ 2022-07-15 11:55   ` Robin Murphy
  2022-07-21 10:52     ` Vasant Hegde
  0 siblings, 1 reply; 11+ messages in thread
From: Robin Murphy @ 2022-07-15 11:55 UTC (permalink / raw)
  To: Vasant Hegde, joro, iommu; +Cc: suravee.suthikulpanit

On 2022-07-13 06:30, Vasant Hegde wrote:
[...]
> +static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
> +{
> +	struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
> +	struct protection_domain *pdom = (struct protection_domain *)cookie;
> +	int ret;
> +
> +	pgtable->pgd = alloc_pgtable_page();
> +	if (!pgtable->pgd)
> +		return NULL;
> +
> +	ret = amd_iommu_domain_set_gcr3(&pdom->domain, 0, iommu_virt_to_phys(pgtable->pgd));
> +	if (ret)
> +		goto err_free_pgd;
> +
> +	pgtable->iop.ops.map          = iommu_v2_map_page;
> +	pgtable->iop.ops.unmap        = iommu_v2_unmap_page;

It would be good to use this opportunity to convert AMD over to 
map_pages/unmap_pages - not only for the future goal of removing 
map/unmap (it's not worth maintaining two interfaces where one is a 
trivial subset of the other), but also since it should be appreciably 
more efficient for you in practice, especially for v2 with the more 
limited set of basic page sizes.

Cheers,
Robin.

> +	pgtable->iop.ops.iova_to_phys = iommu_v2_iova_to_phys;
> +
> +	cfg->pgsize_bitmap = AMD_IOMMU_PGSIZES_V2,
> +	cfg->ias           = IOMMU_IN_ADDR_BIT_SIZE,
> +	cfg->oas           = IOMMU_OUT_ADDR_BIT_SIZE,
> +	cfg->tlb           = &v2_flush_ops;
> +
> +	return &pgtable->iop;
> +
> +err_free_pgd:
> +	free_pgtable_page(pgtable->pgd);
> +
> +	return NULL;
> +}


* Re: [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table
  2022-07-15  8:49 ` [PATCH v2 0/6] iommu/amd: Add Generic IO Page Table Framework Support for v2 Page Table Joerg Roedel
@ 2022-07-21 10:48   ` Vasant Hegde
  0 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-21 10:48 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: iommu, suravee.suthikulpanit, robin.murphy

Hi Joerg,


On 7/15/2022 2:19 PM, Joerg Roedel wrote:
> Hi Vasant,
> 
> On Wed, Jul 13, 2022 at 11:00:28AM +0530, Vasant Hegde wrote:
>> Suravee Suthikulpanit (4):
>>   iommu/amd: Refactor amd_iommu_domain_enable_v2 to remove locking
>>   iommu/amd: Update sanity check when enable PRI/ATS for IOMMU v1 table
>>   iommu/amd: Add support for Guest IO protection
>>   iommu/amd: Add support for using AMD IOMMU v2 page table for DMA-API
>>
>> Vasant Hegde (2):
>>   iommu/amd: Initial support for AMD IOMMU v2 page table
>>   iommu/amd: Add command-line option to enable different page table
> 
> Thanks for the submission, but I have to defer it until the next cycle.
> With the PCI multi-domain support and the SNP-enablement there are
> already some intrusive patches queued up for the AMD driver, and I want
> to let the dust settle around those first.
> 
> Please remind me after the next merge window about this series.

Sure. I will probably post v3 soon with the map_pages/unmap_pages interface.

Thanks
-Vasant

> 
> Thanks,
> 
> 	Joerg


* Re: [PATCH v2 3/6] iommu/amd: Initial support for AMD IOMMU v2 page table
  2022-07-15 11:55   ` Robin Murphy
@ 2022-07-21 10:52     ` Vasant Hegde
  0 siblings, 0 replies; 11+ messages in thread
From: Vasant Hegde @ 2022-07-21 10:52 UTC (permalink / raw)
  To: Robin Murphy, joro, iommu; +Cc: suravee.suthikulpanit

Hi Robin,


On 7/15/2022 5:25 PM, Robin Murphy wrote:
> On 2022-07-13 06:30, Vasant Hegde wrote:
> [...]
>> +static struct io_pgtable *v2_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
>> +{
>> +    struct amd_io_pgtable *pgtable = io_pgtable_cfg_to_data(cfg);
>> +    struct protection_domain *pdom = (struct protection_domain *)cookie;
>> +    int ret;
>> +
>> +    pgtable->pgd = alloc_pgtable_page();
>> +    if (!pgtable->pgd)
>> +        return NULL;
>> +
>> +    ret = amd_iommu_domain_set_gcr3(&pdom->domain, 0, iommu_virt_to_phys(pgtable->pgd));
>> +    if (ret)
>> +        goto err_free_pgd;
>> +
>> +    pgtable->iop.ops.map          = iommu_v2_map_page;
>> +    pgtable->iop.ops.unmap        = iommu_v2_unmap_page;
> 
> It would be good to use this opportunity to convert AMD over to map_pages/unmap_pages - not only for the future goal of removing map/unmap (it's not worth maintaining two interfaces where one is a trivial subset of the other), but also since it should be appreciably more efficient for you in practice, especially for v2 with the more limited set of basic page sizes.
> 

Makes sense. I will work on this and post v3 soon.

Thanks
-Vasant


