* [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU
@ 2023-12-20 13:43 Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 1/6] x86/p2m: move and rename paging_max_paddr_bits() Roger Pau Monne
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu,
	Paul Durrant, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka

Hello,

The aim of the series is to reduce the time spent setting up the IOMMU page
tables for dom0 at boot.

The first and second patches are a prerequisite: later patches can end up
attempting to create mappings above the maximum RAM address, and without the
IOMMU page table levels set properly those mapping attempts would fail.

The last 4 patches rework the hardware domain IOMMU setup to use a rangeset
instead of iterating over all addresses up to the maximum RAM page.  See
patch 5/6 for performance figures.

Thanks, Roger.

Roger Pau Monne (6):
  x86/p2m: move and rename paging_max_paddr_bits()
  amd-vi: set IOMMU page table levels based on guest reported paddr
    width
  x86/iommu: introduce a rangeset to perform hwdom IOMMU setup
  x86/iommu: remove regions not to be mapped
  x86/iommu: switch hwdom IOMMU to use a rangeset
  x86/iommu: cleanup unused functions

 xen/arch/x86/cpu-policy.c                   |   2 +-
 xen/arch/x86/domain.c                       |  21 ++
 xen/arch/x86/hvm/io.c                       |  15 +-
 xen/arch/x86/include/asm/domain.h           |   3 +
 xen/arch/x86/include/asm/hvm/io.h           |   4 +-
 xen/arch/x86/include/asm/paging.h           |  22 --
 xen/arch/x86/include/asm/setup.h            |   2 +-
 xen/arch/x86/setup.c                        |  81 +++---
 xen/arch/x86/tboot.c                        |   2 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |  20 +-
 xen/drivers/passthrough/x86/iommu.c         | 271 +++++++++++++-------
 11 files changed, 262 insertions(+), 181 deletions(-)

-- 
2.43.0




* [PATCH v4 1/6] x86/p2m: move and rename paging_max_paddr_bits()
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
@ 2023-12-20 13:43 ` Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 2/6] amd-vi: set IOMMU page table levels based on guest reported paddr width Roger Pau Monne
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Wei Liu

The function also supports non-paging domains, and hence its placement in
paging.h and its paging_ prefix are misleading.

Move it to x86's domain.c and rename it to domain_max_paddr_bits().  Moving it
to a different header is non-trivial, as the function depends on helpers
declared in paging.h.  There's no performance reason for the function to be
inline.

Note the function is safe to use on PV or system domains, as it checks
whether the domain uses external paging; if not, the returned physical
address width is the host (native) value.
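
As a worked illustration of the function's behaviour (host values made up):
for a HAP guest on a host with hap_paddr_bits == 48 the result is
min(48, 48) == 48 bits; a shadow HVM guest on a !CONFIG_BIGMEM build instead
starts from the host paddr_bits and is capped to 32 + PAGE_SHIFT == 44 bits;
for a PV domain paging_mode_external() is false, so the host paddr_bits
value is returned unmodified.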

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v3:
 - Expand changelog.

Changes since v2:
 - New in this version.
---
 xen/arch/x86/cpu-policy.c         |  2 +-
 xen/arch/x86/domain.c             | 21 +++++++++++++++++++++
 xen/arch/x86/include/asm/domain.h |  3 +++
 xen/arch/x86/include/asm/paging.h | 22 ----------------------
 4 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 423932bc13d6..76efb050edf7 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -864,7 +864,7 @@ void recalculate_cpuid_policy(struct domain *d)
 
     p->extd.maxphysaddr = min(p->extd.maxphysaddr, max->extd.maxphysaddr);
     p->extd.maxphysaddr = min_t(uint8_t, p->extd.maxphysaddr,
-                                paging_max_paddr_bits(d));
+                                domain_max_paddr_bits(d));
     p->extd.maxphysaddr = max_t(uint8_t, p->extd.maxphysaddr,
                                 (p->basic.pae || p->basic.pse36) ? 36 : 32);
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 3712e36df930..8a31d18f6967 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2552,6 +2552,27 @@ static int __init cf_check init_vcpu_kick_softirq(void)
 }
 __initcall(init_vcpu_kick_softirq);
 
+unsigned int domain_max_paddr_bits(const struct domain *d)
+{
+    unsigned int bits = paging_mode_hap(d) ? hap_paddr_bits : paddr_bits;
+
+    if ( paging_mode_external(d) )
+    {
+        if ( !IS_ENABLED(CONFIG_BIGMEM) && paging_mode_shadow(d) )
+        {
+            /* Shadowed superpages store GFNs in 32-bit page_info fields. */
+            bits = min(bits, 32U + PAGE_SHIFT);
+        }
+        else
+        {
+            /* Both p2m-ept and p2m-pt only support 4-level page tables. */
+            bits = min(bits, 48U);
+        }
+    }
+
+    return bits;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 4b6b7ceab1ed..622d22bef255 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -777,6 +777,9 @@ static inline void pv_inject_sw_interrupt(unsigned int vector)
 struct arch_vcpu_io {
 };
 
+/* Maxphysaddr supportable by the paging infrastructure. */
+unsigned int domain_max_paddr_bits(const struct domain *d);
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
diff --git a/xen/arch/x86/include/asm/paging.h b/xen/arch/x86/include/asm/paging.h
index 76162a9429ce..8a2a0af40874 100644
--- a/xen/arch/x86/include/asm/paging.h
+++ b/xen/arch/x86/include/asm/paging.h
@@ -336,28 +336,6 @@ static inline bool gfn_valid(const struct domain *d, gfn_t gfn)
     return !(gfn_x(gfn) >> (d->arch.cpuid->extd.maxphysaddr - PAGE_SHIFT));
 }
 
-/* Maxphysaddr supportable by the paging infrastructure. */
-static always_inline unsigned int paging_max_paddr_bits(const struct domain *d)
-{
-    unsigned int bits = paging_mode_hap(d) ? hap_paddr_bits : paddr_bits;
-
-    if ( paging_mode_external(d) )
-    {
-        if ( !IS_ENABLED(CONFIG_BIGMEM) && paging_mode_shadow(d) )
-        {
-            /* Shadowed superpages store GFNs in 32-bit page_info fields. */
-            bits = min(bits, 32U + PAGE_SHIFT);
-        }
-        else
-        {
-            /* Both p2m-ept and p2m-pt only support 4-level page tables. */
-            bits = min(bits, 48U);
-        }
-    }
-
-    return bits;
-}
-
 #endif /* XEN_PAGING_H */
 
 /*
-- 
2.43.0




* [PATCH v4 2/6] amd-vi: set IOMMU page table levels based on guest reported paddr width
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 1/6] x86/p2m: move and rename paging_max_paddr_bits() Roger Pau Monne
@ 2023-12-20 13:43 ` Roger Pau Monne
  2023-12-21  8:13   ` Jan Beulich
  2023-12-20 13:43 ` [PATCH v4 3/6] x86/iommu: introduce a rangeset to perform hwdom IOMMU setup Roger Pau Monne
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper

Set the IOMMU page table levels based on the physical address width reported
to the guest.  However, take into account the minimum number of levels
required by unity maps when setting the page table levels.

The previous setting of the page table levels for PV guests based on the
highest RAM address was bogus, as there can be other non-RAM regions past the
highest RAM address that need to be mapped, for example device MMIO.

For HVM we also take amd_iommu_min_paging_mode into account; however, if
unity maps require more than 4 levels, attempting to add those will currently
fail at the p2m level, as 4 levels is the maximum supported.
Fixes: 0700c962ac2d ('Add AMD IOMMU support into hypervisor')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v3:
 - Avoid possible UB if domain_max_paddr_bits() ever returns 64.

Changes since v2:
 - Use the renamed domain_max_paddr_bits().

Changes since v1:
 - Use paging_max_paddr_bits() instead of hardcoding
   DEFAULT_DOMAIN_ADDRESS_WIDTH.
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 6bc73dc21052..146cf60084f4 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -359,21 +359,17 @@ int __read_mostly amd_iommu_min_paging_mode = 1;
 static int cf_check amd_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    int pglvl = amd_iommu_get_paging_mode(
+                1UL << (domain_max_paddr_bits(d) - PAGE_SHIFT));
+
+    if ( pglvl < 0 )
+        return pglvl;
 
     /*
-     * Choose the number of levels for the IOMMU page tables.
-     * - PV needs 3 or 4, depending on whether there is RAM (including hotplug
-     *   RAM) above the 512G boundary.
-     * - HVM could in principle use 3 or 4 depending on how much guest
-     *   physical address space we give it, but this isn't known yet so use 4
-     *   unilaterally.
-     * - Unity maps may require an even higher number.
+     * Choose the number of levels for the IOMMU page tables, taking into
+     * account unity maps.
      */
-    hd->arch.amd.paging_mode = max(amd_iommu_get_paging_mode(
-            is_hvm_domain(d)
-            ? 1UL << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT)
-            : get_upper_mfn_bound() + 1),
-        amd_iommu_min_paging_mode);
+    hd->arch.amd.paging_mode = max(pglvl, amd_iommu_min_paging_mode);
 
     return 0;
 }
-- 
2.43.0




* [PATCH v4 3/6] x86/iommu: introduce a rangeset to perform hwdom IOMMU setup
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 1/6] x86/p2m: move and rename paging_max_paddr_bits() Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 2/6] amd-vi: set IOMMU page table levels based on guest reported paddr width Roger Pau Monne
@ 2023-12-20 13:43 ` Roger Pau Monne
  2024-01-09 11:29   ` Jan Beulich
  2023-12-20 13:43 ` [PATCH v4 4/6] x86/iommu: remove regions not to be mapped Roger Pau Monne
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Paul Durrant

This change just introduces the boilerplate code needed to use a rangeset
when setting up the hardware domain IOMMU mappings.  The rangeset is never
populated in this patch, so it's a non-functional change as far as the
mappings established for the domain are concerned.

Note there will be a change for HVM domains (ie: PVH dom0) when the code
introduced here gets used: the p2m mappings will be established using
map_mmio_regions() instead of p2m_add_identity_entry(), so that ranges can be
mapped with a single function call if possible.  Note that the interface of
map_mmio_regions() doesn't allow creating read-only mappings, but so far there
are no such mappings created for PVH dom0 in arch_iommu_hwdom_init().
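
For reference, a minimal sketch of how the boilerplate introduced here is
meant to be driven (the frame numbers are made up for illustration):

    struct map_data map_data = { .d = d };
    struct rangeset *map = rangeset_new(NULL, NULL, 0);

    rangeset_add_range(map, 0x100, 0x2ff);    /* candidate identity ranges */
    rangeset_remove_range(map, 0x180, 0x18f); /* punch a hole */
    /* identity_map() is then invoked once per remaining range: */
    rangeset_report_ranges(map, 0, ~0UL, identity_map, &map_data);
    rangeset_destroy(map);
    /* Flush with the flags accumulated by the map operations. */
    iommu_iotlb_flush_all(d, map_data.flush_flags);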

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Fix one style issue and one typo in a comment.
 - Use map_data.flush_flags in replacement of flush_flags.
 - Return -EOPNOTSUPP on failure to create RO mappings for HVM.
 - Do not panic if mapping of ranges fails, print an error message.

Changes since v2:
 - Better deal with read-only regions.
 - Destroy rangeset earlier.
 - Switch to using map_data.flush_flags.
 - Reword commit message to clarify the change in behaviour for HVM will only
   take effect after later changes.

Changes since v1:
 - Split from bigger patch.
---
 xen/drivers/passthrough/x86/iommu.c | 103 +++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 857dccb6a465..59b0c7e980ca 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -370,10 +370,88 @@ static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
     return perms;
 }
 
+struct map_data {
+    struct domain *d;
+    unsigned int flush_flags;
+    bool mmio_ro;
+};
+
+static int __hwdom_init cf_check identity_map(unsigned long s, unsigned long e,
+                                              void *data)
+{
+    struct map_data *info = data;
+    struct domain *d = info->d;
+    long rc;
+
+    if ( iommu_verbose )
+        printk(XENLOG_INFO " [%010lx, %010lx] R%c\n",
+               s, e, info->mmio_ro ? 'O' : 'W');
+
+    if ( paging_mode_translate(d) )
+    {
+        if ( info->mmio_ro )
+        {
+            ASSERT_UNREACHABLE();
+            /* End the rangeset iteration, as other regions will also fail. */
+            return -EOPNOTSUPP;
+        }
+        while ( (rc = map_mmio_regions(d, _gfn(s), e - s + 1, _mfn(s))) > 0 )
+        {
+            s += rc;
+            process_pending_softirqs();
+        }
+    }
+    else
+    {
+        const unsigned int perms = IOMMUF_readable | IOMMUF_preempt |
+                                   (info->mmio_ro ? 0 : IOMMUF_writable);
+
+        /*
+         * Read-only ranges are strictly MMIO and need an additional iomem
+         * permissions check.
+         */
+        while ( info->mmio_ro && s <= e && !iomem_access_permitted(d, s, e) )
+        {
+            /*
+             * Consume a frame per iteration until the remainder is accessible
+             * or there's nothing left to map.
+             */
+            if ( iomem_access_permitted(d, s, s) )
+            {
+                rc = iommu_map(d, _dfn(s), _mfn(s), 1, perms,
+                               &info->flush_flags);
+                if ( rc < 0 )
+                    break;
+                /* Must map a frame at least, which is what we request for. */
+                ASSERT(rc == 1);
+                process_pending_softirqs();
+            }
+            s++;
+        }
+        while ( (rc = iommu_map(d, _dfn(s), _mfn(s), e - s + 1,
+                                perms, &info->flush_flags)) > 0 )
+        {
+            s += rc;
+            process_pending_softirqs();
+        }
+    }
+    ASSERT(rc <= 0);
+    if ( rc )
+        printk(XENLOG_WARNING
+               "IOMMU identity mapping of [%lx, %lx] failed: %ld\n",
+               s, e, rc);
+
+    /* Ignore errors and attempt to map the remaining regions. */
+    return 0;
+}
+
 void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
 {
     unsigned long i, top, max_pfn, start, count;
-    unsigned int flush_flags = 0, start_perms = 0;
+    unsigned int start_perms = 0;
+    struct rangeset *map;
+    struct map_data map_data = { .d = d };
+    int rc;
 
     BUG_ON(!is_hardware_domain(d));
 
@@ -397,6 +475,10 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
     if ( iommu_hwdom_passthrough )
         return;
 
+    map = rangeset_new(NULL, NULL, 0);
+    if ( !map )
+        panic("IOMMU init: unable to allocate rangeset\n");
+
     max_pfn = (GB(4) >> PAGE_SHIFT) - 1;
     top = max(max_pdx, pfn_to_pdx(max_pfn) + 1);
 
@@ -427,7 +509,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         commit:
             while ( (rc = iommu_map(d, _dfn(start), _mfn(start), count,
                                     start_perms | IOMMUF_preempt,
-                                    &flush_flags)) > 0 )
+                                    &map_data.flush_flags)) > 0 )
             {
                 start += rc;
                 count -= rc;
@@ -451,8 +533,23 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
             goto commit;
     }
 
+    if ( iommu_verbose )
+        printk(XENLOG_INFO "%pd: identity mappings for IOMMU:\n", d);
+
+    rc = rangeset_report_ranges(map, 0, ~0UL, identity_map, &map_data);
+    rangeset_destroy(map);
+    if ( !rc && is_pv_domain(d) )
+    {
+        map_data.mmio_ro = true;
+        rc = rangeset_report_ranges(mmio_ro_ranges, 0, ~0UL, identity_map,
+                                    &map_data);
+    }
+    if ( rc )
+        printk(XENLOG_WARNING "IOMMU unable to create %smappings: %d\n",
+               map_data.mmio_ro ? "read-only " : "", rc);
+
     /* Use if to avoid compiler warning */
-    if ( iommu_iotlb_flush_all(d, flush_flags) )
+    if ( iommu_iotlb_flush_all(d, map_data.flush_flags) )
         return;
 }
 
-- 
2.43.0




* [PATCH v4 4/6] x86/iommu: remove regions not to be mapped
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
                   ` (2 preceding siblings ...)
  2023-12-20 13:43 ` [PATCH v4 3/6] x86/iommu: introduce a rangeset to perform hwdom IOMMU setup Roger Pau Monne
@ 2023-12-20 13:43 ` Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 5/6] x86/iommu: switch hwdom IOMMU to use a rangeset Roger Pau Monne
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Paul Durrant, Jan Beulich, Andrew Cooper, Wei Liu

Introduce the code to remove regions not to be mapped from the rangeset
that will be used to set up the IOMMU page tables for the hardware domain.

This change also introduces two new functions, remove_xen_ranges() and
vpci_subtract_mmcfg(), which replicate the logic of xen_in_range() and
vpci_is_mmcfg_address() respectively, removing from the rangeset the ranges
that the original functions would have filtered out.

Note that the rangeset is still not populated.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v3:
 - Remove unnecessary line wrapping.

Changes since v1:
 - Split from bigger patch.
---
 xen/arch/x86/hvm/io.c               | 16 ++++++++
 xen/arch/x86/include/asm/hvm/io.h   |  3 ++
 xen/arch/x86/include/asm/setup.h    |  1 +
 xen/arch/x86/setup.c                | 48 +++++++++++++++++++++++
 xen/drivers/passthrough/x86/iommu.c | 61 +++++++++++++++++++++++++++++
 5 files changed, 129 insertions(+)

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index d75af83ad01f..a42854c52b65 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -369,6 +369,22 @@ bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
     return vpci_mmcfg_find(d, addr);
 }
 
+int __hwdom_init vpci_subtract_mmcfg(const struct domain *d, struct rangeset *r)
+{
+    const struct hvm_mmcfg *mmcfg;
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+    {
+        int rc = rangeset_remove_range(r, PFN_DOWN(mmcfg->addr),
+                                       PFN_DOWN(mmcfg->addr + mmcfg->size - 1));
+
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
                                            paddr_t addr, pci_sbdf_t *sbdf)
 {
diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h
index a97731657801..e1e5e6fe7491 100644
--- a/xen/arch/x86/include/asm/hvm/io.h
+++ b/xen/arch/x86/include/asm/hvm/io.h
@@ -156,6 +156,9 @@ void destroy_vpci_mmcfg(struct domain *d);
 /* Check if an address is between a MMCFG region for a domain. */
 bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
 
+/* Remove MMCFG regions from a given rangeset. */
+int vpci_subtract_mmcfg(const struct domain *d, struct rangeset *r);
+
 #endif /* __ASM_X86_HVM_IO_H__ */
 
 
diff --git a/xen/arch/x86/include/asm/setup.h b/xen/arch/x86/include/asm/setup.h
index 9a460e4db8f4..cd07d98101d8 100644
--- a/xen/arch/x86/include/asm/setup.h
+++ b/xen/arch/x86/include/asm/setup.h
@@ -37,6 +37,7 @@ void discard_initial_images(void);
 void *bootstrap_map(const module_t *mod);
 
 int xen_in_range(unsigned long mfn);
+int remove_xen_ranges(struct rangeset *r);
 
 extern uint8_t kbd_shift_flags;
 
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 897b7e92082e..ba6fe9afe5e6 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -2138,6 +2138,54 @@ int __hwdom_init xen_in_range(unsigned long mfn)
     return 0;
 }
 
+int __hwdom_init remove_xen_ranges(struct rangeset *r)
+{
+    paddr_t start, end;
+    int rc;
+
+    /* S3 resume code (and other real mode trampoline code) */
+    rc = rangeset_remove_range(r, PFN_DOWN(bootsym_phys(trampoline_start)),
+                               PFN_DOWN(bootsym_phys(trampoline_end)));
+    if ( rc )
+        return rc;
+
+    /*
+     * This needs to remain in sync with the uses of the same symbols in
+     * - __start_xen()
+     * - is_xen_fixed_mfn()
+     * - tboot_shutdown()
+     */
+    /* hypervisor .text + .rodata */
+    rc = rangeset_remove_range(r, PFN_DOWN(__pa(&_stext)),
+                               PFN_DOWN(__pa(&__2M_rodata_end)));
+    if ( rc )
+        return rc;
+
+    /* hypervisor .data + .bss */
+    if ( efi_boot_mem_unused(&start, &end) )
+    {
+        ASSERT(__pa(start) >= __pa(&__2M_rwdata_start));
+        rc = rangeset_remove_range(r, PFN_DOWN(__pa(&__2M_rwdata_start)),
+                                   PFN_DOWN(__pa(start)));
+        if ( rc )
+            return rc;
+        ASSERT(__pa(end) <= __pa(&__2M_rwdata_end));
+        rc = rangeset_remove_range(r, PFN_DOWN(__pa(end)),
+                                   PFN_DOWN(__pa(&__2M_rwdata_end)));
+        if ( rc )
+            return rc;
+    }
+    else
+    {
+        rc = rangeset_remove_range(r, PFN_DOWN(__pa(&__2M_rwdata_start)),
+                                   PFN_DOWN(__pa(&__2M_rwdata_end)));
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 static int __hwdom_init cf_check io_bitmap_cb(
     unsigned long s, unsigned long e, void *ctx)
 {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 59b0c7e980ca..fc5215a9dc40 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -370,6 +370,14 @@ static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
     return perms;
 }
 
+static int __hwdom_init cf_check map_subtract(unsigned long s, unsigned long e,
+                                              void *data)
+{
+    struct rangeset *map = data;
+
+    return rangeset_remove_range(map, s, e);
+}
+
 struct map_data {
     struct domain *d;
     unsigned int flush_flags;
@@ -533,6 +541,59 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
             goto commit;
     }
 
+    /* Remove any areas in-use by Xen. */
+    rc = remove_xen_ranges(map);
+    if ( rc )
+        panic("IOMMU failed to remove Xen ranges: %d\n", rc);
+
+    /* Remove any overlap with the Interrupt Address Range. */
+    rc = rangeset_remove_range(map, 0xfee00, 0xfeeff);
+    if ( rc )
+        panic("IOMMU failed to remove Interrupt Address Range: %d\n", rc);
+
+    /* If emulating IO-APIC(s) make sure the base address is unmapped. */
+    if ( has_vioapic(d) )
+    {
+        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
+        {
+            rc = rangeset_remove_singleton(map,
+                PFN_DOWN(domain_vioapic(d, i)->base_address));
+            if ( rc )
+                panic("IOMMU failed to remove IO-APIC: %d\n", rc);
+        }
+    }
+
+    if ( is_pv_domain(d) )
+    {
+        /*
+         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
+         * ones there (also for e.g. HPET in certain cases), so it should also
+         * have such established for IOMMUs.  Remove any read-only ranges here,
+         * since ranges in mmio_ro_ranges are already explicitly mapped below
+         * in read-only mode.
+         */
+        rc = rangeset_report_ranges(mmio_ro_ranges, 0, ~0UL, map_subtract, map);
+        if ( rc )
+            panic("IOMMU failed to remove read-only regions: %d\n", rc);
+    }
+
+    if ( has_vpci(d) )
+    {
+        /*
+         * TODO: runtime added MMCFG regions are not checked to make sure they
+         * don't overlap with already mapped regions, thus preventing trapping.
+         */
+        rc = vpci_subtract_mmcfg(d, map);
+        if ( rc )
+            panic("IOMMU unable to remove MMCFG areas: %d\n", rc);
+    }
+
+    /* Remove any regions past the last address addressable by the domain. */
+    rc = rangeset_remove_range(map, PFN_DOWN(1UL << domain_max_paddr_bits(d)),
+                               ~0UL);
+    if ( rc )
+        panic("IOMMU unable to remove unaddressable ranges: %d\n", rc);
+
     if ( iommu_verbose )
         printk(XENLOG_INFO "%pd: identity mappings for IOMMU:\n", d);
 
-- 
2.43.0




* [PATCH v4 5/6] x86/iommu: switch hwdom IOMMU to use a rangeset
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
                   ` (3 preceding siblings ...)
  2023-12-20 13:43 ` [PATCH v4 4/6] x86/iommu: remove regions not to be mapped Roger Pau Monne
@ 2023-12-20 13:43 ` Roger Pau Monne
  2023-12-20 13:43 ` [PATCH v4 6/6] x86/iommu: cleanup unused functions Roger Pau Monne
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Jan Beulich, Paul Durrant

The current loop that iterates from 0 to the maximum RAM address in order to
set up the IOMMU mappings is highly inefficient, and it will get worse as the
amount of RAM increases.  It also doesn't account for any reserved regions
past the last RAM address.

Instead of iterating over memory addresses, iterate over the memory map regions
and use a rangeset in order to keep track of which ranges need to be identity
mapped in the hardware domain physical address space.
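
In outline (a condensed sketch; the inclusive/reserved/strict option handling
and error checking are omitted), the per-address loop is replaced by a loop
over e820 entries feeding the rangeset:

    unsigned int i;

    for ( i = 0; i < e820.nr_map; i++ )
    {
        const struct e820entry entry = e820.map[i];

        if ( entry.type == E820_UNUSABLE )
            continue;
        rangeset_add_range(map, PFN_DOWN(entry.addr),
                           PFN_DOWN(entry.addr + entry.size - 1));
    }
    /* ... followed by the range subtraction and identity mapping
     * introduced in the previous patches. */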

On an AMD EPYC 7452 with 512GiB of RAM, the time to execute
arch_iommu_hwdom_init() in nanoseconds is:

x old
+ new
    N           Min           Max        Median           Avg        Stddev
x   5 2.2364154e+10  2.338244e+10 2.2474685e+10 2.2622409e+10 4.2949869e+08
+   5       1025012       1033036       1026188     1028276.2     3623.1194
Difference at 95.0% confidence
        -2.26214e+10 +/- 4.42931e+08
        -99.9955% +/- 9.05152e-05%
        (Student's t, pooled s = 3.03701e+08)

Execution time of arch_iommu_hwdom_init() goes down from ~22s to ~0.001s.

Note there's a change for HVM domains (ie: PVH dom0) that get switched to
create the p2m mappings using map_mmio_regions() instead of
p2m_add_identity_entry(), so that ranges can be mapped with a single function
call if possible.  Note that the interface of map_mmio_regions() doesn't
allow creating read-only mappings, but so far there are no such mappings
created for PVH dom0 in arch_iommu_hwdom_init().

No change intended in the resulting mappings that a hardware domain gets.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v3:
 - Remove unnecessary line wraps.

Changes since v2:
 - Simplify a bit the logic related to inclusive option, at the cost of making
   some no-op calls on some cases.

Changes since v1:
 - Split from bigger patch.
 - Remove unneeded default case.
---
 xen/drivers/passthrough/x86/iommu.c | 146 ++++++----------------------
 1 file changed, 30 insertions(+), 116 deletions(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index fc5215a9dc40..8e764e0afb31 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -300,76 +300,6 @@ void iommu_identity_map_teardown(struct domain *d)
     }
 }
 
-static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
-                                                 unsigned long pfn,
-                                                 unsigned long max_pfn)
-{
-    mfn_t mfn = _mfn(pfn);
-    unsigned int i, type, perms = IOMMUF_readable | IOMMUF_writable;
-
-    /*
-     * Set up 1:1 mapping for dom0. Default to include only conventional RAM
-     * areas and let RMRRs include needed reserved regions. When set, the
-     * inclusive mapping additionally maps in every pfn up to 4GB except those
-     * that fall in unusable ranges for PV Dom0.
-     */
-    if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
-        return 0;
-
-    switch ( type = page_get_ram_type(mfn) )
-    {
-    case RAM_TYPE_UNUSABLE:
-        return 0;
-
-    case RAM_TYPE_CONVENTIONAL:
-        if ( iommu_hwdom_strict )
-            return 0;
-        break;
-
-    default:
-        if ( type & RAM_TYPE_RESERVED )
-        {
-            if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
-                perms = 0;
-        }
-        else if ( is_hvm_domain(d) )
-            return 0;
-        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
-            perms = 0;
-    }
-
-    /* Check that it doesn't overlap with the Interrupt Address Range. */
-    if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
-        return 0;
-    /* ... or the IO-APIC */
-    if ( has_vioapic(d) )
-    {
-        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
-            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
-                return 0;
-    }
-    else if ( is_pv_domain(d) )
-    {
-        /*
-         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
-         * ones there (also for e.g. HPET in certain cases), so it should also
-         * have such established for IOMMUs.
-         */
-        if ( iomem_access_permitted(d, pfn, pfn) &&
-             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
-            perms = IOMMUF_readable;
-    }
-    /*
-     * ... or the PCIe MCFG regions.
-     * TODO: runtime added MMCFG regions are not checked to make sure they
-     * don't overlap with already mapped regions, thus preventing trapping.
-     */
-    if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
-        return 0;
-
-    return perms;
-}
-
 static int __hwdom_init cf_check map_subtract(unsigned long s, unsigned long e,
                                               void *data)
 {
@@ -455,8 +385,7 @@ static int __hwdom_init cf_check identity_map(unsigned long s, unsigned long e,
 
 void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
 {
-    unsigned long i, top, max_pfn, start, count;
-    unsigned int start_perms = 0;
+    unsigned int i;
     struct rangeset *map;
     struct map_data map_data = { .d = d };
     int rc;
@@ -487,58 +416,43 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
     if ( !map )
         panic("IOMMU init: unable to allocate rangeset\n");
 
-    max_pfn = (GB(4) >> PAGE_SHIFT) - 1;
-    top = max(max_pdx, pfn_to_pdx(max_pfn) + 1);
+    if ( iommu_hwdom_inclusive )
+    {
+        /* Add the whole range below 4GB, UNUSABLE regions will be removed. */
+        rc = rangeset_add_range(map, 0, PFN_DOWN(GB(4)) - 1);
+        if ( rc )
+            panic("IOMMU inclusive mappings can't be added: %d\n", rc);
+    }
 
-    for ( i = 0, start = 0, count = 0; i < top; )
+    for ( i = 0; i < e820.nr_map; i++ )
     {
-        unsigned long pfn = pdx_to_pfn(i);
-        unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);
+        const struct e820entry entry = e820.map[i];
 
-        if ( !perms )
-            /* nothing */;
-        else if ( paging_mode_translate(d) )
+        switch ( entry.type )
         {
-            int rc;
-
-            rc = p2m_add_identity_entry(d, pfn,
-                                        perms & IOMMUF_writable ? p2m_access_rw
-                                                                : p2m_access_r,
-                                        0);
+        case E820_UNUSABLE:
+            /* Only relevant for inclusive mode, otherwise this is a no-op. */
+            rc = rangeset_remove_range(map, PFN_DOWN(entry.addr),
+                                       PFN_DOWN(entry.addr + entry.size - 1));
             if ( rc )
-                printk(XENLOG_WARNING
-                       "%pd: identity mapping of %lx failed: %d\n",
-                       d, pfn, rc);
-        }
-        else if ( pfn != start + count || perms != start_perms )
-        {
-            long rc;
+                panic("IOMMU failed to remove unusable memory: %d\n", rc);
+            continue;
 
-        commit:
-            while ( (rc = iommu_map(d, _dfn(start), _mfn(start), count,
-                                    start_perms | IOMMUF_preempt,
-                                    &map_data.flush_flags)) > 0 )
-            {
-                start += rc;
-                count -= rc;
-                process_pending_softirqs();
-            }
-            if ( rc )
-                printk(XENLOG_WARNING
-                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %ld\n",
-                       d, start, start + count, rc);
-            start = pfn;
-            count = 1;
-            start_perms = perms;
-        }
-        else
-            ++count;
+        case E820_RESERVED:
+            if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
+                continue;
+            break;
 
-        if ( !(++i & 0xfffff) )
-            process_pending_softirqs();
+        case E820_RAM:
+            if ( iommu_hwdom_strict )
+                continue;
+            break;
+        }
 
-        if ( i == top && count )
-            goto commit;
+        rc = rangeset_add_range(map, PFN_DOWN(entry.addr),
+                                PFN_DOWN(entry.addr + entry.size - 1));
+        if ( rc )
+            panic("IOMMU failed to add identity range: %d\n", rc);
     }
 
     /* Remove any areas in-use by Xen. */
-- 
2.43.0




* [PATCH v4 6/6] x86/iommu: cleanup unused functions
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
                   ` (4 preceding siblings ...)
  2023-12-20 13:43 ` [PATCH v4 5/6] x86/iommu: switch hwdom IOMMU to use a rangeset Roger Pau Monne
@ 2023-12-20 13:43 ` Roger Pau Monne
  2023-12-21 10:12 ` [PATCH] x86/iommu: add CHANGELOG entry for hwdom setup time improvements Roger Pau Monne
  2024-01-09 11:38 ` [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Jan Beulich
  7 siblings, 0 replies; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-20 13:43 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monne, Paul Durrant, Jan Beulich, Andrew Cooper,
	Wei Liu, Lukasz Hawrylko, Daniel P. Smith, Mateusz Mówka

Remove xen_in_range() and vpci_is_mmcfg_address() now that they are unused.

Adjust comments to point to the new functions that replace the existing ones.

No functional change.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v2:
 - Do remove vpci_is_mmcfg_address().
---
Can be squashed with the previous patch if desired, split as a separate patch
for clarity.
---
 xen/arch/x86/hvm/io.c             |  5 ---
 xen/arch/x86/include/asm/hvm/io.h |  3 --
 xen/arch/x86/include/asm/setup.h  |  1 -
 xen/arch/x86/setup.c              | 53 ++-----------------------------
 xen/arch/x86/tboot.c              |  2 +-
 5 files changed, 3 insertions(+), 61 deletions(-)

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a42854c52b65..06283b41c463 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -364,11 +364,6 @@ static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
     return NULL;
 }
 
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
-{
-    return vpci_mmcfg_find(d, addr);
-}
-
 int __hwdom_init vpci_subtract_mmcfg(const struct domain *d, struct rangeset *r)
 {
     const struct hvm_mmcfg *mmcfg;
diff --git a/xen/arch/x86/include/asm/hvm/io.h b/xen/arch/x86/include/asm/hvm/io.h
index e1e5e6fe7491..24d1b6134f02 100644
--- a/xen/arch/x86/include/asm/hvm/io.h
+++ b/xen/arch/x86/include/asm/hvm/io.h
@@ -153,9 +153,6 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
 /* Destroy tracked MMCFG areas. */
 void destroy_vpci_mmcfg(struct domain *d);
 
-/* Check if an address is between a MMCFG region for a domain. */
-bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr);
-
 /* Remove MMCFG regions from a given rangeset. */
 int vpci_subtract_mmcfg(const struct domain *d, struct rangeset *r);
 
diff --git a/xen/arch/x86/include/asm/setup.h b/xen/arch/x86/include/asm/setup.h
index cd07d98101d8..1ced1299c77b 100644
--- a/xen/arch/x86/include/asm/setup.h
+++ b/xen/arch/x86/include/asm/setup.h
@@ -36,7 +36,6 @@ unsigned long initial_images_nrpages(nodeid_t node);
 void discard_initial_images(void);
 void *bootstrap_map(const module_t *mod);
 
-int xen_in_range(unsigned long mfn);
 int remove_xen_ranges(struct rangeset *r);
 
 extern uint8_t kbd_shift_flags;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index ba6fe9afe5e6..f52fc3f63cb0 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1343,7 +1343,7 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
         relocated = true;
 
         /*
-         * This needs to remain in sync with xen_in_range() and the
+         * This needs to remain in sync with remove_xen_ranges() and the
          * respective reserve_e820_ram() invocation below. No need to
          * query efi_boot_mem_unused() here, though.
          */
@@ -1495,7 +1495,7 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
     if ( using_2M_mapping() )
         efi_boot_mem_unused(NULL, NULL);
 
-    /* This needs to remain in sync with xen_in_range(). */
+    /* This needs to remain in sync with remove_xen_ranges(). */
     if ( efi_boot_mem_unused(&eb_start, &eb_end) )
     {
         reserve_e820_ram(&boot_e820, __pa(_stext), __pa(eb_start));
@@ -2089,55 +2089,6 @@ void arch_get_xen_caps(xen_capabilities_info_t *info)
     }
 }
 
-int __hwdom_init xen_in_range(unsigned long mfn)
-{
-    paddr_t start, end;
-    int i;
-
-    enum { region_s3, region_ro, region_rw, region_bss, nr_regions };
-    static struct {
-        paddr_t s, e;
-    } xen_regions[nr_regions] __hwdom_initdata;
-
-    /* initialize first time */
-    if ( !xen_regions[0].s )
-    {
-        /* S3 resume code (and other real mode trampoline code) */
-        xen_regions[region_s3].s = bootsym_phys(trampoline_start);
-        xen_regions[region_s3].e = bootsym_phys(trampoline_end);
-
-        /*
-         * This needs to remain in sync with the uses of the same symbols in
-         * - __start_xen() (above)
-         * - is_xen_fixed_mfn()
-         * - tboot_shutdown()
-         */
-
-        /* hypervisor .text + .rodata */
-        xen_regions[region_ro].s = __pa(&_stext);
-        xen_regions[region_ro].e = __pa(&__2M_rodata_end);
-        /* hypervisor .data + .bss */
-        xen_regions[region_rw].s = __pa(&__2M_rwdata_start);
-        xen_regions[region_rw].e = __pa(&__2M_rwdata_end);
-        if ( efi_boot_mem_unused(&start, &end) )
-        {
-            ASSERT(__pa(start) >= xen_regions[region_rw].s);
-            ASSERT(__pa(end) <= xen_regions[region_rw].e);
-            xen_regions[region_rw].e = __pa(start);
-            xen_regions[region_bss].s = __pa(end);
-            xen_regions[region_bss].e = __pa(&__2M_rwdata_end);
-        }
-    }
-
-    start = (paddr_t)mfn << PAGE_SHIFT;
-    end = start + PAGE_SIZE;
-    for ( i = 0; i < nr_regions; i++ )
-        if ( (start < xen_regions[i].e) && (end > xen_regions[i].s) )
-            return 1;
-
-    return 0;
-}
-
 int __hwdom_init remove_xen_ranges(struct rangeset *r)
 {
     paddr_t start, end;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 86c4c22cacb8..4c254b4e34b4 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -321,7 +321,7 @@ void tboot_shutdown(uint32_t shutdown_type)
 
         /*
          * Xen regions for tboot to MAC. This needs to remain in sync with
-         * xen_in_range().
+         * remove_xen_ranges().
          */
         g_tboot_shared->num_mac_regions = 3;
         /* S3 resume code (and other real mode trampoline code) */
-- 
2.43.0




* Re: [PATCH v4 2/6] amd-vi: set IOMMU page table levels based on guest reported paddr width
  2023-12-20 13:43 ` [PATCH v4 2/6] amd-vi: set IOMMU page table levels based on guest reported paddr width Roger Pau Monne
@ 2023-12-21  8:13   ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2023-12-21  8:13 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, xen-devel

On 20.12.2023 14:43, Roger Pau Monne wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -359,21 +359,17 @@ int __read_mostly amd_iommu_min_paging_mode = 1;
>  static int cf_check amd_iommu_domain_init(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> +    int pglvl = amd_iommu_get_paging_mode(
> +                1UL << (domain_max_paddr_bits(d) - PAGE_SHIFT));

Nit: Considering the "pending" open parenthesis, I think this line
needs indenting one level further. I'll make the adjustment while
committing, unless I hear objections pretty soon.
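
(For illustration, the adjusted indentation would presumably be:

    int pglvl = amd_iommu_get_paging_mode(
                    1UL << (domain_max_paddr_bits(d) - PAGE_SHIFT));

i.e. the continuation line moved one level to the right.)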

Jan



* [PATCH] x86/iommu: add CHANGELOG entry for hwdom setup time improvements
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
                   ` (5 preceding siblings ...)
  2023-12-20 13:43 ` [PATCH v4 6/6] x86/iommu: cleanup unused functions Roger Pau Monne
@ 2023-12-21 10:12 ` Roger Pau Monne
  2023-12-21 10:23   ` Jan Beulich
  2024-01-09 11:38 ` [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Jan Beulich
  7 siblings, 1 reply; 13+ messages in thread
From: Roger Pau Monne @ 2023-12-21 10:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne, Oleksii Kurochko, Community Manager

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 CHANGELOG.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5ee5d41fc933..52484c047bd1 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 ### Changed
  - Changed flexible array definitions in public I/O interface headers to not
    use "1" as the number of array elements.
+ - On x86:
+   - Reduce IOMMU setup time for hardware domain.
 
 ### Added
  - On x86:
-- 
2.43.0




* Re: [PATCH] x86/iommu: add CHANGELOG entry for hwdom setup time improvements
  2023-12-21 10:12 ` [PATCH] x86/iommu: add CHANGELOG entry for hwdom setup time improvements Roger Pau Monne
@ 2023-12-21 10:23   ` Jan Beulich
  2023-12-21 11:24     ` Roger Pau Monné
  0 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2023-12-21 10:23 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Oleksii Kurochko, Community Manager, xen-devel

On 21.12.2023 11:12, Roger Pau Monne wrote:
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  CHANGELOG.md | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 5ee5d41fc933..52484c047bd1 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>  ### Changed
>   - Changed flexible array definitions in public I/O interface headers to not
>     use "1" as the number of array elements.
> + - On x86:
> +   - Reduce IOMMU setup time for hardware domain.
>  
>  ### Added
>   - On x86:

I'm a little puzzled: Isn't this more like patch 7/6 of the v4 series
(or possibly patch 5.5/6), which hasn't gone in yet?

Jan



* Re: [PATCH] x86/iommu: add CHANGELOG entry for hwdom setup time improvements
  2023-12-21 10:23   ` Jan Beulich
@ 2023-12-21 11:24     ` Roger Pau Monné
  0 siblings, 0 replies; 13+ messages in thread
From: Roger Pau Monné @ 2023-12-21 11:24 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Oleksii Kurochko, Community Manager, xen-devel

On Thu, Dec 21, 2023 at 11:23:49AM +0100, Jan Beulich wrote:
> On 21.12.2023 11:12, Roger Pau Monne wrote:
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  CHANGELOG.md | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/CHANGELOG.md b/CHANGELOG.md
> > index 5ee5d41fc933..52484c047bd1 100644
> > --- a/CHANGELOG.md
> > +++ b/CHANGELOG.md
> > @@ -9,6 +9,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
> >  ### Changed
> >   - Changed flexible array definitions in public I/O interface headers to not
> >     use "1" as the number of array elements.
> > + - On x86:
> > +   - Reduce IOMMU setup time for hardware domain.
> >  
> >  ### Added
> >   - On x86:
> 
> I'm a little puzzled: Isn't this more like patch 7/6 of the v4 series
> (or possibly patch 5.5/6), which hasn't gone in yet?

Yes, that's why I've sent it as a reply to the cover letter.  I
assumed it was clear enough that it's only supposed to go in after the
rest of the series.  Should have been 7/7, but I forgot about it.

Regards, Roger.



* Re: [PATCH v4 3/6] x86/iommu: introduce a rangeset to perform hwdom IOMMU setup
  2023-12-20 13:43 ` [PATCH v4 3/6] x86/iommu: introduce a rangeset to perform hwdom IOMMU setup Roger Pau Monne
@ 2024-01-09 11:29   ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2024-01-09 11:29 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Paul Durrant, xen-devel

On 20.12.2023 14:43, Roger Pau Monne wrote:
> This change just introduces the boilerplate code in order to use a rangeset
> when setting up the hardware domain IOMMU mappings.  The rangeset is never
> populated in this patch, so it's a non-functional change as far as the mappings
> the domain gets established.
> 
> Note there will be a change for HVM domains (ie: PVH dom0) when the code
> introduced here gets used: the p2m mappings will be established using
> map_mmio_regions() instead of p2m_add_identity_entry(), so that ranges can be
> mapped with a single function call if possible.  Note that the interface of
> map_mmio_regions() doesn't allow creating read-only mappings, but so far there
> are no such mappings created for PVH dom0 in arch_iommu_hwdom_init().
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>





* Re: [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU
  2023-12-20 13:43 [PATCH v4 0/6] x86/iommu: improve setup time of hwdom IOMMU Roger Pau Monne
                   ` (6 preceding siblings ...)
  2023-12-21 10:12 ` [PATCH] x86/iommu: add CHANGELOG entry for hwdom setup time improvements Roger Pau Monne
@ 2024-01-09 11:38 ` Jan Beulich
  7 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2024-01-09 11:38 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Wei Liu, Lukasz Hawrylko, Daniel P. Smith,
	Mateusz Mówka, Roger Pau Monne, xen-devel

On 20.12.2023 14:43, Roger Pau Monne wrote:
> Hello,
> 
> The aim of the series is to reduce boot time setup of IOMMU page tables
> for dom0.
> 
> First and second patches are a pre-req, as further patches can end up
> attempting to create maps above the max RAM address, and hence without
> properly setting the IOMMU page tables levels those attempts to map
> would fail.
> 
> Last 4 patches rework the hardware domain IOMMU setup to use a rangeset
> instead of iterating over all addresses up to the max RAM page.  See
> patch 5/6 for performance figures.
> 
> Thanks, Roger.
> 
> Roger Pau Monne (6):
>   x86/p2m: move and rename paging_max_paddr_bits()
>   amd-vi: set IOMMU page table levels based on guest reported paddr
>     width
>   x86/iommu: introduce a rangeset to perform hwdom IOMMU setup
>   x86/iommu: remove regions not to be mapped
>   x86/iommu: switch hwdom IOMMU to use a rangeset
>   x86/iommu: cleanup unused functions
> 
>  xen/arch/x86/cpu-policy.c                   |   2 +-
>  xen/arch/x86/domain.c                       |  21 ++
>  xen/arch/x86/hvm/io.c                       |  15 +-

Paul,

for the changes to this file, any chance of an ack for patches 4 and 6?

Thanks, Jan

>  xen/arch/x86/include/asm/domain.h           |   3 +
>  xen/arch/x86/include/asm/hvm/io.h           |   4 +-
>  xen/arch/x86/include/asm/paging.h           |  22 --
>  xen/arch/x86/include/asm/setup.h            |   2 +-
>  xen/arch/x86/setup.c                        |  81 +++---
>  xen/arch/x86/tboot.c                        |   2 +-
>  xen/drivers/passthrough/amd/pci_amd_iommu.c |  20 +-
>  xen/drivers/passthrough/x86/iommu.c         | 271 +++++++++++++-------
>  11 files changed, 262 insertions(+), 181 deletions(-)
> 


