* [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables
From: Jan Beulich @ 2022-05-27 11:10 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

For a long time we've been rather inefficient with IOMMU page table
management when not sharing page tables, i.e. in particular for PV (and
specifically also for PV Dom0) and for AMD (where nowadays we never
share page tables). Until about 3.5 years ago the AMD code had logic to
un-shatter page mappings, but that logic was ripped out for being buggy
(XSA-275 plus follow-on changes).

This series enables use of large pages in AMD and Intel (VT-d) code;
Arm is presently not in need of any enabling as pagetables are always
shared there. It also augments PV Dom0 creation with suitable explicit
IOMMU mapping calls to facilitate use of large pages there. Depending
on the amount of memory handed to Dom0 this improves booting time
(latency until Dom0 actually starts) quite a bit; subsequent shattering
of some of the large pages may of course consume some of the saved time.

Known fallout has been spelled out here:
https://lists.xen.org/archives/html/xen-devel/2021-08/msg00781.html

Perhaps unsurprisingly, a few seemingly unrelated changes are also
included here, which along the way I came to consider necessary or at
least desirable (in part because they had been in need of adjustment
for a long time).

See individual patches for details on the v5 changes.

01: IOMMU/x86: restrict IO-APIC mappings for PV Dom0
02: IOMMU/x86: perform PV Dom0 mappings in batches
03: IOMMU/x86: support freeing of pagetables
04: AMD/IOMMU: allow use of superpage mappings
05: VT-d: allow use of superpage mappings
06: IOMMU: fold flush-all hook into "flush one"
07: x86: introduce helper for recording degree of contiguity in page tables
08: IOMMU/x86: prefill newly allocated page tables
09: AMD/IOMMU: free all-empty page tables
10: VT-d: free all-empty page tables
11: AMD/IOMMU: replace all-contiguous page tables by superpage mappings
12: VT-d: replace all-contiguous page tables by superpage mappings
13: IOMMU/x86: add perf counters for page table splitting / coalescing
14: VT-d: fold iommu_flush_iotlb{,_pages}()
15: VT-d: fold dma_pte_clear_one() into its only caller

While not directly related (except that making this mode work properly
here was a fair part of the overall work), on this occasion I'd also
like to renew my proposal to make "iommu=dom0-strict" the default going
forward. It already is not only the default, but the only possible
mode, for PVH Dom0.

Jan




* [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
From: Jan Beulich @ 2022-05-27 11:12 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

While this is already the case for PVH, there's no reason to treat PV
differently here, even though the addresses are taken from another
source in that case. The one difference is that, to match CPU-side
mappings, we permit r/o ones by default. This then also means we now
deal consistently with IO-APICs whose MMIO is or is not covered by E820
reserved regions.
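
In terms of the reworked hwdom_iommu_map() below, the per-pfn outcome
can be summarized like this (a condensed restatement for illustration
only; the authoritative logic is in the diff):

    unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);

    /*
     * perms == 0                                 -> leave pfn unmapped
     * perms == IOMMUF_readable                   -> map r/o (MMIO which PV
     *                                               Dom0 may only map r/o on
     *                                               the CPU side, e.g. certain
     *                                               IO-APIC or HPET pages)
     * perms == IOMMUF_readable | IOMMUF_writable -> map r/w (the default)
     */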

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v5: Extend to also cover e.g. HPET, which in turn means explicitly
    excluding PCI MMCFG ranges.
[integrated] v1: Integrate into series.
[standalone] v2: Keep IOMMU mappings in sync with CPU ones.

--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -13,6 +13,7 @@
  */
 
 #include <xen/sched.h>
+#include <xen/iocap.h>
 #include <xen/iommu.h>
 #include <xen/paging.h>
 #include <xen/guest_access.h>
@@ -275,12 +276,12 @@ void iommu_identity_map_teardown(struct
     }
 }
 
-static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
-                                         unsigned long pfn,
-                                         unsigned long max_pfn)
+static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
+                                                 unsigned long pfn,
+                                                 unsigned long max_pfn)
 {
     mfn_t mfn = _mfn(pfn);
-    unsigned int i, type;
+    unsigned int i, type, perms = IOMMUF_readable | IOMMUF_writable;
 
     /*
      * Set up 1:1 mapping for dom0. Default to include only conventional RAM
@@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
      * that fall in unusable ranges for PV Dom0.
      */
     if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
-        return false;
+        return 0;
 
     switch ( type = page_get_ram_type(mfn) )
     {
     case RAM_TYPE_UNUSABLE:
-        return false;
+        return 0;
 
     case RAM_TYPE_CONVENTIONAL:
         if ( iommu_hwdom_strict )
-            return false;
+            return 0;
         break;
 
     default:
         if ( type & RAM_TYPE_RESERVED )
         {
             if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
-                return false;
+                perms = 0;
         }
-        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
-            return false;
+        else if ( is_hvm_domain(d) )
+            return 0;
+        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
+            perms = 0;
     }
 
     /* Check that it doesn't overlap with the Interrupt Address Range. */
     if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
-        return false;
+        return 0;
     /* ... or the IO-APIC */
-    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
-        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
-            return false;
+    if ( has_vioapic(d) )
+    {
+        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
+            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
+                return 0;
+    }
+    else if ( is_pv_domain(d) )
+    {
+        /*
+         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
+         * ones there (also for e.g. HPET in certain cases), so it should also
+         * have such established for IOMMUs.
+         */
+        if ( iomem_access_permitted(d, pfn, pfn) &&
+             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
+            perms = IOMMUF_readable;
+    }
     /*
      * ... or the PCIe MCFG regions.
      * TODO: runtime added MMCFG regions are not checked to make sure they
      * don't overlap with already mapped regions, thus preventing trapping.
      */
     if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
-        return false;
+        return 0;
+    else if ( is_pv_domain(d) )
+    {
+        /*
+         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
+         * These shouldn't be accessed via DMA by devices.
+         */
+        const struct acpi_mcfg_allocation *cfg = pci_mmcfg_config;
+
+        for ( i = 0; i < pci_mmcfg_config_num; ++i, ++cfg )
+            if ( pfn >= PFN_DOWN(cfg->address) + PCI_BDF(cfg->start_bus_number,
+                                                         0, 0) &&
+                 pfn <= PFN_DOWN(cfg->address) + PCI_BDF(cfg->end_bus_number,
+                                                         ~0, ~0))
+                return 0;
+    }
 
-    return true;
+    return perms;
 }
 
 void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
@@ -368,15 +400,19 @@ void __hwdom_init arch_iommu_hwdom_init(
     for ( ; i < top; i++ )
     {
         unsigned long pfn = pdx_to_pfn(i);
+        unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);
         int rc;
 
-        if ( !hwdom_iommu_map(d, pfn, max_pfn) )
+        if ( !perms )
             rc = 0;
         else if ( paging_mode_translate(d) )
-            rc = p2m_add_identity_entry(d, pfn, p2m_access_rw, 0);
+            rc = p2m_add_identity_entry(d, pfn,
+                                        perms & IOMMUF_writable ? p2m_access_rw
+                                                                : p2m_access_r,
+                                        0);
         else
             rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
-                           IOMMUF_readable | IOMMUF_writable, &flush_flags);
+                           perms, &flush_flags);
 
         if ( rc )
             printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",




* [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches
From: Jan Beulich @ 2022-05-27 11:12 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné, Wei Liu

For large page mappings to be easily usable (i.e. in particular without
un-shattering of smaller page mappings) and for mapping operations to
then also be more efficient, pass batches of Dom0 memory to iommu_map().
In dom0_construct_pv() and its helpers (covering strict mode) this
additionally requires establishing the type of those pages (albeit with
zero type references).

Establishing PGT_writable_page | PGT_validated earlier requires the
existing places where this gets done (through get_page_and_type()) to
be updated: for pages which actually have a mapping, the type refcount
needs to be 1.

There is actually a related bug that gets fixed here as a side effect:
Typically the last L1 table would get marked as such only after
get_page_and_type(..., PGT_writable_page). While this is fine as far as
refcounting goes, the page did remain mapped in the IOMMU in this case
(when "iommu=dom0-strict").
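
For illustration, the batching pattern used in arch_iommu_hwdom_init()
below boils down to the following (simplified sketch only: first_pfn /
top_pfn stand in for the real loop bounds, the translated-guest path,
error logging, and softirq processing are omitted, and unmapped pfns
are treated slightly differently than in the actual code; d, max_pfn,
and flush_flags are as in the real function):

    unsigned long pfn, start = 0, count = 0;
    unsigned int start_perms = 0;

    for ( pfn = first_pfn; pfn < top_pfn; ++pfn )
    {
        unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);

        if ( pfn == start + count && perms == start_perms )
            ++count;                       /* extend the current run */
        else
        {
            if ( count && start_perms )    /* commit the previous run */
                iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
                          &flush_flags);
            start = pfn;                   /* start a new run */
            start_perms = perms;
            count = 1;
        }
    }
    if ( count && start_perms )            /* commit the final run */
        iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
                  &flush_flags);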

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Subsequently p2m_add_identity_entry() may want to also gain an order
parameter, for arch_iommu_hwdom_init() to use. While this only affects
non-RAM regions, systems typically have 2-16Mb of reserved space
immediately below 4Gb, which hence could be mapped more efficiently.

Eventually we may want to overhaul this logic to use a rangeset based
approach instead, punching holes into originally uniformly large-page-
mapped regions. Doing so right here would first and foremost be yet more
of a change.

Installing zero-ref writable types has in fact shown (observed while
putting together the change) that, despite the intention of the XSA-288
changes (affecting DomU-s only), for Dom0 a number of sufficiently
ordinary pages (at the very least initrd and P2M ones, as well as pages
that are part of the initial allocation but not part of the initial
mapping) were still starting out as PGT_none, meaning that they would
have gained IOMMU mappings only the first time they got mapped
writably. Consequently an open question is whether iommu_memory_setup()
should set the pages to PGT_writable_page independent of
need_iommu_pt_sync().

I didn't think I needed to address the bug mentioned in the description
in a separate (prereq) patch, but if others disagree I could certainly
break out that part (which would then need to use iommu_legacy_unmap()
first).

Note that 4k P2M pages don't get (pre-)mapped in setup_pv_physmap():
They'll end up mapped via the later get_page_and_type().

As to the way these refs get installed: I've chosen to avoid the more
expensive {get,put}_page_and_type(), favoring putting the intended type
in place directly. I guess I could be convinced to avoid this bypassing
of the actual logic; I merely think it's unnecessarily expensive.

Note also that strictly speaking the iommu_iotlb_flush_all() here (as
well as the pre-existing one in arch_iommu_hwdom_init()) shouldn't be
needed: Actual hooking up (AMD) or enabling of translation (VT-d)
occurs only afterwards anyway, so nothing can have made it into TLBs
just yet.
---
v3: Fold iommu_map() into (the now renamed) iommu_memory_setup(). Move
    iommu_unmap() into mark_pv_pt_pages_rdonly(). Adjust (split) log
    message in arch_iommu_hwdom_init().

--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -46,7 +46,8 @@ void __init dom0_update_physmap(bool com
 static __init void mark_pv_pt_pages_rdonly(struct domain *d,
                                            l4_pgentry_t *l4start,
                                            unsigned long vpt_start,
-                                           unsigned long nr_pt_pages)
+                                           unsigned long nr_pt_pages,
+                                           unsigned int *flush_flags)
 {
     unsigned long count;
     struct page_info *page;
@@ -71,6 +72,14 @@ static __init void mark_pv_pt_pages_rdon
         ASSERT((page->u.inuse.type_info & PGT_type_mask) <= PGT_root_page_table);
         ASSERT(!(page->u.inuse.type_info & ~(PGT_type_mask | PGT_pae_xen_l2)));
 
+        /*
+         * Page table pages need to be removed from the IOMMU again in case
+         * iommu_memory_setup() ended up mapping them.
+         */
+        if ( need_iommu_pt_sync(d) &&
+             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, flush_flags) )
+            BUG();
+
         /* Read-only mapping + PGC_allocated + page-table page. */
         page->count_info         = PGC_allocated | 3;
         page->u.inuse.type_info |= PGT_validated | 1;
@@ -107,11 +116,43 @@ static __init void mark_pv_pt_pages_rdon
     unmap_domain_page(pl3e);
 }
 
+static void __init iommu_memory_setup(struct domain *d, const char *what,
+                                      struct page_info *page, unsigned long nr,
+                                      unsigned int *flush_flags)
+{
+    int rc;
+    mfn_t mfn = page_to_mfn(page);
+
+    if ( !need_iommu_pt_sync(d) )
+        return;
+
+    rc = iommu_map(d, _dfn(mfn_x(mfn)), mfn, nr,
+                   IOMMUF_readable | IOMMUF_writable, flush_flags);
+    if ( rc )
+    {
+        printk(XENLOG_ERR "pre-mapping %s MFN [%lx,%lx) into IOMMU failed: %d\n",
+               what, mfn_x(mfn), mfn_x(mfn) + nr, rc);
+        return;
+    }
+
+    /*
+     * For successfully established IOMMU mappings the type of the page(s)
+     * needs to match (for _get_page_type() to unmap upon type change). Set
+     * the page(s) to writable with no type ref.
+     */
+    for ( ; nr--; ++page )
+    {
+        ASSERT(!page->u.inuse.type_info);
+        page->u.inuse.type_info = PGT_writable_page | PGT_validated;
+    }
+}
+
 static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
                                     unsigned long v_start, unsigned long v_end,
                                     unsigned long vphysmap_start,
                                     unsigned long vphysmap_end,
-                                    unsigned long nr_pages)
+                                    unsigned long nr_pages,
+                                    unsigned int *flush_flags)
 {
     struct page_info *page = NULL;
     l4_pgentry_t *pl4e, *l4start = map_domain_page(_mfn(pgtbl_pfn));
@@ -177,6 +218,10 @@ static __init void setup_pv_physmap(stru
                                              L3_PAGETABLE_SHIFT - PAGE_SHIFT,
                                              MEMF_no_scrub)) != NULL )
             {
+                iommu_memory_setup(d, "P2M 1G", page,
+                                   SUPERPAGE_PAGES * SUPERPAGE_PAGES,
+                                   flush_flags);
+
                 *pl3e = l3e_from_page(page, L1_PROT|_PAGE_DIRTY|_PAGE_PSE);
                 vphysmap_start += 1UL << L3_PAGETABLE_SHIFT;
                 continue;
@@ -203,6 +248,9 @@ static __init void setup_pv_physmap(stru
                                              L2_PAGETABLE_SHIFT - PAGE_SHIFT,
                                              MEMF_no_scrub)) != NULL )
             {
+                iommu_memory_setup(d, "P2M 2M", page, SUPERPAGE_PAGES,
+                                   flush_flags);
+
                 *pl2e = l2e_from_page(page, L1_PROT|_PAGE_DIRTY|_PAGE_PSE);
                 vphysmap_start += 1UL << L2_PAGETABLE_SHIFT;
                 continue;
@@ -311,6 +359,7 @@ int __init dom0_construct_pv(struct doma
     unsigned long initrd_pfn = -1, initrd_mfn = 0;
     unsigned long count;
     struct page_info *page = NULL;
+    unsigned int flush_flags = 0;
     start_info_t *si;
     struct vcpu *v = d->vcpu[0];
     void *image_base = bootstrap_map(image);
@@ -573,6 +622,9 @@ int __init dom0_construct_pv(struct doma
                     BUG();
         }
         initrd->mod_end = 0;
+
+        iommu_memory_setup(d, "initrd", mfn_to_page(_mfn(initrd_mfn)),
+                           PFN_UP(initrd_len), &flush_flags);
     }
 
     printk("PHYSICAL MEMORY ARRANGEMENT:\n"
@@ -606,6 +658,13 @@ int __init dom0_construct_pv(struct doma
 
     process_pending_softirqs();
 
+    /*
+     * Map the full range here and then punch holes for page tables
+     * alongside marking them as such in mark_pv_pt_pages_rdonly().
+     */
+    iommu_memory_setup(d, "init-alloc", mfn_to_page(_mfn(alloc_spfn)),
+                       alloc_epfn - alloc_spfn, &flush_flags);
+
     mpt_alloc = (vpt_start - v_start) + pfn_to_paddr(alloc_spfn);
     if ( vinitrd_start )
         mpt_alloc -= PAGE_ALIGN(initrd_len);
@@ -690,7 +749,8 @@ int __init dom0_construct_pv(struct doma
         l1tab++;
 
         page = mfn_to_page(_mfn(mfn));
-        if ( !page->u.inuse.type_info &&
+        if ( (!page->u.inuse.type_info ||
+              page->u.inuse.type_info == (PGT_writable_page | PGT_validated)) &&
              !get_page_and_type(page, d, PGT_writable_page) )
             BUG();
     }
@@ -719,7 +779,7 @@ int __init dom0_construct_pv(struct doma
     }
 
     /* Pages that are part of page tables must be read only. */
-    mark_pv_pt_pages_rdonly(d, l4start, vpt_start, nr_pt_pages);
+    mark_pv_pt_pages_rdonly(d, l4start, vpt_start, nr_pt_pages, &flush_flags);
 
     /* Mask all upcalls... */
     for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
@@ -794,7 +854,7 @@ int __init dom0_construct_pv(struct doma
     {
         pfn = pagetable_get_pfn(v->arch.guest_table);
         setup_pv_physmap(d, pfn, v_start, v_end, vphysmap_start, vphysmap_end,
-                         nr_pages);
+                         nr_pages, &flush_flags);
     }
 
     /* Write the phys->machine and machine->phys table entries. */
@@ -825,7 +885,9 @@ int __init dom0_construct_pv(struct doma
         if ( get_gpfn_from_mfn(mfn) >= count )
         {
             BUG_ON(compat);
-            if ( !page->u.inuse.type_info &&
+            if ( (!page->u.inuse.type_info ||
+                  page->u.inuse.type_info == (PGT_writable_page |
+                                              PGT_validated)) &&
                  !get_page_and_type(page, d, PGT_writable_page) )
                 BUG();
 
@@ -841,8 +903,12 @@ int __init dom0_construct_pv(struct doma
 #endif
     while ( pfn < nr_pages )
     {
-        if ( (page = alloc_chunk(d, nr_pages - domain_tot_pages(d))) == NULL )
+        count = domain_tot_pages(d);
+        if ( (page = alloc_chunk(d, nr_pages - count)) == NULL )
             panic("Not enough RAM for DOM0 reservation\n");
+
+        iommu_memory_setup(d, "chunk", page, domain_tot_pages(d) - count,
+                           &flush_flags);
         while ( pfn < domain_tot_pages(d) )
         {
             mfn = mfn_x(page_to_mfn(page));
@@ -857,6 +923,10 @@ int __init dom0_construct_pv(struct doma
         }
     }
 
+    /* Use while() to avoid compiler warning. */
+    while ( iommu_iotlb_flush_all(d, flush_flags) )
+        break;
+
     if ( initrd_len != 0 )
     {
         si->mod_start = vinitrd_start ?: initrd_pfn;
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -363,8 +363,8 @@ static unsigned int __hwdom_init hwdom_i
 
 void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
 {
-    unsigned long i, top, max_pfn;
-    unsigned int flush_flags = 0;
+    unsigned long i, top, max_pfn, start, count;
+    unsigned int flush_flags = 0, start_perms = 0;
 
     BUG_ON(!is_hardware_domain(d));
 
@@ -395,9 +395,9 @@ void __hwdom_init arch_iommu_hwdom_init(
      * First Mb will get mapped in one go by pvh_populate_p2m(). Avoid
      * setting up potentially conflicting mappings here.
      */
-    i = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
+    start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
 
-    for ( ; i < top; i++ )
+    for ( i = start, count = 0; i < top; )
     {
         unsigned long pfn = pdx_to_pfn(i);
         unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);
@@ -406,20 +406,41 @@ void __hwdom_init arch_iommu_hwdom_init(
         if ( !perms )
             rc = 0;
         else if ( paging_mode_translate(d) )
+        {
             rc = p2m_add_identity_entry(d, pfn,
                                         perms & IOMMUF_writable ? p2m_access_rw
                                                                 : p2m_access_r,
                                         0);
+            if ( rc )
+                printk(XENLOG_WARNING
+                       "%pd: identity mapping of %lx failed: %d\n",
+                       d, pfn, rc);
+        }
+        else if ( pfn != start + count || perms != start_perms )
+        {
+        commit:
+            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
+                           &flush_flags);
+            if ( rc )
+                printk(XENLOG_WARNING
+                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
+                       d, pfn, pfn + count, rc);
+            SWAP(start, pfn);
+            start_perms = perms;
+            count = 1;
+        }
         else
-            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
-                           perms, &flush_flags);
+        {
+            ++count;
+            rc = 0;
+        }
 
-        if ( rc )
-            printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",
-                   d, !paging_mode_translate(d) ? "IOMMU " : "", pfn, rc);
 
-        if (!(i & 0xfffff))
+        if ( !(++i & 0xfffff) )
             process_pending_softirqs();
+
+        if ( i == top && count )
+            goto commit;
     }
 
     /* Use if to avoid compiler warning */




* [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
From: Jan Beulich @ 2022-05-27 11:13 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

For vendor-specific code to support superpages we need to be able to
deal with a superpage mapping replacing an intermediate page table (or
a hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart
is needed to free individual page tables while a domain is still alive.
Since the freeing needs to be deferred until after a suitable IOTLB
flush has been performed, released page tables get queued for
processing by a tasklet.
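
As a usage illustration (sketch only; the real callers appear as
queue_free_pt() in the subsequent AMD and VT-d patches, and the
pte_is_*() / pte_table_mfn() helpers here are merely placeholders for
the vendor-specific PTE accessors): once a present non-leaf entry has
been replaced by a superpage mapping, the no-longer-referenced table is
handed to the new helper, and the page is actually freed by the per-CPU
tasklet only after the IOTLB flush has happened:

    /* "old" is the PTE value just replaced by a superpage entry. */
    if ( pte_is_present(old) && !pte_is_superpage(old) )
        /* Defer freeing until after the IOTLB flush. */
        iommu_queue_free_pgtable(hd, mfn_to_page(pte_table_mfn(old)));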

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was considering whether to use a softirq-tasklet instead. This would
have the benefit of avoiding extra scheduling operations, but would
come with the risk of the freeing happening prematurely because of a
process_pending_softirqs() somewhere.
---
v5: Fix CPU_UP_PREPARE for BIGMEM. Schedule tasklet in CPU_DOWN_FAILED
    when list is not empty. Skip all processing in CPU_DEAD when list is
    empty.
v4: Change type of iommu_queue_free_pgtable()'s 1st parameter. Re-base.
v3: Call process_pending_softirqs() from free_queued_pgtables().

--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -147,6 +147,7 @@ void iommu_free_domid(domid_t domid, uns
 int __must_check iommu_free_pgtables(struct domain *d);
 struct domain_iommu;
 struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd);
+void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -12,6 +12,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/cpu.h>
 #include <xen/sched.h>
 #include <xen/iocap.h>
 #include <xen/iommu.h>
@@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
     return pg;
 }
 
+/*
+ * Intermediate page tables which get replaced by large pages may only be
+ * freed after a suitable IOTLB flush. Hence such pages get queued on a
+ * per-CPU list, with a per-CPU tasklet processing the list on the assumption
+ * that the necessary IOTLB flush will have occurred by the time tasklets get
+ * to run. (List and tasklet being per-CPU has the benefit of accesses not
+ * requiring any locking.)
+ */
+static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
+static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
+
+static void free_queued_pgtables(void *arg)
+{
+    struct page_list_head *list = arg;
+    struct page_info *pg;
+    unsigned int done = 0;
+
+    while ( (pg = page_list_remove_head(list)) )
+    {
+        free_domheap_page(pg);
+
+        /* Granularity of checking somewhat arbitrary. */
+        if ( !(++done & 0x1ff) )
+             process_pending_softirqs();
+    }
+}
+
+void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg)
+{
+    unsigned int cpu = smp_processor_id();
+
+    spin_lock(&hd->arch.pgtables.lock);
+    page_list_del(pg, &hd->arch.pgtables.list);
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
+
+    tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+}
+
+static int cf_check cpu_callback(
+    struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+    unsigned int cpu = (unsigned long)hcpu;
+    struct page_list_head *list = &per_cpu(free_pgt_list, cpu);
+    struct tasklet *tasklet = &per_cpu(free_pgt_tasklet, cpu);
+
+    switch ( action )
+    {
+    case CPU_DOWN_PREPARE:
+        tasklet_kill(tasklet);
+        break;
+
+    case CPU_DEAD:
+        if ( !page_list_empty(list) )
+        {
+            page_list_splice(list, &this_cpu(free_pgt_list));
+            INIT_PAGE_LIST_HEAD(list);
+            tasklet_schedule(&this_cpu(free_pgt_tasklet));
+        }
+        break;
+
+    case CPU_UP_PREPARE:
+        INIT_PAGE_LIST_HEAD(list);
+        fallthrough;
+    case CPU_DOWN_FAILED:
+        tasklet_init(tasklet, free_queued_pgtables, list);
+        if ( !page_list_empty(list) )
+            tasklet_schedule(tasklet);
+        break;
+    }
+
+    return NOTIFY_DONE;
+}
+
+static struct notifier_block cpu_nfb = {
+    .notifier_call = cpu_callback,
+};
+
+static int __init cf_check bsp_init(void)
+{
+    if ( iommu_enabled )
+    {
+        cpu_callback(&cpu_nfb, CPU_UP_PREPARE,
+                     (void *)(unsigned long)smp_processor_id());
+        register_cpu_notifier(&cpu_nfb);
+    }
+
+    return 0;
+}
+presmp_initcall(bsp_init);
+
 bool arch_iommu_use_permitted(const struct domain *d)
 {
     /*




* [PATCH v5 04/15] AMD/IOMMU: allow use of superpage mappings
From: Jan Beulich @ 2022-05-27 11:13 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

No separate feature flags exist which would control availability of
these; the only restriction is HATS (establishing the maximum number of
page table levels in general), and even that has a lower bound of 4.
Thus we can unconditionally announce 2M and 1G mappings. (Via non-
default page sizes the implementation in principle permits arbitrary
size mappings, but these require multiple identical leaf PTEs to be
written, which isn't all that different from having to write multiple
consecutive PTEs with increasing frame numbers. IMO that's therefore
beneficial only on hardware where suitable TLBs exist; I'm unaware of
such hardware.)

Note that in principle 512G and 256T mappings could also be supported
right away, but the freeing of page tables (to be introduced in
subsequent patches) when replacing a sufficiently populated tree with a
single huge page would need suitable preemption, which will require
extra work.
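
For reference, restating the computation the code below uses (with
PTE_PER_TABLE_SHIFT being 9), the order requested by common code maps
to AMD page table levels as follows:

    unsigned int level = (IOMMUF_order(flags) / PTE_PER_TABLE_SHIFT) + 1;

    /*
     * order  0 (4k mapping)  -> level 1
     * order  9 (2M mapping)  -> level 2
     * order 18 (1G mapping)  -> level 3
     */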

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Drop PAGE_SIZE_512G. In amd_iommu_{,un}map_page() assert page order
    is supported.
v4: Change type of queue_free_pt()'s 1st parameter. Re-base.
v3: Rename queue_free_pt()'s last parameter. Replace "level > 1" checks
    where possible.

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -32,12 +32,13 @@ static unsigned int pfn_to_pde_idx(unsig
 }
 
 static union amd_iommu_pte clear_iommu_pte_present(unsigned long l1_mfn,
-                                                   unsigned long dfn)
+                                                   unsigned long dfn,
+                                                   unsigned int level)
 {
     union amd_iommu_pte *table, *pte, old;
 
     table = map_domain_page(_mfn(l1_mfn));
-    pte = &table[pfn_to_pde_idx(dfn, 1)];
+    pte = &table[pfn_to_pde_idx(dfn, level)];
     old = *pte;
 
     write_atomic(&pte->raw, 0);
@@ -351,15 +352,39 @@ static int iommu_pde_from_dfn(struct dom
     return 0;
 }
 
+static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
+{
+    if ( level > 1 )
+    {
+        union amd_iommu_pte *pt = map_domain_page(mfn);
+        unsigned int i;
+
+        for ( i = 0; i < PTE_PER_TABLE_SIZE; ++i )
+            if ( pt[i].pr && pt[i].next_level )
+            {
+                ASSERT(pt[i].next_level < level);
+                queue_free_pt(hd, _mfn(pt[i].mfn), pt[i].next_level);
+            }
+
+        unmap_domain_page(pt);
+    }
+
+    iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
+}
+
 int cf_check amd_iommu_map_page(
     struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
     unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    unsigned int level = (IOMMUF_order(flags) / PTE_PER_TABLE_SHIFT) + 1;
     int rc;
     unsigned long pt_mfn = 0;
     union amd_iommu_pte old;
 
+    ASSERT((hd->platform_ops->page_sizes >> IOMMUF_order(flags)) &
+           PAGE_SIZE_4K);
+
     spin_lock(&hd->arch.mapping_lock);
 
     /*
@@ -384,7 +409,7 @@ int cf_check amd_iommu_map_page(
         return rc;
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), 1, &pt_mfn, flush_flags, true) ||
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn, flush_flags, true) ||
          !pt_mfn )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -394,8 +419,8 @@ int cf_check amd_iommu_map_page(
         return -EFAULT;
     }
 
-    /* Install 4k mapping */
-    old = set_iommu_pte_present(pt_mfn, dfn_x(dfn), mfn_x(mfn), 1,
+    /* Install mapping */
+    old = set_iommu_pte_present(pt_mfn, dfn_x(dfn), mfn_x(mfn), level,
                                 (flags & IOMMUF_writable),
                                 (flags & IOMMUF_readable));
 
@@ -403,8 +428,13 @@ int cf_check amd_iommu_map_page(
 
     *flush_flags |= IOMMU_FLUSHF_added;
     if ( old.pr )
+    {
         *flush_flags |= IOMMU_FLUSHF_modified;
 
+        if ( IOMMUF_order(flags) && old.next_level )
+            queue_free_pt(hd, _mfn(old.mfn), old.next_level);
+    }
+
     return 0;
 }
 
@@ -413,8 +443,15 @@ int cf_check amd_iommu_unmap_page(
 {
     unsigned long pt_mfn = 0;
     struct domain_iommu *hd = dom_iommu(d);
+    unsigned int level = (order / PTE_PER_TABLE_SHIFT) + 1;
     union amd_iommu_pte old = {};
 
+    /*
+     * While really we could unmap at any granularity, for now we assume unmaps
+     * are issued by common code only at the same granularity as maps.
+     */
+    ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
+
     spin_lock(&hd->arch.mapping_lock);
 
     if ( !hd->arch.amd.root_table )
@@ -423,7 +460,7 @@ int cf_check amd_iommu_unmap_page(
         return 0;
     }
 
-    if ( iommu_pde_from_dfn(d, dfn_x(dfn), 1, &pt_mfn, flush_flags, false) )
+    if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn, flush_flags, false) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_ERROR("invalid IO pagetable entry dfn = %"PRI_dfn"\n",
@@ -435,14 +472,19 @@ int cf_check amd_iommu_unmap_page(
     if ( pt_mfn )
     {
         /* Mark PTE as 'page not present'. */
-        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn));
+        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level);
     }
 
     spin_unlock(&hd->arch.mapping_lock);
 
     if ( old.pr )
+    {
         *flush_flags |= IOMMU_FLUSHF_modified;
 
+        if ( order && old.next_level )
+            queue_free_pt(hd, _mfn(old.mfn), old.next_level);
+    }
+
     return 0;
 }
 
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -747,7 +747,7 @@ static void cf_check amd_dump_page_table
 }
 
 static const struct iommu_ops __initconst_cf_clobber _iommu_ops = {
-    .page_sizes = PAGE_SIZE_4K,
+    .page_sizes = PAGE_SIZE_4K | PAGE_SIZE_2M | PAGE_SIZE_1G,
     .init = amd_iommu_domain_init,
     .hwdom_init = amd_iommu_hwdom_init,
     .quarantine_init = amd_iommu_quarantine_init,
--- a/xen/include/xen/page-defs.h
+++ b/xen/include/xen/page-defs.h
@@ -21,4 +21,14 @@
 #define PAGE_MASK_64K               PAGE_MASK_GRAN(64K)
 #define PAGE_ALIGN_64K(addr)        PAGE_ALIGN_GRAN(64K, addr)
 
+#define PAGE_SHIFT_2M               21
+#define PAGE_SIZE_2M                PAGE_SIZE_GRAN(2M)
+#define PAGE_MASK_2M                PAGE_MASK_GRAN(2M)
+#define PAGE_ALIGN_2M(addr)         PAGE_ALIGN_GRAN(2M, addr)
+
+#define PAGE_SHIFT_1G               30
+#define PAGE_SIZE_1G                PAGE_SIZE_GRAN(1G)
+#define PAGE_MASK_1G                PAGE_MASK_GRAN(1G)
+#define PAGE_ALIGN_1G(addr)         PAGE_ALIGN_GRAN(1G, addr)
+
 #endif /* __XEN_PAGE_DEFS_H__ */




* [PATCH v5 05/15] VT-d: allow use of superpage mappings
From: Jan Beulich @ 2022-05-27 11:14 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

... depending on feature availability (and absence of quirks).

Also make the page table dumping function aware of superpages.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: In intel_iommu_{,un}map_page() assert page order is supported.
v4: Change type of queue_free_pt()'s 1st parameter. Re-base.
v3: Rename queue_free_pt()'s last parameter. Replace "level > 1" checks
    where possible. Tighten assertion.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -784,18 +784,37 @@ static int __must_check cf_check iommu_f
     return iommu_flush_iotlb(d, INVALID_DFN, 0, 0);
 }
 
+static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
+{
+    if ( level > 1 )
+    {
+        struct dma_pte *pt = map_domain_page(mfn);
+        unsigned int i;
+
+        for ( i = 0; i < PTE_NUM; ++i )
+            if ( dma_pte_present(pt[i]) && !dma_pte_superpage(pt[i]) )
+                queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(pt[i])),
+                              level - 1);
+
+        unmap_domain_page(pt);
+    }
+
+    iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
+}
+
 /* clear one page's page table */
 static int dma_pte_clear_one(struct domain *domain, daddr_t addr,
                              unsigned int order,
                              unsigned int *flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(domain);
-    struct dma_pte *page = NULL, *pte = NULL;
+    struct dma_pte *page = NULL, *pte = NULL, old;
     u64 pg_maddr;
+    unsigned int level = (order / LEVEL_STRIDE) + 1;
 
     spin_lock(&hd->arch.mapping_lock);
-    /* get last level pte */
-    pg_maddr = addr_to_dma_page_maddr(domain, addr, 1, flush_flags, false);
+    /* get target level pte */
+    pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags, false);
     if ( pg_maddr < PAGE_SIZE )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -803,7 +822,7 @@ static int dma_pte_clear_one(struct doma
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = page + address_level_offset(addr, 1);
+    pte = &page[address_level_offset(addr, level)];
 
     if ( !dma_pte_present(*pte) )
     {
@@ -812,14 +831,20 @@ static int dma_pte_clear_one(struct doma
         return 0;
     }
 
+    old = *pte;
     dma_clear_pte(*pte);
-    *flush_flags |= IOMMU_FLUSHF_modified;
 
     spin_unlock(&hd->arch.mapping_lock);
     iommu_sync_cache(pte, sizeof(struct dma_pte));
 
     unmap_vtd_domain_page(page);
 
+    *flush_flags |= IOMMU_FLUSHF_modified;
+
+    if ( order && !dma_pte_superpage(old) )
+        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+                      order / LEVEL_STRIDE);
+
     return 0;
 }
 
@@ -2097,8 +2122,12 @@ static int __must_check cf_check intel_i
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page, *pte, old, new = {};
     u64 pg_maddr;
+    unsigned int level = (IOMMUF_order(flags) / LEVEL_STRIDE) + 1;
     int rc = 0;
 
+    ASSERT((hd->platform_ops->page_sizes >> IOMMUF_order(flags)) &
+           PAGE_SIZE_4K);
+
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
         return 0;
@@ -2121,7 +2150,7 @@ static int __must_check cf_check intel_i
         return 0;
     }
 
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1, flush_flags,
+    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), level, flush_flags,
                                       true);
     if ( pg_maddr < PAGE_SIZE )
     {
@@ -2130,13 +2159,15 @@ static int __must_check cf_check intel_i
     }
 
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = &page[dfn_x(dfn) & LEVEL_MASK];
+    pte = &page[address_level_offset(dfn_to_daddr(dfn), level)];
     old = *pte;
 
     dma_set_pte_addr(new, mfn_to_maddr(mfn));
     dma_set_pte_prot(new,
                      ((flags & IOMMUF_readable) ? DMA_PTE_READ  : 0) |
                      ((flags & IOMMUF_writable) ? DMA_PTE_WRITE : 0));
+    if ( IOMMUF_order(flags) )
+        dma_set_pte_superpage(new);
 
     /* Set the SNP on leaf page table if Snoop Control available */
     if ( iommu_snoop )
@@ -2157,14 +2188,26 @@ static int __must_check cf_check intel_i
 
     *flush_flags |= IOMMU_FLUSHF_added;
     if ( dma_pte_present(old) )
+    {
         *flush_flags |= IOMMU_FLUSHF_modified;
 
+        if ( IOMMUF_order(flags) && !dma_pte_superpage(old) )
+            queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+                          IOMMUF_order(flags) / LEVEL_STRIDE);
+    }
+
     return rc;
 }
 
 static int __must_check cf_check intel_iommu_unmap_page(
     struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags)
 {
+    /*
+     * While really we could unmap at any granularity, for now we assume unmaps
+     * are issued by common code only at the same granularity as maps.
+     */
+    ASSERT((dom_iommu(d)->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
+
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
         return 0;
@@ -2519,6 +2562,7 @@ static int __init cf_check vtd_setup(voi
 {
     struct acpi_drhd_unit *drhd;
     struct vtd_iommu *iommu;
+    unsigned int large_sizes = PAGE_SIZE_2M | PAGE_SIZE_1G;
     int ret;
     bool reg_inval_supported = true;
 
@@ -2561,6 +2605,11 @@ static int __init cf_check vtd_setup(voi
                cap_sps_2mb(iommu->cap) ? ", 2MB" : "",
                cap_sps_1gb(iommu->cap) ? ", 1GB" : "");
 
+        if ( !cap_sps_2mb(iommu->cap) )
+            large_sizes &= ~PAGE_SIZE_2M;
+        if ( !cap_sps_1gb(iommu->cap) )
+            large_sizes &= ~PAGE_SIZE_1G;
+
 #ifndef iommu_snoop
         if ( iommu_snoop && !ecap_snp_ctl(iommu->ecap) )
             iommu_snoop = false;
@@ -2632,6 +2681,9 @@ static int __init cf_check vtd_setup(voi
     if ( ret )
         goto error;
 
+    ASSERT(iommu_ops.page_sizes == PAGE_SIZE_4K);
+    iommu_ops.page_sizes |= large_sizes;
+
     register_keyhandler('V', vtd_dump_iommu_info, "dump iommu info", 1);
 
     return 0;
@@ -2964,7 +3016,7 @@ static void vtd_dump_page_table_level(pa
             continue;
 
         address = gpa + offset_level_address(i, level);
-        if ( next_level >= 1 ) 
+        if ( next_level && !dma_pte_superpage(*pte) )
             vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
                                       address, indent + 1);
         else




* [PATCH v5 06/15] IOMMU: fold flush-all hook into "flush one"
From: Jan Beulich @ 2022-05-27 11:16 UTC
  To: xen-devel
  Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné,
	Julien Grall, Rahul Singh, Bertrand Marquis, Stefano Stabellini,
	Volodymyr Babchuk

Having a separate flush-all hook has always puzzled me somewhat. We
will want to be able to force a full flush via accumulated flush flags
from the map/unmap functions. Introduce a respective new flag and fold
all flush handling into the single remaining hook.

Note that because of the respective comments in the SMMU and IPMMU-VMSA
code, I've folded the two prior hook functions into one there. For
SMMU-v3, which lacks such a comment about the hardware being incapable
of selective flushes, I've left both functions in place on the
assumption that selective and full flushes will eventually want
separating.
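
In short, the resulting calling convention for the single remaining
hook (as can also be seen in the iommu_iotlb_flush_all() change below)
is:

    /* Selective flush: dfn/page_count describe the range that changed. */
    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count,
                    flush_flags);

    /*
     * Full flush: no meaningful range; IOMMU_FLUSHF_all requests that
     * everything be flushed.
     */
    rc = iommu_call(hd->platform_ops, iotlb_flush, d, INVALID_DFN, 0,
                    flush_flags | IOMMU_FLUSHF_all);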

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com> # IPMMU-VMSA, SMMU-V2
Reviewed-by: Rahul Singh <rahul.singh@arm.com> # SMMUv3
Acked-by: Julien Grall <jgrall@amazon.com> # Arm
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
TBD: What we really are going to need is for the map/unmap functions to
     specify that a wider region needs flushing than just the one
     covered by the present set of (un)maps. This may still be less than
     a full flush, but at least as a first step it seemed better to me
     to keep things simple and go the flush-all route.
---
v4: Re-base.
v3: Re-base over changes earlier in the series.
v2: New.

--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -258,7 +258,6 @@ int cf_check amd_iommu_get_reserved_devi
 int __must_check cf_check amd_iommu_flush_iotlb_pages(
     struct domain *d, dfn_t dfn, unsigned long page_count,
     unsigned int flush_flags);
-int __must_check cf_check amd_iommu_flush_iotlb_all(struct domain *d);
 void amd_iommu_print_entries(const struct amd_iommu *iommu, unsigned int dev_id,
                              dfn_t dfn);
 
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -548,15 +548,18 @@ int cf_check amd_iommu_flush_iotlb_pages
 {
     unsigned long dfn_l = dfn_x(dfn);
 
-    ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
-    ASSERT(flush_flags);
+    if ( !(flush_flags & IOMMU_FLUSHF_all) )
+    {
+        ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
+        ASSERT(flush_flags);
+    }
 
     /* Unless a PTE was modified, no flush is required */
     if ( !(flush_flags & IOMMU_FLUSHF_modified) )
         return 0;
 
-    /* If the range wraps then just flush everything */
-    if ( dfn_l + page_count < dfn_l )
+    /* If so requested or if the range wraps then just flush everything. */
+    if ( (flush_flags & IOMMU_FLUSHF_all) || dfn_l + page_count < dfn_l )
     {
         amd_iommu_flush_all_pages(d);
         return 0;
@@ -581,13 +584,6 @@ int cf_check amd_iommu_flush_iotlb_pages
 
     return 0;
 }
-
-int cf_check amd_iommu_flush_iotlb_all(struct domain *d)
-{
-    amd_iommu_flush_all_pages(d);
-
-    return 0;
-}
 
 int amd_iommu_reserve_domain_unity_map(struct domain *d,
                                        const struct ivrs_unity_map *map,
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -759,7 +759,6 @@ static const struct iommu_ops __initcons
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
-    .iotlb_flush_all = amd_iommu_flush_iotlb_all,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .enable_x2apic = iov_enable_xt,
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -1000,13 +1000,19 @@ out:
 }
 
 /* Xen IOMMU ops */
-static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
+static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                          unsigned long page_count,
+                                          unsigned int flush_flags)
 {
     struct ipmmu_vmsa_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
 
+    ASSERT(flush_flags);
+
     if ( !xen_domain || !xen_domain->root_domain )
         return 0;
 
+    /* The hardware doesn't support selective TLB flush. */
+
     spin_lock(&xen_domain->lock);
     ipmmu_tlb_invalidate(xen_domain->root_domain);
     spin_unlock(&xen_domain->lock);
@@ -1014,16 +1020,6 @@ static int __must_check ipmmu_iotlb_flus
     return 0;
 }
 
-static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                          unsigned long page_count,
-                                          unsigned int flush_flags)
-{
-    ASSERT(flush_flags);
-
-    /* The hardware doesn't support selective TLB flush. */
-    return ipmmu_iotlb_flush_all(d);
-}
-
 static struct ipmmu_vmsa_domain *ipmmu_get_cache_domain(struct domain *d,
                                                         struct device *dev)
 {
@@ -1360,7 +1356,6 @@ static const struct iommu_ops ipmmu_iomm
     .hwdom_init      = arch_iommu_hwdom_init,
     .teardown        = ipmmu_iommu_domain_teardown,
     .iotlb_flush     = ipmmu_iotlb_flush,
-    .iotlb_flush_all = ipmmu_iotlb_flush_all,
     .assign_device   = ipmmu_assign_device,
     .reassign_device = ipmmu_reassign_device,
     .map_page        = arm_iommu_map_page,
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2648,11 +2648,17 @@ static int force_stage = 2;
  */
 static u32 platform_features = ARM_SMMU_FEAT_COHERENT_WALK;
 
-static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
+static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
+					     unsigned long page_count,
+					     unsigned int flush_flags)
 {
 	struct arm_smmu_xen_domain *smmu_domain = dom_iommu(d)->arch.priv;
 	struct iommu_domain *cfg;
 
+	ASSERT(flush_flags);
+
+	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
+
 	spin_lock(&smmu_domain->lock);
 	list_for_each_entry(cfg, &smmu_domain->contexts, list) {
 		/*
@@ -2669,16 +2675,6 @@ static int __must_check arm_smmu_iotlb_f
 	return 0;
 }
 
-static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
-					     unsigned long page_count,
-					     unsigned int flush_flags)
-{
-	ASSERT(flush_flags);
-
-	/* ARM SMMU v1 doesn't have flush by VMA and VMID */
-	return arm_smmu_iotlb_flush_all(d);
-}
-
 static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
 						struct device *dev)
 {
@@ -2863,7 +2859,6 @@ static const struct iommu_ops arm_smmu_i
     .add_device = arm_smmu_dt_add_device_generic,
     .teardown = arm_smmu_iommu_domain_teardown,
     .iotlb_flush = arm_smmu_iotlb_flush,
-    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
     .assign_device = arm_smmu_assign_dev,
     .reassign_device = arm_smmu_reassign_dev,
     .map_page = arm_iommu_map_page,
--- a/xen/drivers/passthrough/arm/smmu-v3.c
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -3416,7 +3416,6 @@ static const struct iommu_ops arm_smmu_i
 	.hwdom_init		= arch_iommu_hwdom_init,
 	.teardown		= arm_smmu_iommu_xen_domain_teardown,
 	.iotlb_flush		= arm_smmu_iotlb_flush,
-	.iotlb_flush_all	= arm_smmu_iotlb_flush_all,
 	.assign_device		= arm_smmu_assign_dev,
 	.reassign_device	= arm_smmu_reassign_dev,
 	.map_page		= arm_iommu_map_page,
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -478,15 +478,12 @@ int iommu_iotlb_flush_all(struct domain
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
 
-    if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush_all ||
+    if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
          !flush_flags )
         return 0;
 
-    /*
-     * The operation does a full flush so we don't need to pass the
-     * flush_flags in.
-     */
-    rc = iommu_call(hd->platform_ops, iotlb_flush_all, d);
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, INVALID_DFN, 0,
+                    flush_flags | IOMMU_FLUSHF_all);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -772,18 +772,21 @@ static int __must_check cf_check iommu_f
     struct domain *d, dfn_t dfn, unsigned long page_count,
     unsigned int flush_flags)
 {
-    ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
-    ASSERT(flush_flags);
+    if ( flush_flags & IOMMU_FLUSHF_all )
+    {
+        dfn = INVALID_DFN;
+        page_count = 0;
+    }
+    else
+    {
+        ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
+        ASSERT(flush_flags);
+    }
 
     return iommu_flush_iotlb(d, dfn, flush_flags & IOMMU_FLUSHF_modified,
                              page_count);
 }
 
-static int __must_check cf_check iommu_flush_iotlb_all(struct domain *d)
-{
-    return iommu_flush_iotlb(d, INVALID_DFN, 0, 0);
-}
-
 static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
 {
     if ( level > 1 )
@@ -3197,7 +3200,6 @@ static const struct iommu_ops __initcons
     .resume = vtd_resume,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
-    .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_page_tables = vtd_dump_page_tables,
 };
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -147,9 +147,11 @@ enum
 {
     _IOMMU_FLUSHF_added,
     _IOMMU_FLUSHF_modified,
+    _IOMMU_FLUSHF_all,
 };
 #define IOMMU_FLUSHF_added (1u << _IOMMU_FLUSHF_added)
 #define IOMMU_FLUSHF_modified (1u << _IOMMU_FLUSHF_modified)
+#define IOMMU_FLUSHF_all (1u << _IOMMU_FLUSHF_all)
 
 int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                            unsigned long page_count, unsigned int flags,
@@ -281,7 +283,6 @@ struct iommu_ops {
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned long page_count,
                                     unsigned int flush_flags);
-    int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
     void (*dump_page_tables)(struct domain *d);
 




* [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables
From: Jan Beulich @ 2022-05-27 11:17 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné, Wei Liu

This is a re-usable helper (kind of a template) which gets introduced
without users so that the individual subsequent patches introducing such
users can get committed independently of one another.

See the comment at the top of the new file. To demonstrate the effect,
if a page table had just 16 entries, this would be the set of markers
for a page table with fully contiguous mappings:

index  0 1 2 3 4 5 6 7 8 9 A B C D E F
marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0

"Contiguous" here means not only present entries with successively
increasing MFNs, each one suitably aligned for its slot, but also a
respective number of all non-present entries.
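
To also illustrate the update path (worked example, derived from the
algorithm in the new header below): if, starting from the fully
contiguous state above, the entry at index 6 is subsequently replaced
by one which is not contiguous with its neighbors (e.g. cleared, or
pointing at an unrelated MFN), the resulting helper invocation for
index 6 reduces the markers of the affected lower-numbered entries,
yielding:

index  0 1 2 3 4 5 6 7 8 9 A B C D E F
marker 2 0 1 0 1 0 0 0 3 0 1 0 2 0 1 0

Entry 0's marker dropping below 4 records that the table as a whole can
no longer be replaced by a superpage, while e.g. entry 8's marker still
indicates that the upper half remains fully contiguous.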

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
@Roger: I've retained your R-b, but I was on the edge of dropping it.
---
v5: Bail early from step 1 if possible. Arrange for consumers who are
    just after CONTIG_{LEVEL_SHIFT,NR}. Extend comment.
v3: Rename function and header. Introduce IS_CONTIG().
v2: New.

--- /dev/null
+++ b/xen/arch/x86/include/asm/pt-contig-markers.h
@@ -0,0 +1,110 @@
+#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
+#define __ASM_X86_PT_CONTIG_MARKERS_H
+
+/*
+ * Short of having function templates in C, the function defined below is
+ * intended to be used by multiple parties interested in recording the
+ * degree of contiguity in mappings by a single page table.
+ *
+ * Scheme: Every entry records the order of contiguous successive entries,
+ * up to the maximum order covered by that entry (which is the number of
+ * clear low bits in its index, with entry 0 being the exception using
+ * the base-2 logarithm of the number of entries in a single page table).
+ * While a few entries need touching upon update, knowing whether the
+ * table is fully contiguous (and can hence be replaced by a higher level
+ * leaf entry) is then possible by simply looking at entry 0's marker.
+ *
+ * Prereqs:
+ * - CONTIG_MASK needs to be #define-d, to a value having at least 4
+ *   contiguous bits (ignored by hardware), before including this file (or
+ *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
+ * - page tables to be passed to the helper need to be initialized with
+ *   correct markers,
+ * - not-present entries need to be entirely clear except for the marker.
+ */
+
+/* This is the same for all anticipated users, so doesn't need passing in. */
+#define CONTIG_LEVEL_SHIFT 9
+#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
+
+#ifdef CONTIG_MASK
+
+#include <xen/bitops.h>
+#include <xen/lib.h>
+#include <xen/page-size.h>
+
+#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
+#define SET_MARKER(e, m) \
+    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
+
+#define IS_CONTIG(kind, pt, i, idx, shift, b) \
+    ((kind) == PTE_kind_leaf \
+     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
+     : !((pt)[i] & ~CONTIG_MASK))
+
+enum PTE_kind {
+    PTE_kind_null,
+    PTE_kind_leaf,
+    PTE_kind_table,
+};
+
+static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
+                                     unsigned int level, enum PTE_kind kind)
+{
+    unsigned int b, i = idx;
+    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
+
+    ASSERT(idx < CONTIG_NR);
+    ASSERT(!(pt[idx] & CONTIG_MASK));
+
+    /* Step 1: Reduce markers in lower numbered entries. */
+    while ( i )
+    {
+        b = find_first_set_bit(i);
+        i &= ~(1U << b);
+        if ( GET_MARKER(pt[i]) <= b )
+            break;
+        SET_MARKER(pt[i], b);
+    }
+
+    /* An intermediate table is never contiguous with anything. */
+    if ( kind == PTE_kind_table )
+        return false;
+
+    /*
+     * Present entries need in-sync index and address to be a candidate
+     * for being contiguous: What we're after is whether ultimately the
+     * intermediate table can be replaced by a superpage.
+     */
+    if ( kind != PTE_kind_null &&
+         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
+        return false;
+
+    /* Step 2: Check higher numbered entries for contiguity. */
+    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
+    {
+        i = idx | (1U << b);
+        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
+            break;
+    }
+
+    /* Step 3: Update markers in this and lower numbered entries. */
+    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
+    {
+        i = idx ^ (1U << b);
+        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
+            break;
+        idx &= ~(1U << b);
+    }
+
+    return b == CONTIG_LEVEL_SHIFT;
+}
+
+#undef IS_CONTIG
+#undef SET_MARKER
+#undef GET_MARKER
+#undef CONTIG_MASK
+
+#endif /* CONTIG_MASK */
+
+#endif /* __ASM_X86_PT_CONTIG_MARKERS_H */




* [PATCH v5 08/15] IOMMU/x86: prefill newly allocate page tables
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (6 preceding siblings ...)
  2022-05-27 11:17 ` [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables Jan Beulich
@ 2022-05-27 11:17 ` Jan Beulich
  2022-06-01 12:59   ` Roger Pau Monné
  2022-05-27 11:18 ` [PATCH v5 09/15] AMD/IOMMU: free all-empty " Jan Beulich
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:17 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

Page tables are used for two purposes after allocation: They either
start out all empty, or they are filled to replace a superpage.
Subsequently, to replace all empty or fully contiguous page tables,
contiguous sub-regions will be recorded within individual page tables.
Install the initial set of markers immediately after allocation. Make
sure to retain these markers when further populating a page table in
preparation for it to replace a superpage.

The markers are simply 4-bit fields holding the order value of
contiguous entries. To demonstrate this, if a page table had just 16
entries, this would be the initial (fully contiguous) set of markers:

index  0 1 2 3 4 5 6 7 8 9 A B C D E F
marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0

"Contiguous" here means not only present entries with successively
increasing MFNs, each one suitably aligned for its slot, and identical
attributes, but also a respective number of all non-present (zero except
for the markers) entries.
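
Purely for illustration (standalone C, not part of the patch), this is what
the GET_MARKER()/SET_MARKER() accesses introduced earlier in the series come
down to: a 4-bit field folded into otherwise ignored PTE bits selected by a
mask. The mask value used below is AMD's (bits 1-4, the "ign0" field); VT-d
uses bits 52-55 instead.

#include <assert.h>
#include <stdint.h>

#define CONTIG_MASK 0x1eULL

static unsigned int get_marker(uint64_t pte)
{
    return (pte & CONTIG_MASK) >> __builtin_ctzll(CONTIG_MASK);
}

static uint64_t set_marker(uint64_t pte, unsigned int m)
{
    return (pte & ~CONTIG_MASK) |
           ((uint64_t)m << __builtin_ctzll(CONTIG_MASK));
}

int main(void)
{
    uint64_t pte = 0;

    pte = set_marker(pte, 9);      /* entry 0 of a freshly allocated table */
    assert(get_marker(pte) == 9);
    assert(!(pte & ~CONTIG_MASK)); /* non-present: only the marker is set */

    return 0;
}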

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
An alternative to the ASSERT()s added to set_iommu_ptes_present() would
be to make the function less general-purpose; it's used in a single
place only after all (i.e. it might as well be folded into its only
caller).

While in VT-d's comment ahead of struct dma_pte I'm adjusting the
description of the high bits, I'd like to note that the description of
some of the lower bits isn't correct either. Yet I don't think adjusting
that belongs here.
---
v5: Assert next_mfn is suitably aligned in set_iommu_ptes_present(). Use
    CONTIG_LEVEL_SHIFT in favor of PAGE_SHIFT-3.
v4: Add another comment referring to pt-contig-markers.h. Re-base.
v3: Add comments. Re-base.
v2: New.

--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -146,7 +146,8 @@ void iommu_free_domid(domid_t domid, uns
 
 int __must_check iommu_free_pgtables(struct domain *d);
 struct domain_iommu;
-struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd);
+struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd,
+                                                   uint64_t contig_mask);
 void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
--- a/xen/drivers/passthrough/amd/iommu-defs.h
+++ b/xen/drivers/passthrough/amd/iommu-defs.h
@@ -446,11 +446,13 @@ union amd_iommu_x2apic_control {
 #define IOMMU_PAGE_TABLE_U32_PER_ENTRY	(IOMMU_PAGE_TABLE_ENTRY_SIZE / 4)
 #define IOMMU_PAGE_TABLE_ALIGNMENT	4096
 
+#define IOMMU_PTE_CONTIG_MASK           0x1e /* The ign0 field below. */
+
 union amd_iommu_pte {
     uint64_t raw;
     struct {
         bool pr:1;
-        unsigned int ign0:4;
+        unsigned int ign0:4; /* Covered by IOMMU_PTE_CONTIG_MASK. */
         bool a:1;
         bool d:1;
         unsigned int ign1:2;
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -21,6 +21,8 @@
 
 #include "iommu.h"
 
+#include <asm/pt-contig-markers.h>
+
 /* Given pfn and page table level, return pde index */
 static unsigned int pfn_to_pde_idx(unsigned long pfn, unsigned int level)
 {
@@ -113,9 +115,23 @@ static void set_iommu_ptes_present(unsig
         return;
     }
 
+    ASSERT(!(next_mfn & (page_sz - 1)));
+
     while ( nr_ptes-- )
     {
-        set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
+        ASSERT(!pde->next_level);
+        ASSERT(!pde->u);
+
+        if ( pde > table )
+            ASSERT(pde->ign0 == find_first_set_bit(pde - table));
+        else
+            ASSERT(pde->ign0 == CONTIG_LEVEL_SHIFT);
+
+        pde->iw = iw;
+        pde->ir = ir;
+        pde->fc = true; /* See set_iommu_pde_present(). */
+        pde->mfn = next_mfn;
+        pde->pr = true;
 
         ++pde;
         next_mfn += page_sz;
@@ -295,7 +311,7 @@ static int iommu_pde_from_dfn(struct dom
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
-            table = iommu_alloc_pgtable(hd);
+            table = iommu_alloc_pgtable(hd, IOMMU_PTE_CONTIG_MASK);
             if ( table == NULL )
             {
                 AMD_IOMMU_ERROR("cannot allocate I/O page table\n");
@@ -325,7 +341,7 @@ static int iommu_pde_from_dfn(struct dom
 
             if ( next_table_mfn == 0 )
             {
-                table = iommu_alloc_pgtable(hd);
+                table = iommu_alloc_pgtable(hd, IOMMU_PTE_CONTIG_MASK);
                 if ( table == NULL )
                 {
                     AMD_IOMMU_ERROR("cannot allocate I/O page table\n");
@@ -726,7 +742,7 @@ static int fill_qpt(union amd_iommu_pte
                  * page table pages, and the resulting allocations are always
                  * zeroed.
                  */
-                pgs[level] = iommu_alloc_pgtable(hd);
+                pgs[level] = iommu_alloc_pgtable(hd, 0);
                 if ( !pgs[level] )
                 {
                     rc = -ENOMEM;
@@ -784,7 +800,7 @@ int cf_check amd_iommu_quarantine_init(s
         return 0;
     }
 
-    pdev->arch.amd.root_table = iommu_alloc_pgtable(hd);
+    pdev->arch.amd.root_table = iommu_alloc_pgtable(hd, 0);
     if ( !pdev->arch.amd.root_table )
         return -ENOMEM;
 
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -342,7 +342,7 @@ int amd_iommu_alloc_root(struct domain *
 
     if ( unlikely(!hd->arch.amd.root_table) && d != dom_io )
     {
-        hd->arch.amd.root_table = iommu_alloc_pgtable(hd);
+        hd->arch.amd.root_table = iommu_alloc_pgtable(hd, 0);
         if ( !hd->arch.amd.root_table )
             return -ENOMEM;
     }
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -334,7 +334,7 @@ static uint64_t addr_to_dma_page_maddr(s
             goto out;
 
         pte_maddr = level;
-        if ( !(pg = iommu_alloc_pgtable(hd)) )
+        if ( !(pg = iommu_alloc_pgtable(hd, 0)) )
             goto out;
 
         hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
@@ -376,7 +376,7 @@ static uint64_t addr_to_dma_page_maddr(s
             }
 
             pte_maddr = level - 1;
-            pg = iommu_alloc_pgtable(hd);
+            pg = iommu_alloc_pgtable(hd, DMA_PTE_CONTIG_MASK);
             if ( !pg )
                 break;
 
@@ -388,12 +388,13 @@ static uint64_t addr_to_dma_page_maddr(s
                 struct dma_pte *split = map_vtd_domain_page(pte_maddr);
                 unsigned long inc = 1UL << level_to_offset_bits(level - 1);
 
-                split[0].val = pte->val;
+                split[0].val |= pte->val & ~DMA_PTE_CONTIG_MASK;
                 if ( inc == PAGE_SIZE )
                     split[0].val &= ~DMA_PTE_SP;
 
                 for ( offset = 1; offset < PTE_NUM; ++offset )
-                    split[offset].val = split[offset - 1].val + inc;
+                    split[offset].val |=
+                        (split[offset - 1].val & ~DMA_PTE_CONTIG_MASK) + inc;
 
                 iommu_sync_cache(split, PAGE_SIZE);
                 unmap_vtd_domain_page(split);
@@ -2176,7 +2177,7 @@ static int __must_check cf_check intel_i
     if ( iommu_snoop )
         dma_set_pte_snp(new);
 
-    if ( old.val == new.val )
+    if ( !((old.val ^ new.val) & ~DMA_PTE_CONTIG_MASK) )
     {
         spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
@@ -3064,7 +3065,7 @@ static int fill_qpt(struct dma_pte *this
                  * page table pages, and the resulting allocations are always
                  * zeroed.
                  */
-                pgs[level] = iommu_alloc_pgtable(hd);
+                pgs[level] = iommu_alloc_pgtable(hd, 0);
                 if ( !pgs[level] )
                 {
                     rc = -ENOMEM;
@@ -3121,7 +3122,7 @@ static int cf_check intel_iommu_quaranti
     if ( !drhd )
         return -ENODEV;
 
-    pg = iommu_alloc_pgtable(hd);
+    pg = iommu_alloc_pgtable(hd, 0);
     if ( !pg )
         return -ENOMEM;
 
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -253,7 +253,10 @@ struct context_entry {
  * 2-6: reserved
  * 7: super page
  * 8-11: available
- * 12-63: Host physcial address
+ * 12-51: Host physcial address
+ * 52-61: available (52-55 used for DMA_PTE_CONTIG_MASK)
+ * 62: reserved
+ * 63: available
  */
 struct dma_pte {
     u64 val;
@@ -263,6 +266,7 @@ struct dma_pte {
 #define DMA_PTE_PROT (DMA_PTE_READ | DMA_PTE_WRITE)
 #define DMA_PTE_SP   (1 << 7)
 #define DMA_PTE_SNP  (1 << 11)
+#define DMA_PTE_CONTIG_MASK  (0xfull << PADDR_BITS)
 #define dma_clear_pte(p)    do {(p).val = 0;} while(0)
 #define dma_set_pte_readable(p) do {(p).val |= DMA_PTE_READ;} while(0)
 #define dma_set_pte_writable(p) do {(p).val |= DMA_PTE_WRITE;} while(0)
@@ -276,7 +280,7 @@ struct dma_pte {
 #define dma_pte_write(p) (dma_pte_prot(p) & DMA_PTE_WRITE)
 #define dma_pte_addr(p) ((p).val & PADDR_MASK & PAGE_MASK_4K)
 #define dma_set_pte_addr(p, addr) do {\
-            (p).val |= ((addr) & PAGE_MASK_4K); } while (0)
+            (p).val |= ((addr) & PADDR_MASK & PAGE_MASK_4K); } while (0)
 #define dma_pte_present(p) (((p).val & DMA_PTE_PROT) != 0)
 #define dma_pte_superpage(p) (((p).val & DMA_PTE_SP) != 0)
 
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -26,6 +26,7 @@
 #include <asm/hvm/io.h>
 #include <asm/io_apic.h>
 #include <asm/mem_paging.h>
+#include <asm/pt-contig-markers.h>
 #include <asm/setup.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
@@ -538,11 +539,12 @@ int iommu_free_pgtables(struct domain *d
     return 0;
 }
 
-struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
+struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
+                                      uint64_t contig_mask)
 {
     unsigned int memflags = 0;
     struct page_info *pg;
-    void *p;
+    uint64_t *p;
 
 #ifdef CONFIG_NUMA
     if ( hd->node != NUMA_NO_NODE )
@@ -554,7 +556,29 @@ struct page_info *iommu_alloc_pgtable(st
         return NULL;
 
     p = __map_domain_page(pg);
-    clear_page(p);
+
+    if ( contig_mask )
+    {
+        /* See pt-contig-markers.h for a description of the marker scheme. */
+        unsigned int i, shift = find_first_set_bit(contig_mask);
+
+        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) == CONTIG_LEVEL_SHIFT);
+
+        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
+        p[1] = 0;
+        p[2] = 1ull << shift;
+        p[3] = 0;
+
+        for ( i = 4; i < PAGE_SIZE / 8; i += 4 )
+        {
+            p[i + 0] = (find_first_set_bit(i) + 0ull) << shift;
+            p[i + 1] = 0;
+            p[i + 2] = 1ull << shift;
+            p[i + 3] = 0;
+        }
+    }
+    else
+        clear_page(p);
 
     iommu_sync_cache(p, PAGE_SIZE);
 




* [PATCH v5 09/15] AMD/IOMMU: free all-empty page tables
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (7 preceding siblings ...)
  2022-05-27 11:17 ` [PATCH v5 08/15] IOMMU/x86: prefill newly allocate " Jan Beulich
@ 2022-05-27 11:18 ` Jan Beulich
  2022-05-27 11:19 ` [PATCH v5 10/15] VT-d: " Jan Beulich
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:18 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

When a page table ends up with no present entries left, it can be
replaced by a non-present entry at the next higher level. The page table
itself can then be scheduled for freeing.

Note that while its output isn't used there yet,
pt_update_contig_markers() right away needs to be called in all places
where entries get updated, not just the one where entries get cleared.
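
To make the resulting control flow concrete, here is a self-contained toy
model (plain C, not the Xen code); the real implementation obtains the
"table is now all empty" answer from pt_update_contig_markers() without any
scanning, and hands the page to iommu_queue_free_pgtable() rather than
freeing it directly:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 512

struct pt { uint64_t pte[ENTRIES]; }; /* bit 0 stands in for "present" here */

/* Toy check only; see above for how the markers avoid this scan. */
static bool all_empty(const struct pt *t)
{
    for ( unsigned int i = 0; i < ENTRIES; ++i )
        if ( t->pte[i] & 1 )
            return false;
    return true;
}

int main(void)
{
    struct pt *parent = calloc(1, sizeof(*parent));
    struct pt *leaf = calloc(1, sizeof(*leaf));

    parent->pte[3] = (uintptr_t)leaf | 1; /* present, refers to the leaf table */
    leaf->pte[7] = (0x1234ULL << 12) | 1; /* the single remaining mapping */

    leaf->pte[7] = 0;                     /* unmap the last present entry ... */
    if ( all_empty(leaf) )
    {
        parent->pte[3] = 0;               /* ... clear the next level's entry ... */
        free(leaf);                       /* ... and free the now empty table */
        puts("empty leaf table freed; higher level entry now non-present");
    }

    free(parent);
    return 0;
}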

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Re-base over changes earlier in the series.
v4: Re-base over changes earlier in the series.
v3: Re-base over changes earlier in the series.
v2: New.

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -21,6 +21,7 @@
 
 #include "iommu.h"
 
+#define CONTIG_MASK IOMMU_PTE_CONTIG_MASK
 #include <asm/pt-contig-markers.h>
 
 /* Given pfn and page table level, return pde index */
@@ -35,16 +36,20 @@ static unsigned int pfn_to_pde_idx(unsig
 
 static union amd_iommu_pte clear_iommu_pte_present(unsigned long l1_mfn,
                                                    unsigned long dfn,
-                                                   unsigned int level)
+                                                   unsigned int level,
+                                                   bool *free)
 {
     union amd_iommu_pte *table, *pte, old;
+    unsigned int idx = pfn_to_pde_idx(dfn, level);
 
     table = map_domain_page(_mfn(l1_mfn));
-    pte = &table[pfn_to_pde_idx(dfn, level)];
+    pte = &table[idx];
     old = *pte;
 
     write_atomic(&pte->raw, 0);
 
+    *free = pt_update_contig_markers(&table->raw, idx, level, PTE_kind_null);
+
     unmap_domain_page(table);
 
     return old;
@@ -87,7 +92,11 @@ static union amd_iommu_pte set_iommu_pte
     if ( !old.pr || old.next_level ||
          old.mfn != next_mfn ||
          old.iw != iw || old.ir != ir )
+    {
         set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
+        pt_update_contig_markers(&table->raw, pfn_to_pde_idx(dfn, level),
+                                 level, PTE_kind_leaf);
+    }
     else
         old.pr = false; /* signal "no change" to the caller */
 
@@ -326,6 +335,9 @@ static int iommu_pde_from_dfn(struct dom
             smp_wmb();
             set_iommu_pde_present(pde, next_table_mfn, next_level, true,
                                   true);
+            pt_update_contig_markers(&next_table_vaddr->raw,
+                                     pfn_to_pde_idx(dfn, level),
+                                     level, PTE_kind_table);
 
             *flush_flags |= IOMMU_FLUSHF_modified;
         }
@@ -351,6 +363,9 @@ static int iommu_pde_from_dfn(struct dom
                 next_table_mfn = mfn_x(page_to_mfn(table));
                 set_iommu_pde_present(pde, next_table_mfn, next_level, true,
                                       true);
+                pt_update_contig_markers(&next_table_vaddr->raw,
+                                         pfn_to_pde_idx(dfn, level),
+                                         level, PTE_kind_table);
             }
             else /* should never reach here */
             {
@@ -487,8 +502,24 @@ int cf_check amd_iommu_unmap_page(
 
     if ( pt_mfn )
     {
+        bool free;
+
         /* Mark PTE as 'page not present'. */
-        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level);
+        old = clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level, &free);
+
+        while ( unlikely(free) && ++level < hd->arch.amd.paging_mode )
+        {
+            struct page_info *pg = mfn_to_page(_mfn(pt_mfn));
+
+            if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn,
+                                    flush_flags, false) )
+                BUG();
+            BUG_ON(!pt_mfn);
+
+            clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level, &free);
+            *flush_flags |= IOMMU_FLUSHF_all;
+            iommu_queue_free_pgtable(hd, pg);
+        }
     }
 
     spin_unlock(&hd->arch.mapping_lock);




* [PATCH v5 10/15] VT-d: free all-empty page tables
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (8 preceding siblings ...)
  2022-05-27 11:18 ` [PATCH v5 09/15] AMD/IOMMU: free all-empty " Jan Beulich
@ 2022-05-27 11:19 ` Jan Beulich
  2022-05-27 11:19 ` [PATCH v5 11/15] AMD/IOMMU: replace all-contiguous page tables by superpage mappings Jan Beulich
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

When a page table ends up with no present entries left, it can be
replaced by a non-present entry at the next higher level. The page table
itself can then be scheduled for freeing.

Note that while its output isn't used there yet,
pt_update_contig_markers() right away needs to be called in all places
where entries get updated, not just the one where entries get cleared.

Note further that while pt_update_contig_markers() updates perhaps
several PTEs within the table, since these are changes to "avail" bits
only, I do not think that cache flushing would be needed afterwards. Such
cache flushing (of entire pages, unless adding yet more logic to be more
selective) would be quite noticeable performance-wise (very prominent
during Dom0 boot).

Also note that cache sync-ing is likely more strict than necessary. This
is both to be on the safe side as well as to maintain the pattern of all
updates of (potentially) live tables being accompanied by a flush (if so
needed).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v4: Re-base over changes earlier in the series.
v3: Properly bound loop. Re-base over changes earlier in the series.
v2: New.
---
The hang during boot on my Latitude E6410 (see the respective code
comment) was pretty close after iommu_enable_translation(). No errors,
no watchdog would kick in, just sometimes the first few pixel lines of
the next log message's (XEN) prefix would have made it out to the screen
(and there's no serial there). It took a lot of experimenting until I
figured out the workaround (which I consider ugly, but halfway acceptable).
I've been trying hard to make sure the workaround wouldn't be masking a
real issue, yet I'm still wary of it possibly doing so ... My best guess
at this point is that on these old IOMMUs the ignored bits 52...61
aren't really ignored for present entries, but also aren't "reserved"
enough to trigger faults. This guess is from having tried to set other
bits in this range (unconditionally, and with the workaround here in
place), which yielded the same behavior.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -43,6 +43,9 @@
 #include "vtd.h"
 #include "../ats.h"
 
+#define CONTIG_MASK DMA_PTE_CONTIG_MASK
+#include <asm/pt-contig-markers.h>
+
 /* dom_io is used as a sentinel for quarantined devices */
 #define QUARANTINE_SKIP(d, pgd_maddr) ((d) == dom_io && !(pgd_maddr))
 #define DEVICE_DOMID(d, pdev) ((d) != dom_io ? (d)->domain_id \
@@ -405,6 +408,9 @@ static uint64_t addr_to_dma_page_maddr(s
 
             write_atomic(&pte->val, new_pte.val);
             iommu_sync_cache(pte, sizeof(struct dma_pte));
+            pt_update_contig_markers(&parent->val,
+                                     address_level_offset(addr, level),
+                                     level, PTE_kind_table);
         }
 
         if ( --level == target )
@@ -837,9 +843,31 @@ static int dma_pte_clear_one(struct doma
 
     old = *pte;
     dma_clear_pte(*pte);
+    iommu_sync_cache(pte, sizeof(*pte));
+
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(addr, level),
+                                     level, PTE_kind_null) &&
+            ++level < min_pt_levels )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
+                                          false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(addr, level)];
+        dma_clear_pte(*pte);
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
-    iommu_sync_cache(pte, sizeof(struct dma_pte));
 
     unmap_vtd_domain_page(page);
 
@@ -2185,8 +2213,21 @@ static int __must_check cf_check intel_i
     }
 
     *pte = new;
-
     iommu_sync_cache(pte, sizeof(struct dma_pte));
+
+    /*
+     * While the (ab)use of PTE_kind_table here allows to save some work in
+     * the function, the main motivation for it is that it avoids a so far
+     * unexplained hang during boot (while preparing Dom0) on a Westmere
+     * based laptop.
+     */
+    pt_update_contig_markers(&page->val,
+                             address_level_offset(dfn_to_daddr(dfn), level),
+                             level,
+                             (hd->platform_ops->page_sizes &
+                              (1UL << level_to_offset_bits(level + 1))
+                              ? PTE_kind_leaf : PTE_kind_table));
+
     spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 




* [PATCH v5 11/15] AMD/IOMMU: replace all-contiguous page tables by superpage mappings
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (9 preceding siblings ...)
  2022-05-27 11:19 ` [PATCH v5 10/15] VT-d: " Jan Beulich
@ 2022-05-27 11:19 ` Jan Beulich
  2022-05-27 11:19 ` [PATCH v5 12/15] VT-d: " Jan Beulich
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

When a page table ends up with all contiguous entries (including all
identical attributes), it can be replaced by a superpage entry at the
next higher level. The page table itself can then be scheduled for
freeing.
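
The main piece of arithmetic when re-coalescing is deriving the superpage
frame number from the MFN just mapped, by masking off the index bits of all
lower levels - see the next_mfn calculation in the hunk below. A standalone
sketch (for illustration only, not part of the patch):

#include <assert.h>
#include <stdio.h>

#define PTE_PER_TABLE_SHIFT 9

int main(void)
{
    unsigned long mfn = 0x123456; /* arbitrary example MFN */
    unsigned int level = 2;       /* coalescing 4k entries into a 2M mapping */
    unsigned long next_mfn = mfn & (~0UL << (PTE_PER_TABLE_SHIFT * (level - 1)));

    assert(next_mfn == 0x123400); /* aligned to 512 4k pages, i.e. 2M */
    printf("%#lx -> %#lx\n", mfn, next_mfn);

    return 0;
}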

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Unlike the freeing of all-empty page tables, this causes quite a bit of
back and forth for PV domains, due to their mapping/unmapping of pages
when they get converted to/from being page tables. It may therefore be
worth considering delaying re-coalescing a little, to avoid doing so
when the superpage would otherwise get split again pretty soon. But I
think this would better be the subject of a separate change anyway.

Of course this could also be helped by more "aware" kernel side
behavior: They could avoid immediately mapping freed page tables
writable again, in anticipation of re-using that same page for another
page table elsewhere.
---
v4: Re-base over changes earlier in the series.
v3: New.

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -81,7 +81,8 @@ static union amd_iommu_pte set_iommu_pte
                                                  unsigned long dfn,
                                                  unsigned long next_mfn,
                                                  unsigned int level,
-                                                 bool iw, bool ir)
+                                                 bool iw, bool ir,
+                                                 bool *contig)
 {
     union amd_iommu_pte *table, *pde, old;
 
@@ -94,11 +95,15 @@ static union amd_iommu_pte set_iommu_pte
          old.iw != iw || old.ir != ir )
     {
         set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
-        pt_update_contig_markers(&table->raw, pfn_to_pde_idx(dfn, level),
-                                 level, PTE_kind_leaf);
+        *contig = pt_update_contig_markers(&table->raw,
+                                           pfn_to_pde_idx(dfn, level),
+                                           level, PTE_kind_leaf);
     }
     else
+    {
         old.pr = false; /* signal "no change" to the caller */
+        *contig = false;
+    }
 
     unmap_domain_page(table);
 
@@ -409,6 +414,7 @@ int cf_check amd_iommu_map_page(
 {
     struct domain_iommu *hd = dom_iommu(d);
     unsigned int level = (IOMMUF_order(flags) / PTE_PER_TABLE_SHIFT) + 1;
+    bool contig;
     int rc;
     unsigned long pt_mfn = 0;
     union amd_iommu_pte old;
@@ -452,8 +458,26 @@ int cf_check amd_iommu_map_page(
 
     /* Install mapping */
     old = set_iommu_pte_present(pt_mfn, dfn_x(dfn), mfn_x(mfn), level,
-                                (flags & IOMMUF_writable),
-                                (flags & IOMMUF_readable));
+                                flags & IOMMUF_writable,
+                                flags & IOMMUF_readable, &contig);
+
+    while ( unlikely(contig) && ++level < hd->arch.amd.paging_mode )
+    {
+        struct page_info *pg = mfn_to_page(_mfn(pt_mfn));
+        unsigned long next_mfn;
+
+        if ( iommu_pde_from_dfn(d, dfn_x(dfn), level, &pt_mfn, flush_flags,
+                                false) )
+            BUG();
+        BUG_ON(!pt_mfn);
+
+        next_mfn = mfn_x(mfn) & (~0UL << (PTE_PER_TABLE_SHIFT * (level - 1)));
+        set_iommu_pte_present(pt_mfn, dfn_x(dfn), next_mfn, level,
+                              flags & IOMMUF_writable,
+                              flags & IOMMUF_readable, &contig);
+        *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
 




* [PATCH v5 12/15] VT-d: replace all-contiguous page tables by superpage mappings
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (10 preceding siblings ...)
  2022-05-27 11:19 ` [PATCH v5 11/15] AMD/IOMMU: replace all-contiguous page tables by superpage mappings Jan Beulich
@ 2022-05-27 11:19 ` Jan Beulich
  2022-06-02  9:35   ` Roger Pau Monné
  2022-05-27 11:20 ` [PATCH v5 13/15] IOMMU/x86: add perf counters for page table splitting / coalescing Jan Beulich
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:19 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

When a page table ends up with all contiguous entries (including all
identical attributes), it can be replaced by a superpage entry at the
next higher level. The page table itself can then be scheduled for
freeing.

The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
for whenever we (and obviously hardware) start supporting 512G mappings.
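
The trap, presumably, is the type of the old mask: being a plain int,
masking the address bits of a level 3 entry - a shift by
level_to_offset_bits(3) == 30, as would be needed when coalescing into a
512G mapping - would overflow it, whereas the unsigned long form stays well
defined. A standalone sketch (output assumes a 64-bit unsigned long):

#include <stdio.h>

#define LEVEL_STRIDE 9
#define PTE_NUM      (1 << LEVEL_STRIDE)
#define LEVEL_MASK   (PTE_NUM - 1UL) /* the new, unsigned long definition */

int main(void)
{
    unsigned int shift = 12 + 2 * LEVEL_STRIDE; /* offset bits of a level 3 entry */

    /* ((1 << LEVEL_STRIDE) - 1) << shift would overflow a 32-bit int;
     * the unsigned long variant is fine: */
    printf("%#lx\n", ~(LEVEL_MASK << shift)); /* 0xffffff803fffffff */

    return 0;
}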

Note that cache sync-ing is likely more strict than necessary. This is
both to be on the safe side as well as to maintain the pattern of all
updates of (potentially) live tables being accompanied by a flush (if so
needed).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Unlike the freeing of all-empty page tables, this causes quite a bit of
back and forth for PV domains, due to their mapping/unmapping of pages
when they get converted to/from being page tables. It may therefore be
worth considering delaying re-coalescing a little, to avoid doing so
when the superpage would otherwise get split again pretty soon. But I
think this would better be the subject of a separate change anyway.

Of course this could also be helped by more "aware" kernel side
behavior: They could avoid immediately mapping freed page tables
writable again, in anticipation of re-using that same page for another
page table elsewhere.
---
v4: Re-base over changes earlier in the series.
v3: New.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2219,14 +2219,35 @@ static int __must_check cf_check intel_i
      * While the (ab)use of PTE_kind_table here allows to save some work in
      * the function, the main motivation for it is that it avoids a so far
      * unexplained hang during boot (while preparing Dom0) on a Westmere
-     * based laptop.
+     * based laptop.  This also has the intended effect of terminating the
+     * loop when super pages aren't supported anymore at the next level.
      */
-    pt_update_contig_markers(&page->val,
-                             address_level_offset(dfn_to_daddr(dfn), level),
-                             level,
-                             (hd->platform_ops->page_sizes &
-                              (1UL << level_to_offset_bits(level + 1))
-                              ? PTE_kind_leaf : PTE_kind_table));
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(dfn_to_daddr(dfn), level),
+                                     level,
+                                     (hd->platform_ops->page_sizes &
+                                      (1UL << level_to_offset_bits(level + 1))
+                                       ? PTE_kind_leaf : PTE_kind_table)) )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        new.val &= ~(LEVEL_MASK << level_to_offset_bits(level));
+        dma_set_pte_superpage(new);
+
+        pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), ++level,
+                                          flush_flags, false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(dfn_to_daddr(dfn), level)];
+        *pte = new;
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+    }
 
     spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -232,7 +232,7 @@ struct context_entry {
 
 /* page table handling */
 #define LEVEL_STRIDE       (9)
-#define LEVEL_MASK         ((1 << LEVEL_STRIDE) - 1)
+#define LEVEL_MASK         (PTE_NUM - 1UL)
 #define PTE_NUM            (1 << LEVEL_STRIDE)
 #define level_to_agaw(val) ((val) - 2)
 #define agaw_to_level(val) ((val) + 2)




* [PATCH v5 13/15] IOMMU/x86: add perf counters for page table splitting / coalescing
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (11 preceding siblings ...)
  2022-05-27 11:19 ` [PATCH v5 12/15] VT-d: " Jan Beulich
@ 2022-05-27 11:20 ` Jan Beulich
  2022-05-27 11:20 ` [PATCH v5 14/15] VT-d: fold iommu_flush_iotlb{,_pages}() Jan Beulich
  2022-05-27 11:21 ` [PATCH v5 15/15] VT-d: fold dma_pte_clear_one() into its only caller Jan Beulich
  14 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:20 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné, Wei Liu

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v3: New.

--- a/xen/arch/x86/include/asm/perfc_defn.h
+++ b/xen/arch/x86/include/asm/perfc_defn.h
@@ -125,4 +125,7 @@ PERFCOUNTER(realmode_exits,      "vmexit
 
 PERFCOUNTER(pauseloop_exits, "vmexits from Pause-Loop Detection")
 
+PERFCOUNTER(iommu_pt_shatters,    "IOMMU page table shatters")
+PERFCOUNTER(iommu_pt_coalesces,   "IOMMU page table coalesces")
+
 /*#endif*/ /* __XEN_PERFC_DEFN_H__ */
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -345,6 +345,8 @@ static int iommu_pde_from_dfn(struct dom
                                      level, PTE_kind_table);
 
             *flush_flags |= IOMMU_FLUSHF_modified;
+
+            perfc_incr(iommu_pt_shatters);
         }
 
         /* Install lower level page table for non-present entries */
@@ -477,6 +479,7 @@ int cf_check amd_iommu_map_page(
                               flags & IOMMUF_readable, &contig);
         *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
         iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
     }
 
     spin_unlock(&hd->arch.mapping_lock);
@@ -543,6 +546,7 @@ int cf_check amd_iommu_unmap_page(
             clear_iommu_pte_present(pt_mfn, dfn_x(dfn), level, &free);
             *flush_flags |= IOMMU_FLUSHF_all;
             iommu_queue_free_pgtable(hd, pg);
+            perfc_incr(iommu_pt_coalesces);
         }
     }
 
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -404,6 +404,8 @@ static uint64_t addr_to_dma_page_maddr(s
 
                 if ( flush_flags )
                     *flush_flags |= IOMMU_FLUSHF_modified;
+
+                perfc_incr(iommu_pt_shatters);
             }
 
             write_atomic(&pte->val, new_pte.val);
@@ -865,6 +867,7 @@ static int dma_pte_clear_one(struct doma
 
         *flush_flags |= IOMMU_FLUSHF_all;
         iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
     }
 
     spin_unlock(&hd->arch.mapping_lock);
@@ -2247,6 +2250,7 @@ static int __must_check cf_check intel_i
 
         *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
         iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
     }
 
     spin_unlock(&hd->arch.mapping_lock);




* [PATCH v5 14/15] VT-d: fold iommu_flush_iotlb{,_pages}()
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (12 preceding siblings ...)
  2022-05-27 11:20 ` [PATCH v5 13/15] IOMMU/x86: add perf counters for page table splitting / coalescing Jan Beulich
@ 2022-05-27 11:20 ` Jan Beulich
  2022-05-27 11:21 ` [PATCH v5 15/15] VT-d: fold dma_pte_clear_one() into its only caller Jan Beulich
  14 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:20 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

With iommu_flush_iotlb_all() gone, iommu_flush_iotlb_pages() is merely a
wrapper around the not otherwise called iommu_flush_iotlb(). Fold both
functions.

No functional change intended.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v4: New.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -728,9 +728,9 @@ static int __must_check iommu_flush_all(
     return rc;
 }
 
-static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
-                                          bool_t dma_old_pte_present,
-                                          unsigned long page_count)
+static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
+                                                   unsigned long page_count,
+                                                   unsigned int flush_flags)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -739,6 +739,17 @@ static int __must_check iommu_flush_iotl
     int iommu_domid;
     int ret = 0;
 
+    if ( flush_flags & IOMMU_FLUSHF_all )
+    {
+        dfn = INVALID_DFN;
+        page_count = 0;
+    }
+    else
+    {
+        ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
+        ASSERT(flush_flags);
+    }
+
     /*
      * No need pcideves_lock here because we have flush
      * when assign/deassign device
@@ -765,7 +776,7 @@ static int __must_check iommu_flush_iotl
             rc = iommu_flush_iotlb_psi(iommu, iommu_domid,
                                        dfn_to_daddr(dfn),
                                        get_order_from_pages(page_count),
-                                       !dma_old_pte_present,
+                                       !(flush_flags & IOMMU_FLUSHF_modified),
                                        flush_dev_iotlb);
 
         if ( rc > 0 )
@@ -777,25 +788,6 @@ static int __must_check iommu_flush_iotl
     return ret;
 }
 
-static int __must_check cf_check iommu_flush_iotlb_pages(
-    struct domain *d, dfn_t dfn, unsigned long page_count,
-    unsigned int flush_flags)
-{
-    if ( flush_flags & IOMMU_FLUSHF_all )
-    {
-        dfn = INVALID_DFN;
-        page_count = 0;
-    }
-    else
-    {
-        ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
-        ASSERT(flush_flags);
-    }
-
-    return iommu_flush_iotlb(d, dfn, flush_flags & IOMMU_FLUSHF_modified,
-                             page_count);
-}
-
 static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
 {
     if ( level > 1 )
@@ -3266,7 +3258,7 @@ static const struct iommu_ops __initcons
     .suspend = vtd_suspend,
     .resume = vtd_resume,
     .crash_shutdown = vtd_crash_shutdown,
-    .iotlb_flush = iommu_flush_iotlb_pages,
+    .iotlb_flush = iommu_flush_iotlb,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_page_tables = vtd_dump_page_tables,
 };




* [PATCH v5 15/15] VT-d: fold dma_pte_clear_one() into its only caller
  2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
                   ` (13 preceding siblings ...)
  2022-05-27 11:20 ` [PATCH v5 14/15] VT-d: fold iommu_flush_iotlb{,_pages}() Jan Beulich
@ 2022-05-27 11:21 ` Jan Beulich
  14 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-05-27 11:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Roger Pau Monné

This way intel_iommu_unmap_page() ends up quite a bit more similar to
intel_iommu_map_page().

No functional change intended.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v5: Re-base of changes earlier in the series.
v4: New.

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -806,75 +806,6 @@ static void queue_free_pt(struct domain_
     iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
 }
 
-/* clear one page's page table */
-static int dma_pte_clear_one(struct domain *domain, daddr_t addr,
-                             unsigned int order,
-                             unsigned int *flush_flags)
-{
-    struct domain_iommu *hd = dom_iommu(domain);
-    struct dma_pte *page = NULL, *pte = NULL, old;
-    u64 pg_maddr;
-    unsigned int level = (order / LEVEL_STRIDE) + 1;
-
-    spin_lock(&hd->arch.mapping_lock);
-    /* get target level pte */
-    pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags, false);
-    if ( pg_maddr < PAGE_SIZE )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        return pg_maddr ? -ENOMEM : 0;
-    }
-
-    page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
-    pte = &page[address_level_offset(addr, level)];
-
-    if ( !dma_pte_present(*pte) )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        unmap_vtd_domain_page(page);
-        return 0;
-    }
-
-    old = *pte;
-    dma_clear_pte(*pte);
-    iommu_sync_cache(pte, sizeof(*pte));
-
-    while ( pt_update_contig_markers(&page->val,
-                                     address_level_offset(addr, level),
-                                     level, PTE_kind_null) &&
-            ++level < min_pt_levels )
-    {
-        struct page_info *pg = maddr_to_page(pg_maddr);
-
-        unmap_vtd_domain_page(page);
-
-        pg_maddr = addr_to_dma_page_maddr(domain, addr, level, flush_flags,
-                                          false);
-        BUG_ON(pg_maddr < PAGE_SIZE);
-
-        page = map_vtd_domain_page(pg_maddr);
-        pte = &page[address_level_offset(addr, level)];
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(*pte));
-
-        *flush_flags |= IOMMU_FLUSHF_all;
-        iommu_queue_free_pgtable(hd, pg);
-        perfc_incr(iommu_pt_coalesces);
-    }
-
-    spin_unlock(&hd->arch.mapping_lock);
-
-    unmap_vtd_domain_page(page);
-
-    *flush_flags |= IOMMU_FLUSHF_modified;
-
-    if ( order && !dma_pte_superpage(old) )
-        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
-                      order / LEVEL_STRIDE);
-
-    return 0;
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -2264,11 +2195,17 @@ static int __must_check cf_check intel_i
 static int __must_check cf_check intel_iommu_unmap_page(
     struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    daddr_t addr = dfn_to_daddr(dfn);
+    struct dma_pte *page = NULL, *pte = NULL, old;
+    uint64_t pg_maddr;
+    unsigned int level = (order / LEVEL_STRIDE) + 1;
+
     /*
      * While really we could unmap at any granularity, for now we assume unmaps
      * are issued by common code only at the same granularity as maps.
      */
-    ASSERT((dom_iommu(d)->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
+    ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
 
     /* Do nothing if VT-d shares EPT page table */
     if ( iommu_use_hap_pt(d) )
@@ -2278,7 +2215,62 @@ static int __must_check cf_check intel_i
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    return dma_pte_clear_one(d, dfn_to_daddr(dfn), order, flush_flags);
+    spin_lock(&hd->arch.mapping_lock);
+    /* get target level pte */
+    pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+    if ( pg_maddr < PAGE_SIZE )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return pg_maddr ? -ENOMEM : 0;
+    }
+
+    page = map_vtd_domain_page(pg_maddr);
+    pte = &page[address_level_offset(addr, level)];
+
+    if ( !dma_pte_present(*pte) )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        unmap_vtd_domain_page(page);
+        return 0;
+    }
+
+    old = *pte;
+    dma_clear_pte(*pte);
+    iommu_sync_cache(pte, sizeof(*pte));
+
+    while ( pt_update_contig_markers(&page->val,
+                                     address_level_offset(addr, level),
+                                     level, PTE_kind_null) &&
+            ++level < min_pt_levels )
+    {
+        struct page_info *pg = maddr_to_page(pg_maddr);
+
+        unmap_vtd_domain_page(page);
+
+        pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+        BUG_ON(pg_maddr < PAGE_SIZE);
+
+        page = map_vtd_domain_page(pg_maddr);
+        pte = &page[address_level_offset(addr, level)];
+        dma_clear_pte(*pte);
+        iommu_sync_cache(pte, sizeof(*pte));
+
+        *flush_flags |= IOMMU_FLUSHF_all;
+        iommu_queue_free_pgtable(hd, pg);
+        perfc_incr(iommu_pt_coalesces);
+    }
+
+    spin_unlock(&hd->arch.mapping_lock);
+
+    unmap_vtd_domain_page(page);
+
+    *flush_flags |= IOMMU_FLUSHF_modified;
+
+    if ( order && !dma_pte_superpage(old) )
+        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+                      order / LEVEL_STRIDE);
+
+    return 0;
 }
 
 static int cf_check intel_iommu_lookup_page(




* Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
  2022-05-27 11:12 ` [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0 Jan Beulich
@ 2022-05-31 14:40   ` Roger Pau Monné
  2022-05-31 15:40     ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-05-31 14:40 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
> While already the case for PVH, there's no reason to treat PV
> differently here, though of course the addresses get taken from another
> source in this case. Except that, to match CPU side mappings, by default
> we permit r/o ones. This then also means we now deal consistently with
> IO-APICs whose MMIO is or is not covered by E820 reserved regions.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just one comment below.

> ---
> v5: Extend to also cover e.g. HPET, which in turn means explicitly
>     excluding PCI MMCFG ranges.
> [integrated] v1: Integrate into series.
> [standalone] v2: Keep IOMMU mappings in sync with CPU ones.
> 
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -13,6 +13,7 @@
>   */
>  
>  #include <xen/sched.h>
> +#include <xen/iocap.h>
>  #include <xen/iommu.h>
>  #include <xen/paging.h>
>  #include <xen/guest_access.h>
> @@ -275,12 +276,12 @@ void iommu_identity_map_teardown(struct
>      }
>  }
>  
> -static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
> -                                         unsigned long pfn,
> -                                         unsigned long max_pfn)
> +static unsigned int __hwdom_init hwdom_iommu_map(const struct domain *d,
> +                                                 unsigned long pfn,
> +                                                 unsigned long max_pfn)
>  {
>      mfn_t mfn = _mfn(pfn);
> -    unsigned int i, type;
> +    unsigned int i, type, perms = IOMMUF_readable | IOMMUF_writable;
>  
>      /*
>       * Set up 1:1 mapping for dom0. Default to include only conventional RAM
> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>       * that fall in unusable ranges for PV Dom0.
>       */
>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
> -        return false;
> +        return 0;
>  
>      switch ( type = page_get_ram_type(mfn) )
>      {
>      case RAM_TYPE_UNUSABLE:
> -        return false;
> +        return 0;
>  
>      case RAM_TYPE_CONVENTIONAL:
>          if ( iommu_hwdom_strict )
> -            return false;
> +            return 0;
>          break;
>  
>      default:
>          if ( type & RAM_TYPE_RESERVED )
>          {
>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
> -                return false;
> +                perms = 0;
>          }
> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
> -            return false;
> +        else if ( is_hvm_domain(d) )
> +            return 0;
> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
> +            perms = 0;
>      }
>  
>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
> -        return false;
> +        return 0;
>      /* ... or the IO-APIC */
> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> -            return false;
> +    if ( has_vioapic(d) )
> +    {
> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> +                return 0;
> +    }
> +    else if ( is_pv_domain(d) )
> +    {
> +        /*
> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
> +         * ones there (also for e.g. HPET in certain cases), so it should also
> +         * have such established for IOMMUs.
> +         */
> +        if ( iomem_access_permitted(d, pfn, pfn) &&
> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
> +            perms = IOMMUF_readable;
> +    }
>      /*
>       * ... or the PCIe MCFG regions.
>       * TODO: runtime added MMCFG regions are not checked to make sure they
>       * don't overlap with already mapped regions, thus preventing trapping.
>       */
>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
> -        return false;
> +        return 0;
> +    else if ( is_pv_domain(d) )
> +    {
> +        /*
> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
> +         * These shouldn't be accessed via DMA by devices.

Could you expand the comment a bit to explicitly mention the reason
why MMCFG regions shouldn't be accessible from device DMA operations?

Thanks, Roger.



* Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
  2022-05-31 14:40   ` Roger Pau Monné
@ 2022-05-31 15:40     ` Jan Beulich
  2022-05-31 16:15       ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-05-31 15:40 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 31.05.2022 16:40, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
>> While already the case for PVH, there's no reason to treat PV
>> differently here, though of course the addresses get taken from another
>> source in this case. Except that, to match CPU side mappings, by default
>> we permit r/o ones. This then also means we now deal consistently with
>> IO-APICs whose MMIO is or is not covered by E820 reserved regions.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>>       * that fall in unusable ranges for PV Dom0.
>>       */
>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>> -        return false;
>> +        return 0;
>>  
>>      switch ( type = page_get_ram_type(mfn) )
>>      {
>>      case RAM_TYPE_UNUSABLE:
>> -        return false;
>> +        return 0;
>>  
>>      case RAM_TYPE_CONVENTIONAL:
>>          if ( iommu_hwdom_strict )
>> -            return false;
>> +            return 0;
>>          break;
>>  
>>      default:
>>          if ( type & RAM_TYPE_RESERVED )
>>          {
>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>> -                return false;
>> +                perms = 0;
>>          }
>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>> -            return false;
>> +        else if ( is_hvm_domain(d) )
>> +            return 0;
>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>> +            perms = 0;
>>      }
>>  
>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>> -        return false;
>> +        return 0;
>>      /* ... or the IO-APIC */
>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>> -            return false;
>> +    if ( has_vioapic(d) )
>> +    {
>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>> +                return 0;
>> +    }
>> +    else if ( is_pv_domain(d) )
>> +    {
>> +        /*
>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>> +         * ones there (also for e.g. HPET in certain cases), so it should also
>> +         * have such established for IOMMUs.
>> +         */
>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
>> +            perms = IOMMUF_readable;
>> +    }
>>      /*
>>       * ... or the PCIe MCFG regions.

With this comment (which I leave alone) ...

>>       * TODO: runtime added MMCFG regions are not checked to make sure they
>>       * don't overlap with already mapped regions, thus preventing trapping.
>>       */
>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
>> -        return false;
>> +        return 0;
>> +    else if ( is_pv_domain(d) )
>> +    {
>> +        /*
>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
>> +         * These shouldn't be accessed via DMA by devices.
> 
> Could you expand the comment a bit to explicitly mention the reason
> why MMCFG regions shouldn't be accessible from device DMA operations?

... it's hard to tell what I should write here. I'd expect extended
reasoning to go there (if anywhere). I'd be okay adjusting the earlier
comment, if only I knew what to write. "We don't want them to be
accessed that way" seems a little blunt. I could say "Devices have
other means to access PCI config space", but this not being said there
I took as being implied. Or else what was the reason to exclude these
for PVH Dom0?

Jan




* Re: [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches
  2022-05-27 11:12 ` [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches Jan Beulich
@ 2022-05-31 16:01   ` Roger Pau Monné
  2022-06-01  7:30     ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-05-31 16:01 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant, Wei Liu

On Fri, May 27, 2022 at 01:12:48PM +0200, Jan Beulich wrote:
> For large page mappings to be easily usable (i.e. in particular without
> un-shattering of smaller page mappings) and for mapping operations to
> then also be more efficient, pass batches of Dom0 memory to iommu_map().
> In dom0_construct_pv() and its helpers (covering strict mode) this
> additionally requires establishing the type of those pages (albeit with
> zero type references).
> 
> The earlier establishing of PGT_writable_page | PGT_validated requires
> the existing places where this gets done (through get_page_and_type())
> to be updated: For pages which actually have a mapping, the type
> refcount needs to be 1.
> 
> There is actually a related bug that gets fixed here as a side effect:
> Typically the last L1 table would get marked as such only after
> get_page_and_type(..., PGT_writable_page). While this is fine as far as
> refcounting goes, the page did remain mapped in the IOMMU in this case
> (when "iommu=dom0-strict").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Subsequently p2m_add_identity_entry() may want to also gain an order
> parameter, for arch_iommu_hwdom_init() to use. While this only affects
> non-RAM regions, systems typically have 2-16Mb of reserved space
> immediately below 4Gb, which hence could be mapped more efficiently.
> 
> Eventually we may want to overhaul this logic to use a rangeset based
> approach instead, punching holes into originally uniformly large-page-
> mapped regions. Doing so right here would first and foremost be yet more
> of a change.
> 
> The installing of zero-ref writable types has in fact shown (observed
> while putting together the change) that despite the intention by the
> XSA-288 changes (affecting DomU-s only) for Dom0 a number of
> sufficiently ordinary pages (at the very least initrd and P2M ones as
> well as pages that are part of the initial allocation but not part of
> the initial mapping) still have been starting out as PGT_none, meaning
> that they would have gained IOMMU mappings only the first time these
> pages would get mapped writably. Consequently an open question is
> whether iommu_memory_setup() should set the pages to PGT_writable_page
> independent of need_iommu_pt_sync().

Hm, I see, non-strict PV dom0s won't get the pages set to
PGT_writable_page even when accessible by devices, by virtue of such a
domain having all RAM mapped in the IOMMU page-tables.

I guess it does make sense to also have the pages set as
PGT_writable_page by default in that case, as the pages _are_
writable by the IOMMU.  Do pages added during runtime (ie: ballooned
in) also get PGT_writable_page set?

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -363,8 +363,8 @@ static unsigned int __hwdom_init hwdom_i
>  
>  void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>  {
> -    unsigned long i, top, max_pfn;
> -    unsigned int flush_flags = 0;
> +    unsigned long i, top, max_pfn, start, count;
> +    unsigned int flush_flags = 0, start_perms = 0;
>  
>      BUG_ON(!is_hardware_domain(d));
>  
> @@ -395,9 +395,9 @@ void __hwdom_init arch_iommu_hwdom_init(
>       * First Mb will get mapped in one go by pvh_populate_p2m(). Avoid
>       * setting up potentially conflicting mappings here.
>       */
> -    i = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
> +    start = paging_mode_translate(d) ? PFN_DOWN(MB(1)) : 0;
>  
> -    for ( ; i < top; i++ )
> +    for ( i = start, count = 0; i < top; )
>      {
>          unsigned long pfn = pdx_to_pfn(i);
>          unsigned int perms = hwdom_iommu_map(d, pfn, max_pfn);
> @@ -406,20 +406,41 @@ void __hwdom_init arch_iommu_hwdom_init(
>          if ( !perms )
>              rc = 0;
>          else if ( paging_mode_translate(d) )
> +        {
>              rc = p2m_add_identity_entry(d, pfn,
>                                          perms & IOMMUF_writable ? p2m_access_rw
>                                                                  : p2m_access_r,
>                                          0);
> +            if ( rc )
> +                printk(XENLOG_WARNING
> +                       "%pd: identity mapping of %lx failed: %d\n",
> +                       d, pfn, rc);
> +        }
> +        else if ( pfn != start + count || perms != start_perms )
> +        {
> +        commit:
> +            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
> +                           &flush_flags);
> +            if ( rc )
> +                printk(XENLOG_WARNING
> +                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
> +                       d, pfn, pfn + count, rc);
> +            SWAP(start, pfn);
> +            start_perms = perms;
> +            count = 1;
> +        }
>          else
> -            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
> -                           perms, &flush_flags);
> +        {
> +            ++count;
> +            rc = 0;
> +        }
>  
> -        if ( rc )
> -            printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",
> -                   d, !paging_mode_translate(d) ? "IOMMU " : "", pfn, rc);
>  
> -        if (!(i & 0xfffff))
> +        if ( !(++i & 0xfffff) )
>              process_pending_softirqs();
> +
> +        if ( i == top && count )

Nit: do you really need to check for count != 0? AFAICT this is only
possible in the first iteration.

Thanks, Roger.



* Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
  2022-05-31 15:40     ` Jan Beulich
@ 2022-05-31 16:15       ` Roger Pau Monné
  2022-06-01  7:10         ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-05-31 16:15 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
> On 31.05.2022 16:40, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
> >> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
> >>       * that fall in unusable ranges for PV Dom0.
> >>       */
> >>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
> >> -        return false;
> >> +        return 0;
> >>  
> >>      switch ( type = page_get_ram_type(mfn) )
> >>      {
> >>      case RAM_TYPE_UNUSABLE:
> >> -        return false;
> >> +        return 0;
> >>  
> >>      case RAM_TYPE_CONVENTIONAL:
> >>          if ( iommu_hwdom_strict )
> >> -            return false;
> >> +            return 0;
> >>          break;
> >>  
> >>      default:
> >>          if ( type & RAM_TYPE_RESERVED )
> >>          {
> >>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
> >> -                return false;
> >> +                perms = 0;
> >>          }
> >> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
> >> -            return false;
> >> +        else if ( is_hvm_domain(d) )
> >> +            return 0;
> >> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
> >> +            perms = 0;
> >>      }
> >>  
> >>      /* Check that it doesn't overlap with the Interrupt Address Range. */
> >>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
> >> -        return false;
> >> +        return 0;
> >>      /* ... or the IO-APIC */
> >> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
> >> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> >> -            return false;
> >> +    if ( has_vioapic(d) )
> >> +    {
> >> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
> >> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> >> +                return 0;
> >> +    }
> >> +    else if ( is_pv_domain(d) )
> >> +    {
> >> +        /*
> >> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
> >> +         * ones there (also for e.g. HPET in certain cases), so it should also
> >> +         * have such established for IOMMUs.
> >> +         */
> >> +        if ( iomem_access_permitted(d, pfn, pfn) &&
> >> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
> >> +            perms = IOMMUF_readable;
> >> +    }
> >>      /*
> >>       * ... or the PCIe MCFG regions.
> 
> With this comment (which I leave alone) ...
> 
> >>       * TODO: runtime added MMCFG regions are not checked to make sure they
> >>       * don't overlap with already mapped regions, thus preventing trapping.
> >>       */
> >>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
> >> -        return false;
> >> +        return 0;
> >> +    else if ( is_pv_domain(d) )
> >> +    {
> >> +        /*
> >> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
> >> +         * These shouldn't be accessed via DMA by devices.
> > 
> > Could you expand the comment a bit to explicitly mention the reason
> > why MMCFG regions shouldn't be accessible from device DMA operations?
> 
> ... it's hard to tell what I should write here. I'd expect extended
> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
> comment, if only I knew what to write. "We don't want them to be
> accessed that way" seems a little blunt. I could say "Devices have
> other means to access PCI config space", but this not being said there
> I took as being implied.

But we could likely say the same about IO-APIC or HPET MMIO regions.
I don't think we expect them to be accessed by devices, yet we provide
them for coherency with CPU side mappings in the PV case.

> Or else what was the reason to exclude these
> for PVH Dom0?

The reason for PVH is because the config space is (partially) emulated
for the hardware domain, so we don't allow untrapped access by the CPU
either.

Thanks, Roger.



* Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
  2022-05-27 11:13 ` [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables Jan Beulich
@ 2022-05-31 16:25   ` Roger Pau Monné
  2022-06-01  7:32     ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-05-31 16:25 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
> For vendor specific code to support superpages we need to be able to
> deal with a superpage mapping replacing an intermediate page table (or
> hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart is
> needed to free individual page tables while a domain is still alive.
> Since the freeing needs to be deferred until after a suitable IOTLB
> flush was performed, released page tables get queued for processing by a
> tasklet.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I was considering whether to use a softirq-tasklet instead. This would
> have the benefit of avoiding extra scheduling operations, but come with
> the risk of the freeing happening prematurely because of a
> process_pending_softirqs() somewhere.
> ---
> v5: Fix CPU_UP_PREPARE for BIGMEM. Schedule tasklet in CPU_DOWN_FAILED
>     when list is not empty. Skip all processing in CPU_DEAD when list is
>     empty.
> v4: Change type of iommu_queue_free_pgtable()'s 1st parameter. Re-base.
> v3: Call process_pending_softirqs() from free_queued_pgtables().
> 
> --- a/xen/arch/x86/include/asm/iommu.h
> +++ b/xen/arch/x86/include/asm/iommu.h
> @@ -147,6 +147,7 @@ void iommu_free_domid(domid_t domid, uns
>  int __must_check iommu_free_pgtables(struct domain *d);
>  struct domain_iommu;
>  struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd);
> +void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
>  
>  #endif /* !__ARCH_X86_IOMMU_H__ */
>  /*
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -12,6 +12,7 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <xen/cpu.h>
>  #include <xen/sched.h>
>  #include <xen/iocap.h>
>  #include <xen/iommu.h>
> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
>      return pg;
>  }
>  
> +/*
> + * Intermediate page tables which get replaced by large pages may only be
> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> + * requiring any locking.)
> + */
> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> +
> +static void free_queued_pgtables(void *arg)
> +{
> +    struct page_list_head *list = arg;
> +    struct page_info *pg;
> +    unsigned int done = 0;
> +
> +    while ( (pg = page_list_remove_head(list)) )
> +    {
> +        free_domheap_page(pg);
> +
> +        /* Granularity of checking somewhat arbitrary. */
> +        if ( !(++done & 0x1ff) )
> +             process_pending_softirqs();

Hm, I'm wondering whether we really want to process pending softirqs
here.

Such processing will prevent the watchdog from triggering, which we
likely want in production builds.  OTOH in debug builds we should make
sure that free_queued_pgtables() doesn't take longer than a watchdog
window, or else it's likely to cause issues to guests scheduled on
this same pCPU (and calling process_pending_softirqs() will just mask
it).
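
A minimal sketch of the kind of debug check meant here (the 1s bound is an
arbitrary placeholder assumed to stay well below the watchdog window, not a
proposal for the actual value):

static void free_queued_pgtables(void *arg)
{
    struct page_list_head *list = arg;
    struct page_info *pg;
    unsigned int done = 0;
    s_time_t start = NOW();

    while ( (pg = page_list_remove_head(list)) )
    {
        free_domheap_page(pg);

        /* Granularity of checking somewhat arbitrary. */
        if ( !(++done & 0x1ff) )
             process_pending_softirqs();
    }

    /* Debug builds only: flag runs getting anywhere near the watchdog window. */
    ASSERT(NOW() - start < SECONDS(1));
}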

Thanks, Roger.



* Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
  2022-05-31 16:15       ` Roger Pau Monné
@ 2022-06-01  7:10         ` Jan Beulich
  2022-06-01  8:17           ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-06-01  7:10 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 31.05.2022 18:15, Roger Pau Monné wrote:
> On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
>> On 31.05.2022 16:40, Roger Pau Monné wrote:
>>> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
>>>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>>>>       * that fall in unusable ranges for PV Dom0.
>>>>       */
>>>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>>>> -        return false;
>>>> +        return 0;
>>>>  
>>>>      switch ( type = page_get_ram_type(mfn) )
>>>>      {
>>>>      case RAM_TYPE_UNUSABLE:
>>>> -        return false;
>>>> +        return 0;
>>>>  
>>>>      case RAM_TYPE_CONVENTIONAL:
>>>>          if ( iommu_hwdom_strict )
>>>> -            return false;
>>>> +            return 0;
>>>>          break;
>>>>  
>>>>      default:
>>>>          if ( type & RAM_TYPE_RESERVED )
>>>>          {
>>>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>>>> -                return false;
>>>> +                perms = 0;
>>>>          }
>>>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>>>> -            return false;
>>>> +        else if ( is_hvm_domain(d) )
>>>> +            return 0;
>>>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>>>> +            perms = 0;
>>>>      }
>>>>  
>>>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>>>> -        return false;
>>>> +        return 0;
>>>>      /* ... or the IO-APIC */
>>>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>>>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>> -            return false;
>>>> +    if ( has_vioapic(d) )
>>>> +    {
>>>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>>>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>> +                return 0;
>>>> +    }
>>>> +    else if ( is_pv_domain(d) )
>>>> +    {
>>>> +        /*
>>>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>>>> +         * ones there (also for e.g. HPET in certain cases), so it should also
>>>> +         * have such established for IOMMUs.
>>>> +         */
>>>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
>>>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
>>>> +            perms = IOMMUF_readable;
>>>> +    }
>>>>      /*
>>>>       * ... or the PCIe MCFG regions.
>>
>> With this comment (which I leave alone) ...
>>
>>>>       * TODO: runtime added MMCFG regions are not checked to make sure they
>>>>       * don't overlap with already mapped regions, thus preventing trapping.
>>>>       */
>>>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
>>>> -        return false;
>>>> +        return 0;
>>>> +    else if ( is_pv_domain(d) )
>>>> +    {
>>>> +        /*
>>>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
>>>> +         * These shouldn't be accessed via DMA by devices.
>>>
>>> Could you expand the comment a bit to explicitly mention the reason
>>> why MMCFG regions shouldn't be accessible from device DMA operations?
>>
>> ... it's hard to tell what I should write here. I'd expect extended
>> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
>> comment, if only I knew what to write. "We don't want them to be
>> accessed that way" seems a little blunt. I could say "Devices have
>> other means to access PCI config space", but, this not being said there,
>> I took it as being implied.
> 
> But we could likely say the same about IO-APIC or HPET MMIO regions.
> I don't think we expect them to be accessed by devices, yet we provide
> them for coherency with CPU side mappings in the PV case.

As to "say the same" - yes for the first part of my earlier reply, but
no for the latter part.

>> Or else what was the reason to exclude these
>> for PVH Dom0?
> 
> The reason for PVH is because the config space is (partially) emulated
> for the hardware domain, so we don't allow untrapped access by the CPU
> either.

Hmm, right - there's read emulation there as well, while for PV we
only intercept writes.

So overall should we perhaps permit r/o access to MMCFG for PV? Of
course that would only end up consistent once we adjust mappings
dynamically when MMCFG ranges are put in use (IOW if we can't verify
an MMCFG range is suitably reserved, we'd not find it in
mmio_ro_ranges just yet, and hence we still wouldn't have an IOMMU
side mapping even if CPU side mappings are permitted). But for the
patch here it would simply mean dropping some of the code I did add
for v5.

Otherwise, i.e. if the code is to remain as is, I'm afraid I still
wouldn't see what to put usefully in the comment.

Jan




* Re: [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches
  2022-05-31 16:01   ` Roger Pau Monné
@ 2022-06-01  7:30     ` Jan Beulich
  2022-06-01  9:08       ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-06-01  7:30 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant, Wei Liu

On 31.05.2022 18:01, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:12:48PM +0200, Jan Beulich wrote:
>> For large page mappings to be easily usable (i.e. in particular without
>> un-shattering of smaller page mappings) and for mapping operations to
>> then also be more efficient, pass batches of Dom0 memory to iommu_map().
>> In dom0_construct_pv() and its helpers (covering strict mode) this
>> additionally requires establishing the type of those pages (albeit with
>> zero type references).
>>
>> The earlier establishing of PGT_writable_page | PGT_validated requires
>> the existing places where this gets done (through get_page_and_type())
>> to be updated: For pages which actually have a mapping, the type
>> refcount needs to be 1.
>>
>> There is actually a related bug that gets fixed here as a side effect:
>> Typically the last L1 table would get marked as such only after
>> get_page_and_type(..., PGT_writable_page). While this is fine as far as
>> refcounting goes, the page did remain mapped in the IOMMU in this case
>> (when "iommu=dom0-strict").
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Subsequently p2m_add_identity_entry() may want to also gain an order
>> parameter, for arch_iommu_hwdom_init() to use. While this only affects
>> non-RAM regions, systems typically have 2-16Mb of reserved space
>> immediately below 4Gb, which hence could be mapped more efficiently.
>>
>> Eventually we may want to overhaul this logic to use a rangeset based
>> approach instead, punching holes into originally uniformly large-page-
>> mapped regions. Doing so right here would first and foremost be yet more
>> of a change.
>>
>> The installing of zero-ref writable types has in fact shown (observed
>> while putting together the change) that despite the intention by the
>> XSA-288 changes (affecting DomU-s only) for Dom0 a number of
>> sufficiently ordinary pages (at the very least initrd and P2M ones as
>> well as pages that are part of the initial allocation but not part of
>> the initial mapping) still have been starting out as PGT_none, meaning
>> that they would have gained IOMMU mappings only the first time these
>> pages would get mapped writably. Consequently an open question is
>> whether iommu_memory_setup() should set the pages to PGT_writable_page
>> independent of need_iommu_pt_sync().
> 
> Hm, I see, non-strict PV dom0s won't get the pages set to
> PGT_writable_page even when accessible by devices, by virtue of such a
> domain having all RAM mapped in the IOMMU page-tables.
> 
> I guess it does make sense to also have the pages set as
> PGT_writable_page by default in that case, as the pages _are_
> writable by the IOMMU.  Do pages added during runtime (ie: ballooned
> in) also get PGT_writable_page set?

Yes, by virtue of going through guest_physmap_add_page().

>> @@ -406,20 +406,41 @@ void __hwdom_init arch_iommu_hwdom_init(
>>          if ( !perms )
>>              rc = 0;
>>          else if ( paging_mode_translate(d) )
>> +        {
>>              rc = p2m_add_identity_entry(d, pfn,
>>                                          perms & IOMMUF_writable ? p2m_access_rw
>>                                                                  : p2m_access_r,
>>                                          0);
>> +            if ( rc )
>> +                printk(XENLOG_WARNING
>> +                       "%pd: identity mapping of %lx failed: %d\n",
>> +                       d, pfn, rc);
>> +        }
>> +        else if ( pfn != start + count || perms != start_perms )
>> +        {
>> +        commit:
>> +            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
>> +                           &flush_flags);
>> +            if ( rc )
>> +                printk(XENLOG_WARNING
>> +                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
>> +                       d, pfn, pfn + count, rc);
>> +            SWAP(start, pfn);
>> +            start_perms = perms;
>> +            count = 1;
>> +        }
>>          else
>> -            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
>> -                           perms, &flush_flags);
>> +        {
>> +            ++count;
>> +            rc = 0;
>> +        }
>>  
>> -        if ( rc )
>> -            printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",
>> -                   d, !paging_mode_translate(d) ? "IOMMU " : "", pfn, rc);
>>  
>> -        if (!(i & 0xfffff))
>> +        if ( !(++i & 0xfffff) )
>>              process_pending_softirqs();
>> +
>> +        if ( i == top && count )
> 
> Nit: do you really need to check for count != 0? AFAICT this is only
> possible in the first iteration.

Yes, to avoid taking the PV path for PVH on the last iteration (count
remains zero for PVH throughout the entire loop).
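
To illustrate the shape of the loop in isolation, here is a stripped-down,
self-contained sketch of the coalescing pattern (plain C with stand-in
helpers, not the hypervisor code; the final count check plays the same role
as above):

#include <stdio.h>

/* Stand-in for hwdom_iommu_map(): returns 0 for pfns which get no mapping. */
static unsigned int perms_of(unsigned long pfn)
{
    return (pfn & 3) ? 1 : 0;
}

/* Stand-in for iommu_map() on a contiguous batch. */
static void map(unsigned long start, unsigned long count, unsigned int perms)
{
    printf("map [%#lx,%#lx) perms %u\n", start, start + count, perms);
}

int main(void)
{
    unsigned long i, top = 32, start = 0, count = 0;
    unsigned int start_perms = 0;

    for ( i = 0; i < top; )
    {
        unsigned int perms = perms_of(i);

        if ( !perms )
            /* Nothing to map; any pending batch gets committed later. */;
        else if ( i != start + count || perms != start_perms )
        {
            /* Commit the batch accumulated so far and start a new one. */
            if ( count )
                map(start, count, start_perms);
            start = i;
            start_perms = perms;
            count = 1;
        }
        else
            ++count;

        /* Commit the final batch; with nothing ever batched, count is 0. */
        if ( ++i == top && count )
            map(start, count, start_perms);
    }

    return 0;
}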

Jan




* Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
  2022-05-31 16:25   ` Roger Pau Monné
@ 2022-06-01  7:32     ` Jan Beulich
  2022-06-01  9:24       ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-06-01  7:32 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 31.05.2022 18:25, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
>>      return pg;
>>  }
>>  
>> +/*
>> + * Intermediate page tables which get replaced by large pages may only be
>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
>> + * requiring any locking.)
>> + */
>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>> +
>> +static void free_queued_pgtables(void *arg)
>> +{
>> +    struct page_list_head *list = arg;
>> +    struct page_info *pg;
>> +    unsigned int done = 0;
>> +
>> +    while ( (pg = page_list_remove_head(list)) )
>> +    {
>> +        free_domheap_page(pg);
>> +
>> +        /* Granularity of checking somewhat arbitrary. */
>> +        if ( !(++done & 0x1ff) )
>> +             process_pending_softirqs();
> 
> Hm, I'm wondering whether we really want to process pending softirqs
> here.
> 
> Such processing will prevent the watchdog from triggering, which we
> likely want in production builds.  OTOH in debug builds we should make
> sure that free_queued_pgtables() doesn't take longer than a watchdog
> window, or else it's likely to cause issues to guests scheduled on
> this same pCPU (and calling process_pending_softirqs() will just mask
> it).

Doesn't this consideration apply to about every use of the function we
already have in the code base?

Jan




* Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
  2022-06-01  7:10         ` Jan Beulich
@ 2022-06-01  8:17           ` Roger Pau Monné
  2022-06-01 15:10             ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-01  8:17 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Wed, Jun 01, 2022 at 09:10:09AM +0200, Jan Beulich wrote:
> On 31.05.2022 18:15, Roger Pau Monné wrote:
> > On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
> >> On 31.05.2022 16:40, Roger Pau Monné wrote:
> >>> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
> >>>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
> >>>>       * that fall in unusable ranges for PV Dom0.
> >>>>       */
> >>>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
> >>>> -        return false;
> >>>> +        return 0;
> >>>>  
> >>>>      switch ( type = page_get_ram_type(mfn) )
> >>>>      {
> >>>>      case RAM_TYPE_UNUSABLE:
> >>>> -        return false;
> >>>> +        return 0;
> >>>>  
> >>>>      case RAM_TYPE_CONVENTIONAL:
> >>>>          if ( iommu_hwdom_strict )
> >>>> -            return false;
> >>>> +            return 0;
> >>>>          break;
> >>>>  
> >>>>      default:
> >>>>          if ( type & RAM_TYPE_RESERVED )
> >>>>          {
> >>>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
> >>>> -                return false;
> >>>> +                perms = 0;
> >>>>          }
> >>>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
> >>>> -            return false;
> >>>> +        else if ( is_hvm_domain(d) )
> >>>> +            return 0;
> >>>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
> >>>> +            perms = 0;
> >>>>      }
> >>>>  
> >>>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
> >>>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
> >>>> -        return false;
> >>>> +        return 0;
> >>>>      /* ... or the IO-APIC */
> >>>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
> >>>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> >>>> -            return false;
> >>>> +    if ( has_vioapic(d) )
> >>>> +    {
> >>>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
> >>>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> >>>> +                return 0;
> >>>> +    }
> >>>> +    else if ( is_pv_domain(d) )
> >>>> +    {
> >>>> +        /*
> >>>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
> >>>> +         * ones there (also for e.g. HPET in certain cases), so it should also
> >>>> +         * have such established for IOMMUs.
> >>>> +         */
> >>>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
> >>>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
> >>>> +            perms = IOMMUF_readable;
> >>>> +    }
> >>>>      /*
> >>>>       * ... or the PCIe MCFG regions.
> >>
> >> With this comment (which I leave alone) ...
> >>
> >>>>       * TODO: runtime added MMCFG regions are not checked to make sure they
> >>>>       * don't overlap with already mapped regions, thus preventing trapping.
> >>>>       */
> >>>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
> >>>> -        return false;
> >>>> +        return 0;
> >>>> +    else if ( is_pv_domain(d) )
> >>>> +    {
> >>>> +        /*
> >>>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
> >>>> +         * These shouldn't be accessed via DMA by devices.
> >>>
> >>> Could you expand the comment a bit to explicitly mention the reason
> >>> why MMCFG regions shouldn't be accessible from device DMA operations?
> >>
> >> ... it's hard to tell what I should write here. I'd expect extended
> >> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
> >> comment, if only I knew what to write. "We don't want them to be
> >> accessed that way" seems a little blunt. I could say "Devices have
> >> other means to access PCI config space", but, this not being said there,
> >> I took it as being implied.
> > 
> > But we could likely say the same about IO-APIC or HPET MMIO regions.
> > I don't think we expect them to be accessed by devices, yet we provide
> > them for coherency with CPU side mappings in the PV case.
> 
> As to "say the same" - yes for the first part of my earlier reply, but
> no for the latter part.

Yes, obviously devices cannot access the HPET or the IO-APIC MMIO from
the PCI config space :).

> >> Or else what was the reason to exclude these
> >> for PVH Dom0?
> > 
> > The reason for PVH is because the config space is (partially) emulated
> > for the hardware domain, so we don't allow untrapped access by the CPU
> > either.
> 
> Hmm, right - there's read emulation there as well, while for PV we
> only intercept writes.
> 
> So overall should we perhaps permit r/o access to MMCFG for PV? Of
> course that would only end up consistent once we adjust mappings
> dynamically when MMCFG ranges are put in use (IOW if we can't verify
> an MMCFG range is suitably reserved, we'd not find it in
> mmio_ro_ranges just yet, and hence we still wouldn't have an IOMMU
> side mapping even if CPU side mappings are permitted). But for the
> patch here it would simply mean dropping some of the code I did add
> for v5.

I would be OK with that, as I think we would then be consistent with
how IO-APIC and HPET MMIO regions are handled.  We would have to add
some small helper/handling in PHYSDEVOP_pci_mmcfg_reserved for PV.
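
Roughly I'd expect something along these lines, called from the
PHYSDEVOP_pci_mmcfg_reserved handling once the region is known to be
suitably reserved (just a sketch: the helper name is made up, and error
unwinding as well as the eventual IOTLB flush are omitted):

static int pv_hwdom_add_mmcfg_ro(struct domain *d, unsigned long first_pfn,
                                 unsigned long nr)
{
    unsigned int flush_flags = 0;
    int rc = rangeset_add_range(mmio_ro_ranges, first_pfn,
                                first_pfn + nr - 1);

    if ( rc )
        return rc;

    /* Mirror the r/o CPU mappings into the IOMMU page tables. */
    return iommu_map(d, _dfn(first_pfn), _mfn(first_pfn), nr,
                     IOMMUF_readable, &flush_flags);
}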

> Otherwise, i.e. if the code is to remain as is, I'm afraid I still
> wouldn't see what to put usefully in the comment.

IMO the important part is to note whether or not there's a reason why
the handling of IO-APIC and HPET vs MMCFG RO regions differs in PV mode.
Ie: if we don't want to handle MMCFG in RO mode for device mappings
because of the complication with handling dynamic changes as a result
of PHYSDEVOP_pci_mmcfg_reserved, we should just note it.

Thanks, Roger.



* Re: [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches
  2022-06-01  7:30     ` Jan Beulich
@ 2022-06-01  9:08       ` Roger Pau Monné
  0 siblings, 0 replies; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-01  9:08 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant, Wei Liu

On Wed, Jun 01, 2022 at 09:30:07AM +0200, Jan Beulich wrote:
> On 31.05.2022 18:01, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:12:48PM +0200, Jan Beulich wrote:
> >> @@ -406,20 +406,41 @@ void __hwdom_init arch_iommu_hwdom_init(
> >>          if ( !perms )
> >>              rc = 0;
> >>          else if ( paging_mode_translate(d) )
> >> +        {
> >>              rc = p2m_add_identity_entry(d, pfn,
> >>                                          perms & IOMMUF_writable ? p2m_access_rw
> >>                                                                  : p2m_access_r,
> >>                                          0);
> >> +            if ( rc )
> >> +                printk(XENLOG_WARNING
> >> +                       "%pd: identity mapping of %lx failed: %d\n",
> >> +                       d, pfn, rc);
> >> +        }
> >> +        else if ( pfn != start + count || perms != start_perms )
> >> +        {
> >> +        commit:
> >> +            rc = iommu_map(d, _dfn(start), _mfn(start), count, start_perms,
> >> +                           &flush_flags);
> >> +            if ( rc )
> >> +                printk(XENLOG_WARNING
> >> +                       "%pd: IOMMU identity mapping of [%lx,%lx) failed: %d\n",
> >> +                       d, pfn, pfn + count, rc);
> >> +            SWAP(start, pfn);
> >> +            start_perms = perms;
> >> +            count = 1;
> >> +        }
> >>          else
> >> -            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
> >> -                           perms, &flush_flags);
> >> +        {
> >> +            ++count;
> >> +            rc = 0;
> >> +        }
> >>  
> >> -        if ( rc )
> >> -            printk(XENLOG_WARNING "%pd: identity %smapping of %lx failed: %d\n",
> >> -                   d, !paging_mode_translate(d) ? "IOMMU " : "", pfn, rc);
> >>  
> >> -        if (!(i & 0xfffff))
> >> +        if ( !(++i & 0xfffff) )
> >>              process_pending_softirqs();
> >> +
> >> +        if ( i == top && count )
> > 
> > Nit: do you really need to check for count != 0? AFAICT this is only
> > possible in the first iteration.
> 
> Yes, to avoid taking the PV path for PVH on the last iteration (count
> remains zero for PVH throughout the entire loop).

Oh, I see, that chunk is shared by both PV and PVH.

Thanks, Roger.



* Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
  2022-06-01  7:32     ` Jan Beulich
@ 2022-06-01  9:24       ` Roger Pau Monné
  2022-06-01 15:25         ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-01  9:24 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Wed, Jun 01, 2022 at 09:32:44AM +0200, Jan Beulich wrote:
> On 31.05.2022 18:25, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
> >> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
> >>      return pg;
> >>  }
> >>  
> >> +/*
> >> + * Intermediate page tables which get replaced by large pages may only be
> >> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> >> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> >> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> >> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> >> + * requiring any locking.)
> >> + */
> >> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> >> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> >> +
> >> +static void free_queued_pgtables(void *arg)
> >> +{
> >> +    struct page_list_head *list = arg;
> >> +    struct page_info *pg;
> >> +    unsigned int done = 0;
> >> +
> >> +    while ( (pg = page_list_remove_head(list)) )
> >> +    {
> >> +        free_domheap_page(pg);
> >> +
> >> +        /* Granularity of checking somewhat arbitrary. */
> >> +        if ( !(++done & 0x1ff) )
> >> +             process_pending_softirqs();
> > 
> > Hm, I'm wondering whether we really want to process pending softirqs
> > here.
> > 
> > Such processing will prevent the watchdog from triggering, which we
> > likely want in production builds.  OTOH in debug builds we should make
> > sure that free_queued_pgtables() doesn't take longer than a watchdog
> > window, or else it's likely to cause issues to guests scheduled on
> > this same pCPU (and calling process_pending_softirqs() will just mask
> > it).
> 
> Doesn't this consideration apply to about every use of the function we
> already have in the code base?

Not really, at least when used by init code or by the debug key
handlers.  This use is IMO different from what I would expect, as it's
a guest-triggered path that we believe does require such processing.
Normally we would use continuations for such long-running guest-triggered
operations.

Thanks, Roger.



* Re: [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables
  2022-05-27 11:17 ` [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables Jan Beulich
@ 2022-06-01 11:29   ` Roger Pau Monné
  2022-06-01 12:11     ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-01 11:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant, Wei Liu

On Fri, May 27, 2022 at 01:17:08PM +0200, Jan Beulich wrote:
> This is a re-usable helper (kind of a template) which gets introduced
> without users so that the individual subsequent patches introducing such
> users can get committed independently of one another.
> 
> See the comment at the top of the new file. To demonstrate the effect,
> if a page table had just 16 entries, this would be the set of markers
> for a page table with fully contiguous mappings:
> 
> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
> 
> "Contiguous" here means not only present entries with successively
> increasing MFNs, each one suitably aligned for its slot, but also a
> respective number of all non-present entries.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> @Roger: I've retained your R-b, but I was on the edge of dropping it.

Sure, that's fine.

> ---
> v5: Bail early from step 1 if possible. Arrange for consumers who are
>     just after CONTIG_{LEVEL_SHIFT,NR}. Extend comment.
> v3: Rename function and header. Introduce IS_CONTIG().
> v2: New.
> 
> --- /dev/null
> +++ b/xen/arch/x86/include/asm/pt-contig-markers.h
> @@ -0,0 +1,110 @@
> +#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
> +#define __ASM_X86_PT_CONTIG_MARKERS_H
> +
> +/*
> + * Short of having function templates in C, the function defined below is
> + * intended to be used by multiple parties interested in recording the
> + * degree of contiguity in mappings by a single page table.
> + *
> + * Scheme: Every entry records the order of contiguous successive entries,
> + * up to the maximum order covered by that entry (which is the number of
> + * clear low bits in its index, with entry 0 being the exception using
> + * the base-2 logarithm of the number of entries in a single page table).
> + * While a few entries need touching upon update, knowing whether the
> + * table is fully contiguous (and can hence be replaced by a higher level
> + * leaf entry) is then possible by simply looking at entry 0's marker.
> + *
> + * Prereqs:
> + * - CONTIG_MASK needs to be #define-d, to a value having at least 4
> + *   contiguous bits (ignored by hardware), before including this file (or
> + *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
> + * - page tables to be passed to the helper need to be initialized with
> + *   correct markers,
> + * - not-present entries need to be entirely clear except for the marker.
> + */
> +
> +/* This is the same for all anticipated users, so doesn't need passing in. */
> +#define CONTIG_LEVEL_SHIFT 9
> +#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
> +
> +#ifdef CONTIG_MASK
> +
> +#include <xen/bitops.h>
> +#include <xen/lib.h>
> +#include <xen/page-size.h>
> +
> +#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
> +#define SET_MARKER(e, m) \
> +    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
> +
> +#define IS_CONTIG(kind, pt, i, idx, shift, b) \
> +    ((kind) == PTE_kind_leaf \
> +     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
> +     : !((pt)[i] & ~CONTIG_MASK))
> +
> +enum PTE_kind {
> +    PTE_kind_null,
> +    PTE_kind_leaf,
> +    PTE_kind_table,
> +};
> +
> +static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
> +                                     unsigned int level, enum PTE_kind kind)
> +{
> +    unsigned int b, i = idx;
> +    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
> +
> +    ASSERT(idx < CONTIG_NR);
> +    ASSERT(!(pt[idx] & CONTIG_MASK));
> +
> +    /* Step 1: Reduce markers in lower numbered entries. */
> +    while ( i )
> +    {
> +        b = find_first_set_bit(i);
> +        i &= ~(1U << b);
> +        if ( GET_MARKER(pt[i]) <= b )
> +            break;
> +        SET_MARKER(pt[i], b);
> +    }
> +
> +    /* An intermediate table is never contiguous with anything. */
> +    if ( kind == PTE_kind_table )
> +        return false;
> +
> +    /*
> +     * Present entries need in-sync index and address to be a candidate
> +     * for being contiguous: What we're after is whether ultimately the
> +     * intermediate table can be replaced by a superpage.
> +     */
> +    if ( kind != PTE_kind_null &&
> +         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
> +        return false;
> +
> +    /* Step 2: Check higher numbered entries for contiguity. */
> +    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
> +    {
> +        i = idx | (1U << b);
> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> +            break;
> +    }
> +
> +    /* Step 3: Update markers in this and lower numbered entries. */
> +    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
> +    {
> +        i = idx ^ (1U << b);
> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> +            break;
> +        idx &= ~(1U << b);
> +    }
> +
> +    return b == CONTIG_LEVEL_SHIFT;
> +}
> +
> +#undef IS_CONTIG
> +#undef SET_MARKER
> +#undef GET_MARKER
> +#undef CONTIG_MASK

Is it fine to undef CONTIG_MASK here, when it was defined outside of
this file?  It does seem weird to me.

Thanks, Roger.



* Re: [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables
  2022-06-01 11:29   ` Roger Pau Monné
@ 2022-06-01 12:11     ` Jan Beulich
  2022-06-01 13:02       ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-06-01 12:11 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant, Wei Liu

On 01.06.2022 13:29, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:17:08PM +0200, Jan Beulich wrote:
>> --- /dev/null
>> +++ b/xen/arch/x86/include/asm/pt-contig-markers.h
>> @@ -0,0 +1,110 @@
>> +#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
>> +#define __ASM_X86_PT_CONTIG_MARKERS_H
>> +
>> +/*
>> + * Short of having function templates in C, the function defined below is
>> + * intended to be used by multiple parties interested in recording the
>> + * degree of contiguity in mappings by a single page table.
>> + *
>> + * Scheme: Every entry records the order of contiguous successive entries,
>> + * up to the maximum order covered by that entry (which is the number of
>> + * clear low bits in its index, with entry 0 being the exception using
>> + * the base-2 logarithm of the number of entries in a single page table).
>> + * While a few entries need touching upon update, knowing whether the
>> + * table is fully contiguous (and can hence be replaced by a higher level
>> + * leaf entry) is then possible by simply looking at entry 0's marker.
>> + *
>> + * Prereqs:
>> + * - CONTIG_MASK needs to be #define-d, to a value having at least 4
>> + *   contiguous bits (ignored by hardware), before including this file (or
>> + *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
>> + * - page tables to be passed to the helper need to be initialized with
>> + *   correct markers,
>> + * - not-present entries need to be entirely clear except for the marker.
>> + */
>> +
>> +/* This is the same for all anticipated users, so doesn't need passing in. */
>> +#define CONTIG_LEVEL_SHIFT 9
>> +#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
>> +
>> +#ifdef CONTIG_MASK
>> +
>> +#include <xen/bitops.h>
>> +#include <xen/lib.h>
>> +#include <xen/page-size.h>
>> +
>> +#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
>> +#define SET_MARKER(e, m) \
>> +    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
>> +
>> +#define IS_CONTIG(kind, pt, i, idx, shift, b) \
>> +    ((kind) == PTE_kind_leaf \
>> +     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
>> +     : !((pt)[i] & ~CONTIG_MASK))
>> +
>> +enum PTE_kind {
>> +    PTE_kind_null,
>> +    PTE_kind_leaf,
>> +    PTE_kind_table,
>> +};
>> +
>> +static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
>> +                                     unsigned int level, enum PTE_kind kind)
>> +{
>> +    unsigned int b, i = idx;
>> +    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
>> +
>> +    ASSERT(idx < CONTIG_NR);
>> +    ASSERT(!(pt[idx] & CONTIG_MASK));
>> +
>> +    /* Step 1: Reduce markers in lower numbered entries. */
>> +    while ( i )
>> +    {
>> +        b = find_first_set_bit(i);
>> +        i &= ~(1U << b);
>> +        if ( GET_MARKER(pt[i]) <= b )
>> +            break;
>> +        SET_MARKER(pt[i], b);
>> +    }
>> +
>> +    /* An intermediate table is never contiguous with anything. */
>> +    if ( kind == PTE_kind_table )
>> +        return false;
>> +
>> +    /*
>> +     * Present entries need in-sync index and address to be a candidate
>> +     * for being contiguous: What we're after is whether ultimately the
>> +     * intermediate table can be replaced by a superpage.
>> +     */
>> +    if ( kind != PTE_kind_null &&
>> +         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
>> +        return false;
>> +
>> +    /* Step 2: Check higher numbered entries for contiguity. */
>> +    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
>> +    {
>> +        i = idx | (1U << b);
>> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
>> +            break;
>> +    }
>> +
>> +    /* Step 3: Update markers in this and lower numbered entries. */
>> +    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
>> +    {
>> +        i = idx ^ (1U << b);
>> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
>> +            break;
>> +        idx &= ~(1U << b);
>> +    }
>> +
>> +    return b == CONTIG_LEVEL_SHIFT;
>> +}
>> +
>> +#undef IS_CONTIG
>> +#undef SET_MARKER
>> +#undef GET_MARKER
>> +#undef CONTIG_MASK
> 
> Is it fine to undef CONTIG_MASK here, when it was defined outside of
> this file?  It does seem weird to me.

I consider it not just fine, but desirable. Use sites of this header #define
this just for the purpose of this header. And I want to leave name space as
uncluttered as possible. Should there really arise a need to keep this, we
can always consider removing the #undef (just like I did for
CONTIG_LEVEL_SHIFT and CONTIG_NR because of feedback of yours on another
patch).
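
For illustration, the intended shape of a use site (the mask value and the
wrapper function are made up; only pt_update_contig_markers(), PTE_kind_leaf
and the CONTIG_MASK prerequisite come from the patch):

/* Some ignored-by-hardware PTE bits, specific to the respective IOMMU. */
#define CONTIG_MASK (0xfull << 52)
#include <asm/pt-contig-markers.h>

static bool set_leaf(uint64_t *pt, unsigned int idx, unsigned int level,
                     uint64_t new_pte)
{
    /* The helper asserts that the marker field of the new entry is clear. */
    pt[idx] = new_pte;

    /* True when the whole table became contiguous and could be folded. */
    return pt_update_contig_markers(pt, idx, level, PTE_kind_leaf);
}

/*
 * Past the #include, CONTIG_MASK (and GET_MARKER / SET_MARKER / IS_CONTIG)
 * have been #undef-ed again, so nothing leaks into the rest of the file.
 */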

Jan




* Re: [PATCH v5 08/15] IOMMU/x86: prefill newly allocate page tables
  2022-05-27 11:17 ` [PATCH v5 08/15] IOMMU/x86: prefill newly allocate " Jan Beulich
@ 2022-06-01 12:59   ` Roger Pau Monné
  2022-06-01 13:17     ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-01 12:59 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Fri, May 27, 2022 at 01:17:35PM +0200, Jan Beulich wrote:
> Page tables are used for two purposes after allocation: They either
> start out all empty, or they are filled to replace a superpage.
> Subsequently, to replace all empty or fully contiguous page tables,
> contiguous sub-regions will be recorded within individual page tables.
> Install the initial set of markers immediately after allocation. Make
> sure to retain these markers when further populating a page table in
> preparation for it to replace a superpage.
> 
> The markers are simply 4-bit fields holding the order value of
> contiguous entries. To demonstrate this, if a page table had just 16
> entries, this would be the initial (fully contiguous) set of markers:
> 
> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
> 
> "Contiguous" here means not only present entries with successively
> increasing MFNs, each one suitably aligned for its slot, and identical
> attributes, but also a respective number of all non-present (zero except
> for the markers) entries.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -26,6 +26,7 @@
>  #include <asm/hvm/io.h>
>  #include <asm/io_apic.h>
>  #include <asm/mem_paging.h>
> +#include <asm/pt-contig-markers.h>
>  #include <asm/setup.h>
>  
>  const struct iommu_init_ops *__initdata iommu_init_ops;
> @@ -538,11 +539,12 @@ int iommu_free_pgtables(struct domain *d
>      return 0;
>  }
>  
> -struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
> +struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
> +                                      uint64_t contig_mask)
>  {
>      unsigned int memflags = 0;
>      struct page_info *pg;
> -    void *p;
> +    uint64_t *p;
>  
>  #ifdef CONFIG_NUMA
>      if ( hd->node != NUMA_NO_NODE )
> @@ -554,7 +556,29 @@ struct page_info *iommu_alloc_pgtable(st
>          return NULL;
>  
>      p = __map_domain_page(pg);
> -    clear_page(p);
> +
> +    if ( contig_mask )
> +    {
> +        /* See pt-contig-markers.h for a description of the marker scheme. */
> +        unsigned int i, shift = find_first_set_bit(contig_mask);
> +
> +        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) == CONTIG_LEVEL_SHIFT);
> +
> +        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
> +        p[1] = 0;
> +        p[2] = 1ull << shift;
> +        p[3] = 0;
> +
> +        for ( i = 4; i < PAGE_SIZE / 8; i += 4 )

FWIW, you could also use sizeof(*p) instead of hardcoding 8.

Thanks, Roger.



* Re: [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables
  2022-06-01 12:11     ` Jan Beulich
@ 2022-06-01 13:02       ` Roger Pau Monné
  0 siblings, 0 replies; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-01 13:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant, Wei Liu

On Wed, Jun 01, 2022 at 02:11:53PM +0200, Jan Beulich wrote:
> On 01.06.2022 13:29, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:17:08PM +0200, Jan Beulich wrote:
> >> --- /dev/null
> >> +++ b/xen/arch/x86/include/asm/pt-contig-markers.h
> >> @@ -0,0 +1,110 @@
> >> +#ifndef __ASM_X86_PT_CONTIG_MARKERS_H
> >> +#define __ASM_X86_PT_CONTIG_MARKERS_H
> >> +
> >> +/*
> >> + * Short of having function templates in C, the function defined below is
> >> + * intended to be used by multiple parties interested in recording the
> >> + * degree of contiguity in mappings by a single page table.
> >> + *
> >> + * Scheme: Every entry records the order of contiguous successive entries,
> >> + * up to the maximum order covered by that entry (which is the number of
> >> + * clear low bits in its index, with entry 0 being the exception using
> >> + * the base-2 logarithm of the number of entries in a single page table).
> >> + * While a few entries need touching upon update, knowing whether the
> >> + * table is fully contiguous (and can hence be replaced by a higher level
> >> + * leaf entry) is then possible by simply looking at entry 0's marker.
> >> + *
> >> + * Prereqs:
> >> + * - CONTIG_MASK needs to be #define-d, to a value having at least 4
> >> + *   contiguous bits (ignored by hardware), before including this file (or
> >> + *   else only CONTIG_LEVEL_SHIFT and CONTIG_NR will become available),
> >> + * - page tables to be passed to the helper need to be initialized with
> >> + *   correct markers,
> >> + * - not-present entries need to be entirely clear except for the marker.
> >> + */
> >> +
> >> +/* This is the same for all anticipated users, so doesn't need passing in. */
> >> +#define CONTIG_LEVEL_SHIFT 9
> >> +#define CONTIG_NR          (1 << CONTIG_LEVEL_SHIFT)
> >> +
> >> +#ifdef CONTIG_MASK
> >> +
> >> +#include <xen/bitops.h>
> >> +#include <xen/lib.h>
> >> +#include <xen/page-size.h>
> >> +
> >> +#define GET_MARKER(e) MASK_EXTR(e, CONTIG_MASK)
> >> +#define SET_MARKER(e, m) \
> >> +    ((void)((e) = ((e) & ~CONTIG_MASK) | MASK_INSR(m, CONTIG_MASK)))
> >> +
> >> +#define IS_CONTIG(kind, pt, i, idx, shift, b) \
> >> +    ((kind) == PTE_kind_leaf \
> >> +     ? (((pt)[i] ^ (pt)[idx]) & ~CONTIG_MASK) == (1ULL << ((b) + (shift))) \
> >> +     : !((pt)[i] & ~CONTIG_MASK))
> >> +
> >> +enum PTE_kind {
> >> +    PTE_kind_null,
> >> +    PTE_kind_leaf,
> >> +    PTE_kind_table,
> >> +};
> >> +
> >> +static bool pt_update_contig_markers(uint64_t *pt, unsigned int idx,
> >> +                                     unsigned int level, enum PTE_kind kind)
> >> +{
> >> +    unsigned int b, i = idx;
> >> +    unsigned int shift = (level - 1) * CONTIG_LEVEL_SHIFT + PAGE_SHIFT;
> >> +
> >> +    ASSERT(idx < CONTIG_NR);
> >> +    ASSERT(!(pt[idx] & CONTIG_MASK));
> >> +
> >> +    /* Step 1: Reduce markers in lower numbered entries. */
> >> +    while ( i )
> >> +    {
> >> +        b = find_first_set_bit(i);
> >> +        i &= ~(1U << b);
> >> +        if ( GET_MARKER(pt[i]) <= b )
> >> +            break;
> >> +        SET_MARKER(pt[i], b);
> >> +    }
> >> +
> >> +    /* An intermediate table is never contiguous with anything. */
> >> +    if ( kind == PTE_kind_table )
> >> +        return false;
> >> +
> >> +    /*
> >> +     * Present entries need in-sync index and address to be a candidate
> >> +     * for being contiguous: What we're after is whether ultimately the
> >> +     * intermediate table can be replaced by a superpage.
> >> +     */
> >> +    if ( kind != PTE_kind_null &&
> >> +         idx != ((pt[idx] >> shift) & (CONTIG_NR - 1)) )
> >> +        return false;
> >> +
> >> +    /* Step 2: Check higher numbered entries for contiguity. */
> >> +    for ( b = 0; b < CONTIG_LEVEL_SHIFT && !(idx & (1U << b)); ++b )
> >> +    {
> >> +        i = idx | (1U << b);
> >> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> >> +            break;
> >> +    }
> >> +
> >> +    /* Step 3: Update markers in this and lower numbered entries. */
> >> +    for ( ; SET_MARKER(pt[idx], b), b < CONTIG_LEVEL_SHIFT; ++b )
> >> +    {
> >> +        i = idx ^ (1U << b);
> >> +        if ( !IS_CONTIG(kind, pt, i, idx, shift, b) || GET_MARKER(pt[i]) != b )
> >> +            break;
> >> +        idx &= ~(1U << b);
> >> +    }
> >> +
> >> +    return b == CONTIG_LEVEL_SHIFT;
> >> +}
> >> +
> >> +#undef IS_CONTIG
> >> +#undef SET_MARKER
> >> +#undef GET_MARKER
> >> +#undef CONTIG_MASK
> > 
> > Is it fine to undef CONTIG_MASK here, when it was defined outside of
> > this file?  It does seem weird to me.
> 
> I consider it not just fine, but desirable. Use sites of this header #define
> this just for the purpose of this header. And I want to leave name space as
> uncluttered as possible. Should there really arise a need to keep this, we
> can always consider removing the #undef (just like I did for
> CONTIG_LEVEL_SHIFT and CONTIG_NR because of feedback of yours on another
> patch).

OK, I find it kind of unexpected to undef in a file where it's not
defined, but I think that's fine.
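
To spell out the resulting contract at a use site (purely illustrative -
the mask value, the include path and the wrapper function below are
invented for the example, not taken from the series):

#define CONTIG_MASK (0xfULL << 52) /* the hardware's 4+ ignored bits */
#include <asm/pt-contig-markers.h>
/* Note: at this point the header has #undef-ed CONTIG_MASK again. */

static bool example_set_leaf(uint64_t *pt, unsigned int idx,
                             uint64_t new_pte)
{
    /* new_pte is expected to have the (ignored) marker bits clear. */
    pt[idx] = new_pte;

    /*
     * True means all 512 entries are now contiguous, i.e. the caller
     * may replace this table by a superpage at the next level.
     */
    return pt_update_contig_markers(pt, idx, 1 /* level */, PTE_kind_leaf);
}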

Thanks, Roger.


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 08/15] IOMMU/x86: prefill newly allocate page tables
  2022-06-01 12:59   ` Roger Pau Monné
@ 2022-06-01 13:17     ` Jan Beulich
  0 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-06-01 13:17 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 01.06.2022 14:59, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:17:35PM +0200, Jan Beulich wrote:
>> Page tables are used for two purposes after allocation: They either
>> start out all empty, or they are filled to replace a superpage.
>> Subsequently, to replace all empty or fully contiguous page tables,
>> contiguous sub-regions will be recorded within individual page tables.
>> Install the initial set of markers immediately after allocation. Make
>> sure to retain these markers when further populating a page table in
>> preparation for it to replace a superpage.
>>
>> The markers are simply 4-bit fields holding the order value of
>> contiguous entries. To demonstrate this, if a page table had just 16
>> entries, this would be the initial (fully contiguous) set of markers:
>>
>> index  0 1 2 3 4 5 6 7 8 9 A B C D E F
>> marker 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0
>>
>> "Contiguous" here means not only present entries with successively
>> increasing MFNs, each one suitably aligned for its slot, and identical
>> attributes, but also a respective number of all non-present (zero except
>> for the markers) entries.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> @@ -538,11 +539,12 @@ int iommu_free_pgtables(struct domain *d
>>      return 0;
>>  }
>>  
>> -struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd)
>> +struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
>> +                                      uint64_t contig_mask)
>>  {
>>      unsigned int memflags = 0;
>>      struct page_info *pg;
>> -    void *p;
>> +    uint64_t *p;
>>  
>>  #ifdef CONFIG_NUMA
>>      if ( hd->node != NUMA_NO_NODE )
>> @@ -554,7 +556,29 @@ struct page_info *iommu_alloc_pgtable(st
>>          return NULL;
>>  
>>      p = __map_domain_page(pg);
>> -    clear_page(p);
>> +
>> +    if ( contig_mask )
>> +    {
>> +        /* See pt-contig-markers.h for a description of the marker scheme. */
>> +        unsigned int i, shift = find_first_set_bit(contig_mask);
>> +
>> +        ASSERT((CONTIG_LEVEL_SHIFT & (contig_mask >> shift)) == CONTIG_LEVEL_SHIFT);
>> +
>> +        p[0] = (CONTIG_LEVEL_SHIFT + 0ull) << shift;
>> +        p[1] = 0;
>> +        p[2] = 1ull << shift;
>> +        p[3] = 0;
>> +
>> +        for ( i = 4; i < PAGE_SIZE / 8; i += 4 )
> 
> FWIW, you could also use sizeof(*p) instead of hardcoding 8.

Indeed. Changed.
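
As an aside, the initial pattern from the description above is easy to
reproduce outside of Xen; a minimal standalone sketch (plain C using a
GCC builtin, not code from the patch) for the toy 16-entry table:

#include <stdio.h>

int main(void)
{
    unsigned int i;

    for ( i = 0; i < 16; i++ )
    {
        /*
         * Entry 0 describes the whole (toy) table; every other entry's
         * marker is the number of trailing zero bits of its index.
         */
        unsigned int marker = i ? __builtin_ctz(i) : 4;

        printf("%u ", marker);
    }
    printf("\n"); /* prints: 4 0 1 0 2 0 1 0 3 0 1 0 2 0 1 0 */

    return 0;
}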

Jan



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0
  2022-06-01  8:17           ` Roger Pau Monné
@ 2022-06-01 15:10             ` Jan Beulich
  0 siblings, 0 replies; 38+ messages in thread
From: Jan Beulich @ 2022-06-01 15:10 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 01.06.2022 10:17, Roger Pau Monné wrote:
> On Wed, Jun 01, 2022 at 09:10:09AM +0200, Jan Beulich wrote:
>> On 31.05.2022 18:15, Roger Pau Monné wrote:
>>> On Tue, May 31, 2022 at 05:40:03PM +0200, Jan Beulich wrote:
>>>> On 31.05.2022 16:40, Roger Pau Monné wrote:
>>>>> On Fri, May 27, 2022 at 01:12:06PM +0200, Jan Beulich wrote:
>>>>>> @@ -289,44 +290,75 @@ static bool __hwdom_init hwdom_iommu_map
>>>>>>       * that fall in unusable ranges for PV Dom0.
>>>>>>       */
>>>>>>      if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>>  
>>>>>>      switch ( type = page_get_ram_type(mfn) )
>>>>>>      {
>>>>>>      case RAM_TYPE_UNUSABLE:
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>>  
>>>>>>      case RAM_TYPE_CONVENTIONAL:
>>>>>>          if ( iommu_hwdom_strict )
>>>>>> -            return false;
>>>>>> +            return 0;
>>>>>>          break;
>>>>>>  
>>>>>>      default:
>>>>>>          if ( type & RAM_TYPE_RESERVED )
>>>>>>          {
>>>>>>              if ( !iommu_hwdom_inclusive && !iommu_hwdom_reserved )
>>>>>> -                return false;
>>>>>> +                perms = 0;
>>>>>>          }
>>>>>> -        else if ( is_hvm_domain(d) || !iommu_hwdom_inclusive || pfn > max_pfn )
>>>>>> -            return false;
>>>>>> +        else if ( is_hvm_domain(d) )
>>>>>> +            return 0;
>>>>>> +        else if ( !iommu_hwdom_inclusive || pfn > max_pfn )
>>>>>> +            perms = 0;
>>>>>>      }
>>>>>>  
>>>>>>      /* Check that it doesn't overlap with the Interrupt Address Range. */
>>>>>>      if ( pfn >= 0xfee00 && pfn <= 0xfeeff )
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>>      /* ... or the IO-APIC */
>>>>>> -    for ( i = 0; has_vioapic(d) && i < d->arch.hvm.nr_vioapics; i++ )
>>>>>> -        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>>>> -            return false;
>>>>>> +    if ( has_vioapic(d) )
>>>>>> +    {
>>>>>> +        for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>>>>>> +            if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
>>>>>> +                return 0;
>>>>>> +    }
>>>>>> +    else if ( is_pv_domain(d) )
>>>>>> +    {
>>>>>> +        /*
>>>>>> +         * Be consistent with CPU mappings: Dom0 is permitted to establish r/o
>>>>>> +         * ones there (also for e.g. HPET in certain cases), so it should also
>>>>>> +         * have such established for IOMMUs.
>>>>>> +         */
>>>>>> +        if ( iomem_access_permitted(d, pfn, pfn) &&
>>>>>> +             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
>>>>>> +            perms = IOMMUF_readable;
>>>>>> +    }
>>>>>>      /*
>>>>>>       * ... or the PCIe MCFG regions.
>>>>
>>>> With this comment (which I leave alone) ...
>>>>
>>>>>>       * TODO: runtime added MMCFG regions are not checked to make sure they
>>>>>>       * don't overlap with already mapped regions, thus preventing trapping.
>>>>>>       */
>>>>>>      if ( has_vpci(d) && vpci_is_mmcfg_address(d, pfn_to_paddr(pfn)) )
>>>>>> -        return false;
>>>>>> +        return 0;
>>>>>> +    else if ( is_pv_domain(d) )
>>>>>> +    {
>>>>>> +        /*
>>>>>> +         * Don't extend consistency with CPU mappings to PCI MMCFG regions.
>>>>>> +         * These shouldn't be accessed via DMA by devices.
>>>>>
>>>>> Could you expand the comment a bit to explicitly mention the reason
>>>>> why MMCFG regions shouldn't be accessible from device DMA operations?
>>>>
>>>> ... it's hard to tell what I should write here. I'd expect extended
>>>> reasoning to go there (if anywhere). I'd be okay adjusting the earlier
>>>> comment, if only I knew what to write. "We don't want them to be
>>>> accessed that way" seems a little blunt. I could say "Devices have
>>>> other means to access PCI config space", but this not being said there
>>>> I took as being implied.
>>>
>>> But we could likely say the same about IO-APIC or HPET MMIO regions.
>>> I don't think we expect them to be accessed by devices, yet we provide
>>> them for coherency with CPU side mappings in the PV case.
>>
>> As to "say the same" - yes for the first part of my earlier reply, but
>> no for the latter part.
> 
> Yes, obviously devices cannot access the HPET or the IO-APIC MMIO from
> the PCI config space :).
> 
>>>> Or else what was the reason to exclude these
>>>> for PVH Dom0?
>>>
>>> The reason for PVH is because the config space is (partially) emulated
>>> for the hardware domain, so we don't allow untrapped access by the CPU
>>> either.
>>
>> Hmm, right - there's read emulation there as well, while for PV we
>> only intercept writes.
>>
>> So overall should we perhaps permit r/o access to MMCFG for PV? Of
>> course that would only end up consistent once we adjust mappings
>> dynamically when MMCFG ranges are put in use (IOW if we can't verify
>> an MMCFG range is suitably reserved, we'd not find in
>> mmio_ro_ranges just yet, and hence we still wouldn't have an IOMMU
>> side mapping even if CPU side mappings are permitted). But for the
>> patch here it would simply mean dropping some of the code I did add
>> for v5.
> 
> I would be OK with that, as I think we would then be consistent with
> how IO-APIC and HPET MMIO regions are handled.  We would have to add
> some small helper/handling in PHYSDEVOP_pci_mmcfg_reserved for PV.

Okay, I'll drop that code again then. But I'm not going to look into
making the dynamic part work, at least not within this series.
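
Just to record the rough shape of that dynamic part (a sketch only - the
exact call signatures are assumed here, and "addr", "size", and "currd"
stand for the hypercall handler's inputs and the invoking domain):

if ( is_pv_domain(currd) )
{
    unsigned long pfn;
    unsigned int flush_flags = 0;
    int rc = 0;

    for ( pfn = PFN_DOWN(addr);
          !rc && pfn <= PFN_DOWN(addr + size - 1); ++pfn )
        if ( iomem_access_permitted(currd, pfn, pfn) &&
             rangeset_contains_singleton(mmio_ro_ranges, pfn) )
            rc = iommu_map(currd, _dfn(pfn), _mfn(pfn), 1,
                           IOMMUF_readable, &flush_flags);

    /* A suitable IOTLB flush would need to follow; omitted here. */
}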

Jan



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
  2022-06-01  9:24       ` Roger Pau Monné
@ 2022-06-01 15:25         ` Jan Beulich
  2022-06-02  8:57           ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-06-01 15:25 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 01.06.2022 11:24, Roger Pau Monné wrote:
> On Wed, Jun 01, 2022 at 09:32:44AM +0200, Jan Beulich wrote:
>> On 31.05.2022 18:25, Roger Pau Monné wrote:
>>> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
>>>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
>>>>      return pg;
>>>>  }
>>>>  
>>>> +/*
>>>> + * Intermediate page tables which get replaced by large pages may only be
>>>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>>>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
>>>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
>>>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
>>>> + * requiring any locking.)
>>>> + */
>>>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>>>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>>>> +
>>>> +static void free_queued_pgtables(void *arg)
>>>> +{
>>>> +    struct page_list_head *list = arg;
>>>> +    struct page_info *pg;
>>>> +    unsigned int done = 0;
>>>> +
>>>> +    while ( (pg = page_list_remove_head(list)) )
>>>> +    {
>>>> +        free_domheap_page(pg);
>>>> +
>>>> +        /* Granularity of checking somewhat arbitrary. */
>>>> +        if ( !(++done & 0x1ff) )
>>>> +             process_pending_softirqs();
>>>
>>> Hm, I'm wondering whether we really want to process pending softirqs
>>> here.
>>>
>>> Such processing will prevent the watchdog from triggering, which we
>>> likely want in production builds.  OTOH in debug builds we should make
>>> sure that free_queued_pgtables() doesn't take longer than a watchdog
>>> window, or else it's likely to cause issues to guests scheduled on
>>> this same pCPU (and calling process_pending_softirqs() will just mask
>>> it).
>>
>> Doesn't this consideration apply to about every use of the function we
>> already have in the code base?
> 
> Not really, at least when used by init code or by the debug key
> handlers.  This use is IMO different than what I would expect, as it's
> a guest triggered path that we believe do require such processing.
> Normally we would use continuations for such long going guest
> triggered operations.

So what do you suggest I do? Putting the call inside #ifndef CONFIG_DEBUG
is not a good option imo. Re-scheduling the tasklet wouldn't help, aiui
(it would still run again right away). Moving the work to another CPU so
this one can do other things isn't very good either - what if other CPUs
are similarly busy? What remains is making things more complicated here by
involving a timer, the handler of which would re-schedule the tasklet. I
have to admit I don't like that very much either, especially since the
use of process_pending_softirqs() is "just in case" here anyway - if lots
of page tables were to be queued, I'd expect the queuing entity to be
preempted before a rather large pile could accumulate.

Maybe I could make iommu_queue_free_pgtable() return non-void, to instruct
the caller to bubble up a preemption notification once a certain number
of pages have been queued for freeing. This might end up intrusive ...
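
To make that idea a little more concrete (merely a sketch - the
function's present signature, the counter, and the threshold are all
assumptions, not code from the series):

static DEFINE_PER_CPU(unsigned int, free_pgt_count);

bool iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg)
{
    unsigned int cpu = smp_processor_id();

    /* (Unlinking the page from the domain's tracking list is omitted.) */
    page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
    tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));

    /*
     * Entirely arbitrary threshold; "true" would be translated into an
     * -ERESTART style continuation by the caller. Resetting the counter
     * (in the tasklet) is omitted from this sketch as well.
     */
    return ++per_cpu(free_pgt_count, cpu) >= 512;
}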

Jan



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
  2022-06-01 15:25         ` Jan Beulich
@ 2022-06-02  8:57           ` Roger Pau Monné
  0 siblings, 0 replies; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-02  8:57 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Wed, Jun 01, 2022 at 05:25:16PM +0200, Jan Beulich wrote:
> On 01.06.2022 11:24, Roger Pau Monné wrote:
> > On Wed, Jun 01, 2022 at 09:32:44AM +0200, Jan Beulich wrote:
> >> On 31.05.2022 18:25, Roger Pau Monné wrote:
> >>> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
> >>>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
> >>>>      return pg;
> >>>>  }
> >>>>  
> >>>> +/*
> >>>> + * Intermediate page tables which get replaced by large pages may only be
> >>>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> >>>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> >>>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> >>>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> >>>> + * requiring any locking.)
> >>>> + */
> >>>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> >>>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> >>>> +
> >>>> +static void free_queued_pgtables(void *arg)
> >>>> +{
> >>>> +    struct page_list_head *list = arg;
> >>>> +    struct page_info *pg;
> >>>> +    unsigned int done = 0;
> >>>> +
> >>>> +    while ( (pg = page_list_remove_head(list)) )
> >>>> +    {
> >>>> +        free_domheap_page(pg);
> >>>> +
> >>>> +        /* Granularity of checking somewhat arbitrary. */
> >>>> +        if ( !(++done & 0x1ff) )
> >>>> +             process_pending_softirqs();
> >>>
> >>> Hm, I'm wondering whether we really want to process pending softirqs
> >>> here.
> >>>
> >>> Such processing will prevent the watchdog from triggering, which we
> >>> likely want in production builds.  OTOH in debug builds we should make
> >>> sure that free_queued_pgtables() doesn't take longer than a watchdog
> >>> window, or else it's likely to cause issues to guests scheduled on
> >>> this same pCPU (and calling process_pending_softirqs() will just mask
> >>> it).
> >>
> >> Doesn't this consideration apply to about every use of the function we
> >> already have in the code base?
> > 
> > Not really, at least when used by init code or by the debug key
> > handlers.  This use is IMO different than what I would expect, as it's
> > a guest triggered path that we believe do require such processing.
> > Normally we would use continuations for such long going guest
> > triggered operations.
> 
> So what do you suggest I do? Putting the call inside #ifndef CONFIG_DEBUG
> is not a good option imo. Re-scheduling the tasklet wouldn't help, aiui
> (it would still run again right away). Moving the work to another CPU so
> this one can do other things isn't very good either - what if other CPUs
> are similarly busy? Remains making things more complicated here by
> involving a timer, the handler of which would re-schedule the tasklet. I
> have to admit I don't like that very much either. The more that the
> use of process_pending_softirqs() is "just in case" here anyway - if lots
> of page tables were to be queued, I'd expect the queuing entity to be
> preempted before a rather large pile could accumulate.

I would be fine with adding a comment here that we don't expect the
processing of softirqs to be necessary, but it's added merely as a
safety guard.

Long term I think we want to do with the approach of freeing such
pages in guest context before resuming execution, much like how we
defer vPCI BAR mappings using vpci_process_pending.  But this requires
some extra work, and we would also need to handle the case of the
freeing being triggered in a remote domain context.
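
The rough shape I have in mind (hypothetical only, patterned after
vpci_process_pending(); neither the function nor the per-vCPU list
exist today) would be a check on the return-to-guest path:

bool iommu_process_pending_frees(struct vcpu *v)
{
    struct page_info *pg;

    /* Drain this vCPU's queued page-table frees before resuming it. */
    while ( (pg = page_list_remove_head(&v->iommu_free_pgt_list)) )
    {
        free_domheap_page(pg);

        if ( general_preempt_check() )
            return true; /* not done yet - keep the vCPU from running */
    }

    return false;
}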

> Maybe I could make iommu_queue_free_pgtable() return non-void, to instruct
> the caller to bubble up a preemption notification once a certain number
> of pages have been queued for freeing. This might end up intrusive ...

Hm, it will end up being pretty arbitrary, as the time taken to free
the pages is highly variable depending on contention on the heap
lock, and hence setting a limit on the number of pages will be hard.

Thanks, Roger.


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 12/15] VT-d: replace all-contiguous page tables by superpage mappings
  2022-05-27 11:19 ` [PATCH v5 12/15] VT-d: " Jan Beulich
@ 2022-06-02  9:35   ` Roger Pau Monné
  2022-06-02  9:58     ` Jan Beulich
  0 siblings, 1 reply; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-02  9:35 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
> When a page table ends up with all contiguous entries (including all
> identical attributes), it can be replaced by a superpage entry at the
> next higher level. The page table itself can then be scheduled for
> freeing.
> 
> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
> for whenever we (and obviously hardware) start supporting 512G mappings.
> 
> Note that cache sync-ing is likely more strict than necessary. This is
> both to be on the safe side as well as to maintain the pattern of all
> updates of (potentially) live tables being accompanied by a flush (if so
> needed).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Unlike the freeing of all-empty page tables, this causes quite a bit of
> back and forth for PV domains, due to their mapping/unmapping of pages
> when they get converted to/from being page tables. It may therefore be
> worth considering to delay re-coalescing a little, to avoid doing so
> when the superpage would otherwise get split again pretty soon. But I
> think this would better be the subject of a separate change anyway.
> 
> Of course this could also be helped by more "aware" kernel side
> behavior: They could avoid immediately mapping freed page tables
> writable again, in anticipation of re-using that same page for another
> page table elsewhere.

Could we provide an option to select whether to use super-pages for
IOMMU, so that PV domains could keep the previous behavior?

Thanks, Roger.


^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 12/15] VT-d: replace all-contiguous page tables by superpage mappings
  2022-06-02  9:35   ` Roger Pau Monné
@ 2022-06-02  9:58     ` Jan Beulich
  2022-06-02 10:31       ` Roger Pau Monné
  0 siblings, 1 reply; 38+ messages in thread
From: Jan Beulich @ 2022-06-02  9:58 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On 02.06.2022 11:35, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
>> When a page table ends up with all contiguous entries (including all
>> identical attributes), it can be replaced by a superpage entry at the
>> next higher level. The page table itself can then be scheduled for
>> freeing.
>>
>> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
>> for whenever we (and obviously hardware) start supporting 512G mappings.
>>
>> Note that cache sync-ing is likely more strict than necessary. This is
>> both to be on the safe side as well as to maintain the pattern of all
>> updates of (potentially) live tables being accompanied by a flush (if so
>> needed).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Unlike the freeing of all-empty page tables, this causes quite a bit of
>> back and forth for PV domains, due to their mapping/unmapping of pages
>> when they get converted to/from being page tables. It may therefore be
>> worth considering to delay re-coalescing a little, to avoid doing so
>> when the superpage would otherwise get split again pretty soon. But I
>> think this would better be the subject of a separate change anyway.
>>
>> Of course this could also be helped by more "aware" kernel side
>> behavior: They could avoid immediately mapping freed page tables
>> writable again, in anticipation of re-using that same page for another
>> page table elsewhere.
> 
> Could we provide an option to select whether to use super-pages for
> IOMMU, so that PV domains could keep the previous behavior?

Hmm, I did (a while ago) consider adding a command line option, largely
to have something in case of problems, but here you're asking about a
per-domain setting. Possible, sure, but I have to admit I'm always
somewhat hesitant when it comes to changes that require touching the tool
stack in non-trivial ways (required besides a separate Dom0 control).

It's also not clear what granularity we'd want to allow control at:
Just yes/no, or also an upper bound on the page sizes permitted, or
even a map of (dis)allowed page sizes?

Finally, what would the behavior be for HVM guests using shared page
tables? Should the IOMMU option there also suppress CPU-side large
pages? Or should the IOMMU option, when not fulfillable with shared
page tables, lead to use of separate page tables (and an error if
shared page tables were explicitly requested)?

Jan



^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v5 12/15] VT-d: replace all-contiguous page tables by superpage mappings
  2022-06-02  9:58     ` Jan Beulich
@ 2022-06-02 10:31       ` Roger Pau Monné
  0 siblings, 0 replies; 38+ messages in thread
From: Roger Pau Monné @ 2022-06-02 10:31 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Paul Durrant

On Thu, Jun 02, 2022 at 11:58:48AM +0200, Jan Beulich wrote:
> On 02.06.2022 11:35, Roger Pau Monné wrote:
> > On Fri, May 27, 2022 at 01:19:55PM +0200, Jan Beulich wrote:
> >> When a page table ends up with all contiguous entries (including all
> >> identical attributes), it can be replaced by a superpage entry at the
> >> next higher level. The page table itself can then be scheduled for
> >> freeing.
> >>
> >> The adjustment to LEVEL_MASK is merely to avoid leaving a latent trap
> >> for whenever we (and obviously hardware) start supporting 512G mappings.
> >>
> >> Note that cache sync-ing is likely more strict than necessary. This is
> >> both to be on the safe side as well as to maintain the pattern of all
> >> updates of (potentially) live tables being accompanied by a flush (if so
> >> needed).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> >> ---
> >> Unlike the freeing of all-empty page tables, this causes quite a bit of
> >> back and forth for PV domains, due to their mapping/unmapping of pages
> >> when they get converted to/from being page tables. It may therefore be
> >> worth considering to delay re-coalescing a little, to avoid doing so
> >> when the superpage would otherwise get split again pretty soon. But I
> >> think this would better be the subject of a separate change anyway.
> >>
> >> Of course this could also be helped by more "aware" kernel side
> >> behavior: They could avoid immediately mapping freed page tables
> >> writable again, in anticipation of re-using that same page for another
> >> page table elsewhere.
> > 
> > Could we provide an option to select whether to use super-pages for
> > IOMMU, so that PV domains could keep the previous behavior?
> 
> Hmm, I did (a while ago) consider adding a command line option, largely
> to have something in case of problems, but here you're asking about a
> per-domain setting. Possible, sure, but I have to admit I'm always
> somewhat hesitant when it comes to changes that require touching the tool
> stack in non-trivial ways (required besides a separate Dom0 control).

Well, per-domain is always better IMO, but I don't want to block you
on this, so I guess a command line option would be OK.

Per-domain would IMO be helpful in this case because an admin might
wish to disable IOMMU super-pages just for PV guests, in order to
prevent the back-and-forth in that case.  We could also do so with a
command line option but it's not the most user-friendly approach.

> It's also not clear what granularity we'd want to allow control at:
> Just yes/no, or also an upper bound on the page sizes permitted, or
> even a map of (dis)allowed page sizes?

I would be fine with just yes/no.  I don't think we need to complicate
the logic; this should be a fallback in case things don't work as
expected.
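
Something as simple as a global boolean knob would do for that - a
sketch, assuming the usual boolean_param() plumbing (the option and
variable names are invented here, and the series may well end up
spelling this differently):

#include <xen/param.h>

static bool __read_mostly iommu_superpages = true;
boolean_param("iommu-superpages", iommu_superpages);

/* The mapping / coalescing paths would then simply gate on this. */
static bool iommu_use_superpages(void)
{
    return iommu_superpages;
}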

> Finally, what would the behavior be for HVM guests using shared page
> tables? Should the IOMMU option there also suppress CPU-side large
> pages? Or should the IOMMU option, when not fulfillable with shared
> page tables, lead to use of separate page tables (and an error if
> shared page tables were explicitly requested)?

I think the option should error out (or be ignored?) when used with
shared page tables; there are already options to control the page
sizes for the CPU-side page tables, and those should be used when
using shared page tables.

Thanks, Roger.


^ permalink raw reply	[flat|nested] 38+ messages in thread

end of thread, other threads:[~2022-06-02 10:32 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-27 11:10 [PATCH v5 00/15] IOMMU: superpage support when not sharing pagetables Jan Beulich
2022-05-27 11:12 ` [PATCH v5 01/15] IOMMU/x86: restrict IO-APIC mappings for PV Dom0 Jan Beulich
2022-05-31 14:40   ` Roger Pau Monné
2022-05-31 15:40     ` Jan Beulich
2022-05-31 16:15       ` Roger Pau Monné
2022-06-01  7:10         ` Jan Beulich
2022-06-01  8:17           ` Roger Pau Monné
2022-06-01 15:10             ` Jan Beulich
2022-05-27 11:12 ` [PATCH v5 02/15] IOMMU/x86: perform PV Dom0 mappings in batches Jan Beulich
2022-05-31 16:01   ` Roger Pau Monné
2022-06-01  7:30     ` Jan Beulich
2022-06-01  9:08       ` Roger Pau Monné
2022-05-27 11:13 ` [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables Jan Beulich
2022-05-31 16:25   ` Roger Pau Monné
2022-06-01  7:32     ` Jan Beulich
2022-06-01  9:24       ` Roger Pau Monné
2022-06-01 15:25         ` Jan Beulich
2022-06-02  8:57           ` Roger Pau Monné
2022-05-27 11:13 ` [PATCH v5 04/15] AMD/IOMMU: allow use of superpage mappings Jan Beulich
2022-05-27 11:14 ` [PATCH v5 05/15] VT-d: " Jan Beulich
2022-05-27 11:16 ` [PATCH v5 06/15] IOMMU: fold flush-all hook into "flush one" Jan Beulich
2022-05-27 11:17 ` [PATCH v5 07/15] x86: introduce helper for recording degree of contiguity in page tables Jan Beulich
2022-06-01 11:29   ` Roger Pau Monné
2022-06-01 12:11     ` Jan Beulich
2022-06-01 13:02       ` Roger Pau Monné
2022-05-27 11:17 ` [PATCH v5 08/15] IOMMU/x86: prefill newly allocate " Jan Beulich
2022-06-01 12:59   ` Roger Pau Monné
2022-06-01 13:17     ` Jan Beulich
2022-05-27 11:18 ` [PATCH v5 09/15] AMD/IOMMU: free all-empty " Jan Beulich
2022-05-27 11:19 ` [PATCH v5 10/15] VT-d: " Jan Beulich
2022-05-27 11:19 ` [PATCH v5 11/15] AMD/IOMMU: replace all-contiguous page tables by superpage mappings Jan Beulich
2022-05-27 11:19 ` [PATCH v5 12/15] VT-d: " Jan Beulich
2022-06-02  9:35   ` Roger Pau Monné
2022-06-02  9:58     ` Jan Beulich
2022-06-02 10:31       ` Roger Pau Monné
2022-05-27 11:20 ` [PATCH v5 13/15] IOMMU/x86: add perf counters for page table splitting / coalescing Jan Beulich
2022-05-27 11:20 ` [PATCH v5 14/15] VT-d: fold iommu_flush_iotlb{,_pages}() Jan Beulich
2022-05-27 11:21 ` [PATCH v5 15/15] VT-d: fold dma_pte_clear_one() into its only caller Jan Beulich
