xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM
@ 2021-04-12 14:03 Jan Beulich
  2021-04-12 14:05 ` [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only Jan Beulich
                   ` (12 more replies)
  0 siblings, 13 replies; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:03 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Since it was brought up in the discussion of v1: I think the end
goal is for mm/p2m.c to become an HVM-only file. Any "wrappers"
also trying to take care of !paging_mode_translate() guests ought to
be moved elsewhere. To see what actually still needs taking care of,
incrementally extending the #ifdef CONFIG_HVM regions there is the
way to go imo.
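
To illustrate the pattern the series keeps applying (with a made-up
hvm_only_op() standing in for the real functions): the implementation
moves under #ifdef CONFIG_HVM, while the header grows a trivial stub so
that any remaining !HVM caller keeps compiling and, where the condition
guarding the call is compile-time false, gets optimized away entirely.

    /* p2m.c - real implementation, built for HVM only (sketch) */
    #ifdef CONFIG_HVM
    int hvm_only_op(struct domain *d, gfn_t gfn)
    {
        /* ... operates on the translated p2m ... */
        return 0;
    }
    #endif

    /* p2m.h - stub keeps !HVM callers compiling (error value illustrative) */
    #ifdef CONFIG_HVM
    int hvm_only_op(struct domain *d, gfn_t gfn);
    #else
    static inline int hvm_only_op(struct domain *d, gfn_t gfn)
    {
        return -EIO;
    }
    #endif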

Compared to v1 there are many new patches here plus build fixes to
two of the three remaining ones from v1.

01: p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
02: p2m: {,un}map_mmio_regions() are HVM-only
03: mm: the gva_to_gfn() hook is HVM-only
04: AMD/IOMMU: guest IOMMU support is for HVM only
05: p2m: change_entry_type_* hooks are HVM-only
06: p2m: hardware-log-dirty related hooks are HVM-only
07: p2m: the recalc hook is HVM-only
08: mem-access is HVM-only
09: make mem-paging configurable and default it to off for being unsupported
10: p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only
11: p2m: write_p2m_entry_{pre,post} hooks are HVM-only
12: p2m: re-arrange struct p2m_domain

Jan



* [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
@ 2021-04-12 14:05 ` Jan Beulich
  2021-04-29 13:17   ` Roger Pau Monné
  2021-04-12 14:06 ` [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() " Jan Beulich
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:05 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Extend the #ifdef from inside set_typed_p2m_entry() to around all three
functions. Add ASSERT_UNREACHABLE() to set_typed_p2m_entry()'s safety
check path.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1260,6 +1260,8 @@ int p2m_finish_type_change(struct domain
     return rc;
 }
 
+#ifdef CONFIG_HVM
+
 /*
  * Returns:
  *    0              for success
@@ -1280,7 +1282,10 @@ static int set_typed_p2m_entry(struct do
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return -EIO;
+    }
 
     gfn_lock(p2m, gfn, order);
     omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, &cur_order, NULL);
@@ -1313,7 +1318,6 @@ static int set_typed_p2m_entry(struct do
     if ( rc )
         gdprintk(XENLOG_ERR, "p2m_set_entry: %#lx:%u -> %d (0x%"PRI_mfn")\n",
                  gfn_l, order, rc, mfn_x(mfn));
-#ifdef CONFIG_HVM
     else if ( p2m_is_pod(ot) )
     {
         pod_lock(p2m);
@@ -1321,7 +1325,6 @@ static int set_typed_p2m_entry(struct do
         BUG_ON(p2m->pod.entry_count < 0);
         pod_unlock(p2m);
     }
-#endif
     gfn_unlock(p2m, gfn, order);
 
     return rc;
@@ -1349,6 +1352,8 @@ int set_mmio_p2m_entry(struct domain *d,
                                p2m_get_hostp2m(d)->default_access);
 }
 
+#endif /* CONFIG_HVM */
+
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {
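
For reference, the shape of the guard after this change (a sketch pieced
together from the hunks above, relying on the usual debug/release split
of ASSERT_UNREACHABLE()):

    if ( !paging_mode_translate(d) )
    {
        /*
         * Debug builds: loud assertion failure, since no caller should
         * get here for a non-translated guest.  Release builds: the
         * macro expands to nothing, so -EIO remains the safety net.
         */
        ASSERT_UNREACHABLE();
        return -EIO;
    }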




* [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() are HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
  2021-04-12 14:05 ` [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only Jan Beulich
@ 2021-04-12 14:06 ` Jan Beulich
  2021-04-29 14:48   ` Roger Pau Monné
  2021-04-12 14:07 ` [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only Jan Beulich
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Roger Pau Monné

Mirror the "translated" check the functions do to do_domctl(), allowing
the calls to be DCEd by the compiler. Add ASSERT_UNREACHABLE() to the
original checks.

Also arrange for {set,clear}_mmio_p2m_entry() and
{set,clear}_identity_p2m_entry() to respectively live next to each
other, such that clear_mmio_p2m_entry() can also be covered by the
#ifdef already covering set_mmio_p2m_entry().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Fix build.
---
Arguably the original checks, returning success, could also be dropped
at this point.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1352,52 +1352,6 @@ int set_mmio_p2m_entry(struct domain *d,
                                p2m_get_hostp2m(d)->default_access);
 }
 
-#endif /* CONFIG_HVM */
-
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
-                           p2m_access_t p2ma, unsigned int flag)
-{
-    p2m_type_t p2mt;
-    p2m_access_t a;
-    gfn_t gfn = _gfn(gfn_l);
-    mfn_t mfn;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    int ret;
-
-    if ( !paging_mode_translate(p2m->domain) )
-    {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
-    }
-
-    gfn_lock(p2m, gfn, 0);
-
-    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
-
-    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
-        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
-                            p2m_mmio_direct, p2ma);
-    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
-        ret = 0;
-    else
-    {
-        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
-            ret = 0;
-        else
-            ret = -EBUSY;
-        printk(XENLOG_G_WARNING
-               "Cannot setup identity map d%d:%lx,"
-               " gfn already mapped to %lx.\n",
-               d->domain_id, gfn_l, mfn_x(mfn));
-    }
-
-    gfn_unlock(p2m, gfn, 0);
-    return ret;
-}
-
 /*
  * Returns:
  *    0        for success
@@ -1447,6 +1401,52 @@ int clear_mmio_p2m_entry(struct domain *
     return rc;
 }
 
+#endif /* CONFIG_HVM */
+
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
+                           p2m_access_t p2ma, unsigned int flag)
+{
+    p2m_type_t p2mt;
+    p2m_access_t a;
+    gfn_t gfn = _gfn(gfn_l);
+    mfn_t mfn;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int ret;
+
+    if ( !paging_mode_translate(p2m->domain) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
+                                1ul << PAGE_ORDER_4K,
+                                IOMMUF_readable | IOMMUF_writable);
+    }
+
+    gfn_lock(p2m, gfn, 0);
+
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
+
+    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
+        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
+                            p2m_mmio_direct, p2ma);
+    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
+        ret = 0;
+    else
+    {
+        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
+            ret = 0;
+        else
+            ret = -EBUSY;
+        printk(XENLOG_G_WARNING
+               "Cannot setup identity map d%d:%lx,"
+               " gfn already mapped to %lx.\n",
+               d->domain_id, gfn_l, mfn_x(mfn));
+    }
+
+    gfn_unlock(p2m, gfn, 0);
+    return ret;
+}
+
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 {
     p2m_type_t p2mt;
@@ -1892,6 +1892,8 @@ void *map_domain_gfn(struct p2m_domain *
     return map_domain_page(*mfn);
 }
 
+#ifdef CONFIG_HVM
+
 static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
 {
@@ -1932,7 +1934,10 @@ int map_mmio_regions(struct domain *d,
     unsigned int iter, order;
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return 0;
+    }
 
     for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
           i += 1UL << order, ++iter )
@@ -1964,7 +1969,10 @@ int unmap_mmio_regions(struct domain *d,
     unsigned int iter, order;
 
     if ( !paging_mode_translate(d) )
+    {
+        ASSERT_UNREACHABLE();
         return 0;
+    }
 
     for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
           i += 1UL << order, ++iter )
@@ -1986,8 +1994,6 @@ int unmap_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }
 
-#ifdef CONFIG_HVM
-
 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
                                p2m_type_t *t, p2m_access_t *a,
                                bool prepopulate)
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -750,6 +750,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( ret )
             break;
 
+        if ( !paging_mode_translate(d) )
+            break;
+
         if ( add )
         {
             printk(XENLOG_G_DEBUG
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -678,11 +678,19 @@ int p2m_finish_type_change(struct domain
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
+#ifdef CONFIG_HVM
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
 int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
                          unsigned int order);
+#else
+static inline int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn,
+                                       mfn_t mfn, unsigned int order)
+{
+    return -EIO;
+}
+#endif
 
 /* Set identity addresses in the p2m table (for pass-through) */
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
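
Schematically, the affected do_domctl() case ends up as below (abridged;
the existing map/unmap calls are unchanged). In builds where
paging_mode_translate() resolves to constant false - which the DCE
argument above relies on for !HVM - the early break makes the subsequent
calls unreachable, letting the compiler drop them together with the
now-#ifdef'd function bodies.

        /* ... existing permission / range checks ... */

        if ( !paging_mode_translate(d) )
            break;              /* non-translated guests: nothing to do */

        /* ... existing map_mmio_regions() / unmap_mmio_regions() calls ... */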




* [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
  2021-04-12 14:05 ` [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only Jan Beulich
  2021-04-12 14:06 ` [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() " Jan Beulich
@ 2021-04-12 14:07 ` Jan Beulich
  2021-04-15 16:22   ` Tim Deegan
  2021-04-12 14:07 ` [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only Jan Beulich
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:07 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan

As is the adjacent ga_to_gfn() one, as well as paging_gva_to_gfn().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Fix !SHADOW_PAGING && !HVM build.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1793,7 +1793,6 @@ void np2m_schedule(int dir)
         p2m_unlock(p2m);
     }
 }
-#endif
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
                                 unsigned long va,
@@ -1844,6 +1843,8 @@ unsigned long paging_gva_to_gfn(struct v
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * If the map is non-NULL, we leave this function having acquired an extra ref
  * on mfn_to_page(*mfn).  In all cases, *pfec contains appropriate
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3030,6 +3030,7 @@ static bool sh_invlpg(struct vcpu *v, un
     return true;
 }
 
+#ifdef CONFIG_HVM
 
 static unsigned long
 sh_gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
@@ -3063,6 +3064,7 @@ sh_gva_to_gfn(struct vcpu *v, struct p2m
     return gfn_x(gfn);
 }
 
+#endif /* CONFIG_HVM */
 
 static inline void
 sh_update_linear_entries(struct vcpu *v)
@@ -4183,7 +4185,9 @@ int sh_audit_l4_table(struct vcpu *v, mf
 const struct paging_mode sh_paging_mode = {
     .page_fault                    = sh_page_fault,
     .invlpg                        = sh_invlpg,
+#ifdef CONFIG_HVM
     .gva_to_gfn                    = sh_gva_to_gfn,
+#endif
     .update_cr3                    = sh_update_cr3,
     .update_paging_modes           = shadow_update_paging_modes,
     .flush_tlb                     = shadow_flush_tlb,
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -43,12 +43,14 @@ static bool _invlpg(struct vcpu *v, unsi
     return true;
 }
 
+#ifdef CONFIG_HVM
 static unsigned long _gva_to_gfn(struct vcpu *v, struct p2m_domain *p2m,
                                  unsigned long va, uint32_t *pfec)
 {
     ASSERT_UNREACHABLE();
     return gfn_x(INVALID_GFN);
 }
+#endif
 
 static void _update_cr3(struct vcpu *v, int do_locking, bool noflush)
 {
@@ -63,7 +65,9 @@ static void _update_paging_modes(struct
 static const struct paging_mode sh_paging_none = {
     .page_fault                    = _page_fault,
     .invlpg                        = _invlpg,
+#ifdef CONFIG_HVM
     .gva_to_gfn                    = _gva_to_gfn,
+#endif
     .update_cr3                    = _update_cr3,
     .update_paging_modes           = _update_paging_modes,
 };
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -127,6 +127,7 @@ struct paging_mode {
                                             struct cpu_user_regs *regs);
     bool          (*invlpg                )(struct vcpu *v,
                                             unsigned long linear);
+#ifdef CONFIG_HVM
     unsigned long (*gva_to_gfn            )(struct vcpu *v,
                                             struct p2m_domain *p2m,
                                             unsigned long va,
@@ -136,6 +137,7 @@ struct paging_mode {
                                             unsigned long cr3,
                                             paddr_t ga, uint32_t *pfec,
                                             unsigned int *page_order);
+#endif
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
@@ -287,6 +289,8 @@ unsigned long paging_gva_to_gfn(struct v
                                 unsigned long va,
                                 uint32_t *pfec);
 
+#ifdef CONFIG_HVM
+
 /* Translate a guest address using a particular CR3 value.  This is used
  * to by nested HAP code, to walk the guest-supplied NPT tables as if
  * they were pagetables.
@@ -305,6 +309,8 @@ static inline unsigned long paging_ga_to
         page_order);
 }
 
+#endif /* CONFIG_HVM */
+
 /* Update all the things that are derived from the guest's CR3.
  * Called when the guest changes CR3; the caller can then use v->arch.cr3
  * as the value to load into the host CR3 to schedule this vcpu */
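
The pattern in condensed form (a sketch mirroring the hunks above, not a
literal excerpt): once the hook only exists in HVM builds, every
designated initializer of it, including the ASSERT_UNREACHABLE() stub's
initializer in shadow/none.c, has to be guarded the same way, or it
would reference a field that no longer exists.

    struct paging_mode {
        /* ... */
    #ifdef CONFIG_HVM
        unsigned long (*gva_to_gfn)(struct vcpu *v, struct p2m_domain *p2m,
                                    unsigned long va, uint32_t *pfec);
    #endif
        /* ... */
    };

    static const struct paging_mode sh_paging_none = {
        /* ... */
    #ifdef CONFIG_HVM
        .gva_to_gfn = _gva_to_gfn,
    #endif
        /* ... */
    };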




* [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (2 preceding siblings ...)
  2021-04-12 14:07 ` [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only Jan Beulich
@ 2021-04-12 14:07 ` Jan Beulich
  2021-04-29 15:14   ` Roger Pau Monné
  2021-04-12 14:08 ` [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only Jan Beulich
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:07 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, George Dunlap

Generally all of this is dead code anyway, but there's a caller of
guest_iommu_add_ppr_log(), and the code itself calls
p2m_change_type_one(), which is about to become HVM-only. Hence this
code, short of deleting it altogether, needs to become properly HVM-
only as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/drivers/passthrough/amd/Makefile
+++ b/xen/drivers/passthrough/amd/Makefile
@@ -5,4 +5,4 @@ obj-y += pci_amd_iommu.o
 obj-bin-y += iommu_acpi.init.o
 obj-y += iommu_intr.o
 obj-y += iommu_cmd.o
-obj-y += iommu_guest.o
+obj-$(CONFIG_HVM) += iommu_guest.o
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -307,12 +307,16 @@ int __must_check amd_iommu_suspend(void)
 void amd_iommu_crash_shutdown(void);
 
 /* guest iommu support */
+#ifdef CONFIG_HVM
 void amd_iommu_send_guest_cmd(struct amd_iommu *iommu, u32 cmd[]);
 void guest_iommu_add_ppr_log(struct domain *d, u32 entry[]);
 void guest_iommu_add_event_log(struct domain *d, u32 entry[]);
 int guest_iommu_init(struct domain* d);
 void guest_iommu_destroy(struct domain *d);
 int guest_iommu_set_base(struct domain *d, uint64_t base);
+#else
+static inline void guest_iommu_add_ppr_log(struct domain *d, uint32_t entry[]) {}
+#endif
 
 static inline u32 get_field_from_reg_u32(u32 reg_value, u32 mask, u32 shift)
 {
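
The call site then needs no further #ifdef-ary: in a !HVM build the call
simply compiles to nothing via the empty stub. Schematic only, not a
verbatim excerpt from the host PPR log handling code:

    /* host PPR log processing (schematic): */
    guest_iommu_add_ppr_log(d, entry);   /* no-op inline for !HVM */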




* [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (3 preceding siblings ...)
  2021-04-12 14:07 ` [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only Jan Beulich
@ 2021-04-12 14:08 ` Jan Beulich
  2021-04-29 15:49   ` Roger Pau Monné
  2021-04-12 14:08 ` [PATCH v2 06/12] x86/p2m: hardware-log-dirty related " Jan Beulich
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Exclude functions using them from !HVM builds, thus making it possible
to exclude the hooks as well. Also cover the already unused
memory_type_changed hook while inserting the #ifdef in the struct.

While no such check was there before, I can't see how
XEN_DOMCTL_set_broken_page_p2m could have been meant to work for PV the
way it is implemented. Restrict this operation to acting on HVM guests.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1080,9 +1080,12 @@ long arch_do_domctl(
     {
         p2m_type_t pt;
         unsigned long pfn = domctl->u.set_broken_page_p2m.pfn;
-        mfn_t mfn = get_gfn_query(d, pfn, &pt);
 
-        if ( unlikely(!mfn_valid(mfn)) || unlikely(!p2m_is_ram(pt)) )
+        if ( !is_hvm_domain(d) )
+            return -EINVAL;
+
+        if ( unlikely(!mfn_valid(get_gfn_query(d, pfn, &pt))) ||
+             unlikely(!p2m_is_ram(pt)) )
             ret = -EINVAL;
         else
             ret = p2m_change_type_one(d, pfn, pt, p2m_ram_broken);
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -292,6 +292,8 @@ int p2m_is_logdirty_range(struct p2m_dom
     return 0;
 }
 
+#ifdef CONFIG_HVM
+
 static void change_entry_type_global(struct p2m_domain *p2m,
                                      p2m_type_t ot, p2m_type_t nt)
 {
@@ -316,7 +318,6 @@ void p2m_change_entry_type_global(struct
 
     change_entry_type_global(hostp2m, ot, nt);
 
-#ifdef CONFIG_HVM
     if ( unlikely(altp2m_active(d)) )
     {
         unsigned int i;
@@ -331,12 +332,10 @@ void p2m_change_entry_type_global(struct
                 p2m_unlock(altp2m);
             }
     }
-#endif
 
     p2m_unlock(hostp2m);
 }
 
-#ifdef CONFIG_HVM
 /* There's already a memory_type_changed() in asm/mtrr.h. */
 static void _memory_type_changed(struct p2m_domain *p2m)
 {
@@ -369,7 +368,8 @@ void p2m_memory_type_changed(struct doma
 
     p2m_unlock(hostp2m);
 }
-#endif
+
+#endif /* CONFIG_HVM */
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
@@ -876,6 +876,7 @@ guest_physmap_add_page(struct domain *d,
 }
 
 #ifdef CONFIG_HVM
+
 int
 guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                         unsigned int page_order, p2m_type_t t)
@@ -1024,7 +1025,6 @@ out:
 
     return rc;
 }
-#endif
 
 /*
  * Modify the p2m type of a single gfn from ot to nt.
@@ -1161,7 +1161,6 @@ void p2m_change_type_range(struct domain
 
     change_type_range(hostp2m, start, end, ot, nt);
 
-#ifdef CONFIG_HVM
     if ( unlikely(altp2m_active(d)) )
     {
         unsigned int i;
@@ -1176,7 +1175,6 @@ void p2m_change_type_range(struct domain
                 p2m_unlock(altp2m);
             }
     }
-#endif
     hostp2m->defer_nested_flush = 0;
     if ( nestedhvm_enabled(d) )
         p2m_flush_nestedp2m(d);
@@ -1184,6 +1182,8 @@ void p2m_change_type_range(struct domain
     p2m_unlock(hostp2m);
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * Finish p2m type change for gfns which are marked as need_recalc in a range.
  * Uses the current p2m's max_mapped_pfn to further clip the invalidation
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -324,6 +324,7 @@ p2m_next_level(struct p2m_domain *p2m, v
     return rc;
 }
 
+#ifdef CONFIG_HVM
 /*
  * Mark (via clearing the U flag) as needing P2M type re-calculation all valid
  * present entries at the targeted level for the passed in GFN range, which is
@@ -392,6 +393,7 @@ static int p2m_pt_set_recalc_range(struc
 
     return err;
 }
+#endif /* CONFIG_HVM */
 
 /*
  * Handle possibly necessary P2M type re-calculation (U flag clear for a
@@ -930,6 +932,8 @@ pod_retry_l1:
     return (p2m_is_valid(*t) || p2m_is_any_ram(*t)) ? mfn : INVALID_MFN;
 }
 
+#ifdef CONFIG_HVM
+
 static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
                                             p2m_type_t ot, p2m_type_t nt)
 {
@@ -1011,6 +1015,8 @@ static int p2m_pt_change_entry_type_rang
     return err;
 }
 
+#endif /* CONFIG_HVM */
+
 #if P2M_AUDIT
 static long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
@@ -1168,8 +1174,10 @@ void p2m_pt_init(struct p2m_domain *p2m)
     p2m->set_entry = p2m_pt_set_entry;
     p2m->get_entry = p2m_pt_get_entry;
     p2m->recalc = do_recalc;
+#ifdef CONFIG_HVM
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
+#endif
 
     /* Still too early to use paging_mode_hap(). */
     if ( hap_enabled(p2m->domain) )
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -588,6 +588,7 @@ static int paging_log_dirty_op(struct do
     return rv;
 }
 
+#ifdef CONFIG_HVM
 void paging_log_dirty_range(struct domain *d,
                            unsigned long begin_pfn,
                            unsigned long nr,
@@ -617,6 +618,7 @@ void paging_log_dirty_range(struct domai
 
     guest_flush_tlb_mask(d, d->dirty_cpumask);
 }
+#endif
 
 /*
  * Callers must supply log_dirty_ops for the log dirty code to call. This
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -264,6 +264,7 @@ struct p2m_domain {
     void               (*enable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*disable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*flush_hardware_cached_dirty)(struct p2m_domain *p2m);
+#ifdef CONFIG_HVM
     void               (*change_entry_type_global)(struct p2m_domain *p2m,
                                                    p2m_type_t ot,
                                                    p2m_type_t nt);
@@ -272,6 +273,7 @@ struct p2m_domain {
                                                   unsigned long first_gfn,
                                                   unsigned long last_gfn);
     void               (*memory_type_changed)(struct p2m_domain *p2m);
+#endif
     void               (*write_p2m_entry_pre)(struct domain *d,
                                               unsigned long gfn,
                                               l1_pgentry_t old,




* [PATCH v2 06/12] x86/p2m: hardware-log-dirty related hooks are HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (4 preceding siblings ...)
  2021-04-12 14:08 ` [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only Jan Beulich
@ 2021-04-12 14:08 ` Jan Beulich
  2021-04-29 16:05   ` Roger Pau Monné
  2021-04-12 14:09 ` [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only Jan Beulich
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:08 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Exclude functions using them from !HVM builds, thus making it possible
to exclude the hooks as well.

By moving an #endif in p2m.c (instead of introducing yet another one)
p2m_{get,set}_ioreq_server() get excluded for !HVM builds as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -369,8 +369,6 @@ void p2m_memory_type_changed(struct doma
     p2m_unlock(hostp2m);
 }
 
-#endif /* CONFIG_HVM */
-
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
                          struct ioreq_server *s)
@@ -464,6 +462,8 @@ void p2m_flush_hardware_cached_dirty(str
     }
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * Force a synchronous P2M TLB flush if a deferred flush is pending.
  *
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -261,10 +261,10 @@ struct p2m_domain {
                                     bool_t *sve);
     int                (*recalc)(struct p2m_domain *p2m,
                                  unsigned long gfn);
+#ifdef CONFIG_HVM
     void               (*enable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*disable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*flush_hardware_cached_dirty)(struct p2m_domain *p2m);
-#ifdef CONFIG_HVM
     void               (*change_entry_type_global)(struct p2m_domain *p2m,
                                                    p2m_type_t ot,
                                                    p2m_type_t nt);
@@ -630,6 +630,9 @@ int guest_physmap_add_page(struct domain
 /* Set a p2m range as populate-on-demand */
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
                                           unsigned int order);
+
+#ifdef CONFIG_HVM
+
 /* Enable hardware-assisted log-dirty. */
 void p2m_enable_hardware_log_dirty(struct domain *d);
 
@@ -639,6 +642,12 @@ void p2m_disable_hardware_log_dirty(stru
 /* Flush hardware cached dirty GFNs */
 void p2m_flush_hardware_cached_dirty(struct domain *d);
 
+#else
+
+static inline void p2m_flush_hardware_cached_dirty(struct domain *d) {}
+
+#endif
+
 /* Change types across all p2m entries in a domain */
 void p2m_change_entry_type_global(struct domain *d, 
                                   p2m_type_t ot, p2m_type_t nt);
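
Of the three wrappers only p2m_flush_hardware_cached_dirty() gains a !HVM
stub, presumably because it is the only one still referenced from code
built in such configurations; its callers then stay unchanged. Sketch of
the resulting call-site view (schematic, not a verbatim excerpt):

    /* !HVM stub (from the hunk above): */
    static inline void p2m_flush_hardware_cached_dirty(struct domain *d) {}

    /* so a caller built for both configurations stays as-is: */
    p2m_flush_hardware_cached_dirty(d);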




* [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (5 preceding siblings ...)
  2021-04-12 14:08 ` [PATCH v2 06/12] x86/p2m: hardware-log-dirty related " Jan Beulich
@ 2021-04-12 14:09 ` Jan Beulich
  2021-04-30  9:27   ` Roger Pau Monné
  2021-04-12 14:10 ` [PATCH v2 08/12] x86: mem-access " Jan Beulich
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:09 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Exclude functions involved in its use from !HVM builds, thus making it
possible to exclude the hook as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -1173,8 +1173,8 @@ void p2m_pt_init(struct p2m_domain *p2m)
 {
     p2m->set_entry = p2m_pt_set_entry;
     p2m->get_entry = p2m_pt_get_entry;
-    p2m->recalc = do_recalc;
 #ifdef CONFIG_HVM
+    p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
 #endif
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1182,8 +1182,6 @@ void p2m_change_type_range(struct domain
     p2m_unlock(hostp2m);
 }
 
-#endif /* CONFIG_HVM */
-
 /*
  * Finish p2m type change for gfns which are marked as need_recalc in a range.
  * Uses the current p2m's max_mapped_pfn to further clip the invalidation
@@ -1234,7 +1232,6 @@ int p2m_finish_type_change(struct domain
     if ( rc < 0 )
         goto out;
 
-#ifdef CONFIG_HVM
     if ( unlikely(altp2m_active(d)) )
     {
         unsigned int i;
@@ -1252,7 +1249,6 @@ int p2m_finish_type_change(struct domain
                     goto out;
             }
     }
-#endif
 
  out:
     p2m_unlock(hostp2m);
@@ -1260,8 +1256,6 @@ int p2m_finish_type_change(struct domain
     return rc;
 }
 
-#ifdef CONFIG_HVM
-
 /*
  * Returns:
  *    0              for success
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -259,9 +259,9 @@ struct p2m_domain {
                                     p2m_query_t q,
                                     unsigned int *page_order,
                                     bool_t *sve);
+#ifdef CONFIG_HVM
     int                (*recalc)(struct p2m_domain *p2m,
                                  unsigned long gfn);
-#ifdef CONFIG_HVM
     void               (*enable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*disable_hardware_log_dirty)(struct p2m_domain *p2m);
     void               (*flush_hardware_cached_dirty)(struct p2m_domain *p2m);




* [PATCH v2 08/12] x86: mem-access is HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (6 preceding siblings ...)
  2021-04-12 14:09 ` [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only Jan Beulich
@ 2021-04-12 14:10 ` Jan Beulich
  2021-04-12 14:16   ` Tamas K Lengyel
  2021-04-12 14:48   ` Isaila Alexandru
  2021-04-12 14:12 ` [PATCH v2 09/12] x86: make mem-paging configurable and default it to off for being unsupported Jan Beulich
                   ` (4 subsequent siblings)
  12 siblings, 2 replies; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:10 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Tamas K Lengyel, Alexandru Isaila,
	Petre Pircalabu

Excluding the file from being built for !HVM allows the #ifdef-ary to be
removed from it.

The new HVM dependency on the Kconfig option is benign for Arm.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -15,7 +15,6 @@ config X86
 	select HAS_FAST_MULTIPLY
 	select HAS_IOPORTS
 	select HAS_KEXEC
-	select MEM_ACCESS_ALWAYS_ON
 	select HAS_MEM_PAGING
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
@@ -92,6 +91,7 @@ config PV_LINEAR_PT
 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
 	select IOREQ_SERVER
+	select MEM_ACCESS_ALWAYS_ON
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -139,7 +139,6 @@ bool p2m_mem_access_emulate_check(struct
     return violation;
 }
 
-#ifdef CONFIG_HVM
 bool p2m_mem_access_check(paddr_t gpa, unsigned long gla,
                           struct npfec npfec,
                           vm_event_request_t **req_ptr)
@@ -282,7 +281,6 @@ int p2m_set_altp2m_mem_access(struct dom
      */
     return ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1);
 }
-#endif
 
 static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
                           struct p2m_domain *ap2m, p2m_access_t a,
@@ -290,7 +288,6 @@ static int set_mem_access(struct domain
 {
     int rc = 0;
 
-#ifdef CONFIG_HVM
     if ( ap2m )
     {
         rc = p2m_set_altp2m_mem_access(d, p2m, ap2m, a, gfn);
@@ -299,9 +296,6 @@ static int set_mem_access(struct domain
             rc = 0;
     }
     else
-#else
-    ASSERT(!ap2m);
-#endif
     {
         p2m_access_t _a;
         p2m_type_t t;
@@ -362,7 +356,6 @@ long p2m_set_mem_access(struct domain *d
     long rc = 0;
 
     /* altp2m view 0 is treated as the hostp2m */
-#ifdef CONFIG_HVM
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
@@ -372,9 +365,6 @@ long p2m_set_mem_access(struct domain *d
 
         ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
     }
-#else
-    ASSERT(!altp2m_idx);
-#endif
 
     if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
         return -EINVAL;
@@ -422,7 +412,6 @@ long p2m_set_mem_access_multi(struct dom
     long rc = 0;
 
     /* altp2m view 0 is treated as the hostp2m */
-#ifdef CONFIG_HVM
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
@@ -432,9 +421,6 @@ long p2m_set_mem_access_multi(struct dom
 
         ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
     }
-#else
-    ASSERT(!altp2m_idx);
-#endif
 
     p2m_lock(p2m);
     if ( ap2m )
@@ -484,7 +470,6 @@ int p2m_get_mem_access(struct domain *d,
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-#ifdef CONFIG_HVM
     if ( !altp2m_active(d) )
     {
         if ( altp2m_idx )
@@ -499,9 +484,6 @@ int p2m_get_mem_access(struct domain *d,
 
         p2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
     }
-#else
-    ASSERT(!altp2m_idx);
-#endif
 
     return _p2m_get_mem_access(p2m, gfn, access);
 }
@@ -512,7 +494,6 @@ void arch_p2m_set_access_required(struct
 
     p2m_get_hostp2m(d)->access_required = access_required;
 
-#ifdef CONFIG_HVM
     if ( altp2m_active(d) )
     {
         unsigned int i;
@@ -524,7 +505,6 @@ void arch_p2m_set_access_required(struct
                 p2m->access_required = access_required;
         }
     }
-#endif
 }
 
 bool p2m_mem_access_sanity_check(const struct domain *d)
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -265,6 +265,7 @@ void vm_event_emulate_check(struct vcpu
         return;
     }
 
+#ifdef CONFIG_HVM
     switch ( rsp->reason )
     {
     case VM_EVENT_REASON_MEM_ACCESS:
@@ -298,6 +299,7 @@ void vm_event_emulate_check(struct vcpu
     default:
         break;
     };
+#endif
 }
 
 void vm_event_reset_vmtrace(struct vcpu *v)
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -58,6 +58,7 @@ config MEM_ACCESS_ALWAYS_ON
 config MEM_ACCESS
 	def_bool MEM_ACCESS_ALWAYS_ON
 	prompt "Memory Access and VM events" if !MEM_ACCESS_ALWAYS_ON
+	depends on HVM
 	---help---
 
 	  Framework to configure memory access types for guests and receive




* [PATCH v2 09/12] x86: make mem-paging configurable and default it to off for being unsupported
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (7 preceding siblings ...)
  2021-04-12 14:10 ` [PATCH v2 08/12] x86: mem-access " Jan Beulich
@ 2021-04-12 14:12 ` Jan Beulich
  2021-04-12 14:18   ` Tamas K Lengyel
                     ` (2 more replies)
  2021-04-12 14:13 ` [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only Jan Beulich
                   ` (3 subsequent siblings)
  12 siblings, 3 replies; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Roger Pau Monné,
	Daniel de Graaf, Paul Durrant, Tamas K Lengyel, Petre Pircalabu,
	Alexandru Isaila

While doing so, make the option dependent upon HVM, which really is the
main purpose of the change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -15,7 +15,6 @@ config X86
 	select HAS_FAST_MULTIPLY
 	select HAS_IOPORTS
 	select HAS_KEXEC
-	select HAS_MEM_PAGING
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
 	select HAS_PCI
@@ -251,6 +250,10 @@ config HYPERV_GUEST
 
 endif
 
+config MEM_PAGING
+	bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED
+	depends on HVM
+
 config MEM_SHARING
 	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
 	depends on HVM
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1932,9 +1932,11 @@ int hvm_hap_nested_page_fault(paddr_t gp
         goto out_put_gfn;
     }
 
+#ifdef CONFIG_MEM_PAGING
     /* Check if the page has been paged out */
     if ( p2m_is_paged(p2mt) || (p2mt == p2m_ram_paging_out) )
         paged = 1;
+#endif
 
 #ifdef CONFIG_MEM_SHARING
     /* Mem sharing: if still shared on write access then its enomem */
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -5,7 +5,7 @@ obj-$(CONFIG_HVM) += altp2m.o
 obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
-obj-y += mem_paging.o
+obj-$(CONFIG_MEM_PAGING) += mem_paging.o
 obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-y += p2m.o p2m-pt.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -155,8 +155,10 @@ int compat_arch_memory_op(unsigned long
     case XENMEM_get_sharing_shared_pages:
         return mem_sharing_get_nr_shared_mfns();
 
+#ifdef CONFIG_MEM_PAGING
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
+#endif
 
 #ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -994,8 +994,10 @@ long subarch_memory_op(unsigned long cmd
     case XENMEM_get_sharing_shared_pages:
         return mem_sharing_get_nr_shared_mfns();
 
+#ifdef CONFIG_MEM_PAGING
     case XENMEM_paging_op:
         return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
+#endif
 
 #ifdef CONFIG_MEM_SHARING
     case XENMEM_sharing_op:
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -40,9 +40,6 @@ config HAS_IOPORTS
 config HAS_KEXEC
 	bool
 
-config HAS_MEM_PAGING
-	bool
-
 config HAS_PDX
 	bool
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1096,7 +1096,7 @@ static void complete_domain_destroy(stru
     free_xenoprof_pages(d);
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     xfree(d->vm_event_paging);
 #endif
     xfree(d->vm_event_monitor);
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1856,7 +1856,7 @@ int check_get_page_from_gfn(struct domai
 
     page = get_page_from_gfn(d, gfn_x(gfn), &p2mt, q);
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     if ( p2m_is_paging(p2mt) )
     {
         if ( page )
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -390,7 +390,7 @@ static int vm_event_resume(struct domain
         /* Check flags which apply only when the vCPU is paused */
         if ( atomic_read(&v->vm_event_pause_count) )
         {
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
             if ( rsp.reason == VM_EVENT_REASON_MEM_PAGING )
                 p2m_mem_paging_resume(d, &rsp);
 #endif
@@ -521,7 +521,7 @@ int __vm_event_claim_slot(struct domain
         return vm_event_grab_slot(ved, current->domain != d);
 }
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
 /* Registered with Xen-bound event channel for incoming notifications. */
 static void mem_paging_notification(struct vcpu *v, unsigned int port)
 {
@@ -546,7 +546,7 @@ static void mem_sharing_notification(str
 /* Clean up on domain destruction */
 void vm_event_cleanup(struct domain *d)
 {
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     if ( vm_event_check_ring(d->vm_event_paging) )
     {
         /* Destroying the wait queue head means waking up all
@@ -613,7 +613,7 @@ int vm_event_domctl(struct domain *d, st
 
     switch ( vec->mode )
     {
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     case XEN_DOMCTL_VM_EVENT_OP_PAGING:
     {
         rc = -EINVAL;
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -23,6 +23,7 @@
 
 #include <asm/hvm/io.h>
 #include <asm/io_apic.h>
+#include <asm/mem_paging.h>
 #include <asm/setup.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
@@ -336,7 +337,7 @@ bool arch_iommu_use_permitted(const stru
      */
     return d == dom_io ||
            (likely(!mem_sharing_enabled(d)) &&
-            likely(!vm_event_check_ring(d->vm_event_paging)) &&
+            likely(!mem_paging_enabled(d)) &&
             likely(!p2m_get_hostp2m(d)->global_logdirty));
 }
 
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -24,6 +24,12 @@
 
 int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
 
+#ifdef CONFIG_MEM_PAGING
+# define mem_paging_enabled(d) vm_event_check_ring((d)->vm_event_paging)
+#else
+# define mem_paging_enabled(d) false
+#endif
+
 #endif /*__ASM_X86_MEM_PAGING_H__ */
 
 /*
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -136,11 +136,16 @@ typedef unsigned int p2m_query_t;
 #define P2M_PAGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
                             | p2m_to_mask(p2m_ram_logdirty) )
 
+#ifdef CONFIG_MEM_PAGING
 #define P2M_PAGING_TYPES (p2m_to_mask(p2m_ram_paging_out)        \
                           | p2m_to_mask(p2m_ram_paged)           \
                           | p2m_to_mask(p2m_ram_paging_in))
 
 #define P2M_PAGED_TYPES (p2m_to_mask(p2m_ram_paged))
+#else
+#define P2M_PAGING_TYPES 0
+#define P2M_PAGED_TYPES 0
+#endif
 
 /* Shared types */
 /* XXX: Sharable types could include p2m_ram_ro too, but we would need to
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -530,7 +530,7 @@ struct domain
     struct domain *parent; /* VM fork parent */
 #endif
     /* Memory paging support */
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     struct vm_event_domain *vm_event_paging;
 #endif
     /* VM event monitor support */
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -592,7 +592,7 @@ static XSM_INLINE int xsm_mem_access(XSM
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
 static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -146,7 +146,7 @@ struct xsm_operations {
     int (*mem_access) (struct domain *d);
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     int (*mem_paging) (struct domain *d);
 #endif
 
@@ -592,7 +592,7 @@ static inline int xsm_mem_access (xsm_de
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
 static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->mem_paging(d);
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -124,7 +124,7 @@ void __init xsm_fixup_ops (struct xsm_op
     set_to_dummy_if_null(ops, mem_access);
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     set_to_dummy_if_null(ops, mem_paging);
 #endif
 
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1256,7 +1256,7 @@ static int flask_mem_access(struct domai
 }
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
 static int flask_mem_paging(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__MEM_PAGING);
@@ -1829,7 +1829,7 @@ static struct xsm_operations flask_ops =
     .mem_access = flask_mem_access,
 #endif
 
-#ifdef CONFIG_HAS_MEM_PAGING
+#ifdef CONFIG_MEM_PAGING
     .mem_paging = flask_mem_paging,
 #endif
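
It may be worth spelling out why the p2m.h hunk matters beyond the
explicit #ifdef-s: with P2M_PAGING_TYPES and P2M_PAGED_TYPES defined to
0, the type predicates built on them collapse to compile-time false, so
remaining p2m_is_paging() / p2m_is_paged() checks become dead code
without needing an #ifdef at every use site. A sketch, assuming the
usual p2m_to_mask()-based helpers:

    /* shape of the existing predicate (simplified): */
    #define p2m_is_paging(t)  (p2m_to_mask(t) & P2M_PAGING_TYPES)

    /* with P2M_PAGING_TYPES == 0 this is constant false: */
    if ( p2m_is_paging(p2mt) )
    {
        /* ... paging-in handling, discarded for !MEM_PAGING builds ... */
    }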
 




* [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (8 preceding siblings ...)
  2021-04-12 14:12 ` [PATCH v2 09/12] x86: make mem-paging configurable and default it to off for being unsupported Jan Beulich
@ 2021-04-12 14:13 ` Jan Beulich
  2021-04-30 10:57   ` Roger Pau Monné
  2021-04-12 14:13 ` [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks " Jan Beulich
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:13 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

With the hooks no longer needing setting, p2m_pt_init() degenerates to
(about) nothing when !HVM. As a result, p2m-pt.c doesn't need building
anymore in this case, as long as p2m_pt_init() has proper surrogates put
in place.

Unfortunately this means some new #ifdef-ary in p2m.c, but the mid-term
plan there is to get rid of (almost) all of it by splitting out the then
hopefully very few remaining non-HVM pieces.

While moving the paging_mode_translate() check from
p2m_remove_page() to guest_physmap_remove_page() may not be strictly
necessary anymore (it was in an early version of this change), it looks
more logical for the check to live in the latter function, and doing so
avoids acquiring the lock in the PV case. All other callers of
p2m_remove_page() already have such a check anyway (in the altp2m case
further up the call stack).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -7,8 +7,8 @@ obj-$(CONFIG_SHADOW_PAGING) += guest_wal
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-$(CONFIG_MEM_PAGING) += mem_paging.o
 obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
-obj-y += p2m.o p2m-pt.o
-obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o
+obj-y += p2m.o
+obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
 
 guest_walk_%.o: guest_walk.c Makefile
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -494,8 +494,10 @@ mfn_t __get_gfn_type_access(struct p2m_d
                     p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                     unsigned int *page_order, bool_t locked)
 {
+#ifdef CONFIG_HVM
     mfn_t mfn;
     gfn_t gfn = _gfn(gfn_l);
+#endif
 
     /* Unshare makes no sense withuot populate. */
     if ( q & P2M_UNSHARE )
@@ -509,6 +511,7 @@ mfn_t __get_gfn_type_access(struct p2m_d
         return _mfn(gfn_l);
     }
 
+#ifdef CONFIG_HVM
     if ( locked )
         /* Grab the lock here, don't release until put_gfn */
         gfn_lock(p2m, gfn, 0);
@@ -542,6 +545,7 @@ mfn_t __get_gfn_type_access(struct p2m_d
     }
 
     return mfn;
+#endif
 }
 
 void __put_gfn(struct p2m_domain *p2m, unsigned long gfn)
@@ -620,6 +624,7 @@ struct page_info *p2m_get_page_from_gfn(
     return page;
 }
 
+#ifdef CONFIG_HVM
 /* Returns: 0 for success, -errno for failure */
 int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
@@ -659,6 +664,7 @@ int p2m_set_entry(struct p2m_domain *p2m
 
     return rc;
 }
+#endif
 
 mfn_t p2m_alloc_ptp(struct p2m_domain *p2m, unsigned int level)
 {
@@ -777,6 +783,8 @@ void p2m_final_teardown(struct domain *d
     p2m_teardown_hostp2m(d);
 }
 
+#ifdef CONFIG_HVM
+
 static int __must_check
 p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                 unsigned int page_order)
@@ -785,10 +793,6 @@ p2m_remove_page(struct p2m_domain *p2m,
     p2m_type_t t;
     p2m_access_t a;
 
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
-    if ( !paging_mode_translate(p2m->domain) )
-        return 0;
-
     ASSERT(gfn_locked_by_me(p2m, gfn));
     P2M_DEBUG("removing gfn=%#lx mfn=%#lx\n", gfn_x(gfn), mfn_x(mfn));
 
@@ -829,6 +833,10 @@ guest_physmap_remove_page(struct domain
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
 
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+        return 0;
+
     gfn_lock(p2m, gfn, page_order);
     rc = p2m_remove_page(p2m, gfn, mfn, page_order);
     gfn_unlock(p2m, gfn, page_order);
@@ -836,6 +844,8 @@ guest_physmap_remove_page(struct domain
     return rc;
 }
 
+#endif /* CONFIG_HVM */
+
 int
 guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int page_order)
@@ -1400,14 +1410,16 @@ int clear_mmio_p2m_entry(struct domain *
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {
+#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
+#endif
 
-    if ( !paging_mode_translate(p2m->domain) )
+    if ( !paging_mode_translate(d) )
     {
         if ( !is_iommu_enabled(d) )
             return 0;
@@ -1416,6 +1428,7 @@ int set_identity_p2m_entry(struct domain
                                 IOMMUF_readable | IOMMUF_writable);
     }
 
+#ifdef CONFIG_HVM
     gfn_lock(p2m, gfn, 0);
 
     mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
@@ -1439,16 +1452,19 @@ int set_identity_p2m_entry(struct domain
 
     gfn_unlock(p2m, gfn, 0);
     return ret;
+#endif
 }
 
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 {
+#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
+#endif
 
     if ( !paging_mode_translate(d) )
     {
@@ -1457,6 +1473,7 @@ int clear_identity_p2m_entry(struct doma
         return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
     }
 
+#ifdef CONFIG_HVM
     gfn_lock(p2m, gfn, 0);
 
     mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
@@ -1476,6 +1493,7 @@ int clear_identity_p2m_entry(struct doma
     }
 
     return ret;
+#endif
 }
 
 #ifdef CONFIG_MEM_SHARING
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -324,7 +324,6 @@ p2m_next_level(struct p2m_domain *p2m, v
     return rc;
 }
 
-#ifdef CONFIG_HVM
 /*
  * Mark (via clearing the U flag) as needing P2M type re-calculation all valid
  * present entries at the targeted level for the passed in GFN range, which is
@@ -393,7 +392,6 @@ static int p2m_pt_set_recalc_range(struc
 
     return err;
 }
-#endif /* CONFIG_HVM */
 
 /*
  * Handle possibly necessary P2M type re-calculation (U flag clear for a
@@ -932,8 +930,6 @@ pod_retry_l1:
     return (p2m_is_valid(*t) || p2m_is_any_ram(*t)) ? mfn : INVALID_MFN;
 }
 
-#ifdef CONFIG_HVM
-
 static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
                                             p2m_type_t ot, p2m_type_t nt)
 {
@@ -1015,8 +1011,6 @@ static int p2m_pt_change_entry_type_rang
     return err;
 }
 
-#endif /* CONFIG_HVM */
-
 #if P2M_AUDIT
 static long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
@@ -1173,11 +1167,9 @@ void p2m_pt_init(struct p2m_domain *p2m)
 {
     p2m->set_entry = p2m_pt_set_entry;
     p2m->get_entry = p2m_pt_get_entry;
-#ifdef CONFIG_HVM
     p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
-#endif
 
     /* Still too early to use paging_mode_hap(). */
     if ( hap_enabled(p2m->domain) )
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -251,6 +251,7 @@ struct p2m_domain {
     /* Pages used to construct the p2m */
     struct page_list_head pages;
 
+#ifdef CONFIG_HVM
     int                (*set_entry)(struct p2m_domain *p2m,
                                     gfn_t gfn,
                                     mfn_t mfn, unsigned int page_order,
@@ -264,7 +265,6 @@ struct p2m_domain {
                                     p2m_query_t q,
                                     unsigned int *page_order,
                                     bool_t *sve);
-#ifdef CONFIG_HVM
     int                (*recalc)(struct p2m_domain *p2m,
                                  unsigned long gfn);
     void               (*enable_hardware_log_dirty)(struct p2m_domain *p2m);
@@ -785,8 +785,14 @@ int __must_check p2m_set_entry(struct p2
                                unsigned int page_order, p2m_type_t p2mt,
                                p2m_access_t p2ma);
 
+#if defined(CONFIG_HVM)
 /* Set up function pointers for PT implementation: only for use by p2m code */
 extern void p2m_pt_init(struct p2m_domain *p2m);
+#elif defined(CONFIG_SHADOW_PAGING)
+# define p2m_pt_init shadow_p2m_init
+#else
+static inline void p2m_pt_init(struct p2m_domain *p2m) {}
+#endif
 
 void *map_domain_gfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t *mfn,
                      p2m_query_t q, uint32_t *pfec);
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -8,9 +8,18 @@ int set_foreign_p2m_entry(struct domain
                           unsigned long gfn, mfn_t mfn);
 
 /* Remove a page from a domain's p2m table */
+#ifdef CONFIG_HVM
 int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                           unsigned int page_order);
+#else
+static inline int
+guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                          unsigned int page_order)
+{
+    return 0;
+}
+#endif
 
 /* Map MMIO regions in the p2m: start_gfn and nr describe the range in
  *  * the guest physical address space to map, starting from the machine
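
Pieced together from the hunks above, guest_physmap_remove_page() in an
HVM build now reads roughly as follows; the relocated check is what lets
non-translated (PV) callers return before the gfn lock is taken. A
sketch, not a verbatim excerpt:

    int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                                  unsigned int page_order)
    {
        struct p2m_domain *p2m = p2m_get_hostp2m(d);
        int rc;

        /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
        if ( !paging_mode_translate(d) )
            return 0;

        gfn_lock(p2m, gfn, page_order);
        rc = p2m_remove_page(p2m, gfn, mfn, page_order);
        gfn_unlock(p2m, gfn, page_order);

        return rc;
    }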




* [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks are HVM-only
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (9 preceding siblings ...)
  2021-04-12 14:13 ` [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only Jan Beulich
@ 2021-04-12 14:13 ` Jan Beulich
  2021-04-15 16:22   ` Tim Deegan
  2021-04-12 14:14 ` [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain Jan Beulich
  2021-04-29  9:27 ` Ping: [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Tim Deegan

Move respective shadow code to its HVM-only source file, thus making it
possible to exclude the hooks as well. This then shows that
shadow_p2m_init() also isn't needed in !HVM builds.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2327,22 +2327,6 @@ void shadow_prepare_page_type_change(str
     shadow_remove_all_shadows(d, page_to_mfn(page));
 }
 
-static void
-sh_remove_all_shadows_and_parents(struct domain *d, mfn_t gmfn)
-/* Even harsher: this is a HVM page that we thing is no longer a pagetable.
- * Unshadow it, and recursively unshadow pages that reference it. */
-{
-    sh_remove_shadows(d, gmfn, 0, 1);
-    /* XXX TODO:
-     * Rework this hashtable walker to return a linked-list of all
-     * the shadows it modified, then do breadth-first recursion
-     * to find the way up to higher-level tables and unshadow them too.
-     *
-     * The current code (just tearing down each page's shadows as we
-     * detect that it is not a pagetable) is correct, but very slow.
-     * It means extra emulated writes and slows down removal of mappings. */
-}
-
 /**************************************************************************/
 
 /* Reset the up-pointers of every L3 shadow to 0.
@@ -3086,126 +3070,6 @@ static int shadow_test_disable(struct do
 }
 
 /**************************************************************************/
-/* P2M map manipulations */
-
-/* shadow specific code which should be called when P2M table entry is updated
- * with new content. It is responsible for update the entry, as well as other
- * shadow processing jobs.
- */
-
-static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
-                                       l1_pgentry_t old, l1_pgentry_t new,
-                                       unsigned int level)
-{
-    mfn_t omfn = l1e_get_mfn(old);
-    unsigned int oflags = l1e_get_flags(old);
-    p2m_type_t p2mt = p2m_flags_to_type(oflags);
-    bool flush = false;
-
-    /*
-     * If there are any shadows, update them.  But if shadow_teardown()
-     * has already been called then it's not safe to try.
-     */
-    if ( unlikely(!d->arch.paging.shadow.total_pages) )
-        return;
-
-    switch ( level )
-    {
-    default:
-        /*
-         * The following assertion is to make sure we don't step on 1GB host
-         * page support of HVM guest.
-         */
-        ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
-        break;
-
-    /* If we're removing an MFN from the p2m, remove it from the shadows too */
-    case 1:
-        if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) )
-        {
-            sh_remove_all_shadows_and_parents(d, omfn);
-            if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) )
-                flush = true;
-        }
-        break;
-
-    /*
-     * If we're removing a superpage mapping from the p2m, we need to check
-     * all the pages covered by it.  If they're still there in the new
-     * scheme, that's OK, but otherwise they must be unshadowed.
-     */
-    case 2:
-        if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) )
-            break;
-
-        if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
-        {
-            unsigned int i;
-            mfn_t nmfn = l1e_get_mfn(new);
-            l1_pgentry_t *npte = NULL;
-
-            /* If we're replacing a superpage with a normal L1 page, map it */
-            if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
-                 !(l1e_get_flags(new) & _PAGE_PSE) &&
-                 mfn_valid(nmfn) )
-                npte = map_domain_page(nmfn);
-
-            gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
-
-            for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-            {
-                if ( !npte ||
-                     !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) ||
-                     !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
-                {
-                    /* This GFN->MFN mapping has gone away */
-                    sh_remove_all_shadows_and_parents(d, omfn);
-                    if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
-                        flush = true;
-                }
-                omfn = mfn_add(omfn, 1);
-            }
-
-            if ( npte )
-                unmap_domain_page(npte);
-        }
-
-        break;
-    }
-
-    if ( flush )
-        guest_flush_tlb_mask(d, d->dirty_cpumask);
-}
-
-#if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)
-static void
-sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
-{
-    struct domain *d = p2m->domain;
-
-    /* If we're doing FAST_FAULT_PATH, then shadow mode may have
-       cached the fact that this is an mmio region in the shadow
-       page tables.  Blow the tables away to remove the cache.
-       This is pretty heavy handed, but this is a rare operation
-       (it might happen a dozen times during boot and then never
-       again), so it doesn't matter too much. */
-    if ( d->arch.paging.shadow.has_fast_mmio_entries )
-    {
-        shadow_blow_tables(d);
-        d->arch.paging.shadow.has_fast_mmio_entries = false;
-    }
-}
-#else
-# define sh_write_p2m_entry_post NULL
-#endif
-
-void shadow_p2m_init(struct p2m_domain *p2m)
-{
-    p2m->write_p2m_entry_pre  = sh_unshadow_for_p2m_change;
-    p2m->write_p2m_entry_post = sh_write_p2m_entry_post;
-}
-
-/**************************************************************************/
 /* Log-dirty mode support */
 
 /* Shadow specific code which is called in paging_log_dirty_enable().
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -774,6 +774,142 @@ void sh_destroy_monitor_table(const stru
 }
 
 /**************************************************************************/
+/* P2M map manipulations */
+
+/* shadow specific code which should be called when P2M table entry is updated
+ * with new content. It is responsible for update the entry, as well as other
+ * shadow processing jobs.
+ */
+
+static void
+sh_remove_all_shadows_and_parents(struct domain *d, mfn_t gmfn)
+/* Even harsher: this is a HVM page that we thing is no longer a pagetable.
+ * Unshadow it, and recursively unshadow pages that reference it. */
+{
+    sh_remove_shadows(d, gmfn, 0, 1);
+    /* XXX TODO:
+     * Rework this hashtable walker to return a linked-list of all
+     * the shadows it modified, then do breadth-first recursion
+     * to find the way up to higher-level tables and unshadow them too.
+     *
+     * The current code (just tearing down each page's shadows as we
+     * detect that it is not a pagetable) is correct, but very slow.
+     * It means extra emulated writes and slows down removal of mappings. */
+}
+
+static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
+                                       l1_pgentry_t old, l1_pgentry_t new,
+                                       unsigned int level)
+{
+    mfn_t omfn = l1e_get_mfn(old);
+    unsigned int oflags = l1e_get_flags(old);
+    p2m_type_t p2mt = p2m_flags_to_type(oflags);
+    bool flush = false;
+
+    /*
+     * If there are any shadows, update them.  But if shadow_teardown()
+     * has already been called then it's not safe to try.
+     */
+    if ( unlikely(!d->arch.paging.shadow.total_pages) )
+        return;
+
+    switch ( level )
+    {
+    default:
+        /*
+         * The following assertion is to make sure we don't step on 1GB host
+         * page support of HVM guest.
+         */
+        ASSERT(!((oflags & _PAGE_PRESENT) && (oflags & _PAGE_PSE)));
+        break;
+
+    /* If we're removing an MFN from the p2m, remove it from the shadows too */
+    case 1:
+        if ( (p2m_is_valid(p2mt) || p2m_is_grant(p2mt)) && mfn_valid(omfn) )
+        {
+            sh_remove_all_shadows_and_parents(d, omfn);
+            if ( sh_remove_all_mappings(d, omfn, _gfn(gfn)) )
+                flush = true;
+        }
+        break;
+
+    /*
+     * If we're removing a superpage mapping from the p2m, we need to check
+     * all the pages covered by it.  If they're still there in the new
+     * scheme, that's OK, but otherwise they must be unshadowed.
+     */
+    case 2:
+        if ( !(oflags & _PAGE_PRESENT) || !(oflags & _PAGE_PSE) )
+            break;
+
+        if ( p2m_is_valid(p2mt) && mfn_valid(omfn) )
+        {
+            unsigned int i;
+            mfn_t nmfn = l1e_get_mfn(new);
+            l1_pgentry_t *npte = NULL;
+
+            /* If we're replacing a superpage with a normal L1 page, map it */
+            if ( (l1e_get_flags(new) & _PAGE_PRESENT) &&
+                 !(l1e_get_flags(new) & _PAGE_PSE) &&
+                 mfn_valid(nmfn) )
+                npte = map_domain_page(nmfn);
+
+            gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
+
+            for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+            {
+                if ( !npte ||
+                     !p2m_is_ram(p2m_flags_to_type(l1e_get_flags(npte[i]))) ||
+                     !mfn_eq(l1e_get_mfn(npte[i]), omfn) )
+                {
+                    /* This GFN->MFN mapping has gone away */
+                    sh_remove_all_shadows_and_parents(d, omfn);
+                    if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
+                        flush = true;
+                }
+                omfn = mfn_add(omfn, 1);
+            }
+
+            if ( npte )
+                unmap_domain_page(npte);
+        }
+
+        break;
+    }
+
+    if ( flush )
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
+}
+
+#if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)
+static void
+sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
+{
+    struct domain *d = p2m->domain;
+
+    /* If we're doing FAST_FAULT_PATH, then shadow mode may have
+       cached the fact that this is an mmio region in the shadow
+       page tables.  Blow the tables away to remove the cache.
+       This is pretty heavy handed, but this is a rare operation
+       (it might happen a dozen times during boot and then never
+       again), so it doesn't matter too much. */
+    if ( d->arch.paging.shadow.has_fast_mmio_entries )
+    {
+        shadow_blow_tables(d);
+        d->arch.paging.shadow.has_fast_mmio_entries = false;
+    }
+}
+#else
+# define sh_write_p2m_entry_post NULL
+#endif
+
+void shadow_p2m_init(struct p2m_domain *p2m)
+{
+    p2m->write_p2m_entry_pre  = sh_unshadow_for_p2m_change;
+    p2m->write_p2m_entry_post = sh_write_p2m_entry_post;
+}
+
+/**************************************************************************/
 /* VRAM dirty tracking support */
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long begin_pfn,
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -278,7 +278,6 @@ struct p2m_domain {
                                                   unsigned long first_gfn,
                                                   unsigned long last_gfn);
     void               (*memory_type_changed)(struct p2m_domain *p2m);
-#endif
     void               (*write_p2m_entry_pre)(struct domain *d,
                                               unsigned long gfn,
                                               l1_pgentry_t old,
@@ -286,6 +285,7 @@ struct p2m_domain {
                                               unsigned int level);
     void               (*write_p2m_entry_post)(struct p2m_domain *p2m,
                                                unsigned int oflags);
+#endif
 #if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
 #endif
@@ -788,8 +788,6 @@ int __must_check p2m_set_entry(struct p2
 #if defined(CONFIG_HVM)
 /* Set up function pointers for PT implementation: only for use by p2m code */
 extern void p2m_pt_init(struct p2m_domain *p2m);
-#elif defined(CONFIG_SHADOW_PAGING)
-# define p2m_pt_init shadow_p2m_init
 #else
 static inline void p2m_pt_init(struct p2m_domain *p2m) {}
 #endif



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (10 preceding siblings ...)
  2021-04-12 14:13 ` [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks " Jan Beulich
@ 2021-04-12 14:14 ` Jan Beulich
  2021-04-30 10:59   ` Roger Pau Monné
  2021-04-29  9:27 ` Ping: [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
  12 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-12 14:14 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Combine two HVM-specific sections in two cases (i.e. going from four of
them to just two). Make defer_nested_flush bool and HVM-only, moving it
next to other nested stuff. Move default_access up into a padding hole.

When moving them anyway, also adjust comment style.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1167,7 +1167,7 @@ void p2m_change_type_range(struct domain
     ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
 
     p2m_lock(hostp2m);
-    hostp2m->defer_nested_flush = 1;
+    hostp2m->defer_nested_flush = true;
 
     change_type_range(hostp2m, start, end, ot, nt);
 
@@ -1185,7 +1185,7 @@ void p2m_change_type_range(struct domain
                 p2m_unlock(altp2m);
             }
     }
-    hostp2m->defer_nested_flush = 0;
+    hostp2m->defer_nested_flush = false;
     if ( nestedhvm_enabled(d) )
         p2m_flush_nestedp2m(d);
 
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -216,20 +216,15 @@ struct p2m_domain {
 
     p2m_class_t       p2m_class; /* host/nested/alternate */
 
-#ifdef CONFIG_HVM
-    /* Nested p2ms only: nested p2m base value that this p2m shadows.
-     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
-     * needs both the per-p2m lock and the per-domain nestedp2m lock
-     * to set it to any other value. */
-#define P2M_BASE_EADDR     (~0ULL)
-    uint64_t           np2m_base;
-    uint64_t           np2m_generation;
+    /*
+     * Default P2M access type for each page in the the domain: new pages,
+     * swapped in pages, cleared pages, and pages that are ambiguously
+     * retyped get this access type.  See definition of p2m_access_t.
+     */
+    p2m_access_t default_access;
 
-    /* Nested p2ms: linked list of n2pms allocated to this domain. 
-     * The host p2m hasolds the head of the list and the np2ms are 
-     * threaded on in LRU order. */
-    struct list_head   np2m_list;
-#endif
+    /* Pages used to construct the p2m */
+    struct page_list_head pages;
 
     /* Host p2m: Log-dirty ranges registered for the domain. */
     struct rangeset   *logdirty_ranges;
@@ -237,21 +232,10 @@ struct p2m_domain {
     /* Host p2m: Global log-dirty mode enabled for the domain. */
     bool               global_logdirty;
 
-    /* Host p2m: when this flag is set, don't flush all the nested-p2m 
-     * tables on every host-p2m change.  The setter of this flag 
-     * is responsible for performing the full flush before releasing the
-     * host p2m's lock. */
-    int                defer_nested_flush;
-
 #ifdef CONFIG_HVM
     /* Alternate p2m: count of vcpu's currently using this p2m. */
     atomic_t           active_vcpus;
-#endif
-
-    /* Pages used to construct the p2m */
-    struct page_list_head pages;
 
-#ifdef CONFIG_HVM
     int                (*set_entry)(struct p2m_domain *p2m,
                                     gfn_t gfn,
                                     mfn_t mfn, unsigned int page_order,
@@ -306,11 +290,6 @@ struct p2m_domain {
     unsigned int defer_flush;
     bool_t need_flush;
 
-    /* Default P2M access type for each page in the the domain: new pages,
-     * swapped in pages, cleared pages, and pages that are ambiguously
-     * retyped get this access type.  See definition of p2m_access_t. */
-    p2m_access_t default_access;
-
     /* If true, and an access fault comes in and there is no vm_event listener, 
      * pause domain.  Otherwise, remove access restrictions. */
     bool_t       access_required;
@@ -357,6 +336,31 @@ struct p2m_domain {
         mm_lock_t        lock;         /* Locking of private pod structs,   *
                                         * not relying on the p2m lock.      */
     } pod;
+
+    /*
+     * Host p2m: when this flag is set, don't flush all the nested-p2m
+     * tables on every host-p2m change.  The setter of this flag
+     * is responsible for performing the full flush before releasing the
+     * host p2m's lock.
+     */
+    bool               defer_nested_flush;
+
+    /*
+     * Nested p2ms only: nested p2m base value that this p2m shadows.
+     * This can be cleared to P2M_BASE_EADDR under the per-p2m lock but
+     * needs both the per-p2m lock and the per-domain nestedp2m lock
+     * to set it to any other value.
+     */
+#define P2M_BASE_EADDR     (~0ULL)
+    uint64_t           np2m_base;
+    uint64_t           np2m_generation;
+
+    /*
+     * Nested p2ms: linked list of n2pms allocated to this domain.
+     * The host p2m hasolds the head of the list and the np2ms are
+     * threaded on in LRU order.
+     */
+    struct list_head   np2m_list;
 #endif
 
     union {



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 08/12] x86: mem-access is HVM-only
  2021-04-12 14:10 ` [PATCH v2 08/12] x86: mem-access " Jan Beulich
@ 2021-04-12 14:16   ` Tamas K Lengyel
  2021-04-12 14:48   ` Isaila Alexandru
  1 sibling, 0 replies; 35+ messages in thread
From: Tamas K Lengyel @ 2021-04-12 14:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Alexandru Isaila, Petre Pircalabu

On Mon, Apr 12, 2021 at 10:10 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> By excluding the file from being built for !HVM, #ifdef-ary can be
> removed from it.
>
> The new HVM dependency on the Kconfig option is benign for Arm.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tamas K Lengyel <tamas@tklengyel.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported
  2021-04-12 14:12 ` [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported Jan Beulich
@ 2021-04-12 14:18   ` Tamas K Lengyel
  2021-04-12 14:47     ` Isaila Alexandru
  2021-04-12 14:27   ` Isaila Alexandru
  2021-04-30  9:55   ` Roger Pau Monné
  2 siblings, 1 reply; 35+ messages in thread
From: Tamas K Lengyel @ 2021-04-12 14:18 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné,
	Daniel de Graaf, Paul Durrant, Petre Pircalabu, Alexandru Isaila

On Mon, Apr 12, 2021 at 10:12 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> While doing so, make the option dependent upon HVM, which really is the
> main purpose of the change.

IMHO it would be better to just remove this dead code altogether.

Tamas


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported
  2021-04-12 14:12 ` [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported Jan Beulich
  2021-04-12 14:18   ` Tamas K Lengyel
@ 2021-04-12 14:27   ` Isaila Alexandru
  2021-04-30  9:55   ` Roger Pau Monné
  2 siblings, 0 replies; 35+ messages in thread
From: Isaila Alexandru @ 2021-04-12 14:27 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu, Roger Pau Monné,
	Daniel de Graaf, Paul Durrant, Tamas K Lengyel, Petre Pircalabu



On 12.04.2021 17:12, Jan Beulich wrote:
> While doing so, make the option dependent upon HVM, which really is the
> main purpose of the change.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported
  2021-04-12 14:18   ` Tamas K Lengyel
@ 2021-04-12 14:47     ` Isaila Alexandru
  0 siblings, 0 replies; 35+ messages in thread
From: Isaila Alexandru @ 2021-04-12 14:47 UTC (permalink / raw)
  To: Tamas K Lengyel, Jan Beulich
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu, Roger Pau Monné,
	Daniel de Graaf, Paul Durrant, Petre Pircalabu



On 12.04.2021 17:18, Tamas K Lengyel wrote:
> On Mon, Apr 12, 2021 at 10:12 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> While doing so, make the option dependent upon HVM, which really is the
>> main purpose of the change.
> 
> IMHO it would be better to just remove this dead code altogether.
> 

I agree with Tamas here.

Alex


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 08/12] x86: mem-access is HVM-only
  2021-04-12 14:10 ` [PATCH v2 08/12] x86: mem-access " Jan Beulich
  2021-04-12 14:16   ` Tamas K Lengyel
@ 2021-04-12 14:48   ` Isaila Alexandru
  1 sibling, 0 replies; 35+ messages in thread
From: Isaila Alexandru @ 2021-04-12 14:48 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Tamas K Lengyel, Petre Pircalabu



On 12.04.2021 17:10, Jan Beulich wrote:
> By excluding the file from being built for !HVM, #ifdef-ary can be
> removed from it.
> 
> The new HVM dependency on the Kconfig option is benign for Arm.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Alexandru Isaila <aisaila@bitdefender.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only
  2021-04-12 14:07 ` [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only Jan Beulich
@ 2021-04-15 16:22   ` Tim Deegan
  0 siblings, 0 replies; 35+ messages in thread
From: Tim Deegan @ 2021-04-15 16:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

At 16:07 +0200 on 12 Apr (1618243630), Jan Beulich wrote:
> As is the adjacent ga_to_gfn() one as well as paging_gva_to_gfn().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks are HVM-only
  2021-04-12 14:13 ` [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks " Jan Beulich
@ 2021-04-15 16:22   ` Tim Deegan
  0 siblings, 0 replies; 35+ messages in thread
From: Tim Deegan @ 2021-04-15 16:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

At 16:13 +0200 on 12 Apr (1618244032), Jan Beulich wrote:
> Move respective shadow code to its HVM-only source file, thus making it
> possible to exclude the hooks as well. This then shows that
> shadow_p2m_init() also isn't needed in !HVM builds.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Ping: [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM
  2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
                   ` (11 preceding siblings ...)
  2021-04-12 14:14 ` [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain Jan Beulich
@ 2021-04-29  9:27 ` Jan Beulich
  12 siblings, 0 replies; 35+ messages in thread
From: Jan Beulich @ 2021-04-29  9:27 UTC (permalink / raw)
  To: Andrew Cooper, Roger Pau Monné, Wei Liu; +Cc: George Dunlap, xen-devel

On 12.04.2021 16:03, Jan Beulich wrote:
> Since it was brought up in the discussion of v1: I think the end
> goal is to be for mm/p2m.c to become a HVM-only file. Any "wrappers"
> also trying to take care of !paging_mode_translate() guests ought to
> be moved elsewhere. To see what actually still needs taking care of,
> incrementally extending the #ifdef CONFIG_HVM regions there is the
> way to go imo.
> 
> Compared to v1 there are many new patches here plus build fixes to
> two of the three remaining ones from v1.
> 
> 01: p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
> 02: p2m: {,un}map_mmio_regions() are HVM-only
> 03: mm: the gva_to_gfn() hook is HVM-only
> 04: AMD/IOMMU: guest IOMMU support is for HVM only
> 05: p2m: change_entry_type_* hooks are HVM-only
> 06: p2m: hardware-log-dirty related hooks are HVM-only
> 07: p2m: the recalc hook is HVM-only
> 08: mem-access is HVM-only
> 09: make mem-paging configuarable and default it to off for being unsupported
> 10: p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only
> 11: p2m: write_p2m_entry_{pre,post} hooks are HVM-only
> 12: p2m: re-arrange struct p2m_domain

Besides patch 8, which has gone in, I've got two acks from Tim for
shadow code changes and contradicting feedback on patch 9, but
nothing else.

Thanks for acks or otherwise,
Jan


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
  2021-04-12 14:05 ` [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only Jan Beulich
@ 2021-04-29 13:17   ` Roger Pau Monné
  2021-04-29 14:09     ` Jan Beulich
  0 siblings, 1 reply; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-29 13:17 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Mon, Apr 12, 2021 at 04:05:41PM +0200, Jan Beulich wrote:
> Extend a respective #ifdef from inside set_typed_p2m_entry() to around
> all three functions. Add ASSERT_UNREACHABLE() to the latter one's safety
> check path.

Wouldn't it be better to also move the prototypes in p2m.h into a
CONFIG_HVM guarded region, so that it fails at compile time rather
than link time?
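
For illustration, the guarding meant here might look roughly like this in
whichever header carries the declarations (a sketch only; the exact
signatures are assumptions):

#ifdef CONFIG_HVM
int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                       unsigned int order);
#endif /* !HVM callers then fail to compile rather than to link. */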

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
  2021-04-29 13:17   ` Roger Pau Monné
@ 2021-04-29 14:09     ` Jan Beulich
  2021-04-29 15:06       ` Roger Pau Monné
  0 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-29 14:09 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On 29.04.2021 15:17, Roger Pau Monné wrote:
> On Mon, Apr 12, 2021 at 04:05:41PM +0200, Jan Beulich wrote:
>> Extend a respective #ifdef from inside set_typed_p2m_entry() to around
>> all three functions. Add ASSERT_UNREACHABLE() to the latter one's safety
>> check path.
> 
> Wouldn't it be better to also move the prototypes in p2m.h into a
> CONFIG_HVM guarded region, so that it fails at compile time rather
> than link time?

In the header I'm fearing this ending up as spaghetti more than in
the .c file. I think where possible we may want to do so once we
have a clear / clean set of APIs which are generic vs. those which
are HVM-specific (which I expect to be the case once p2m.c as a
whole becomes HVM-only).

Jan


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() are HVM-only
  2021-04-12 14:06 ` [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() " Jan Beulich
@ 2021-04-29 14:48   ` Roger Pau Monné
  2021-04-29 15:01     ` Jan Beulich
  0 siblings, 1 reply; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-29 14:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu

On Mon, Apr 12, 2021 at 04:06:34PM +0200, Jan Beulich wrote:
> Mirror the "translated" check the functions do to do_domctl(), allowing
> the calls to be DCEd by the compiler. Add ASSERT_UNREACHABLE() to the
> original checks.
> 
> Also arrange for {set,clear}_mmio_p2m_entry() and
> {set,clear}_identity_p2m_entry() to respectively live next to each
> other, such that clear_mmio_p2m_entry() can also be covered by the
> #ifdef already covering set_mmio_p2m_entry().

Seeing the increase in HVM specific regions, would it make sense to
consider splitting the HVM bits into p2m-hvm.c or some such?

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Fix build.
> ---
> Arguably the original checks, returning success, could also be dropped
> at this point.
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1352,52 +1352,6 @@ int set_mmio_p2m_entry(struct domain *d,
>                                 p2m_get_hostp2m(d)->default_access);
>  }
>  
> -#endif /* CONFIG_HVM */
> -
> -int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
> -                           p2m_access_t p2ma, unsigned int flag)
> -{
> -    p2m_type_t p2mt;
> -    p2m_access_t a;
> -    gfn_t gfn = _gfn(gfn_l);
> -    mfn_t mfn;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    int ret;
> -
> -    if ( !paging_mode_translate(p2m->domain) )
> -    {
> -        if ( !is_iommu_enabled(d) )
> -            return 0;
> -        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
> -                                1ul << PAGE_ORDER_4K,
> -                                IOMMUF_readable | IOMMUF_writable);
> -    }
> -
> -    gfn_lock(p2m, gfn, 0);
> -
> -    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
> -
> -    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
> -        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
> -                            p2m_mmio_direct, p2ma);
> -    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
> -        ret = 0;
> -    else
> -    {
> -        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
> -            ret = 0;
> -        else
> -            ret = -EBUSY;
> -        printk(XENLOG_G_WARNING
> -               "Cannot setup identity map d%d:%lx,"
> -               " gfn already mapped to %lx.\n",
> -               d->domain_id, gfn_l, mfn_x(mfn));
> -    }
> -
> -    gfn_unlock(p2m, gfn, 0);
> -    return ret;
> -}
> -
>  /*
>   * Returns:
>   *    0        for success
> @@ -1447,6 +1401,52 @@ int clear_mmio_p2m_entry(struct domain *
>      return rc;
>  }
>  
> +#endif /* CONFIG_HVM */
> +
> +int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
> +                           p2m_access_t p2ma, unsigned int flag)
> +{
> +    p2m_type_t p2mt;
> +    p2m_access_t a;
> +    gfn_t gfn = _gfn(gfn_l);
> +    mfn_t mfn;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    int ret;
> +
> +    if ( !paging_mode_translate(p2m->domain) )
> +    {
> +        if ( !is_iommu_enabled(d) )
> +            return 0;
> +        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
> +                                1ul << PAGE_ORDER_4K,
> +                                IOMMUF_readable | IOMMUF_writable);
> +    }
> +
> +    gfn_lock(p2m, gfn, 0);
> +
> +    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
> +
> +    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
> +        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
> +                            p2m_mmio_direct, p2ma);
> +    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
> +        ret = 0;
> +    else
> +    {
> +        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
> +            ret = 0;
> +        else
> +            ret = -EBUSY;
> +        printk(XENLOG_G_WARNING
> +               "Cannot setup identity map d%d:%lx,"
> +               " gfn already mapped to %lx.\n",
> +               d->domain_id, gfn_l, mfn_x(mfn));
> +    }
> +
> +    gfn_unlock(p2m, gfn, 0);
> +    return ret;
> +}
> +
>  int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
>  {
>      p2m_type_t p2mt;
> @@ -1892,6 +1892,8 @@ void *map_domain_gfn(struct p2m_domain *
>      return map_domain_page(*mfn);
>  }
>  
> +#ifdef CONFIG_HVM
> +
>  static unsigned int mmio_order(const struct domain *d,
>                                 unsigned long start_fn, unsigned long nr)
>  {
> @@ -1932,7 +1934,10 @@ int map_mmio_regions(struct domain *d,
>      unsigned int iter, order;
>  
>      if ( !paging_mode_translate(d) )
> +    {
> +        ASSERT_UNREACHABLE();
>          return 0;
> +    }
>  
>      for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
>            i += 1UL << order, ++iter )
> @@ -1964,7 +1969,10 @@ int unmap_mmio_regions(struct domain *d,
>      unsigned int iter, order;
>  
>      if ( !paging_mode_translate(d) )
> +    {
> +        ASSERT_UNREACHABLE();
>          return 0;

Maybe consider returning an error here now instead of silently
failing? It's not supposed to be reached, so getting here likely means
something else has gone wrong and it's best to just report an error?

The rest LGTM:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() are HVM-only
  2021-04-29 14:48   ` Roger Pau Monné
@ 2021-04-29 15:01     ` Jan Beulich
  0 siblings, 0 replies; 35+ messages in thread
From: Jan Beulich @ 2021-04-29 15:01 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu

On 29.04.2021 16:48, Roger Pau Monné wrote:
> On Mon, Apr 12, 2021 at 04:06:34PM +0200, Jan Beulich wrote:
>> Mirror the "translated" check the functions do to do_domctl(), allowing
>> the calls to be DCEd by the compiler. Add ASSERT_UNREACHABLE() to the
>> original checks.
>>
>> Also arrange for {set,clear}_mmio_p2m_entry() and
>> {set,clear}_identity_p2m_entry() to respectively live next to each
>> other, such that clear_mmio_p2m_entry() can also be covered by the
>> #ifdef already covering set_mmio_p2m_entry().
> 
> Seeing the increase in HVM specific regions, would it make sense to
> consider splitting the HVM bits into p2m-hvm.c or some such?

As said on the 01/12 sub-thread, I see the goal as p2m.c as a whole
becoming HVM specific.

>> @@ -1932,7 +1934,10 @@ int map_mmio_regions(struct domain *d,
>>      unsigned int iter, order;
>>  
>>      if ( !paging_mode_translate(d) )
>> +    {
>> +        ASSERT_UNREACHABLE();
>>          return 0;
>> +    }
>>  
>>      for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
>>            i += 1UL << order, ++iter )
>> @@ -1964,7 +1969,10 @@ int unmap_mmio_regions(struct domain *d,
>>      unsigned int iter, order;
>>  
>>      if ( !paging_mode_translate(d) )
>> +    {
>> +        ASSERT_UNREACHABLE();
>>          return 0;
> 
> Maybe consider returning an error here now instead of silently
> failing? It's not supposed to be reached, so getting here likely means
> something else has gone wrong and it's best to just report an error?

Can do, sure. Would be -EOPNOTSUPP.
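
I.e. presumably along these lines (a sketch of the agreed follow-up, not
the code as committed):

    if ( !paging_mode_translate(d) )
    {
        ASSERT_UNREACHABLE();
        return -EOPNOTSUPP;
    }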

> The rest LGTM:
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

Jan


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only
  2021-04-29 14:09     ` Jan Beulich
@ 2021-04-29 15:06       ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-29 15:06 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Thu, Apr 29, 2021 at 04:09:30PM +0200, Jan Beulich wrote:
> On 29.04.2021 15:17, Roger Pau Monné wrote:
> > On Mon, Apr 12, 2021 at 04:05:41PM +0200, Jan Beulich wrote:
> >> Extend a respective #ifdef from inside set_typed_p2m_entry() to around
> >> all three functions. Add ASSERT_UNREACHABLE() to the latter one's safety
> >> check path.
> > 
> > Wouldn't it be better to also move the prototypes in p2m.h into a
> > CONFIG_HVM guarded region, so that it fails at compile time rather
> > than link time?
> 
> In the header I'm fearing this ending up as spaghetti more than in
> the .c file.

I would just move them all into a common CONFIG_HVM section rather
than adding CONFIG_HVM guards around each of them.

> I think where possible we may want to do so once we
> have a clear / clean set of APIs which are generic vs. those which
> are HVM-specific (which I expect to be the case once p2m.c as a
> whole becomes HVM-only).

Oh, I would certainly like p2m.c to be HVM-only (see my comment about
introducing a p2m-hvm.c). Anyway, my only comment was about the
header, so:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only
  2021-04-12 14:07 ` [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only Jan Beulich
@ 2021-04-29 15:14   ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-29 15:14 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, George Dunlap

On Mon, Apr 12, 2021 at 04:07:50PM +0200, Jan Beulich wrote:
> Generally all of this is dead code anyway, but there's a caller of
> guest_iommu_add_ppr_log(), and the code itself calls
> p2m_change_type_one(), which is about to become HVM-only. Hence this
> code, short of deleting it altogether, needs to become properly HVM-
> only as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only
  2021-04-12 14:08 ` [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only Jan Beulich
@ 2021-04-29 15:49   ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-29 15:49 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Mon, Apr 12, 2021 at 04:08:29PM +0200, Jan Beulich wrote:
> Exclude functions using them from !HVM builds, thus making it possible
> to exclude the hooks as well. Also cover the already unused
> memory_type_changed hook while inserting the #ifdef in the struct.
> 
> While no respective check was there, I can't see how
> XEN_DOMCTL_set_broken_page_p2m could have been meant to work for PV the
> way it is implemented. Restrict this operation to acting on HVM guests.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I see you also make the recalc hook HVM only in a later patch, which
I was going to ask, so:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 06/12] x86/p2m: hardware-log-dirty related hooks are HVM-only
  2021-04-12 14:08 ` [PATCH v2 06/12] x86/p2m: hardware-log-dirty related " Jan Beulich
@ 2021-04-29 16:05   ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-29 16:05 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Mon, Apr 12, 2021 at 04:08:50PM +0200, Jan Beulich wrote:
> Exclude functions using them from !HVM builds, thus making it possible
> to exclude the hooks as well.
> 
> By moving an #endif in p2m.c (instead of introducing yet another one)
> p2m_{get,set}_ioreq_server() get excluded for !HVM builds as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only
  2021-04-12 14:09 ` [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only Jan Beulich
@ 2021-04-30  9:27   ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-30  9:27 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Mon, Apr 12, 2021 at 04:09:18PM +0200, Jan Beulich wrote:
> Exclude functions involved in its use from !HVM builds, thus making it
> possible to exclude the hook as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported
  2021-04-12 14:12 ` [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported Jan Beulich
  2021-04-12 14:18   ` Tamas K Lengyel
  2021-04-12 14:27   ` Isaila Alexandru
@ 2021-04-30  9:55   ` Roger Pau Monné
  2021-04-30 14:16     ` Jan Beulich
  2 siblings, 1 reply; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-30  9:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu, Daniel de Graaf,
	Paul Durrant, Tamas K Lengyel, Petre Pircalabu, Alexandru Isaila

On Mon, Apr 12, 2021 at 04:12:41PM +0200, Jan Beulich wrote:
> While doing so, make the option dependent upon HVM, which really is the
> main purpose of the change.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> 
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -15,7 +15,6 @@ config X86
>  	select HAS_FAST_MULTIPLY
>  	select HAS_IOPORTS
>  	select HAS_KEXEC
> -	select HAS_MEM_PAGING
>  	select HAS_NS16550
>  	select HAS_PASSTHROUGH
>  	select HAS_PCI
> @@ -251,6 +250,10 @@ config HYPERV_GUEST
>  
>  endif
>  
> +config MEM_PAGING
> +	bool "Xen memory paging support (UNSUPPORTED)" if UNSUPPORTED
> +	depends on HVM
> +
>  config MEM_SHARING
>  	bool "Xen memory sharing support (UNSUPPORTED)" if UNSUPPORTED
>  	depends on HVM
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1932,9 +1932,11 @@ int hvm_hap_nested_page_fault(paddr_t gp
>          goto out_put_gfn;
>      }
>  
> +#ifdef CONFIG_MEM_PAGING
>      /* Check if the page has been paged out */
>      if ( p2m_is_paged(p2mt) || (p2mt == p2m_ram_paging_out) )
>          paged = 1;
> +#endif
>  
>  #ifdef CONFIG_MEM_SHARING
>      /* Mem sharing: if still shared on write access then its enomem */
> --- a/xen/arch/x86/mm/Makefile
> +++ b/xen/arch/x86/mm/Makefile
> @@ -5,7 +5,7 @@ obj-$(CONFIG_HVM) += altp2m.o
>  obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
>  obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o
>  obj-$(CONFIG_MEM_ACCESS) += mem_access.o
> -obj-y += mem_paging.o
> +obj-$(CONFIG_MEM_PAGING) += mem_paging.o
>  obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
>  obj-y += p2m.o p2m-pt.o
>  obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o
> --- a/xen/arch/x86/x86_64/compat/mm.c
> +++ b/xen/arch/x86/x86_64/compat/mm.c
> @@ -155,8 +155,10 @@ int compat_arch_memory_op(unsigned long
>      case XENMEM_get_sharing_shared_pages:
>          return mem_sharing_get_nr_shared_mfns();
>  
> +#ifdef CONFIG_MEM_PAGING
>      case XENMEM_paging_op:
>          return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
> +#endif
>  
>  #ifdef CONFIG_MEM_SHARING
>      case XENMEM_sharing_op:
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -994,8 +994,10 @@ long subarch_memory_op(unsigned long cmd
>      case XENMEM_get_sharing_shared_pages:
>          return mem_sharing_get_nr_shared_mfns();
>  
> +#ifdef CONFIG_MEM_PAGING
>      case XENMEM_paging_op:
>          return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
> +#endif

I would create a dummy handler when !CONFIG_MEM_PAGING in
asm-x86/mem_paging.h.
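
Something like the following, presumably (a sketch of the suggestion; the
error value is an assumption):

#ifdef CONFIG_MEM_PAGING
int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
#else
static inline int
mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
{
    return -EOPNOTSUPP;
}
#endif

That way the XENMEM_paging_op cases in the callers could stay
un-#ifdef'ed.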

> --- a/xen/include/asm-x86/mem_paging.h
> +++ b/xen/include/asm-x86/mem_paging.h
> @@ -24,6 +24,12 @@
>  
>  int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
>  
> +#ifdef CONFIG_MEM_PAGING
> +# define mem_paging_enabled(d) vm_event_check_ring((d)->vm_event_paging)
> +#else
> +# define mem_paging_enabled(d) false
> +#endif
> +
>  #endif /*__ASM_X86_MEM_PAGING_H__ */
>  
>  /*
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -136,11 +136,16 @@ typedef unsigned int p2m_query_t;
>  #define P2M_PAGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
>                              | p2m_to_mask(p2m_ram_logdirty) )
>  
> +#ifdef CONFIG_MEM_PAGING
>  #define P2M_PAGING_TYPES (p2m_to_mask(p2m_ram_paging_out)        \
>                            | p2m_to_mask(p2m_ram_paged)           \
>                            | p2m_to_mask(p2m_ram_paging_in))
>  
>  #define P2M_PAGED_TYPES (p2m_to_mask(p2m_ram_paged))
> +#else
> +#define P2M_PAGING_TYPES 0
> +#define P2M_PAGED_TYPES 0
> +#endif

Since you don't guard the p2m related paged types in p2m_type_t is it
worth having diverging P2M_{PAGING/PAGED}_TYPES?

I guess it might be required in order to force the compiler to DCE
without having to add yet more CONFIG_MEM_PAGING guards?

I don't really have a strong opinion on whether the code should be
removed, IMO it's best to start by making it off by default at build
time and remove it in a later release?

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only
  2021-04-12 14:13 ` [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only Jan Beulich
@ 2021-04-30 10:57   ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-30 10:57 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Mon, Apr 12, 2021 at 04:13:19PM +0200, Jan Beulich wrote:
> With the hooks no longer needing setting, p2m_pt_init() degenerates to
> (about) nothing when !HVM. As a result, p2m-pt.c doesn't need building
> anymore in this case, as long as p2m_pt_init() has proper surrogates put
> in place.
> 
> Unfortunately this means some new #ifdef-ary in p2m.c, but the mid-term
> plan there is to get rid of (almost) all of it by splitting out the then
> hopefully very few remaining non-HVM pieces.
> 
> While the movement of the paging_mode_translate() check from
> p2m_remove_page() to guest_physmap_remove_page() may not be strictly
> necessary anymore (it was in an early version of this change), it looks
> more logical to live in the latter function, allowing to avoid acquiring
> the lock in the PV case. All other callers of p2m_remove_page() already
> have such a check anyway (in the altp2m case up the call stack).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain
  2021-04-12 14:14 ` [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain Jan Beulich
@ 2021-04-30 10:59   ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-30 10:59 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap

On Mon, Apr 12, 2021 at 04:14:22PM +0200, Jan Beulich wrote:
> Combine two HVM-specific sections in two cases (i.e. going from four of
> them to just two). Make defer_nested_flush bool and HVM-only, moving it
> next to other nested stuff. Move default_access up into a padding hole.
> 
> When moving them anyway, also adjust comment style.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported
  2021-04-30  9:55   ` Roger Pau Monné
@ 2021-04-30 14:16     ` Jan Beulich
  2021-04-30 14:37       ` Roger Pau Monné
  0 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2021-04-30 14:16 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu, Daniel de Graaf,
	Paul Durrant, Tamas K Lengyel, Petre Pircalabu, Alexandru Isaila

On 30.04.2021 11:55, Roger Pau Monné wrote:
> On Mon, Apr 12, 2021 at 04:12:41PM +0200, Jan Beulich wrote:
>> --- a/xen/arch/x86/x86_64/compat/mm.c
>> +++ b/xen/arch/x86/x86_64/compat/mm.c
>> @@ -155,8 +155,10 @@ int compat_arch_memory_op(unsigned long
>>      case XENMEM_get_sharing_shared_pages:
>>          return mem_sharing_get_nr_shared_mfns();
>>  
>> +#ifdef CONFIG_MEM_PAGING
>>      case XENMEM_paging_op:
>>          return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
>> +#endif
>>  
>>  #ifdef CONFIG_MEM_SHARING
>>      case XENMEM_sharing_op:
>> --- a/xen/arch/x86/x86_64/mm.c
>> +++ b/xen/arch/x86/x86_64/mm.c
>> @@ -994,8 +994,10 @@ long subarch_memory_op(unsigned long cmd
>>      case XENMEM_get_sharing_shared_pages:
>>          return mem_sharing_get_nr_shared_mfns();
>>  
>> +#ifdef CONFIG_MEM_PAGING
>>      case XENMEM_paging_op:
>>          return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
>> +#endif
> 
> I would create a dummy handler when !CONFIG_MEM_PAGING in
> asm-x86/mem_paging.h.

I was simply following the neighboring mem-sharing approach,
which you've stripped here, but which is still visible in the
xen/arch/x86/x86_64/compat/mm.c change above. I think the two
are helpful to be similar in such aspects.

>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -136,11 +136,16 @@ typedef unsigned int p2m_query_t;
>>  #define P2M_PAGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
>>                              | p2m_to_mask(p2m_ram_logdirty) )
>>  
>> +#ifdef CONFIG_MEM_PAGING
>>  #define P2M_PAGING_TYPES (p2m_to_mask(p2m_ram_paging_out)        \
>>                            | p2m_to_mask(p2m_ram_paged)           \
>>                            | p2m_to_mask(p2m_ram_paging_in))
>>  
>>  #define P2M_PAGED_TYPES (p2m_to_mask(p2m_ram_paged))
>> +#else
>> +#define P2M_PAGING_TYPES 0
>> +#define P2M_PAGED_TYPES 0
>> +#endif
> 
> Since you don't guard the p2m related paged types in p2m_type_t is it
> worth having diverging P2M_{PAGING/PAGED}_TYPES?
> 
> I guess it might be required in order to force the compiler to DCE
> without having to add yet more CONFIG_MEM_PAGING guards?

Indeed, this is the reason.
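
To spell that out (a sketch; the predicate is assumed to have the usual
p2m_to_mask()-based shape), with the mask forced to 0 a consumer like

    if ( p2m_is_paging(p2mt) )  /* (p2m_to_mask(p2mt) & 0) -> always false */
        rc = -ENOENT;           /* hypothetical consumer, elided by DCE */

becomes compile-time dead, so no further CONFIG_MEM_PAGING guards are
needed around such uses.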

> I don't really have a strong opinion on whether the code should be
> removed, IMO it's best to start by making it off by default at build
> time and remove it in a later release?

Matches my way of thinking. I also wouldn't want to be the one to
delete code completely out of the blue.

Jan


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported
  2021-04-30 14:16     ` Jan Beulich
@ 2021-04-30 14:37       ` Roger Pau Monné
  0 siblings, 0 replies; 35+ messages in thread
From: Roger Pau Monné @ 2021-04-30 14:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu, Daniel de Graaf,
	Paul Durrant, Tamas K Lengyel, Petre Pircalabu, Alexandru Isaila

On Fri, Apr 30, 2021 at 04:16:24PM +0200, Jan Beulich wrote:
> On 30.04.2021 11:55, Roger Pau Monné wrote:
> > On Mon, Apr 12, 2021 at 04:12:41PM +0200, Jan Beulich wrote:
> >> --- a/xen/arch/x86/x86_64/compat/mm.c
> >> +++ b/xen/arch/x86/x86_64/compat/mm.c
> >> @@ -155,8 +155,10 @@ int compat_arch_memory_op(unsigned long
> >>      case XENMEM_get_sharing_shared_pages:
> >>          return mem_sharing_get_nr_shared_mfns();
> >>  
> >> +#ifdef CONFIG_MEM_PAGING
> >>      case XENMEM_paging_op:
> >>          return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
> >> +#endif
> >>  
> >>  #ifdef CONFIG_MEM_SHARING
> >>      case XENMEM_sharing_op:
> >> --- a/xen/arch/x86/x86_64/mm.c
> >> +++ b/xen/arch/x86/x86_64/mm.c
> >> @@ -994,8 +994,10 @@ long subarch_memory_op(unsigned long cmd
> >>      case XENMEM_get_sharing_shared_pages:
> >>          return mem_sharing_get_nr_shared_mfns();
> >>  
> >> +#ifdef CONFIG_MEM_PAGING
> >>      case XENMEM_paging_op:
> >>          return mem_paging_memop(guest_handle_cast(arg, xen_mem_paging_op_t));
> >> +#endif
> > 
> > I would create a dummy handler when !CONFIG_MEM_PAGING in
> > asm-x86/mem_paging.h.
> 
> I was simply following the neighboring mem-sharing approach,
> which you've stripped here, but which is still visible in the
> xen/arch/x86/x86_64/compat/mm.c change above. I think the two
> are helpful to be similar in such aspects.
> 
> >> --- a/xen/include/asm-x86/p2m.h
> >> +++ b/xen/include/asm-x86/p2m.h
> >> @@ -136,11 +136,16 @@ typedef unsigned int p2m_query_t;
> >>  #define P2M_PAGEABLE_TYPES (p2m_to_mask(p2m_ram_rw) \
> >>                              | p2m_to_mask(p2m_ram_logdirty) )
> >>  
> >> +#ifdef CONFIG_MEM_PAGING
> >>  #define P2M_PAGING_TYPES (p2m_to_mask(p2m_ram_paging_out)        \
> >>                            | p2m_to_mask(p2m_ram_paged)           \
> >>                            | p2m_to_mask(p2m_ram_paging_in))
> >>  
> >>  #define P2M_PAGED_TYPES (p2m_to_mask(p2m_ram_paged))
> >> +#else
> >> +#define P2M_PAGING_TYPES 0
> >> +#define P2M_PAGED_TYPES 0
> >> +#endif
> > 
> > Since you don't guard the p2m related paged types in p2m_type_t is it
> > worth having diverging P2M_{PAGING/PAGED}_TYPES?
> > 
> > I guess it might be required in order to force the compiler to DCE
> > without having to add yet more CONFIG_MEM_PAGING guards?
> 
> Indeed, this is the reason.
> 
> > I don't really have a strong opinion on whether the code should be
> > removed, IMO it's best to start by making it off by default at build
> > time and remove it in a later release?
> 
> Matches my way of thinking. I also wouldn't want to be the one to
> delete code completely out of the blue.

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2021-04-30 14:38 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-12 14:03 [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
2021-04-12 14:05 ` [PATCH v2 01/12] x86/p2m: set_{foreign,mmio}_p2m_entry() are HVM-only Jan Beulich
2021-04-29 13:17   ` Roger Pau Monné
2021-04-29 14:09     ` Jan Beulich
2021-04-29 15:06       ` Roger Pau Monné
2021-04-12 14:06 ` [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() " Jan Beulich
2021-04-29 14:48   ` Roger Pau Monné
2021-04-29 15:01     ` Jan Beulich
2021-04-12 14:07 ` [PATCH v2 03/12] x86/mm: the gva_to_gfn() hook is HVM-only Jan Beulich
2021-04-15 16:22   ` Tim Deegan
2021-04-12 14:07 ` [PATCH v2 04/12] AMD/IOMMU: guest IOMMU support is for HVM only Jan Beulich
2021-04-29 15:14   ` Roger Pau Monné
2021-04-12 14:08 ` [PATCH v2 05/12] x86/p2m: change_entry_type_* hooks are HVM-only Jan Beulich
2021-04-29 15:49   ` Roger Pau Monné
2021-04-12 14:08 ` [PATCH v2 06/12] x86/p2m: hardware-log-dirty related " Jan Beulich
2021-04-29 16:05   ` Roger Pau Monné
2021-04-12 14:09 ` [PATCH v2 07/12] x86/p2m: the recalc hook is HVM-only Jan Beulich
2021-04-30  9:27   ` Roger Pau Monné
2021-04-12 14:10 ` [PATCH v2 08/12] x86: mem-access " Jan Beulich
2021-04-12 14:16   ` Tamas K Lengyel
2021-04-12 14:48   ` Isaila Alexandru
2021-04-12 14:12 ` [PATCH v2 09/12] x86: make mem-paging configuarable and default it to off for being unsupported Jan Beulich
2021-04-12 14:18   ` Tamas K Lengyel
2021-04-12 14:47     ` Isaila Alexandru
2021-04-12 14:27   ` Isaila Alexandru
2021-04-30  9:55   ` Roger Pau Monné
2021-04-30 14:16     ` Jan Beulich
2021-04-30 14:37       ` Roger Pau Monné
2021-04-12 14:13 ` [PATCH v2 10/12] x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only Jan Beulich
2021-04-30 10:57   ` Roger Pau Monné
2021-04-12 14:13 ` [PATCH v2 11/12] x86/p2m: write_p2m_entry_{pre,post} hooks " Jan Beulich
2021-04-15 16:22   ` Tim Deegan
2021-04-12 14:14 ` [PATCH v2 12/12] x86/p2m: re-arrange struct p2m_domain Jan Beulich
2021-04-30 10:59   ` Roger Pau Monné
2021-04-29  9:27 ` Ping: [PATCH v2 00/12] x86/p2m: restrict more code to build just for HVM Jan Beulich
