* [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only
From: Jan Beulich @ 2021-07-05 16:03 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

The primary goal of this series is to leave p2m.c with, as its leading
comment suggests, just code for "physical-to-machine mappings for
automatically-translated domains". This requires splitting a few
functions, with their non-HVM parts moved elsewhere.

01: x86/P2M: rename p2m_remove_page()
02: x86/P2M: introduce p2m_{add,remove}_page()
03: x86/P2M: drop a few CONFIG_HVM
04: x86/P2M: move map_domain_gfn() (again)
05: x86/mm: move guest_physmap_{add,remove}_page()
06: x86/mm: split set_identity_p2m_entry() into PV and HVM parts
07: x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
08: x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
09: x86/P2M: split out init/teardown functions
10: x86/P2M: p2m_get_page_from_gfn() is HVM-only
11: x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
12: x86/p2m: re-arrange {,__}put_gfn()
13: shr_pages field is MEM_SHARING-only
14: paged_pages field is MEM_PAGING-only
15: x86/P2M: p2m.c is HVM-only
16: x86/P2M: the majority of struct p2m_domain's fields are HVM-only

Jan




* Re: [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only
From: Jan Beulich @ 2021-07-05 16:05 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

On 05.07.2021 18:03, Jan Beulich wrote:
> The primary goal of this series is to leave p2m.c with, as its leading
> comment suggests, just code for "physical-to-machine mappings for
> automatically-translated domains". This requires splitting a few
> functions, with their non-HVM parts moved elsewhere.

Forgot to spell out that this goes on top of "[PATCH 0/2] x86/mem-sharing:
a fix and some cleanup".

Jan

> 01: x86/P2M: rename p2m_remove_page()
> 02: x86/P2M: introduce p2m_{add,remove}_page()
> 03: x86/P2M: drop a few CONFIG_HVM
> 04: x86/P2M: move map_domain_gfn() (again)
> 05: x86/mm: move guest_physmap_{add,remove}_page()
> 06: x86/mm: split set_identity_p2m_entry() into PV and HVM parts
> 07: x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
> 08: x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
> 09: x86/P2M: split out init/teardown functions
> 10: x86/P2M: p2m_get_page_from_gfn() is HVM-only
> 11: x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
> 12: x86/p2m: re-arrange {,__}put_gfn()
> 13: shr_pages field is MEM_SHARING-only
> 14: paged_pages field is MEM_PAGING-only
> 15: x86/P2M: p2m.c is HVM-only
> 16: x86/P2M: the majority of struct p2m_domain's fields are HVM-only
> 
> Jan
> 
> 




* [PATCH 01/16] x86/P2M: rename p2m_remove_page()
From: Jan Beulich @ 2021-07-05 16:05 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This is in preparation for re-using the original name.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -788,8 +788,8 @@ void p2m_final_teardown(struct domain *d
 #ifdef CONFIG_HVM
 
 static int __must_check
-p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
-                unsigned int page_order)
+p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
+                 unsigned int page_order)
 {
     unsigned long i;
     p2m_type_t t;
@@ -840,7 +840,7 @@ guest_physmap_remove_page(struct domain
         return 0;
 
     gfn_lock(p2m, gfn, page_order);
-    rc = p2m_remove_page(p2m, gfn, mfn, page_order);
+    rc = p2m_remove_entry(p2m, gfn, mfn, page_order);
     gfn_unlock(p2m, gfn, page_order);
 
     return rc;
@@ -1009,7 +1009,7 @@ guest_physmap_add_entry(struct domain *d
                 P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
                           gfn_x(ogfn) , mfn_x(omfn));
                 if ( mfn_eq(omfn, mfn_add(mfn, i)) &&
-                     (rc = p2m_remove_page(p2m, ogfn, omfn, 0)) )
+                     (rc = p2m_remove_entry(p2m, ogfn, omfn, 0)) )
                     goto out;
             }
         }
@@ -2382,7 +2382,7 @@ int p2m_change_altp2m_gfn(struct domain
     {
         mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
         rc = mfn_valid(mfn)
-             ? p2m_remove_page(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
+             ? p2m_remove_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
              : 0;
         goto out;
     }




* [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page()
From: Jan Beulich @ 2021-07-05 16:06 UTC
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Paul Durrant

p2m_add_page() is simply a rename of guest_physmap_add_entry().
p2m_remove_page() then is its counterpart, even though this renders
guest_physmap_remove_page() a thin wrapper around it. This way callers
can use properly matched pairs of functions (a pairing previously
violated by hvm/grant_table.c).

In HVM-specific code further avoid going through the guest_physmap_*()
layer, and instead use the two new/renamed functions directly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
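
To illustrate the intended pairing (a hypothetical caller, merely a
sketch and not code introduced by this patch): whatever gets mapped
via p2m_add_page() should be unmapped via p2m_remove_page(), rather
than mixing layers the way hvm/grant_table.c previously did:

    /* Hypothetical example of a properly matched pair: */
    rc = p2m_add_page(d, gfn, mfn, PAGE_ORDER_4K, p2m_ram_rw);
    if ( !rc )
    {
        /* ... make use of the mapping ... */
        rc = p2m_remove_page(d, gfn, mfn, PAGE_ORDER_4K);
    }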

--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -174,8 +174,7 @@ static int __init pvh_populate_memory_ra
             continue;
         }
 
-        rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
-                                    order);
+        rc = p2m_add_page(d, _gfn(start), page_to_mfn(page), order, p2m_ram_rw);
         if ( rc != 0 )
         {
             printk("Failed to populate memory: [%#lx,%#lx): %d\n",
--- a/xen/arch/x86/hvm/grant_table.c
+++ b/xen/arch/x86/hvm/grant_table.c
@@ -39,9 +39,8 @@ int create_grant_p2m_mapping(uint64_t ad
         p2mt = p2m_grant_map_ro;
     else
         p2mt = p2m_grant_map_rw;
-    rc = guest_physmap_add_entry(current->domain,
-                                 _gfn(addr >> PAGE_SHIFT),
-                                 frame, PAGE_ORDER_4K, p2mt);
+    rc = p2m_add_page(current->domain, _gfn(addr >> PAGE_SHIFT),
+                      frame, PAGE_ORDER_4K, p2mt);
     if ( rc )
         return GNTST_general_error;
     else
@@ -68,7 +67,7 @@ int replace_grant_p2m_mapping(uint64_t a
                  type, mfn_x(old_mfn), mfn_x(frame));
         return GNTST_general_error;
     }
-    if ( guest_physmap_remove_page(d, _gfn(gfn), frame, PAGE_ORDER_4K) )
+    if ( p2m_remove_page(d, _gfn(gfn), frame, PAGE_ORDER_4K) )
     {
         put_gfn(d, gfn);
         return GNTST_general_error;
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -188,8 +188,7 @@ static void hvm_remove_ioreq_gfn(struct
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
 
-    if ( guest_physmap_remove_page(d, iorp->gfn,
-                                   page_to_mfn(iorp->page), 0) )
+    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
@@ -205,8 +204,7 @@ static int hvm_add_ioreq_gfn(struct iore
 
     clear_page(iorp->va);
 
-    rc = guest_physmap_add_page(d, iorp->gfn,
-                                page_to_mfn(iorp->page), 0);
+    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
     if ( rc == 0 )
         paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
 
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -829,15 +829,17 @@ p2m_remove_entry(struct p2m_domain *p2m,
 }
 
 int
-guest_physmap_remove_page(struct domain *d, gfn_t gfn,
-                          mfn_t mfn, unsigned int page_order)
+p2m_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                unsigned int page_order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
 
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
     if ( !paging_mode_translate(d) )
-        return 0;
+    {
+        ASSERT_UNREACHABLE();
+        return -EPERM;
+    }
 
     gfn_lock(p2m, gfn, page_order);
     rc = p2m_remove_entry(p2m, gfn, mfn, page_order);
@@ -846,6 +848,17 @@ guest_physmap_remove_page(struct domain
     return rc;
 }
 
+int
+guest_physmap_remove_page(struct domain *d, gfn_t gfn,
+                          mfn_t mfn, unsigned int page_order)
+{
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+        return 0;
+
+    return p2m_remove_page(d, gfn, mfn, page_order);
+}
+
 #endif /* CONFIG_HVM */
 
 int
@@ -884,14 +897,14 @@ guest_physmap_add_page(struct domain *d,
         return 0;
     }
 
-    return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
+    return p2m_add_page(d, gfn, mfn, page_order, p2m_ram_rw);
 }
 
 #ifdef CONFIG_HVM
 
 int
-guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
-                        unsigned int page_order, p2m_type_t t)
+p2m_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+             unsigned int page_order, p2m_type_t t)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long i;
@@ -2665,7 +2678,7 @@ static int p2m_add_foreign(struct domain
     {
         if ( is_special_page(mfn_to_page(prev_mfn)) )
             /* Special pages are simply unhooked from this phys slot */
-            rc = guest_physmap_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
+            rc = p2m_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(tdom, gpfn);
@@ -2673,7 +2686,7 @@ static int p2m_add_foreign(struct domain
             goto put_both;
     }
     /*
-     * Create the new mapping. Can't use guest_physmap_add_page() because it
+     * Create the new mapping. Can't use p2m_add_page() because it
      * will update the m2p table which will result in  mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
@@ -2771,7 +2784,7 @@ int xenmem_add_to_physmap_one(
     {
         if ( is_special_page(mfn_to_page(prev_mfn)) )
             /* Special pages are simply unhooked from this phys slot. */
-            rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
+            rc = p2m_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
         else if ( !mfn_eq(mfn, prev_mfn) )
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
@@ -2784,11 +2797,11 @@ int xenmem_add_to_physmap_one(
 
     /* Unmap from old location, if any. */
     if ( old_gpfn != INVALID_M2P_ENTRY )
-        rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
+        rc = p2m_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
-        rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
+        rc = p2m_add_page(d, gpfn, mfn, PAGE_ORDER_4K, p2m_ram_rw);
 
  put_both:
     /*
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -571,10 +571,11 @@ int p2m_alloc_table(struct p2m_domain *p
 void p2m_teardown(struct p2m_domain *p2m);
 void p2m_final_teardown(struct domain *d);
 
-/* Add a page to a domain's p2m table */
-int guest_physmap_add_entry(struct domain *d, gfn_t gfn,
-                            mfn_t mfn, unsigned int page_order,
-                            p2m_type_t t);
+/* Add/remove a page to/from a domain's p2m table. */
+int p2m_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                 unsigned int page_order, p2m_type_t t);
+int p2m_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                    unsigned int page_order);
 
 /* Untyped version for RAM only, for compatibility and PV. */
 int guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,




* [PATCH 03/16] x86/P2M: drop a few CONFIG_HVM
From: Jan Beulich @ 2021-07-05 16:06 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This is to make it easier to see which parts of p2m.c still aren't HVM-
specific: In one case the conditionals sat in an already guarded region,
while in the other case P2M_AUDIT implies HVM.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
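
For reference, "P2M_AUDIT implies HVM" refers to how the macro is
derived from Kconfig; a sketch of the relationship (the exact
expression lives in asm-x86/p2m.h and may differ in detail):

    #if defined(CONFIG_DEBUG) && defined(CONFIG_HVM)
    #define P2M_AUDIT 1
    #else
    #define P2M_AUDIT 0
    #endif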

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1584,11 +1584,10 @@ p2m_flush_table_locked(struct p2m_domain
      * when discarding them.
      */
     ASSERT(!p2m_is_hostp2m(p2m));
-#ifdef CONFIG_HVM
-    /* Nested p2m's do not do pod, hence the asserts (and no pod lock)*/
+
+    /* Nested p2m's do not do pod, hence the asserts (and no pod lock) */
     ASSERT(page_list_empty(&p2m->pod.super));
     ASSERT(page_list_empty(&p2m->pod.single));
-#endif
 
     /* No need to flush if it's already empty */
     if ( p2m_is_nestedp2m(p2m) && p2m->np2m_base == P2M_BASE_EADDR )
@@ -2497,7 +2496,6 @@ int p2m_altp2m_propagate_change(struct d
 
     return ret;
 }
-#endif /* CONFIG_HVM */
 
 /*** Audit ***/
 
@@ -2603,8 +2601,6 @@ out_p2m_audit:
 }
 #endif /* P2M_AUDIT */
 
-#ifdef CONFIG_HVM
-
 /*
  * Add frame from foreign domain to target domain's physmap. Similar to
  * XENMAPSPACE_gmfn but the frame is foreign being mapped into current,




* [PATCH 04/16] x86/P2M: move map_domain_gfn() (again)
From: Jan Beulich @ 2021-07-05 16:07 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

The main user is the guest walking code, so move it back there; commit
9a6787cc3809 ("x86/mm: build map_domain_gfn() just once") would perhaps
better have kept it there in the first place. This way it'll only get
built when it's actually needed (and still only once).

This also eliminates one more CONFIG_HVM conditional from p2m.c.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
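
For orientation, the "built just once" property comes from combining
the per-width compilation of guest_walk.c (rule quoted from
xen/arch/x86/mm/Makefile) with the guard added below:

    guest_walk_%.o: guest_walk.c Makefile
    	$(CC) $(c_flags) -DGUEST_PAGING_LEVELS=$* -c $< -o $@

    #if GUEST_PAGING_LEVELS == CONFIG_PAGING_LEVELS

The file is compiled once per guest paging width, but the #if holds
for exactly one of the resulting objects, so map_domain_gfn() is
still emitted only once.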

--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -535,6 +535,56 @@ guest_walk_tables(const struct vcpu *v,
     return walk_ok;
 }
 
+#if GUEST_PAGING_LEVELS == CONFIG_PAGING_LEVELS
+/*
+ * If the map is non-NULL, we leave this function having acquired an extra ref
+ * on mfn_to_page(*mfn).  In all cases, *pfec contains appropriate
+ * synthetic/structure PFEC_* bits.
+ */
+void *map_domain_gfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t *mfn,
+                     p2m_query_t q, uint32_t *pfec)
+{
+    p2m_type_t p2mt;
+    struct page_info *page;
+
+    if ( !gfn_valid(p2m->domain, gfn) )
+    {
+        *pfec = PFEC_reserved_bit | PFEC_page_present;
+        return NULL;
+    }
+
+    /* Translate the gfn, unsharing if shared. */
+    page = p2m_get_page_from_gfn(p2m, gfn, &p2mt, NULL, q);
+    if ( p2m_is_paging(p2mt) )
+    {
+        ASSERT(p2m_is_hostp2m(p2m));
+        if ( page )
+            put_page(page);
+        p2m_mem_paging_populate(p2m->domain, gfn);
+        *pfec = PFEC_page_paged;
+        return NULL;
+    }
+    if ( p2m_is_shared(p2mt) )
+    {
+        if ( page )
+            put_page(page);
+        *pfec = PFEC_page_shared;
+        return NULL;
+    }
+    if ( !page )
+    {
+        *pfec = 0;
+        return NULL;
+    }
+
+    *pfec = PFEC_page_present;
+    *mfn = page_to_mfn(page);
+    ASSERT(mfn_valid(*mfn));
+
+    return map_domain_page(*mfn);
+}
+#endif
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1869,58 +1869,6 @@ unsigned long paging_gva_to_gfn(struct v
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
 }
 
-#endif /* CONFIG_HVM */
-
-/*
- * If the map is non-NULL, we leave this function having acquired an extra ref
- * on mfn_to_page(*mfn).  In all cases, *pfec contains appropriate
- * synthetic/structure PFEC_* bits.
- */
-void *map_domain_gfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t *mfn,
-                     p2m_query_t q, uint32_t *pfec)
-{
-    p2m_type_t p2mt;
-    struct page_info *page;
-
-    if ( !gfn_valid(p2m->domain, gfn) )
-    {
-        *pfec = PFEC_reserved_bit | PFEC_page_present;
-        return NULL;
-    }
-
-    /* Translate the gfn, unsharing if shared. */
-    page = p2m_get_page_from_gfn(p2m, gfn, &p2mt, NULL, q);
-    if ( p2m_is_paging(p2mt) )
-    {
-        ASSERT(p2m_is_hostp2m(p2m));
-        if ( page )
-            put_page(page);
-        p2m_mem_paging_populate(p2m->domain, gfn);
-        *pfec = PFEC_page_paged;
-        return NULL;
-    }
-    if ( p2m_is_shared(p2mt) )
-    {
-        if ( page )
-            put_page(page);
-        *pfec = PFEC_page_shared;
-        return NULL;
-    }
-    if ( !page )
-    {
-        *pfec = 0;
-        return NULL;
-    }
-
-    *pfec = PFEC_page_present;
-    *mfn = page_to_mfn(page);
-    ASSERT(mfn_valid(*mfn));
-
-    return map_domain_page(*mfn);
-}
-
-#ifdef CONFIG_HVM
-
 static unsigned int mmio_order(const struct domain *d,
                                unsigned long start_fn, unsigned long nr)
 {




* [PATCH 05/16] x86/mm: move guest_physmap_{add,remove}_page()
From: Jan Beulich @ 2021-07-05 16:07 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

... to a new file, separating these wrappers, which alone also handle
the non-translated case, from their HVM-specific backing functions.

To avoid having a new CONFIG_HVM conditional in there, do away with
the inline placeholder.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
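
A minimal sketch of the resulting layering (abbreviated from the code
movement below): physmap.c is built unconditionally (obj-y), so the
!CONFIG_HVM inline placeholder in p2m-common.h is no longer needed.

    /* physmap.c - available for all configurations: */
    int guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                                  mfn_t mfn, unsigned int page_order)
    {
        /* IOMMU for PV guests is handled in get_page_type()/put_page(). */
        if ( !paging_mode_translate(d) )
            return 0;

        return p2m_remove_page(d, gfn, mfn, page_order);  /* HVM backing */
    }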

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing
 obj-y += p2m.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
+obj-y += physmap.o
 
 guest_walk_%.o: guest_walk.c Makefile
 	$(CC) $(c_flags) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -849,60 +849,6 @@ p2m_remove_page(struct domain *d, gfn_t
 }
 
 int
-guest_physmap_remove_page(struct domain *d, gfn_t gfn,
-                          mfn_t mfn, unsigned int page_order)
-{
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
-    if ( !paging_mode_translate(d) )
-        return 0;
-
-    return p2m_remove_page(d, gfn, mfn, page_order);
-}
-
-#endif /* CONFIG_HVM */
-
-int
-guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-                       unsigned int page_order)
-{
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
-    if ( !paging_mode_translate(d) )
-    {
-        struct page_info *page = mfn_to_page(mfn);
-        unsigned long i;
-
-        /*
-         * Our interface for PV guests wrt IOMMU entries hasn't been very
-         * clear; but historically, pages have started out with IOMMU mappings,
-         * and only lose them when changed to a different page type.
-         *
-         * Retain this property by grabbing a writable type ref and then
-         * dropping it immediately.  The result will be pages that have a
-         * writable type (and an IOMMU entry), but a count of 0 (such that
-         * any guest-requested type changes succeed and remove the IOMMU
-         * entry).
-         */
-        for ( i = 0; i < (1UL << page_order); ++i, ++page )
-        {
-            if ( !need_iommu_pt_sync(d) )
-                /* nothing */;
-            else if ( get_page_and_type(page, d, PGT_writable_page) )
-                put_page_and_type(page);
-            else
-                return -EINVAL;
-
-            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
-        }
-
-        return 0;
-    }
-
-    return p2m_add_page(d, gfn, mfn, page_order, p2m_ram_rw);
-}
-
-#ifdef CONFIG_HVM
-
-int
 p2m_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
              unsigned int page_order, p2m_type_t t)
 {
--- /dev/null
+++ b/xen/arch/x86/mm/physmap.c
@@ -0,0 +1,85 @@
+/******************************************************************************
+ * arch/x86/mm/physmap.c
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/p2m.h>
+
+#include "mm-locks.h"
+
+int
+guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                       unsigned int page_order)
+{
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+    {
+        struct page_info *page = mfn_to_page(mfn);
+        unsigned long i;
+
+        /*
+         * Our interface for PV guests wrt IOMMU entries hasn't been very
+         * clear; but historically, pages have started out with IOMMU mappings,
+         * and only lose them when changed to a different page type.
+         *
+         * Retain this property by grabbing a writable type ref and then
+         * dropping it immediately.  The result will be pages that have a
+         * writable type (and an IOMMU entry), but a count of 0 (such that
+         * any guest-requested type changes succeed and remove the IOMMU
+         * entry).
+         */
+        for ( i = 0; i < (1UL << page_order); ++i, ++page )
+        {
+            if ( !need_iommu_pt_sync(d) )
+                /* nothing */;
+            else if ( get_page_and_type(page, d, PGT_writable_page) )
+                put_page_and_type(page);
+            else
+                return -EINVAL;
+
+            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
+        }
+
+        return 0;
+    }
+
+    return p2m_add_page(d, gfn, mfn, page_order, p2m_ram_rw);
+}
+
+int
+guest_physmap_remove_page(struct domain *d, gfn_t gfn,
+                          mfn_t mfn, unsigned int page_order)
+{
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+        return 0;
+
+    return p2m_remove_page(d, gfn, mfn, page_order);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -8,18 +8,9 @@ int set_foreign_p2m_entry(struct domain
                           unsigned long gfn, mfn_t mfn);
 
 /* Remove a page from a domain's p2m table */
-#ifdef CONFIG_HVM
 int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                           unsigned int page_order);
-#else
-static inline int
-guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-                          unsigned int page_order)
-{
-    return 0;
-}
-#endif
 
 /* Map MMIO regions in the p2m: start_gfn and nr describe the range in
  *  * the guest physical address space to map, starting from the machine




* [PATCH 06/16] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
From: Jan Beulich @ 2021-07-05 16:07 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

..., moving the former (i.e. the PV handling) into the new physmap.c;
clear_identity_p2m_entry() gets split the same way.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -43,6 +43,7 @@
 #include <xsm/xsm.h>
 
 #include "mm-locks.h"
+#include "p2m.h"
 
 /* Override macro from asm/page.h to make work with mfn_t */
 #undef virt_to_mfn
@@ -1366,12 +1367,9 @@ int clear_mmio_p2m_entry(struct domain *
     return rc;
 }
 
-#endif /* CONFIG_HVM */
-
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
+int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {
-#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
@@ -1381,13 +1379,8 @@ int set_identity_p2m_entry(struct domain
 
     if ( !paging_mode_translate(d) )
     {
-#endif
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
-#ifdef CONFIG_HVM
+        ASSERT_UNREACHABLE();
+        return -EPERM;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1413,12 +1406,10 @@ int set_identity_p2m_entry(struct domain
 
     gfn_unlock(p2m, gfn, 0);
     return ret;
-#endif
 }
 
-int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
+int p2m_remove_identity_entry(struct domain *d, unsigned long gfn_l)
 {
-#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
@@ -1428,11 +1419,8 @@ int clear_identity_p2m_entry(struct doma
 
     if ( !paging_mode_translate(d) )
     {
-#endif
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
-#ifdef CONFIG_HVM
+        ASSERT_UNREACHABLE();
+        return -EPERM;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1454,7 +1442,6 @@ int clear_identity_p2m_entry(struct doma
     }
 
     return ret;
-#endif
 }
 
 #ifdef CONFIG_MEM_SHARING
@@ -1499,8 +1486,6 @@ int set_shared_p2m_entry(struct domain *
 
 #endif /* CONFIG_MEM_SHARING */
 
-#ifdef CONFIG_HVM
-
 static struct p2m_domain *
 p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
 {
--- /dev/null
+++ b/xen/arch/x86/mm/p2m.h
@@ -0,0 +1,31 @@
+/******************************************************************************
+ * arch/x86/mm/p2m.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/mem_access.h>
+
+int p2m_add_identity_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma, unsigned int flag);
+int p2m_remove_identity_entry(struct domain *d, unsigned long gfn);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/arch/x86/mm/physmap.c
+++ b/xen/arch/x86/mm/physmap.c
@@ -21,9 +21,11 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/iommu.h>
 #include <asm/p2m.h>
 
 #include "mm-locks.h"
+#include "p2m.h"
 
 int
 guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
@@ -75,6 +77,33 @@ guest_physmap_remove_page(struct domain
     return p2m_remove_page(d, gfn, mfn, page_order);
 }
 
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma, unsigned int flag)
+{
+    if ( !paging_mode_translate(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_map(d, _dfn(gfn), _mfn(gfn),
+                                1ul << PAGE_ORDER_4K,
+                                IOMMUF_readable | IOMMUF_writable);
+    }
+
+    return p2m_add_identity_entry(d, gfn, p2ma, flag);
+}
+
+int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
+{
+    if ( !paging_mode_translate(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_unmap(d, _dfn(gfn), 1ul << PAGE_ORDER_4K);
+    }
+
+    return p2m_remove_identity_entry(d, gfn);
+}
+
 /*
  * Local variables:
  * mode: C




* [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
From: Jan Beulich @ 2021-07-05 16:09 UTC
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, Kevin Tian

This also includes the two related struct p2m_domain fields,
phys_table and pages.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
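
With phys_table and pages moving under #ifdef CONFIG_HVM, any
remaining common-code user needs a compile-time guard; a condensed
form of the pattern used in the VT-d hunk below:

    #ifdef CONFIG_HVM
        if ( iommu_use_hap_pt(d) )
            return pagetable_get_paddr(p2m_get_pagetable(p2m_get_hostp2m(d)));
    #endif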

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -94,7 +94,9 @@ static int p2m_initialise(struct domain
     int ret = 0;
 
     mm_rwlock_init(&p2m->lock);
+#ifdef CONFIG_HVM
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+#endif
 
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
@@ -628,6 +630,7 @@ struct page_info *p2m_get_page_from_gfn(
 }
 
 #ifdef CONFIG_HVM
+
 /* Returns: 0 for success, -errno for failure */
 int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
@@ -667,7 +670,6 @@ int p2m_set_entry(struct p2m_domain *p2m
 
     return rc;
 }
-#endif
 
 mfn_t p2m_alloc_ptp(struct p2m_domain *p2m, unsigned int level)
 {
@@ -746,6 +748,8 @@ int p2m_alloc_table(struct p2m_domain *p
     return 0;
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
@@ -754,7 +758,9 @@ void p2m_teardown(struct p2m_domain *p2m
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
+#ifdef CONFIG_HVM
     struct page_info *pg;
+#endif
     struct domain *d;
 
     if (p2m == NULL)
@@ -763,11 +769,16 @@ void p2m_teardown(struct p2m_domain *p2m
     d = p2m->domain;
 
     p2m_lock(p2m);
+
     ASSERT(atomic_read(&d->shr_pages) == 0);
+
+#ifdef CONFIG_HVM
     p2m->phys_table = pagetable_null();
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
         d->arch.paging.free_page(d, pg);
+#endif
+
     p2m_unlock(p2m);
 }
 
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2700,8 +2700,10 @@ int shadow_enable(struct domain *d, u32
  out_locked:
     paging_unlock(d);
  out_unlocked:
+#ifdef CONFIG_HVM
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
         p2m_teardown(p2m);
+#endif
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -339,12 +339,14 @@ static uint64_t domain_pgd_maddr(struct
 
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
 
+#ifdef CONFIG_HVM
     if ( iommu_use_hap_pt(d) )
     {
         pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
 
         return pagetable_get_paddr(pgt);
     }
+#endif
 
     if ( !hd->arch.vtd.pgd_maddr )
     {
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -202,9 +202,6 @@ struct p2m_domain {
     /* Lock that protects updates to the p2m */
     mm_rwlock_t           lock;
 
-    /* Shadow translated domain: p2m mapping */
-    pagetable_t        phys_table;
-
     /*
      * Same as a domain's dirty_cpumask but limited to
      * this p2m and those physical cpus whose vcpu's are in
@@ -223,9 +220,6 @@ struct p2m_domain {
      */
     p2m_access_t default_access;
 
-    /* Pages used to construct the p2m */
-    struct page_list_head pages;
-
     /* Host p2m: Log-dirty ranges registered for the domain. */
     struct rangeset   *logdirty_ranges;
 
@@ -233,6 +227,12 @@ struct p2m_domain {
     bool               global_logdirty;
 
 #ifdef CONFIG_HVM
+    /* Translated domain: p2m mapping */
+    pagetable_t        phys_table;
+
+    /* Pages used to construct the p2m */
+    struct page_list_head pages;
+
     /* Alternate p2m: count of vcpu's currently using this p2m. */
     atomic_t           active_vcpus;
 




* [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
From: Jan Beulich @ 2021-07-05 16:09 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

There's no need to initialize the respective data for PV domains. Note that
p2m_teardown_{alt,nested}p2m() will handle the lack-of-initialization
case fine.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
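
The teardown functions get away without adjustment because they skip
entries which were never allocated, e.g. (shape of the existing code,
abbreviated):

    for ( i = 0; i < MAX_ALTP2M; i++ )
    {
        if ( !d->arch.altp2m_p2m[i] )   /* never set up for PV domains */
            continue;
        /* ... free the instance ... */
    }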

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -102,6 +102,9 @@ static int p2m_initialise(struct domain
     p2m->default_access = p2m_access_rwx;
     p2m->p2m_class = p2m_host;
 
+    if ( !is_hvm_domain(d) )
+        return 0;
+
     p2m_pod_init(p2m);
     p2m_nestedp2m_init(p2m);
 
@@ -259,7 +262,7 @@ int p2m_init(struct domain *d)
     int rc;
 
     rc = p2m_init_hostp2m(d);
-    if ( rc )
+    if ( rc || !is_hvm_domain(d) )
         return rc;
 
 #ifdef CONFIG_HVM
--- a/xen/arch/x86/mm/p2m.h
+++ b/xen/arch/x86/mm/p2m.h
@@ -17,6 +17,8 @@
 
 #include <xen/mem_access.h>
 
+void p2m_pod_init(struct p2m_domain *p2m);
+
 int p2m_add_identity_entry(struct domain *d, unsigned long gfn,
                            p2m_access_t p2ma, unsigned int flag);
 int p2m_remove_identity_entry(struct domain *d, unsigned long gfn);
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1135,6 +1135,12 @@ p2m_pod_demand_populate(struct p2m_domai
     mfn_t mfn;
     unsigned long i;
 
+    if ( !p2m_is_hostp2m(p2m) )
+    {
+        ASSERT_UNREACHABLE();
+        return false;
+    }
+
     ASSERT(gfn_locked_by_me(p2m, gfn));
     pod_lock(p2m);
 
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -679,8 +679,6 @@ static inline long p2m_pod_entry_count(c
     return p2m->pod.entry_count;
 }
 
-void p2m_pod_init(struct p2m_domain *p2m);
-
 #else
 
 static inline bool
@@ -709,8 +707,6 @@ static inline long p2m_pod_entry_count(c
     return 0;
 }
 
-static inline void p2m_pod_init(struct p2m_domain *p2m) {}
-
 #endif
 
 




* [PATCH 09/16] x86/P2M: split out init/teardown functions
From: Jan Beulich @ 2021-07-05 16:10 UTC
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Mostly just code movement, and certainly no functional change intended.
In p2m_final_teardown() the calls to p2m_teardown_{alt,nested}p2m() need
to be guarded by an is_hvm_domain() check now, though. This matches
p2m_init(). And p2m_is_logdirty_range() also gets moved inside the (so
far) adjacent #ifdef.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
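
For orientation, a summary of where the pieces end up (derived from
the movement below, not itself part of the patch):

    /*
     * p2m-basic.c: p2m_initialise(), p2m_init_one(), p2m_free_one(),
     *              p2m_init(), p2m_teardown(), p2m_final_teardown()
     * nested.c:    p2m_nestedp2m_init(), p2m_init_nestedp2m(),
     *              p2m_teardown_nestedp2m()
     * altp2m.c:    p2m_init_altp2m(), p2m_teardown_altp2m()
     * p2m.c:       p2m_init_logdirty() / p2m_free_logdirty(), now
     *              non-static and declared in the private p2m.h
     */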

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -7,7 +7,9 @@ obj-$(CONFIG_SHADOW_PAGING) += guest_wal
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-$(CONFIG_MEM_PAGING) += mem_paging.o
 obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
+obj-$(CONFIG_HVM) += nested.o
 obj-y += p2m.o
+obj-y += p2m-basic.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
 obj-y += physmap.o
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -19,6 +19,8 @@
 #include <asm/hvm/hvm.h>
 #include <asm/p2m.h>
 #include <asm/altp2m.h>
+#include "mm-locks.h"
+#include "p2m.h"
 
 void
 altp2m_vcpu_initialise(struct vcpu *v)
@@ -123,6 +125,44 @@ void altp2m_vcpu_disable_ve(struct vcpu
     }
 }
 
+int p2m_init_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    mm_lock_init(&d->arch.altp2m_list_lock);
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_altp2m(d);
+            return -ENOMEM;
+        }
+        p2m->p2m_class = p2m_alternate;
+        p2m->access_required = hostp2m->access_required;
+        _atomic_set(&p2m->active_vcpus, 0);
+    }
+
+    return 0;
+}
+
+void p2m_teardown_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( !d->arch.altp2m_p2m[i] )
+            continue;
+        p2m = d->arch.altp2m_p2m[i];
+        d->arch.altp2m_p2m[i] = NULL;
+        p2m_free_one(p2m);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -25,8 +25,6 @@
 #ifndef _MM_LOCKS_H
 #define _MM_LOCKS_H
 
-#include <asm/mem_sharing.h>
-
 /* Per-CPU variable for enforcing the lock ordering */
 DECLARE_PER_CPU(int, mm_lock_level);
 
--- /dev/null
+++ b/xen/arch/x86/mm/nested.c
@@ -0,0 +1,74 @@
+/******************************************************************************
+ * arch/x86/mm/nested.c
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/sched.h>
+#include <asm/p2m.h>
+#include "mm-locks.h"
+#include "p2m.h"
+
+void p2m_nestedp2m_init(struct p2m_domain *p2m)
+{
+    INIT_LIST_HEAD(&p2m->np2m_list);
+
+    p2m->np2m_base = P2M_BASE_EADDR;
+    p2m->np2m_generation = 0;
+}
+
+int p2m_init_nestedp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    mm_lock_init(&d->arch.nested_p2m_lock);
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        d->arch.nested_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_nestedp2m(d);
+            return -ENOMEM;
+        }
+        p2m->p2m_class = p2m_nested;
+        p2m->write_p2m_entry_pre = NULL;
+        p2m->write_p2m_entry_post = nestedp2m_write_p2m_entry_post;
+        list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
+    }
+
+    return 0;
+}
+
+void p2m_teardown_nestedp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        if ( !d->arch.nested_p2m[i] )
+            continue;
+        p2m = d->arch.nested_p2m[i];
+        list_del(&p2m->np2m_list);
+        p2m_free_one(p2m);
+        d->arch.nested_p2m[i] = NULL;
+    }
+}
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -35,7 +35,6 @@
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
 #include <asm/mem_sharing.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/altp2m.h>
@@ -56,17 +55,9 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
-static void p2m_nestedp2m_init(struct p2m_domain *p2m)
-{
 #ifdef CONFIG_HVM
-    INIT_LIST_HEAD(&p2m->np2m_list);
 
-    p2m->np2m_base = P2M_BASE_EADDR;
-    p2m->np2m_generation = 0;
-#endif
-}
-
-static int p2m_init_logdirty(struct p2m_domain *p2m)
+int p2m_init_logdirty(struct p2m_domain *p2m)
 {
     if ( p2m->logdirty_ranges )
         return 0;
@@ -79,7 +70,7 @@ static int p2m_init_logdirty(struct p2m_
     return 0;
 }
 
-static void p2m_free_logdirty(struct p2m_domain *p2m)
+void p2m_free_logdirty(struct p2m_domain *p2m)
 {
     if ( !p2m->logdirty_ranges )
         return;
@@ -88,205 +79,6 @@ static void p2m_free_logdirty(struct p2m
     p2m->logdirty_ranges = NULL;
 }
 
-/* Init the datastructures for later use by the p2m code */
-static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
-{
-    int ret = 0;
-
-    mm_rwlock_init(&p2m->lock);
-#ifdef CONFIG_HVM
-    INIT_PAGE_LIST_HEAD(&p2m->pages);
-#endif
-
-    p2m->domain = d;
-    p2m->default_access = p2m_access_rwx;
-    p2m->p2m_class = p2m_host;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
-
-    p2m_pod_init(p2m);
-    p2m_nestedp2m_init(p2m);
-
-    if ( hap_enabled(d) && cpu_has_vmx )
-        ret = ept_p2m_init(p2m);
-    else
-        p2m_pt_init(p2m);
-
-    spin_lock_init(&p2m->ioreq.lock);
-
-    return ret;
-}
-
-static struct p2m_domain *p2m_init_one(struct domain *d)
-{
-    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
-
-    if ( !p2m )
-        return NULL;
-
-    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
-        goto free_p2m;
-
-    if ( p2m_initialise(d, p2m) )
-        goto free_cpumask;
-    return p2m;
-
-free_cpumask:
-    free_cpumask_var(p2m->dirty_cpumask);
-free_p2m:
-    xfree(p2m);
-    return NULL;
-}
-
-static void p2m_free_one(struct p2m_domain *p2m)
-{
-    p2m_free_logdirty(p2m);
-    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
-        ept_p2m_uninit(p2m);
-    free_cpumask_var(p2m->dirty_cpumask);
-    xfree(p2m);
-}
-
-static int p2m_init_hostp2m(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_init_one(d);
-    int rc;
-
-    if ( !p2m )
-        return -ENOMEM;
-
-    rc = p2m_init_logdirty(p2m);
-
-    if ( !rc )
-        d->arch.p2m = p2m;
-    else
-        p2m_free_one(p2m);
-
-    return rc;
-}
-
-static void p2m_teardown_hostp2m(struct domain *d)
-{
-    /* Iterate over all p2m tables per domain */
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    if ( p2m )
-    {
-        p2m_free_one(p2m);
-        d->arch.p2m = NULL;
-    }
-}
-
-#ifdef CONFIG_HVM
-static void p2m_teardown_nestedp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-
-    for ( i = 0; i < MAX_NESTEDP2M; i++ )
-    {
-        if ( !d->arch.nested_p2m[i] )
-            continue;
-        p2m = d->arch.nested_p2m[i];
-        list_del(&p2m->np2m_list);
-        p2m_free_one(p2m);
-        d->arch.nested_p2m[i] = NULL;
-    }
-}
-
-static int p2m_init_nestedp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-
-    mm_lock_init(&d->arch.nested_p2m_lock);
-    for ( i = 0; i < MAX_NESTEDP2M; i++ )
-    {
-        d->arch.nested_p2m[i] = p2m = p2m_init_one(d);
-        if ( p2m == NULL )
-        {
-            p2m_teardown_nestedp2m(d);
-            return -ENOMEM;
-        }
-        p2m->p2m_class = p2m_nested;
-        p2m->write_p2m_entry_pre = NULL;
-        p2m->write_p2m_entry_post = nestedp2m_write_p2m_entry_post;
-        list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
-    }
-
-    return 0;
-}
-
-static void p2m_teardown_altp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        if ( !d->arch.altp2m_p2m[i] )
-            continue;
-        p2m = d->arch.altp2m_p2m[i];
-        d->arch.altp2m_p2m[i] = NULL;
-        p2m_free_one(p2m);
-    }
-}
-
-static int p2m_init_altp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
-
-    mm_lock_init(&d->arch.altp2m_list_lock);
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
-        if ( p2m == NULL )
-        {
-            p2m_teardown_altp2m(d);
-            return -ENOMEM;
-        }
-        p2m->p2m_class = p2m_alternate;
-        p2m->access_required = hostp2m->access_required;
-        _atomic_set(&p2m->active_vcpus, 0);
-    }
-
-    return 0;
-}
-#endif
-
-int p2m_init(struct domain *d)
-{
-    int rc;
-
-    rc = p2m_init_hostp2m(d);
-    if ( rc || !is_hvm_domain(d) )
-        return rc;
-
-#ifdef CONFIG_HVM
-    /* Must initialise nestedp2m unconditionally
-     * since nestedhvm_enabled(d) returns false here.
-     * (p2m_init runs too early for HVM_PARAM_* options) */
-    rc = p2m_init_nestedp2m(d);
-    if ( rc )
-    {
-        p2m_teardown_hostp2m(d);
-        return rc;
-    }
-
-    rc = p2m_init_altp2m(d);
-    if ( rc )
-    {
-        p2m_teardown_hostp2m(d);
-        p2m_teardown_nestedp2m(d);
-    }
-#endif
-
-    return rc;
-}
-
 int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
                           unsigned long end)
 {
@@ -298,8 +90,6 @@ int p2m_is_logdirty_range(struct p2m_dom
     return 0;
 }
 
-#ifdef CONFIG_HVM
-
 static void change_entry_type_global(struct p2m_domain *p2m,
                                      p2m_type_t ot, p2m_type_t nt)
 {
@@ -751,57 +541,6 @@ int p2m_alloc_table(struct p2m_domain *p
     return 0;
 }
 
-#endif /* CONFIG_HVM */
-
-/*
- * hvm fixme: when adding support for pvh non-hardware domains, this path must
- * cleanup any foreign p2m types (release refcnts on them).
- */
-void p2m_teardown(struct p2m_domain *p2m)
-/* Return all the p2m pages to Xen.
- * We know we don't have any extra mappings to these pages */
-{
-#ifdef CONFIG_HVM
-    struct page_info *pg;
-#endif
-    struct domain *d;
-
-    if (p2m == NULL)
-        return;
-
-    d = p2m->domain;
-
-    p2m_lock(p2m);
-
-    ASSERT(atomic_read(&d->shr_pages) == 0);
-
-#ifdef CONFIG_HVM
-    p2m->phys_table = pagetable_null();
-
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
-#endif
-
-    p2m_unlock(p2m);
-}
-
-void p2m_final_teardown(struct domain *d)
-{
-#ifdef CONFIG_HVM
-    /*
-     * We must teardown both of them unconditionally because
-     * we initialise them unconditionally.
-     */
-    p2m_teardown_altp2m(d);
-    p2m_teardown_nestedp2m(d);
-#endif
-
-    /* Iterate over all p2m tables per domain */
-    p2m_teardown_hostp2m(d);
-}
-
-#ifdef CONFIG_HVM
-
 static int __must_check
 p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                  unsigned int page_order)
--- a/xen/arch/x86/mm/p2m.h
+++ b/xen/arch/x86/mm/p2m.h
@@ -17,12 +17,34 @@
 
 #include <xen/mem_access.h>
 
+struct p2m_domain *p2m_init_one(struct domain *d);
+void p2m_free_one(struct p2m_domain *p2m);
+
 void p2m_pod_init(struct p2m_domain *p2m);
 
 int p2m_add_identity_entry(struct domain *d, unsigned long gfn,
                            p2m_access_t p2ma, unsigned int flag);
 int p2m_remove_identity_entry(struct domain *d, unsigned long gfn);
 
+#ifdef CONFIG_HVM
+int p2m_init_logdirty(struct p2m_domain *p2m);
+void p2m_free_logdirty(struct p2m_domain *p2m);
+#else
+static inline int p2m_init_logdirty(struct p2m_domain *p2m) { return 0; }
+static inline void p2m_free_logdirty(struct p2m_domain *p2m) {}
+#endif
+
+int p2m_init_altp2m(struct domain *d);
+void p2m_teardown_altp2m(struct domain *d);
+
+void p2m_nestedp2m_init(struct p2m_domain *p2m);
+int p2m_init_nestedp2m(struct domain *d);
+void p2m_teardown_nestedp2m(struct domain *d);
+
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
+
 /*
  * Local variables:
  * mode: C
--- /dev/null
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -0,0 +1,207 @@
+/******************************************************************************
+ * arch/x86/mm/p2m-basic.c
+ *
+ * Basic P2M management largely applicable to all domain types.
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/types.h>
+#include <asm/p2m.h>
+#include "mm-locks.h"
+#include "p2m.h"
+
+/* Init the datastructures for later use by the p2m code */
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+{
+    int ret = 0;
+
+    mm_rwlock_init(&p2m->lock);
+#ifdef CONFIG_HVM
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+#endif
+
+    p2m->domain = d;
+    p2m->default_access = p2m_access_rwx;
+    p2m->p2m_class = p2m_host;
+
+    if ( !is_hvm_domain(d) )
+        return 0;
+
+    p2m_pod_init(p2m);
+    p2m_nestedp2m_init(p2m);
+
+    if ( hap_enabled(d) && cpu_has_vmx )
+        ret = ept_p2m_init(p2m);
+    else
+        p2m_pt_init(p2m);
+
+    spin_lock_init(&p2m->ioreq.lock);
+
+    return ret;
+}
+
+struct p2m_domain *p2m_init_one(struct domain *d)
+{
+    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+    if ( !p2m )
+        return NULL;
+
+    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+        goto free_p2m;
+
+    if ( p2m_initialise(d, p2m) )
+        goto free_cpumask;
+    return p2m;
+
+ free_cpumask:
+    free_cpumask_var(p2m->dirty_cpumask);
+ free_p2m:
+    xfree(p2m);
+    return NULL;
+}
+
+void p2m_free_one(struct p2m_domain *p2m)
+{
+    p2m_free_logdirty(p2m);
+    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
+        ept_p2m_uninit(p2m);
+    free_cpumask_var(p2m->dirty_cpumask);
+    xfree(p2m);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_init_one(d);
+    int rc;
+
+    if ( !p2m )
+        return -ENOMEM;
+
+    rc = p2m_init_logdirty(p2m);
+
+    if ( !rc )
+        d->arch.p2m = p2m;
+    else
+        p2m_free_one(p2m);
+
+    return rc;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    /* Iterate over all p2m tables per domain */
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    if ( p2m )
+    {
+        p2m_free_one(p2m);
+        d->arch.p2m = NULL;
+    }
+}
+
+int p2m_init(struct domain *d)
+{
+    int rc;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc || !is_hvm_domain(d) )
+        return rc;
+
+    /*
+     * Must initialise nestedp2m unconditionally
+     * since nestedhvm_enabled(d) returns false here.
+     * (p2m_init runs too early for HVM_PARAM_* options)
+     */
+    rc = p2m_init_nestedp2m(d);
+    if ( rc )
+    {
+        p2m_teardown_hostp2m(d);
+        return rc;
+    }
+
+    rc = p2m_init_altp2m(d);
+    if ( rc )
+    {
+        p2m_teardown_hostp2m(d);
+        p2m_teardown_nestedp2m(d);
+    }
+
+    return rc;
+}
+
+/*
+ * Return all the p2m pages to Xen.
+ * We know we don't have any extra mappings to these pages.
+ *
+ * hvm fixme: when adding support for pvh non-hardware domains, this path must
+ * cleanup any foreign p2m types (release refcnts on them).
+ */
+void p2m_teardown(struct p2m_domain *p2m)
+{
+#ifdef CONFIG_HVM
+    struct page_info *pg;
+#endif
+    struct domain *d;
+
+    if ( !p2m )
+        return;
+
+    d = p2m->domain;
+
+    p2m_lock(p2m);
+
+    ASSERT(atomic_read(&d->shr_pages) == 0);
+
+#ifdef CONFIG_HVM
+    p2m->phys_table = pagetable_null();
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        d->arch.paging.free_page(d, pg);
+#endif
+
+    p2m_unlock(p2m);
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    if ( is_hvm_domain(d) )
+    {
+        /*
+         * We must tear down both of them unconditionally because
+         * we initialise them unconditionally.
+         */
+        p2m_teardown_altp2m(d);
+        p2m_teardown_nestedp2m(d);
+    }
+
+    /* Iterate over all p2m tables per domain */
+    p2m_teardown_hostp2m(d);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -35,6 +35,7 @@
 #include <xen/softirq.h>
 
 #include "mm-locks.h"
+#include "p2m.h"
 
 #define atomic_read_ept_entry(__pepte)                              \
     ( (ept_entry_t) { .epte = read_atomic(&(__pepte)->epte) } )
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -594,15 +594,11 @@ unsigned int vmx_get_cpl(void);
 void vmx_inject_extint(int trap, uint8_t source);
 void vmx_inject_nmi(void);
 
-int ept_p2m_init(struct p2m_domain *p2m);
-void ept_p2m_uninit(struct p2m_domain *p2m);
-
 void ept_walk_table(struct domain *d, unsigned long gfn);
 bool_t ept_handle_misconfig(uint64_t gpa);
 int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order, bool *ipat, p2m_type_t type);
 void setup_ept_dump(void);
-void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
 /* Locate an alternate p2m by its EPTP */
 unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp);
 



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 10/16] x86/P2M: p2m_get_page_from_gfn() is HVM-only
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (9 preceding siblings ...)
  2021-07-05 16:10 ` [PATCH 09/16] x86/P2M: split out init/teardown functions Jan Beulich
@ 2021-07-05 16:10 ` Jan Beulich
  2022-02-14 14:26   ` George Dunlap
  2021-07-05 16:12 ` [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access() Jan Beulich
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This function is the wrong layer to go through for PV guests. It happens
to work, but produces results which aren't fully consistent with
get_page_from_gfn(). The latter function, however, cannot be used in
map_domain_gfn(), as the P2M we mean to act on may not be the host one.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

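---
For context, the non-translated leg of get_page_from_gfn() boils down to
roughly the following (simplified sketch, not the literal inline from
asm-x86/p2m.h: the mfn_valid() check and the p2m_query_t parameter are
omitted, and the helper name is made up for illustration):

    /* Non-translated guests: frame numbers are used 1:1. */
    static struct page_info *get_page_from_gfn_pv(struct domain *d,
                                                  unsigned long gfn,
                                                  p2m_type_t *t)
    {
        struct page_info *page = mfn_to_page(_mfn(gfn));

        if ( t )
            *t = p2m_ram_rw; /* most generic type claimed for such guests */

        return get_page(page, d) ? page : NULL;
    }

This is the behavior map_domain_gfn() now gets for PV, rather than going
through the P2M layer.
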
--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -554,7 +554,9 @@ void *map_domain_gfn(struct p2m_domain *
     }
 
     /* Translate the gfn, unsharing if shared. */
-    page = p2m_get_page_from_gfn(p2m, gfn, &p2mt, NULL, q);
+    page = paging_mode_translate(p2m->domain)
+           ? p2m_get_page_from_gfn(p2m, gfn, &p2mt, NULL, q)
+           : get_page_from_gfn(p2m->domain, gfn_x(gfn), &p2mt, q);
     if ( p2m_is_paging(p2mt) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -357,6 +357,8 @@ void __put_gfn(struct p2m_domain *p2m, u
     gfn_unlock(p2m, gfn, 0);
 }
 
+#ifdef CONFIG_HVM
+
 /* Atomically look up a GFN and take a reference count on the backing page. */
 struct page_info *p2m_get_page_from_gfn(
     struct p2m_domain *p2m, gfn_t gfn,
@@ -422,8 +424,6 @@ struct page_info *p2m_get_page_from_gfn(
     return page;
 }
 
-#ifdef CONFIG_HVM
-
 /* Returns: 0 for success, -errno for failure */
 int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (10 preceding siblings ...)
  2021-07-05 16:10 ` [PATCH 10/16] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
@ 2021-07-05 16:12 ` Jan Beulich
  2022-02-14 15:12   ` George Dunlap
  2021-07-05 16:12 ` [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:12 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Tamas K Lengyel, Alexandru Isaila,
	Petre Pircalabu

Introduce an inline wrapper dealing with the non-translated-domain case,
while stripping that logic from the main function, which gets renamed to
p2m_get_gfn_type_access(). HVM-only callers can then directly use the
main function.

Along with renaming the main function, also make its and the new inline
helper's GFN parameters type-safe.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1711,7 +1711,7 @@ static void svm_do_nested_pgfault(struct
         } _d;
 
         p2m = p2m_get_p2m(v);
-        mfn = __get_gfn_type_access(p2m, gfn, &p2mt, &p2ma, 0, NULL, 0);
+        mfn = p2m_get_gfn_type_access(p2m, _gfn(gfn), &p2mt, &p2ma, 0, NULL, 0);
 
         _d.gpa = gpa;
         _d.qualification = 0;
@@ -1736,7 +1736,7 @@ static void svm_do_nested_pgfault(struct
     if ( p2m == NULL )
     {
         p2m = p2m_get_p2m(v);
-        mfn = __get_gfn_type_access(p2m, gfn, &p2mt, &p2ma, 0, NULL, 0);
+        mfn = p2m_get_gfn_type_access(p2m, _gfn(gfn), &p2mt, &p2ma, 0, NULL, 0);
     }
     gdprintk(XENLOG_ERR,
          "SVM violation gpa %#"PRIpaddr", mfn %#lx, type %i\n",
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -299,8 +299,9 @@ static int set_mem_access(struct domain
     {
         p2m_access_t _a;
         p2m_type_t t;
-        mfn_t mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &t, &_a,
-                                          P2M_ALLOC, NULL, false);
+        mfn_t mfn = p2m_get_gfn_type_access(p2m, gfn, &t, &_a,
+                                            P2M_ALLOC, NULL, false);
+
         rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1);
     }
 
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -478,12 +478,12 @@ do {
 #undef assign_pointers
 
     /* Now do the gets. */
-    *first_mfn  = __get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
-                                        gfn_x(rval->first_gfn), first_t,
-                                        first_a, q, NULL, lock);
-    *second_mfn = __get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
-                                        gfn_x(rval->second_gfn), second_t,
-                                        second_a, q, NULL, lock);
+    *first_mfn  = p2m_get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
+                                          rval->first_gfn, first_t,
+                                          first_a, q, NULL, lock);
+    *second_mfn = p2m_get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
+                                          rval->second_gfn, second_t,
+                                          second_a, q, NULL, lock);
 }
 
 static void put_two_gfns(const struct two_gfns *arg)
@@ -936,8 +936,8 @@ static int nominate_page(struct domain *
             if ( !ap2m )
                 continue;
 
-            amfn = __get_gfn_type_access(ap2m, gfn_x(gfn), &ap2mt, &ap2ma,
-                                         0, NULL, false);
+            amfn = p2m_get_gfn_type_access(ap2m, gfn, &ap2mt, &ap2ma,
+                                           0, NULL, false);
             if ( mfn_valid(amfn) && (!mfn_eq(amfn, mfn) || ap2ma != p2ma) )
             {
                 altp2m_list_unlock(d);
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -286,25 +286,13 @@ void p2m_unlock_and_tlb_flush(struct p2m
         mm_write_unlock(&p2m->lock);
 }
 
-mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn_l,
-                    p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
-                    unsigned int *page_order, bool_t locked)
-{
 #ifdef CONFIG_HVM
-    mfn_t mfn;
-    gfn_t gfn = _gfn(gfn_l);
 
-    if ( !p2m || !paging_mode_translate(p2m->domain) )
-    {
-#endif
-        /*
-         * Not necessarily true, but for non-translated guests we claim
-         * it's the most generic kind of memory.
-         */
-        *t = p2m_ram_rw;
-        return _mfn(gfn_l);
-#ifdef CONFIG_HVM
-    }
+mfn_t p2m_get_gfn_type_access(struct p2m_domain *p2m, gfn_t gfn,
+                              p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
+                              unsigned int *page_order, bool_t locked)
+{
+    mfn_t mfn;
 
     /* Unshare makes no sense without populate. */
     if ( q & P2M_UNSHARE )
@@ -329,8 +317,8 @@ mfn_t __get_gfn_type_access(struct p2m_d
          * Try to unshare. If we fail, communicate ENOMEM without
          * sleeping.
          */
-        if ( mem_sharing_unshare_page(p2m->domain, gfn_l) < 0 )
-            mem_sharing_notify_enomem(p2m->domain, gfn_l, false);
+        if ( mem_sharing_unshare_page(p2m->domain, gfn_x(gfn)) < 0 )
+            mem_sharing_notify_enomem(p2m->domain, gfn_x(gfn), false);
         mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
     }
 
@@ -343,9 +331,10 @@ mfn_t __get_gfn_type_access(struct p2m_d
     }
 
     return mfn;
-#endif
 }
 
+#endif /* CONFIG_HVM */
+
 void __put_gfn(struct p2m_domain *p2m, unsigned long gfn)
 {
     if ( !p2m || !paging_mode_translate(p2m->domain) )
@@ -377,7 +366,7 @@ struct page_info *p2m_get_page_from_gfn(
     {
         /* Fast path: look up and get out */
         p2m_read_lock(p2m);
-        mfn = __get_gfn_type_access(p2m, gfn_x(gfn), t, a, 0, NULL, 0);
+        mfn = p2m_get_gfn_type_access(p2m, gfn, t, a, 0, NULL, 0);
         if ( p2m_is_any_ram(*t) && mfn_valid(mfn)
              && !((q & P2M_UNSHARE) && p2m_is_shared(*t)) )
         {
@@ -1666,8 +1655,8 @@ int altp2m_get_effective_entry(struct p2
         unsigned int page_order;
         int rc;
 
-        *mfn = __get_gfn_type_access(hp2m, gfn_x(gfn), t, a,
-                                     P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
+        *mfn = p2m_get_gfn_type_access(hp2m, gfn, t, a, P2M_ALLOC | P2M_UNSHARE,
+                                       &page_order, 0);
 
         rc = -ESRCH;
         if ( !mfn_valid(*mfn) || *t != p2m_ram_rw )
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -453,10 +453,27 @@ void p2m_unlock_and_tlb_flush(struct p2m
  * After calling any of the variants below, caller needs to use
  * put_gfn. ****/
 
-mfn_t __nonnull(3, 4) __get_gfn_type_access(
-    struct p2m_domain *p2m, unsigned long gfn, p2m_type_t *t,
+mfn_t __nonnull(3, 4) p2m_get_gfn_type_access(
+    struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
     p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool_t locked);
 
+static inline mfn_t __nonnull(3, 4) _get_gfn_type_access(
+    struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
+    p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool_t locked)
+{
+    if ( !p2m || !paging_mode_translate(p2m->domain) )
+    {
+        /*
+         * Not necessarily true, but for non-translated guests we claim
+         * it's the most generic kind of memory.
+         */
+        *t = p2m_ram_rw;
+        return _mfn(gfn_x(gfn));
+    }
+
+    return p2m_get_gfn_type_access(p2m, gfn, t, a, q, page_order, locked);
+}
+
 /* Read a particular P2M table, mapping pages as we go.  Most callers
  * should _not_ call this directly; use the other get_gfn* functions
  * below unless you know you want to walk a p2m that isn't a domain's
@@ -468,7 +485,7 @@ static inline mfn_t __nonnull(3, 4) get_
     struct p2m_domain *p2m, unsigned long gfn, p2m_type_t *t,
     p2m_access_t *a, p2m_query_t q, unsigned int *page_order)
 {
-    return __get_gfn_type_access(p2m, gfn, t, a, q, page_order, true);
+    return _get_gfn_type_access(p2m, _gfn(gfn), t, a, q, page_order, true);
 }
 
 /* General conversion function from gfn to mfn */
@@ -509,7 +526,8 @@ static inline mfn_t get_gfn_query_unlock
                                            p2m_type_t *t)
 {
     p2m_access_t a;
-    return __get_gfn_type_access(p2m_get_hostp2m(d), gfn, t, &a, 0, NULL, 0);
+    return _get_gfn_type_access(p2m_get_hostp2m(d), _gfn(gfn), t, &a, 0,
+                                NULL, 0);
 }
 
 /* Atomically look up a GFN and take a reference count on the backing page.



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn()
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (11 preceding siblings ...)
  2021-07-05 16:12 ` [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access() Jan Beulich
@ 2021-07-05 16:12 ` Jan Beulich
  2022-02-14 15:17   ` George Dunlap
  2021-07-05 16:13 ` [PATCH 13/16] shr_pages field is MEM_SHARING-only Jan Beulich
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

All explicit callers of __put_gfn() are in HVM-only code and hold a valid
P2M pointer in their hands. Move the paging_mode_translate() check out of
there into put_gfn(), renaming __put_gfn() and making its GFN parameter
type-safe.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1969,9 +1969,9 @@ int hvm_hap_nested_page_fault(paddr_t gp
              * altp2m_list lock.
              */
             if ( p2m != hostp2m )
-                __put_gfn(p2m, gfn);
+                p2m_put_gfn(p2m, _gfn(gfn));
             p2m_change_type_one(currd, gfn, p2m_ram_logdirty, p2m_ram_rw);
-            __put_gfn(hostp2m, gfn);
+            p2m_put_gfn(hostp2m, _gfn(gfn));
 
             goto out;
         }
@@ -1993,8 +1993,8 @@ int hvm_hap_nested_page_fault(paddr_t gp
 
  out_put_gfn:
     if ( p2m != hostp2m )
-        __put_gfn(p2m, gfn);
-    __put_gfn(hostp2m, gfn);
+        p2m_put_gfn(p2m, _gfn(gfn));
+    p2m_put_gfn(hostp2m, _gfn(gfn));
  out:
     /*
      * All of these are delayed until we exit, since we might
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -167,7 +167,7 @@ nestedhap_walk_L0_p2m(struct p2m_domain
 direct_mmio_out:
     *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
 out:
-    __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
+    p2m_put_gfn(p2m, gaddr_to_gfn(L1_gpa));
     return rc;
 }
 
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -333,21 +333,13 @@ mfn_t p2m_get_gfn_type_access(struct p2m
     return mfn;
 }
 
-#endif /* CONFIG_HVM */
-
-void __put_gfn(struct p2m_domain *p2m, unsigned long gfn)
+void p2m_put_gfn(struct p2m_domain *p2m, gfn_t gfn)
 {
-    if ( !p2m || !paging_mode_translate(p2m->domain) )
-        /* Nothing to do in this case */
-        return;
-
-    ASSERT(gfn_locked_by_me(p2m, gfn));
+    ASSERT(gfn_locked_by_me(p2m, gfn_x(gfn)));
 
-    gfn_unlock(p2m, gfn, 0);
+    gfn_unlock(p2m, gfn_x(gfn), 0);
 }
 
-#ifdef CONFIG_HVM
-
 /* Atomically look up a GFN and take a reference count on the backing page. */
 struct page_info *p2m_get_page_from_gfn(
     struct p2m_domain *p2m, gfn_t gfn,
@@ -2086,7 +2078,7 @@ int p2m_altp2m_propagate_change(struct d
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                __put_gfn(p2m, gfn_x(gfn));
+                p2m_put_gfn(p2m, gfn);
 
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
@@ -2110,7 +2102,7 @@ int p2m_altp2m_propagate_change(struct d
                 ret = rc;
         }
 
-        __put_gfn(p2m, gfn_x(gfn));
+        p2m_put_gfn(p2m, gfn);
     }
 
     altp2m_list_unlock(d);
@@ -2195,7 +2187,7 @@ void audit_p2m(struct domain *d,
              * blow away the m2p entry. */
             set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
         }
-        __put_gfn(p2m, gfn);
+        p2m_put_gfn(p2m, _gfn(gfn));
 
         P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n",
                        mfn, gfn, mfn_x(p2mfn));
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -503,9 +503,16 @@ static inline mfn_t __nonnull(3) get_gfn
                                               P2M_ALLOC | P2M_UNSHARE)
 
 /* Will release the p2m_lock for this gfn entry. */
-void __put_gfn(struct p2m_domain *p2m, unsigned long gfn);
+void p2m_put_gfn(struct p2m_domain *p2m, gfn_t gfn);
 
-#define put_gfn(d, gfn) __put_gfn(p2m_get_hostp2m((d)), (gfn))
+static inline void put_gfn(struct domain *d, unsigned long gfn)
+{
+    if ( !paging_mode_translate(d) )
+        /* Nothing to do in this case */
+        return;
+
+    p2m_put_gfn(p2m_get_hostp2m(d), _gfn(gfn));
+}
 
 /* The intent of the "unlocked" accessor is to have the caller not worry about
  * put_gfn. They apply to very specific situations: debug printk's, dumps 



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 13/16] shr_pages field is MEM_SHARING-only
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (12 preceding siblings ...)
  2021-07-05 16:12 ` [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
@ 2021-07-05 16:13 ` Jan Beulich
  2021-07-06 12:42   ` Tamas K Lengyel
  2022-02-14 15:36   ` George Dunlap
  2021-07-05 16:14 ` [PATCH 14/16] paged_pages field is MEM_PAGING-only Jan Beulich
                   ` (2 subsequent siblings)
  16 siblings, 2 replies; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monné,
	Tamas K Lengyel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu

Conditionalize it and its uses accordingly. The main goal though is to
demonstrate that x86's p2m_teardown() is now empty when !HVM, which in
particular means the last remaining use of p2m_lock() in this case goes
away.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was on the edge of introducing a helper for atomic_read(&d->shr_pages)
but decided against because of dump_domains() not being able to use it
sensibly (I really want to omit the output field altogether there when
!MEM_SHARING).
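
For reference, the helper I had in mind would have looked something like
this (sketch only, not part of the patch):

    static inline unsigned int domain_shr_pages(const struct domain *d)
    {
    #ifdef CONFIG_MEM_SHARING
        return atomic_read(&d->shr_pages);
    #else
        return 0;
    #endif
    }

getdomaininfo() could use it as is, but dump_domains() couldn't, short
of always printing a (then meaningless) shared_pages=0.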

--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -159,7 +159,6 @@ void p2m_teardown(struct p2m_domain *p2m
 {
 #ifdef CONFIG_HVM
     struct page_info *pg;
-#endif
     struct domain *d;
 
     if ( !p2m )
@@ -169,16 +168,17 @@ void p2m_teardown(struct p2m_domain *p2m
 
     p2m_lock(p2m);
 
+#ifdef CONFIG_MEM_SHARING
     ASSERT(atomic_read(&d->shr_pages) == 0);
+#endif
 
-#ifdef CONFIG_HVM
     p2m->phys_table = pagetable_null();
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
         d->arch.paging.free_page(d, pg);
-#endif
 
     p2m_unlock(p2m);
+#endif
 }
 
 void p2m_final_teardown(struct domain *d)
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -109,7 +109,11 @@ void getdomaininfo(struct domain *d, str
     info->tot_pages         = domain_tot_pages(d);
     info->max_pages         = d->max_pages;
     info->outstanding_pages = d->outstanding_pages;
+#ifdef CONFIG_MEM_SHARING
     info->shr_pages         = atomic_read(&d->shr_pages);
+#else
+    info->shr_pages         = 0;
+#endif
     info->paged_pages       = atomic_read(&d->paged_pages);
     info->shared_info_frame =
         gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -274,9 +274,16 @@ static void dump_domains(unsigned char k
         printk("    refcnt=%d dying=%d pause_count=%d\n",
                atomic_read(&d->refcnt), d->is_dying,
                atomic_read(&d->pause_count));
-        printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
-               "dirty_cpus={%*pbl} max_pages=%u\n",
-               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
+        printk("    nr_pages=%u xenheap_pages=%u"
+#ifdef CONFIG_MEM_SHARING
+               " shared_pages=%u"
+#endif
+               " paged_pages=%u"
+               " dirty_cpus={%*pbl} max_pages=%u\n",
+               domain_tot_pages(d), d->xenheap_pages,
+#ifdef CONFIG_MEM_SHARING
+               atomic_read(&d->shr_pages),
+#endif
                atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
                d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -385,7 +385,11 @@ struct domain
     unsigned int     outstanding_pages; /* pages claimed but not possessed */
     unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
     unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
+
+#ifdef CONFIG_MEM_SHARING
     atomic_t         shr_pages;         /* shared pages */
+#endif
+
     atomic_t         paged_pages;       /* paged-out pages */
 
     /* Scheduling. */



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 14/16] paged_pages field is MEM_PAGING-only
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (13 preceding siblings ...)
  2021-07-05 16:13 ` [PATCH 13/16] shr_pages field is MEM_SHARING-only Jan Beulich
@ 2021-07-05 16:14 ` Jan Beulich
  2021-07-06 12:44   ` Tamas K Lengyel
  2022-02-14 15:38   ` George Dunlap
  2021-07-05 16:14 ` [PATCH 15/16] x86/P2M: p2m.c is HVM-only Jan Beulich
  2021-07-05 16:15 ` [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
  16 siblings, 2 replies; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monné,
	Tamas K Lengyel, Andrew Cooper, George Dunlap, Ian Jackson,
	Julien Grall, Stefano Stabellini, Wei Liu

Conditionalize it and its uses accordingly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I was on the edge of introducing a helper for
atomic_read(&d->paged_pages) but decided against because of
dump_domains() not being able to use it sensibly (I really want to omit
the output field altogether there when !MEM_PAGING).

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1213,6 +1213,7 @@ int add_to_physmap(struct domain *sd, un
     }
     else
     {
+#ifdef CONFIG_MEM_PAGING
         /*
          * There is a chance we're plugging a hole where a paged out
          * page was.
@@ -1238,6 +1239,7 @@ int add_to_physmap(struct domain *sd, un
                 put_page(cpage);
             }
         }
+#endif
     }
 
     atomic_inc(&nr_saved_mfns);
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -666,11 +666,13 @@ p2m_add_page(struct domain *d, gfn_t gfn
             /* Count how many PoD entries we'll be replacing if successful */
             pod_count++;
         }
+#ifdef CONFIG_MEM_PAGING
         else if ( p2m_is_paging(ot) && (ot != p2m_ram_paging_out) )
         {
             /* We're plugging a hole in the physmap where a paged out page was */
             atomic_dec(&d->paged_pages);
         }
+#endif
     }
 
     /* Then, look for m->p mappings for this range and deal with them */
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -114,7 +114,11 @@ void getdomaininfo(struct domain *d, str
 #else
     info->shr_pages         = 0;
 #endif
+#ifdef CONFIG_MEM_PAGING
     info->paged_pages       = atomic_read(&d->paged_pages);
+#else
+    info->paged_pages       = 0;
+#endif
     info->shared_info_frame =
         gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -278,14 +278,18 @@ static void dump_domains(unsigned char k
 #ifdef CONFIG_MEM_SHARING
                " shared_pages=%u"
 #endif
+#ifdef CONFIG_MEM_PAGING
                " paged_pages=%u"
+#endif
                " dirty_cpus={%*pbl} max_pages=%u\n",
                domain_tot_pages(d), d->xenheap_pages,
 #ifdef CONFIG_MEM_SHARING
                atomic_read(&d->shr_pages),
 #endif
-               atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
-               d->max_pages);
+#ifdef CONFIG_MEM_PAGING
+               atomic_read(&d->paged_pages),
+#endif
+               CPUMASK_PR(d->dirty_cpumask), d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
                "%02x%02x-%02x%02x%02x%02x%02x%02x vm_assist=%08lx\n",
                d->handle[ 0], d->handle[ 1], d->handle[ 2], d->handle[ 3],
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -390,7 +390,9 @@ struct domain
     atomic_t         shr_pages;         /* shared pages */
 #endif
 
+#ifdef CONFIG_MEM_PAGING
     atomic_t         paged_pages;       /* paged-out pages */
+#endif
 
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 15/16] x86/P2M: p2m.c is HVM-only
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (14 preceding siblings ...)
  2021-07-05 16:14 ` [PATCH 14/16] paged_pages field is MEM_PAGING-only Jan Beulich
@ 2021-07-05 16:14 ` Jan Beulich
  2022-02-14 15:39   ` George Dunlap
  2021-07-05 16:15 ` [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
  16 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:14 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This only requires moving p2m_percpu_rwlock elsewhere (ultimately I
think all P2M locking should go away as well when !HVM, but this looks
to require further code juggling). The two other unguarded functions are
already unneeded (by virtue of DCE) when !HVM.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-$(CONFIG_MEM_PAGING) += mem_paging.o
 obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-$(CONFIG_HVM) += nested.o
-obj-y += p2m.o
+obj-$(CONFIG_HVM) += p2m.o
 obj-y += p2m-basic.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -53,10 +53,6 @@ bool_t __initdata opt_hap_1gb = 1, __ini
 boolean_param("hap_1gb", opt_hap_1gb);
 boolean_param("hap_2mb", opt_hap_2mb);
 
-DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
-
-#ifdef CONFIG_HVM
-
 int p2m_init_logdirty(struct p2m_domain *p2m)
 {
     if ( p2m->logdirty_ranges )
@@ -258,8 +254,6 @@ void p2m_flush_hardware_cached_dirty(str
     }
 }
 
-#endif /* CONFIG_HVM */
-
 /*
  * Force a synchronous P2M TLB flush if a deferred flush is pending.
  *
@@ -286,8 +280,6 @@ void p2m_unlock_and_tlb_flush(struct p2m
         mm_write_unlock(&p2m->lock);
 }
 
-#ifdef CONFIG_HVM
-
 mfn_t p2m_get_gfn_type_access(struct p2m_domain *p2m, gfn_t gfn,
                               p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                               unsigned int *page_order, bool_t locked)
@@ -2589,8 +2581,6 @@ int p2m_set_altp2m_view_visibility(struc
     return rc;
 }
 
-#endif /* CONFIG_HVM */
-
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -28,6 +28,8 @@
 #include "mm-locks.h"
 #include "p2m.h"
 
+DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
+
 /* Init the datastructures for later use by the p2m code */
 static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {



^ permalink raw reply	[flat|nested] 50+ messages in thread

* [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only
  2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (15 preceding siblings ...)
  2021-07-05 16:14 ` [PATCH 15/16] x86/P2M: p2m.c is HVM-only Jan Beulich
@ 2021-07-05 16:15 ` Jan Beulich
  2021-07-05 17:49   ` Paul Durrant
  2022-02-14 15:51   ` George Dunlap
  16 siblings, 2 replies; 50+ messages in thread
From: Jan Beulich @ 2021-07-05 16:15 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Paul Durrant

..., as are the majority of the locks involved. Conditionalize things
accordingly.

Also adjust the ioreq field's indentation on this occasion.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -481,8 +481,11 @@ unsigned int page_get_ram_type(mfn_t mfn
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
+#ifdef CONFIG_HVM
     if ( is_hvm_domain(d) )
         return p2m_get_hostp2m(d)->max_mapped_pfn;
+#endif
+
     /* NB. PV guests specify nr_pfns rather than max_pfn so we adjust here. */
     return (arch_get_max_pfn(d) ?: 1) - 1;
 }
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -237,6 +237,8 @@ static inline void mm_enforce_order_unlo
  *                                                                      *
  ************************************************************************/
 
+#ifdef CONFIG_HVM
+
 /* Nested P2M lock (per-domain)
  *
  * A per-domain lock that protects the mapping from nested-CR3 to
@@ -354,6 +356,8 @@ declare_mm_lock(pod)
 #define pod_unlock(p)         mm_unlock(&(p)->pod.lock)
 #define pod_locked_by_me(p)   mm_locked_by_me(&(p)->pod.lock)
 
+#endif /* CONFIG_HVM */
+
 /* Page alloc lock (per-domain)
  *
  * This is an external lock, not represented by an mm_lock_t. However,
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -48,6 +48,8 @@
 #undef virt_to_mfn
 #define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
 
+DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
+
 /* Turn on/off host superpage page table support for hap, default on. */
 bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
 boolean_param("hap_1gb", opt_hap_1gb);
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -28,16 +28,15 @@
 #include "mm-locks.h"
 #include "p2m.h"
 
-DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
-
 /* Init the datastructures for later use by the p2m code */
 static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
     int ret = 0;
 
-    mm_rwlock_init(&p2m->lock);
 #ifdef CONFIG_HVM
+    mm_rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    spin_lock_init(&p2m->ioreq.lock);
 #endif
 
     p2m->domain = d;
@@ -55,8 +54,6 @@ static int p2m_initialise(struct domain
     else
         p2m_pt_init(p2m);
 
-    spin_lock_init(&p2m->ioreq.lock);
-
     return ret;
 }
 
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -338,7 +338,7 @@ bool arch_iommu_use_permitted(const stru
     return d == dom_io ||
            (likely(!mem_sharing_enabled(d)) &&
             likely(!mem_paging_enabled(d)) &&
-            likely(!p2m_get_hostp2m(d)->global_logdirty));
+            likely(!p2m_is_global_logdirty(d)));
 }
 
 /*
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -199,8 +199,10 @@ typedef enum {
 
 /* Per-p2m-table state */
 struct p2m_domain {
+#ifdef CONFIG_HVM
     /* Lock that protects updates to the p2m */
     mm_rwlock_t           lock;
+#endif
 
     /*
      * Same as a domain's dirty_cpumask but limited to
@@ -220,13 +222,14 @@ struct p2m_domain {
      */
     p2m_access_t default_access;
 
+#ifdef CONFIG_HVM
+
     /* Host p2m: Log-dirty ranges registered for the domain. */
     struct rangeset   *logdirty_ranges;
 
     /* Host p2m: Global log-dirty mode enabled for the domain. */
     bool               global_logdirty;
 
-#ifdef CONFIG_HVM
     /* Translated domain: p2m mapping */
     pagetable_t        phys_table;
 
@@ -269,7 +272,6 @@ struct p2m_domain {
                                               unsigned int level);
     void               (*write_p2m_entry_post)(struct p2m_domain *p2m,
                                                unsigned int oflags);
-#endif
 #if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
 #endif
@@ -304,7 +306,6 @@ struct p2m_domain {
     unsigned long min_remapped_gfn;
     unsigned long max_remapped_gfn;
 
-#ifdef CONFIG_HVM
     /* Populate-on-demand variables
      * All variables are protected with the pod lock. We cannot rely on
      * the p2m lock if it's turned into a fine-grained lock.
@@ -361,27 +362,27 @@ struct p2m_domain {
      * threaded on in LRU order.
      */
     struct list_head   np2m_list;
-#endif
 
     union {
         struct ept_data ept;
         /* NPT-equivalent structure could be added here. */
     };
 
-     struct {
-         spinlock_t lock;
-         /*
-          * ioreq server who's responsible for the emulation of
-          * gfns with specific p2m type(for now, p2m_ioreq_server).
-          */
-         struct ioreq_server *server;
-         /*
-          * flags specifies whether read, write or both operations
-          * are to be emulated by an ioreq server.
-          */
-         unsigned int flags;
-         unsigned long entry_count;
-     } ioreq;
+    struct {
+        spinlock_t lock;
+        /*
+         * ioreq server who's responsible for the emulation of
+         * gfns with specific p2m type(for now, p2m_ioreq_server).
+         */
+        struct ioreq_server *server;
+        /*
+         * flags specifies whether read, write or both operations
+         * are to be emulated by an ioreq server.
+         */
+        unsigned int flags;
+        unsigned long entry_count;
+    } ioreq;
+#endif /* CONFIG_HVM */
 };
 
 /* get host p2m table */
@@ -645,6 +646,15 @@ int p2m_finish_type_change(struct domain
                            gfn_t first_gfn,
                            unsigned long max_nr);
 
+static inline bool p2m_is_global_logdirty(const struct domain *d)
+{
+#ifdef CONFIG_HVM
+    return p2m_get_hostp2m(d)->global_logdirty;
+#else
+    return false;
+#endif
+}
+
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
@@ -792,6 +802,8 @@ extern void audit_p2m(struct domain *d,
 #define P2M_DEBUG(f, a...) do { (void)(f); } while(0)
 #endif
 
+#ifdef CONFIG_HVM
+
 /*
  * Functions specific to the p2m-pt implementation
  */
@@ -852,7 +864,7 @@ void nestedp2m_write_p2m_entry_post(stru
 /*
  * Alternate p2m: shadow p2m tables used for alternate memory views
  */
-#ifdef CONFIG_HVM
+
 /* get current alternate p2m table */
 static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 {
@@ -905,10 +917,10 @@ int p2m_altp2m_propagate_change(struct d
 /* Set a specific p2m view visibility */
 int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int idx,
                                    uint8_t visible);
-#else
+#else /* CONFIG_HVM */
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {}
-#endif
+#endif /* CONFIG_HVM */
 
 /*
  * p2m type to IOMMU flags
@@ -942,6 +954,8 @@ static inline unsigned int p2m_get_iommu
     return flags;
 }
 
+#ifdef CONFIG_HVM
+
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
                          struct ioreq_server *s);
 struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
@@ -1006,6 +1020,8 @@ static inline int p2m_entry_modify(struc
     return 0;
 }
 
+#endif /* CONFIG_HVM */
+
 #endif /* _XEN_ASM_X86_P2M_H */
 
 /*



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page()
  2021-07-05 16:06 ` [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
@ 2021-07-05 17:47   ` Paul Durrant
  2021-07-06  7:05     ` Jan Beulich
  2022-02-04 22:07   ` George Dunlap
  1 sibling, 1 reply; 50+ messages in thread
From: Paul Durrant @ 2021-07-05 17:47 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

On 05/07/2021 17:06, Jan Beulich wrote:
> p2m_add_page() is simply a rename from guest_physmap_add_entry().
> p2m_remove_page() then is its counterpart, despite rendering
> guest_physmap_remove_page().

Did some words get dropped in that second sentence? I can't really 
understand what it is saying.

> This way callers can use suitable pairs of
> functions (previously violated by hvm/grant_table.c).
> 
> In HVM-specific code further avoid going through the guest_physmap_*()
> layer, and instead use the two new/renamed functions directly.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

The code looks fine so...

Reviewed-by: Paul Durrant <paul@xen.org>


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only
  2021-07-05 16:15 ` [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
@ 2021-07-05 17:49   ` Paul Durrant
  2022-02-14 15:51   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: Paul Durrant @ 2021-07-05 17:49 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

On 05/07/2021 17:15, Jan Beulich wrote:
> ..., as are the majority of the locks involved. Conditionalize things
> accordingly.
> 
> Also adjust the ioreq field's indentation on this occasion.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Paul Durrant <paul@xen.org>


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page()
  2021-07-05 17:47   ` Paul Durrant
@ 2021-07-06  7:05     ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2021-07-06  7:05 UTC (permalink / raw)
  To: paul
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap, xen-devel

On 05.07.2021 19:47, Paul Durrant wrote:
> On 05/07/2021 17:06, Jan Beulich wrote:
>> p2m_add_page() is simply a rename from guest_physmap_add_entry().
>> p2m_remove_page() then is its counterpart, despite rendering
>> guest_physmap_remove_page().
> 
> Did some words get dropped in that second sentence? I can't really 
> understand what it is saying.

Oops - this was meant to be "...  a trivial wrapper".

>> This way callers can use suitable pairs of
>> functions (previously violated by hvm/grant_table.c).
>>
>> In HVM-specific code further avoid going through the guest_physmap_*()
>> layer, and instead use the two new/renamed functions directly.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
> 
> The code looks fine so...
> 
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks.

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 13/16] shr_pages field is MEM_SHARING-only
  2021-07-05 16:13 ` [PATCH 13/16] shr_pages field is MEM_SHARING-only Jan Beulich
@ 2021-07-06 12:42   ` Tamas K Lengyel
  2022-02-14 15:36   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: Tamas K Lengyel @ 2021-07-06 12:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Roger Pau Monné,
	Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu

On Mon, Jul 5, 2021 at 12:13 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> Conditionalize it and its uses accordingly. The main goal though is to
> demonstrate that x86's p2m_teardown() is now empty when !HVM, which in
> particular means the last remaining use of p2m_lock() in this case goes
> away.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 14/16] paged_pages field is MEM_PAGING-only
  2021-07-05 16:14 ` [PATCH 14/16] paged_pages field is MEM_PAGING-only Jan Beulich
@ 2021-07-06 12:44   ` Tamas K Lengyel
  2022-02-14 15:38   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: Tamas K Lengyel @ 2021-07-06 12:44 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Roger Pau Monné,
	Andrew Cooper, George Dunlap, Ian Jackson, Julien Grall,
	Stefano Stabellini, Wei Liu

On Mon, Jul 5, 2021 at 12:14 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> Conditionalize it and its uses accordingly.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

For the mem_sharing bits:
Acked-by: Tamas K Lengyel <tamas@tklengyel.com>

The rest also looks fine to me, so you can consider having an R-b as
well for those bits.


^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
  2021-07-05 16:09 ` [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
@ 2021-07-07  1:35   ` Tian, Kevin
  2022-02-05 21:17   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: Tian, Kevin @ 2021-07-07  1:35 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Cooper, Andrew, Wei Liu, Roger Pau Monné, George Dunlap

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, July 6, 2021 12:09 AM
> 
> This also includes the two p2m related fields.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -94,7 +94,9 @@ static int p2m_initialise(struct domain
>      int ret = 0;
> 
>      mm_rwlock_init(&p2m->lock);
> +#ifdef CONFIG_HVM
>      INIT_PAGE_LIST_HEAD(&p2m->pages);
> +#endif
> 
>      p2m->domain = d;
>      p2m->default_access = p2m_access_rwx;
> @@ -628,6 +630,7 @@ struct page_info *p2m_get_page_from_gfn(
>  }
> 
>  #ifdef CONFIG_HVM
> +
>  /* Returns: 0 for success, -errno for failure */
>  int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
>                    unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
> @@ -667,7 +670,6 @@ int p2m_set_entry(struct p2m_domain *p2m
> 
>      return rc;
>  }
> -#endif
> 
>  mfn_t p2m_alloc_ptp(struct p2m_domain *p2m, unsigned int level)
>  {
> @@ -746,6 +748,8 @@ int p2m_alloc_table(struct p2m_domain *p
>      return 0;
>  }
> 
> +#endif /* CONFIG_HVM */
> +
>  /*
>   * hvm fixme: when adding support for pvh non-hardware domains, this
> path must
>   * cleanup any foreign p2m types (release refcnts on them).
> @@ -754,7 +758,9 @@ void p2m_teardown(struct p2m_domain *p2m
>  /* Return all the p2m pages to Xen.
>   * We know we don't have any extra mappings to these pages */
>  {
> +#ifdef CONFIG_HVM
>      struct page_info *pg;
> +#endif
>      struct domain *d;
> 
>      if (p2m == NULL)
> @@ -763,11 +769,16 @@ void p2m_teardown(struct p2m_domain *p2m
>      d = p2m->domain;
> 
>      p2m_lock(p2m);
> +
>      ASSERT(atomic_read(&d->shr_pages) == 0);
> +
> +#ifdef CONFIG_HVM
>      p2m->phys_table = pagetable_null();
> 
>      while ( (pg = page_list_remove_head(&p2m->pages)) )
>          d->arch.paging.free_page(d, pg);
> +#endif
> +
>      p2m_unlock(p2m);
>  }
> 
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -2700,8 +2700,10 @@ int shadow_enable(struct domain *d, u32
>   out_locked:
>      paging_unlock(d);
>   out_unlocked:
> +#ifdef CONFIG_HVM
>      if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
>          p2m_teardown(p2m);
> +#endif
>      if ( rv != 0 && pg != NULL )
>      {
>          pg->count_info &= ~PGC_count_mask;
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -339,12 +339,14 @@ static uint64_t domain_pgd_maddr(struct
> 
>      ASSERT(spin_is_locked(&hd->arch.mapping_lock));
> 
> +#ifdef CONFIG_HVM
>      if ( iommu_use_hap_pt(d) )
>      {
>          pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
> 
>          return pagetable_get_paddr(pgt);
>      }
> +#endif
> 
>      if ( !hd->arch.vtd.pgd_maddr )
>      {
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -202,9 +202,6 @@ struct p2m_domain {
>      /* Lock that protects updates to the p2m */
>      mm_rwlock_t           lock;
> 
> -    /* Shadow translated domain: p2m mapping */
> -    pagetable_t        phys_table;
> -
>      /*
>       * Same as a domain's dirty_cpumask but limited to
>       * this p2m and those physical cpus whose vcpu's are in
> @@ -223,9 +220,6 @@ struct p2m_domain {
>       */
>      p2m_access_t default_access;
> 
> -    /* Pages used to construct the p2m */
> -    struct page_list_head pages;
> -
>      /* Host p2m: Log-dirty ranges registered for the domain. */
>      struct rangeset   *logdirty_ranges;
> 
> @@ -233,6 +227,12 @@ struct p2m_domain {
>      bool               global_logdirty;
> 
>  #ifdef CONFIG_HVM
> +    /* Translated domain: p2m mapping */
> +    pagetable_t        phys_table;
> +
> +    /* Pages used to construct the p2m */
> +    struct page_list_head pages;
> +
>      /* Alternate p2m: count of vcpu's currently using this p2m. */
>      atomic_t           active_vcpus;
> 


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 01/16] x86/P2M: rename p2m_remove_page()
  2021-07-05 16:05 ` [PATCH 01/16] x86/P2M: rename p2m_remove_page() Jan Beulich
@ 2022-02-04 21:54   ` George Dunlap
  2022-02-07  9:20     ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-04 21:54 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap


On Mon, Jul 5, 2021 at 5:05 PM Jan Beulich <jbeulich@suse.com> wrote:

> This is in preparation to re-using the original name.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>

Hey Jan,

This series overall looks good; thanks for taking this on.

Functionally this patch looks good; just one question...

--- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -788,8 +788,8 @@ void p2m_final_teardown(struct domain *d
>  #ifdef CONFIG_HVM
>
>  static int __must_check
> -p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
> -                unsigned int page_order)
> +p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
> +                 unsigned int page_order)
>

One question that's naturally raised for both this and the following patch
is, what is the new naming "scheme" for these renamed functions, and how do
they relate to the old scheme?

Overall it seems like the intention is that "guest_physmap_..." can be
called on a domain which may be PV or HVM, while "p2m_..." should only be
called on HVM domains.

There's also "..._entry" vs "..._page".  Does the p2m_remove_page /
p2m_remove_entry distinction have a meaning, and is it the same meaning as
guest_physmap_add_page / guest_physmap_add_entry?  Or is it similar to
p2m_init_nestedp2m / p2m_nestedp2m_init -- we need both functions and
don't want to make the names longer?

 -George


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page()
  2021-07-05 16:06 ` [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
  2021-07-05 17:47   ` Paul Durrant
@ 2022-02-04 22:07   ` George Dunlap
  2022-02-07  9:38     ` Jan Beulich
  1 sibling, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-04 22:07 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné,
	Paul Durrant, George Dunlap


On Mon, Jul 5, 2021 at 5:06 PM Jan Beulich <jbeulich@suse.com> wrote:

> p2m_add_page() is simply a rename from guest_physmap_add_entry().
> p2m_remove_page() then is its counterpart, despite rendering
> guest_physmap_remove_page(). This way callers can use suitable pairs of
> functions (previously violated by hvm/grant_table.c).
>

Obviously this needs some clarification.  While we're here, I find this a
bit confusing; I tend to use the present tense for the way the code is
before the patch, and the imperative for what the patch does; so I'd say:

Rename guest_physmap_add_entry() to p2m_add_page(); make
guest_physmap_remove_page() a wrapper around p2m_remove_page().  That way callers
can use suitable pairs...

Other than that looks good.

 -George


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 03/16] x86/P2M: drop a few CONFIG_HVM
  2021-07-05 16:06 ` [PATCH 03/16] x86/P2M: drop a few CONFIG_HVM Jan Beulich
@ 2022-02-04 22:13   ` George Dunlap
  2022-02-07  9:51     ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-04 22:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap


On Mon, Jul 5, 2021 at 5:06 PM Jan Beulich <jbeulich@suse.com> wrote:

> This is to make it easier to see which parts of p2m.c still aren't HVM-
> specific: In one case the conditionals sat in an already guarded region,
> while in the other case P2M_AUDIT implies HVM.
>

I think this would be much more easy to understand what's going on if it
was more like this:

---
x86/p2m: P2M_AUDIT implies CONFIG_HVM

Remove one #endif / #ifdef CONFIG_HVM pair to make all the audit code
CONFIG_HVM only.  This is to make it easier to see which parts of p2m.c
still aren't HVM-specific.

While here, remove a redundant set of nested #ifdef CONFIG_HVM.
---

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 04/16] x86/P2M: move map_domain_gfn() (again)
  2021-07-05 16:07 ` [PATCH 04/16] x86/P2M: move map_domain_gfn() (again) Jan Beulich
@ 2022-02-04 22:17   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-04 22:17 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap


On Mon, Jul 5, 2021 at 5:07 PM Jan Beulich <jbeulich@suse.com> wrote:

> The main user is the guest walking code, so move it back there; commit
> 9a6787cc3809 ("x86/mm: build map_domain_gfn() just once") would perhaps
> better have kept it there in the first place. This way it'll only get
> built when it's actually needed (and still only once).
>
> This also eliminates one more CONFIG_HVM conditional from p2m.c.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 05/16] x86/mm: move guest_physmap_{add,remove}_page()
  2021-07-05 16:07 ` [PATCH 05/16] x86/mm: move guest_physmap_{add,remove}_page() Jan Beulich
@ 2022-02-05 21:06   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-05 21:06 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne



> On Jul 5, 2021, at 5:07 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> ... to a new file, separating the functions from their HVM-specific
> backing ones, themselves only dealing with the non-translated case.
> 
> To avoid having a new CONFIG_HVM conditional in there, do away with
> the inline placeholder.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 06/16] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
  2021-07-05 16:07 ` [PATCH 06/16] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
@ 2022-02-05 21:09   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-05 21:09 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne



> On Jul 5, 2021, at 5:07 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> ..., moving the former into the new physmap.c.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
  2021-07-05 16:09 ` [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
  2021-07-07  1:35   ` Tian, Kevin
@ 2022-02-05 21:17   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-05 21:17 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne, Kevin Tian



> On Jul 5, 2021, at 5:09 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> This also includes the two p2m related fields.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
  2021-07-05 16:09 ` [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
@ 2022-02-05 21:29   ` George Dunlap
  2022-02-07 10:11     ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-05 21:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne



> On Jul 5, 2021, at 5:09 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> There's no need to initialize respective data for PV domains. Note that
> p2m_teardown_{alt,nested}p2m() will handle the lack-of-initialization
> case fine.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -102,6 +102,9 @@ static int p2m_initialise(struct domain
>     p2m->default_access = p2m_access_rwx;
>     p2m->p2m_class = p2m_host;
> 
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
>     p2m_pod_init(p2m);
>     p2m_nestedp2m_init(p2m);


> 
> @@ -259,7 +262,7 @@ int p2m_init(struct domain *d)
>     int rc;
> 
>     rc = p2m_init_hostp2m(d);
> -    if ( rc )
> +    if ( rc || !is_hvm_domain(d) )
>         return rc;
> 
> #ifdef CONFIG_HVM
> --- a/xen/arch/x86/mm/p2m.h
> +++ b/xen/arch/x86/mm/p2m.h
> @@ -17,6 +17,8 @@
> 
> #include <xen/mem_access.h>
> 
> +void p2m_pod_init(struct p2m_domain *p2m);
> +
> int p2m_add_identity_entry(struct domain *d, unsigned long gfn,
>                            p2m_access_t p2ma, unsigned int flag);
> int p2m_remove_identity_entry(struct domain *d, unsigned long gfn);
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1135,6 +1135,12 @@ p2m_pod_demand_populate(struct p2m_domai
>     mfn_t mfn;
>     unsigned long i;
> 
> +    if ( !p2m_is_hostp2m(p2m) )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return false;
> +    }
> +
>     ASSERT(gfn_locked_by_me(p2m, gfn));
>     pod_lock(p2m);

Why this check rather than something which explicitly says HVM?

If you really mean to check for HVM here but are just using this as a shortcut, it needs a comment.

With that addressed:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

 -George



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 09/16] x86/P2M: split out init/teardown functions
  2021-07-05 16:10 ` [PATCH 09/16] x86/P2M: split out init/teardown functions Jan Beulich
@ 2022-02-05 21:31   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-05 21:31 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne



> On Jul 5, 2021, at 5:10 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> Mostly just code movement, and certainly no functional change intended.
> In p2m_final_teardown() the calls to p2m_teardown_{alt,nested}p2m() need
> to be guarded by an is_hvm_domain() check now, though. This matches
> p2m_init(). And p2m_is_logdirty_range() also gets moved inside the (so
> far) adjacent #ifdef.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
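
(For illustration, the guarding described above, as a sketch with
stand-in declarations and everything else elided:)

struct domain;
bool is_hvm_domain(const struct domain *d);    /* stand-in */
void p2m_teardown_altp2m(struct domain *d);    /* stand-in */
void p2m_teardown_nestedp2m(struct domain *d); /* stand-in */

void p2m_final_teardown(struct domain *d)
{
    /* Guard now needed: these are only set up for HVM domains. */
    if ( is_hvm_domain(d) )
    {
        p2m_teardown_altp2m(d);
        p2m_teardown_nestedp2m(d);
    }
    /* ... host p2m teardown continues here ... */
}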

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 01/16] x86/P2M: rename p2m_remove_page()
  2022-02-04 21:54   ` George Dunlap
@ 2022-02-07  9:20     ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2022-02-07  9:20 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

On 04.02.2022 22:54, George Dunlap wrote:
> On Mon, Jul 5, 2021 at 5:05 PM Jan Beulich <jbeulich@suse.com> wrote:
> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -788,8 +788,8 @@ void p2m_final_teardown(struct domain *d
>>  #ifdef CONFIG_HVM
>>
>>  static int __must_check
>> -p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
>> -                unsigned int page_order)
>> +p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
>> +                 unsigned int page_order)
>>
> 
> One question that's naturally raised for both this and the following patch
> is, what is the new naming "scheme" for these renamed functions, and how do
> they relate to the old scheme?
> 
> Overall it seems like the intention is that "guest_physmap_..." can be
> called on a domain which may be PV or HVM, while "p2m_..." should only be
> called on HVM domains.

Yes. I think by the end of the series all p2m_...() named functions
pertain to HVM domains only.

> There's also "..._entry" vs "..._page".  Does the p2m_remove_page /
> p2m_remove_entry distinction have a meaning, and is it the same meaning as
> guest_physmap_add_page / guest_physmap_add_entry?

In the next patch a pair p2m_{add,remove}_page() is introduced.
p2m_remove_entry() remains a static helper for the latter of the two,
assuming the GFN is already locked. I've used "page" vs "entry" in
the names just as they were used prior to patch 2; I'd be happy to take
suggestions on what else could be used in place of "entry" (but I'd
like to stick to "page").
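
To illustrate the intended layering, roughly (a sketch only, with
stand-in types and the actual bodies elided):

typedef struct { unsigned long g; } gfn_t;              /* stand-in */
typedef struct { unsigned long m; } mfn_t;              /* stand-in */
struct p2m_domain;
#define gfn_lock(p2m, gfn, order)   ((void)0)           /* stand-in */
#define gfn_unlock(p2m, gfn, order) ((void)0)           /* stand-in */

/* Static helper: assumes the caller already holds the GFN lock. */
static int p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                            unsigned int page_order)
{
    /* ... clear the P2M entries covering the range ... */
    return 0;
}

/* Counterpart of p2m_add_page(): takes the lock itself. */
int p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                    unsigned int page_order)
{
    int rc;

    gfn_lock(p2m, gfn, page_order);
    rc = p2m_remove_entry(p2m, gfn, mfn, page_order);
    gfn_unlock(p2m, gfn, page_order);

    return rc;
}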

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page()
  2022-02-04 22:07   ` George Dunlap
@ 2022-02-07  9:38     ` Jan Beulich
  2022-02-07 15:49       ` George Dunlap
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2022-02-07  9:38 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné,
	Paul Durrant, George Dunlap

On 04.02.2022 23:07, George Dunlap wrote:
> On Mon, Jul 5, 2021 at 5:06 PM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> p2m_add_page() is simply a rename from guest_physmap_add_entry().
>> p2m_remove_page() then is its counterpart, despite rendering
>> guest_physmap_remove_page().

First of all: I noticed long ago that this sentence was missing some
words. It now ends "... a trivial wrapper."

>> This way callers can use suitable pairs of
>> functions (previously violated by hvm/grant_table.c).
>>
> 
> Obviously this needs some clarification.  While we're here, I find this a
> bit confusing; I tend to use the present tense for the way the code is
> before the patch, and the imperative for what the patch does; so I'd say:
> 
> Rename guest_physmap_add_entry() to p2m_add_page(); make
> guest_physmap_remove_page() a wrapper around p2m_remove_page().  That way
> callers can use suitable pairs...

Well, yes, I understand you might word it this way. I'm not convinced,
though, that the fixed scheme you mention for present vs imperative use
is a universal fit which always needs to be followed. When reading the
description with the title in mind (and with the previously missing
words added), I find the use of present tense quite reasonable here.
I'm further slightly puzzled that you keep using the present tense in
"That way callers can use ...".

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 03/16] x86/P2M: drop a few CONFIG_HVM
  2022-02-04 22:13   ` George Dunlap
@ 2022-02-07  9:51     ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2022-02-07  9:51 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

On 04.02.2022 23:13, George Dunlap wrote:
> On Mon, Jul 5, 2021 at 5:06 PM Jan Beulich <jbeulich@suse.com> wrote:
> 
>> This is to make it easier to see which parts of p2m.c still aren't HVM-
>> specific: In one case the conditionals sat in an already guarded region,
>> while in the other case P2M_AUDIT implies HVM.
>>
> 
> I think this would be much more easy to understand what's going on if it
> was more like this:
> 
> ---
> x86/p2m: P2M_AUDIT implies CONFIG_HVM
> 
> Remove one #endif / #ifdef CONFIG_HVM pair to make all the audit code
> CONFIG_HVM only.  This is to make it easier to see which parts of p2m.c
> still aren't HVM-specific.
> 
> While here, remove a redundant set of nested #ifdef CONFIG_HVM.
> ---
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks. Unless you tell me otherwise I'll assume the changed title and
description are merely a suggestion, not a requirement for your R-b to
apply. I continue to like my variant better; in particular I'd prefer
not to mention P2M_AUDIT in the title, thereby avoiding "While here, ..."
or the like.

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
  2022-02-05 21:29   ` George Dunlap
@ 2022-02-07 10:11     ` Jan Beulich
  2022-02-07 14:45       ` George Dunlap
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2022-02-07 10:11 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

On 05.02.2022 22:29, George Dunlap wrote:
>> On Jul 5, 2021, at 5:09 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -1135,6 +1135,12 @@ p2m_pod_demand_populate(struct p2m_domai
>>     mfn_t mfn;
>>     unsigned long i;
>>
>> +    if ( !p2m_is_hostp2m(p2m) )
>> +    {
>> +        ASSERT_UNREACHABLE();
>> +        return false;
>> +    }
>> +
>>     ASSERT(gfn_locked_by_me(p2m, gfn));
>>     pod_lock(p2m);
> 
> Why this check rather than something which explicitly says HVM?

Checking for just HVM is too lax here imo. PoD operations should
never be invoked for alternative or nested p2ms; see the various
uses of p2m_get_hostp2m() in p2m-pod.c. However, looking at the
call sites again, I no longer see why I put
ASSERT_UNREACHABLE() in here. IOW ...

> If you really mean to check for HVM here but are just using this as a shortcut, it needs a comment.

... it's not just a shortcut, yet it feels as if even then you'd
want a comment attached. I'm not really sure though what such a
comment might say that goes beyond what the use of p2m_is_hostp2m()
already communicates.
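
For illustration (simplified from the real headers, with everything
unrelated elided): both alt and nested p2ms belong to HVM domains, so
an is_hvm_domain() check would let them through, while the host-p2m
check does not.

typedef enum {
    p2m_host,
    p2m_nested,
    p2m_alternate,
} p2m_class_t;

struct p2m_domain {
    p2m_class_t p2m_class;
    /* ... */
};

/* True only for the host p2m, not for alt/nested ones. */
#define p2m_is_hostp2m(p2m) ((p2m)->p2m_class == p2m_host)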

> With that addressed:
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks, but as per above I'll wait with making use of this.

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
  2022-02-07 10:11     ` Jan Beulich
@ 2022-02-07 14:45       ` George Dunlap
  2022-02-07 15:23         ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-07 14:45 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne



> On Feb 7, 2022, at 10:11 AM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 05.02.2022 22:29, George Dunlap wrote:
>>> On Jul 5, 2021, at 5:09 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>> @@ -1135,6 +1135,12 @@ p2m_pod_demand_populate(struct p2m_domai
>>>    mfn_t mfn;
>>>    unsigned long i;
>>> 
>>> +    if ( !p2m_is_hostp2m(p2m) )
>>> +    {
>>> +        ASSERT_UNREACHABLE();
>>> +        return false;
>>> +    }
>>> +
>>>    ASSERT(gfn_locked_by_me(p2m, gfn));
>>>    pod_lock(p2m);
>> 
>> Why this check rather than something which explicitly says HVM?
> 
> Checking for just HVM is too lax here imo. PoD operations should
> never be invoked for alternative or nested p2ms; see the various
> uses of p2m_get_hostp2m() in p2m-pod.c.

The fact remains that it doesn’t match what the patch description says, and you’re making me, the reviewer, guess why you changed it — along with anyone else coming back to try to figure out why the code was this way.

If you want me to approve of the decision to make the check more strict than simply HVM, then you need to make it clear why you’re doing it.  Adding a sentence in the commit message should be fine.

 -George

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
  2022-02-07 14:45       ` George Dunlap
@ 2022-02-07 15:23         ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2022-02-07 15:23 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

On 07.02.2022 15:45, George Dunlap wrote:
>> On Feb 7, 2022, at 10:11 AM, Jan Beulich <jbeulich@suse.com> wrote:
>> On 05.02.2022 22:29, George Dunlap wrote:
>>>> On Jul 5, 2021, at 5:09 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>>> @@ -1135,6 +1135,12 @@ p2m_pod_demand_populate(struct p2m_domai
>>>>    mfn_t mfn;
>>>>    unsigned long i;
>>>>
>>>> +    if ( !p2m_is_hostp2m(p2m) )
>>>> +    {
>>>> +        ASSERT_UNREACHABLE();
>>>> +        return false;
>>>> +    }
>>>> +
>>>>    ASSERT(gfn_locked_by_me(p2m, gfn));
>>>>    pod_lock(p2m);
>>>
>>> Why this check rather than something which explicitly says HVM?
>>
>> Checking for just HVM is too lax here imo. PoD operations should
>> never be invoked for alternative or nested p2ms; see the various
>> uses of p2m_get_hostp2m() in p2m-pod.c.
> 
> The fact remains that it doesn’t match what the patch descriptions says, and you’re making me, the reviewer, guess why you changed it — along with anyone else coming back to try to figure out why the code was this way.
> 
> If you want me to approve of the decision to make the check more strict than simply HVM, then you need to make it clear why you’re doing it.  Adding a sentence in the commit message should be fine.

I've added a paragraph, but already after your first reply I was
asking myself whether I actually need that change here. It's
more of a "just to be on the safe side" change, I think. But
it's been quite a while since I put this change together, so I
may also have forgotten about some subtle aspect.

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page()
  2022-02-07  9:38     ` Jan Beulich
@ 2022-02-07 15:49       ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-07 15:49 UTC (permalink / raw)
  To: Jan Beulich
  Cc: George Dunlap, xen-devel, Andrew Cooper, Wei Liu,
	Roger Pau Monne, Paul Durrant



> On Feb 7, 2022, at 9:38 AM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 04.02.2022 23:07, George Dunlap wrote:
>> On Mon, Jul 5, 2021 at 5:06 PM Jan Beulich <jbeulich@suse.com> wrote:
>> 
>>> p2m_add_page() is simply a rename from guest_physmap_add_entry().
>>> p2m_remove_page() then is its counterpart, despite rendering
>>> guest_physmap_remove_page().
> 
> First of all: It has been long ago that I noticed that this sentence
> misses words. It now ends "...  a trivial wrapper."
> 
>>> This way callers can use suitable pairs of
>>> functions (previously violated by hvm/grant_table.c).
>>> 
>> 
>> Obviously this needs some clarification.  While we're here, I find this a
>> bit confusing; I tend to use the present tense for the way the code is
>> before the patch, and the imperative for what the patch does; so Id' say:
>> 
>> Rename guest_physmap_add_entry() to p2m_add_page; make
>> guest_physmap_remove_page a wrapper with p2m_remove_page.  That way callers
>> can use suitable pairs...
> 
> Well, yes, I understand you might word it this way. I'm not convinced
> of the fixed scheme you mention for present vs imperative use to be a
> universal fit though, requiring to always be followed. When reading
> the description with the title in mind (and with the previously missing
> words added), I find the use of present tense quite reasonable here.

The way you wrote it is ambiguous grammatically; it could either mean, “Right now p2m_add_page() is simply a rename, and so…” or it could mean, “In this patch, p2m_add_page() is simply a rename.”  If a reader starts interpreting it the first way, then they’ll read along until it doesn’t make sense any more, then have to re-evaluate the whole paragraph.

It seems to me that my proposal is unambiguous.

> I'm further slightly puzzled by you keeping the use of present tense in
> "That way callers can use ...".

I wouldn’t call that the present tense; I’m sure a real linguist would have a name for it. Consider the sentence, “Put the box near the door; that way we can find it easily when we need it.”  The second half of the sentence is set in the hypothetical universe in which the imperative has been carried out.

 -George



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 10/16] x86/P2M: p2m_get_page_from_gfn() is HVM-only
  2021-07-05 16:10 ` [PATCH 10/16] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
@ 2022-02-14 14:26   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-14 14:26 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

[-- Attachment #1: Type: text/plain, Size: 463 bytes --]



> On Jul 5, 2021, at 5:10 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> This function is the wrong layer to go through for PV guests. It happens
> to work, but produces results which aren't fully consistent with
> get_page_from_gfn(). The latter function, however, cannot be used in
> map_domain_gfn() as it may not be the host P2M we mean to act on.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
  2021-07-05 16:12 ` [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access() Jan Beulich
@ 2022-02-14 15:12   ` George Dunlap
  2022-02-14 15:20     ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-14 15:12 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne,
	Tamas K Lengyel, Alexandru Isaila, Petre Pircalabu

[-- Attachment #1: Type: text/plain, Size: 797 bytes --]



> On Jul 5, 2021, at 5:12 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> Introduce an inline wrapper dealing with the non-translated-domain case,
> while stripping that logic from the main function, which gets renamed to
> p2m_get_gfn_type_access(). HVM-only callers can then directly use the
> main function.
> 
> Along with renaming the main function also make its and the new inline
> helper's GFN parameters type-safe.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
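
(For illustration, the split described above might look roughly like
so; this is a sketch only, with the parameter lists abbreviated:)

static inline mfn_t __get_gfn_type_access(struct p2m_domain *p2m, gfn_t gfn,
                                          p2m_type_t *t /* , ... */)
{
    if ( !paging_mode_translate(p2m->domain) )
    {
        /* Non-translated domain: GFN == MFN, plain RAM. */
        *t = p2m_ram_rw;
        return _mfn(gfn_x(gfn));
    }

    /* HVM-only callers can call this directly. */
    return p2m_get_gfn_type_access(p2m, gfn, t /* , ... */);
}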

Nit in the title: I read “HVM” as “aych vee emm”, and so I use ‘an’ before it rather than ‘a’; i.e., “derive an HVM-only…”

I feel obligated to mention it but I’ll leave it to you whether you want to change it or not:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn()
  2021-07-05 16:12 ` [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
@ 2022-02-14 15:17   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-14 15:17 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

[-- Attachment #1: Type: text/plain, Size: 421 bytes --]



> On Jul 5, 2021, at 5:12 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> All explicit callers of __put_gfn() are in HVM-only code and hold a valid
> P2M pointer in their hands. Move the paging_mode_translate() check out of
> there into put_gfn(), renaming __put_gfn() and making its GFN parameter
> type-safe.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
  2022-02-14 15:12   ` George Dunlap
@ 2022-02-14 15:20     ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2022-02-14 15:20 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne,
	Tamas K Lengyel, Alexandru Isaila, Petre Pircalabu

On 14.02.2022 16:12, George Dunlap wrote:
>> On Jul 5, 2021, at 5:12 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> Introduce an inline wrapper dealing with the non-translated-domain case,
>> while stripping that logic from the main function, which gets renamed to
>> p2m_get_gfn_type_access(). HVM-only callers can then directly use the
>> main function.
>>
>> Along with renaming the main function also make its and the new inline
>> helper's GFN parameters type-safe.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Nit in the title: I read “HVM” as “aych vee emm”, and so I use ‘an’ before it rather than ‘a’; i.e., “derive an HVM-only…”
> 
> I feel obligated to mention it but I’ll leave it to you whether you want to change it or not:

Thanks - I always appreciate clarification on my frequently improper
language use. In the case here, however, I know people who say "aych"
as well as ones who say "haych", so I'm always hard pressed to judge
which one's right (and probably both are). I therefore decided to
simply drop the "a" from the title, which I think still leaves it a
proper one.

> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

And thanks again.

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 13/16] shr_pages field is MEM_SHARING-only
  2021-07-05 16:13 ` [PATCH 13/16] shr_pages field is MEM_SHARING-only Jan Beulich
  2021-07-06 12:42   ` Tamas K Lengyel
@ 2022-02-14 15:36   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-14 15:36 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Roger Pau Monne, Tamas K Lengyel, Andrew Cooper,
	Ian Jackson, Julien Grall, Stefano Stabellini, Wei Liu

[-- Attachment #1: Type: text/plain, Size: 411 bytes --]



> On Jul 5, 2021, at 5:13 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> Conditionalize it and its uses accordingly. The main goal though is to
> demonstrate that x86's p2m_teardown() is now empty when !HVM, which in
> particular means the last remaining use of p2m_lock() in this cases goes
> away.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
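
(For illustration, assuming the usual pattern for conditionalizing
such a field; a sketch only:)

struct domain {
    /* ... */
#ifdef CONFIG_MEM_SHARING
    atomic_t shr_pages;  /* only accounted when mem-sharing is built in */
#endif
    /* ... */
};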

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 14/16] paged_pages field is MEM_PAGING-only
  2021-07-05 16:14 ` [PATCH 14/16] paged_pages field is MEM_PAGING-only Jan Beulich
  2021-07-06 12:44   ` Tamas K Lengyel
@ 2022-02-14 15:38   ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-14 15:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Roger Pau Monne, Tamas K Lengyel, Andrew Cooper,
	Ian Jackson, Julien Grall, Stefano Stabellini, Wei Liu

[-- Attachment #1: Type: text/plain, Size: 228 bytes --]



> On Jul 5, 2021, at 5:14 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> Conditionalize it and its uses accordingly.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 15/16] x86/P2M: p2m.c is HVM-only
  2021-07-05 16:14 ` [PATCH 15/16] x86/P2M: p2m.c is HVM-only Jan Beulich
@ 2022-02-14 15:39   ` George Dunlap
  0 siblings, 0 replies; 50+ messages in thread
From: George Dunlap @ 2022-02-14 15:39 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

[-- Attachment #1: Type: text/plain, Size: 449 bytes --]



> On Jul 5, 2021, at 5:14 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> This only requires moving p2m_percpu_rwlock elsewhere (ultimately I
> think all P2M locking should go away as well when !HVM, but this looks
> to require further code juggling). The two other unguarded functions are
> already unneeded (by virtue of DCE) when !HVM.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only
  2021-07-05 16:15 ` [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
  2021-07-05 17:49   ` Paul Durrant
@ 2022-02-14 15:51   ` George Dunlap
  2022-02-14 16:07     ` Jan Beulich
  1 sibling, 1 reply; 50+ messages in thread
From: George Dunlap @ 2022-02-14 15:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne, Paul Durrant

[-- Attachment #1: Type: text/plain, Size: 1387 bytes --]



> On Jul 5, 2021, at 5:15 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> ..., as are the majority of the locks involved. Conditionalize things
> accordingly.
> 
> Also adjust the ioreq field's indentation at this occasion.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

With one question…

> @@ -905,10 +917,10 @@ int p2m_altp2m_propagate_change(struct d
> /* Set a specific p2m view visibility */
> int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int idx,
>                                    uint8_t visible);
> -#else
> +#else /* CONFIG_HVM */
> struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
> static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {}
> -#endif
> +#endif /* CONFIG_HVM */

This is relatively minor, but what’s the norm for how to label #else branches here?  Wouldn’t you normally see “#endif /* CONFIG_HVM */” and think that the immediately preceding lines are compiled only if CONFIG_HVM is defined?  I.e., would it be more accurate to write “!CONFIG_HVM” here?

I realize in this case it’s not a big deal since the #else is just three lines above it, but since you took the time to add the comment in there, it seems like it’s worth the time to have a quick think about whether that’s the right thing to do.

 -George

[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only
  2022-02-14 15:51   ` George Dunlap
@ 2022-02-14 16:07     ` Jan Beulich
  2022-02-16  7:54       ` Jan Beulich
  0 siblings, 1 reply; 50+ messages in thread
From: Jan Beulich @ 2022-02-14 16:07 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne, Paul Durrant

On 14.02.2022 16:51, George Dunlap wrote:
> 
> 
>> On Jul 5, 2021, at 5:15 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> ..., as are the majority of the locks involved. Conditionalize things
>> accordingly.
>>
>> Also adjust the ioreq field's indentation at this occasion.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks.

> With one question…
> 
>> @@ -905,10 +917,10 @@ int p2m_altp2m_propagate_change(struct d
>> /* Set a specific p2m view visibility */
>> int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int idx,
>>                                    uint8_t visible);
>> -#else
>> +#else /* CONFIG_HVM */
>> struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
>> static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {}
>> -#endif
>> +#endif /* CONFIG_HVM */
> 
> This is relatively minor, but what’s the norm for how to label #else branches here?  Wouldn’t you normally see “#endif /* CONFIG_HVM */” and think that the immediately preceding lines are compiled only if CONFIG_HVM is defined?  I.e., would it be more accurate to write “!CONFIG_HVM” here?
> 
> I realize in this case it’s not a big deal since the #else is just three lines above it, but since you took the time to add the comment in there, it seems like it’s worth the time to have a quick think about whether that’s the right thing to do.

Hmm, yes, let me make this !CONFIG_HVM. I think we're not really
consistent with this, but I agree it's more natural, as you say.

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only
  2022-02-14 16:07     ` Jan Beulich
@ 2022-02-16  7:54       ` Jan Beulich
  0 siblings, 0 replies; 50+ messages in thread
From: Jan Beulich @ 2022-02-16  7:54 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne, Paul Durrant

On 14.02.2022 17:07, Jan Beulich wrote:
> On 14.02.2022 16:51, George Dunlap wrote:
>>> On Jul 5, 2021, at 5:15 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>
>>> ..., as are the majority of the locks involved. Conditionalize things
>>> accordingly.
>>>
>>> Also adjust the ioreq field's indentation at this occasion.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> Thanks.
> 
>> With one question…
>>
>>> @@ -905,10 +917,10 @@ int p2m_altp2m_propagate_change(struct d
>>> /* Set a specific p2m view visibility */
>>> int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int idx,
>>>                                    uint8_t visible);
>>> -#else
>>> +#else /* CONFIG_HVM */
>>> struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
>>> static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {}
>>> -#endif
>>> +#endif /* CONFIG_HVM */
>>
>> This is relatively minor, but what’s the norm for how to label #else branches here?  Wouldn’t you normally see “#endif /* CONFIG_HVM */” and think that the immediately preceding lines are compiled only if CONFIG_HVM is defined?  I.e., would it be more accurate to write “!CONFIG_HVM” here?
>>
>> I realize in this case it’s not a big deal since the #else is just three lines above it, but since you took the time to add the comment in there, it seems like it’s worth the time to have a quick think about whether that’s the right thing to do.
> 
> Hmm, yes, let me make this !CONFIG_HVM. I think we're not really
> consistent with this, but I agree it's more natural, as you say.

When writing a similar construct elsewhere, I realized this is
odd. Looking through include/asm/, the model generally used is

#ifdef CONFIG_xyz
#else /* !CONFIG_xyz */
#endif /* CONFIG_xyz */

That's what I'll switch to here as well, then.
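
Applied to the hunk quoted earlier in this thread, that gives:

#ifdef CONFIG_HVM
/* Set a specific p2m view visibility */
int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int idx,
                                   uint8_t visible);
#else /* !CONFIG_HVM */
struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {}
#endif /* CONFIG_HVM */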

Jan



^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2022-02-16  7:54 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-05 16:03 [PATCH 00/16] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
2021-07-05 16:05 ` Jan Beulich
2021-07-05 16:05 ` [PATCH 01/16] x86/P2M: rename p2m_remove_page() Jan Beulich
2022-02-04 21:54   ` George Dunlap
2022-02-07  9:20     ` Jan Beulich
2021-07-05 16:06 ` [PATCH 02/16] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
2021-07-05 17:47   ` Paul Durrant
2021-07-06  7:05     ` Jan Beulich
2022-02-04 22:07   ` George Dunlap
2022-02-07  9:38     ` Jan Beulich
2022-02-07 15:49       ` George Dunlap
2021-07-05 16:06 ` [PATCH 03/16] x86/P2M: drop a few CONFIG_HVM Jan Beulich
2022-02-04 22:13   ` George Dunlap
2022-02-07  9:51     ` Jan Beulich
2021-07-05 16:07 ` [PATCH 04/16] x86/P2M: move map_domain_gfn() (again) Jan Beulich
2022-02-04 22:17   ` George Dunlap
2021-07-05 16:07 ` [PATCH 05/16] x86/mm: move guest_physmap_{add,remove}_page() Jan Beulich
2022-02-05 21:06   ` George Dunlap
2021-07-05 16:07 ` [PATCH 06/16] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
2022-02-05 21:09   ` George Dunlap
2021-07-05 16:09 ` [PATCH 07/16] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
2021-07-07  1:35   ` Tian, Kevin
2022-02-05 21:17   ` George Dunlap
2021-07-05 16:09 ` [PATCH 08/16] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
2022-02-05 21:29   ` George Dunlap
2022-02-07 10:11     ` Jan Beulich
2022-02-07 14:45       ` George Dunlap
2022-02-07 15:23         ` Jan Beulich
2021-07-05 16:10 ` [PATCH 09/16] x86/P2M: split out init/teardown functions Jan Beulich
2022-02-05 21:31   ` George Dunlap
2021-07-05 16:10 ` [PATCH 10/16] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
2022-02-14 14:26   ` George Dunlap
2021-07-05 16:12 ` [PATCH 11/16] x86/P2M: derive a HVM-only variant from __get_gfn_type_access() Jan Beulich
2022-02-14 15:12   ` George Dunlap
2022-02-14 15:20     ` Jan Beulich
2021-07-05 16:12 ` [PATCH 12/16] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
2022-02-14 15:17   ` George Dunlap
2021-07-05 16:13 ` [PATCH 13/16] shr_pages field is MEM_SHARING-only Jan Beulich
2021-07-06 12:42   ` Tamas K Lengyel
2022-02-14 15:36   ` George Dunlap
2021-07-05 16:14 ` [PATCH 14/16] paged_pages field is MEM_PAGING-only Jan Beulich
2021-07-06 12:44   ` Tamas K Lengyel
2022-02-14 15:38   ` George Dunlap
2021-07-05 16:14 ` [PATCH 15/16] x86/P2M: p2m.c is HVM-only Jan Beulich
2022-02-14 15:39   ` George Dunlap
2021-07-05 16:15 ` [PATCH 16/16] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
2021-07-05 17:49   ` Paul Durrant
2022-02-14 15:51   ` George Dunlap
2022-02-14 16:07     ` Jan Beulich
2022-02-16  7:54       ` Jan Beulich

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.