* [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only
@ 2022-02-23 15:55 Jan Beulich
  2022-02-23 15:57 ` [PATCH v2 01/14] x86/P2M: rename p2m_remove_page() Jan Beulich
                   ` (14 more replies)
  0 siblings, 15 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 15:55 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

The primary goal of this series is to leave p2m.c with, as its leading
comment suggests, just code for "physical-to-machine mappings for
automatically-translated domains". This requires splitting a few
functions, with their non-HVM parts moved elsewhere.
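
To illustrate the recurring shape of these splits, here is (verbatim from
patches 02 and 03) what guest_physmap_remove_page() becomes: the
guest-type-agnostic wrapper retains the PV handling, while translated
domains are forwarded to an HVM-only helper:

    int
    guest_physmap_remove_page(struct domain *d, gfn_t gfn,
                              mfn_t mfn, unsigned int page_order)
    {
        /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
        if ( !paging_mode_translate(d) )
            return 0;

        return p2m_remove_page(d, gfn, mfn, page_order); /* HVM-only backend */
    }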

There aren't many changes in v2, mostly from re-basing. See individual
patches for details.

01: x86/P2M: rename p2m_remove_page()
02: x86/P2M: introduce p2m_{add,remove}_page()
03: x86/mm: move guest_physmap_{add,remove}_page()
04: x86/mm: split set_identity_p2m_entry() into PV and HVM parts
05: x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
06: x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
07: x86/P2M: split out init/teardown functions
08: x86/P2M: p2m_get_page_from_gfn() is HVM-only
09: x86/P2M: derive HVM-only variant from __get_gfn_type_access()
10: x86/p2m: re-arrange {,__}put_gfn()
11: shr_pages field is MEM_SHARING-only
12: paged_pages field is MEM_PAGING-only
13: x86/P2M: p2m.c is HVM-only
14: x86/P2M: the majority of struct p2m_domain's fields are HVM-only

Jan




* [PATCH v2 01/14] x86/P2M: rename p2m_remove_page()
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
@ 2022-02-23 15:57 ` Jan Beulich
  2022-04-01 11:00   ` George Dunlap
  2022-02-23 15:58 ` [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 15:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This is in preparation for re-using the original name.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -788,8 +788,8 @@ void p2m_final_teardown(struct domain *d
 #ifdef CONFIG_HVM
 
 static int __must_check
-p2m_remove_page(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
-                unsigned int page_order)
+p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
+                 unsigned int page_order)
 {
     unsigned long i;
     p2m_type_t t;
@@ -861,7 +861,7 @@ guest_physmap_remove_page(struct domain
         return 0;
 
     gfn_lock(p2m, gfn, page_order);
-    rc = p2m_remove_page(p2m, gfn, mfn, page_order);
+    rc = p2m_remove_entry(p2m, gfn, mfn, page_order);
     gfn_unlock(p2m, gfn, page_order);
 
     return rc;
@@ -1034,7 +1034,7 @@ guest_physmap_add_entry(struct domain *d
                 P2M_DEBUG("old gfn=%#lx -> mfn %#lx\n",
                           gfn_x(ogfn) , mfn_x(omfn));
                 if ( mfn_eq(omfn, mfn_add(mfn, i)) &&
-                     (rc = p2m_remove_page(p2m, ogfn, omfn, 0)) )
+                     (rc = p2m_remove_entry(p2m, ogfn, omfn, 0)) )
                     goto out;
             }
         }
@@ -2444,7 +2444,7 @@ int p2m_change_altp2m_gfn(struct domain
     {
         mfn = ap2m->get_entry(ap2m, old_gfn, &t, &a, 0, NULL, NULL);
         rc = mfn_valid(mfn)
-             ? p2m_remove_page(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
+             ? p2m_remove_entry(ap2m, old_gfn, mfn, PAGE_ORDER_4K)
              : 0;
         goto out;
     }




* [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page()
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
  2022-02-23 15:57 ` [PATCH v2 01/14] x86/P2M: rename p2m_remove_page() Jan Beulich
@ 2022-02-23 15:58 ` Jan Beulich
  2022-04-01 12:02   ` George Dunlap
  2022-02-23 15:58 ` [PATCH v2 03/14] x86/mm: move guest_physmap_{add,remove}_page() Jan Beulich
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 15:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

Rename guest_physmap_add_entry() to p2m_add_page(); make
guest_physmap_remove_page() a trivial wrapper around p2m_remove_page().
This way callers can use matching pairs of functions (a pairing which
hvm/grant_table.c previously violated).

In HVM-specific code further avoid going through the guest_physmap_*()
layer, and instead use the two new/renamed functions directly.

Ultimately the goal is to have the guest_physmap_...() functions cover
all types of guests, while the p2m_...() ones deal only with translated
domains, as the sketch below illustrates.
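
A minimal sketch of the intended pairing (the caller below is
hypothetical, meant only to illustrate the new interfaces; it is not
part of this patch):

    static int map_and_unmap_one(struct domain *d, gfn_t gfn, mfn_t mfn)
    {
        /* HVM-only code can now use a matched add/remove pair directly. */
        int rc = p2m_add_page(d, gfn, mfn, PAGE_ORDER_4K, p2m_ram_rw);

        if ( rc )
            return rc;

        /* ... use the mapping ... */

        return p2m_remove_page(d, gfn, mfn, PAGE_ORDER_4K);
    }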

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v2: Re-base. Adjust description.

--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -173,8 +173,7 @@ static int __init pvh_populate_memory_ra
             continue;
         }
 
-        rc = guest_physmap_add_page(d, _gfn(start), page_to_mfn(page),
-                                    order);
+        rc = p2m_add_page(d, _gfn(start), page_to_mfn(page), order, p2m_ram_rw);
         if ( rc != 0 )
         {
             printk("Failed to populate memory: [%#lx,%#lx): %d\n",
--- a/xen/arch/x86/hvm/grant_table.c
+++ b/xen/arch/x86/hvm/grant_table.c
@@ -39,9 +39,8 @@ int create_grant_p2m_mapping(uint64_t ad
         p2mt = p2m_grant_map_ro;
     else
         p2mt = p2m_grant_map_rw;
-    rc = guest_physmap_add_entry(current->domain,
-                                 _gfn(addr >> PAGE_SHIFT),
-                                 frame, PAGE_ORDER_4K, p2mt);
+    rc = p2m_add_page(current->domain, _gfn(addr >> PAGE_SHIFT),
+                      frame, PAGE_ORDER_4K, p2mt);
     if ( rc )
         return GNTST_general_error;
     else
@@ -68,7 +67,7 @@ int replace_grant_p2m_mapping(uint64_t a
                  type, mfn_x(old_mfn), mfn_x(frame));
         return GNTST_general_error;
     }
-    if ( guest_physmap_remove_page(d, _gfn(gfn), frame, PAGE_ORDER_4K) )
+    if ( p2m_remove_page(d, _gfn(gfn), frame, PAGE_ORDER_4K) )
     {
         put_gfn(d, gfn);
         return GNTST_general_error;
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -188,8 +188,7 @@ static void hvm_remove_ioreq_gfn(struct
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
 
-    if ( guest_physmap_remove_page(d, iorp->gfn,
-                                   page_to_mfn(iorp->page), 0) )
+    if ( p2m_remove_page(d, iorp->gfn, page_to_mfn(iorp->page), 0) )
         domain_crash(d);
     clear_page(iorp->va);
 }
@@ -205,8 +204,7 @@ static int hvm_add_ioreq_gfn(struct iore
 
     clear_page(iorp->va);
 
-    rc = guest_physmap_add_page(d, iorp->gfn,
-                                page_to_mfn(iorp->page), 0);
+    rc = p2m_add_page(d, iorp->gfn, page_to_mfn(iorp->page), 0, p2m_ram_rw);
     if ( rc == 0 )
         paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
 
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -850,15 +850,17 @@ p2m_remove_entry(struct p2m_domain *p2m,
 }
 
 int
-guest_physmap_remove_page(struct domain *d, gfn_t gfn,
-                          mfn_t mfn, unsigned int page_order)
+p2m_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                unsigned int page_order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
 
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
     if ( !paging_mode_translate(d) )
-        return 0;
+    {
+        ASSERT_UNREACHABLE();
+        return -EPERM;
+    }
 
     gfn_lock(p2m, gfn, page_order);
     rc = p2m_remove_entry(p2m, gfn, mfn, page_order);
@@ -867,6 +869,17 @@ guest_physmap_remove_page(struct domain
     return rc;
 }
 
+int
+guest_physmap_remove_page(struct domain *d, gfn_t gfn,
+                          mfn_t mfn, unsigned int page_order)
+{
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+        return 0;
+
+    return p2m_remove_page(d, gfn, mfn, page_order);
+}
+
 #endif /* CONFIG_HVM */
 
 int
@@ -905,14 +918,14 @@ guest_physmap_add_page(struct domain *d,
         return 0;
     }
 
-    return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
+    return p2m_add_page(d, gfn, mfn, page_order, p2m_ram_rw);
 }
 
 #ifdef CONFIG_HVM
 
 int
-guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
-                        unsigned int page_order, p2m_type_t t)
+p2m_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+             unsigned int page_order, p2m_type_t t)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long i;
@@ -2724,7 +2737,7 @@ static int p2m_add_foreign(struct domain
     {
         if ( is_special_page(mfn_to_page(prev_mfn)) )
             /* Special pages are simply unhooked from this phys slot */
-            rc = guest_physmap_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
+            rc = p2m_remove_page(tdom, _gfn(gpfn), prev_mfn, 0);
         else
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(tdom, gpfn);
@@ -2732,7 +2745,7 @@ static int p2m_add_foreign(struct domain
             goto put_both;
     }
     /*
-     * Create the new mapping. Can't use guest_physmap_add_page() because it
+     * Create the new mapping. Can't use p2m_add_page() because it
      * will update the m2p table which will result in  mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
@@ -2846,7 +2859,7 @@ int xenmem_add_to_physmap_one(
     {
         if ( is_special_page(mfn_to_page(prev_mfn)) )
             /* Special pages are simply unhooked from this phys slot. */
-            rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
+            rc = p2m_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
         else if ( !mfn_eq(mfn, prev_mfn) )
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
@@ -2854,11 +2867,11 @@ int xenmem_add_to_physmap_one(
 
     /* Unmap from old location, if any. */
     if ( !rc && old_gpfn != INVALID_M2P_ENTRY && !gfn_eq(_gfn(old_gpfn), gpfn) )
-        rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
+        rc = p2m_remove_page(d, _gfn(old_gpfn), mfn, PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
-        rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
+        rc = p2m_add_page(d, gpfn, mfn, PAGE_ORDER_4K, p2m_ram_rw);
 
  put_all:
     put_gfn(d, gfn_x(gpfn));
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -577,10 +577,11 @@ int p2m_alloc_table(struct p2m_domain *p
 void p2m_teardown(struct p2m_domain *p2m);
 void p2m_final_teardown(struct domain *d);
 
-/* Add a page to a domain's p2m table */
-int guest_physmap_add_entry(struct domain *d, gfn_t gfn,
-                            mfn_t mfn, unsigned int page_order,
-                            p2m_type_t t);
+/* Add/remove a page to/from a domain's p2m table. */
+int p2m_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                 unsigned int page_order, p2m_type_t t);
+int p2m_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                    unsigned int page_order);
 
 /* Untyped version for RAM only, for compatibility and PV. */
 int __must_check guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,




* [PATCH v2 03/14] x86/mm: move guest_physmap_{add,remove}_page()
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
  2022-02-23 15:57 ` [PATCH v2 01/14] x86/P2M: rename p2m_remove_page() Jan Beulich
  2022-02-23 15:58 ` [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
@ 2022-02-23 15:58 ` Jan Beulich
  2022-02-23 15:59 ` [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 15:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

... to a new file, separating them from their HVM-specific backing
functions. The moved wrappers themselves handle only the non-translated
case, forwarding translated domains to those backing functions.

To avoid having a new CONFIG_HVM conditional in there, do away with
the inline placeholder.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Re-base.

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -10,3 +10,4 @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing
 obj-y += p2m.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
+obj-y += physmap.o
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -870,60 +870,6 @@ p2m_remove_page(struct domain *d, gfn_t
 }
 
 int
-guest_physmap_remove_page(struct domain *d, gfn_t gfn,
-                          mfn_t mfn, unsigned int page_order)
-{
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
-    if ( !paging_mode_translate(d) )
-        return 0;
-
-    return p2m_remove_page(d, gfn, mfn, page_order);
-}
-
-#endif /* CONFIG_HVM */
-
-int
-guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-                       unsigned int page_order)
-{
-    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
-    if ( !paging_mode_translate(d) )
-    {
-        struct page_info *page = mfn_to_page(mfn);
-        unsigned long i;
-
-        /*
-         * Our interface for PV guests wrt IOMMU entries hasn't been very
-         * clear; but historically, pages have started out with IOMMU mappings,
-         * and only lose them when changed to a different page type.
-         *
-         * Retain this property by grabbing a writable type ref and then
-         * dropping it immediately.  The result will be pages that have a
-         * writable type (and an IOMMU entry), but a count of 0 (such that
-         * any guest-requested type changes succeed and remove the IOMMU
-         * entry).
-         */
-        for ( i = 0; i < (1UL << page_order); ++i, ++page )
-        {
-            if ( !need_iommu_pt_sync(d) )
-                /* nothing */;
-            else if ( get_page_and_type(page, d, PGT_writable_page) )
-                put_page_and_type(page);
-            else
-                return -EINVAL;
-
-            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
-        }
-
-        return 0;
-    }
-
-    return p2m_add_page(d, gfn, mfn, page_order, p2m_ram_rw);
-}
-
-#ifdef CONFIG_HVM
-
-int
 p2m_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
              unsigned int page_order, p2m_type_t t)
 {
--- /dev/null
+++ b/xen/arch/x86/mm/physmap.c
@@ -0,0 +1,85 @@
+/******************************************************************************
+ * arch/x86/mm/physmap.c
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/p2m.h>
+
+#include "mm-locks.h"
+
+int
+guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
+                       unsigned int page_order)
+{
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+    {
+        struct page_info *page = mfn_to_page(mfn);
+        unsigned long i;
+
+        /*
+         * Our interface for PV guests wrt IOMMU entries hasn't been very
+         * clear; but historically, pages have started out with IOMMU mappings,
+         * and only lose them when changed to a different page type.
+         *
+         * Retain this property by grabbing a writable type ref and then
+         * dropping it immediately.  The result will be pages that have a
+         * writable type (and an IOMMU entry), but a count of 0 (such that
+         * any guest-requested type changes succeed and remove the IOMMU
+         * entry).
+         */
+        for ( i = 0; i < (1UL << page_order); ++i, ++page )
+        {
+            if ( !need_iommu_pt_sync(d) )
+                /* nothing */;
+            else if ( get_page_and_type(page, d, PGT_writable_page) )
+                put_page_and_type(page);
+            else
+                return -EINVAL;
+
+            set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_x(gfn) + i);
+        }
+
+        return 0;
+    }
+
+    return p2m_add_page(d, gfn, mfn, page_order, p2m_ram_rw);
+}
+
+int
+guest_physmap_remove_page(struct domain *d, gfn_t gfn,
+                          mfn_t mfn, unsigned int page_order)
+{
+    /* IOMMU for PV guests is handled in get_page_type() and put_page(). */
+    if ( !paging_mode_translate(d) )
+        return 0;
+
+    return p2m_remove_page(d, gfn, mfn, page_order);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -8,18 +8,9 @@ int set_foreign_p2m_entry(struct domain
                           unsigned long gfn, mfn_t mfn);
 
 /* Remove a page from a domain's p2m table */
-#ifdef CONFIG_HVM
 int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
                           unsigned int page_order);
-#else
-static inline int
-guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-                          unsigned int page_order)
-{
-    return 0;
-}
-#endif
 
 /* Map MMIO regions in the p2m: start_gfn and nr describe the range in
  *  * the guest physical address space to map, starting from the machine




* [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (2 preceding siblings ...)
  2022-02-23 15:58 ` [PATCH v2 03/14] x86/mm: move guest_physmap_{add,remove}_page() Jan Beulich
@ 2022-02-23 15:59 ` Jan Beulich
  2022-04-01 12:13   ` George Dunlap
                     ` (2 more replies)
  2022-02-23 16:00 ` [PATCH v2 05/14] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
                   ` (10 subsequent siblings)
  14 siblings, 3 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 15:59 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Paul Durrant

..., moving the PV parts into the new physmap.c. Also call the new
functions directly from arch_iommu_hwdom_init() and
vpci_make_msix_hole(), as the PV/HVM split is explicit there.
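
As an illustration of the call-site shape this enables where the PV/HVM
distinction is already explicit (a sketch mirroring the
arch_iommu_hwdom_init() hunk further down):

    if ( paging_mode_translate(d) )
        /* HVM: use the p2m helper directly, no wrapper needed. */
        rc = p2m_add_identity_entry(d, pfn, p2m_access_rw, 0);
    else
        /* PV: the IOMMU mapping is all the wrapper would have done. */
        rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
                       IOMMUF_readable | IOMMUF_writable, &flush_flags);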

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1473,12 +1473,9 @@ static int clear_mmio_p2m_entry(struct d
     return rc;
 }
 
-#endif /* CONFIG_HVM */
-
-int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
+int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,
                            p2m_access_t p2ma, unsigned int flag)
 {
-#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
@@ -1488,13 +1485,8 @@ int set_identity_p2m_entry(struct domain
 
     if ( !paging_mode_translate(d) )
     {
-#endif
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                p2m_access_to_iommu_flags(p2ma));
-#ifdef CONFIG_HVM
+        ASSERT_UNREACHABLE();
+        return -EPERM;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1520,12 +1512,10 @@ int set_identity_p2m_entry(struct domain
 
     gfn_unlock(p2m, gfn, 0);
     return ret;
-#endif
 }
 
-int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
+int p2m_remove_identity_entry(struct domain *d, unsigned long gfn_l)
 {
-#ifdef CONFIG_HVM
     p2m_type_t p2mt;
     p2m_access_t a;
     gfn_t gfn = _gfn(gfn_l);
@@ -1535,11 +1525,8 @@ int clear_identity_p2m_entry(struct doma
 
     if ( !paging_mode_translate(d) )
     {
-#endif
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
-#ifdef CONFIG_HVM
+        ASSERT_UNREACHABLE();
+        return -EPERM;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1561,7 +1548,6 @@ int clear_identity_p2m_entry(struct doma
     }
 
     return ret;
-#endif
 }
 
 #ifdef CONFIG_MEM_SHARING
@@ -1606,8 +1592,6 @@ int set_shared_p2m_entry(struct domain *
 
 #endif /* CONFIG_MEM_SHARING */
 
-#ifdef CONFIG_HVM
-
 static struct p2m_domain *
 p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
 {
--- a/xen/arch/x86/mm/physmap.c
+++ b/xen/arch/x86/mm/physmap.c
@@ -21,6 +21,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/iommu.h>
 #include <asm/p2m.h>
 
 #include "mm-locks.h"
@@ -75,6 +76,33 @@ guest_physmap_remove_page(struct domain
     return p2m_remove_page(d, gfn, mfn, page_order);
 }
 
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma, unsigned int flag)
+{
+    if ( !paging_mode_translate(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_map(d, _dfn(gfn), _mfn(gfn),
+                                1ul << PAGE_ORDER_4K,
+                                p2m_access_to_iommu_flags(p2ma));
+    }
+
+    return p2m_add_identity_entry(d, gfn, p2ma, flag);
+}
+
+int clear_identity_p2m_entry(struct domain *d, unsigned long gfn)
+{
+    if ( !paging_mode_translate(d) )
+    {
+        if ( !is_iommu_enabled(d) )
+            return 0;
+        return iommu_legacy_unmap(d, _dfn(gfn), 1ul << PAGE_ORDER_4K);
+    }
+
+    return p2m_remove_identity_entry(d, gfn);
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -373,7 +373,7 @@ void __hwdom_init arch_iommu_hwdom_init(
         if ( !hwdom_iommu_map(d, pfn, max_pfn) )
             rc = 0;
         else if ( paging_mode_translate(d) )
-            rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
+            rc = p2m_add_identity_entry(d, pfn, p2m_access_rw, 0);
         else
             rc = iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K,
                            IOMMUF_readable | IOMMUF_writable, &flush_flags);
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -409,7 +409,7 @@ int vpci_make_msix_hole(const struct pci
             case p2m_mmio_direct:
                 if ( mfn_x(mfn) == start )
                 {
-                    clear_identity_p2m_entry(d, start);
+                    p2m_remove_identity_entry(d, start);
                     break;
                 }
                 /* fallthrough. */
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -637,6 +637,10 @@ int set_mmio_p2m_entry(struct domain *d,
 int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
                            p2m_access_t p2ma, unsigned int flag);
 int clear_identity_p2m_entry(struct domain *d, unsigned long gfn);
+/* HVM-only callers can use these directly: */
+int p2m_add_identity_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma, unsigned int flag);
+int p2m_remove_identity_entry(struct domain *d, unsigned long gfn);
 
 /* 
  * Populate-on-demand




* [PATCH v2 05/14] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (3 preceding siblings ...)
  2022-02-23 15:59 ` [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
@ 2022-02-23 16:00 ` Jan Beulich
  2022-02-23 16:01 ` [PATCH v2 06/14] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:00 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This also includes the two p2m-related struct p2m_domain fields
(phys_table and pages).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Re-base.

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -93,7 +93,9 @@ static int p2m_initialise(struct domain
     int ret = 0;
 
     mm_rwlock_init(&p2m->lock);
+#ifdef CONFIG_HVM
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+#endif
 
     p2m->domain = d;
     p2m->default_access = p2m_access_rwx;
@@ -627,6 +629,7 @@ struct page_info *p2m_get_page_from_gfn(
 }
 
 #ifdef CONFIG_HVM
+
 /* Returns: 0 for success, -errno for failure */
 int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
@@ -666,7 +669,6 @@ int p2m_set_entry(struct p2m_domain *p2m
 
     return rc;
 }
-#endif
 
 mfn_t p2m_alloc_ptp(struct p2m_domain *p2m, unsigned int level)
 {
@@ -745,6 +747,8 @@ int p2m_alloc_table(struct p2m_domain *p
     return 0;
 }
 
+#endif /* CONFIG_HVM */
+
 /*
  * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
@@ -753,7 +757,9 @@ void p2m_teardown(struct p2m_domain *p2m
 /* Return all the p2m pages to Xen.
  * We know we don't have any extra mappings to these pages */
 {
+#ifdef CONFIG_HVM
     struct page_info *pg;
+#endif
     struct domain *d;
 
     if (p2m == NULL)
@@ -762,11 +768,16 @@ void p2m_teardown(struct p2m_domain *p2m
     d = p2m->domain;
 
     p2m_lock(p2m);
+
     ASSERT(atomic_read(&d->shr_pages) == 0);
+
+#ifdef CONFIG_HVM
     p2m->phys_table = pagetable_null();
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
         d->arch.paging.free_page(d, pg);
+#endif
+
     p2m_unlock(p2m);
 }
 
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2700,8 +2700,10 @@ int shadow_enable(struct domain *d, u32
  out_locked:
     paging_unlock(d);
  out_unlocked:
+#ifdef CONFIG_HVM
     if ( rv != 0 && !pagetable_is_null(p2m_get_pagetable(p2m)) )
         p2m_teardown(p2m);
+#endif
     if ( rv != 0 && pg != NULL )
     {
         pg->count_info &= ~PGC_count_mask;
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -370,6 +370,7 @@ static uint64_t domain_pgd_maddr(struct
 
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
 
+#ifdef CONFIG_HVM
     if ( iommu_use_hap_pt(d) )
     {
         pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
@@ -377,6 +378,7 @@ static uint64_t domain_pgd_maddr(struct
         pgd_maddr = pagetable_get_paddr(pgt);
     }
     else
+#endif
     {
         if ( !hd->arch.vtd.pgd_maddr )
         {
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -208,9 +208,6 @@ struct p2m_domain {
     /* Lock that protects updates to the p2m */
     mm_rwlock_t           lock;
 
-    /* Shadow translated domain: p2m mapping */
-    pagetable_t        phys_table;
-
     /*
      * Same as a domain's dirty_cpumask but limited to
      * this p2m and those physical cpus whose vcpu's are in
@@ -229,9 +226,6 @@ struct p2m_domain {
      */
     p2m_access_t default_access;
 
-    /* Pages used to construct the p2m */
-    struct page_list_head pages;
-
     /* Host p2m: Log-dirty ranges registered for the domain. */
     struct rangeset   *logdirty_ranges;
 
@@ -239,6 +233,12 @@ struct p2m_domain {
     bool               global_logdirty;
 
 #ifdef CONFIG_HVM
+    /* Translated domain: p2m mapping */
+    pagetable_t        phys_table;
+
+    /* Pages used to construct the p2m */
+    struct page_list_head pages;
+
     /* Alternate p2m: count of vcpu's currently using this p2m. */
     atomic_t           active_vcpus;
 




* [PATCH v2 06/14] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (4 preceding siblings ...)
  2022-02-23 16:00 ` [PATCH v2 05/14] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
@ 2022-02-23 16:01 ` Jan Beulich
  2022-04-01 12:36   ` George Dunlap
  2022-02-23 16:01 ` [PATCH v2 07/14] x86/P2M: split out init/teardown functions Jan Beulich
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:01 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

There's no need to initialize respective data for PV domains. Note that
p2m_teardown_{alt,nested}p2m() will handle the lack-of-initialization
case fine.

As a result, while PV domains do have a host P2M associated with them,
and hence using XENMEM_get_pod_target on them may not be a real problem,
calling p2m_pod_set_mem_target() for a PV domain is surely wrong, even
if benign at present. Add a guard there as well.

In p2m_pod_demand_populate() the situation is a little different: the
function is reachable only for HVM domains anyway, but since the other
PoD functions only ever act on the host P2M (and hence PoD entries only
ever exist in host P2Ms), assert and bail out there for non-host P2Ms.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Also guard XENMEM_{get,set}_pod_target handling. Re-base over wider
    p2m_add_identity_entry() exposure in earlier patch. Add missing
    inclusion of "p2m.h". Mention the p2m_pod_demand_populate()
    adjustment separately in the description.
---
Perhaps p2m_pod_init() could be invoked from p2m_init_hostp2m(), leaving
all other P2Ms' PoD state uninitialized. Of course at that point the
question would be whether the PoD pieces of struct p2m_domain wouldn't
better move into a separate structure, present only for host P2Ms.
Together with the p2m_pod_demand_populate() adjustment this might then
better be a separate change ...
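
For illustration, a sketch of that alternative (hypothetical, not part of
this patch; the p2m_init_hostp2m() shape from the later split-out in
patch 07 is used as the base):

    static int p2m_init_hostp2m(struct domain *d)
    {
        struct p2m_domain *p2m = p2m_init_one(d);
        int rc;

        if ( !p2m )
            return -ENOMEM;

        /* Hypothetical: PoD state set up for the host P2M only. */
        if ( is_hvm_domain(d) )
            p2m_pod_init(p2m);

        rc = p2m_init_logdirty(p2m);

        if ( !rc )
            d->arch.p2m = p2m;
        else
            p2m_free_one(p2m);

        return rc;
    }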

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -43,6 +43,7 @@
 #include <xsm/xsm.h>
 
 #include "mm-locks.h"
+#include "p2m.h"
 
 /* Override macro from asm/page.h to make work with mfn_t */
 #undef virt_to_mfn
@@ -101,6 +102,9 @@ static int p2m_initialise(struct domain
     p2m->default_access = p2m_access_rwx;
     p2m->p2m_class = p2m_host;
 
+    if ( !is_hvm_domain(d) )
+        return 0;
+
     p2m_pod_init(p2m);
     p2m_nestedp2m_init(p2m);
 
@@ -258,7 +262,7 @@ int p2m_init(struct domain *d)
     int rc;
 
     rc = p2m_init_hostp2m(d);
-    if ( rc )
+    if ( rc || !is_hvm_domain(d) )
         return rc;
 
 #ifdef CONFIG_HVM
--- /dev/null
+++ b/xen/arch/x86/mm/p2m.h
@@ -0,0 +1,27 @@
+/******************************************************************************
+ * arch/x86/mm/p2m.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+void p2m_pod_init(struct p2m_domain *p2m);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -30,6 +30,7 @@
 #include <asm/p2m.h>
 
 #include "mm-locks.h"
+#include "p2m.h"
 
 #define superpage_aligned(_x)  (((_x)&(SUPERPAGE_PAGES-1))==0)
 
@@ -1162,6 +1163,12 @@ p2m_pod_demand_populate(struct p2m_domai
     mfn_t mfn;
     unsigned long i;
 
+    if ( !p2m_is_hostp2m(p2m) )
+    {
+        ASSERT_UNREACHABLE();
+        return false;
+    }
+
     ASSERT(gfn_locked_by_me(p2m, gfn));
     pod_lock(p2m);
 
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4810,7 +4810,9 @@ long arch_memory_op(unsigned long cmd, X
         if ( d == NULL )
             return -ESRCH;
 
-        if ( cmd == XENMEM_set_pod_target )
+        if ( !is_hvm_domain(d) )
+            rc = -EINVAL;
+        else if ( cmd == XENMEM_set_pod_target )
         {
             rc = xsm_set_pod_target(XSM_PRIV, d);
             if ( rc )
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -679,8 +679,6 @@ static inline long p2m_pod_entry_count(c
     return p2m->pod.entry_count;
 }
 
-void p2m_pod_init(struct p2m_domain *p2m);
-
 #else
 
 static inline bool
@@ -709,8 +707,6 @@ static inline long p2m_pod_entry_count(c
     return 0;
 }
 
-static inline void p2m_pod_init(struct p2m_domain *p2m) {}
-
 #endif
 
 




* [PATCH v2 07/14] x86/P2M: split out init/teardown functions
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (5 preceding siblings ...)
  2022-02-23 16:01 ` [PATCH v2 06/14] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
@ 2022-02-23 16:01 ` Jan Beulich
  2022-02-23 16:02 ` [PATCH v2 08/14] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:01 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Kevin Tian, Jun Nakajima

Mostly just code movement, and certainly no functional change intended.
In p2m_final_teardown() the calls to p2m_teardown_{alt,nested}p2m() need
to be guarded by an is_hvm_domain() check now, though. This matches
p2m_init(). And p2m_is_logdirty_range() also gets moved inside the (so
far) adjacent #ifdef.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Re-base over wider p2m_add_identity_entry() exposure in earlier
    patch.

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -7,7 +7,9 @@ obj-$(CONFIG_SHADOW_PAGING) += guest_wal
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-$(CONFIG_MEM_PAGING) += mem_paging.o
 obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
+obj-$(CONFIG_HVM) += nested.o
 obj-y += p2m.o
+obj-y += p2m-basic.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
 obj-y += physmap.o
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -19,6 +19,8 @@
 #include <asm/hvm/hvm.h>
 #include <asm/p2m.h>
 #include <asm/altp2m.h>
+#include "mm-locks.h"
+#include "p2m.h"
 
 void
 altp2m_vcpu_initialise(struct vcpu *v)
@@ -123,6 +125,44 @@ void altp2m_vcpu_disable_ve(struct vcpu
     }
 }
 
+int p2m_init_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
+
+    mm_lock_init(&d->arch.altp2m_list_lock);
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_altp2m(d);
+            return -ENOMEM;
+        }
+        p2m->p2m_class = p2m_alternate;
+        p2m->access_required = hostp2m->access_required;
+        _atomic_set(&p2m->active_vcpus, 0);
+    }
+
+    return 0;
+}
+
+void p2m_teardown_altp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( !d->arch.altp2m_p2m[i] )
+            continue;
+        p2m = d->arch.altp2m_p2m[i];
+        d->arch.altp2m_p2m[i] = NULL;
+        p2m_free_one(p2m);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -25,8 +25,6 @@
 #ifndef _MM_LOCKS_H
 #define _MM_LOCKS_H
 
-#include <asm/mem_sharing.h>
-
 /* Per-CPU variable for enforcing the lock ordering */
 DECLARE_PER_CPU(int, mm_lock_level);
 
--- /dev/null
+++ b/xen/arch/x86/mm/nested.c
@@ -0,0 +1,74 @@
+/******************************************************************************
+ * arch/x86/mm/nested.c
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/sched.h>
+#include <asm/p2m.h>
+#include "mm-locks.h"
+#include "p2m.h"
+
+void p2m_nestedp2m_init(struct p2m_domain *p2m)
+{
+    INIT_LIST_HEAD(&p2m->np2m_list);
+
+    p2m->np2m_base = P2M_BASE_EADDR;
+    p2m->np2m_generation = 0;
+}
+
+int p2m_init_nestedp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    mm_lock_init(&d->arch.nested_p2m_lock);
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        d->arch.nested_p2m[i] = p2m = p2m_init_one(d);
+        if ( p2m == NULL )
+        {
+            p2m_teardown_nestedp2m(d);
+            return -ENOMEM;
+        }
+        p2m->p2m_class = p2m_nested;
+        p2m->write_p2m_entry_pre = NULL;
+        p2m->write_p2m_entry_post = nestedp2m_write_p2m_entry_post;
+        list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
+    }
+
+    return 0;
+}
+
+void p2m_teardown_nestedp2m(struct domain *d)
+{
+    unsigned int i;
+    struct p2m_domain *p2m;
+
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        if ( !d->arch.nested_p2m[i] )
+            continue;
+        p2m = d->arch.nested_p2m[i];
+        list_del(&p2m->np2m_list);
+        p2m_free_one(p2m);
+        d->arch.nested_p2m[i] = NULL;
+    }
+}
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -35,7 +35,6 @@
 #include <asm/page.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <asm/hvm/vmx/vmx.h> /* ept_p2m_init() */
 #include <asm/mem_sharing.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/altp2m.h>
@@ -56,17 +55,9 @@ boolean_param("hap_2mb", opt_hap_2mb);
 
 DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
-static void p2m_nestedp2m_init(struct p2m_domain *p2m)
-{
 #ifdef CONFIG_HVM
-    INIT_LIST_HEAD(&p2m->np2m_list);
 
-    p2m->np2m_base = P2M_BASE_EADDR;
-    p2m->np2m_generation = 0;
-#endif
-}
-
-static int p2m_init_logdirty(struct p2m_domain *p2m)
+int p2m_init_logdirty(struct p2m_domain *p2m)
 {
     if ( p2m->logdirty_ranges )
         return 0;
@@ -79,7 +70,7 @@ static int p2m_init_logdirty(struct p2m_
     return 0;
 }
 
-static void p2m_free_logdirty(struct p2m_domain *p2m)
+void p2m_free_logdirty(struct p2m_domain *p2m)
 {
     if ( !p2m->logdirty_ranges )
         return;
@@ -88,205 +79,6 @@ static void p2m_free_logdirty(struct p2m
     p2m->logdirty_ranges = NULL;
 }
 
-/* Init the datastructures for later use by the p2m code */
-static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
-{
-    int ret = 0;
-
-    mm_rwlock_init(&p2m->lock);
-#ifdef CONFIG_HVM
-    INIT_PAGE_LIST_HEAD(&p2m->pages);
-#endif
-
-    p2m->domain = d;
-    p2m->default_access = p2m_access_rwx;
-    p2m->p2m_class = p2m_host;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
-
-    p2m_pod_init(p2m);
-    p2m_nestedp2m_init(p2m);
-
-    if ( hap_enabled(d) && cpu_has_vmx )
-        ret = ept_p2m_init(p2m);
-    else
-        p2m_pt_init(p2m);
-
-    spin_lock_init(&p2m->ioreq.lock);
-
-    return ret;
-}
-
-static struct p2m_domain *p2m_init_one(struct domain *d)
-{
-    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
-
-    if ( !p2m )
-        return NULL;
-
-    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
-        goto free_p2m;
-
-    if ( p2m_initialise(d, p2m) )
-        goto free_cpumask;
-    return p2m;
-
-free_cpumask:
-    free_cpumask_var(p2m->dirty_cpumask);
-free_p2m:
-    xfree(p2m);
-    return NULL;
-}
-
-static void p2m_free_one(struct p2m_domain *p2m)
-{
-    p2m_free_logdirty(p2m);
-    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
-        ept_p2m_uninit(p2m);
-    free_cpumask_var(p2m->dirty_cpumask);
-    xfree(p2m);
-}
-
-static int p2m_init_hostp2m(struct domain *d)
-{
-    struct p2m_domain *p2m = p2m_init_one(d);
-    int rc;
-
-    if ( !p2m )
-        return -ENOMEM;
-
-    rc = p2m_init_logdirty(p2m);
-
-    if ( !rc )
-        d->arch.p2m = p2m;
-    else
-        p2m_free_one(p2m);
-
-    return rc;
-}
-
-static void p2m_teardown_hostp2m(struct domain *d)
-{
-    /* Iterate over all p2m tables per domain */
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-    if ( p2m )
-    {
-        p2m_free_one(p2m);
-        d->arch.p2m = NULL;
-    }
-}
-
-#ifdef CONFIG_HVM
-static void p2m_teardown_nestedp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-
-    for ( i = 0; i < MAX_NESTEDP2M; i++ )
-    {
-        if ( !d->arch.nested_p2m[i] )
-            continue;
-        p2m = d->arch.nested_p2m[i];
-        list_del(&p2m->np2m_list);
-        p2m_free_one(p2m);
-        d->arch.nested_p2m[i] = NULL;
-    }
-}
-
-static int p2m_init_nestedp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-
-    mm_lock_init(&d->arch.nested_p2m_lock);
-    for ( i = 0; i < MAX_NESTEDP2M; i++ )
-    {
-        d->arch.nested_p2m[i] = p2m = p2m_init_one(d);
-        if ( p2m == NULL )
-        {
-            p2m_teardown_nestedp2m(d);
-            return -ENOMEM;
-        }
-        p2m->p2m_class = p2m_nested;
-        p2m->write_p2m_entry_pre = NULL;
-        p2m->write_p2m_entry_post = nestedp2m_write_p2m_entry_post;
-        list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
-    }
-
-    return 0;
-}
-
-static void p2m_teardown_altp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        if ( !d->arch.altp2m_p2m[i] )
-            continue;
-        p2m = d->arch.altp2m_p2m[i];
-        d->arch.altp2m_p2m[i] = NULL;
-        p2m_free_one(p2m);
-    }
-}
-
-static int p2m_init_altp2m(struct domain *d)
-{
-    unsigned int i;
-    struct p2m_domain *p2m;
-    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
-
-    mm_lock_init(&d->arch.altp2m_list_lock);
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
-        if ( p2m == NULL )
-        {
-            p2m_teardown_altp2m(d);
-            return -ENOMEM;
-        }
-        p2m->p2m_class = p2m_alternate;
-        p2m->access_required = hostp2m->access_required;
-        _atomic_set(&p2m->active_vcpus, 0);
-    }
-
-    return 0;
-}
-#endif
-
-int p2m_init(struct domain *d)
-{
-    int rc;
-
-    rc = p2m_init_hostp2m(d);
-    if ( rc || !is_hvm_domain(d) )
-        return rc;
-
-#ifdef CONFIG_HVM
-    /* Must initialise nestedp2m unconditionally
-     * since nestedhvm_enabled(d) returns false here.
-     * (p2m_init runs too early for HVM_PARAM_* options) */
-    rc = p2m_init_nestedp2m(d);
-    if ( rc )
-    {
-        p2m_teardown_hostp2m(d);
-        return rc;
-    }
-
-    rc = p2m_init_altp2m(d);
-    if ( rc )
-    {
-        p2m_teardown_hostp2m(d);
-        p2m_teardown_nestedp2m(d);
-    }
-#endif
-
-    return rc;
-}
-
 int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
                           unsigned long end)
 {
@@ -298,8 +90,6 @@ int p2m_is_logdirty_range(struct p2m_dom
     return 0;
 }
 
-#ifdef CONFIG_HVM
-
 static void change_entry_type_global(struct p2m_domain *p2m,
                                      p2m_type_t ot, p2m_type_t nt)
 {
@@ -751,57 +541,6 @@ int p2m_alloc_table(struct p2m_domain *p
     return 0;
 }
 
-#endif /* CONFIG_HVM */
-
-/*
- * hvm fixme: when adding support for pvh non-hardware domains, this path must
- * cleanup any foreign p2m types (release refcnts on them).
- */
-void p2m_teardown(struct p2m_domain *p2m)
-/* Return all the p2m pages to Xen.
- * We know we don't have any extra mappings to these pages */
-{
-#ifdef CONFIG_HVM
-    struct page_info *pg;
-#endif
-    struct domain *d;
-
-    if (p2m == NULL)
-        return;
-
-    d = p2m->domain;
-
-    p2m_lock(p2m);
-
-    ASSERT(atomic_read(&d->shr_pages) == 0);
-
-#ifdef CONFIG_HVM
-    p2m->phys_table = pagetable_null();
-
-    while ( (pg = page_list_remove_head(&p2m->pages)) )
-        d->arch.paging.free_page(d, pg);
-#endif
-
-    p2m_unlock(p2m);
-}
-
-void p2m_final_teardown(struct domain *d)
-{
-#ifdef CONFIG_HVM
-    /*
-     * We must teardown both of them unconditionally because
-     * we initialise them unconditionally.
-     */
-    p2m_teardown_altp2m(d);
-    p2m_teardown_nestedp2m(d);
-#endif
-
-    /* Iterate over all p2m tables per domain */
-    p2m_teardown_hostp2m(d);
-}
-
-#ifdef CONFIG_HVM
-
 static int __must_check
 p2m_remove_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                  unsigned int page_order)
--- a/xen/arch/x86/mm/p2m.h
+++ b/xen/arch/x86/mm/p2m.h
@@ -15,8 +15,30 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+struct p2m_domain *p2m_init_one(struct domain *d);
+void p2m_free_one(struct p2m_domain *p2m);
+
 void p2m_pod_init(struct p2m_domain *p2m);
 
+#ifdef CONFIG_HVM
+int p2m_init_logdirty(struct p2m_domain *p2m);
+void p2m_free_logdirty(struct p2m_domain *p2m);
+#else
+static inline int p2m_init_logdirty(struct p2m_domain *p2m) { return 0; }
+static inline void p2m_free_logdirty(struct p2m_domain *p2m) {}
+#endif
+
+int p2m_init_altp2m(struct domain *d);
+void p2m_teardown_altp2m(struct domain *d);
+
+void p2m_nestedp2m_init(struct p2m_domain *p2m);
+int p2m_init_nestedp2m(struct domain *d);
+void p2m_teardown_nestedp2m(struct domain *d);
+
+int ept_p2m_init(struct p2m_domain *p2m);
+void ept_p2m_uninit(struct p2m_domain *p2m);
+void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
+
 /*
  * Local variables:
  * mode: C
--- /dev/null
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -0,0 +1,207 @@
+/******************************************************************************
+ * arch/x86/mm/p2m-basic.c
+ *
+ * Basic P2M management largely applicable to all domain types.
+ *
+ * Parts of this code are Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * Parts of this code are Copyright (c) 2007 by Advanced Micro Devices.
+ * Parts of this code are Copyright (c) 2006-2007 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/types.h>
+#include <asm/p2m.h>
+#include "mm-locks.h"
+#include "p2m.h"
+
+/* Init the datastructures for later use by the p2m code */
+static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
+{
+    int ret = 0;
+
+    mm_rwlock_init(&p2m->lock);
+#ifdef CONFIG_HVM
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+#endif
+
+    p2m->domain = d;
+    p2m->default_access = p2m_access_rwx;
+    p2m->p2m_class = p2m_host;
+
+    if ( !is_hvm_domain(d) )
+        return 0;
+
+    p2m_pod_init(p2m);
+    p2m_nestedp2m_init(p2m);
+
+    if ( hap_enabled(d) && cpu_has_vmx )
+        ret = ept_p2m_init(p2m);
+    else
+        p2m_pt_init(p2m);
+
+    spin_lock_init(&p2m->ioreq.lock);
+
+    return ret;
+}
+
+struct p2m_domain *p2m_init_one(struct domain *d)
+{
+    struct p2m_domain *p2m = xzalloc(struct p2m_domain);
+
+    if ( !p2m )
+        return NULL;
+
+    if ( !zalloc_cpumask_var(&p2m->dirty_cpumask) )
+        goto free_p2m;
+
+    if ( p2m_initialise(d, p2m) )
+        goto free_cpumask;
+    return p2m;
+
+ free_cpumask:
+    free_cpumask_var(p2m->dirty_cpumask);
+ free_p2m:
+    xfree(p2m);
+    return NULL;
+}
+
+void p2m_free_one(struct p2m_domain *p2m)
+{
+    p2m_free_logdirty(p2m);
+    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
+        ept_p2m_uninit(p2m);
+    free_cpumask_var(p2m->dirty_cpumask);
+    xfree(p2m);
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_init_one(d);
+    int rc;
+
+    if ( !p2m )
+        return -ENOMEM;
+
+    rc = p2m_init_logdirty(p2m);
+
+    if ( !rc )
+        d->arch.p2m = p2m;
+    else
+        p2m_free_one(p2m);
+
+    return rc;
+}
+
+static void p2m_teardown_hostp2m(struct domain *d)
+{
+    /* Iterate over all p2m tables per domain */
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    if ( p2m )
+    {
+        p2m_free_one(p2m);
+        d->arch.p2m = NULL;
+    }
+}
+
+int p2m_init(struct domain *d)
+{
+    int rc;
+
+    rc = p2m_init_hostp2m(d);
+    if ( rc || !is_hvm_domain(d) )
+        return rc;
+
+    /*
+     * Must initialise nestedp2m unconditionally
+     * since nestedhvm_enabled(d) returns false here.
+     * (p2m_init runs too early for HVM_PARAM_* options)
+     */
+    rc = p2m_init_nestedp2m(d);
+    if ( rc )
+    {
+        p2m_teardown_hostp2m(d);
+        return rc;
+    }
+
+    rc = p2m_init_altp2m(d);
+    if ( rc )
+    {
+        p2m_teardown_hostp2m(d);
+        p2m_teardown_nestedp2m(d);
+    }
+
+    return rc;
+}
+
+/*
+ * Return all the p2m pages to Xen.
+ * We know we don't have any extra mappings to these pages.
+ *
+ * hvm fixme: when adding support for pvh non-hardware domains, this path must
+ * cleanup any foreign p2m types (release refcnts on them).
+ */
+void p2m_teardown(struct p2m_domain *p2m)
+{
+#ifdef CONFIG_HVM
+    struct page_info *pg;
+#endif
+    struct domain *d;
+
+    if ( !p2m )
+        return;
+
+    d = p2m->domain;
+
+    p2m_lock(p2m);
+
+    ASSERT(atomic_read(&d->shr_pages) == 0);
+
+#ifdef CONFIG_HVM
+    p2m->phys_table = pagetable_null();
+
+    while ( (pg = page_list_remove_head(&p2m->pages)) )
+        d->arch.paging.free_page(d, pg);
+#endif
+
+    p2m_unlock(p2m);
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    if ( is_hvm_domain(d) )
+    {
+        /*
+         * We must tear down both of them unconditionally because
+         * we initialise them unconditionally.
+         */
+        p2m_teardown_altp2m(d);
+        p2m_teardown_nestedp2m(d);
+    }
+
+    /* Iterate over all p2m tables per domain */
+    p2m_teardown_hostp2m(d);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -35,6 +35,7 @@
 #include <xen/softirq.h>
 
 #include "mm-locks.h"
+#include "p2m.h"
 
 #define atomic_read_ept_entry(__pepte)                              \
     ( (ept_entry_t) { .epte = read_atomic(&(__pepte)->epte) } )
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -594,15 +594,11 @@ unsigned int vmx_get_cpl(void);
 void vmx_inject_extint(int trap, uint8_t source);
 void vmx_inject_nmi(void);
 
-int ept_p2m_init(struct p2m_domain *p2m);
-void ept_p2m_uninit(struct p2m_domain *p2m);
-
 void ept_walk_table(struct domain *d, unsigned long gfn);
 bool_t ept_handle_misconfig(uint64_t gpa);
 int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order, bool *ipat, p2m_type_t type);
 void setup_ept_dump(void);
-void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
 /* Locate an alternate p2m by its EPTP */
 unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp);
 




* [PATCH v2 08/14] x86/P2M: p2m_get_page_from_gfn() is HVM-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (6 preceding siblings ...)
  2022-02-23 16:01 ` [PATCH v2 07/14] x86/P2M: split out init/teardown functions Jan Beulich
@ 2022-02-23 16:02 ` Jan Beulich
  2022-02-23 16:03 ` [PATCH v2 09/14] x86/P2M: derive HVM-only variant from __get_gfn_type_access() Jan Beulich
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This function is the wrong layer to go through for PV guests. It happens
to work, but produces results which aren't fully consistent with
get_page_from_gfn(). The latter function, however, cannot be used in
map_domain_gfn(), as the P2M acted upon there may not be the host P2M.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>

--- a/xen/arch/x86/mm/guest_walk.c
+++ b/xen/arch/x86/mm/guest_walk.c
@@ -551,7 +551,9 @@ void *map_domain_gfn(struct p2m_domain *
     }
 
     /* Translate the gfn, unsharing if shared. */
-    page = p2m_get_page_from_gfn(p2m, gfn, &p2mt, NULL, q);
+    page = paging_mode_translate(p2m->domain)
+           ? p2m_get_page_from_gfn(p2m, gfn, &p2mt, NULL, q)
+           : get_page_from_gfn(p2m->domain, gfn_x(gfn), &p2mt, q);
     if ( p2m_is_paging(p2mt) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -357,6 +357,8 @@ void __put_gfn(struct p2m_domain *p2m, u
     gfn_unlock(p2m, gfn, 0);
 }
 
+#ifdef CONFIG_HVM
+
 /* Atomically look up a GFN and take a reference count on the backing page. */
 struct page_info *p2m_get_page_from_gfn(
     struct p2m_domain *p2m, gfn_t gfn,
@@ -422,8 +424,6 @@ struct page_info *p2m_get_page_from_gfn(
     return page;
 }
 
-#ifdef CONFIG_HVM
-
 /* Returns: 0 for success, -errno for failure */
 int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)




* [PATCH v2 09/14] x86/P2M: derive HVM-only variant from __get_gfn_type_access()
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (7 preceding siblings ...)
  2022-02-23 16:02 ` [PATCH v2 08/14] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
@ 2022-02-23 16:03 ` Jan Beulich
  2022-02-23 16:03 ` [PATCH v2 10/14] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:03 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Tamas K Lengyel

Introduce an inline wrapper dealing with the non-translated-domain case,
while stripping that logic from the main function, which gets renamed to
p2m_get_gfn_type_access(). HVM-only callers can then directly use the
main function.

Along with renaming the main function, also make its and the new inline
helper's GFN parameters type-safe.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1716,7 +1716,7 @@ static void svm_do_nested_pgfault(struct
         } _d;
 
         p2m = p2m_get_p2m(v);
-        mfn = __get_gfn_type_access(p2m, gfn, &p2mt, &p2ma, 0, NULL, 0);
+        mfn = p2m_get_gfn_type_access(p2m, _gfn(gfn), &p2mt, &p2ma, 0, NULL, 0);
 
         _d.gpa = gpa;
         _d.qualification = 0;
@@ -1741,7 +1741,7 @@ static void svm_do_nested_pgfault(struct
     if ( p2m == NULL )
     {
         p2m = p2m_get_p2m(v);
-        mfn = __get_gfn_type_access(p2m, gfn, &p2mt, &p2ma, 0, NULL, 0);
+        mfn = p2m_get_gfn_type_access(p2m, _gfn(gfn), &p2mt, &p2ma, 0, NULL, 0);
     }
     gdprintk(XENLOG_ERR,
          "SVM violation gpa %#"PRIpaddr", mfn %#lx, type %i\n",
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -459,10 +459,27 @@ void p2m_unlock_and_tlb_flush(struct p2m
  * After calling any of the variants below, caller needs to use
  * put_gfn. ****/
 
-mfn_t __nonnull(3, 4) __get_gfn_type_access(
-    struct p2m_domain *p2m, unsigned long gfn, p2m_type_t *t,
+mfn_t __nonnull(3, 4) p2m_get_gfn_type_access(
+    struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
     p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool_t locked);
 
+static inline mfn_t __nonnull(3, 4) _get_gfn_type_access(
+    struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t,
+    p2m_access_t *a, p2m_query_t q, unsigned int *page_order, bool_t locked)
+{
+    if ( !p2m || !paging_mode_translate(p2m->domain) )
+    {
+        /*
+         * Not necessarily true, but for non-translated guests we claim
+         * it's the most generic kind of memory.
+         */
+        *t = p2m_ram_rw;
+        return _mfn(gfn_x(gfn));
+    }
+
+    return p2m_get_gfn_type_access(p2m, gfn, t, a, q, page_order, locked);
+}
+
 /* Read a particular P2M table, mapping pages as we go.  Most callers
  * should _not_ call this directly; use the other get_gfn* functions
  * below unless you know you want to walk a p2m that isn't a domain's
@@ -474,7 +491,7 @@ static inline mfn_t __nonnull(3, 4) get_
     struct p2m_domain *p2m, unsigned long gfn, p2m_type_t *t,
     p2m_access_t *a, p2m_query_t q, unsigned int *page_order)
 {
-    return __get_gfn_type_access(p2m, gfn, t, a, q, page_order, true);
+    return _get_gfn_type_access(p2m, _gfn(gfn), t, a, q, page_order, true);
 }
 
 /* General conversion function from gfn to mfn */
@@ -515,7 +532,8 @@ static inline mfn_t get_gfn_query_unlock
                                            p2m_type_t *t)
 {
     p2m_access_t a;
-    return __get_gfn_type_access(p2m_get_hostp2m(d), gfn, t, &a, 0, NULL, 0);
+    return _get_gfn_type_access(p2m_get_hostp2m(d), _gfn(gfn), t, &a, 0,
+                                NULL, 0);
 }
 
 /* Atomically look up a GFN and take a reference count on the backing page.
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -299,8 +299,9 @@ static int set_mem_access(struct domain
     {
         p2m_access_t _a;
         p2m_type_t t;
-        mfn_t mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &t, &_a,
-                                          P2M_ALLOC, NULL, false);
+        mfn_t mfn = p2m_get_gfn_type_access(p2m, gfn, &t, &_a,
+                                            P2M_ALLOC, NULL, false);
+
         rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1);
     }
 
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -478,12 +478,12 @@ do {
 #undef assign_pointers
 
     /* Now do the gets. */
-    *first_mfn  = __get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
-                                        gfn_x(rval->first_gfn), first_t,
-                                        first_a, q, NULL, lock);
-    *second_mfn = __get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
-                                        gfn_x(rval->second_gfn), second_t,
-                                        second_a, q, NULL, lock);
+    *first_mfn  = p2m_get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
+                                          rval->first_gfn, first_t,
+                                          first_a, q, NULL, lock);
+    *second_mfn = p2m_get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
+                                          rval->second_gfn, second_t,
+                                          second_a, q, NULL, lock);
 }
 
 static void put_two_gfns(const struct two_gfns *arg)
@@ -936,8 +936,8 @@ static int nominate_page(struct domain *
             if ( !ap2m )
                 continue;
 
-            amfn = __get_gfn_type_access(ap2m, gfn_x(gfn), &ap2mt, &ap2ma,
-                                         0, NULL, false);
+            amfn = p2m_get_gfn_type_access(ap2m, gfn, &ap2mt, &ap2ma,
+                                           0, NULL, false);
             if ( mfn_valid(amfn) && (!mfn_eq(amfn, mfn) || ap2ma != p2ma) )
             {
                 altp2m_list_unlock(d);
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -286,25 +286,13 @@ void p2m_unlock_and_tlb_flush(struct p2m
         mm_write_unlock(&p2m->lock);
 }
 
-mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn_l,
-                    p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
-                    unsigned int *page_order, bool_t locked)
-{
 #ifdef CONFIG_HVM
-    mfn_t mfn;
-    gfn_t gfn = _gfn(gfn_l);
 
-    if ( !p2m || !paging_mode_translate(p2m->domain) )
-    {
-#endif
-        /*
-         * Not necessarily true, but for non-translated guests we claim
-         * it's the most generic kind of memory.
-         */
-        *t = p2m_ram_rw;
-        return _mfn(gfn_l);
-#ifdef CONFIG_HVM
-    }
+mfn_t p2m_get_gfn_type_access(struct p2m_domain *p2m, gfn_t gfn,
+                              p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
+                              unsigned int *page_order, bool_t locked)
+{
+    mfn_t mfn;
 
     /* Unshare makes no sense without populate. */
     if ( q & P2M_UNSHARE )
@@ -329,8 +317,8 @@ mfn_t __get_gfn_type_access(struct p2m_d
          * Try to unshare. If we fail, communicate ENOMEM without
          * sleeping.
          */
-        if ( mem_sharing_unshare_page(p2m->domain, gfn_l) < 0 )
-            mem_sharing_notify_enomem(p2m->domain, gfn_l, false);
+        if ( mem_sharing_unshare_page(p2m->domain, gfn_x(gfn)) < 0 )
+            mem_sharing_notify_enomem(p2m->domain, gfn_x(gfn), false);
         mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
     }
 
@@ -343,9 +331,10 @@ mfn_t __get_gfn_type_access(struct p2m_d
     }
 
     return mfn;
-#endif
 }
 
+#endif /* CONFIG_HVM */
+
 void __put_gfn(struct p2m_domain *p2m, unsigned long gfn)
 {
     if ( !p2m || !paging_mode_translate(p2m->domain) )
@@ -377,7 +366,7 @@ struct page_info *p2m_get_page_from_gfn(
     {
         /* Fast path: look up and get out */
         p2m_read_lock(p2m);
-        mfn = __get_gfn_type_access(p2m, gfn_x(gfn), t, a, 0, NULL, 0);
+        mfn = p2m_get_gfn_type_access(p2m, gfn, t, a, 0, NULL, 0);
         if ( p2m_is_any_ram(*t) && mfn_valid(mfn)
              && !((q & P2M_UNSHARE) && p2m_is_shared(*t)) )
         {
@@ -1775,8 +1764,8 @@ int altp2m_get_effective_entry(struct p2
         unsigned int page_order;
         int rc;
 
-        *mfn = __get_gfn_type_access(hp2m, gfn_x(gfn), t, a,
-                                     P2M_ALLOC | P2M_UNSHARE, &page_order, 0);
+        *mfn = p2m_get_gfn_type_access(hp2m, gfn, t, a, P2M_ALLOC | P2M_UNSHARE,
+                                       &page_order, 0);
 
         rc = -ESRCH;
         if ( !mfn_valid(*mfn) || *t != p2m_ram_rw )
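
As a usage sketch (hypothetical call sites, with p2m, gfn, and the output
variables assumed to be declared appropriately), the intended split is:

    /* Code which may also run for PV domains goes through the wrapper,
     * which claims p2m_ram_rw and gfn == mfn for non-translated guests. */
    mfn = _get_gfn_type_access(p2m, gfn, &t, &a, P2M_ALLOC, NULL, true);

    /* HVM-only code, acting on a known-translated P2M, calls directly and
     * thereby skips the paging_mode_translate() check. */
    mfn = p2m_get_gfn_type_access(p2m, gfn, &t, &a, P2M_ALLOC, NULL, true);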



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 10/14] x86/p2m: re-arrange {,__}put_gfn()
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (8 preceding siblings ...)
  2022-02-23 16:03 ` [PATCH v2 09/14] x86/P2M: derive HVM-only variant from __get_gfn_type_access() Jan Beulich
@ 2022-02-23 16:03 ` Jan Beulich
  2022-02-23 16:04 ` [PATCH v2 11/14] shr_pages field is MEM_SHARING-only Jan Beulich
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:03 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

All explicit callers of __put_gfn() are in HVM-only code and hold a valid
P2M pointer in their hands. Move the paging_mode_translate() check out of
there into put_gfn(), renaming __put_gfn() and making its GFN parameter
type-safe.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Re-base over XSA-388 follow-up.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1955,9 +1955,9 @@ int hvm_hap_nested_page_fault(paddr_t gp
              * altp2m_list lock.
              */
             if ( p2m != hostp2m )
-                __put_gfn(p2m, gfn);
+                p2m_put_gfn(p2m, _gfn(gfn));
             p2m_change_type_one(currd, gfn, p2m_ram_logdirty, p2m_ram_rw);
-            __put_gfn(hostp2m, gfn);
+            p2m_put_gfn(hostp2m, _gfn(gfn));
 
             goto out;
         }
@@ -1979,8 +1979,8 @@ int hvm_hap_nested_page_fault(paddr_t gp
 
  out_put_gfn:
     if ( p2m != hostp2m )
-        __put_gfn(p2m, gfn);
-    __put_gfn(hostp2m, gfn);
+        p2m_put_gfn(p2m, _gfn(gfn));
+    p2m_put_gfn(hostp2m, _gfn(gfn));
  out:
     /*
      * All of these are delayed until we exit, since we might
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -509,9 +509,16 @@ static inline mfn_t __nonnull(3) get_gfn
                                               P2M_ALLOC | P2M_UNSHARE)
 
 /* Will release the p2m_lock for this gfn entry. */
-void __put_gfn(struct p2m_domain *p2m, unsigned long gfn);
+void p2m_put_gfn(struct p2m_domain *p2m, gfn_t gfn);
 
-#define put_gfn(d, gfn) __put_gfn(p2m_get_hostp2m((d)), (gfn))
+static inline void put_gfn(struct domain *d, unsigned long gfn)
+{
+    if ( !paging_mode_translate(d) )
+        /* Nothing to do in this case */
+        return;
+
+    p2m_put_gfn(p2m_get_hostp2m(d), _gfn(gfn));
+}
 
 /* The intent of the "unlocked" accessor is to have the caller not worry about
  * put_gfn. They apply to very specific situations: debug printk's, dumps 
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -149,7 +149,7 @@ static int nestedhap_walk_L0_p2m(
 direct_mmio_out:
     *L0_gpa = (mfn_x(mfn) << PAGE_SHIFT) + (L1_gpa & ~PAGE_MASK);
 out:
-    __put_gfn(p2m, L1_gpa >> PAGE_SHIFT);
+    p2m_put_gfn(p2m, gaddr_to_gfn(L1_gpa));
     return rc;
 }
 
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -333,21 +333,13 @@ mfn_t p2m_get_gfn_type_access(struct p2m
     return mfn;
 }
 
-#endif /* CONFIG_HVM */
-
-void __put_gfn(struct p2m_domain *p2m, unsigned long gfn)
+void p2m_put_gfn(struct p2m_domain *p2m, gfn_t gfn)
 {
-    if ( !p2m || !paging_mode_translate(p2m->domain) )
-        /* Nothing to do in this case */
-        return;
-
-    ASSERT(gfn_locked_by_me(p2m, gfn));
+    ASSERT(gfn_locked_by_me(p2m, gfn_x(gfn)));
 
-    gfn_unlock(p2m, gfn, 0);
+    gfn_unlock(p2m, gfn_x(gfn), 0);
 }
 
-#ifdef CONFIG_HVM
-
 /* Atomically look up a GFN and take a reference count on the backing page. */
 struct page_info *p2m_get_page_from_gfn(
     struct p2m_domain *p2m, gfn_t gfn,
@@ -2201,7 +2193,7 @@ int p2m_altp2m_propagate_change(struct d
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                __put_gfn(p2m, gfn_x(gfn));
+                p2m_put_gfn(p2m, gfn);
 
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
@@ -2225,7 +2217,7 @@ int p2m_altp2m_propagate_change(struct d
                 ret = rc;
         }
 
-        __put_gfn(p2m, gfn_x(gfn));
+        p2m_put_gfn(p2m, gfn);
     }
 
     altp2m_list_unlock(d);
@@ -2310,7 +2302,7 @@ void audit_p2m(struct domain *d,
              * blow away the m2p entry. */
             set_gpfn_from_mfn(mfn, INVALID_M2P_ENTRY);
         }
-        __put_gfn(p2m, gfn);
+        p2m_put_gfn(p2m, _gfn(gfn));
 
         P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n",
                        mfn, gfn, mfn_x(p2mfn));
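
As a usage sketch (hypothetical caller; d, gfn, t, and p2m assumed declared),
the pairing for code which may see PV domains stays as before - only the
layering of the paging_mode_translate() check moves:

    mfn_t mfn = get_gfn(d, gfn, &t);   /* locks the GFN for translated domains */

    /* ... use mfn ... */

    put_gfn(d, gfn);                   /* no-op for non-translated domains */

    /* HVM-only code holding a specific P2M instead releases directly: */
    p2m_put_gfn(p2m, _gfn(gfn));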



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 11/14] shr_pages field is MEM_SHARING-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (9 preceding siblings ...)
  2022-02-23 16:03 ` [PATCH v2 10/14] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
@ 2022-02-23 16:04 ` Jan Beulich
  2022-02-23 16:07   ` Jan Beulich
  2022-02-23 16:05 ` [PATCH v2 12/14] paged_pages field is MEM_PAGING-only Jan Beulich
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:04 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monné,
	Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu

Conditionalize it and its uses accordingly. The main goal though is to
demonstrate that x86's p2m_teardown() is now empty when !HVM, which in
particular means the last remaining use of p2m_lock() in this cases goes
away.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
I was on the edge of introducing a helper for atomic_read(&d->shr_pages)
but decided against because of dump_domains() not being able to use it
sensibly (I really want to omit the output field altogether there when
!MEM_SHARING).
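
For reference, such a helper would presumably have looked something like
this (hypothetical name):

    static inline int domain_shr_pages(struct domain *d)
    {
    #ifdef CONFIG_MEM_SHARING
        return atomic_read(&d->shr_pages);
    #else
        return 0;
    #endif
    }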

--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -159,7 +159,6 @@ void p2m_teardown(struct p2m_domain *p2m
 {
 #ifdef CONFIG_HVM
     struct page_info *pg;
-#endif
     struct domain *d;
 
     if ( !p2m )
@@ -169,16 +168,17 @@ void p2m_teardown(struct p2m_domain *p2m
 
     p2m_lock(p2m);
 
+#ifdef CONFIG_MEM_SHARING
     ASSERT(atomic_read(&d->shr_pages) == 0);
+#endif
 
-#ifdef CONFIG_HVM
     p2m->phys_table = pagetable_null();
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
         d->arch.paging.free_page(d, pg);
-#endif
 
     p2m_unlock(p2m);
+#endif
 }
 
 void p2m_final_teardown(struct domain *d)
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -109,7 +109,9 @@ void getdomaininfo(struct domain *d, str
     info->tot_pages         = domain_tot_pages(d);
     info->max_pages         = d->max_pages;
     info->outstanding_pages = d->outstanding_pages;
+#ifdef CONFIG_MEM_SHARING
     info->shr_pages         = atomic_read(&d->shr_pages);
+#endif
     info->paged_pages       = atomic_read(&d->paged_pages);
     info->shared_info_frame =
         gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -274,9 +274,16 @@ static void dump_domains(unsigned char k
         printk("    refcnt=%d dying=%d pause_count=%d\n",
                atomic_read(&d->refcnt), d->is_dying,
                atomic_read(&d->pause_count));
-        printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
-               "dirty_cpus={%*pbl} max_pages=%u\n",
-               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
+        printk("    nr_pages=%u xenheap_pages=%u"
+#ifdef CONFIG_MEM_SHARING
+               " shared_pages=%u"
+#endif
+               " paged_pages=%u"
+               " dirty_cpus={%*pbl} max_pages=%u\n",
+               domain_tot_pages(d), d->xenheap_pages,
+#ifdef CONFIG_MEM_SHARING
+               atomic_read(&d->shr_pages),
+#endif
                atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
                d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -385,7 +385,11 @@ struct domain
     unsigned int     outstanding_pages; /* pages claimed but not possessed */
     unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
     unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
+
+#ifdef CONFIG_MEM_SHARING
     atomic_t         shr_pages;         /* shared pages */
+#endif
+
     atomic_t         paged_pages;       /* paged-out pages */
 
     /* Scheduling. */



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 12/14] paged_pages field is MEM_PAGING-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (10 preceding siblings ...)
  2022-02-23 16:04 ` [PATCH v2 11/14] shr_pages field is MEM_SHARING-only Jan Beulich
@ 2022-02-23 16:05 ` Jan Beulich
  2022-02-23 16:06 ` [PATCH v2 13/14] x86/P2M: p2m.c is HVM-only Jan Beulich
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:05 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monné,
	Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu

Conditionalize it and its uses accordingly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Re-base (drop clearing of field in getdomaininfo()).
---
I was on the edge of introducing a helper for
atomic_read(&d->paged_pages) but decided against because of
dump_domains() not being able to use it sensibly (I really want to omit
the output field altogether there when !MEM_PAGING).

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1213,6 +1213,7 @@ int add_to_physmap(struct domain *sd, un
     }
     else
     {
+#ifdef CONFIG_MEM_PAGING
         /*
          * There is a chance we're plugging a hole where a paged out
          * page was.
@@ -1238,6 +1239,7 @@ int add_to_physmap(struct domain *sd, un
                 put_page(cpage);
             }
         }
+#endif
     }
 
     atomic_inc(&nr_saved_mfns);
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -691,11 +691,13 @@ p2m_add_page(struct domain *d, gfn_t gfn
             /* Count how man PoD entries we'll be replacing if successful */
             pod_count++;
         }
+#ifdef CONFIG_MEM_PAGING
         else if ( p2m_is_paging(ot) && (ot != p2m_ram_paging_out) )
         {
             /* We're plugging a hole in the physmap where a paged out page was */
             atomic_dec(&d->paged_pages);
         }
+#endif
     }
 
     /* Then, look for m->p mappings for this range and deal with them */
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -112,7 +112,9 @@ void getdomaininfo(struct domain *d, str
 #ifdef CONFIG_MEM_SHARING
     info->shr_pages         = atomic_read(&d->shr_pages);
 #endif
+#ifdef CONFIG_MEM_PAGING
     info->paged_pages       = atomic_read(&d->paged_pages);
+#endif
     info->shared_info_frame =
         gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -278,14 +278,18 @@ static void dump_domains(unsigned char k
 #ifdef CONFIG_MEM_SHARING
                " shared_pages=%u"
 #endif
+#ifdef CONFIG_MEM_PAGING
                " paged_pages=%u"
+#endif
                " dirty_cpus={%*pbl} max_pages=%u\n",
                domain_tot_pages(d), d->xenheap_pages,
 #ifdef CONFIG_MEM_SHARING
                atomic_read(&d->shr_pages),
 #endif
-               atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
-               d->max_pages);
+#ifdef CONFIG_MEM_PAGING
+               atomic_read(&d->paged_pages),
+#endif
+               CPUMASK_PR(d->dirty_cpumask), d->max_pages);
         printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
                "%02x%02x-%02x%02x%02x%02x%02x%02x vm_assist=%08lx\n",
                d->handle[ 0], d->handle[ 1], d->handle[ 2], d->handle[ 3],
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -390,7 +390,9 @@ struct domain
     atomic_t         shr_pages;         /* shared pages */
 #endif
 
+#ifdef CONFIG_MEM_PAGING
     atomic_t         paged_pages;       /* paged-out pages */
+#endif
 
     /* Scheduling. */
     void            *sched_priv;    /* scheduler-specific data */



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 13/14] x86/P2M: p2m.c is HVM-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (11 preceding siblings ...)
  2022-02-23 16:05 ` [PATCH v2 12/14] paged_pages field is MEM_PAGING-only Jan Beulich
@ 2022-02-23 16:06 ` Jan Beulich
  2022-02-23 16:06 ` [PATCH v2 14/14] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
  2022-04-01 12:47 ` [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain " George Dunlap
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:06 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné, George Dunlap

This only requires moving p2m_percpu_rwlock elsewhere (ultimately I
think all P2M locking should go away as well when !HVM, but this looks
to require further code juggling). The two other unguarded functions are
already unneeded (by virtue of DCE) when !HVM.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>

--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -8,7 +8,7 @@ obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-$(CONFIG_MEM_PAGING) += mem_paging.o
 obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-$(CONFIG_HVM) += nested.o
-obj-y += p2m.o
+obj-$(CONFIG_HVM) += p2m.o
 obj-y += p2m-basic.o
 obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
 obj-y += paging.o
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -53,10 +53,6 @@ bool_t __initdata opt_hap_1gb = 1, __ini
 boolean_param("hap_1gb", opt_hap_1gb);
 boolean_param("hap_2mb", opt_hap_2mb);
 
-DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
-
-#ifdef CONFIG_HVM
-
 int p2m_init_logdirty(struct p2m_domain *p2m)
 {
     if ( p2m->logdirty_ranges )
@@ -258,8 +254,6 @@ void p2m_flush_hardware_cached_dirty(str
     }
 }
 
-#endif /* CONFIG_HVM */
-
 /*
  * Force a synchronous P2M TLB flush if a deferred flush is pending.
  *
@@ -286,8 +280,6 @@ void p2m_unlock_and_tlb_flush(struct p2m
         mm_write_unlock(&p2m->lock);
 }
 
-#ifdef CONFIG_HVM
-
 mfn_t p2m_get_gfn_type_access(struct p2m_domain *p2m, gfn_t gfn,
                               p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                               unsigned int *page_order, bool_t locked)
@@ -2718,8 +2710,6 @@ int p2m_set_altp2m_view_visibility(struc
     return rc;
 }
 
-#endif /* CONFIG_HVM */
-
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -28,6 +28,8 @@
 #include "mm-locks.h"
 #include "p2m.h"
 
+DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
+
 /* Init the datastructures for later use by the p2m code */
 static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {



^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 14/14] x86/P2M: the majority for struct p2m_domain's fields are HVM-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (12 preceding siblings ...)
  2022-02-23 16:06 ` [PATCH v2 13/14] x86/P2M: p2m.c is HVM-only Jan Beulich
@ 2022-02-23 16:06 ` Jan Beulich
  2022-04-01 12:47 ` [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain " George Dunlap
  14 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Roger Pau Monné,
	George Dunlap, Paul Durrant

..., as are the majority of the locks involved. Conditionalize things
accordingly.

Also adjust the ioreq field's indentation on this occasion.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
v2: Adjust a comment.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -481,8 +481,11 @@ unsigned int page_get_ram_type(mfn_t mfn
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
+#ifdef CONFIG_HVM
     if ( is_hvm_domain(d) )
         return p2m_get_hostp2m(d)->max_mapped_pfn;
+#endif
+
     /* NB. PV guests specify nr_pfns rather than max_pfn so we adjust here. */
     return (arch_get_max_pfn(d) ?: 1) - 1;
 }
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -237,6 +237,8 @@ static inline void mm_enforce_order_unlo
  *                                                                      *
  ************************************************************************/
 
+#ifdef CONFIG_HVM
+
 /* Nested P2M lock (per-domain)
  *
  * A per-domain lock that protects the mapping from nested-CR3 to
@@ -354,6 +356,8 @@ declare_mm_lock(pod)
 #define pod_unlock(p)         mm_unlock(&(p)->pod.lock)
 #define pod_locked_by_me(p)   mm_locked_by_me(&(p)->pod.lock)
 
+#endif /* CONFIG_HVM */
+
 /* Page alloc lock (per-domain)
  *
  * This is an external lock, not represented by an mm_lock_t. However,
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -48,6 +48,8 @@
 #undef virt_to_mfn
 #define virt_to_mfn(v) _mfn(__virt_to_mfn(v))
 
+DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
+
 /* Turn on/off host superpage page table support for hap, default on. */
 bool_t __initdata opt_hap_1gb = 1, __initdata opt_hap_2mb = 1;
 boolean_param("hap_1gb", opt_hap_1gb);
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -28,16 +28,15 @@
 #include "mm-locks.h"
 #include "p2m.h"
 
-DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
-
 /* Init the datastructures for later use by the p2m code */
 static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
 {
     int ret = 0;
 
-    mm_rwlock_init(&p2m->lock);
 #ifdef CONFIG_HVM
+    mm_rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
+    spin_lock_init(&p2m->ioreq.lock);
 #endif
 
     p2m->domain = d;
@@ -55,8 +54,6 @@ static int p2m_initialise(struct domain
     else
         p2m_pt_init(p2m);
 
-    spin_lock_init(&p2m->ioreq.lock);
-
     return ret;
 }
 
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -459,7 +459,7 @@ bool arch_iommu_use_permitted(const stru
     return d == dom_io ||
            (likely(!mem_sharing_enabled(d)) &&
             likely(!mem_paging_enabled(d)) &&
-            likely(!p2m_get_hostp2m(d)->global_logdirty));
+            likely(!p2m_is_global_logdirty(d)));
 }
 
 /*
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -205,8 +205,10 @@ typedef enum {
 
 /* Per-p2m-table state */
 struct p2m_domain {
+#ifdef CONFIG_HVM
     /* Lock that protects updates to the p2m */
     mm_rwlock_t           lock;
+#endif
 
     /*
      * Same as a domain's dirty_cpumask but limited to
@@ -226,13 +228,14 @@ struct p2m_domain {
      */
     p2m_access_t default_access;
 
+#ifdef CONFIG_HVM
+
     /* Host p2m: Log-dirty ranges registered for the domain. */
     struct rangeset   *logdirty_ranges;
 
     /* Host p2m: Global log-dirty mode enabled for the domain. */
     bool               global_logdirty;
 
-#ifdef CONFIG_HVM
     /* Translated domain: p2m mapping */
     pagetable_t        phys_table;
 
@@ -275,7 +278,6 @@ struct p2m_domain {
                                               unsigned int level);
     void               (*write_p2m_entry_post)(struct p2m_domain *p2m,
                                                unsigned int oflags);
-#endif
 #if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
 #endif
@@ -310,7 +312,6 @@ struct p2m_domain {
     unsigned long min_remapped_gfn;
     unsigned long max_remapped_gfn;
 
-#ifdef CONFIG_HVM
     /* Populate-on-demand variables
      * All variables are protected with the pod lock. We cannot rely on
      * the p2m lock if it's turned into a fine-grained lock.
@@ -367,27 +368,27 @@ struct p2m_domain {
      * threaded on in LRU order.
      */
     struct list_head   np2m_list;
-#endif
 
     union {
         struct ept_data ept;
         /* NPT-equivalent structure could be added here. */
     };
 
-     struct {
-         spinlock_t lock;
-         /*
-          * ioreq server who's responsible for the emulation of
-          * gfns with specific p2m type(for now, p2m_ioreq_server).
-          */
-         struct ioreq_server *server;
-         /*
-          * flags specifies whether read, write or both operations
-          * are to be emulated by an ioreq server.
-          */
-         unsigned int flags;
-         unsigned long entry_count;
-     } ioreq;
+    struct {
+        spinlock_t lock;
+        /*
+         * ioreq server who's responsible for the emulation of
+         * gfns with specific p2m type(for now, p2m_ioreq_server).
+         */
+        struct ioreq_server *server;
+        /*
+         * flags specifies whether read, write or both operations
+         * are to be emulated by an ioreq server.
+         */
+        unsigned int flags;
+        unsigned long entry_count;
+    } ioreq;
+#endif /* CONFIG_HVM */
 };
 
 /* get host p2m table */
@@ -651,6 +652,15 @@ int p2m_finish_type_change(struct domain
                            gfn_t first_gfn,
                            unsigned long max_nr);
 
+static inline bool p2m_is_global_logdirty(const struct domain *d)
+{
+#ifdef CONFIG_HVM
+    return p2m_get_hostp2m(d)->global_logdirty;
+#else
+    return false;
+#endif
+}
+
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
@@ -792,6 +802,8 @@ extern void audit_p2m(struct domain *d,
 #define P2M_DEBUG(f, a...) do { (void)(f); } while(0)
 #endif
 
+#ifdef CONFIG_HVM
+
 /*
  * Functions specific to the p2m-pt implementation
  */
@@ -852,7 +864,7 @@ void nestedp2m_write_p2m_entry_post(stru
 /*
  * Alternate p2m: shadow p2m tables used for alternate memory views
  */
-#ifdef CONFIG_HVM
+
 /* get current alternate p2m table */
 static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 {
@@ -905,10 +917,10 @@ int p2m_altp2m_propagate_change(struct d
 /* Set a specific p2m view visibility */
 int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int idx,
                                    uint8_t visible);
-#else
+#else /* !CONFIG_HVM */
 struct p2m_domain *p2m_get_altp2m(struct vcpu *v);
 static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) {}
-#endif
+#endif /* CONFIG_HVM */
 
 /* p2m access to IOMMU flags */
 static inline unsigned int p2m_access_to_iommu_flags(p2m_access_t p2ma)
@@ -972,6 +984,8 @@ static inline unsigned int p2m_get_iommu
     return flags;
 }
 
+#ifdef CONFIG_HVM
+
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
                          struct ioreq_server *s);
 struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
@@ -1036,6 +1050,8 @@ static inline int p2m_entry_modify(struc
     return 0;
 }
 
+#endif /* CONFIG_HVM */
+
 #endif /* _XEN_ASM_X86_P2M_H */
 
 /*



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 11/14] shr_pages field is MEM_SHARING-only
  2022-02-23 16:04 ` [PATCH v2 11/14] shr_pages field is MEM_SHARING-only Jan Beulich
@ 2022-02-23 16:07   ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-02-23 16:07 UTC (permalink / raw)
  To: xen-devel
  Cc: Roger Pau Monné,
	Andrew Cooper, George Dunlap, Julien Grall, Stefano Stabellini,
	Wei Liu

On 23.02.2022 17:04, Jan Beulich wrote:
> Conditionalize it and its uses accordingly. The main goal though is to
> demonstrate that x86's p2m_teardown() is now empty when !HVM, which in
> particular means the last remaining use of p2m_lock() in this case goes
> away.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Tamas K Lengyel <tamas@tklengyel.com>
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Forgot to add here:

---
v2: Re-base (drop clearing of field in getdomaininfo()).

Jan

> ---
> I was on the edge of introducing a helper for atomic_read(&d->shr_pages)
> but decided against because of dump_domains() not being able to use it
> sensibly (I really want to omit the output field altogether there when
> !MEM_SHARING).
> 
> --- a/xen/arch/x86/mm/p2m-basic.c
> +++ b/xen/arch/x86/mm/p2m-basic.c
> @@ -159,7 +159,6 @@ void p2m_teardown(struct p2m_domain *p2m
>  {
>  #ifdef CONFIG_HVM
>      struct page_info *pg;
> -#endif
>      struct domain *d;
>  
>      if ( !p2m )
> @@ -169,16 +168,17 @@ void p2m_teardown(struct p2m_domain *p2m
>  
>      p2m_lock(p2m);
>  
> +#ifdef CONFIG_MEM_SHARING
>      ASSERT(atomic_read(&d->shr_pages) == 0);
> +#endif
>  
> -#ifdef CONFIG_HVM
>      p2m->phys_table = pagetable_null();
>  
>      while ( (pg = page_list_remove_head(&p2m->pages)) )
>          d->arch.paging.free_page(d, pg);
> -#endif
>  
>      p2m_unlock(p2m);
> +#endif
>  }
>  
>  void p2m_final_teardown(struct domain *d)
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -109,7 +109,9 @@ void getdomaininfo(struct domain *d, str
>      info->tot_pages         = domain_tot_pages(d);
>      info->max_pages         = d->max_pages;
>      info->outstanding_pages = d->outstanding_pages;
> +#ifdef CONFIG_MEM_SHARING
>      info->shr_pages         = atomic_read(&d->shr_pages);
> +#endif
>      info->paged_pages       = atomic_read(&d->paged_pages);
>      info->shared_info_frame =
>          gfn_x(mfn_to_gfn(d, _mfn(virt_to_mfn(d->shared_info))));
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -274,9 +274,16 @@ static void dump_domains(unsigned char k
>          printk("    refcnt=%d dying=%d pause_count=%d\n",
>                 atomic_read(&d->refcnt), d->is_dying,
>                 atomic_read(&d->pause_count));
> -        printk("    nr_pages=%d xenheap_pages=%d shared_pages=%u paged_pages=%u "
> -               "dirty_cpus={%*pbl} max_pages=%u\n",
> -               domain_tot_pages(d), d->xenheap_pages, atomic_read(&d->shr_pages),
> +        printk("    nr_pages=%u xenheap_pages=%u"
> +#ifdef CONFIG_MEM_SHARING
> +               " shared_pages=%u"
> +#endif
> +               " paged_pages=%u"
> +               " dirty_cpus={%*pbl} max_pages=%u\n",
> +               domain_tot_pages(d), d->xenheap_pages,
> +#ifdef CONFIG_MEM_SHARING
> +               atomic_read(&d->shr_pages),
> +#endif
>                 atomic_read(&d->paged_pages), CPUMASK_PR(d->dirty_cpumask),
>                 d->max_pages);
>          printk("    handle=%02x%02x%02x%02x-%02x%02x-%02x%02x-"
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -385,7 +385,11 @@ struct domain
>      unsigned int     outstanding_pages; /* pages claimed but not possessed */
>      unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
>      unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
> +
> +#ifdef CONFIG_MEM_SHARING
>      atomic_t         shr_pages;         /* shared pages */
> +#endif
> +
>      atomic_t         paged_pages;       /* paged-out pages */
>  
>      /* Scheduling. */
> 
> 



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/14] x86/P2M: rename p2m_remove_page()
  2022-02-23 15:57 ` [PATCH v2 01/14] x86/P2M: rename p2m_remove_page() Jan Beulich
@ 2022-04-01 11:00   ` George Dunlap
  0 siblings, 0 replies; 26+ messages in thread
From: George Dunlap @ 2022-04-01 11:00 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne




> On Feb 23, 2022, at 3:57 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> This is in preparation to re-using the original name.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page()
  2022-02-23 15:58 ` [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
@ 2022-04-01 12:02   ` George Dunlap
  2022-04-04 11:59     ` Jan Beulich
  0 siblings, 1 reply; 26+ messages in thread
From: George Dunlap @ 2022-04-01 12:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne




> On Feb 23, 2022, at 3:58 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> Rename guest_physmap_add_entry() to p2m_add_page(); make
> guest_physmap_remove_page() a trivial wrapper around p2m_remove_page().
> This way callers can use suitable pairs of functions (previously
> violated by hvm/grant_table.c).
> 
> In HVM-specific code further avoid going through the guest_physmap_*()
> layer, and instead use the two new/renamed functions directly.
> 
> Ultimately the goal is to have guest_physmap_...() functions cover all
> types of guests, but p2m_...() dealing only with translated ones.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Description reads much better to me — thanks!

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
  2022-02-23 15:59 ` [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
@ 2022-04-01 12:13   ` George Dunlap
  2022-04-06 14:52   ` Jan Beulich
  2022-04-08 10:55   ` Roger Pau Monné
  2 siblings, 0 replies; 26+ messages in thread
From: George Dunlap @ 2022-04-01 12:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne, Paul Durrant




> On Feb 23, 2022, at 3:59 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> ..., moving the former into the new physmap.c. Also call the new
> functions directly from arch_iommu_hwdom_init() and
> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>

“citrix” is spelled wrong in the email address — not sure whether it’s my typo or yours. :-)

> ---
> v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().

I think the R-b should probably have been dropped for this change, but in any case:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 06/14] x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
  2022-02-23 16:01 ` [PATCH v2 06/14] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
@ 2022-04-01 12:36   ` George Dunlap
  0 siblings, 0 replies; 26+ messages in thread
From: George Dunlap @ 2022-04-01 12:36 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne




> On Feb 23, 2022, at 4:01 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> There's no need to initialize respective data for PV domains. Note that
> p2m_teardown_{alt,nested}p2m() will handle the lack-of-initialization
> case fine.
> 
> As a result, despite PV domains having a host P2M associated with them
> and hence using XENMEM_get_pod_target on such may not be a real problem,
> calling p2m_pod_set_mem_target() for a PV domain is surely wrong, even
> if benign at present. Add a guard there as well.
> 
> In p2m_pod_demand_populate() the situation is a little different: This
> function is reachable only for HVM domains anyway, but following from
> other PoD functions only ever acting on the host P2M (and hence PoD
> entries only ever existing in host P2Ms), assert and bail from there for
> non-host-P2Ms.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Thanks,

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> Perhaps p2m_pod_init() could be invoked from p2m_init_hostp2m(), leaving
> all other p2m's PoD state uninitialized. Of course at that point the
> question would be whether the PoD pieces of struct p2m_domain wouldn't
> better move into a separate structure, present only for host P2Ms.
> Together with the p2m_pod_demand_populate() adjustment this might then
> better be a separate change ...

I’d certainly be tempted to do that kind of clean-up.

I would just check this patch in as-is, but if you really want to pull the p2m_pod_demand_populate() adjustment into a separate patch to keep everything in the same place, feel free to drop that while retaining my R-b.

 -George



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only
  2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
                   ` (13 preceding siblings ...)
  2022-02-23 16:06 ` [PATCH v2 14/14] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
@ 2022-04-01 12:47 ` George Dunlap
  2022-04-07  7:45   ` Jan Beulich
  14 siblings, 1 reply; 26+ messages in thread
From: George Dunlap @ 2022-04-01 12:47 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne




> On Feb 23, 2022, at 3:55 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> The primary goal of this series is to leave p2m.c with, as its leading
> comment suggests, just code for "physical-to-machine mappings for
> automatically-translated domains". This requires splitting a few
> functions, with their non-HVM parts moved elsewhere.
> 
> There aren't many changes in v2, mostly from re-basing. See individual
> patches for details.
> 
> 01: x86/P2M: rename p2m_remove_page()
> 02: x86/P2M: introduce p2m_{add,remove}_page()
> 03: x86/mm: move guest_physmap_{add,remove}_page()
> 04: x86/mm: split set_identity_p2m_entry() into PV and HVM parts
> 05: x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
> 06: x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
> 07: x86/P2M: split out init/teardown functions
> 08: x86/P2M: p2m_get_page_from_gfn() is HVM-only
> 09: x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
> 10: x86/p2m: re-arrange {,__}put_gfn()
> 11: shr_pages field is MEM_SHARING-only
> 12: paged_pages field is MEM_PAGING-only
> 13: x86/P2M: p2m.c is HVM-only
> 14: x86/P2M: the majority for struct p2m_domain's fields are HVM-only

OK, I think every patch has an R-b from me on it now — let me know if I missed anything.

 -George


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page()
  2022-04-01 12:02   ` George Dunlap
@ 2022-04-04 11:59     ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-04-04 11:59 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

On 01.04.2022 14:02, George Dunlap wrote:
>> On Feb 23, 2022, at 3:58 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> Rename guest_physmap_add_entry() to p2m_add_page(); make
>> guest_physmap_remove_page() a trivial wrapper around p2m_remove_page().
>> This way callers can use suitable pairs of functions (previously
>> violated by hvm/grant_table.c).
>>
>> In HVM-specific code further avoid going through the guest_physmap_*()
>> layer, and instead use the two new/renamed functions directly.
>>
>> Ultimately the goal is to have guest_physmap_...() functions cover all
>> types of guests, but p2m_...() dealing only with translated ones.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
> 
> Description reads much better to me — thanks!

Well - thanks to you, as iirc it is largely based on your suggestion.

> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks.

Jan



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
  2022-02-23 15:59 ` [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
  2022-04-01 12:13   ` George Dunlap
@ 2022-04-06 14:52   ` Jan Beulich
  2022-04-08 10:55   ` Roger Pau Monné
  2 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-04-06 14:52 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Andrew Cooper, Wei Liu, George Dunlap, Paul Durrant, xen-devel

On 23.02.2022 16:59, Jan Beulich wrote:
> ..., moving the former into the new physmap.c. Also call the new
> functions directly from arch_iommu_hwdom_init() and
> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>

May I ask for an ack on the vPCI change here?

> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -409,7 +409,7 @@ int vpci_make_msix_hole(const struct pci
>              case p2m_mmio_direct:
>                  if ( mfn_x(mfn) == start )
>                  {
> -                    clear_identity_p2m_entry(d, start);
> +                    p2m_remove_identity_entry(d, start);
>                      break;
>                  }
>                  /* fallthrough. */

Thanks, Jan



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only
  2022-04-01 12:47 ` [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain " George Dunlap
@ 2022-04-07  7:45   ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-04-07  7:45 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel, Andrew Cooper, Wei Liu, Roger Pau Monne

On 01.04.2022 14:47, George Dunlap wrote:
> 
> 
>> On Feb 23, 2022, at 3:55 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> The primary goal of this series is to leave p2m.c with, as its leading
>> comment suggests, just code for "physical-to-machine mappings for
>> automatically-translated domains". This requires splitting a few
>> functions, with their non-HVM parts moved elsewhere.
>>
>> There aren't many changes in v2, mostly from re-basing. See individual
>> patches for details.
>>
>> 01: x86/P2M: rename p2m_remove_page()
>> 02: x86/P2M: introduce p2m_{add,remove}_page()
>> 03: x86/mm: move guest_physmap_{add,remove}_page()
>> 04: x86/mm: split set_identity_p2m_entry() into PV and HVM parts
>> 05: x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only
>> 06: x86/P2M: PoD, altp2m, and nested-p2m are HVM-only
>> 07: x86/P2M: split out init/teardown functions
>> 08: x86/P2M: p2m_get_page_from_gfn() is HVM-only
>> 09: x86/P2M: derive a HVM-only variant from __get_gfn_type_access()
>> 10: x86/p2m: re-arrange {,__}put_gfn()
>> 11: shr_pages field is MEM_SHARING-only
>> 12: paged_pages field is MEM_PAGING-only
>> 13: x86/P2M: p2m.c is HVM-only
>> 14: x86/P2M: the majority for struct p2m_domain's fields are HVM-only
> 
> OK, I think every patch has an R-b from me on it now — let me know if I missed anything.

Thanks a lot! I don't think there's anything missing; I've committed
the first few patches, up to the one where I still need an ack from Roger
for a vPCI change.

Jan



^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
  2022-02-23 15:59 ` [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
  2022-04-01 12:13   ` George Dunlap
  2022-04-06 14:52   ` Jan Beulich
@ 2022-04-08 10:55   ` Roger Pau Monné
  2022-04-08 12:14     ` Jan Beulich
  2 siblings, 1 reply; 26+ messages in thread
From: Roger Pau Monné @ 2022-04-08 10:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap, Paul Durrant

On Wed, Feb 23, 2022 at 04:59:42PM +0100, Jan Beulich wrote:
> ..., moving the former into the new physmap.c. Also call the new
> functions directly from arch_iommu_hwdom_init() and
> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Just one comment below, which can also be taken care in a followup
patch if you agree.

> ---
> v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1473,12 +1473,9 @@ static int clear_mmio_p2m_entry(struct d
>      return rc;
>  }
>  
> -#endif /* CONFIG_HVM */
> -
> -int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
> +int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,

I guess switching the gfn_l parameter to be gfn_t gfn, and then
defining:

unsigned long gfn_l = gfn_x(gfn);

Was too much churn?

(this is just so that the exposed interface for p2m_add_identity_entry
is cleaner).
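
I.e. roughly the following, as a sketch only (remaining parameters and the
body staying as they are):

    int p2m_add_identity_entry(struct domain *d, gfn_t gfn /* , ... */)
    {
        unsigned long gfn_l = gfn_x(gfn);

        /* ... rest of the function unchanged, using gfn_l internally ... */
    }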

Thanks, Roger.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts
  2022-04-08 10:55   ` Roger Pau Monné
@ 2022-04-08 12:14     ` Jan Beulich
  0 siblings, 0 replies; 26+ messages in thread
From: Jan Beulich @ 2022-04-08 12:14 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: xen-devel, Andrew Cooper, Wei Liu, George Dunlap, Paul Durrant

On 08.04.2022 12:55, Roger Pau Monné wrote:
> On Wed, Feb 23, 2022 at 04:59:42PM +0100, Jan Beulich wrote:
>> ..., moving the former into the new physmap.c. Also call the new
>> functions directly from arch_iommu_hwdom_init() and
>> vpci_make_msix_hole(), as the PV/HVM split is explicit there.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: George Dunlap <george.dunlap@ctirix.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> Just one comment below, which can also be taken care in a followup
> patch if you agree.
> 
>> ---
>> v2: Change arch_iommu_hwdom_init() and vpci_make_msix_hole().
>>
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1473,12 +1473,9 @@ static int clear_mmio_p2m_entry(struct d
>>      return rc;
>>  }
>>  
>> -#endif /* CONFIG_HVM */
>> -
>> -int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
>> +int p2m_add_identity_entry(struct domain *d, unsigned long gfn_l,
> 
> I guess switching the gfn_l parameter to be gfn_t gfn, and then
> defining:
> 
> unsigned long gfn_l = gfn_x(gfn);
> 
> Was too much churn?

Well, yes, I probably could have done that, but the series was (going
to be) big enough already, so I tried to stay away from that (for
consistency's sake I think I would then have needed to do the same
elsewhere as well).

Jan



^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2022-04-08 12:14 UTC | newest]

Thread overview: 26+ messages
2022-02-23 15:55 [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain are HVM-only Jan Beulich
2022-02-23 15:57 ` [PATCH v2 01/14] x86/P2M: rename p2m_remove_page() Jan Beulich
2022-04-01 11:00   ` George Dunlap
2022-02-23 15:58 ` [PATCH v2 02/14] x86/P2M: introduce p2m_{add,remove}_page() Jan Beulich
2022-04-01 12:02   ` George Dunlap
2022-04-04 11:59     ` Jan Beulich
2022-02-23 15:58 ` [PATCH v2 03/14] x86/mm: move guest_physmap_{add,remove}_page() Jan Beulich
2022-02-23 15:59 ` [PATCH v2 04/14] x86/mm: split set_identity_p2m_entry() into PV and HVM parts Jan Beulich
2022-04-01 12:13   ` George Dunlap
2022-04-06 14:52   ` Jan Beulich
2022-04-08 10:55   ` Roger Pau Monné
2022-04-08 12:14     ` Jan Beulich
2022-02-23 16:00 ` [PATCH v2 05/14] x86/P2M: p2m_{alloc,free}_ptp() and p2m_alloc_table() are HVM-only Jan Beulich
2022-02-23 16:01 ` [PATCH v2 06/14] x86/P2M: PoD, altp2m, and nested-p2m " Jan Beulich
2022-04-01 12:36   ` George Dunlap
2022-02-23 16:01 ` [PATCH v2 07/14] x86/P2M: split out init/teardown functions Jan Beulich
2022-02-23 16:02 ` [PATCH v2 08/14] x86/P2M: p2m_get_page_from_gfn() is HVM-only Jan Beulich
2022-02-23 16:03 ` [PATCH v2 09/14] x86/P2M: derive HVM-only variant from __get_gfn_type_access() Jan Beulich
2022-02-23 16:03 ` [PATCH v2 10/14] x86/p2m: re-arrange {,__}put_gfn() Jan Beulich
2022-02-23 16:04 ` [PATCH v2 11/14] shr_pages field is MEM_SHARING-only Jan Beulich
2022-02-23 16:07   ` Jan Beulich
2022-02-23 16:05 ` [PATCH v2 12/14] paged_pages field is MEM_PAGING-only Jan Beulich
2022-02-23 16:06 ` [PATCH v2 13/14] x86/P2M: p2m.c is HVM-only Jan Beulich
2022-02-23 16:06 ` [PATCH v2 14/14] x86/P2M: the majority for struct p2m_domain's fields are HVM-only Jan Beulich
2022-04-01 12:47 ` [PATCH v2 00/14] x86/mm: large parts of P2M code and struct p2m_domain " George Dunlap
2022-04-07  7:45   ` Jan Beulich
