xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs
@ 2019-09-30 10:32 Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Hongyan Xia
                   ` (55 more replies)
  0 siblings, 56 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This series is mostly Wei's effort to switch from xenheap to domheap for
Xen page tables. In addition, I have merged several bug fixes from my
"Remove direct map from Xen" series [1].

This is needed to achieve the ultimate goal of removing the
always-mapped direct map from Xen. To work without an always-mapped
direct map, Xen's PTE manipulations themselves must not rely on it.
Unfortunately, the current PTE APIs use the xenheap, which does not work
without the direct map. Switching to domheap APIs makes it much easier
to break the reliance on the direct map later on, not only for PTEs but
for all other memory allocations as well.

I have split the direct map removal work into two batches; this series
is the first. The patches change the life cycle of Xen PTEs from
alloc-free to alloc-map-unmap-free, which means PTEs must now be
explicitly mapped and unmapped. Making this the first batch also makes
sense from a stability PoV, since it is just an API change and the
direct map has not actually been removed yet. Further, in a release
build the map and unmap operations use the direct map as a fast path,
so there is no performance degradation there either.
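
The alloc-map-unmap-free life cycle described above can be sketched in
plain C. This is an illustrative, self-contained mock-up in which an
"MFN" is simply a heap pointer; the function names mirror the series'
new APIs, but the bodies are stubs of my own, not Xen code:

```c
#include <stdint.h>
#include <stdlib.h>

#define INVALID_MFN_VAL (~(uint64_t)0)

/* Illustrative stand-in for Xen's typesafe MFN; not the real definition. */
typedef struct { uint64_t m; } mfn_t;

/* Stub allocator: one 4K "page table" of 512 zeroed entries. */
static mfn_t alloc_xen_pagetable_new(void)
{
    void *p = calloc(512, sizeof(uint64_t));
    mfn_t mfn = { p ? (uint64_t)(uintptr_t)p : INVALID_MFN_VAL };
    return mfn;
}

/* Stub map: real code would create a transient mapping. */
static void *map_xen_pagetable_new(mfn_t mfn)
{
    return (void *)(uintptr_t)mfn.m;
}

static void unmap_xen_pagetable_new(void *v)
{
    (void)v; /* nothing to do while "mapping" is the identity */
}

static void free_xen_pagetable_new(mfn_t mfn)
{
    if ( mfn.m != INVALID_MFN_VAL )
        free((void *)(uintptr_t)mfn.m);
}

/* The new life cycle: alloc -> map -> write entries -> unmap -> free. */
static uint64_t lifecycle_demo(void)
{
    mfn_t mfn = alloc_xen_pagetable_new();
    uint64_t *table = map_xen_pagetable_new(mfn);

    table[0] = 0x1001;               /* write a PTE while the table is mapped */
    uint64_t v = table[0];

    unmap_xen_pagetable_new(table);  /* must unmap before... */
    free_xen_pagetable_new(mfn);     /* ...freeing */
    return v;
}
```

The point is that the pointer returned by map is only valid between the
map and unmap calls; once the direct map is gone, touching a table
outside that window has no fast-path fallback.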

I have tested both debug and release builds on bare metal and under
nested virtualisation. On x86 I am able to run PV and HVM guests and
XTF tests without crashes so far. The series also builds on AArch64.

This series is at https://xenbits.xen.org/git-http/people/hx242/xen.git,
xen_pte_map branch.

---
Changed since v1:
- squash some commits
- merge bug fixes into this first batch
- rebase against latest master

[1]:
https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg02647.html

Wei Liu (55):
  x86/mm: defer clearing page in virt_to_xen_lXe
  x86: move some xen mm function declarations
  x86: introduce a new set of APIs to manage Xen page tables
  x86/mm: introduce l{1,2}t local variables to map_pages_to_xen
  x86/mm: introduce l{1,2}t local variables to modify_xen_mappings
  x86/mm: map_pages_to_xen should have one exit path
  x86/mm: add an end_of_loop label in map_pages_to_xen
  x86/mm: make sure there is one exit path for modify_xen_mappings
  x86/mm: add an end_of_loop label in modify_xen_mappings
  x86/mm: change pl2e to l2t in virt_to_xen_l2e
  x86/mm: change pl1e to l1t in virt_to_xen_l1e
  x86/mm: change pl3e to l3t in virt_to_xen_l3e
  x86/mm: rewrite virt_to_xen_l3e
  x86/mm: rewrite xen_to_virt_l2e
  x86/mm: rewrite virt_to_xen_l1e
  x86/mm: switch to new APIs in map_pages_to_xen
  x86/mm: drop lXe_to_lYe invocations in map_pages_to_xen
  x86/mm: switch to new APIs in modify_xen_mappings
  x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings
  x86/mm: switch to new APIs in arch_init_memory
  x86_64/mm: introduce pl2e in paging_init
  x86_64/mm: switch to new APIs in paging_init
  x86_64/mm: drop l4e_to_l3e invocation from paging_init
  x86_64/mm.c: remove code that serves no purpose in setup_m2p_table
  x86_64/mm: introduce pl2e in setup_m2p_table
  x86_64/mm: switch to new APIs in setup_m2p_table
  x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table
  efi: use new page table APIs in copy_mapping
  efi: avoid using global variable in copy_mapping
  efi: use new page table APIs in efi_init_memory
  efi: add emacs block to boot.c
  efi: switch EFI L4 table to use new APIs
  x86/smpboot: add emacs block
  x86/smpboot: clone_mapping should have one exit path
  x86/smpboot: switch pl3e to use new APIs in clone_mapping
  x86/smpboot: switch pl2e to use new APIs in clone_mapping
  x86/smpboot: switch pl1e to use new APIs in clone_mapping
  x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt
  x86: switch root_pgt to mfn_t and use new APIs
  x86/shim: map and unmap page tables in replace_va_mapping
  x86_64/mm: map and unmap page tables in m2p_mapped
  x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  x86_64/mm: map and unmap page tables in setup_compat_m2p_table
  x86_64/mm: map and unmap page tables in cleanup_frame_table
  x86_64/mm: map and unmap page tables in subarch_init_memory
  x86_64/mm: map and unmap page tables in subarch_memory_op
  x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt
  x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly
  x86/pv: properly map and unmap page table in dom0_construct_pv
  x86: remove lXe_to_lYe in __start_xen
  x86/mm: drop old page table APIs
  x86: switch to use domheap page for page tables
  x86/mm: drop _new suffix for page table APIs

 xen/arch/x86/domain.c           |  15 +-
 xen/arch/x86/domain_page.c      |  12 +-
 xen/arch/x86/efi/runtime.h      |  12 +-
 xen/arch/x86/mm.c               | 482 ++++++++++++++++++++++----------
 xen/arch/x86/pv/dom0_build.c    |  41 ++-
 xen/arch/x86/pv/domain.c        |   2 +-
 xen/arch/x86/pv/shim.c          |  20 +-
 xen/arch/x86/setup.c            |  10 +-
 xen/arch/x86/smpboot.c          | 171 +++++++----
 xen/arch/x86/x86_64/mm.c        | 267 +++++++++++++-----
 xen/common/efi/boot.c           |  84 ++++--
 xen/common/efi/efi.h            |   3 +-
 xen/common/efi/runtime.c        |   8 +-
 xen/include/asm-x86/mm.h        |  16 ++
 xen/include/asm-x86/page.h      |  10 -
 xen/include/asm-x86/processor.h |   2 +-
 16 files changed, 819 insertions(+), 336 deletions(-)

-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 15:05   ` Wei Liu
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 02/55] x86: move some xen mm function declarations Hongyan Xia
                   ` (54 subsequent siblings)
  55 siblings, 1 reply; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Defer the call to clear_page to the point when we're sure the page is
going to become a page table.

This is a minor optimisation. No functional change.
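
The benefit can be illustrated with a self-contained sketch: allocate
first, then clear the page only once we know, under the lock, that the
entry is still empty and the page will really become a page table. If
another CPU raced us and populated the entry, the page is freed without
a wasted clear. The names and the lock comments below are illustrative
stand-ins of mine, not the real Xen code:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Counts how many pages get cleared, to show the saving. */
static int pages_cleared;

static void clear_page(void *p)
{
    memset(p, 0, 4096);
    pages_cleared++;
}

/* One "L4 entry" slot; nonzero means already populated. */
static int install_l3_table(intptr_t *pl4e)
{
    void *pl3e = malloc(4096);

    if ( !pl3e )
        return -1;

    /* map_pgdir lock would be taken here */
    if ( *pl4e == 0 )             /* still empty: we win the race */
    {
        clear_page(pl3e);         /* deferred: cleared only when used */
        *pl4e = (intptr_t)pl3e;
    }
    else                          /* someone else populated it first */
    {
        free(pl3e);               /* no clear_page wasted on this page */
    }
    /* lock released here */
    return 0;
}
```

A second caller finding the entry populated skips the clear entirely,
which is exactly what moving clear_page inside the presence check buys.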

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 99816fc67c..e90c8a63a6 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4879,13 +4879,13 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
 
         if ( !pl3e )
             return NULL;
-        clear_page(pl3e);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
             l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
 
+            clear_page(pl3e);
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
             pl3e = NULL;
@@ -4914,11 +4914,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 
         if ( !pl2e )
             return NULL;
-        clear_page(pl2e);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
+            clear_page(pl2e);
             l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
             pl2e = NULL;
         }
@@ -4947,11 +4947,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
 
         if ( !pl1e )
             return NULL;
-        clear_page(pl1e);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
+            clear_page(pl1e);
             l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
             pl1e = NULL;
         }
-- 
2.17.1



* [Xen-devel] [PATCH v2 02/55] x86: move some xen mm function declarations
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 03/55] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
                   ` (53 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

They were put into page.h, but mm.h is more appropriate.

The real reason is that I will be adding some new functions which
take mfn_t. It turns out that is a bit difficult to do in page.h.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/include/asm-x86/mm.h   | 5 +++++
 xen/include/asm-x86/page.h | 5 -----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 3863e4ce57..2800106327 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -630,4 +630,9 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int id, unsigned long frame,
                           unsigned int nr_frames, xen_pfn_t mfn_list[]);
 
+/* Allocator functions for Xen pagetables. */
+void *alloc_xen_pagetable(void);
+void free_xen_pagetable(void *v);
+l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index c1e92937c0..05a8b1efa6 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -345,11 +345,6 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 
 #ifndef __ASSEMBLY__
 
-/* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
-l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-
 /* Convert between PAT/PCD/PWT embedded in PTE flags and 3-bit cacheattr. */
 static inline unsigned int pte_flags_to_cacheattr(unsigned int flags)
 {
-- 
2.17.1



* [Xen-devel] [PATCH v2 03/55] x86: introduce a new set of APIs to manage Xen page tables
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 02/55] x86: move some xen mm function declarations Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
                   ` (52 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We are going to switch to using domheap pages for page tables.
A new set of APIs is introduced to allocate, map, unmap and free pages
for page tables.

The allocation and deallocation functions work on mfn_t rather than
page_info, because they are required to work even before the frame
table is set up.

Implement the old functions in terms of the new ones. We will rewrite,
site by site, the other mm functions that manipulate page tables to use
the new APIs.

Note that these new APIs still use xenheap pages underneath and no
actual map and unmap is done, so that we don't break Xen halfway
through. They will be switched to use domheap pages and dynamic
mappings once all usage of the old APIs has been eliminated.

No functional change intended in this patch.
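
A caller-side sketch of how the unmap wrapper is meant to be used:
unmapping also poisons the local pointer, so a stale use dereferences
NULL and faults loudly instead of touching a page that is no longer
mapped. This is a self-contained mock-up with a stub unmap of my own;
only the macro body matches the patch:

```c
#include <stddef.h>

/* Stub: the real function will tear down a transient mapping. */
static int unmap_calls;

static void unmap_xen_pagetable_new(void *v)
{
    if ( v )
        unmap_calls++;
}

/* Unmap and poison the local pointer in one go, as in the patch. */
#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
    do {                                \
        unmap_xen_pagetable_new((ptr)); \
        (ptr) = NULL;                   \
    } while (0)

static int demo(void)
{
    int entry = 0;
    void *pl1e = &entry;          /* stands in for a mapped table */

    UNMAP_XEN_PAGETABLE_NEW(pl1e);

    /* The pointer is NULL now; reuse would be caught immediately. */
    return (pl1e == NULL) && (unmap_calls == 1);
}
```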

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c        | 39 ++++++++++++++++++++++++++++++++++-----
 xen/include/asm-x86/mm.h | 11 +++++++++++
 2 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e90c8a63a6..e2c8c3f3a1 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -119,6 +119,7 @@
 #include <xen/efi.h>
 #include <xen/grant_table.h>
 #include <xen/hypercall.h>
+#include <xen/mm.h>
 #include <asm/paging.h>
 #include <asm/shadow.h>
 #include <asm/page.h>
@@ -4847,22 +4848,50 @@ int mmcfg_intercept_write(
 }
 
 void *alloc_xen_pagetable(void)
+{
+    mfn_t mfn;
+
+    mfn = alloc_xen_pagetable_new();
+    ASSERT(!mfn_eq(mfn, INVALID_MFN));
+
+    return map_xen_pagetable_new(mfn);
+}
+
+void free_xen_pagetable(void *v)
+{
+    if ( system_state != SYS_STATE_early_boot )
+        free_xen_pagetable_new(virt_to_mfn(v));
+}
+
+mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
         void *ptr = alloc_xenheap_page();
 
         BUG_ON(!hardware_domain && !ptr);
-        return ptr;
+        return virt_to_mfn(ptr);
     }
 
-    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
+    return alloc_boot_pages(1, 1);
 }
 
-void free_xen_pagetable(void *v)
+void *map_xen_pagetable_new(mfn_t mfn)
 {
-    if ( system_state != SYS_STATE_early_boot )
-        free_xenheap_page(v);
+    return mfn_to_virt(mfn_x(mfn));
+}
+
+/* v can point to an entry within a table or be NULL */
+void unmap_xen_pagetable_new(void *v)
+{
+    /* XXX still using xenheap page, no need to do anything.  */
+}
+
+/* mfn can be INVALID_MFN */
+void free_xen_pagetable_new(mfn_t mfn)
+{
+    if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
+        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2800106327..80173eb4c3 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -633,6 +633,17 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
 /* Allocator functions for Xen pagetables. */
 void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
+mfn_t alloc_xen_pagetable_new(void);
+void *map_xen_pagetable_new(mfn_t mfn);
+void unmap_xen_pagetable_new(void *v);
+void free_xen_pagetable_new(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
+    do {                                \
+        unmap_xen_pagetable_new((ptr)); \
+        (ptr) = NULL;                   \
+    } while (0)
+
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
 #endif /* __ASM_X86_MM_H__ */
-- 
2.17.1



* [Xen-devel] [PATCH v2 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (2 preceding siblings ...)
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 03/55] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
                   ` (51 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

The pl2e and pl1e variables are heavily (ab)used in that function. This
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon have the requirement to map and unmap page tables, and
will need to track the lifetime of each variable to avoid leaks.

Introduce some l{1,2}t variables with limited scope so that we can
track the lifetime of pointers to Xen page tables more easily.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 75 ++++++++++++++++++++++++++---------------------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e2c8c3f3a1..2ae8a7736f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5061,10 +5061,12 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl2e = l3e_to_l2e(ol3e);
+                    l2_pgentry_t *l2t;
+
+                    l2t = l3e_to_l2e(ol3e);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( !(l2e_get_flags(ol2e) & _PAGE_PRESENT) )
                             continue;
                         if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -5072,21 +5074,22 @@ int map_pages_to_xen(
                         else
                         {
                             unsigned int j;
+                            l1_pgentry_t *l1t;
 
-                            pl1e = l2e_to_l1e(ol2e);
+                            l1t = l2e_to_l1e(ol2e);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
-                                flush_flags(l1e_get_flags(pl1e[j]));
+                                flush_flags(l1e_get_flags(l1t[j]));
                         }
                     }
                     flush_area(virt, flush_flags);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
                             free_xen_pagetable(l2e_to_l1e(ol2e));
                     }
-                    free_xen_pagetable(pl2e);
+                    free_xen_pagetable(l2t);
                 }
             }
 
@@ -5102,6 +5105,7 @@ int map_pages_to_xen(
         {
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
+            l2_pgentry_t *l2t;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5123,12 +5127,12 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = alloc_xen_pagetable();
-            if ( pl2e == NULL )
+            l2t = alloc_xen_pagetable();
+            if ( l2t == NULL )
                 return -ENOMEM;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(ol3e)));
@@ -5141,15 +5145,15 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5177,11 +5181,13 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl1e = l2e_to_l1e(ol2e);
+                    l1_pgentry_t *l1t;
+
+                    l1t = l2e_to_l1e(ol2e);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                        flush_flags(l1e_get_flags(pl1e[i]));
+                        flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(pl1e);
+                    free_xen_pagetable(l1t);
                 }
             }
 
@@ -5203,6 +5209,7 @@ int map_pages_to_xen(
             {
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
+                l1_pgentry_t *l1t;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5222,12 +5229,12 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = alloc_xen_pagetable();
-                if ( pl1e == NULL )
+                l1t = alloc_xen_pagetable();
+                if ( l1t == NULL )
                     return -ENOMEM;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
 
@@ -5239,15 +5246,15 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
 
             pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
@@ -5272,6 +5279,7 @@ int map_pages_to_xen(
                     ((1u << PAGETABLE_ORDER) - 1)) == 0)) )
             {
                 unsigned long base_mfn;
+                l1_pgentry_t *l1t;
 
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5295,11 +5303,11 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = l2e_to_l1e(ol2e);
-                base_mfn = l1e_get_pfn(*pl1e) & ~(L1_PAGETABLE_ENTRIES - 1);
-                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++, pl1e++ )
-                    if ( (l1e_get_pfn(*pl1e) != (base_mfn + i)) ||
-                         (l1e_get_flags(*pl1e) != flags) )
+                l1t = l2e_to_l1e(ol2e);
+                base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
+                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+                    if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
+                         (l1e_get_flags(l1t[i]) != flags) )
                         break;
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
@@ -5325,6 +5333,7 @@ int map_pages_to_xen(
                 ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1))) )
         {
             unsigned long base_mfn;
+            l2_pgentry_t *l2t;
 
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5342,13 +5351,13 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = l3e_to_l2e(ol3e);
-            base_mfn = l2e_get_pfn(*pl2e) & ~(L2_PAGETABLE_ENTRIES *
+            l2t = l3e_to_l2e(ol3e);
+            base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
-            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ )
-                if ( (l2e_get_pfn(*pl2e) !=
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( (l2e_get_pfn(l2t[i]) !=
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
-                     (l2e_get_flags(*pl2e) != l1f_to_lNf(flags)) )
+                     (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
             if ( i == L2_PAGETABLE_ENTRIES )
             {
-- 
2.17.1



* [Xen-devel] [PATCH v2 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (3 preceding siblings ...)
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 06/55] x86/mm: map_pages_to_xen should have one exit path Hongyan Xia
                   ` (50 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

The pl2e and pl1e variables are heavily (ab)used in that function. This
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon have the requirement to map and unmap page tables, and
will need to track the lifetime of each variable to avoid leaks.

Introduce some l{1,2}t variables with limited scope so that we can
track the lifetime of pointers to Xen page tables more easily.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 68 ++++++++++++++++++++++++++---------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2ae8a7736f..063cacffb8 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5428,6 +5428,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
+            l2_pgentry_t *l2t;
+
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
                  ((e - v) >= (1UL << L3_PAGETABLE_SHIFT)) )
@@ -5443,11 +5445,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            pl2e = alloc_xen_pagetable();
-            if ( !pl2e )
+            l2t = alloc_xen_pagetable();
+            if ( !l2t )
                 return -ENOMEM;
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(*pl3e)));
@@ -5456,14 +5458,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         /*
@@ -5497,12 +5499,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
             else
             {
+                l1_pgentry_t *l1t;
+
                 /* PSE: shatter the superpage and try again. */
-                pl1e = alloc_xen_pagetable();
-                if ( !pl1e )
+                l1t = alloc_xen_pagetable();
+                if ( !l1t )
                     return -ENOMEM;
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
                 if ( locking )
@@ -5510,19 +5514,19 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
         }
         else
         {
-            l1_pgentry_t nl1e;
+            l1_pgentry_t nl1e, *l1t;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
@@ -5569,9 +5573,9 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 continue;
             }
 
-            pl1e = l2e_to_l1e(*pl2e);
+            l1t = l2e_to_l1e(*pl2e);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                if ( l1e_get_intpte(pl1e[i]) != 0 )
+                if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
             if ( i == L1_PAGETABLE_ENTRIES )
             {
@@ -5580,7 +5584,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(pl1e);
+                free_xen_pagetable(l1t);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5609,21 +5613,25 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             continue;
         }
 
-        pl2e = l3e_to_l2e(*pl3e);
-        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-            if ( l2e_get_intpte(pl2e[i]) != 0 )
-                break;
-        if ( i == L2_PAGETABLE_ENTRIES )
         {
-            /* Empty: zap the L3E and free the L2 page. */
-            l3e_write_atomic(pl3e, l3e_empty());
-            if ( locking )
+            l2_pgentry_t *l2t;
+
+            l2t = l3e_to_l2e(*pl3e);
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( l2e_get_intpte(l2t[i]) != 0 )
+                    break;
+            if ( i == L2_PAGETABLE_ENTRIES )
+            {
+                /* Empty: zap the L3E and free the L2 page. */
+                l3e_write_atomic(pl3e, l3e_empty());
+                if ( locking )
+                    spin_unlock(&map_pgdir_lock);
+                flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
+                free_xen_pagetable(l2t);
+            }
+            else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-            free_xen_pagetable(pl2e);
         }
-        else if ( locking )
-            spin_unlock(&map_pgdir_lock);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH v2 06/55] x86/mm: map_pages_to_xen should have one exit path
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (4 preceding siblings ...)
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Hongyan Xia
                   ` (49 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon rewrite the function to handle dynamic mapping and
unmapping of page tables.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 063cacffb8..ba38525d36 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5014,9 +5014,11 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
+    int rc = -ENOMEM;
 
 #define flush_flags(oldf) do {                 \
     unsigned int o_ = (oldf);                  \
@@ -5034,10 +5036,13 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
-        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+        pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
-            return -ENOMEM;
+        {
+            ASSERT(rc == -ENOMEM);
+            goto out;
+        }
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
@@ -5129,7 +5134,10 @@ int map_pages_to_xen(
 
             l2t = alloc_xen_pagetable();
             if ( l2t == NULL )
-                return -ENOMEM;
+            {
+                ASSERT(rc == -ENOMEM);
+                goto out;
+            }
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5158,7 +5166,10 @@ int map_pages_to_xen(
 
         pl2e = virt_to_xen_l2e(virt);
         if ( !pl2e )
-            return -ENOMEM;
+        {
+            ASSERT(rc == -ENOMEM);
+            goto out;
+        }
 
         if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
                ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
@@ -5203,7 +5214,10 @@ int map_pages_to_xen(
             {
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5231,7 +5245,10 @@ int map_pages_to_xen(
 
                 l1t = alloc_xen_pagetable();
                 if ( l1t == NULL )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5377,7 +5394,10 @@ int map_pages_to_xen(
 
 #undef flush_flags
 
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-- 
2.17.1



* [Xen-devel] [PATCH v2 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (5 preceding siblings ...)
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 06/55] x86/mm: map_pages_to_xen should have one exit path Hongyan Xia
@ 2019-09-30 10:32 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
                   ` (48 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up mappings whenever the outermost loop
ends. Add a new label and turn the relevant continues into gotos.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ba38525d36..0916aa74ae 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5102,7 +5102,7 @@ int map_pages_to_xen(
             if ( !mfn_eq(mfn, INVALID_MFN) )
                 mfn  = mfn_add(mfn, 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
             nr_mfns -= 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
-            continue;
+            goto end_of_loop;
         }
 
         if ( (l3e_get_flags(ol3e) & _PAGE_PRESENT) &&
@@ -5129,7 +5129,7 @@ int map_pages_to_xen(
                 if ( !mfn_eq(mfn, INVALID_MFN) )
                     mfn = mfn_add(mfn, i);
                 nr_mfns -= i;
-                continue;
+                goto end_of_loop;
             }
 
             l2t = alloc_xen_pagetable();
@@ -5310,7 +5310,7 @@ int map_pages_to_xen(
                 {
                     if ( locking )
                         spin_unlock(&map_pgdir_lock);
-                    continue;
+                    goto end_of_loop;
                 }
 
                 if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -5365,7 +5365,7 @@ int map_pages_to_xen(
             {
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                continue;
+                goto end_of_loop;
             }
 
             l2t = l3e_to_l2e(ol3e);
@@ -5390,6 +5390,7 @@ int map_pages_to_xen(
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
+    end_of_loop:;
     }
 
 #undef flush_flags
-- 
2.17.1



* [Xen-devel] [PATCH v2 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (6 preceding siblings ...)
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings Hongyan Xia
                   ` (47 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to handle dynamic mapping/unmapping of page
tables in said function.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 0916aa74ae..3a799e17e4 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5425,6 +5425,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
+    int rc = -ENOMEM;
 
     /* Set of valid PTE bits which may be altered. */
 #define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
@@ -5468,7 +5469,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             /* PAGE1GB: shatter the superpage and fall through. */
             l2t = alloc_xen_pagetable();
             if ( !l2t )
-                return -ENOMEM;
+            {
+                ASSERT(rc == -ENOMEM);
+                goto out;
+            }
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5525,7 +5530,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 /* PSE: shatter the superpage and try again. */
                 l1t = alloc_xen_pagetable();
                 if ( !l1t )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5658,7 +5667,10 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     flush_area(NULL, FLUSH_TLB_GLOBAL);
 
 #undef FLAGS_MASK
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 #undef flush_area
-- 
2.17.1



* [Xen-devel] [PATCH v2 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (7 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e Hongyan Xia
                   ` (46 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up mappings whenever the outermost loop
ends. Add a new label and turn the relevant continues into gotos.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3a799e17e4..b20d417fec 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5445,7 +5445,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
             v += 1UL << L3_PAGETABLE_SHIFT;
             v &= ~((1UL << L3_PAGETABLE_SHIFT) - 1);
-            continue;
+            goto end_of_loop;
         }
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
@@ -5463,7 +5463,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
                 l3e_write_atomic(pl3e, nl3e);
                 v += 1UL << L3_PAGETABLE_SHIFT;
-                continue;
+                goto end_of_loop;
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
@@ -5507,7 +5507,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
             v += 1UL << L2_PAGETABLE_SHIFT;
             v &= ~((1UL << L2_PAGETABLE_SHIFT) - 1);
-            continue;
+            goto end_of_loop;
         }
 
         if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
@@ -5581,7 +5581,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
              * skip the empty&free check.
              */
             if ( (nf & _PAGE_PRESENT) || ((v != e) && (l1_table_offset(v) != 0)) )
-                continue;
+                goto end_of_loop;
             if ( locking )
                 spin_lock(&map_pgdir_lock);
 
@@ -5600,7 +5600,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                continue;
+                goto end_of_loop;
             }
 
             l1t = l2e_to_l1e(*pl2e);
@@ -5627,7 +5627,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          */
         if ( (nf & _PAGE_PRESENT) ||
              ((v != e) && (l2_table_offset(v) + l1_table_offset(v) != 0)) )
-            continue;
+            goto end_of_loop;
         if ( locking )
             spin_lock(&map_pgdir_lock);
 
@@ -5640,7 +5640,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         {
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            continue;
+            goto end_of_loop;
         }
 
         {
@@ -5662,6 +5662,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
+    end_of_loop:;
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.17.1



* [Xen-devel] [PATCH v2 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (8 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e Hongyan Xia
                   ` (45 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will need to have a variable named pl2e when we rewrite
virt_to_xen_l2e. Rename pl2e to l2t to better reflect its purpose.
This will make reviewing later patches easier.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b20d417fec..ea6931e052 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4939,22 +4939,22 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *pl2e = alloc_xen_pagetable();
+        l2_pgentry_t *l2t = alloc_xen_pagetable();
 
-        if ( !pl2e )
+        if ( !l2t )
             return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            clear_page(pl2e);
-            l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
-            pl2e = NULL;
+            clear_page(l2t);
+            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
+            l2t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl2e )
-            free_xen_pagetable(pl2e);
+        if ( l2t )
+            free_xen_pagetable(l2t);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-- 
2.17.1



* [Xen-devel] [PATCH v2 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (9 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e Hongyan Xia
                   ` (44 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will need to have a variable named pl1e when we rewrite
virt_to_xen_l1e. Rename pl1e to l1t to better reflect its purpose.
This will make reviewing later patches easier.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ea6931e052..7a522d90fe 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4972,22 +4972,22 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *pl1e = alloc_xen_pagetable();
+        l1_pgentry_t *l1t = alloc_xen_pagetable();
 
-        if ( !pl1e )
+        if ( !l1t )
             return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            clear_page(pl1e);
-            l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
-            pl1e = NULL;
+            clear_page(l1t);
+            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
+            l1t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl1e )
-            free_xen_pagetable(pl1e);
+        if ( l1t )
+            free_xen_pagetable(l1t);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-- 
2.17.1



* [Xen-devel] [PATCH v2 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (10 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 13/55] x86/mm: rewrite virt_to_xen_l3e Hongyan Xia
                   ` (43 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will need to have a variable named pl3e when we rewrite
virt_to_xen_l3e. Rename pl3e to l3t to better reflect its purpose.
This will make reviewing later patches easier.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7a522d90fe..f8a8f97f81 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4904,25 +4904,25 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *pl3e = alloc_xen_pagetable();
+        l3_pgentry_t *l3t = alloc_xen_pagetable();
 
-        if ( !pl3e )
+        if ( !l3t )
             return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
 
-            clear_page(pl3e);
+            clear_page(l3t);
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
-            pl3e = NULL;
+            l3t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl3e )
-            free_xen_pagetable(pl3e);
+        if ( l3t )
+            free_xen_pagetable(l3t);
     }
 
     return l4e_to_l3e(*pl4e) + l3_table_offset(v);
-- 
2.17.1



* [Xen-devel] [PATCH v2 13/55] x86/mm: rewrite virt_to_xen_l3e
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (11 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 14/55] x86/mm: rewrite virt_to_xen_l2e Hongyan Xia
                   ` (42 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Rewrite that function to use the new APIs. Modify its callers to unmap
the pointer returned.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 61 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 48 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f8a8f97f81..1dcd4289d1 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4896,45 +4896,70 @@ void free_xen_pagetable_new(mfn_t mfn)
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
 
+/*
+ * Given a virtual address, return a pointer to xen's L3 entry. Caller
+ * needs to unmap the pointer.
+ */
 static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
 {
     l4_pgentry_t *pl4e;
+    l3_pgentry_t *pl3e = NULL;
 
     pl4e = &idle_pg_table[l4_table_offset(v)];
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *l3t = alloc_xen_pagetable();
+        l3_pgentry_t *l3t;
+        mfn_t mfn;
+
+        mfn = alloc_xen_pagetable_new();
+        if ( mfn_eq(mfn, INVALID_MFN) )
+            goto out;
+
+        l3t = map_xen_pagetable_new(mfn);
 
-        if ( !l3t )
-            return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_mfn(mfn, __PAGE_HYPERVISOR);
 
             clear_page(l3t);
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
+            pl3e = l3t + l3_table_offset(v);
             l3t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
         if ( l3t )
-            free_xen_pagetable(l3t);
+        {
+            ASSERT(!pl3e);
+            ASSERT(!mfn_eq(mfn, INVALID_MFN));
+            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            free_xen_pagetable_new(mfn);
+        }
+    }
+
+    if ( !pl3e )
+    {
+        ASSERT(l4e_get_flags(*pl4e) & _PAGE_PRESENT);
+        pl3e = (l3_pgentry_t *)map_xen_pagetable_new(l4e_get_mfn(*pl4e))
+            + l3_table_offset(v);
     }
 
-    return l4e_to_l3e(*pl4e) + l3_table_offset(v);
+ out:
+    return pl3e;
 }
 
 static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 {
     l3_pgentry_t *pl3e;
+    l2_pgentry_t *pl2e = NULL;
 
     pl3e = virt_to_xen_l3e(v);
     if ( !pl3e )
-        return NULL;
+        goto out;
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
@@ -4942,7 +4967,8 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         l2_pgentry_t *l2t = alloc_xen_pagetable();
 
         if ( !l2t )
-            return NULL;
+            goto out;
+
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
@@ -4958,7 +4984,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-    return l3e_to_l2e(*pl3e) + l2_table_offset(v);
+    pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    return pl2e;
 }
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
@@ -5014,7 +5044,7 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
-    l3_pgentry_t *pl3e, ol3e;
+    l3_pgentry_t *pl3e = NULL, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
@@ -5390,7 +5420,8 @@ int map_pages_to_xen(
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
-    end_of_loop:;
+    end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
 #undef flush_flags
@@ -5398,6 +5429,7 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
 
@@ -5421,6 +5453,7 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     unsigned int  i;
@@ -5436,7 +5469,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
     while ( v < e )
     {
-        l3_pgentry_t *pl3e = virt_to_xen_l3e(v);
+        pl3e = virt_to_xen_l3e(v);
 
         if ( !pl3e || !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
@@ -5662,7 +5695,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
-    end_of_loop:;
+    end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
@@ -5671,6 +5705,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
 
-- 
2.17.1



* [Xen-devel] [PATCH v2 14/55] x86/mm: rewrite virt_to_xen_l2e
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (12 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 13/55] x86/mm: rewrite virt_to_xen_l3e Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 15/55] x86/mm: rewrite virt_to_xen_l1e Hongyan Xia
                   ` (41 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Rewrite that function to use the new APIs. Modify its callers to unmap
the pointer returned.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 46 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 1dcd4289d1..ad0d7a0b80 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4952,6 +4952,10 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     return pl3e;
 }
 
+/*
+ * Given a virtual address, return a pointer to xen's L2 entry. Caller
+ * needs to unmap the pointer.
+ */
 static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 {
     l3_pgentry_t *pl3e;
@@ -4964,27 +4968,44 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *l2t = alloc_xen_pagetable();
+        l2_pgentry_t *l2t;
+        mfn_t mfn;
 
-        if ( !l2t )
+        mfn = alloc_xen_pagetable_new();
+        if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
+        l2t = map_xen_pagetable_new(mfn);
+
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
             clear_page(l2t);
-            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
+            l3e_write(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
+            pl2e = l2t + l2_table_offset(v);
             l2t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
+
         if ( l2t )
-            free_xen_pagetable(l2t);
+        {
+            ASSERT(!pl2e);
+            ASSERT(!mfn_eq(mfn, INVALID_MFN));
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            free_xen_pagetable_new(mfn);
+        }
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-    pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+
+    if ( !pl2e )
+    {
+        ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
+        pl2e = (l2_pgentry_t *)map_xen_pagetable_new(l3e_get_mfn(*pl3e))
+            + l2_table_offset(v);
+    }
 
  out:
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
@@ -4994,10 +5015,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
 {
     l2_pgentry_t *pl2e;
+    l1_pgentry_t *pl1e = NULL;
 
     pl2e = virt_to_xen_l2e(v);
     if ( !pl2e )
-        return NULL;
+        goto out;
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
@@ -5005,7 +5027,7 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         l1_pgentry_t *l1t = alloc_xen_pagetable();
 
         if ( !l1t )
-            return NULL;
+            goto out;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
@@ -5021,7 +5043,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-    return l2e_to_l1e(*pl2e) + l1_table_offset(v);
+    pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
+    return pl1e;
 }
 
 /* Convert to from superpage-mapping flags for map_pages_to_xen(). */
@@ -5045,7 +5071,7 @@ int map_pages_to_xen(
 {
     bool locking = system_state > SYS_STATE_boot;
     l3_pgentry_t *pl3e = NULL, ol3e;
-    l2_pgentry_t *pl2e, ol2e;
+    l2_pgentry_t *pl2e = NULL, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
     int rc = -ENOMEM;
@@ -5421,6 +5447,7 @@ int map_pages_to_xen(
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl2e);
         UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
@@ -5429,6 +5456,7 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 15/55] x86/mm: rewrite virt_to_xen_l1e
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (13 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 14/55] x86/mm: rewrite xen_to_virt_l2e Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 16/55] x86/mm: switch to new APIs in map_pages_to_xen Hongyan Xia
                   ` (40 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Rewrite this function to use new APIs. Modify its callers to unmap the
pointer returned.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain_page.c | 10 ++++++----
 xen/arch/x86/mm.c          | 30 +++++++++++++++++++++++++-----
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 4a07cfb18e..24083e9a86 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -333,21 +333,23 @@ void unmap_domain_page_global(const void *ptr)
 mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
-    const l1_pgentry_t *pl1e;
+    l1_pgentry_t l1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
         return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
     {
-        pl1e = virt_to_xen_l1e(va);
+        l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
         BUG_ON(!pl1e);
+        l1e = *pl1e;
+        UNMAP_XEN_PAGETABLE_NEW(pl1e);
     }
     else
     {
         ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
-        pl1e = &__linear_l1_table[l1_linear_offset(va)];
+        l1e = __linear_l1_table[l1_linear_offset(va)];
     }
 
-    return l1e_get_mfn(*pl1e);
+    return l1e_get_mfn(l1e);
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ad0d7a0b80..f7fd0e6bad 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5024,26 +5024,44 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *l1t = alloc_xen_pagetable();
+        l1_pgentry_t *l1t;
+        mfn_t mfn;
 
-        if ( !l1t )
+        mfn = alloc_xen_pagetable_new();
+        if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
+
+        l1t = map_xen_pagetable_new(mfn);
+
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
             clear_page(l1t);
-            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
+            l2e_write(pl2e, l2e_from_mfn(mfn, __PAGE_HYPERVISOR));
+            pl1e = l1t + l1_table_offset(v);
             l1t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
+
         if ( l1t )
-            free_xen_pagetable(l1t);
+        {
+            ASSERT(!pl1e);
+            ASSERT(!mfn_eq(mfn, INVALID_MFN));
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            free_xen_pagetable_new(mfn);
+        }
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-    pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+
+    if ( !pl1e )
+    {
+        ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
+        pl1e = (l1_pgentry_t *)map_xen_pagetable_new(l2e_get_mfn(*pl2e))
+            + l1_table_offset(v);
+    }
 
  out:
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
@@ -5447,6 +5465,7 @@ int map_pages_to_xen(
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl1e);
         UNMAP_XEN_PAGETABLE_NEW(pl2e);
         UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
@@ -5456,6 +5475,7 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
-- 
2.17.1




* [Xen-devel] [PATCH v2 16/55] x86/mm: switch to new APIs in map_pages_to_xen
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (14 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 15/55] x86/mm: rewrite virt_to_xen_l1e Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 17/55] x86/mm: drop lXe_to_lYe invocations " Hongyan Xia
                   ` (39 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Hongyan Xia, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Page tables allocated in that function must now be explicitly mapped
before use and unmapped afterwards.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyax@amazon.com>

---
Changed since v1:
  * remove redundant lines
---
 xen/arch/x86/mm.c | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f7fd0e6bad..5bb86935f4 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5185,6 +5185,7 @@ int map_pages_to_xen(
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5206,13 +5207,15 @@ int map_pages_to_xen(
                 goto end_of_loop;
             }
 
-            l2t = alloc_xen_pagetable();
-            if ( l2t == NULL )
+            l2t_mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2t_mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
+            l2t = map_xen_pagetable_new(l2t_mfn);
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
@@ -5227,15 +5230,18 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
-                l2t = NULL;
+                l3e_write_atomic(pl3e,
+                                 l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR));
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
             if ( l2t )
-                free_xen_pagetable(l2t);
+            {
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                free_xen_pagetable_new(l2t_mfn);
+            }
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5298,6 +5304,7 @@ int map_pages_to_xen(
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
                 l1_pgentry_t *l1t;
+                mfn_t l1t_mfn;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5317,13 +5324,15 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = alloc_xen_pagetable();
-                if ( l1t == NULL )
+                l1t_mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(l1t_mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
+                l1t = map_xen_pagetable_new(l1t_mfn);
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5337,15 +5346,18 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(l1t_mfn,
                                                         __PAGE_HYPERVISOR));
-                    l1t = NULL;
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
                 if ( l1t )
-                    free_xen_pagetable(l1t);
+                {
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    free_xen_pagetable_new(l1t_mfn);
+                }
             }
 
             pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
-- 
2.17.1



* [Xen-devel] [PATCH v2 17/55] x86/mm: drop lXe_to_lYe invocations in map_pages_to_xen
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (15 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 16/55] x86/mm: switch to new APIs in map_pages_to_xen Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 18/55] x86/mm: switch to new APIs in modify_xen_mappings Hongyan Xia
                   ` (38 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Map and unmap page tables where necessary.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 40 +++++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5bb86935f4..08af71a261 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5141,8 +5141,10 @@ int map_pages_to_xen(
                 else
                 {
                     l2_pgentry_t *l2t;
+                    mfn_t l2t_mfn = l3e_get_mfn(ol3e);
+
+                    l2t = map_xen_pagetable_new(l2t_mfn);
 
-                    l2t = l3e_to_l2e(ol3e);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
                         ol2e = l2t[i];
@@ -5154,10 +5156,12 @@ int map_pages_to_xen(
                         {
                             unsigned int j;
                             l1_pgentry_t *l1t;
+                            mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                            l1t = l2e_to_l1e(ol2e);
+                            l1t = map_xen_pagetable_new(l1t_mfn);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
                                 flush_flags(l1e_get_flags(l1t[j]));
+                            UNMAP_XEN_PAGETABLE_NEW(l1t);
                         }
                     }
                     flush_area(virt, flush_flags);
@@ -5166,9 +5170,9 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable(l2e_to_l1e(ol2e));
+                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
                     }
-                    free_xen_pagetable(l2t);
+                    free_xen_pagetable_new(l2t_mfn);
                 }
             }
 
@@ -5273,12 +5277,14 @@ int map_pages_to_xen(
                 else
                 {
                     l1_pgentry_t *l1t;
+                    mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                    l1t = l2e_to_l1e(ol2e);
+                    l1t = map_xen_pagetable_new(l1t_mfn);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(l1t);
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    free_xen_pagetable_new(l1t_mfn);
                 }
             }
 
@@ -5292,12 +5298,14 @@ int map_pages_to_xen(
             /* Normal page mapping. */
             if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
             {
+                /* XXX This forces page table to be populated */
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
+                UNMAP_XEN_PAGETABLE_NEW(pl1e);
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5360,9 +5368,11 @@ int map_pages_to_xen(
                 }
             }
 
-            pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
+            pl1e  = map_xen_pagetable_new(l2e_get_mfn((*pl2e)));
+            pl1e += l1_table_offset(virt);
             ol1e  = *pl1e;
             l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
+            UNMAP_XEN_PAGETABLE_NEW(pl1e);
             if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
             {
                 unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0);
@@ -5383,6 +5393,7 @@ int map_pages_to_xen(
             {
                 unsigned long base_mfn;
                 l1_pgentry_t *l1t;
+                mfn_t l1t_mfn;
 
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5406,12 +5417,15 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = l2e_to_l1e(ol2e);
+                l1t_mfn = l2e_get_mfn(ol2e);
+                l1t = map_xen_pagetable_new(l1t_mfn);
+
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
+                UNMAP_XEN_PAGETABLE_NEW(l1t);
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
                     l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
@@ -5421,7 +5435,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable(l2e_to_l1e(ol2e));
+                    free_xen_pagetable_new(l1t_mfn);
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5437,6 +5451,7 @@ int map_pages_to_xen(
         {
             unsigned long base_mfn;
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
 
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5454,7 +5469,9 @@ int map_pages_to_xen(
                 goto end_of_loop;
             }
 
-            l2t = l3e_to_l2e(ol3e);
+            l2t_mfn = l3e_get_mfn(ol3e);
+            l2t = map_xen_pagetable_new(l2t_mfn);
+
             base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
@@ -5462,6 +5479,7 @@ int map_pages_to_xen(
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
                      (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
@@ -5471,7 +5489,7 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable(l3e_to_l2e(ol3e));
+                free_xen_pagetable_new(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-- 
2.17.1



* [Xen-devel] [PATCH v2 18/55] x86/mm: switch to new APIs in modify_xen_mappings
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (16 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 17/55] x86/mm: drop lXe_to_lYe invocations " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings Hongyan Xia
                   ` (37 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Hongyan Xia, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Page tables allocated in that function must now be explicitly mapped
before use and unmapped afterwards.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyax@amazon.com>

---
Changed since v1:
  * remove redundant lines
---
 xen/arch/x86/mm.c | 33 ++++++++++++++++++++++-----------
 1 file changed, 22 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 08af71a261..a812ef0244 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5562,6 +5562,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
             l2_pgentry_t *l2t;
+            mfn_t mfn;
 
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
@@ -5578,13 +5579,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            l2t = alloc_xen_pagetable();
-            if ( !l2t )
+            mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
+            l2t = map_xen_pagetable_new(mfn);
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5595,14 +5598,16 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
-                l2t = NULL;
+                l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             if ( l2t )
-                free_xen_pagetable(l2t);
+            {
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                free_xen_pagetable_new(mfn);
+            }
         }
 
         /*
@@ -5637,15 +5642,18 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else
             {
                 l1_pgentry_t *l1t;
+                mfn_t mfn;
 
                 /* PSE: shatter the superpage and try again. */
-                l1t = alloc_xen_pagetable();
-                if ( !l1t )
+                mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
+                l1t = map_xen_pagetable_new(mfn);
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5655,14 +5663,17 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
                                                         __PAGE_HYPERVISOR));
-                    l1t = NULL;
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 if ( l1t )
-                    free_xen_pagetable(l1t);
+                {
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    free_xen_pagetable_new(mfn);
+                }
             }
         }
         else
-- 
2.17.1



* [Xen-devel] [PATCH v2 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (17 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 18/55] x86/mm: switch to new APIs in modify_xen_mappings Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 20/55] x86/mm: switch to new APIs in arch_init_memory Hongyan Xia
                   ` (36 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a812ef0244..6fb8c92543 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5532,8 +5532,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
     l3_pgentry_t *pl3e = NULL;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
+    l2_pgentry_t *pl2e = NULL;
     unsigned int  i;
     unsigned long v = s;
     int rc = -ENOMEM;
@@ -5614,7 +5613,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          * The L3 entry has been verified to be present, and we've dealt with
          * 1G pages as well, so the L2 table cannot require allocation.
          */
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e += l2_table_offset(v);
 
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
@@ -5678,14 +5678,16 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         }
         else
         {
-            l1_pgentry_t nl1e, *l1t;
+            l1_pgentry_t nl1e, *l1t, *pl1e;
+            mfn_t l1t_mfn;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
              * present, and we've dealt with 2M pages as well, so the L1 table
              * cannot require allocation.
              */
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e += l1_table_offset(v);
 
             /* Confirm the caller isn't trying to create new mappings. */
             if ( !(l1e_get_flags(*pl1e) & _PAGE_PRESENT) )
@@ -5696,6 +5698,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
+            UNMAP_XEN_PAGETABLE_NEW(pl1e);
             v += PAGE_SIZE;
 
             /*
@@ -5725,10 +5728,12 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 goto end_of_loop;
             }
 
-            l1t = l2e_to_l1e(*pl2e);
+            l1t_mfn = l2e_get_mfn(*pl2e);
+            l1t = map_xen_pagetable_new(l1t_mfn);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                 if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
             if ( i == L1_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L2E and free the L1 page. */
@@ -5736,7 +5741,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l1t);
+                free_xen_pagetable_new(l1t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5767,11 +5772,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         {
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
 
-            l2t = l3e_to_l2e(*pl3e);
+            l2t_mfn = l3e_get_mfn(*pl3e);
+            l2t = map_xen_pagetable_new(l2t_mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 if ( l2e_get_intpte(l2t[i]) != 0 )
                     break;
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L3E and free the L2 page. */
@@ -5779,12 +5787,13 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l2t);
+                free_xen_pagetable_new(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl2e);
         UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
@@ -5794,6 +5803,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
-- 
2.17.1



* [Xen-devel] [PATCH v2 20/55] x86/mm: switch to new APIs in arch_init_memory
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (18 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 21/55] x86_64/mm: introduce pl2e in paging_init Hongyan Xia
                   ` (35 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 6fb8c92543..8706dc0174 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -353,19 +353,22 @@ void __init arch_init_memory(void)
             ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS);
             if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) )
             {
-                l3_pgentry_t *l3tab = alloc_xen_pagetable();
+                mfn_t l3tab_mfn = alloc_xen_pagetable_new();
 
-                if ( l3tab )
+                if ( !mfn_eq(l3tab_mfn, INVALID_MFN) )
                 {
-                    const l3_pgentry_t *l3idle =
-                        l4e_to_l3e(idle_pg_table[l4_table_offset(split_va)]);
+                    l3_pgentry_t *l3idle =
+                        map_xen_pagetable_new(
+                            l4e_get_mfn(idle_pg_table[l4_table_offset(split_va)]));
+                    l3_pgentry_t *l3tab = map_xen_pagetable_new(l3tab_mfn);
 
                     for ( i = 0; i < l3_table_offset(split_va); ++i )
                         l3tab[i] = l3idle[i];
                     for ( ; i < L3_PAGETABLE_ENTRIES; ++i )
                         l3tab[i] = l3e_empty();
-                    split_l4e = l4e_from_mfn(virt_to_mfn(l3tab),
-                                             __PAGE_HYPERVISOR_RW);
+                    split_l4e = l4e_from_mfn(l3tab_mfn, __PAGE_HYPERVISOR_RW);
+                    UNMAP_XEN_PAGETABLE_NEW(l3idle);
+                    UNMAP_XEN_PAGETABLE_NEW(l3tab);
                 }
                 else
                     ++root_pgt_pv_xen_slots;
-- 
2.17.1



* [Xen-devel] [PATCH v2 21/55] x86_64/mm: introduce pl2e in paging_init
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (19 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 20/55] x86/mm: switch to new APIs in arch_init_memory Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs " Hongyan Xia
                   ` (34 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Introduce pl2e as the iteration cursor so that l2_ro_mpt can keep
pointing at the page table itself.

No functional change.
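As an illustrative aside (an editor's sketch, not part of the patch), the
cursor/base split can be shown in a few self-contained lines. The names
mirror the patch, but the types are simplified stand-ins for Xen's real
l2_pgentry_t:

```c
#include <stddef.h>

typedef unsigned long l2e;  /* simplified stand-in for l2_pgentry_t */

/*
 * pl2e walks the entries while l2_ro_mpt keeps addressing the table
 * base, so the base can still be installed into an L3 entry later.
 */
void fill_table(l2e *l2_ro_mpt, size_t n)
{
    l2e *pl2e = l2_ro_mpt;      /* cursor starts at the base */
    size_t i;

    for ( i = 0; i < n; i++ )
        *pl2e++ = i;            /* only the cursor advances */
}
```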

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 795a467462..ac5e366e5b 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -497,7 +497,7 @@ void __init paging_init(void)
     unsigned long i, mpt_size, va;
     unsigned int n, memflags;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
     struct page_info *l1_pg;
 
     /*
@@ -547,7 +547,7 @@ void __init paging_init(void)
             (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)));
 
         if ( cpu_has_page1gb &&
-             !((unsigned long)l2_ro_mpt & ~PAGE_MASK) &&
+             !((unsigned long)pl2e & ~PAGE_MASK) &&
              (mpt_size >> L3_PAGETABLE_SHIFT) > (i >> PAGETABLE_ORDER) )
         {
             unsigned int k, holes;
@@ -607,7 +607,7 @@ void __init paging_init(void)
             memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)),
                    0xFF, 1UL << L2_PAGETABLE_SHIFT);
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
+        if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
             if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
                 goto nomem;
@@ -615,13 +615,14 @@ void __init paging_init(void)
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_paddr(__pa(l2_ro_mpt),
                                      __PAGE_HYPERVISOR_RO | _PAGE_USER));
+            pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
         /* NB. Cannot be GLOBAL: guest user mode should not see it. */
         if ( l1_pg )
-            l2e_write(l2_ro_mpt, l2e_from_page(
+            l2e_write(pl2e, l2e_from_page(
                 l1_pg, /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
-        l2_ro_mpt++;
+        pl2e++;
     }
 #undef CNT
 #undef MFN
@@ -637,7 +638,8 @@ void __init paging_init(void)
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
               l3e_from_paddr(__pa(l2_ro_mpt), __PAGE_HYPERVISOR_RO));
-    l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
+    pl2e = l2_ro_mpt;
+    pl2e += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
@@ -650,7 +652,7 @@ void __init paging_init(void)
              sizeof(*compat_machine_to_phys_mapping))
     BUILD_BUG_ON((sizeof(*frame_table) & ~sizeof(*frame_table)) % \
                  sizeof(*compat_machine_to_phys_mapping));
-    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, l2_ro_mpt++ )
+    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, pl2e++ )
     {
         memflags = MEMF_node(phys_to_nid(i <<
             (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)));
@@ -672,7 +674,7 @@ void __init paging_init(void)
                         (i << L2_PAGETABLE_SHIFT)),
                0xFF, 1UL << L2_PAGETABLE_SHIFT);
         /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */
-        l2e_write(l2_ro_mpt, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
+        l2e_write(pl2e, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
     }
 #undef CNT
 #undef MFN
-- 
2.17.1


* [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs in paging_init
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (20 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 21/55] x86_64/mm: introduce pl2e in paging_init Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-10-01 11:51   ` Wei Liu
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Hongyan Xia
                   ` (33 subsequent siblings)
  55 siblings, 1 reply; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Hongyan Xia, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyax@amazon.com>

---
Changed since v1:
  * use a global mapping for compat_idle_pg_table_l2; otherwise
    unmapping l2_ro_mpt will also unmap it.
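Why a global mapping is needed can be sketched with a toy recycling-slot
mapper (a hypothetical mock, not Xen's real map_domain_page machinery):
an ephemeral mapping's pointer is only valid until the matching unmap,
so a pointer stored in a long-lived variable such as
compat_idle_pg_table_l2 must come from a mapping that is never torn
down.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy fixmap with one recyclable slot, mimicking ephemeral mappings. */
enum { NSLOTS = 1 };
static const void *slot_page[NSLOTS];
static bool slot_busy[NSLOTS];

static int map_ephemeral(const void *page)
{
    int i;

    for ( i = 0; i < NSLOTS; i++ )
        if ( !slot_busy[i] )
        {
            slot_busy[i] = true;
            slot_page[i] = page;
            return i;
        }

    return -1;
}

static void unmap_ephemeral(int slot)
{
    slot_busy[slot] = false;    /* the slot may now be recycled */
}
```

After an unmap, the next map may reuse the same slot for a different
page, so any pointer derived from the first mapping would silently
reference the wrong page.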
---
 xen/arch/x86/x86_64/mm.c | 50 +++++++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index ac5e366e5b..c8c71564ba 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -496,9 +496,10 @@ void __init paging_init(void)
 {
     unsigned long i, mpt_size, va;
     unsigned int n, memflags;
-    l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
+    l3_pgentry_t *l3_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     struct page_info *l1_pg;
+    mfn_t l3_ro_mpt_mfn, l2_ro_mpt_mfn;
 
     /*
      * We setup the L3s for 1:1 mapping if host support memory hotplug
@@ -511,22 +512,29 @@ void __init paging_init(void)
         if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
               _PAGE_PRESENT) )
         {
-            l3_pgentry_t *pl3t = alloc_xen_pagetable();
+            l3_pgentry_t *pl3t;
+            mfn_t mfn;
 
-            if ( !pl3t )
+            mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(mfn, INVALID_MFN) )
                 goto nomem;
+
+            pl3t = map_xen_pagetable_new(mfn);
             clear_page(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
-                      l4e_from_paddr(__pa(pl3t), __PAGE_HYPERVISOR_RW));
+                      l4e_from_mfn(mfn, __PAGE_HYPERVISOR_RW));
+            UNMAP_XEN_PAGETABLE_NEW(pl3t);
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    if ( (l3_ro_mpt = alloc_xen_pagetable()) == NULL )
+    l3_ro_mpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(l3_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
+    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
     clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
-              l4e_from_paddr(__pa(l3_ro_mpt), __PAGE_HYPERVISOR_RO | _PAGE_USER));
+              l4e_from_mfn(l3_ro_mpt_mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
 
     /*
      * Allocate and map the machine-to-phys table.
@@ -609,12 +617,21 @@ void __init paging_init(void)
         }
         if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
-            if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+            /*
+             * Unmap l2_ro_mpt, which could've been mapped in previous
+             * iteration.
+             */
+            unmap_xen_pagetable_new(l2_ro_mpt);
+
+            l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 goto nomem;
+
+            l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
             clear_page(l2_ro_mpt);
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                      l3e_from_paddr(__pa(l2_ro_mpt),
-                                     __PAGE_HYPERVISOR_RO | _PAGE_USER));
+                      l3e_from_mfn(l2_ro_mpt_mfn,
+                                   __PAGE_HYPERVISOR_RO | _PAGE_USER));
             pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
@@ -626,18 +643,23 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
                  l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
     l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
         HIRO_COMPAT_MPT_VIRT_START)]);
-    if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+
+    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
-    compat_idle_pg_table_l2 = l2_ro_mpt;
+    l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+    compat_idle_pg_table_l2 = map_domain_page_global(l2_ro_mpt_mfn);
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
-              l3e_from_paddr(__pa(l2_ro_mpt), __PAGE_HYPERVISOR_RO));
+              l3e_from_mfn(l2_ro_mpt_mfn, __PAGE_HYPERVISOR_RO));
     pl2e = l2_ro_mpt;
     pl2e += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
     /* Allocate and map the compatibility mode machine-to-phys table. */
@@ -679,6 +701,8 @@ void __init paging_init(void)
 #undef CNT
 #undef MFN
 
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+
     machine_to_phys_mapping_valid = 1;
 
     /* Set up linear page table mapping. */
-- 
2.17.1



* [Xen-devel] [PATCH v2 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (21 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table Hongyan Xia
                   ` (32 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index c8c71564ba..c1daa04cf5 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -649,8 +649,10 @@ void __init paging_init(void)
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
                  l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
-        HIRO_COMPAT_MPT_VIRT_START)]);
+
+    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
+                                        HIRO_COMPAT_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
 
     l2_ro_mpt_mfn = alloc_xen_pagetable_new();
     if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
@@ -702,6 +704,7 @@ void __init paging_init(void)
 #undef MFN
 
     UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
 
     machine_to_phys_mapping_valid = 1;
 
-- 
2.17.1



* [Xen-devel] [PATCH v2 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (22 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 25/55] x86_64/mm: introduce pl2e " Hongyan Xia
                   ` (31 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index c1daa04cf5..103932720b 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -480,8 +480,6 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
                    /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
-            l2_ro_mpt = NULL;
         i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));
     }
 #undef CNT
-- 
2.17.1



* [Xen-devel] [PATCH v2 25/55] x86_64/mm: introduce pl2e in setup_m2p_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (23 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 26/55] x86_64/mm: switch to new APIs " Hongyan Xia
                   ` (30 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 103932720b..f31bd4ffde 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -397,7 +397,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 {
     unsigned long i, va, smap, emap;
     unsigned int n;
-    l2_pgentry_t *l2_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
     l3_pgentry_t *l3_ro_mpt = NULL;
     int ret = 0;
 
@@ -458,7 +458,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                   _PAGE_PSE));
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
               _PAGE_PRESENT )
-                l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
+                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
                   l2_table_offset(va);
             else
             {
@@ -473,11 +473,12 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                           l3e_from_paddr(__pa(l2_ro_mpt),
                                          __PAGE_HYPERVISOR_RO | _PAGE_USER));
-                l2_ro_mpt += l2_table_offset(va);
+                pl2e = l2_ro_mpt;
+                pl2e += l2_table_offset(va);
             }
 
             /* NB. Cannot be GLOBAL: guest user mode should not see it. */
-            l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
+            l2e_write(pl2e, l2e_from_mfn(mfn,
                    /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
         }
         i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));
-- 
2.17.1



* [Xen-devel] [PATCH v2 26/55] x86_64/mm: switch to new APIs in setup_m2p_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (24 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 25/55] x86_64/mm: introduce pl2e " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Hongyan Xia
                   ` (29 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index f31bd4ffde..d452ed3966 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -397,9 +397,10 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 {
     unsigned long i, va, smap, emap;
     unsigned int n;
-    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     l3_pgentry_t *l3_ro_mpt = NULL;
     int ret = 0;
+    mfn_t l2_ro_mpt_mfn;
 
     ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
             & _PAGE_PRESENT);
@@ -462,17 +463,19 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                   l2_table_offset(va);
             else
             {
-                l2_ro_mpt = alloc_xen_pagetable();
-                if ( !l2_ro_mpt )
+                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
+                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
                 clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                          l3e_from_paddr(__pa(l2_ro_mpt),
-                                         __PAGE_HYPERVISOR_RO | _PAGE_USER));
+                          l3e_from_mfn(l2_ro_mpt_mfn,
+                                       __PAGE_HYPERVISOR_RO | _PAGE_USER));
                 pl2e = l2_ro_mpt;
                 pl2e += l2_table_offset(va);
             }
@@ -488,6 +491,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ret = setup_compat_m2p_table(info);
 error:
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
     return ret;
 }
 
-- 
2.17.1



* [Xen-devel] [PATCH v2 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (25 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 26/55] x86_64/mm: switch to new APIs " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 28/55] efi: use new page table APIs in copy_mapping Hongyan Xia
                   ` (28 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d452ed3966..c41715cd56 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -400,11 +400,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
     l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     l3_pgentry_t *l3_ro_mpt = NULL;
     int ret = 0;
-    mfn_t l2_ro_mpt_mfn;
+    mfn_t l2_ro_mpt_mfn, l3_ro_mpt_mfn;
 
     ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
             & _PAGE_PRESENT);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
+                                        RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -459,8 +461,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                   _PAGE_PSE));
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
               _PAGE_PRESENT )
-                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
-                  l2_table_offset(va);
+            {
+                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
+                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                ASSERT(l2_ro_mpt);
+                pl2e = l2_ro_mpt + l2_table_offset(va);
+            }
             else
             {
                 UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
@@ -492,6 +499,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
     ret = setup_compat_m2p_table(info);
 error:
     UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
     return ret;
 }
 
-- 
2.17.1



* [Xen-devel] [PATCH v2 28/55] efi: use new page table APIs in copy_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (26 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 29/55] efi: avoid using global variable " Hongyan Xia
                   ` (27 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich

From: Wei Liu <wei.liu2@citrix.com>

Upon inspection, ARM does not have alloc_xen_pagetable, so this function
is x86-only, which means it is safe for us to change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
XXX test this in gitlab ci to be sure.
---
 xen/common/efi/boot.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 79193784ff..62b5944e61 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1440,16 +1440,22 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
             continue;
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            l3dst = alloc_xen_pagetable();
-            BUG_ON(!l3dst);
+            mfn_t l3t_mfn;
+
+            l3t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
+            l3dst = map_xen_pagetable_new(l3t_mfn);
             clear_page(l3dst);
             efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
-                l4e_from_paddr(virt_to_maddr(l3dst), __PAGE_HYPERVISOR);
+                l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            l3dst = l4e_to_l3e(l4e);
-        l3src = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+            l3dst = map_xen_pagetable_new(l4e_get_mfn(l4e));
+        l3src = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
+        UNMAP_XEN_PAGETABLE_NEW(l3src);
+        UNMAP_XEN_PAGETABLE_NEW(l3dst);
     }
 }
 
-- 
2.17.1



* [Xen-devel] [PATCH v2 29/55] efi: avoid using global variable in copy_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (27 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 28/55] efi: use new page table APIs in copy_mapping Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 30/55] efi: use new page table APIs in efi_init_memory Hongyan Xia
                   ` (26 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich

From: Wei Liu <wei.liu2@citrix.com>

We will soon switch efi_l4_pgtable to use an ephemeral mapping. Make
copy_mapping take a pointer to the mapping instead of using the global
variable.

No functional change intended.
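The refactoring pattern (an editor's illustrative sketch with simplified
types, not the patch itself) is simply to thread the table through as a
parameter so the helper no longer depends on one particular global:

```c
typedef unsigned long l4e;  /* simplified stand-in for l4_pgentry_t */

/*
 * The caller passes the root table in, so the same code can operate
 * on any table, including one behind a short-lived mapping.
 */
void copy_slot(l4e *l4, unsigned int slot, l4e val)
{
    l4[slot] = val;    /* works on whichever table the caller mapped */
}
```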

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/efi/boot.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 62b5944e61..64a287690a 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1423,7 +1423,8 @@ static int __init parse_efi_param(const char *s)
 custom_param("efi", parse_efi_param);
 
 #ifndef USE_SET_VIRTUAL_ADDRESS_MAP
-static __init void copy_mapping(unsigned long mfn, unsigned long end,
+static __init void copy_mapping(l4_pgentry_t *l4,
+                                unsigned long mfn, unsigned long end,
                                 bool (*is_valid)(unsigned long smfn,
                                                  unsigned long emfn))
 {
@@ -1431,7 +1432,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
 
     for ( ; mfn < end; mfn = next )
     {
-        l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
+        l4_pgentry_t l4e = l4[l4_table_offset(mfn << PAGE_SHIFT)];
         l3_pgentry_t *l3src, *l3dst;
         unsigned long va = (unsigned long)mfn_to_virt(mfn);
 
@@ -1446,7 +1447,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
             l3dst = map_xen_pagetable_new(l3t_mfn);
             clear_page(l3dst);
-            efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
+            l4[l4_table_offset(mfn << PAGE_SHIFT)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
@@ -1606,7 +1607,7 @@ void __init efi_init_memory(void)
     BUG_ON(!efi_l4_pgtable);
     clear_page(efi_l4_pgtable);
 
-    copy_mapping(0, max_page, ram_range_valid);
+    copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
 
     /* Insert non-RAM runtime mappings inside the direct map. */
     for ( i = 0; i < efi_memmap_size; i += efi_mdesc_size )
@@ -1619,7 +1620,7 @@ void __init efi_init_memory(void)
                 desc->Type == EfiBootServicesData))) &&
              desc->VirtualStart != INVALID_VIRTUAL_ADDRESS &&
              desc->VirtualStart != desc->PhysicalStart )
-            copy_mapping(PFN_DOWN(desc->PhysicalStart),
+            copy_mapping(efi_l4_pgtable, PFN_DOWN(desc->PhysicalStart),
                          PFN_UP(desc->PhysicalStart +
                                 (desc->NumberOfPages << EFI_PAGE_SHIFT)),
                          rt_range_valid);
-- 
2.17.1



* [Xen-devel] [PATCH v2 30/55] efi: use new page table APIs in efi_init_memory
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (28 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 29/55] efi: avoid using global variable " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 31/55] efi: add emacs block to boot.c Hongyan Xia
                   ` (25 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/efi/boot.c | 39 +++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 64a287690a..1d1420f02c 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1637,39 +1637,50 @@ void __init efi_init_memory(void)
 
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            pl3e = alloc_xen_pagetable();
-            BUG_ON(!pl3e);
+            mfn_t l3t_mfn;
+
+            l3t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
+            pl3e = map_xen_pagetable_new(l3t_mfn);
             clear_page(pl3e);
             efi_l4_pgtable[l4_table_offset(addr)] =
-                l4e_from_paddr(virt_to_maddr(pl3e), __PAGE_HYPERVISOR);
+                l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = l4e_to_l3e(l4e);
+            pl3e = map_xen_pagetable_new(l4e_get_mfn(l4e));
         pl3e += l3_table_offset(addr);
+
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            pl2e = alloc_xen_pagetable();
-            BUG_ON(!pl2e);
+            mfn_t l2t_mfn;
+
+            l2t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l2t_mfn, INVALID_MFN));
+            pl2e = map_xen_pagetable_new(l2t_mfn);
             clear_page(pl2e);
-            *pl3e = l3e_from_paddr(virt_to_maddr(pl2e), __PAGE_HYPERVISOR);
+            *pl3e = l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = l3e_to_l2e(*pl3e);
+            pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
         }
         pl2e += l2_table_offset(addr);
+
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l1t = alloc_xen_pagetable();
-            BUG_ON(!l1t);
+            mfn_t l1t_mfn;
+
+            l1t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l1t_mfn, INVALID_MFN));
+            l1t = map_xen_pagetable_new(l1t_mfn);
             clear_page(l1t);
-            *pl2e = l2e_from_paddr(virt_to_maddr(l1t), __PAGE_HYPERVISOR);
+            *pl2e = l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = l2e_to_l1e(*pl2e);
+            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
         }
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
@@ -1681,6 +1692,10 @@ void __init efi_init_memory(void)
             extra_head = extra->next;
             xfree(extra);
         }
+
+        UNMAP_XEN_PAGETABLE_NEW(l1t);
+        UNMAP_XEN_PAGETABLE_NEW(pl2e);
+        UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
     /* Insert Xen mappings. */
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 63+ messages in thread
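The efi_init_memory() hunks above convert the code to the new alloc-map-unmap-free life cycle for Xen page tables. A minimal standalone sketch of that life cycle follows; all type and function names here are hypothetical stand-ins for the Xen APIs (Xen code does not build in isolation), with an ordinary heap page playing the role of a domheap page:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-ins for the Xen types and APIs in this series. */
typedef struct { uint64_t m; } mfn_t;
#define INVALID_MFN ((mfn_t){ ~0ULL })
#define PAGE_SIZE   4096

static int mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }

/*
 * In real Xen, alloc_xen_pagetable_new() returns an MFN and
 * map_xen_pagetable_new() establishes a temporary mapping.  Here the
 * "MFN" is just the page's address so the pattern can run standalone.
 */
static mfn_t alloc_xen_pagetable_new(void)
{
    void *p = malloc(PAGE_SIZE);
    return p ? (mfn_t){ (uintptr_t)p } : INVALID_MFN;
}
static void *map_xen_pagetable_new(mfn_t mfn) { return (void *)(uintptr_t)mfn.m; }
static void unmap_xen_pagetable_new(void *va) { (void)va; }
static void free_xen_pagetable_new(mfn_t mfn) { free((void *)(uintptr_t)mfn.m); }

/* The new life cycle: alloc -> map -> use -> unmap -> free. */
static int pagetable_roundtrip(void)
{
    mfn_t mfn = alloc_xen_pagetable_new();
    uint64_t *pt;

    if ( mfn_eq(mfn, INVALID_MFN) )
        return -1;

    pt = map_xen_pagetable_new(mfn);   /* must map before touching the page */
    memset(pt, 0, PAGE_SIZE);          /* clear_page() equivalent */
    pt[0] = 0xdeadbeefULL;             /* write an entry through the mapping */
    unmap_xen_pagetable_new(pt);       /* must unmap when done */

    free_xen_pagetable_new(mfn);
    return 0;
}
```

The point of the explicit map/unmap pair is that, once the direct map is gone, a bare MFN can no longer be dereferenced through a fixed linear address; every access must go through a mapping with a bounded lifetime.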

* [Xen-devel] [PATCH v2 31/55] efi: add emacs block to boot.c
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (29 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 30/55] efi: use new page table APIs in efi_init_memory Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 32/55] efi: switch EFI L4 table to use new APIs Hongyan Xia
                   ` (24 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Jan Beulich

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/efi/boot.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 1d1420f02c..3868293d06 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1705,3 +1705,13 @@ void __init efi_init_memory(void)
 #endif
 }
 #endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 32/55] efi: switch EFI L4 table to use new APIs
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (30 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 31/55] efi: add emacs block to boot.c Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 33/55] x86/smpboot: add emacs block Hongyan Xia
                   ` (23 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

This requires storing the MFN instead of the linear address of the L4
table. Adjust the code accordingly.
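The essential change in this patch is swapping the sentinel: the global used to be a pointer tested against NULL, and is now an MFN tested against INVALID_MFN, with a map/unmap around each use. A hypothetical standalone model of that check (types and helper names are illustrative, not the real Xen definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the Xen mfn_t type and its sentinel. */
typedef struct { uint64_t m; } mfn_t;
#define INVALID_MFN             ((mfn_t){ ~0ULL })
#define INVALID_MFN_INITIALIZER { ~0ULL }

static int mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }

/* Was "l4_pgentry_t *efi_l4_pgtable" tested against NULL; now an MFN. */
static mfn_t efi_l4_mfn = INVALID_MFN_INITIALIZER;

static void efi_set_l4_mfn(mfn_t m) { efi_l4_mfn = m; }

/* Mirrors the "!mfn_eq(efi_l4_mfn, INVALID_MFN)" checks in the patch. */
static int efi_have_pgtables(void)
{
    return !mfn_eq(efi_l4_mfn, INVALID_MFN);
}
```

Because mfn_t is a single-member struct rather than an integer, the comparison must go through mfn_eq(); this is the same type-safety idiom Xen uses to keep MFNs, PFNs, and plain integers from being mixed up silently.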

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/efi/runtime.h | 12 +++++++++---
 xen/common/efi/boot.c      |  8 ++++++--
 xen/common/efi/efi.h       |  3 ++-
 xen/common/efi/runtime.c   |  8 ++++----
 4 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index d9eb8f5c27..277d237953 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -2,11 +2,17 @@
 #include <asm/mc146818rtc.h>
 
 #ifndef COMPAT
-l4_pgentry_t *__read_mostly efi_l4_pgtable;
+mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
 
 void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
 {
-    if ( efi_l4_pgtable )
-        l4e_write(efi_l4_pgtable + l4idx, l4e);
+    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
+    {
+        l4_pgentry_t *l4t;
+
+        l4t = map_xen_pagetable_new(efi_l4_mfn);
+        l4e_write(l4t + l4idx, l4e);
+        UNMAP_XEN_PAGETABLE_NEW(l4t);
+    }
 }
 #endif
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 3868293d06..f55d6a6d76 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1488,6 +1488,7 @@ void __init efi_init_memory(void)
         unsigned int prot;
     } *extra, *extra_head = NULL;
 #endif
+    l4_pgentry_t *efi_l4_pgtable;
 
     free_ebmalloc_unused_mem();
 
@@ -1603,8 +1604,9 @@ void __init efi_init_memory(void)
                                  mdesc_ver, efi_memmap);
 #else
     /* Set up 1:1 page tables to do runtime calls in "physical" mode. */
-    efi_l4_pgtable = alloc_xen_pagetable();
-    BUG_ON(!efi_l4_pgtable);
+    efi_l4_mfn = alloc_xen_pagetable_new();
+    BUG_ON(mfn_eq(efi_l4_mfn, INVALID_MFN));
+    efi_l4_pgtable = map_xen_pagetable_new(efi_l4_mfn);
     clear_page(efi_l4_pgtable);
 
     copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
@@ -1703,6 +1705,8 @@ void __init efi_init_memory(void)
           i < l4_table_offset(DIRECTMAP_VIRT_END); ++i )
         efi_l4_pgtable[i] = idle_pg_table[i];
 #endif
+
+    UNMAP_XEN_PAGETABLE_NEW(efi_l4_pgtable);
 }
 #endif
 
diff --git a/xen/common/efi/efi.h b/xen/common/efi/efi.h
index 6b9c56ead1..139b660ed7 100644
--- a/xen/common/efi/efi.h
+++ b/xen/common/efi/efi.h
@@ -6,6 +6,7 @@
 #include <efi/eficapsule.h>
 #include <efi/efiapi.h>
 #include <xen/efi.h>
+#include <xen/mm.h>
 #include <xen/spinlock.h>
 #include <asm/page.h>
 
@@ -29,7 +30,7 @@ extern UINTN efi_memmap_size, efi_mdesc_size;
 extern void *efi_memmap;
 
 #ifdef CONFIG_X86
-extern l4_pgentry_t *efi_l4_pgtable;
+extern mfn_t efi_l4_mfn;
 #endif
 
 extern const struct efi_pci_rom *efi_pci_roms;
diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
index ab53ebcc55..d4b04a04f4 100644
--- a/xen/common/efi/runtime.c
+++ b/xen/common/efi/runtime.c
@@ -85,7 +85,7 @@ struct efi_rs_state efi_rs_enter(void)
     static const u32 mxcsr = MXCSR_DEFAULT;
     struct efi_rs_state state = { .cr3 = 0 };
 
-    if ( !efi_l4_pgtable )
+    if ( mfn_eq(efi_l4_mfn, INVALID_MFN) )
         return state;
 
     state.cr3 = read_cr3();
@@ -111,7 +111,7 @@ struct efi_rs_state efi_rs_enter(void)
         lgdt(&gdt_desc);
     }
 
-    switch_cr3_cr4(virt_to_maddr(efi_l4_pgtable), read_cr4());
+    switch_cr3_cr4(mfn_to_maddr(efi_l4_mfn), read_cr4());
 
     return state;
 }
@@ -140,9 +140,9 @@ void efi_rs_leave(struct efi_rs_state *state)
 
 bool efi_rs_using_pgtables(void)
 {
-    return efi_l4_pgtable &&
+    return !mfn_eq(efi_l4_mfn, INVALID_MFN) &&
            (smp_processor_id() == efi_rs_on_cpu) &&
-           (read_cr3() == virt_to_maddr(efi_l4_pgtable));
+           (read_cr3() == mfn_to_maddr(efi_l4_mfn));
 }
 
 unsigned long efi_get_time(void)
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 33/55] x86/smpboot: add emacs block
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (31 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 32/55] efi: switch EFI L4 table to use new APIs Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 34/55] x86/smpboot: clone_mapping should have one exit path Hongyan Xia
                   ` (22 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 5b3be25f8a..55b99644af 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1378,3 +1378,13 @@ void __init smp_intr_init(void)
     set_direct_apic_vector(INVALIDATE_TLB_VECTOR, invalidate_interrupt);
     set_direct_apic_vector(CALL_FUNCTION_VECTOR, call_function_interrupt);
 }
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 34/55] x86/smpboot: clone_mapping should have one exit path
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (32 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 33/55] x86/smpboot: add emacs block Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping Hongyan Xia
                   ` (21 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up page table mappings in the exit path.

No functional change.
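The shape this patch gives clone_mapping() is the classic single-exit-path idiom: every failure sets rc and jumps to a common "out" label, so that later patches can hang the UNMAP_XEN_PAGETABLE_NEW() calls in exactly one place. A self-contained sketch of the idiom (the work and resources here are placeholders, not the real clone_mapping logic):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/*
 * Single-exit-path pattern: all error returns funnel through "out",
 * where cleanup happens regardless of which path was taken.
 */
static int do_work(int fail_early, int fail_late)
{
    void *a = NULL, *b = NULL;  /* NULL so cleanup is safe on any path */
    int rc;

    if ( fail_early )
    {
        rc = -EINVAL;
        goto out;
    }

    a = malloc(16);
    if ( !a )
    {
        rc = -ENOMEM;
        goto out;
    }

    if ( fail_late )
    {
        rc = -EINVAL;
        goto out;
    }

    b = malloc(16);
    if ( !b )
    {
        rc = -ENOMEM;
        goto out;
    }

    rc = 0;
 out:
    /* One place to release resources, whatever path got us here. */
    free(b);
    free(a);
    return rc;
}
```

Initialising the resource pointers to NULL is what makes the unconditional cleanup at "out" safe; the follow-up patches apply the same trick to pl3e/pl2e/pl1e before unmapping them.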

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 35 ++++++++++++++++++++++++++++-------
 1 file changed, 28 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 55b99644af..716dc1512d 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -667,6 +667,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     l3_pgentry_t *pl3e;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
+    int rc;
 
     /*
      * Sanity check 'linear'.  We only allow cloning from the Xen virtual
@@ -674,11 +675,17 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
      */
     if ( root_table_offset(linear) > ROOT_PAGETABLE_LAST_XEN_SLOT ||
          root_table_offset(linear) < ROOT_PAGETABLE_FIRST_XEN_SLOT )
-        return -EINVAL;
+    {
+        rc = -EINVAL;
+        goto out;
+    }
 
     if ( linear < XEN_VIRT_START ||
          (linear >= XEN_VIRT_END && linear < DIRECTMAP_VIRT_START) )
-        return -EINVAL;
+    {
+        rc = -EINVAL;
+        goto out;
+    }
 
     pl3e = l4e_to_l3e(idle_pg_table[root_table_offset(linear)]) +
         l3_table_offset(linear);
@@ -707,7 +714,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
-                return 0;
+            {
+                rc = 0;
+                goto out;
+            }
             pfn = l1e_get_pfn(*pl1e);
         }
     }
@@ -716,7 +726,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     {
         pl3e = alloc_xen_pagetable();
         if ( !pl3e )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto out;
+        }
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
                   l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
@@ -730,7 +743,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     {
         pl2e = alloc_xen_pagetable();
         if ( !pl2e )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto out;
+        }
         clear_page(pl2e);
         l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
     }
@@ -746,7 +762,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     {
         pl1e = alloc_xen_pagetable();
         if ( !pl1e )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto out;
+        }
         clear_page(pl1e);
         l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
     }
@@ -767,7 +786,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     else
         l1e_write(pl1e, l1e_from_pfn(pfn, flags));
 
-    return 0;
+    rc = 0;
+ out:
+    return rc;
 }
 
 DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (33 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 34/55] x86/smpboot: clone_mapping should have one exit path Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 36/55] x86/smpboot: switch pl2e " Hongyan Xia
                   ` (20 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 716dc1512d..db39f5cbb2 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -664,7 +664,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 {
     unsigned long linear = (unsigned long)ptr, pfn;
     unsigned int flags;
-    l3_pgentry_t *pl3e;
+    l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     int rc;
@@ -687,8 +687,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         goto out;
     }
 
-    pl3e = l4e_to_l3e(idle_pg_table[root_table_offset(linear)]) +
-        l3_table_offset(linear);
+    pl3e = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[root_table_offset(linear)]));
+    pl3e += l3_table_offset(linear);
 
     flags = l3e_get_flags(*pl3e);
     ASSERT(flags & _PAGE_PRESENT);
@@ -722,20 +723,26 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
-        pl3e = alloc_xen_pagetable();
-        if ( !pl3e )
+        mfn_t l3t_mfn = alloc_xen_pagetable_new();
+
+        if ( mfn_eq(l3t_mfn, INVALID_MFN) )
         {
             rc = -ENOMEM;
             goto out;
         }
+
+        pl3e = map_xen_pagetable_new(l3t_mfn);
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
-                  l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
+                  l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR));
     }
     else
-        pl3e = l4e_to_l3e(rpt[root_table_offset(linear)]);
+        pl3e = map_xen_pagetable_new(
+            l4e_get_mfn(rpt[root_table_offset(linear)]));
 
     pl3e += l3_table_offset(linear);
 
@@ -788,6 +795,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 36/55] x86/smpboot: switch pl2e to use new APIs in clone_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (34 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 37/55] x86/smpboot: switch pl1e " Hongyan Xia
                   ` (19 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index db39f5cbb2..d327c062b1 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -665,7 +665,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     unsigned long linear = (unsigned long)ptr, pfn;
     unsigned int flags;
     l3_pgentry_t *pl3e = NULL;
-    l2_pgentry_t *pl2e;
+    l2_pgentry_t *pl2e = NULL;
     l1_pgentry_t *pl1e;
     int rc;
 
@@ -701,7 +701,8 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     }
     else
     {
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(linear);
+        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e += l2_table_offset(linear);
         flags = l2e_get_flags(*pl2e);
         ASSERT(flags & _PAGE_PRESENT);
         if ( flags & _PAGE_PSE )
@@ -723,6 +724,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
 
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
@@ -748,19 +750,22 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
-        pl2e = alloc_xen_pagetable();
-        if ( !pl2e )
+        mfn_t l2t_mfn = alloc_xen_pagetable_new();
+
+        if ( mfn_eq(l2t_mfn, INVALID_MFN) )
         {
             rc = -ENOMEM;
             goto out;
         }
+
+        pl2e = map_xen_pagetable_new(l2t_mfn);
         clear_page(pl2e);
-        l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
+        l3e_write(pl3e, l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE));
-        pl2e = l3e_to_l2e(*pl3e);
+        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
     }
 
     pl2e += l2_table_offset(linear);
@@ -795,6 +800,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 37/55] x86/smpboot: switch pl1e to use new APIs in clone_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (35 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 36/55] x86/smpboot: switch pl2e " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt Hongyan Xia
                   ` (18 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index d327c062b1..956e1bdbcc 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -666,7 +666,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     unsigned int flags;
     l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e = NULL;
-    l1_pgentry_t *pl1e;
+    l1_pgentry_t *pl1e = NULL;
     int rc;
 
     /*
@@ -713,7 +713,8 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
         else
         {
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
+            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e += l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
             {
@@ -724,6 +725,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
 
@@ -772,19 +774,22 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
-        pl1e = alloc_xen_pagetable();
-        if ( !pl1e )
+        mfn_t l1t_mfn = alloc_xen_pagetable_new();
+
+        if ( mfn_eq(l1t_mfn, INVALID_MFN) )
         {
             rc = -ENOMEM;
             goto out;
         }
+
+        pl1e = map_xen_pagetable_new(l1t_mfn);
         clear_page(pl1e);
-        l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
+        l2e_write(pl2e, l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE));
-        pl1e = l2e_to_l1e(*pl2e);
+        pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
     }
 
     pl1e += l1_table_offset(linear);
@@ -800,6 +805,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (36 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 37/55] x86/smpboot: switch pl1e " Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs Hongyan Xia
                   ` (17 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 956e1bdbcc..c55aaa65a2 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -891,23 +891,27 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
     {
         l3_pgentry_t *l3t;
+        mfn_t l3t_mfn;
         unsigned int i3;
 
         if ( !(root_get_flags(rpt[r]) & _PAGE_PRESENT) )
             continue;
 
-        l3t = l4e_to_l3e(rpt[r]);
+        l3t_mfn = l4e_get_mfn(rpt[r]);
+        l3t = map_xen_pagetable_new(l3t_mfn);
 
         for ( i3 = 0; i3 < L3_PAGETABLE_ENTRIES; ++i3 )
         {
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
             unsigned int i2;
 
             if ( !(l3e_get_flags(l3t[i3]) & _PAGE_PRESENT) )
                 continue;
 
             ASSERT(!(l3e_get_flags(l3t[i3]) & _PAGE_PSE));
-            l2t = l3e_to_l2e(l3t[i3]);
+            l2t_mfn = l3e_get_mfn(l3t[i3]);
+            l2t = map_xen_pagetable_new(l2t_mfn);
 
             for ( i2 = 0; i2 < L2_PAGETABLE_ENTRIES; ++i2 )
             {
@@ -915,13 +919,15 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable(l2e_to_l1e(l2t[i2]));
+                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
             }
 
-            free_xen_pagetable(l2t);
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            free_xen_pagetable_new(l2t_mfn);
         }
 
-        free_xen_pagetable(l3t);
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        free_xen_pagetable_new(l3t_mfn);
     }
 
     free_xen_pagetable(rpt);
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (37 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-10-01 13:54   ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 40/55] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
                   ` (16 subsequent siblings)
  55 siblings, 1 reply; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

This requires moving the declaration of the root page table MFN into
mm.h and modifying setup_cpu_root_pgt to have a single exit path.

We also need to force map_domain_page to use the direct map when
switching per-domain mappings. This runs contrary to the end goal of
removing the direct map, but the override will be removed once
map_domain_page is made context-switch safe in another (large) patch
series.
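On the context-switch path this means the per-CPU root page table is tracked only as an MFN and is mapped just for the duration of the PERDOMAIN slot update. A hypothetical model of that pattern, with a plain array standing in for the domheap page that map_xen_pagetable_new() would map:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the Xen types; not the real definitions. */
typedef struct { uint64_t m; } mfn_t;
#define INVALID_MFN ((mfn_t){ ~0ULL })

static int mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }

/* One simulated page of 512 root entries, standing in for the page
 * that map_xen_pagetable_new() would temporarily map. */
static uint64_t fake_root_page[512];

static uint64_t *map_root(mfn_t mfn) { (void)mfn; return fake_root_page; }
static void unmap_root(uint64_t *va) { (void)va; }

/* Models paravirt_ctxt_switch_to() after this patch: check the MFN
 * sentinel, map, write a single L4 slot, unmap again. */
static void ctxt_switch_update(mfn_t rpt_mfn, unsigned int slot, uint64_t l4e)
{
    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
    {
        uint64_t *rpt = map_root(rpt_mfn);  /* map ... */
        rpt[slot] = l4e;                    /* ... update one slot ... */
        unmap_root(rpt);                    /* ... unmap. */
    }
}
```

The real code additionally brackets the map/unmap with mapcache_override_current(), since map_domain_page() itself cannot yet be trusted mid-context-switch; that is the temporary direct-map reliance the commit message calls out.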

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain.c           | 15 ++++++++++---
 xen/arch/x86/domain_page.c      |  2 +-
 xen/arch/x86/mm.c               |  2 +-
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/smpboot.c          | 40 ++++++++++++++++++++++-----------
 xen/include/asm-x86/mm.h        |  2 ++
 xen/include/asm-x86/processor.h |  2 +-
 7 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index dbdf6b1bc2..e9bf47efce 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -69,6 +69,7 @@
 #include <asm/pv/domain.h>
 #include <asm/pv/mm.h>
 #include <asm/spec_ctrl.h>
+#include <asm/setup.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
@@ -1580,12 +1581,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
 
 void paravirt_ctxt_switch_to(struct vcpu *v)
 {
-    root_pgentry_t *root_pgt = this_cpu(root_pgt);
+    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
 
-    if ( root_pgt )
-        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
+    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        root_pgentry_t *rpt;
+
+        mapcache_override_current(INVALID_VCPU);
+        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
+        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        mapcache_override_current(NULL);
+    }
 
     if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
         activate_debugregs(v);
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 24083e9a86..cfcffd35f3 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -57,7 +57,7 @@ static inline struct vcpu *mapcache_current_vcpu(void)
     return v;
 }
 
-void __init mapcache_override_current(struct vcpu *v)
+void mapcache_override_current(struct vcpu *v)
 {
     this_cpu(override) = v;
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8706dc0174..5c1d65d267 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -530,7 +530,7 @@ void write_ptbase(struct vcpu *v)
     if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn));
         if ( new_cr4 & X86_CR4_PCIDE )
             cpu_info->pv_cr3 |= get_pcid_bits(v, true);
         switch_cr3_cr4(v->arch.cr3, new_cr4);
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 4b6f48dea2..7e70690f03 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -360,7 +360,7 @@ static void _toggle_guest_pt(struct vcpu *v)
     if ( d->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn)) |
                            (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
     }
 
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index c55aaa65a2..ca8fc6d485 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -811,7 +811,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     return rc;
 }
 
-DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
+DEFINE_PER_CPU(mfn_t, root_pgt_mfn);
 
 static root_pgentry_t common_pgt;
 
@@ -819,19 +819,27 @@ extern const char _stextentry[], _etextentry[];
 
 static int setup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt;
+    root_pgentry_t *rpt = NULL;
+    mfn_t rpt_mfn;
     unsigned int off;
     int rc;
 
     if ( !opt_xpti_hwdom && !opt_xpti_domu )
-        return 0;
+    {
+        rc = 0;
+        goto out;
+    }
 
-    rpt = alloc_xen_pagetable();
-    if ( !rpt )
-        return -ENOMEM;
+    rpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        rc = -ENOMEM;
+        goto out;
+    }
 
+    rpt = map_xen_pagetable_new(rpt_mfn);
     clear_page(rpt);
-    per_cpu(root_pgt, cpu) = rpt;
+    per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
 
     rpt[root_table_offset(RO_MPT_VIRT_START)] =
         idle_pg_table[root_table_offset(RO_MPT_VIRT_START)];
@@ -848,7 +856,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
             rc = clone_mapping(ptr, rpt);
 
         if ( rc )
-            return rc;
+            goto out;
 
         common_pgt = rpt[root_table_offset(XEN_VIRT_START)];
     }
@@ -873,19 +881,24 @@ static int setup_cpu_root_pgt(unsigned int cpu)
     if ( !rc )
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
     return rc;
 }
 
 static void cleanup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt = per_cpu(root_pgt, cpu);
+    mfn_t rpt_mfn = per_cpu(root_pgt_mfn, cpu);
+    root_pgentry_t *rpt;
     unsigned int r;
     unsigned long stub_linear = per_cpu(stubs.addr, cpu);
 
-    if ( !rpt )
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
         return;
 
-    per_cpu(root_pgt, cpu) = NULL;
+    per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
+
+    rpt = map_xen_pagetable_new(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -930,7 +943,8 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
         free_xen_pagetable_new(l3t_mfn);
     }
 
-    free_xen_pagetable(rpt);
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    free_xen_pagetable_new(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
@@ -1134,7 +1148,7 @@ void __init smp_prepare_cpus(void)
     rc = setup_cpu_root_pgt(0);
     if ( rc )
         panic("Error %d setting up PV root page table\n", rc);
-    if ( per_cpu(root_pgt, 0) )
+    if ( !mfn_eq(per_cpu(root_pgt_mfn, 0), INVALID_MFN) )
     {
         get_cpu_info()->pv_cr3 = 0;
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 80173eb4c3..12a10b270d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -646,4 +646,6 @@ void free_xen_pagetable_new(mfn_t mfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
+DECLARE_PER_CPU(mfn_t, root_pgt_mfn);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c6fc1987a1..68d1d82071 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -469,7 +469,7 @@ static inline void disable_each_ist(idt_entry_t *idt)
 extern idt_entry_t idt_table[];
 extern idt_entry_t *idt_tables[];
 
-DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
+DECLARE_PER_CPU(struct tss_struct, init_tss);
 
 extern void write_ptbase(struct vcpu *v);
 
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 63+ messages in thread
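The core idea of patch 39 — track the per-CPU root page table by MFN, with INVALID_MFN as the "not allocated" sentinel in place of a NULL pointer — can be sketched in standalone C. All names below are illustrative stand-ins, not the actual Xen types or APIs:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for INVALID_MFN: an MFN value that can never be valid. */
#define INVALID_MFN_V UINT64_MAX

/* Per-CPU root page table, tracked by MFN rather than by pointer. */
static uint64_t cpu_root[4] = {
    INVALID_MFN_V, INVALID_MFN_V, INVALID_MFN_V, INVALID_MFN_V
};
static int frees;

static void free_frame(uint64_t mfn)
{
    (void)mfn;   /* a real implementation would return the frame */
    frees++;
}

/* Mirror the shape of cleanup_cpu_root_pgt(): bail out on the
 * sentinel, and clear the per-CPU slot before tearing down. */
static void cleanup_cpu(unsigned int cpu)
{
    uint64_t mfn = cpu_root[cpu];

    if ( mfn == INVALID_MFN_V )
        return;                    /* nothing allocated for this CPU */

    cpu_root[cpu] = INVALID_MFN_V; /* clear first, then free */
    free_frame(mfn);
}

static int frees_done(void)
{
    return frees;
}
```

Clearing the slot before freeing means a concurrent reader can at worst see the sentinel, never a stale MFN; the same ordering appears in the patch.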

* [Xen-devel] [PATCH v2 40/55] x86/shim: map and unmap page tables in replace_va_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (38 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
                   ` (15 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/pv/shim.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 324ca27f93..cf638fa965 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -167,15 +167,25 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
                                       unsigned long va, mfn_t mfn)
 {
     l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
-    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
-    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
-    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
-    struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
+    l3_pgentry_t *pl3e;
+    l2_pgentry_t *pl2e;
+    l1_pgentry_t *pl1e;
 
-    put_page_and_type(page);
+    pl3e = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    pl3e += l3_table_offset(va);
+    pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    pl2e += l2_table_offset(va);
+    pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    pl1e += l1_table_offset(va);
+
+    put_page_and_type(mfn_to_page(l1e_get_mfn(*pl1e)));
 
     *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
                                                       : COMPAT_L1_PROT));
+
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
 }
 
 static void evtchn_reserve(struct domain *d, unsigned int port)
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 41/55] x86_64/mm: map and unmap page tables in m2p_mapped
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (39 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 40/55] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
                   ` (14 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index c41715cd56..5c5b91b785 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -130,28 +130,36 @@ static int m2p_mapped(unsigned long spfn)
 {
     unsigned long va;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt;
+    l2_pgentry_t *l2_ro_mpt = NULL;
+    int rc = M2P_NO_MAPPED;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
              (_PAGE_PRESENT |_PAGE_PSE))
     {
         case _PAGE_PSE|_PAGE_PRESENT:
-            return M2P_1G_MAPPED;
+            rc = M2P_1G_MAPPED;
+            goto out;
         /* Check for next level */
         case _PAGE_PRESENT:
             break;
         default:
-            return M2P_NO_MAPPED;
+            rc = M2P_NO_MAPPED;
+            goto out;
     }
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2_ro_mpt = map_xen_pagetable_new(
+        l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
     if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
-        return M2P_2M_MAPPED;
+        rc = M2P_2M_MAPPED;
 
-    return M2P_NO_MAPPED;
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    return rc;
 }
 
 static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread
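The substance of patch 41 is structural: the early `return`s in m2p_mapped() become `goto out` so that every exit path unmaps whatever was mapped. A standalone sketch of that single-exit idiom (names and return values are illustrative analogues of the Xen helpers and M2P_* constants, not the real API):

```c
#include <assert.h>
#include <stddef.h>

static int live_mappings;   /* outstanding map/unmap balance */
static int dummy_frame;

static int *map_table(void)
{
    live_mappings++;
    return &dummy_frame;
}

static void unmap_table(int **p)
{
    if ( *p )
        live_mappings--;
    *p = NULL;
}

/*
 * Walk up to two levels, unmapping both via a single 'out' label on
 * every path, as the reworked m2p_mapped() does.
 */
static int classify(int l3_present, int l2_present)
{
    int *l3 = NULL, *l2 = NULL;
    int rc = 0;                    /* "no mapping" analogue */

    l3 = map_table();
    if ( !l3_present )
        goto out;                  /* early exit still unmaps l3 */

    l2 = map_table();
    if ( l2_present )
        rc = 2;                    /* "2M mapping found" analogue */

 out:
    unmap_table(&l2);              /* safe: no-op when still NULL */
    unmap_table(&l3);
    return rc;
}

static int balanced(void)
{
    return live_mappings == 0;
}
```

Initialising l2 to NULL is what makes the unconditional unmap at `out` safe; the patch relies on the same property of UNMAP_XEN_PAGETABLE_NEW.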

* [Xen-devel] [PATCH v2 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (40 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
                   ` (13 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 5c5b91b785..e0d2190be1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -166,8 +166,8 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
 {
     unsigned long i, n, v;
     mfn_t m2p_start_mfn = INVALID_MFN;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
 
     /* M2P table is mappable read-only by privileged domains. */
     for ( v  = RDWR_MPT_VIRT_START;
@@ -175,14 +175,22 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v += n << PAGE_SHIFT )
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
         if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
         {
             n = L1_PAGETABLE_ENTRIES;
-            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2e = l2t[l2_table_offset(v)];
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
             m2p_start_mfn = l2e_get_mfn(l2e);
@@ -203,11 +211,18 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2e = l2t[l2_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
+
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
         m2p_start_mfn = l2e_get_mfn(l2e);
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (41 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
                   ` (12 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index e0d2190be1..2fff5f9306 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -252,11 +252,13 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
+    l2_ro_mpt = map_xen_pagetable_new(
+        l3e_get_mfn(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     for ( i = smap; i < emap; )
     {
@@ -278,6 +280,9 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
         i += 1UL << (L2_PAGETABLE_SHIFT - 2);
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+
     return;
 }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (42 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table Hongyan Xia
                   ` (11 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 2fff5f9306..1d2ebd642f 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -292,7 +292,8 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]));
 
     /*
      * No need to clean m2p structure existing before the hotplug
@@ -314,26 +315,35 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+        l2_ro_mpt = map_xen_pagetable_new(
+            l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
         if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                     (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
+            UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
             continue;
         }
 
         pt_pfn = l2e_get_pfn(l2_ro_mpt[l2_table_offset(va)]);
         if ( hotadd_mem_valid(pt_pfn, info) )
         {
+            l2_pgentry_t *l2t;
+
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
-            l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_empty());
+            l2t = map_xen_pagetable_new(
+                l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
+            l2e_write(&l2t[l2_table_offset(va)], l2e_empty());
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
         }
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
+        UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+
     destroy_compat_m2p_mapping(info);
 
     /* Brute-Force flush all TLB */
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (43 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table Hongyan Xia
                   ` (10 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 1d2ebd642f..e8ed04006f 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -382,11 +382,13 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 
     va = HIRO_COMPAT_MPT_VIRT_START +
          smap * sizeof(*compat_machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2_ro_mpt = map_xen_pagetable_new(
+        l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
 #define CNT ((sizeof(*frame_table) & -sizeof(*frame_table)) / \
@@ -424,6 +426,9 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
     }
 #undef CNT
 #undef MFN
+
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
     return err;
 }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (44 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory Hongyan Xia
                   ` (9 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index e8ed04006f..8d13c994af 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -801,8 +801,8 @@ void free_compat_arg_xlat(struct vcpu *v)
 static void cleanup_frame_table(struct mem_hotadd_info *info)
 {
     unsigned long sva, eva;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
     mfn_t spfn, epfn;
 
     spfn = _mfn(info->spfn);
@@ -816,8 +816,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
     while (sva < eva)
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(sva)])[
-          l3_table_offset(sva)];
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(sva)]));
+        l3e = l3t[l3_table_offset(sva)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -826,7 +828,9 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(sva)];
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2e = l2t[l2_table_offset(sva)];
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
         ASSERT(l2e_get_flags(l2e) & _PAGE_PRESENT);
 
         if ( (l2e_get_flags(l2e) & (_PAGE_PRESENT | _PAGE_PSE)) ==
@@ -842,8 +846,14 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        ASSERT(l1e_get_flags(l2e_to_l1e(l2e)[l1_table_offset(sva)]) &
-                _PAGE_PRESENT);
+#ifndef NDEBUG
+        {
+            l1_pgentry_t *l1t = map_xen_pagetable_new(l2e_get_mfn(l2e));
+            ASSERT(l1e_get_flags(l1t[l1_table_offset(sva)]) &
+                   _PAGE_PRESENT);
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
+        }
+#endif
          sva = (sva & ~((1UL << PAGE_SHIFT) - 1)) +
                     (1UL << PAGE_SHIFT);
     }
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (45 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op Hongyan Xia
                   ` (8 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 8d13c994af..7a02fcee18 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -930,8 +930,8 @@ static int extend_frame_table(struct mem_hotadd_info *info)
 void __init subarch_init_memory(void)
 {
     unsigned long i, n, v, m2p_start_mfn;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
 
     BUILD_BUG_ON(RDWR_MPT_VIRT_START & ((1UL << L3_PAGETABLE_SHIFT) - 1));
     BUILD_BUG_ON(RDWR_MPT_VIRT_END   & ((1UL << L3_PAGETABLE_SHIFT) - 1));
@@ -941,14 +941,22 @@ void __init subarch_init_memory(void)
           v += n << PAGE_SHIFT )
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
         if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
         {
             n = L1_PAGETABLE_ENTRIES;
-            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2e = l2t[l2_table_offset(v)];
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
             m2p_start_mfn = l2e_get_pfn(l2e);
@@ -967,11 +975,18 @@ void __init subarch_init_memory(void)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2e = l2t[l2_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
+
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
         m2p_start_mfn = l2e_get_pfn(l2e);
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (46 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt Hongyan Xia
                   ` (7 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 7a02fcee18..a1c69d7f0e 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1016,8 +1016,8 @@ void __init subarch_init_memory(void)
 long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
     unsigned long v, limit;
     xen_pfn_t mfn, last_mfn;
     unsigned int i;
@@ -1036,13 +1036,18 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
               (v < (unsigned long)(machine_to_phys_mapping + max_page));
               i++, v += 1UL << L2_PAGETABLE_SHIFT )
         {
-            l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-                l3_table_offset(v)];
+            l3t = map_xen_pagetable_new(
+                l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+            l3e = l3t[l3_table_offset(v)];
+            UNMAP_XEN_PAGETABLE_NEW(l3t);
+
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 mfn = last_mfn;
             else if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
             {
-                l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2e = l2t[l2_table_offset(v)];
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
                 if ( l2e_get_flags(l2e) & _PAGE_PRESENT )
                     mfn = l2e_get_pfn(l2e);
                 else
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (47 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly Hongyan Xia
                   ` (6 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index ca8fc6d485..9fe0ef18a1 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -949,11 +949,17 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
     {
-        l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
-        l2_pgentry_t *l2t = l3e_to_l2e(l3t[l3_table_offset(stub_linear)]);
-        l1_pgentry_t *l1t = l2e_to_l1e(l2t[l2_table_offset(stub_linear)]);
+        l3_pgentry_t *l3t = map_xen_pagetable_new(l4e_get_mfn(common_pgt));
+        l2_pgentry_t *l2t = map_xen_pagetable_new(
+            l3e_get_mfn(l3t[l3_table_offset(stub_linear)]));
+        l1_pgentry_t *l1t = map_xen_pagetable_new(
+            l2e_get_mfn(l2t[l2_table_offset(stub_linear)]));
 
         l1t[l1_table_offset(stub_linear)] = l1e_empty();
+
+        UNMAP_XEN_PAGETABLE_NEW(l1t);
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
     }
 }
 
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (48 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv Hongyan Xia
                   ` (5 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/pv/dom0_build.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 1bd53e9c08..d7d42568fb 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -50,17 +50,17 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
     unsigned long count;
     struct page_info *page;
     l4_pgentry_t *pl4e;
-    l3_pgentry_t *pl3e;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
+    l3_pgentry_t *pl3e, *l3t;
+    l2_pgentry_t *pl2e, *l2t;
+    l1_pgentry_t *pl1e, *l1t;
 
     pl4e = l4start + l4_table_offset(vpt_start);
-    pl3e = l4e_to_l3e(*pl4e);
-    pl3e += l3_table_offset(vpt_start);
-    pl2e = l3e_to_l2e(*pl3e);
-    pl2e += l2_table_offset(vpt_start);
-    pl1e = l2e_to_l1e(*pl2e);
-    pl1e += l1_table_offset(vpt_start);
+    l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    pl3e = l3t + l3_table_offset(vpt_start);
+    l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    pl2e = l2t + l2_table_offset(vpt_start);
+    l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    pl1e = l1t + l1_table_offset(vpt_start);
     for ( count = 0; count < nr_pt_pages; count++ )
     {
         l1e_remove_flags(*pl1e, _PAGE_RW);
@@ -85,12 +85,23 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
             if ( !((unsigned long)++pl2e & (PAGE_SIZE - 1)) )
             {
                 if ( !((unsigned long)++pl3e & (PAGE_SIZE - 1)) )
-                    pl3e = l4e_to_l3e(*++pl4e);
-                pl2e = l3e_to_l2e(*pl3e);
+                {
+                    UNMAP_XEN_PAGETABLE_NEW(l3t);
+                    l3t = map_xen_pagetable_new(l4e_get_mfn(*++pl4e));
+                    pl3e = l3t;
+                }
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+                pl2e = l2t;
             }
-            pl1e = l2e_to_l1e(*pl2e);
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e = l1t;
         }
     }
+    UNMAP_XEN_PAGETABLE_NEW(l1t);
+    UNMAP_XEN_PAGETABLE_NEW(l2t);
+    UNMAP_XEN_PAGETABLE_NEW(l3t);
 }
 
 static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 63+ messages in thread

* [Xen-devel] [PATCH v2 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (49 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 52/55] x86: remove lXe_to_lYe in __start_xen Hongyan Xia
                   ` (4 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Map the L2 table backing the PAE Xen slots with map_xen_pagetable_new()
before handing it to init_xen_pae_l2_slots(), and unmap it afterwards,
instead of obtaining a direct-map pointer via l3e_to_l2e().

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/pv/dom0_build.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index d7d42568fb..39cb68f7da 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -679,6 +679,8 @@ int __init dom0_construct_pv(struct domain *d,
 
     if ( is_pv_32bit_domain(d) )
     {
+        l2_pgentry_t *l2t;
+
         /* Ensure the first four L3 entries are all populated. */
         for ( i = 0, l3tab = l3start; i < 4; ++i, ++l3tab )
         {
@@ -693,7 +695,9 @@ int __init dom0_construct_pv(struct domain *d,
                 l3e_get_page(*l3tab)->u.inuse.type_info |= PGT_pae_xen_l2;
         }
 
-        init_xen_pae_l2_slots(l3e_to_l2e(l3start[3]), d);
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3start[3]));
+        init_xen_pae_l2_slots(l2t, d);
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
     }
 
     /* Pages that are part of page tables must be read only. */
-- 
2.17.1


* [Xen-devel] [PATCH v2 52/55] x86: remove lXe_to_lYe in __start_xen
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (50 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 53/55] x86/mm: drop old page table APIs Hongyan Xia
                   ` (3 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Properly map and unmap page tables where necessary.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/setup.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index dec60d0301..d27bcf1724 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1095,13 +1095,17 @@ void __init noreturn __start_xen(unsigned long mbi_p)
             pl4e = __va(__pa(idle_pg_table));
             for ( i = 0 ; i < L4_PAGETABLE_ENTRIES; i++, pl4e++ )
             {
+                l3_pgentry_t *l3t;
+
                 if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
                     continue;
                 *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) +
                                         xen_phys_start);
-                pl3e = l4e_to_l3e(*pl4e);
+                pl3e = l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
                 for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
                 {
+                    l2_pgentry_t *l2t;
+
                     /* Not present, 1GB mapping, or already relocated? */
                     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) ||
                          (l3e_get_flags(*pl3e) & _PAGE_PSE) ||
@@ -1109,7 +1113,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         continue;
                     *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) +
                                             xen_phys_start);
-                    pl2e = l3e_to_l2e(*pl3e);
+                    pl2e = l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
                     for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ )
                     {
                         /* Not present, PSE, or already relocated? */
@@ -1120,7 +1124,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         *pl2e = l2e_from_intpte(l2e_get_intpte(*pl2e) +
                                                 xen_phys_start);
                     }
+                    UNMAP_XEN_PAGETABLE_NEW(l2t);
                 }
+                UNMAP_XEN_PAGETABLE_NEW(l3t);
             }
 
             /* The only data mappings to be relocated are in the Xen area. */
-- 
2.17.1


* [Xen-devel] [PATCH v2 53/55] x86/mm: drop old page table APIs
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (51 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 52/55] x86: remove lXe_to_lYe in __start_xen Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 54/55] x86: switch to use domheap page for page tables Hongyan Xia
                   ` (2 subsequent siblings)
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Now that we've switched all users to the new APIs, the old ones aren't
needed anymore.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c          | 16 ----------------
 xen/include/asm-x86/mm.h   |  2 --
 xen/include/asm-x86/page.h |  5 -----
 3 files changed, 23 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5c1d65d267..c9be239d53 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4850,22 +4850,6 @@ int mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
-void *alloc_xen_pagetable(void)
-{
-    mfn_t mfn;
-
-    mfn = alloc_xen_pagetable_new();
-    ASSERT(!mfn_eq(mfn, INVALID_MFN));
-
-    return map_xen_pagetable_new(mfn);
-}
-
-void free_xen_pagetable(void *v)
-{
-    if ( system_state != SYS_STATE_early_boot )
-        free_xen_pagetable_new(virt_to_mfn(v));
-}
-
 mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 12a10b270d..4fb79ab8f0 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -631,8 +631,6 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int nr_frames, xen_pfn_t mfn_list[]);
 
 /* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
 mfn_t alloc_xen_pagetable_new(void);
 void *map_xen_pagetable_new(mfn_t mfn);
 void unmap_xen_pagetable_new(void *v);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 05a8b1efa6..906ec701a3 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -187,11 +187,6 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define l4e_has_changed(x,y,flags) \
     ( !!(((x).l4 ^ (y).l4) & ((PADDR_MASK&PAGE_MASK)|put_pte_flags(flags))) )
 
-/* Pagetable walking. */
-#define l2e_to_l1e(x)              ((l1_pgentry_t *)__va(l2e_get_paddr(x)))
-#define l3e_to_l2e(x)              ((l2_pgentry_t *)__va(l3e_get_paddr(x)))
-#define l4e_to_l3e(x)              ((l3_pgentry_t *)__va(l4e_get_paddr(x)))
-
 #define map_l1t_from_l2e(x)        (l1_pgentry_t *)map_domain_page(l2e_get_mfn(x))
 #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
 #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
-- 
2.17.1


* [Xen-devel] [PATCH v2 54/55] x86: switch to use domheap page for page tables
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (52 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 53/55] x86/mm: drop old page table APIs Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 55/55] x86/mm: drop _new suffix for page table APIs Hongyan Xia
  2019-10-01 11:56 ` [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Wei Liu
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Modify all the _new APIs to handle domheap pages.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index c9be239d53..a2d2d01660 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4854,10 +4854,10 @@ mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
-        void *ptr = alloc_xenheap_page();
+        struct page_info *pg = alloc_domheap_page(NULL, 0);
 
-        BUG_ON(!hardware_domain && !ptr);
-        return virt_to_mfn(ptr);
+        BUG_ON(!hardware_domain && !pg);
+        return pg ? page_to_mfn(pg) : INVALID_MFN;
     }
 
     return alloc_boot_pages(1, 1);
@@ -4865,20 +4865,21 @@ mfn_t alloc_xen_pagetable_new(void)
 
 void *map_xen_pagetable_new(mfn_t mfn)
 {
-    return mfn_to_virt(mfn_x(mfn));
+    return map_domain_page(mfn);
 }
 
 /* v can point to an entry within a table or be NULL */
 void unmap_xen_pagetable_new(void *v)
 {
-    /* XXX still using xenheap page, no need to do anything.  */
+    if ( v )
+        unmap_domain_page((const void *)((unsigned long)v & PAGE_MASK));
 }
 
 /* mfn can be INVALID_MFN */
 void free_xen_pagetable_new(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
-        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
+        free_domheap_page(mfn_to_page(mfn));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
-- 
2.17.1


* [Xen-devel] [PATCH v2 55/55] x86/mm: drop _new suffix for page table APIs
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (53 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 54/55] x86: switch to use domheap page for page tables Hongyan Xia
@ 2019-09-30 10:33 ` Hongyan Xia
  2019-10-01 11:56 ` [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Wei Liu
  55 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-09-30 10:33 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Hongyan Xia, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

With the old xenheap-based page table APIs gone, the _new suffix no
longer disambiguates anything, so drop it from the alloc, map, unmap
and free functions and their users.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyax@amazon.com>

---
Changed since v1:
- Fix rebase conflicts against new master and other changes since v1.
---
 xen/arch/x86/domain.c        |   4 +-
 xen/arch/x86/domain_page.c   |   2 +-
 xen/arch/x86/efi/runtime.h   |   4 +-
 xen/arch/x86/mm.c            | 164 +++++++++++++++++------------------
 xen/arch/x86/pv/dom0_build.c |  28 +++---
 xen/arch/x86/pv/shim.c       |  12 +--
 xen/arch/x86/setup.c         |   8 +-
 xen/arch/x86/smpboot.c       |  74 ++++++++--------
 xen/arch/x86/x86_64/mm.c     | 136 ++++++++++++++---------------
 xen/common/efi/boot.c        |  42 ++++-----
 xen/include/asm-x86/mm.h     |  18 ++--
 11 files changed, 246 insertions(+), 246 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index e9bf47efce..e49a5af3f2 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1588,11 +1588,11 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
         root_pgentry_t *rpt;
 
         mapcache_override_current(INVALID_VCPU);
-        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt = map_xen_pagetable(rpt_mfn);
         rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
-        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        UNMAP_XEN_PAGETABLE(rpt);
         mapcache_override_current(NULL);
     }
 
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index cfcffd35f3..9ea74b456c 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -343,7 +343,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
         l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
         BUG_ON(!pl1e);
         l1e = *pl1e;
-        UNMAP_XEN_PAGETABLE_NEW(pl1e);
+        UNMAP_XEN_PAGETABLE(pl1e);
     }
     else
     {
diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index 277d237953..ca15c5aab7 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -10,9 +10,9 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
     {
         l4_pgentry_t *l4t;
 
-        l4t = map_xen_pagetable_new(efi_l4_mfn);
+        l4t = map_xen_pagetable(efi_l4_mfn);
         l4e_write(l4t + l4idx, l4e);
-        UNMAP_XEN_PAGETABLE_NEW(l4t);
+        UNMAP_XEN_PAGETABLE(l4t);
     }
 }
 #endif
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a2d2d01660..615d573961 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -353,22 +353,22 @@ void __init arch_init_memory(void)
             ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS);
             if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) )
             {
-                mfn_t l3tab_mfn = alloc_xen_pagetable_new();
+                mfn_t l3tab_mfn = alloc_xen_pagetable();
 
                 if ( !mfn_eq(l3tab_mfn, INVALID_MFN) )
                 {
                     l3_pgentry_t *l3idle =
-                        map_xen_pagetable_new(
+                        map_xen_pagetable(
                             l4e_get_mfn(idle_pg_table[l4_table_offset(split_va)]));
-                    l3_pgentry_t *l3tab = map_xen_pagetable_new(l3tab_mfn);
+                    l3_pgentry_t *l3tab = map_xen_pagetable(l3tab_mfn);
 
                     for ( i = 0; i < l3_table_offset(split_va); ++i )
                         l3tab[i] = l3idle[i];
                     for ( ; i < L3_PAGETABLE_ENTRIES; ++i )
                         l3tab[i] = l3e_empty();
                     split_l4e = l4e_from_mfn(l3tab_mfn, __PAGE_HYPERVISOR_RW);
-                    UNMAP_XEN_PAGETABLE_NEW(l3idle);
-                    UNMAP_XEN_PAGETABLE_NEW(l3tab);
+                    UNMAP_XEN_PAGETABLE(l3idle);
+                    UNMAP_XEN_PAGETABLE(l3tab);
                 }
                 else
                     ++root_pgt_pv_xen_slots;
@@ -4850,7 +4850,7 @@ int mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
-mfn_t alloc_xen_pagetable_new(void)
+mfn_t alloc_xen_pagetable(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
@@ -4863,20 +4863,20 @@ mfn_t alloc_xen_pagetable_new(void)
     return alloc_boot_pages(1, 1);
 }
 
-void *map_xen_pagetable_new(mfn_t mfn)
+void *map_xen_pagetable(mfn_t mfn)
 {
     return map_domain_page(mfn);
 }
 
 /* v can point to an entry within a table or be NULL */
-void unmap_xen_pagetable_new(void *v)
+void unmap_xen_pagetable(void *v)
 {
     if ( v )
         unmap_domain_page((const void *)((unsigned long)v & PAGE_MASK));
 }
 
 /* mfn can be INVALID_MFN */
-void free_xen_pagetable_new(mfn_t mfn)
+void free_xen_pagetable(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
         free_domheap_page(mfn_to_page(mfn));
@@ -4900,11 +4900,11 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
         l3_pgentry_t *l3t;
         mfn_t mfn;
 
-        mfn = alloc_xen_pagetable_new();
+        mfn = alloc_xen_pagetable();
         if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
-        l3t = map_xen_pagetable_new(mfn);
+        l3t = map_xen_pagetable(mfn);
 
         if ( locking )
             spin_lock(&map_pgdir_lock);
@@ -4924,15 +4924,15 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
         {
             ASSERT(!pl3e);
             ASSERT(!mfn_eq(mfn, INVALID_MFN));
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
-            free_xen_pagetable_new(mfn);
+            UNMAP_XEN_PAGETABLE(l3t);
+            free_xen_pagetable(mfn);
         }
     }
 
     if ( !pl3e )
     {
         ASSERT(l4e_get_flags(*pl4e) & _PAGE_PRESENT);
-        pl3e = (l3_pgentry_t *)map_xen_pagetable_new(l4e_get_mfn(*pl4e))
+        pl3e = (l3_pgentry_t *)map_xen_pagetable(l4e_get_mfn(*pl4e))
             + l3_table_offset(v);
     }
 
@@ -4959,11 +4959,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         l2_pgentry_t *l2t;
         mfn_t mfn;
 
-        mfn = alloc_xen_pagetable_new();
+        mfn = alloc_xen_pagetable();
         if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
-        l2t = map_xen_pagetable_new(mfn);
+        l2t = map_xen_pagetable(mfn);
 
         if ( locking )
             spin_lock(&map_pgdir_lock);
@@ -4981,8 +4981,8 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         {
             ASSERT(!pl2e);
             ASSERT(!mfn_eq(mfn, INVALID_MFN));
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
-            free_xen_pagetable_new(mfn);
+            UNMAP_XEN_PAGETABLE(l2t);
+            free_xen_pagetable(mfn);
         }
     }
 
@@ -4991,12 +4991,12 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !pl2e )
     {
         ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
-        pl2e = (l2_pgentry_t *)map_xen_pagetable_new(l3e_get_mfn(*pl3e))
+        pl2e = (l2_pgentry_t *)map_xen_pagetable(l3e_get_mfn(*pl3e))
             + l2_table_offset(v);
     }
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return pl2e;
 }
 
@@ -5015,11 +5015,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         l1_pgentry_t *l1t;
         mfn_t mfn;
 
-        mfn = alloc_xen_pagetable_new();
+        mfn = alloc_xen_pagetable();
         if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
-        l1t = map_xen_pagetable_new(mfn);
+        l1t = map_xen_pagetable(mfn);
 
         if ( locking )
             spin_lock(&map_pgdir_lock);
@@ -5037,8 +5037,8 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         {
             ASSERT(!pl1e);
             ASSERT(!mfn_eq(mfn, INVALID_MFN));
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
-            free_xen_pagetable_new(mfn);
+            UNMAP_XEN_PAGETABLE(l1t);
+            free_xen_pagetable(mfn);
         }
     }
 
@@ -5047,12 +5047,12 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !pl1e )
     {
         ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
-        pl1e = (l1_pgentry_t *)map_xen_pagetable_new(l2e_get_mfn(*pl2e))
+        pl1e = (l1_pgentry_t *)map_xen_pagetable(l2e_get_mfn(*pl2e))
             + l1_table_offset(v);
     }
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
+    UNMAP_XEN_PAGETABLE(pl2e);
     return pl1e;
 }
 
@@ -5131,7 +5131,7 @@ int map_pages_to_xen(
                     l2_pgentry_t *l2t;
                     mfn_t l2t_mfn = l3e_get_mfn(ol3e);
 
-                    l2t = map_xen_pagetable_new(l2t_mfn);
+                    l2t = map_xen_pagetable(l2t_mfn);
 
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
@@ -5146,10 +5146,10 @@ int map_pages_to_xen(
                             l1_pgentry_t *l1t;
                             mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                            l1t = map_xen_pagetable_new(l1t_mfn);
+                            l1t = map_xen_pagetable(l1t_mfn);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
                                 flush_flags(l1e_get_flags(l1t[j]));
-                            UNMAP_XEN_PAGETABLE_NEW(l1t);
+                            UNMAP_XEN_PAGETABLE(l1t);
                         }
                     }
                     flush_area(virt, flush_flags);
@@ -5158,9 +5158,9 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                            free_xen_pagetable(l2e_get_mfn(ol2e));
                     }
-                    free_xen_pagetable_new(l2t_mfn);
+                    free_xen_pagetable(l2t_mfn);
                 }
             }
 
@@ -5199,14 +5199,14 @@ int map_pages_to_xen(
                 goto end_of_loop;
             }
 
-            l2t_mfn = alloc_xen_pagetable_new();
+            l2t_mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2t_mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5224,15 +5224,15 @@ int map_pages_to_xen(
             {
                 l3e_write_atomic(pl3e,
                                  l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR));
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
             if ( l2t )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
-                free_xen_pagetable_new(l2t_mfn);
+                UNMAP_XEN_PAGETABLE(l2t);
+                free_xen_pagetable(l2t_mfn);
             }
         }
 
@@ -5267,12 +5267,12 @@ int map_pages_to_xen(
                     l1_pgentry_t *l1t;
                     mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                    l1t = map_xen_pagetable_new(l1t_mfn);
+                    l1t = map_xen_pagetable(l1t_mfn);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
-                    free_xen_pagetable_new(l1t_mfn);
+                    UNMAP_XEN_PAGETABLE(l1t);
+                    free_xen_pagetable(l1t_mfn);
                 }
             }
 
@@ -5293,7 +5293,7 @@ int map_pages_to_xen(
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
-                UNMAP_XEN_PAGETABLE_NEW(pl1e);
+                UNMAP_XEN_PAGETABLE(pl1e);
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5320,14 +5320,14 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t_mfn = alloc_xen_pagetable_new();
+                l1t_mfn = alloc_xen_pagetable();
                 if ( mfn_eq(l1t_mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
-                l1t = map_xen_pagetable_new(l1t_mfn);
+                l1t = map_xen_pagetable(l1t_mfn);
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5344,23 +5344,23 @@ int map_pages_to_xen(
                 {
                     l2e_write_atomic(pl2e, l2e_from_mfn(l1t_mfn,
                                                         __PAGE_HYPERVISOR));
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    UNMAP_XEN_PAGETABLE(l1t);
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
                 if ( l1t )
                 {
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
-                    free_xen_pagetable_new(l1t_mfn);
+                    UNMAP_XEN_PAGETABLE(l1t);
+                    free_xen_pagetable(l1t_mfn);
                 }
             }
 
-            pl1e  = map_xen_pagetable_new(l2e_get_mfn((*pl2e)));
+            pl1e  = map_xen_pagetable(l2e_get_mfn((*pl2e)));
             pl1e += l1_table_offset(virt);
             ol1e  = *pl1e;
             l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
-            UNMAP_XEN_PAGETABLE_NEW(pl1e);
+            UNMAP_XEN_PAGETABLE(pl1e);
             if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
             {
                 unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0);
@@ -5406,14 +5406,14 @@ int map_pages_to_xen(
                 }
 
                 l1t_mfn = l2e_get_mfn(ol2e);
-                l1t = map_xen_pagetable_new(l1t_mfn);
+                l1t = map_xen_pagetable(l1t_mfn);
 
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
-                UNMAP_XEN_PAGETABLE_NEW(l1t);
+                UNMAP_XEN_PAGETABLE(l1t);
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
                     l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
@@ -5423,7 +5423,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable_new(l1t_mfn);
+                    free_xen_pagetable(l1t_mfn);
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5458,7 +5458,7 @@ int map_pages_to_xen(
             }
 
             l2t_mfn = l3e_get_mfn(ol3e);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
 
             base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
@@ -5467,7 +5467,7 @@ int map_pages_to_xen(
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
                      (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
@@ -5477,15 +5477,15 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable_new(l2t_mfn);
+                free_xen_pagetable(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
-        UNMAP_XEN_PAGETABLE_NEW(pl1e);
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(pl1e);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
 #undef flush_flags
@@ -5493,9 +5493,9 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return rc;
 }
 
@@ -5566,14 +5566,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            mfn = alloc_xen_pagetable_new();
+            mfn = alloc_xen_pagetable();
             if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
-            l2t = map_xen_pagetable_new(mfn);
+            l2t = map_xen_pagetable(mfn);
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5586,14 +5586,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
                 l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             if ( l2t )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
-                free_xen_pagetable_new(mfn);
+                UNMAP_XEN_PAGETABLE(l2t);
+                free_xen_pagetable(mfn);
             }
         }
 
@@ -5601,7 +5601,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          * The L3 entry has been verified to be present, and we've dealt with
          * 1G pages as well, so the L2 table cannot require allocation.
          */
-        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         pl2e += l2_table_offset(v);
 
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
@@ -5633,14 +5633,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 mfn_t mfn;
 
                 /* PSE: shatter the superpage and try again. */
-                mfn = alloc_xen_pagetable_new();
+                mfn = alloc_xen_pagetable();
                 if ( mfn_eq(mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
-                l1t = map_xen_pagetable_new(mfn);
+                l1t = map_xen_pagetable(mfn);
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5653,14 +5653,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 {
                     l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
                                                         __PAGE_HYPERVISOR));
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    UNMAP_XEN_PAGETABLE(l1t);
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 if ( l1t )
                 {
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
-                    free_xen_pagetable_new(mfn);
+                    UNMAP_XEN_PAGETABLE(l1t);
+                    free_xen_pagetable(mfn);
                 }
             }
         }
@@ -5674,7 +5674,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
              * present, and we've dealt with 2M pages as well, so the L1 table
              * cannot require allocation.
              */
-            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
             pl1e += l1_table_offset(v);
 
             /* Confirm the caller isn't trying to create new mappings. */
@@ -5686,7 +5686,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
-            UNMAP_XEN_PAGETABLE_NEW(pl1e);
+            UNMAP_XEN_PAGETABLE(pl1e);
             v += PAGE_SIZE;
 
             /*
@@ -5717,11 +5717,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             l1t_mfn = l2e_get_mfn(*pl2e);
-            l1t = map_xen_pagetable_new(l1t_mfn);
+            l1t = map_xen_pagetable(l1t_mfn);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                 if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            UNMAP_XEN_PAGETABLE(l1t);
             if ( i == L1_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L2E and free the L1 page. */
@@ -5729,7 +5729,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l1t_mfn);
+                free_xen_pagetable(l1t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5763,11 +5763,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             mfn_t l2t_mfn;
 
             l2t_mfn = l3e_get_mfn(*pl3e);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 if ( l2e_get_intpte(l2t[i]) != 0 )
                     break;
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L3E and free the L2 page. */
@@ -5775,14 +5775,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l2t_mfn);
+                free_xen_pagetable(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
@@ -5791,8 +5791,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return rc;
 }
 
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 39cb68f7da..02d7f1c27c 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -55,11 +55,11 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
     l1_pgentry_t *pl1e, *l1t;
 
     pl4e = l4start + l4_table_offset(vpt_start);
-    l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    l3t = map_xen_pagetable(l4e_get_mfn(*pl4e));
     pl3e = l3t + l3_table_offset(vpt_start);
-    l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    l2t = map_xen_pagetable(l3e_get_mfn(*pl3e));
     pl2e = l2t + l2_table_offset(vpt_start);
-    l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
     pl1e = l1t + l1_table_offset(vpt_start);
     for ( count = 0; count < nr_pt_pages; count++ )
     {
@@ -86,22 +86,22 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
             {
                 if ( !((unsigned long)++pl3e & (PAGE_SIZE - 1)) )
                 {
-                    UNMAP_XEN_PAGETABLE_NEW(l3t);
-                    l3t = map_xen_pagetable_new(l4e_get_mfn(*++pl4e));
+                    UNMAP_XEN_PAGETABLE(l3t);
+                    l3t = map_xen_pagetable(l4e_get_mfn(*++pl4e));
                     pl3e = l3t;
                 }
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
-                l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+                UNMAP_XEN_PAGETABLE(l2t);
+                l2t = map_xen_pagetable(l3e_get_mfn(*pl3e));
                 pl2e = l2t;
             }
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
-            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            UNMAP_XEN_PAGETABLE(l1t);
+            l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
             pl1e = l1t;
         }
     }
-    UNMAP_XEN_PAGETABLE_NEW(l1t);
-    UNMAP_XEN_PAGETABLE_NEW(l2t);
-    UNMAP_XEN_PAGETABLE_NEW(l3t);
+    UNMAP_XEN_PAGETABLE(l1t);
+    UNMAP_XEN_PAGETABLE(l2t);
+    UNMAP_XEN_PAGETABLE(l3t);
 }
 
 static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
@@ -695,9 +695,9 @@ int __init dom0_construct_pv(struct domain *d,
                 l3e_get_page(*l3tab)->u.inuse.type_info |= PGT_pae_xen_l2;
         }
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3start[3]));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3start[3]));
         init_xen_pae_l2_slots(l2t, d);
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
     }
 
     /* Pages that are part of page tables must be read only. */
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index cf638fa965..09c7766ec5 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -171,11 +171,11 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
 
-    pl3e = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    pl3e = map_xen_pagetable(l4e_get_mfn(*pl4e));
     pl3e += l3_table_offset(va);
-    pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
     pl2e += l2_table_offset(va);
-    pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
     pl1e += l1_table_offset(va);
 
     put_page_and_type(mfn_to_page(l1e_get_mfn(*pl1e)));
@@ -183,9 +183,9 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
     *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
                                                       : COMPAT_L1_PROT));
 
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
 }
 
 static void evtchn_reserve(struct domain *d, unsigned int port)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index d27bcf1724..e6d8dfb0b3 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1101,7 +1101,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                     continue;
                 *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) +
                                         xen_phys_start);
-                pl3e = l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+                pl3e = l3t = map_xen_pagetable(l4e_get_mfn(*pl4e));
                 for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
                 {
                     l2_pgentry_t *l2t;
@@ -1113,7 +1113,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         continue;
                     *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) +
                                             xen_phys_start);
-                    pl2e = l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+                    pl2e = l2t = map_xen_pagetable(l3e_get_mfn(*pl3e));
                     for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ )
                     {
                         /* Not present, PSE, or already relocated? */
@@ -1124,9 +1124,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         *pl2e = l2e_from_intpte(l2e_get_intpte(*pl2e) +
                                                 xen_phys_start);
                     }
-                    UNMAP_XEN_PAGETABLE_NEW(l2t);
+                    UNMAP_XEN_PAGETABLE(l2t);
                 }
-                UNMAP_XEN_PAGETABLE_NEW(l3t);
+                UNMAP_XEN_PAGETABLE(l3t);
             }
 
             /* The only data mappings to be relocated are in the Xen area. */
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 9fe0ef18a1..cbaff23f7e 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -687,7 +687,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         goto out;
     }
 
-    pl3e = map_xen_pagetable_new(
+    pl3e = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[root_table_offset(linear)]));
     pl3e += l3_table_offset(linear);
 
@@ -701,7 +701,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     }
     else
     {
-        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         pl2e += l2_table_offset(linear);
         flags = l2e_get_flags(*pl2e);
         ASSERT(flags & _PAGE_PRESENT);
@@ -713,7 +713,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
         else
         {
-            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
             pl1e += l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
@@ -725,13 +725,13 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
 
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
-        mfn_t l3t_mfn = alloc_xen_pagetable_new();
+        mfn_t l3t_mfn = alloc_xen_pagetable();
 
         if ( mfn_eq(l3t_mfn, INVALID_MFN) )
         {
@@ -739,20 +739,20 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             goto out;
         }
 
-        pl3e = map_xen_pagetable_new(l3t_mfn);
+        pl3e = map_xen_pagetable(l3t_mfn);
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
                   l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR));
     }
     else
-        pl3e = map_xen_pagetable_new(
+        pl3e = map_xen_pagetable(
             l4e_get_mfn(rpt[root_table_offset(linear)]));
 
     pl3e += l3_table_offset(linear);
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
-        mfn_t l2t_mfn = alloc_xen_pagetable_new();
+        mfn_t l2t_mfn = alloc_xen_pagetable();
 
         if ( mfn_eq(l2t_mfn, INVALID_MFN) )
         {
@@ -760,21 +760,21 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             goto out;
         }
 
-        pl2e = map_xen_pagetable_new(l2t_mfn);
+        pl2e = map_xen_pagetable(l2t_mfn);
         clear_page(pl2e);
         l3e_write(pl3e, l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE));
-        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
     }
 
     pl2e += l2_table_offset(linear);
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
-        mfn_t l1t_mfn = alloc_xen_pagetable_new();
+        mfn_t l1t_mfn = alloc_xen_pagetable();
 
         if ( mfn_eq(l1t_mfn, INVALID_MFN) )
         {
@@ -782,14 +782,14 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             goto out;
         }
 
-        pl1e = map_xen_pagetable_new(l1t_mfn);
+        pl1e = map_xen_pagetable(l1t_mfn);
         clear_page(pl1e);
         l2e_write(pl2e, l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE));
-        pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+        pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
     }
 
     pl1e += l1_table_offset(linear);
@@ -805,9 +805,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return rc;
 }
 
@@ -830,14 +830,14 @@ static int setup_cpu_root_pgt(unsigned int cpu)
         goto out;
     }
 
-    rpt_mfn = alloc_xen_pagetable_new();
+    rpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(rpt_mfn, INVALID_MFN) )
     {
         rc = -ENOMEM;
         goto out;
     }
 
-    rpt = map_xen_pagetable_new(rpt_mfn);
+    rpt = map_xen_pagetable(rpt_mfn);
     clear_page(rpt);
     per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
 
@@ -882,7 +882,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    UNMAP_XEN_PAGETABLE(rpt);
     return rc;
 }
 
@@ -898,7 +898,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
 
     per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
 
-    rpt = map_xen_pagetable_new(rpt_mfn);
+    rpt = map_xen_pagetable(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -911,7 +911,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
             continue;
 
         l3t_mfn = l4e_get_mfn(rpt[r]);
-        l3t = map_xen_pagetable_new(l3t_mfn);
+        l3t = map_xen_pagetable(l3t_mfn);
 
         for ( i3 = 0; i3 < L3_PAGETABLE_ENTRIES; ++i3 )
         {
@@ -924,7 +924,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
 
             ASSERT(!(l3e_get_flags(l3t[i3]) & _PAGE_PSE));
             l2t_mfn = l3e_get_mfn(l3t[i3]);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
 
             for ( i2 = 0; i2 < L2_PAGETABLE_ENTRIES; ++i2 )
             {
@@ -932,34 +932,34 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
+                free_xen_pagetable(l2e_get_mfn(l2t[i2]));
             }
 
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
-            free_xen_pagetable_new(l2t_mfn);
+            UNMAP_XEN_PAGETABLE(l2t);
+            free_xen_pagetable(l2t_mfn);
         }
 
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
-        free_xen_pagetable_new(l3t_mfn);
+        UNMAP_XEN_PAGETABLE(l3t);
+        free_xen_pagetable(l3t_mfn);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(rpt);
-    free_xen_pagetable_new(rpt_mfn);
+    UNMAP_XEN_PAGETABLE(rpt);
+    free_xen_pagetable(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
     {
-        l3_pgentry_t *l3t = map_xen_pagetable_new(l4e_get_mfn(common_pgt));
-        l2_pgentry_t *l2t = map_xen_pagetable_new(
+        l3_pgentry_t *l3t = map_xen_pagetable(l4e_get_mfn(common_pgt));
+        l2_pgentry_t *l2t = map_xen_pagetable(
             l3e_get_mfn(l3t[l3_table_offset(stub_linear)]));
-        l1_pgentry_t *l1t = map_xen_pagetable_new(
+        l1_pgentry_t *l1t = map_xen_pagetable(
             l2e_get_mfn(l2t[l2_table_offset(stub_linear)]));
 
         l1t[l1_table_offset(stub_linear)] = l1e_empty();
 
-        UNMAP_XEN_PAGETABLE_NEW(l1t);
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l1t);
+        UNMAP_XEN_PAGETABLE(l2t);
+        UNMAP_XEN_PAGETABLE(l3t);
     }
 }
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index a1c69d7f0e..842548b925 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -134,7 +134,7 @@ static int m2p_mapped(unsigned long spfn)
     int rc = M2P_NO_MAPPED;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
@@ -150,15 +150,15 @@ static int m2p_mapped(unsigned long spfn)
             rc = M2P_NO_MAPPED;
             goto out;
     }
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
     if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
         rc = M2P_2M_MAPPED;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return rc;
 }
 
@@ -176,10 +176,10 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
 
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
@@ -187,9 +187,9 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
         {
             n = L1_PAGETABLE_ENTRIES;
 
-            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2t = map_xen_pagetable(l3e_get_mfn(l3e));
             l2e = l2t[l2_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
 
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
@@ -211,17 +211,17 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
 
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
@@ -252,12 +252,12 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     for ( i = smap; i < emap; )
@@ -280,8 +280,8 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
         i += 1UL << (L2_PAGETABLE_SHIFT - 2);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     return;
 }
@@ -292,7 +292,7 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]));
 
     /*
@@ -315,13 +315,13 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = map_xen_pagetable_new(
+        l2_ro_mpt = map_xen_pagetable(
             l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
         if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                     (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
-            UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+            UNMAP_XEN_PAGETABLE(l2_ro_mpt);
             continue;
         }
 
@@ -332,17 +332,17 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
 
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2t = map_xen_pagetable_new(
+            l2t = map_xen_pagetable(
                 l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
             l2e_write(&l2t[l2_table_offset(va)], l2e_empty());
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
         }
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
-        UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+        UNMAP_XEN_PAGETABLE(l2_ro_mpt);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     destroy_compat_m2p_mapping(info);
 
@@ -382,12 +382,12 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 
     va = HIRO_COMPAT_MPT_VIRT_START +
          smap * sizeof(*compat_machine_to_phys_mapping);
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
@@ -427,8 +427,8 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 #undef CNT
 #undef MFN
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return err;
 }
 
@@ -449,7 +449,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             & _PAGE_PRESENT);
     l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
                                         RO_MPT_VIRT_START)]);
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -505,23 +505,23 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
               _PAGE_PRESENT )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                UNMAP_XEN_PAGETABLE(l2_ro_mpt);
                 l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
-                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
                 ASSERT(l2_ro_mpt);
                 pl2e = l2_ro_mpt + l2_table_offset(va);
             }
             else
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-                l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+                UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+                l2_ro_mpt_mfn = alloc_xen_pagetable();
                 if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
-                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
                 clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                           l3e_from_mfn(l2_ro_mpt_mfn,
@@ -541,8 +541,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ret = setup_compat_m2p_table(info);
 error:
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return ret;
 }
 
@@ -569,23 +569,23 @@ void __init paging_init(void)
             l3_pgentry_t *pl3t;
             mfn_t mfn;
 
-            mfn = alloc_xen_pagetable_new();
+            mfn = alloc_xen_pagetable();
             if ( mfn_eq(mfn, INVALID_MFN) )
                 goto nomem;
 
-            pl3t = map_xen_pagetable_new(mfn);
+            pl3t = map_xen_pagetable(mfn);
             clear_page(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
                       l4e_from_mfn(mfn, __PAGE_HYPERVISOR_RW));
-            UNMAP_XEN_PAGETABLE_NEW(pl3t);
+            UNMAP_XEN_PAGETABLE(pl3t);
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    l3_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l3_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l3_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
     clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
               l4e_from_mfn(l3_ro_mpt_mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
@@ -675,13 +675,13 @@ void __init paging_init(void)
              * Unmap l2_ro_mpt, which could've been mapped in previous
              * iteration.
              */
-            unmap_xen_pagetable_new(l2_ro_mpt);
+            unmap_xen_pagetable(l2_ro_mpt);
 
-            l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+            l2_ro_mpt_mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 goto nomem;
 
-            l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+            l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
             clear_page(l2_ro_mpt);
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_mfn(l2_ro_mpt_mfn,
@@ -697,8 +697,8 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
@@ -706,12 +706,12 @@ void __init paging_init(void)
 
     l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
                                         HIRO_COMPAT_MPT_VIRT_START)]);
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
 
-    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l2_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
-    l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+    l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
     compat_idle_pg_table_l2 = map_domain_page_global(l2_ro_mpt_mfn);
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
@@ -757,8 +757,8 @@ void __init paging_init(void)
 #undef CNT
 #undef MFN
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     machine_to_phys_mapping_valid = 1;
 
@@ -816,10 +816,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
     while (sva < eva)
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(sva)]));
         l3e = l3t[l3_table_offset(sva)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -828,9 +828,9 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(sva)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
         ASSERT(l2e_get_flags(l2e) & _PAGE_PRESENT);
 
         if ( (l2e_get_flags(l2e) & (_PAGE_PRESENT | _PAGE_PSE)) ==
@@ -848,10 +848,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
 #ifndef NDEBUG
         {
-            l1_pgentry_t *l1t = map_xen_pagetable_new(l2e_get_mfn(l2e));
+            l1_pgentry_t *l1t = map_xen_pagetable(l2e_get_mfn(l2e));
             ASSERT(l1e_get_flags(l1t[l1_table_offset(sva)]) &
                    _PAGE_PRESENT);
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            UNMAP_XEN_PAGETABLE(l1t);
         }
 #endif
          sva = (sva & ~((1UL << PAGE_SHIFT) - 1)) +
@@ -942,10 +942,10 @@ void __init subarch_init_memory(void)
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
 
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
@@ -953,9 +953,9 @@ void __init subarch_init_memory(void)
         {
             n = L1_PAGETABLE_ENTRIES;
 
-            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2t = map_xen_pagetable(l3e_get_mfn(l3e));
             l2e = l2t[l2_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
 
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
@@ -975,17 +975,17 @@ void __init subarch_init_memory(void)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
 
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
@@ -1036,18 +1036,18 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
               (v < (unsigned long)(machine_to_phys_mapping + max_page));
               i++, v += 1UL << L2_PAGETABLE_SHIFT )
         {
-            l3t = map_xen_pagetable_new(
+            l3t = map_xen_pagetable(
                 l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
             l3e = l3t[l3_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            UNMAP_XEN_PAGETABLE(l3t);
 
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 mfn = last_mfn;
             else if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
             {
-                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2t = map_xen_pagetable(l3e_get_mfn(l3e));
                 l2e = l2t[l2_table_offset(v)];
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 if ( l2e_get_flags(l2e) & _PAGE_PRESENT )
                     mfn = l2e_get_pfn(l2e);
                 else
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index f55d6a6d76..d47067c998 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1443,20 +1443,20 @@ static __init void copy_mapping(l4_pgentry_t *l4,
         {
             mfn_t l3t_mfn;
 
-            l3t_mfn = alloc_xen_pagetable_new();
+            l3t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
-            l3dst = map_xen_pagetable_new(l3t_mfn);
+            l3dst = map_xen_pagetable(l3t_mfn);
             clear_page(l3dst);
             l4[l4_table_offset(mfn << PAGE_SHIFT)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            l3dst = map_xen_pagetable_new(l4e_get_mfn(l4e));
-        l3src = map_xen_pagetable_new(
+            l3dst = map_xen_pagetable(l4e_get_mfn(l4e));
+        l3src = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
-        UNMAP_XEN_PAGETABLE_NEW(l3src);
-        UNMAP_XEN_PAGETABLE_NEW(l3dst);
+        UNMAP_XEN_PAGETABLE(l3src);
+        UNMAP_XEN_PAGETABLE(l3dst);
     }
 }
 
@@ -1604,9 +1604,9 @@ void __init efi_init_memory(void)
                                  mdesc_ver, efi_memmap);
 #else
     /* Set up 1:1 page tables to do runtime calls in "physical" mode. */
-    efi_l4_mfn = alloc_xen_pagetable_new();
+    efi_l4_mfn = alloc_xen_pagetable();
     BUG_ON(mfn_eq(efi_l4_mfn, INVALID_MFN));
-    efi_l4_pgtable = map_xen_pagetable_new(efi_l4_mfn);
+    efi_l4_pgtable = map_xen_pagetable(efi_l4_mfn);
     clear_page(efi_l4_pgtable);
 
     copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
@@ -1641,31 +1641,31 @@ void __init efi_init_memory(void)
         {
             mfn_t l3t_mfn;
 
-            l3t_mfn = alloc_xen_pagetable_new();
+            l3t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
-            pl3e = map_xen_pagetable_new(l3t_mfn);
+            pl3e = map_xen_pagetable(l3t_mfn);
             clear_page(pl3e);
             efi_l4_pgtable[l4_table_offset(addr)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = map_xen_pagetable_new(l4e_get_mfn(l4e));
+            pl3e = map_xen_pagetable(l4e_get_mfn(l4e));
         pl3e += l3_table_offset(addr);
 
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
             mfn_t l2t_mfn;
 
-            l2t_mfn = alloc_xen_pagetable_new();
+            l2t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l2t_mfn, INVALID_MFN));
-            pl2e = map_xen_pagetable_new(l2t_mfn);
+            pl2e = map_xen_pagetable(l2t_mfn);
             clear_page(pl2e);
             *pl3e = l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+            pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         }
         pl2e += l2_table_offset(addr);
 
@@ -1673,16 +1673,16 @@ void __init efi_init_memory(void)
         {
             mfn_t l1t_mfn;
 
-            l1t_mfn = alloc_xen_pagetable_new();
+            l1t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l1t_mfn, INVALID_MFN));
-            l1t = map_xen_pagetable_new(l1t_mfn);
+            l1t = map_xen_pagetable(l1t_mfn);
             clear_page(l1t);
             *pl2e = l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
         }
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
@@ -1695,9 +1695,9 @@ void __init efi_init_memory(void)
             xfree(extra);
         }
 
-        UNMAP_XEN_PAGETABLE_NEW(l1t);
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(l1t);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
     /* Insert Xen mappings. */
@@ -1706,7 +1706,7 @@ void __init efi_init_memory(void)
         efi_l4_pgtable[i] = idle_pg_table[i];
 #endif
 
-    UNMAP_XEN_PAGETABLE_NEW(efi_l4_pgtable);
+    UNMAP_XEN_PAGETABLE(efi_l4_pgtable);
 }
 #endif
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4fb79ab8f0..a4b3c9b7af 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -631,15 +631,15 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int nr_frames, xen_pfn_t mfn_list[]);
 
 /* Allocator functions for Xen pagetables. */
-mfn_t alloc_xen_pagetable_new(void);
-void *map_xen_pagetable_new(mfn_t mfn);
-void unmap_xen_pagetable_new(void *v);
-void free_xen_pagetable_new(mfn_t mfn);
-
-#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
-    do {                                \
-        unmap_xen_pagetable_new((ptr)); \
-        (ptr) = NULL;                   \
+mfn_t alloc_xen_pagetable(void);
+void *map_xen_pagetable(mfn_t mfn);
+void unmap_xen_pagetable(void *v);
+void free_xen_pagetable(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE(ptr)    \
+    do {                            \
+        unmap_xen_pagetable((ptr)); \
+        (ptr) = NULL;               \
     } while (0)
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 63+ messages in thread

* Re: [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe
  2019-09-30 10:32 ` [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Hongyan Xia
@ 2019-09-30 15:05   ` Wei Liu
  0 siblings, 0 replies; 63+ messages in thread
From: Wei Liu @ 2019-09-30 15:05 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, Wei Liu, Jan Beulich, Andrew Cooper

On Mon, Sep 30, 2019 at 11:32:53AM +0100, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Defer the call to clear_page to the point when we're sure the page is
> going to become a page table.
> 
> This is a minor optimisation. No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

The benefit of this patch was questionable, because it caused the lock to
be held for longer. I have decided to drop it.

Wei.


* Re: [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs in paging_init
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs " Hongyan Xia
@ 2019-10-01 11:51   ` Wei Liu
  2019-10-01 13:39     ` Hongyan Xia
  0 siblings, 1 reply; 63+ messages in thread
From: Wei Liu @ 2019-10-01 11:51 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, Wei Liu, Jan Beulich, Andrew Cooper

On Mon, Sep 30, 2019 at 11:33:14AM +0100, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyax@amazon.com>
> 
> ---
> Changed since v1:
>   * use a global mapping for compat_idle_pg_table_l2, otherwise
>     l2_ro_mpt will unmap it.

Hmmm... I wonder why XTF didn't catch this.

If we really want to go all the way to eliminate persistent mappings
for page tables, the code should be changed such that:

1. compat_idle_pg_table_l2 should be changed to store mfn, not va.
2. map and unmap that mfn when access to the compat page table is
   required.
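
Something along these lines, as a toy model (every type and name below is an
illustrative stand-in, not actual Xen code: frame_pool plays the role of
machine frames, and map_frame/unmap_frame mimic the
map_xen_pagetable_new()/unmap_xen_pagetable_new() pair):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for Xen's mfn_t: an opaque machine frame number. */
typedef struct { uintptr_t m; } mfn_t;

/* Pretend frame pool: four "frames" of eight entries each. */
static uint64_t frame_pool[4][8];

/* Step 1: the global holds an MFN, not a cached virtual address. */
static mfn_t compat_l2_mfn = { UINTPTR_MAX };

static uint64_t *map_frame(mfn_t mfn)   /* cf. map_xen_pagetable_new() */
{
    assert(mfn.m < 4);
    return frame_pool[mfn.m];
}

static void unmap_frame(uint64_t *va)   /* cf. unmap_xen_pagetable_new() */
{
    (void)va;  /* a real implementation would tear down the mapping */
}

/* Step 2: every access maps and unmaps the frame around the use. */
static void write_compat_entry(unsigned int idx, uint64_t e)
{
    uint64_t *l2t = map_frame(compat_l2_mfn);

    l2t[idx] = e;
    unmap_frame(l2t);
}
```

i.e. no virtual address for the compat table outlives a single access.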

Wei.


* Re: [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs
  2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
                   ` (54 preceding siblings ...)
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 55/55] x86/mm: drop _new suffix for page table APIs Hongyan Xia
@ 2019-10-01 11:56 ` Wei Liu
  55 siblings, 0 replies; 63+ messages in thread
From: Wei Liu @ 2019-10-01 11:56 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, Wei Liu, Jan Beulich, Andrew Cooper

On Mon, Sep 30, 2019 at 11:32:52AM +0100, Hongyan Xia wrote:
> This series is mostly Wei's effort to switch from xenheap to domheap for
> Xen page tables. In addition, I have also merged several bug fixes from
> my "Remove direct map from Xen" series [1]. As the title suggests, this
> series switches from xenheap to domheap for Xen PTEs.
> 
> This is needed to achieve the ultimate goal of removing the
> always-mapped direct map from Xen. To work without an always-mapped
> direct map, Xen PTE manipulations themselves must not rely on it.
> Unfortunately, PTE APIs currently use the xenheap that does not work
> without the direct map. By switching to domheap APIs, it is much easier
> for us to break the reliance on the direct map later on, not only for
> PTEs but for all other memory allocations as well.
> 
> I have broken down the direct map removal series into two. This series
> is the first batch. The patches change the life cycle of Xen PTEs from
> alloc-free to alloc-map-unmap-free, which means PTEs must be explicitly
> mapped and unmapped. This also makes sense to be the first batch from a
> stability PoV, since this is just an API change and the direct map has
> not been actually removed. Further, the map and unmap in the release
> build use the direct map as a fast path, so there is also no performance
> degradation in a release build.
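
The alloc-map-unmap-free life cycle can be pictured with this toy sketch
(stand-in types and helpers only, not the series' real implementation; the
pointer-poisoning macro mirrors what UNMAP_XEN_PAGETABLE_NEW does, and the
trivial pt_map translation stands in for the release-build direct-map fast
path):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Stand-in for Xen's mfn_t: an opaque machine frame number. */
typedef struct { uintptr_t m; } mfn_t;

static mfn_t pt_alloc(void)             /* cf. alloc_xen_pagetable_new() */
{
    void *p = calloc(512, sizeof(uint64_t));

    return (mfn_t){ p ? (uintptr_t)p : UINTPTR_MAX };
}

static uint64_t *pt_map(mfn_t mfn)      /* cf. map_xen_pagetable_new() */
{
    return (uint64_t *)mfn.m;           /* "direct map": translation is free */
}

static void pt_unmap(uint64_t *va)      /* cf. unmap_xen_pagetable_new() */
{
    (void)va;
}

static void pt_free(mfn_t mfn)          /* cf. free_xen_pagetable_new() */
{
    free((void *)mfn.m);
}

/* Unmap and poison the pointer, so a stale VA cannot be dereferenced
 * after the unmap. */
#define PT_UNMAP(ptr)   \
    do {                \
        pt_unmap(ptr);  \
        (ptr) = NULL;   \
    } while ( 0 )
```

A page table then goes alloc -> map -> (use) -> unmap -> free, instead of
being dereferenced through a permanently mapped heap address.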
> 
> I have tested both debug and release build on bare-metal and nested
> virtualisation. I am able to run PV and HVM guests and XTF tests without
> crashes so far on x86. I am able to build on AArch64.
> 
> This series is at https://xenbits.xen.org/git-http/people/hx242/xen.git,
> xen_pte_map branch.
> 
> ---
> Changed since v1:
> - squash some commits
> - merge bug fixes into this first batch
> - rebase against latest master

FYI in the future it is better to rebase against staging.

Wei.


* Re: [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs in paging_init
  2019-10-01 11:51   ` Wei Liu
@ 2019-10-01 13:39     ` Hongyan Xia
  0 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-10-01 13:39 UTC (permalink / raw)
  To: Wei Liu; +Cc: xen-devel, Roger Pau Monné, Jan Beulich, Andrew Cooper

On 01/10/2019 12:51, Wei Liu wrote:
> On Mon, Sep 30, 2019 at 11:33:14AM +0100, Hongyan Xia wrote:
>> From: Wei Liu <wei.liu2@citrix.com>
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> Signed-off-by: Hongyan Xia <hongyax@amazon.com>
>>
>> ---
>> Changed since v1:
>>    * use a global mapping for compat_idle_pg_table_l2, otherwise
>>      l2_ro_mpt will unmap it.
> 
> Hmmm... I wonder why XTF didn't catch this.
> 

Well, probably because this only shows up once we actually remove all the
fast paths and the direct map. With just this batch applied, unmapping
through the direct map is a no-op. I caught this with my later patches.

> If we really want to go all the way to eliminate persistent mappings
> for page tables, the code should be changed such that:
> 
> 1. compat_idle_pg_table_l2 should be changed to store mfn, not va.
> 2. map and unmap that mfn when access to the compat page table is
>     required.
> 

Sounds sensible and more consistent with other PTEs.

Hongyan


* Re: [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs
  2019-09-30 10:33 ` [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs Hongyan Xia
@ 2019-10-01 13:54   ` Hongyan Xia
  2019-10-01 15:20     ` Wei Liu
  0 siblings, 1 reply; 63+ messages in thread
From: Hongyan Xia @ 2019-10-01 13:54 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Jan Beulich, Wei Liu, Roger Pau Monné

On 30/09/2019 11:33, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> This then requires moving declaration of root page table mfn into mm.h
> and modifying setup_cpu_root_pgt to have a single exit path.
> 
> We also need to force map_domain_page to use direct map when switching
> per-domain mappings. This is contrary to our end goal of removing
> direct map, but this will be removed once we make map_domain_page
> context-switch safe in another (large) patch series.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>   xen/arch/x86/domain.c           | 15 ++++++++++---
>   xen/arch/x86/domain_page.c      |  2 +-
>   xen/arch/x86/mm.c               |  2 +-
>   xen/arch/x86/pv/domain.c        |  2 +-
>   xen/arch/x86/smpboot.c          | 40 ++++++++++++++++++++++-----------
>   xen/include/asm-x86/mm.h        |  2 ++
>   xen/include/asm-x86/processor.h |  2 +-
>   7 files changed, 45 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index dbdf6b1bc2..e9bf47efce 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -69,6 +69,7 @@
>   #include <asm/pv/domain.h>
>   #include <asm/pv/mm.h>
>   #include <asm/spec_ctrl.h>
> +#include <asm/setup.h>
>   
>   DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>   
> @@ -1580,12 +1581,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
>   
>   void paravirt_ctxt_switch_to(struct vcpu *v)
>   {
> -    root_pgentry_t *root_pgt = this_cpu(root_pgt);
> +    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
>   
> -    if ( root_pgt )
> -        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
> +    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
> +    {
> +        root_pgentry_t *rpt;
> +
> +        mapcache_override_current(INVALID_VCPU);
> +        rpt = map_xen_pagetable_new(rpt_mfn);
> +        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
>               l4e_from_page(v->domain->arch.perdomain_l3_pg,
>                             __PAGE_HYPERVISOR_RW);
> +        UNMAP_XEN_PAGETABLE_NEW(rpt);
> +        mapcache_override_current(NULL);
> +    }
>   
>       if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
>           activate_debugregs(v);

I am having second thoughts on whether I should include this patch for now. 
Obviously the per-domain mapcache in its current form cannot be used here 
during the context switch. However, I also don't want to use PMAP because it is 
just a bootstrapping mechanism and may result in heavy lock contention here.

I am inclined to drop it for now and include this after we have a 
context-switch safe mapping mechanism, as the commit message suggests.

Hongyan


* Re: [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs
  2019-10-01 13:54   ` Hongyan Xia
@ 2019-10-01 15:20     ` Wei Liu
  2019-10-01 15:31       ` Hongyan Xia
  0 siblings, 1 reply; 63+ messages in thread
From: Wei Liu @ 2019-10-01 15:20 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, Jan Beulich, Wei Liu, Andrew Cooper

On Tue, Oct 01, 2019 at 02:54:19PM +0100, Hongyan Xia wrote:
> On 30/09/2019 11:33, Hongyan Xia wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > 
> > This then requires moving declaration of root page table mfn into mm.h
> > and modifying setup_cpu_root_pgt to have a single exit path.
> > 
> > We also need to force map_domain_page to use direct map when switching
> > per-domain mappings. This is contrary to our end goal of removing
> > direct map, but this will be removed once we make map_domain_page
> > context-switch safe in another (large) patch series.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >   xen/arch/x86/domain.c           | 15 ++++++++++---
> >   xen/arch/x86/domain_page.c      |  2 +-
> >   xen/arch/x86/mm.c               |  2 +-
> >   xen/arch/x86/pv/domain.c        |  2 +-
> >   xen/arch/x86/smpboot.c          | 40 ++++++++++++++++++++++-----------
> >   xen/include/asm-x86/mm.h        |  2 ++
> >   xen/include/asm-x86/processor.h |  2 +-
> >   7 files changed, 45 insertions(+), 20 deletions(-)
> > 
> > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > index dbdf6b1bc2..e9bf47efce 100644
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -69,6 +69,7 @@
> >   #include <asm/pv/domain.h>
> >   #include <asm/pv/mm.h>
> >   #include <asm/spec_ctrl.h>
> > +#include <asm/setup.h>
> >   DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
> > @@ -1580,12 +1581,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
> >   void paravirt_ctxt_switch_to(struct vcpu *v)
> >   {
> > -    root_pgentry_t *root_pgt = this_cpu(root_pgt);
> > +    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
> > -    if ( root_pgt )
> > -        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
> > +    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
> > +    {
> > +        root_pgentry_t *rpt;
> > +
> > +        mapcache_override_current(INVALID_VCPU);
> > +        rpt = map_xen_pagetable_new(rpt_mfn);
> > +        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
> >               l4e_from_page(v->domain->arch.perdomain_l3_pg,
> >                             __PAGE_HYPERVISOR_RW);
> > +        UNMAP_XEN_PAGETABLE_NEW(rpt);
> > +        mapcache_override_current(NULL);
> > +    }
> >       if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
> >           activate_debugregs(v);
> 
> I am having second thoughts on whether I should include this patch for now.
> Obviously the per-domain mapcache in its current form cannot be used here
> during the context switch. However, I also don't want to use PMAP because it
> is just a bootstrapping mechanism and may result in heavy lock contention
> here.
> 
> I am inclined to drop it for now and include this after we have a
> context-switch safe mapping mechanism, as the commit message suggests.
> 

Dropping this patch is of course fine. Then you need to consider how to
make the rest of the series remain applicable to staging.

I guess the plan in the short term is to keep a global mapping for each
root page table, right?

Wei.


* Re: [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs
  2019-10-01 15:20     ` Wei Liu
@ 2019-10-01 15:31       ` Hongyan Xia
  0 siblings, 0 replies; 63+ messages in thread
From: Hongyan Xia @ 2019-10-01 15:31 UTC (permalink / raw)
  To: Wei Liu; +Cc: xen-devel, Roger Pau Monné, Jan Beulich, Andrew Cooper

On 01/10/2019 16:20, Wei Liu wrote:
> On Tue, Oct 01, 2019 at 02:54:19PM +0100, Hongyan Xia wrote:
>> On 30/09/2019 11:33, Hongyan Xia wrote:
>>> From: Wei Liu <wei.liu2@citrix.com>
>>>
>>> This then requires moving declaration of root page table mfn into mm.h
>>> and modifying setup_cpu_root_pgt to have a single exit path.
>>>
>>> We also need to force map_domain_page to use direct map when switching
>>> per-domain mappings. This is contrary to our end goal of removing
>>> direct map, but this will be removed once we make map_domain_page
>>> context-switch safe in another (large) patch series.
>>>
>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>> ---
>>>    xen/arch/x86/domain.c           | 15 ++++++++++---
>>>    xen/arch/x86/domain_page.c      |  2 +-
>>>    xen/arch/x86/mm.c               |  2 +-
>>>    xen/arch/x86/pv/domain.c        |  2 +-
>>>    xen/arch/x86/smpboot.c          | 40 ++++++++++++++++++++++-----------
>>>    xen/include/asm-x86/mm.h        |  2 ++
>>>    xen/include/asm-x86/processor.h |  2 +-
>>>    7 files changed, 45 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>>> index dbdf6b1bc2..e9bf47efce 100644
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -69,6 +69,7 @@
>>>    #include <asm/pv/domain.h>
>>>    #include <asm/pv/mm.h>
>>>    #include <asm/spec_ctrl.h>
>>> +#include <asm/setup.h>
>>>    DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>>> @@ -1580,12 +1581,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
>>>    void paravirt_ctxt_switch_to(struct vcpu *v)
>>>    {
>>> -    root_pgentry_t *root_pgt = this_cpu(root_pgt);
>>> +    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
>>> -    if ( root_pgt )
>>> -        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
>>> +    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
>>> +    {
>>> +        root_pgentry_t *rpt;
>>> +
>>> +        mapcache_override_current(INVALID_VCPU);
>>> +        rpt = map_xen_pagetable_new(rpt_mfn);
>>> +        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
>>>                l4e_from_page(v->domain->arch.perdomain_l3_pg,
>>>                              __PAGE_HYPERVISOR_RW);
>>> +        UNMAP_XEN_PAGETABLE_NEW(rpt);
>>> +        mapcache_override_current(NULL);
>>> +    }
>>>        if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
>>>            activate_debugregs(v);
>>
>> I am having second thoughts on whether I should include this patch for now.
>> Obviously the per-domain mapcache in its current form cannot be used here
>> during the context switch. However, I also don't want to use PMAP because it
>> is just a bootstrapping mechanism and may result in heavy lock contention
>> here.
>>
>> I am inclined to drop it for now and include this after we have a
>> context-switch safe mapping mechanism, as the commit message suggests.
>>
> 
> Dropping this patch is of course fine. Then you need to consider how to
> make the rest of the series remain applicable to staging.

I will make sure the series still applies after dropping it.

> 
> I guess the plan in the short term is to keep a global mapping for each
> root page table, right?

Yes. I have changed rpts to be xenheap pages in my next revision, which so far 
works happily without the direct map.

> 
> Wei.
> 




end of thread, other threads:[~2019-10-01 15:32 UTC | newest]

Thread overview: 63+ messages
2019-09-30 10:32 [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Hongyan Xia
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Hongyan Xia
2019-09-30 15:05   ` Wei Liu
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 02/55] x86: move some xen mm function declarations Hongyan Xia
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 03/55] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 06/55] x86/mm: map_pages_to_xen should have one exit path Hongyan Xia
2019-09-30 10:32 ` [Xen-devel] [PATCH v2 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 13/55] x86/mm: rewrite virt_to_xen_l3e Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 14/55] x86/mm: rewrite xen_to_virt_l2e Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 15/55] x86/mm: rewrite virt_to_xen_l1e Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 16/55] x86/mm: switch to new APIs in map_pages_to_xen Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 17/55] x86/mm: drop lXe_to_lYe invocations " Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 18/55] x86/mm: switch to new APIs in modify_xen_mappings Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 20/55] x86/mm: switch to new APIs in arch_init_memory Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 21/55] x86_64/mm: introduce pl2e in paging_init Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 22/55] x86_64/mm: switch to new APIs " Hongyan Xia
2019-10-01 11:51   ` Wei Liu
2019-10-01 13:39     ` Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 25/55] x86_64/mm: introduce pl2e " Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 26/55] x86_64/mm: switch to new APIs " Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 28/55] efi: use new page table APIs in copy_mapping Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 29/55] efi: avoid using global variable " Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 30/55] efi: use new page table APIs in efi_init_memory Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 31/55] efi: add emacs block to boot.c Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 32/55] efi: switch EFI L4 table to use new APIs Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 33/55] x86/smpboot: add emacs block Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 34/55] x86/smpboot: clone_mapping should have one exit path Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 36/55] x86/smpboot: switch pl2e " Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 37/55] x86/smpboot: switch pl1e " Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 39/55] x86: switch root_pgt to mfn_t and use new APIs Hongyan Xia
2019-10-01 13:54   ` Hongyan Xia
2019-10-01 15:20     ` Wei Liu
2019-10-01 15:31       ` Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 40/55] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 52/55] x86: remove lXe_to_lYe in __start_xen Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 53/55] x86/mm: drop old page table APIs Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 54/55] x86: switch to use domheap page for page tables Hongyan Xia
2019-09-30 10:33 ` [Xen-devel] [PATCH v2 55/55] x86/mm: drop _new suffix for page table APIs Hongyan Xia
2019-10-01 11:56 ` [Xen-devel] [PATCH v2 00/55] Switch to domheap for Xen PTEs Wei Liu
