* [PATCH RFC 00/55] x86: use domheap page for xen page tables
@ 2019-02-07 16:44 Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Wei Liu
                   ` (56 more replies)
  0 siblings, 57 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Wei Liu

This series switches Xen's page tables from xenheap pages to domheap pages.

This is required so that, when we later implement the xenheap on top of vmap,
there is no circular dependency between the two.

It is done in roughly three steps:

1. Introduce a new set of APIs and implement the old APIs on top of them
   (see the usage sketch below). The new APIs still use xenheap pages at
   this stage.
2. Switch each site that manipulates page tables over to the new APIs.
3. Switch the new APIs to use domheap pages.

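For illustration, a minimal sketch of how a converted site is expected to
use the new APIs (names as introduced in patch 3; error handling trimmed):

    mfn_t mfn;
    l1_pgentry_t *l1t;

    mfn = alloc_xen_pagetable_new();
    if ( mfn_eq(mfn, INVALID_MFN) )
        return -ENOMEM;

    l1t = map_xen_pagetable_new(mfn);
    clear_page(l1t);
    /* ... fill in or modify entries through l1t ... */
    UNMAP_XEN_PAGETABLE_NEW(l1t);     /* unmaps and NULLs the pointer */
    /* free_xen_pagetable_new(mfn); only if the table was not installed */
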
You can find the series at:

  https://xenbits.xen.org/git-http/people/liuw/xen.git xen-pt-allocation-1

Wei.

Wei Liu (55):
  x86/mm: defer clearing page in virt_to_xen_lXe
  x86: move some xen mm function declarations
  x86: introduce a new set of APIs to manage Xen page tables
  x86/mm: introduce l{1,2}t local variables to map_pages_to_xen
  x86/mm: introduce l{1,2}t local variables to modify_xen_mappings
  x86/mm: map_pages_to_xen should have one exit path
  x86/mm: add an end_of_loop label in map_pages_to_xen
  x86/mm: make sure there is one exit path for modify_xen_mappings
  x86/mm: add an end_of_loop label in modify_xen_mappings
  x86/mm: change pl2e to l2t in virt_to_xen_l2e
  x86/mm: change pl1e to l1t in virt_to_xen_l1e
  x86/mm: change pl3e to l3t in virt_to_xen_l3e
  x86/mm: rewrite virt_to_xen_l3e
  x86/mm: rewrite virt_to_xen_l2e
  x86/mm: rewrite virt_to_xen_l1e
  x86/mm: switch to new APIs in map_pages_to_xen
  x86/mm: drop lXe_to_lYe invocations in map_pages_to_xen
  x86/mm: switch to new APIs in modify_xen_mappings
  x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings
  x86/mm: switch to new APIs in arch_init_memory
  x86_64/mm: introduce pl2e in paging_init
  x86_64/mm: switch to new APIs in paging_init
  x86_64/mm: drop l4e_to_l3e invocation from paging_init
  x86_64/mm.c: remove code that serves no purpose in setup_m2p_table
  x86_64/mm: introduce pl2e in setup_m2p_table
  x86_64/mm: switch to new APIs in setup_m2p_table
  x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table
  efi: use new page table APIs in copy_mapping
  efi: avoid using global variable in copy_mapping
  efi: use new page table APIs in efi_init_memory
  efi: add emacs block to boot.c
  efi: switch EFI L4 table to use new APIs
  x86/smpboot: add emacs block
  x86/smpboot: clone_mapping should have one exit path
  x86/smpboot: switch pl3e to use new APIs in clone_mapping
  x86/smpboot: switch pl2e to use new APIs in clone_mapping
  x86/smpboot: switch pl1e to use new APIs in clone_mapping
  x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt
  x86: switch root_pgt to mfn_t and use new APIs
  x86/shim: map and unmap page tables in replace_va_mapping
  x86_64/mm: map and unmap page tables in m2p_mapped
  x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  x86_64/mm: map and unmap page tables in setup_compat_m2p_table
  x86_64/mm: map and unmap page tables in cleanup_frame_table
  x86_64/mm: map and unmap page tables in subarch_init_memory
  x86_64/mm: map and unmap page tables in subarch_memory_op
  x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt
  x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly
  x86/pv: properly map and unmap page table in dom0_construct_pv
  x86: remove lXe_to_lYe in __start_xen
  x86/mm: drop old page table APIs
  x86: switch to use domheap page for page tables
  x86/mm: drop _new suffix from page table APIs

 xen/arch/x86/domain.c           |  15 +-
 xen/arch/x86/domain_page.c      |  12 +-
 xen/arch/x86/efi/runtime.h      |  12 +-
 xen/arch/x86/mm.c               | 485 ++++++++++++++++++++++++++++------------
 xen/arch/x86/pv/dom0_build.c    |  41 ++--
 xen/arch/x86/pv/domain.c        |   2 +-
 xen/arch/x86/pv/shim.c          |  20 +-
 xen/arch/x86/setup.c            |  10 +-
 xen/arch/x86/smpboot.c          | 171 ++++++++++----
 xen/arch/x86/x86_64/mm.c        | 265 +++++++++++++++-------
 xen/common/efi/boot.c           |  84 +++++--
 xen/common/efi/efi.h            |   3 +-
 xen/common/efi/runtime.c        |   8 +-
 xen/include/asm-x86/mm.h        |  16 ++
 xen/include/asm-x86/page.h      |  10 -
 xen/include/asm-x86/processor.h |   1 -
 16 files changed, 820 insertions(+), 335 deletions(-)

-- 
2.11.0



* [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-15 14:38   ` Jan Beulich
  2019-02-07 16:44 ` [PATCH RFC 02/55] x86: move some xen mm function declarations Wei Liu
                   ` (55 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Defer the call to clear_page() until we are sure the page is actually
going to become a page table.

This is a minor optimisation. No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7ec5954b03..043098d84f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4752,13 +4752,13 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
 
         if ( !pl3e )
             return NULL;
-        clear_page(pl3e);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
             l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
 
+            clear_page(pl3e);
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
             pl3e = NULL;
@@ -4787,11 +4787,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 
         if ( !pl2e )
             return NULL;
-        clear_page(pl2e);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
+            clear_page(pl2e);
             l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
             pl2e = NULL;
         }
@@ -4820,11 +4820,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
 
         if ( !pl1e )
             return NULL;
-        clear_page(pl1e);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
+            clear_page(pl1e);
             l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
             pl1e = NULL;
         }
-- 
2.11.0



* [PATCH RFC 02/55] x86: move some xen mm function declarations
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-15 14:42   ` Jan Beulich
  2019-02-07 16:44 ` [PATCH RFC 03/55] x86: introduce a new set of APIs to manage Xen page tables Wei Liu
                   ` (54 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

These declarations were put in page.h, but mm.h is a more appropriate home.

The real reason is that new functions taking mfn_t will be added shortly,
and that turns out to be rather awkward to do in page.h.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/include/asm-x86/mm.h   | 5 +++++
 xen/include/asm-x86/page.h | 5 -----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 6faa563167..0009fca08b 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -641,4 +641,9 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int nr_frames, xen_pfn_t mfn_list[],
                           unsigned int *flags);
 
+/* Allocator functions for Xen pagetables. */
+void *alloc_xen_pagetable(void);
+void free_xen_pagetable(void *v);
+l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index c1e92937c0..05a8b1efa6 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -345,11 +345,6 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 
 #ifndef __ASSEMBLY__
 
-/* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
-l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-
 /* Convert between PAT/PCD/PWT embedded in PTE flags and 3-bit cacheattr. */
 static inline unsigned int pte_flags_to_cacheattr(unsigned int flags)
 {
-- 
2.11.0



* [PATCH RFC 03/55] x86: introduce a new set of APIs to manage Xen page tables
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 02/55] x86: move some xen mm function declarations Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Wei Liu
                   ` (53 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We are going to switch to using domheap pages for page tables. A new
set of APIs is introduced to allocate, map, unmap and free pages for
page tables.

The allocation and deallocation functions work on mfn_t rather than
struct page_info, because they are required to work even before the
frame table is set up.

Implement the old functions in terms of the new ones. Other mm
functions that manipulate page tables will then be rewritten, site by
site, to use the new APIs.

Note that these new APIs still use xenheap pages underneath, and no
actual mapping or unmapping is done, so that Xen isn't broken halfway
through the series. They will be switched to domheap pages and dynamic
mappings once all usage of the old APIs has been eliminated.

No functional change intended in this patch.

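As a rough illustration of where this is heading (not part of this
patch, just a sketch assuming the existing domheap allocator and
map_domain_page() interfaces; the actual conversion at the end of the
series may differ), the helpers could eventually look like:

    mfn_t alloc_xen_pagetable_new(void)
    {
        if ( system_state != SYS_STATE_early_boot )
        {
            /* Sketch: anonymous domheap page instead of a xenheap page. */
            struct page_info *pg = alloc_domheap_page(NULL, 0);

            BUG_ON(!hardware_domain && !pg);
            return pg ? page_to_mfn(pg) : INVALID_MFN;
        }

        return alloc_boot_pages(1, 1);
    }

    void *map_xen_pagetable_new(mfn_t mfn)
    {
        return map_domain_page(mfn);
    }

    void unmap_xen_pagetable_new(void *v)
    {
        if ( v )
            unmap_domain_page(v);
    }

    void free_xen_pagetable_new(mfn_t mfn)
    {
        if ( system_state != SYS_STATE_early_boot &&
             !mfn_eq(mfn, INVALID_MFN) )
            free_domheap_page(mfn_to_page(mfn));
    }
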
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c        | 39 ++++++++++++++++++++++++++++++++++-----
 xen/include/asm-x86/mm.h | 11 +++++++++++
 2 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 043098d84f..09e755dbc5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -103,6 +103,7 @@
 #include <xen/efi.h>
 #include <xen/grant_table.h>
 #include <xen/hypercall.h>
+#include <xen/mm.h>
 #include <asm/paging.h>
 #include <asm/shadow.h>
 #include <asm/page.h>
@@ -4721,21 +4722,49 @@ int mmcfg_intercept_write(
 
 void *alloc_xen_pagetable(void)
 {
+    mfn_t mfn;
+
+    mfn = alloc_xen_pagetable_new();
+    ASSERT(!mfn_eq(mfn, INVALID_MFN));
+
+    return map_xen_pagetable_new(mfn);
+}
+
+void free_xen_pagetable(void *v)
+{
+    if ( system_state != SYS_STATE_early_boot )
+        free_xen_pagetable_new(virt_to_mfn(v));
+}
+
+mfn_t alloc_xen_pagetable_new(void)
+{
     if ( system_state != SYS_STATE_early_boot )
     {
         void *ptr = alloc_xenheap_page();
 
         BUG_ON(!hardware_domain && !ptr);
-        return ptr;
+        return virt_to_mfn(ptr);
     }
 
-    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
+    return alloc_boot_pages(1, 1);
 }
 
-void free_xen_pagetable(void *v)
+void *map_xen_pagetable_new(mfn_t mfn)
 {
-    if ( system_state != SYS_STATE_early_boot )
-        free_xenheap_page(v);
+    return mfn_to_virt(mfn_x(mfn));
+}
+
+/* v can point to an entry within a table or be NULL */
+void unmap_xen_pagetable_new(void *v)
+{
+    /* XXX still using xenheap page, no need to do anything.  */
+}
+
+/* mfn can be INVALID_MFN */
+void free_xen_pagetable_new(mfn_t mfn)
+{
+    if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
+        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 0009fca08b..4378d9f815 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -644,6 +644,17 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
 /* Allocator functions for Xen pagetables. */
 void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
+mfn_t alloc_xen_pagetable_new(void);
+void *map_xen_pagetable_new(mfn_t mfn);
+void unmap_xen_pagetable_new(void *v);
+void free_xen_pagetable_new(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
+    do {                                \
+        unmap_xen_pagetable_new((ptr)); \
+        (ptr) = NULL;                   \
+    } while (0)
+
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
 #endif /* __ASM_X86_MM_H__ */
-- 
2.11.0



* [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (2 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 03/55] x86: introduce a new set of APIs to manage Xen page tables Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-15 15:36   ` Jan Beulich
  2019-02-07 16:44 ` [PATCH RFC 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Wei Liu
                   ` (52 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The pl2e and pl1e variables are heavily (ab)used in that function. This
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon need to map and unmap page tables dynamically, at which
point the lifetime of each variable has to be tracked to avoid leaking
mappings.

Introduce l{1,2}t variables with limited scope so that the lifetime of
pointers into Xen page tables is easier to track (see the sketch below).

No functional change.

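A condensed sketch of the pattern (taken from one of the hunks below):
the pointer into a lower-level table only lives inside the block that
derives it, which gives a later map/unmap pair an obvious home.

    else
    {
        l2_pgentry_t *l2t;              /* scope limited to this block */

        l2t = l3e_to_l2e(ol3e);         /* later: map_xen_pagetable_new() */
        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
        {
            ol2e = l2t[i];
            /* ... accumulate flush flags for each present entry ... */
        }
        /* later: UNMAP_XEN_PAGETABLE_NEW(l2t); before leaving the block */
    }
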
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 75 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 09e755dbc5..7c2d569347 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4934,10 +4934,12 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl2e = l3e_to_l2e(ol3e);
+                    l2_pgentry_t *l2t;
+
+                    l2t = l3e_to_l2e(ol3e);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( !(l2e_get_flags(ol2e) & _PAGE_PRESENT) )
                             continue;
                         if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -4945,21 +4947,22 @@ int map_pages_to_xen(
                         else
                         {
                             unsigned int j;
+                            l1_pgentry_t *l1t;
 
-                            pl1e = l2e_to_l1e(ol2e);
+                            l1t = l2e_to_l1e(ol2e);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
-                                flush_flags(l1e_get_flags(pl1e[j]));
+                                flush_flags(l1e_get_flags(l1t[j]));
                         }
                     }
                     flush_area(virt, flush_flags);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
                             free_xen_pagetable(l2e_to_l1e(ol2e));
                     }
-                    free_xen_pagetable(pl2e);
+                    free_xen_pagetable(l2t);
                 }
             }
 
@@ -4975,6 +4978,7 @@ int map_pages_to_xen(
         {
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
+            l2_pgentry_t *l2t;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -4996,12 +5000,12 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = alloc_xen_pagetable();
-            if ( pl2e == NULL )
+            l2t = alloc_xen_pagetable();
+            if ( l2t == NULL )
                 return -ENOMEM;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(ol3e)));
@@ -5014,15 +5018,15 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5050,11 +5054,13 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl1e = l2e_to_l1e(ol2e);
+                    l1_pgentry_t *l1t;
+
+                    l1t = l2e_to_l1e(ol2e);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                        flush_flags(l1e_get_flags(pl1e[i]));
+                        flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(pl1e);
+                    free_xen_pagetable(l1t);
                 }
             }
 
@@ -5076,6 +5082,7 @@ int map_pages_to_xen(
             {
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
+                l1_pgentry_t *l1t;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5095,12 +5102,12 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = alloc_xen_pagetable();
-                if ( pl1e == NULL )
+                l1t = alloc_xen_pagetable();
+                if ( l1t == NULL )
                     return -ENOMEM;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
 
@@ -5112,15 +5119,15 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
 
             pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
@@ -5145,6 +5152,7 @@ int map_pages_to_xen(
                     ((1u << PAGETABLE_ORDER) - 1)) == 0)) )
             {
                 unsigned long base_mfn;
+                l1_pgentry_t *l1t;
 
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5168,11 +5176,11 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = l2e_to_l1e(ol2e);
-                base_mfn = l1e_get_pfn(*pl1e) & ~(L1_PAGETABLE_ENTRIES - 1);
-                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++, pl1e++ )
-                    if ( (l1e_get_pfn(*pl1e) != (base_mfn + i)) ||
-                         (l1e_get_flags(*pl1e) != flags) )
+                l1t = l2e_to_l1e(ol2e);
+                base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
+                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+                    if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
+                         (l1e_get_flags(l1t[i]) != flags) )
                         break;
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
@@ -5198,6 +5206,7 @@ int map_pages_to_xen(
                 ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1))) )
         {
             unsigned long base_mfn;
+            l2_pgentry_t *l2t;
 
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5215,13 +5224,13 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = l3e_to_l2e(ol3e);
-            base_mfn = l2e_get_pfn(*pl2e) & ~(L2_PAGETABLE_ENTRIES *
+            l2t = l3e_to_l2e(ol3e);
+            base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
-            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ )
-                if ( (l2e_get_pfn(*pl2e) !=
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( (l2e_get_pfn(l2t[i]) !=
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
-                     (l2e_get_flags(*pl2e) != l1f_to_lNf(flags)) )
+                     (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
             if ( i == L2_PAGETABLE_ENTRIES )
             {
-- 
2.11.0



* [PATCH RFC 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (3 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path Wei Liu
                   ` (51 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The pl2e and pl1e variables are heavily (ab)used in that function. This
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon need to map and unmap page tables dynamically, at which
point the lifetime of each variable has to be tracked to avoid leaking
mappings.

Introduce l{1,2}t variables with limited scope so that the lifetime of
pointers into Xen page tables is easier to track.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 68 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7c2d569347..4147a71c5d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5301,6 +5301,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
+            l2_pgentry_t *l2t;
+
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
                  ((e - v) >= (1UL << L3_PAGETABLE_SHIFT)) )
@@ -5316,11 +5318,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            pl2e = alloc_xen_pagetable();
-            if ( !pl2e )
+            l2t = alloc_xen_pagetable();
+            if ( !l2t )
                 return -ENOMEM;
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(*pl3e)));
@@ -5329,14 +5331,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         /*
@@ -5370,12 +5372,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
             else
             {
+                l1_pgentry_t *l1t;
+
                 /* PSE: shatter the superpage and try again. */
-                pl1e = alloc_xen_pagetable();
-                if ( !pl1e )
+                l1t = alloc_xen_pagetable();
+                if ( !l1t )
                     return -ENOMEM;
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
                 if ( locking )
@@ -5383,19 +5387,19 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
         }
         else
         {
-            l1_pgentry_t nl1e;
+            l1_pgentry_t nl1e, *l1t;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
@@ -5442,9 +5446,9 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 continue;
             }
 
-            pl1e = l2e_to_l1e(*pl2e);
+            l1t = l2e_to_l1e(*pl2e);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                if ( l1e_get_intpte(pl1e[i]) != 0 )
+                if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
             if ( i == L1_PAGETABLE_ENTRIES )
             {
@@ -5453,7 +5457,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(pl1e);
+                free_xen_pagetable(l1t);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5482,21 +5486,25 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             continue;
         }
 
-        pl2e = l3e_to_l2e(*pl3e);
-        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-            if ( l2e_get_intpte(pl2e[i]) != 0 )
-                break;
-        if ( i == L2_PAGETABLE_ENTRIES )
         {
-            /* Empty: zap the L3E and free the L2 page. */
-            l3e_write_atomic(pl3e, l3e_empty());
-            if ( locking )
+            l2_pgentry_t *l2t;
+
+            l2t = l3e_to_l2e(*pl3e);
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( l2e_get_intpte(l2t[i]) != 0 )
+                    break;
+            if ( i == L2_PAGETABLE_ENTRIES )
+            {
+                /* Empty: zap the L3E and free the L2 page. */
+                l3e_write_atomic(pl3e, l3e_empty());
+                if ( locking )
+                    spin_unlock(&map_pgdir_lock);
+                flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
+                free_xen_pagetable(l2t);
+            }
+            else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-            free_xen_pagetable(pl2e);
         }
-        else if ( locking )
-            spin_unlock(&map_pgdir_lock);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.11.0



* [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (4 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Wei Liu
                   ` (50 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will soon rewrite this function to map and unmap page tables
dynamically, so give it a single exit path where any cleanup can later
be placed (see the sketch below).

No functional change.

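In outline, the shape the function takes after this patch (simplified;
the cleanup at the "out" label only arrives with the later rewrite to
the new mapping APIs):

    int rc = -ENOMEM;

    while ( nr_mfns != 0 )
    {
        pl3e = virt_to_xen_l3e(virt);
        if ( !pl3e )
            goto out;       /* every failure funnels through one place */
        /* ... */
    }

    rc = 0;

 out:
    /* later: unmap anything still mapped, e.g. UNMAP_XEN_PAGETABLE_NEW(pl3e) */
    return rc;
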
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4147a71c5d..3ab222c8ea 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4887,9 +4887,11 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
+    int rc = -ENOMEM;
 
 #define flush_flags(oldf) do {                 \
     unsigned int o_ = (oldf);                  \
@@ -4907,10 +4909,13 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
-        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+        pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
-            return -ENOMEM;
+        {
+            ASSERT(rc == -ENOMEM);
+            goto out;
+        }
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
@@ -5002,7 +5007,10 @@ int map_pages_to_xen(
 
             l2t = alloc_xen_pagetable();
             if ( l2t == NULL )
-                return -ENOMEM;
+            {
+                ASSERT(rc == -ENOMEM);
+                goto out;
+            }
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5031,7 +5039,10 @@ int map_pages_to_xen(
 
         pl2e = virt_to_xen_l2e(virt);
         if ( !pl2e )
-            return -ENOMEM;
+        {
+            ASSERT(rc == -ENOMEM);
+            goto out;
+        }
 
         if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
                ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
@@ -5076,7 +5087,10 @@ int map_pages_to_xen(
             {
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5104,7 +5118,10 @@ int map_pages_to_xen(
 
                 l1t = alloc_xen_pagetable();
                 if ( l1t == NULL )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5250,7 +5267,10 @@ int map_pages_to_xen(
 
 #undef flush_flags
 
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-- 
2.11.0



* [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (5 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-15 15:40   ` Jan Beulich
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings Wei Liu
                   ` (49 subsequent siblings)
  56 siblings, 2 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will soon need to clean up mappings whenever an iteration of the
outermost loop ends. Add a new label and turn the relevant continues
into gotos (see the sketch below).

No functional change.

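Schematically (condensed; the condition name is a placeholder, and the
unmap at the label is what a later patch in this series puts there):

    while ( nr_mfns != 0 )
    {
        /* ... */

        if ( iteration_fully_handled )      /* placeholder condition */
            goto end_of_loop;               /* was: continue */

        /* ... */

    end_of_loop:
        /* later: UNMAP_XEN_PAGETABLE_NEW(pl3e); */
        ;
    }
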
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3ab222c8ea..dc476405f8 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4975,7 +4975,7 @@ int map_pages_to_xen(
             if ( !mfn_eq(mfn, INVALID_MFN) )
                 mfn  = mfn_add(mfn, 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
             nr_mfns -= 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
-            continue;
+            goto end_of_loop;
         }
 
         if ( (l3e_get_flags(ol3e) & _PAGE_PRESENT) &&
@@ -5002,7 +5002,7 @@ int map_pages_to_xen(
                 if ( !mfn_eq(mfn, INVALID_MFN) )
                     mfn = mfn_add(mfn, i);
                 nr_mfns -= i;
-                continue;
+                goto end_of_loop;
             }
 
             l2t = alloc_xen_pagetable();
@@ -5183,7 +5183,7 @@ int map_pages_to_xen(
                 {
                     if ( locking )
                         spin_unlock(&map_pgdir_lock);
-                    continue;
+                    goto end_of_loop;
                 }
 
                 if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -5238,7 +5238,7 @@ int map_pages_to_xen(
             {
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                continue;
+                goto end_of_loop;
             }
 
             l2t = l3e_to_l2e(ol3e);
@@ -5263,6 +5263,7 @@ int map_pages_to_xen(
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
+    end_of_loop:;
     }
 
 #undef flush_flags
-- 
2.11.0



* [PATCH RFC 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (6 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings Wei Liu
                   ` (48 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will soon need to map and unmap page tables dynamically in this
function as well, so give it a single exit path too.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index dc476405f8..73248f8670 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5298,6 +5298,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
+    int rc = -ENOMEM;
 
     /* Set of valid PTE bits which may be altered. */
 #define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
@@ -5341,7 +5342,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             /* PAGE1GB: shatter the superpage and fall through. */
             l2t = alloc_xen_pagetable();
             if ( !l2t )
-                return -ENOMEM;
+            {
+                ASSERT(rc == -ENOMEM);
+                goto out;
+            }
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5398,7 +5403,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 /* PSE: shatter the superpage and try again. */
                 l1t = alloc_xen_pagetable();
                 if ( !l1t )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5531,7 +5540,10 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     flush_area(NULL, FLUSH_TLB_GLOBAL);
 
 #undef FLAGS_MASK
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 #undef flush_area
-- 
2.11.0



* [PATCH RFC 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (7 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e Wei Liu
                   ` (47 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will soon need to clean up mappings whenever an iteration of the
outermost loop ends. Add a new label and turn the relevant continues
into gotos.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 73248f8670..a15aa80e16 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5318,7 +5318,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
             v += 1UL << L3_PAGETABLE_SHIFT;
             v &= ~((1UL << L3_PAGETABLE_SHIFT) - 1);
-            continue;
+            goto end_of_loop;
         }
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
@@ -5336,7 +5336,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
                 l3e_write_atomic(pl3e, nl3e);
                 v += 1UL << L3_PAGETABLE_SHIFT;
-                continue;
+                goto end_of_loop;
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
@@ -5380,7 +5380,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
             v += 1UL << L2_PAGETABLE_SHIFT;
             v &= ~((1UL << L2_PAGETABLE_SHIFT) - 1);
-            continue;
+            goto end_of_loop;
         }
 
         if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
@@ -5454,7 +5454,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
              * skip the empty&free check.
              */
             if ( (nf & _PAGE_PRESENT) || ((v != e) && (l1_table_offset(v) != 0)) )
-                continue;
+                goto end_of_loop;
             if ( locking )
                 spin_lock(&map_pgdir_lock);
 
@@ -5473,7 +5473,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                continue;
+                goto end_of_loop;
             }
 
             l1t = l2e_to_l1e(*pl2e);
@@ -5500,7 +5500,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          */
         if ( (nf & _PAGE_PRESENT) ||
              ((v != e) && (l2_table_offset(v) + l1_table_offset(v) != 0)) )
-            continue;
+            goto end_of_loop;
         if ( locking )
             spin_lock(&map_pgdir_lock);
 
@@ -5513,7 +5513,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         {
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            continue;
+            goto end_of_loop;
         }
 
         {
@@ -5535,6 +5535,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
+    end_of_loop:;
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.11.0



* [PATCH RFC 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (8 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e Wei Liu
                   ` (46 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will need a variable named pl2e when we rewrite virt_to_xen_l2e.
Rename the existing pl2e to l2t to better reflect its purpose. This
will make the later rewrite easier to review.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index a15aa80e16..f3a86052e2 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4812,22 +4812,22 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *pl2e = alloc_xen_pagetable();
+        l2_pgentry_t *l2t = alloc_xen_pagetable();
 
-        if ( !pl2e )
+        if ( !l2t )
             return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            clear_page(pl2e);
-            l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
-            pl2e = NULL;
+            clear_page(l2t);
+            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
+            l2t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl2e )
-            free_xen_pagetable(pl2e);
+        if ( l2t )
+            free_xen_pagetable(l2t);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-- 
2.11.0



* [PATCH RFC 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (9 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e Wei Liu
                   ` (45 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will need a variable named pl1e when we rewrite virt_to_xen_l1e.
Rename the existing pl1e to l1t to better reflect its purpose. This
will make the later rewrite easier to review.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f3a86052e2..2606b307ea 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4845,22 +4845,22 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *pl1e = alloc_xen_pagetable();
+        l1_pgentry_t *l1t = alloc_xen_pagetable();
 
-        if ( !pl1e )
+        if ( !l1t )
             return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            clear_page(pl1e);
-            l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
-            pl1e = NULL;
+            clear_page(l1t);
+            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
+            l1t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl1e )
-            free_xen_pagetable(pl1e);
+        if ( l1t )
+            free_xen_pagetable(l1t);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-- 
2.11.0



* [PATCH RFC 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (10 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e Wei Liu
                   ` (44 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will need a variable named pl3e when we rewrite virt_to_xen_l3e.
Rename the existing pl3e to l3t to better reflect its purpose. This
will make the later rewrite easier to review.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2606b307ea..c7fd2ca113 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4777,25 +4777,25 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *pl3e = alloc_xen_pagetable();
+        l3_pgentry_t *l3t = alloc_xen_pagetable();
 
-        if ( !pl3e )
+        if ( !l3t )
             return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
 
-            clear_page(pl3e);
+            clear_page(l3t);
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
-            pl3e = NULL;
+            l3t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl3e )
-            free_xen_pagetable(pl3e);
+        if ( l3t )
+            free_xen_pagetable(l3t);
     }
 
     return l4e_to_l3e(*pl4e) + l3_table_offset(v);
-- 
2.11.0



* [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (11 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-04-08 15:55     ` [Xen-devel] " Jan Beulich
  2019-02-07 16:44 ` [PATCH RFC 14/55] x86/mm: rewrite virt_to_xen_l2e Wei Liu
                   ` (43 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Rewrite the function to use the new APIs. Modify its callers to unmap
the returned pointer (see the sketch below).

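A condensed sketch of the caller pattern used from here on (matching
the hunks below):

    l3_pgentry_t *pl3e = NULL;
    int rc = -ENOMEM;

    pl3e = virt_to_xen_l3e(v);
    if ( !pl3e )
        goto out;                       /* rc already holds -ENOMEM */

    /* ... read or update *pl3e ... */
    rc = 0;

 out:
    UNMAP_XEN_PAGETABLE_NEW(pl3e);      /* tolerates NULL, clears the pointer */
    return rc;
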
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 61 +++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 48 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index c7fd2ca113..8751be6e3a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4769,45 +4769,70 @@ void free_xen_pagetable_new(mfn_t mfn)
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
 
+/*
+ * Given a virtual address, return a pointer to xen's L3 entry. Caller
+ * needs to unmap the pointer.
+ */
 static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
 {
     l4_pgentry_t *pl4e;
+    l3_pgentry_t *pl3e = NULL;
 
     pl4e = &idle_pg_table[l4_table_offset(v)];
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *l3t = alloc_xen_pagetable();
+        l3_pgentry_t *l3t;
+        mfn_t mfn;
+
+        mfn = alloc_xen_pagetable_new();
+        if ( mfn_eq(mfn, INVALID_MFN) )
+            goto out;
+
+        l3t = map_xen_pagetable_new(mfn);
 
-        if ( !l3t )
-            return NULL;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_mfn(mfn, __PAGE_HYPERVISOR);
 
             clear_page(l3t);
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
+            pl3e = l3t + l3_table_offset(v);
             l3t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
         if ( l3t )
-            free_xen_pagetable(l3t);
+        {
+            ASSERT(!pl3e);
+            ASSERT(!mfn_eq(mfn, INVALID_MFN));
+            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            free_xen_pagetable_new(mfn);
+        }
+    }
+
+    if ( !pl3e )
+    {
+        ASSERT(l4e_get_flags(*pl4e) & _PAGE_PRESENT);
+        pl3e = (l3_pgentry_t *)map_xen_pagetable_new(l4e_get_mfn(*pl4e))
+            + l3_table_offset(v);
     }
 
-    return l4e_to_l3e(*pl4e) + l3_table_offset(v);
+ out:
+    return pl3e;
 }
 
 static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 {
     l3_pgentry_t *pl3e;
+    l2_pgentry_t *pl2e = NULL;
 
     pl3e = virt_to_xen_l3e(v);
     if ( !pl3e )
-        return NULL;
+        goto out;
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
@@ -4815,7 +4840,8 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         l2_pgentry_t *l2t = alloc_xen_pagetable();
 
         if ( !l2t )
-            return NULL;
+            goto out;
+
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
@@ -4831,7 +4857,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-    return l3e_to_l2e(*pl3e) + l2_table_offset(v);
+    pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    return pl2e;
 }
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
@@ -4887,7 +4917,7 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
-    l3_pgentry_t *pl3e, ol3e;
+    l3_pgentry_t *pl3e = NULL, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
@@ -5263,7 +5293,8 @@ int map_pages_to_xen(
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
-    end_of_loop:;
+    end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
 #undef flush_flags
@@ -5271,6 +5302,7 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
 
@@ -5294,6 +5326,7 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     unsigned int  i;
@@ -5309,7 +5342,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
     while ( v < e )
     {
-        l3_pgentry_t *pl3e = virt_to_xen_l3e(v);
+        pl3e = virt_to_xen_l3e(v);
 
         if ( !pl3e || !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
@@ -5535,7 +5568,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
-    end_of_loop:;
+    end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
@@ -5544,6 +5578,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
 
-- 
2.11.0



* [PATCH RFC 14/55] x86/mm: rewrite virt_to_xen_l2e
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (12 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 15/55] x86/mm: rewrite virt_to_xen_l1e Wei Liu
                   ` (42 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Rewrite this function to use the new APIs. Modify its callers to unmap
the returned pointer.
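
As a caller-side sketch (not part of the patch; v stands for an arbitrary
hypervisor virtual address, and only helpers already introduced in this
series are used), the new contract is:

    l2_pgentry_t *pl2e = virt_to_xen_l2e(v);

    if ( pl2e )
    {
        /* ... read or update *pl2e ... */
        UNMAP_XEN_PAGETABLE_NEW(pl2e);   /* release the transient mapping */
    }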

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 46 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8751be6e3a..91bb53b8b9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4825,6 +4825,10 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     return pl3e;
 }
 
+/*
+ * Given a virtual address, return a pointer to xen's L2 entry. Caller
+ * needs to unmap the pointer.
+ */
 static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 {
     l3_pgentry_t *pl3e;
@@ -4837,27 +4841,44 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *l2t = alloc_xen_pagetable();
+        l2_pgentry_t *l2t;
+        mfn_t mfn;
 
-        if ( !l2t )
+        mfn = alloc_xen_pagetable_new();
+        if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
+        l2t = map_xen_pagetable_new(mfn);
+
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
             clear_page(l2t);
-            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
+            l3e_write(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
+            pl2e = l2t + l2_table_offset(v);
             l2t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
+
         if ( l2t )
-            free_xen_pagetable(l2t);
+        {
+            ASSERT(!pl2e);
+            ASSERT(!mfn_eq(mfn, INVALID_MFN));
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            free_xen_pagetable_new(mfn);
+        }
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-    pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+
+    if ( !pl2e )
+    {
+        ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
+        pl2e = (l2_pgentry_t *)map_xen_pagetable_new(l3e_get_mfn(*pl3e))
+            + l2_table_offset(v);
+    }
 
  out:
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
@@ -4867,10 +4888,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
 {
     l2_pgentry_t *pl2e;
+    l1_pgentry_t *pl1e = NULL;
 
     pl2e = virt_to_xen_l2e(v);
     if ( !pl2e )
-        return NULL;
+        goto out;
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
@@ -4878,7 +4900,7 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         l1_pgentry_t *l1t = alloc_xen_pagetable();
 
         if ( !l1t )
-            return NULL;
+            goto out;
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
@@ -4894,7 +4916,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-    return l2e_to_l1e(*pl2e) + l1_table_offset(v);
+    pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
+    return pl1e;
 }
 
 /* Convert to from superpage-mapping flags for map_pages_to_xen(). */
@@ -4918,7 +4944,7 @@ int map_pages_to_xen(
 {
     bool locking = system_state > SYS_STATE_boot;
     l3_pgentry_t *pl3e = NULL, ol3e;
-    l2_pgentry_t *pl2e, ol2e;
+    l2_pgentry_t *pl2e = NULL, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
     int rc = -ENOMEM;
@@ -5294,6 +5320,7 @@ int map_pages_to_xen(
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl2e);
         UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
@@ -5302,6 +5329,7 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
-- 
2.11.0



* [PATCH RFC 15/55] x86/mm: rewrite virt_to_xen_l1e
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (13 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 14/55] x86/mm: rewrite virt_to_xen_l2e Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen Wei Liu
                   ` (41 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Rewrite this function to use the new APIs. Modify its callers to unmap
the returned pointer.
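
A sketch of the pattern the domain_page.c hunk below follows: a caller
that only needs the entry's value copies it out before unmapping (names
as in the hunk; this is illustration, not part of the change itself):

    l1_pgentry_t l1e;
    l1_pgentry_t *pl1e = virt_to_xen_l1e(va);

    BUG_ON(!pl1e);
    l1e = *pl1e;                     /* take a copy of the entry ... */
    UNMAP_XEN_PAGETABLE_NEW(pl1e);   /* ... then drop the mapping    */

    /* Only the copy is used from here on, e.g. l1e_get_mfn(l1e). */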

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain_page.c | 10 ++++++----
 xen/arch/x86/mm.c          | 30 +++++++++++++++++++++++++-----
 2 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 4a07cfb18e..24083e9a86 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -333,21 +333,23 @@ void unmap_domain_page_global(const void *ptr)
 mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
-    const l1_pgentry_t *pl1e;
+    l1_pgentry_t l1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
         return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
     {
-        pl1e = virt_to_xen_l1e(va);
+        l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
         BUG_ON(!pl1e);
+        l1e = *pl1e;
+        UNMAP_XEN_PAGETABLE_NEW(pl1e);
     }
     else
     {
         ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
-        pl1e = &__linear_l1_table[l1_linear_offset(va)];
+        l1e = __linear_l1_table[l1_linear_offset(va)];
     }
 
-    return l1e_get_mfn(*pl1e);
+    return l1e_get_mfn(l1e);
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 91bb53b8b9..356d561a06 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4897,26 +4897,44 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *l1t = alloc_xen_pagetable();
+        l1_pgentry_t *l1t;
+        mfn_t mfn;
 
-        if ( !l1t )
+        mfn = alloc_xen_pagetable_new();
+        if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
+
+        l1t = map_xen_pagetable_new(mfn);
+
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
             clear_page(l1t);
-            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
+            l2e_write(pl2e, l2e_from_mfn(mfn, __PAGE_HYPERVISOR));
+            pl1e = l1t + l1_table_offset(v);
             l1t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
+
         if ( l1t )
-            free_xen_pagetable(l1t);
+        {
+            ASSERT(!pl1e);
+            ASSERT(!mfn_eq(mfn, INVALID_MFN));
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            free_xen_pagetable_new(mfn);
+        }
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-    pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+
+    if ( !pl1e )
+    {
+        ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
+        pl1e = (l1_pgentry_t *)map_xen_pagetable_new(l2e_get_mfn(*pl2e))
+            + l1_table_offset(v);
+    }
 
  out:
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
@@ -5320,6 +5338,7 @@ int map_pages_to_xen(
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl1e);
         UNMAP_XEN_PAGETABLE_NEW(pl2e);
         UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
@@ -5329,6 +5348,7 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
-- 
2.11.0



* [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (14 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 15/55] x86/mm: rewrite virt_to_xen_l1e Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-08 17:58   ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 17/55] x86/mm: drop lXe_to_lYe invocations " Wei Liu
                   ` (40 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Page tables allocated in this function now need to be mapped before use
and unmapped afterwards.
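
In outline, the allocation pattern this patch switches to is the
following (a sketch only; the race and error handling is exactly as in
the hunks below, and pl3e/out are the locals of map_pages_to_xen()):

    mfn_t mfn = alloc_xen_pagetable_new();
    l2_pgentry_t *l2t;

    if ( mfn_eq(mfn, INVALID_MFN) )
        goto out;                             /* rc is -ENOMEM */

    l2t = map_xen_pagetable_new(mfn);         /* transient mapping */
    /* ... populate l2t[] from the old superpage entry ... */
    l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
    UNMAP_XEN_PAGETABLE_NEW(l2t);             /* the table itself stays */

If the PTE changed under our feet before the write, the new table is
instead unmapped and handed back with free_xen_pagetable_new(mfn).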

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 356d561a06..c4cb6fbb60 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5058,6 +5058,7 @@ int map_pages_to_xen(
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
             l2_pgentry_t *l2t;
+            mfn_t mfn;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5079,13 +5080,15 @@ int map_pages_to_xen(
                 goto end_of_loop;
             }
 
-            l2t = alloc_xen_pagetable();
-            if ( l2t == NULL )
+            mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
+            l2t = map_xen_pagetable_new(mfn);
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
@@ -5100,15 +5103,18 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
+                l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
                 l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
             if ( l2t )
-                free_xen_pagetable(l2t);
+            {
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                free_xen_pagetable_new(mfn);
+            }
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5171,6 +5177,7 @@ int map_pages_to_xen(
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
                 l1_pgentry_t *l1t;
+                mfn_t mfn;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5190,13 +5197,15 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = alloc_xen_pagetable();
-                if ( l1t == NULL )
+                mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
+                l1t = map_xen_pagetable_new(mfn);
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5210,15 +5219,19 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
                                                         __PAGE_HYPERVISOR));
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
                     l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
                 if ( l1t )
-                    free_xen_pagetable(l1t);
+                {
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    free_xen_pagetable_new(mfn);
+                }
             }
 
             pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
-- 
2.11.0



* [PATCH RFC 17/55] x86/mm: drop lXe_to_lYe invocations in map_pages_to_xen
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (15 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings Wei Liu
                   ` (39 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Map and unmap page tables where necessary.
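
The substitution made throughout is sketched below: instead of following
the directmap alias obtained from lXe_to_lYe(), the table is mapped by
its MFN for the duration of the walk (i and the flush_flags() macro are
locals of map_pages_to_xen()):

    mfn_t l1t_mfn = l2e_get_mfn(ol2e);
    l1_pgentry_t *l1t = map_xen_pagetable_new(l1t_mfn);

    for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
        flush_flags(l1e_get_flags(l1t[i]));

    UNMAP_XEN_PAGETABLE_NEW(l1t);
    free_xen_pagetable_new(l1t_mfn);   /* only where the table is torn down */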

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 40 +++++++++++++++++++++++++++++-----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index c4cb6fbb60..1ea2974c1f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5014,8 +5014,10 @@ int map_pages_to_xen(
                 else
                 {
                     l2_pgentry_t *l2t;
+                    mfn_t l2t_mfn = l3e_get_mfn(ol3e);
+
+                    l2t = map_xen_pagetable_new(l2t_mfn);
 
-                    l2t = l3e_to_l2e(ol3e);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
                         ol2e = l2t[i];
@@ -5027,10 +5029,12 @@ int map_pages_to_xen(
                         {
                             unsigned int j;
                             l1_pgentry_t *l1t;
+                            mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                            l1t = l2e_to_l1e(ol2e);
+                            l1t = map_xen_pagetable_new(l1t_mfn);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
                                 flush_flags(l1e_get_flags(l1t[j]));
+                            UNMAP_XEN_PAGETABLE_NEW(l1t);
                         }
                     }
                     flush_area(virt, flush_flags);
@@ -5039,9 +5043,9 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable(l2e_to_l1e(ol2e));
+                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
                     }
-                    free_xen_pagetable(l2t);
+                    free_xen_pagetable_new(l2t_mfn);
                 }
             }
 
@@ -5146,12 +5150,14 @@ int map_pages_to_xen(
                 else
                 {
                     l1_pgentry_t *l1t;
+                    mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                    l1t = l2e_to_l1e(ol2e);
+                    l1t = map_xen_pagetable_new(l1t_mfn);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(l1t);
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    free_xen_pagetable_new(l1t_mfn);
                 }
             }
 
@@ -5165,12 +5171,14 @@ int map_pages_to_xen(
             /* Normal page mapping. */
             if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
             {
+                /* XXX This forces page table to be populated */
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
+                UNMAP_XEN_PAGETABLE_NEW(pl1e);
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5234,9 +5242,11 @@ int map_pages_to_xen(
                 }
             }
 
-            pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
+            pl1e  = map_xen_pagetable_new(l2e_get_mfn((*pl2e)));
+            pl1e += l1_table_offset(virt);
             ol1e  = *pl1e;
             l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
+            UNMAP_XEN_PAGETABLE_NEW(pl1e);
             if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
             {
                 unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0);
@@ -5257,6 +5267,7 @@ int map_pages_to_xen(
             {
                 unsigned long base_mfn;
                 l1_pgentry_t *l1t;
+                mfn_t l1t_mfn;
 
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5280,12 +5291,15 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = l2e_to_l1e(ol2e);
+                l1t_mfn = l2e_get_mfn(ol2e);
+                l1t = map_xen_pagetable_new(l1t_mfn);
+
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
+                UNMAP_XEN_PAGETABLE_NEW(l1t);
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
                     l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
@@ -5295,7 +5309,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable(l2e_to_l1e(ol2e));
+                    free_xen_pagetable_new(l1t_mfn);
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5311,6 +5325,7 @@ int map_pages_to_xen(
         {
             unsigned long base_mfn;
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
 
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5328,7 +5343,9 @@ int map_pages_to_xen(
                 goto end_of_loop;
             }
 
-            l2t = l3e_to_l2e(ol3e);
+            l2t_mfn = l3e_get_mfn(ol3e);
+            l2t = map_xen_pagetable_new(l2t_mfn);
+
             base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
@@ -5336,6 +5353,7 @@ int map_pages_to_xen(
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
                      (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
@@ -5345,7 +5363,7 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable(l3e_to_l2e(ol3e));
+                free_xen_pagetable_new(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-- 
2.11.0



* [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (16 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 17/55] x86/mm: drop lXe_to_lYe invocations " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings Wei Liu
                   ` (38 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Page tables allocated in this function now need to be mapped before use
and unmapped afterwards.
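
The only subtlety, sketched here, is the cleanup when another CPU has
already shattered the superpage: the freshly allocated table was never
installed, so it is both unmapped and freed (exactly as in the hunks
below):

    if ( l2t )                         /* our table was not installed */
    {
        UNMAP_XEN_PAGETABLE_NEW(l2t);  /* drop the transient mapping  */
        free_xen_pagetable_new(mfn);   /* and return the page itself  */
    }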

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 1ea2974c1f..18c7b43705 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5436,6 +5436,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
             l2_pgentry_t *l2t;
+            mfn_t mfn;
 
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
@@ -5452,13 +5453,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            l2t = alloc_xen_pagetable();
-            if ( !l2t )
+            mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
+            l2t = map_xen_pagetable_new(mfn);
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5469,14 +5472,17 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
+                l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
                 l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             if ( l2t )
-                free_xen_pagetable(l2t);
+            {
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                free_xen_pagetable_new(mfn);
+            }
         }
 
         /*
@@ -5511,15 +5517,18 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else
             {
                 l1_pgentry_t *l1t;
+                mfn_t mfn;
 
                 /* PSE: shatter the superpage and try again. */
-                l1t = alloc_xen_pagetable();
-                if ( !l1t )
+                mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
+                l1t = map_xen_pagetable_new(mfn);
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5529,14 +5538,18 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
                                                         __PAGE_HYPERVISOR));
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
                     l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 if ( l1t )
-                    free_xen_pagetable(l1t);
+                {
+                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    free_xen_pagetable_new(mfn);
+                }
             }
         }
         else
-- 
2.11.0



* [PATCH RFC 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (17 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 20/55] x86/mm: switch to new APIs in arch_init_memory Wei Liu
                   ` (37 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 18c7b43705..ddd99ef0f2 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5406,8 +5406,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
     l3_pgentry_t *pl3e = NULL;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
+    l2_pgentry_t *pl2e = NULL;
     unsigned int  i;
     unsigned long v = s;
     int rc = -ENOMEM;
@@ -5489,7 +5488,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          * The L3 entry has been verified to be present, and we've dealt with
          * 1G pages as well, so the L2 table cannot require allocation.
          */
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e += l2_table_offset(v);
 
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
@@ -5554,14 +5554,16 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         }
         else
         {
-            l1_pgentry_t nl1e, *l1t;
+            l1_pgentry_t nl1e, *l1t, *pl1e;
+            mfn_t l1t_mfn;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
              * present, and we've dealt with 2M pages as well, so the L1 table
              * cannot require allocation.
              */
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e += l1_table_offset(v);
 
             /* Confirm the caller isn't trying to create new mappings. */
             if ( !(l1e_get_flags(*pl1e) & _PAGE_PRESENT) )
@@ -5572,6 +5574,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
+            UNMAP_XEN_PAGETABLE_NEW(pl1e);
             v += PAGE_SIZE;
 
             /*
@@ -5601,10 +5604,12 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 goto end_of_loop;
             }
 
-            l1t = l2e_to_l1e(*pl2e);
+            l1t_mfn = l2e_get_mfn(*pl2e);
+            l1t = map_xen_pagetable_new(l1t_mfn);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                 if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
             if ( i == L1_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L2E and free the L1 page. */
@@ -5612,7 +5617,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l1t);
+                free_xen_pagetable_new(l1t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5643,11 +5648,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         {
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
 
-            l2t = l3e_to_l2e(*pl3e);
+            l2t_mfn = l3e_get_mfn(*pl3e);
+            l2t = map_xen_pagetable_new(l2t_mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 if ( l2e_get_intpte(l2t[i]) != 0 )
                     break;
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L3E and free the L2 page. */
@@ -5655,12 +5663,13 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l2t);
+                free_xen_pagetable_new(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
+        UNMAP_XEN_PAGETABLE_NEW(pl2e);
         UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
@@ -5670,6 +5679,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
-- 
2.11.0



* [PATCH RFC 20/55] x86/mm: switch to new APIs in arch_init_memory
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (18 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 21/55] x86_64/mm: introduce pl2e in paging_init Wei Liu
                   ` (36 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index ddd99ef0f2..9e115ef0b8 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -366,19 +366,22 @@ void __init arch_init_memory(void)
             ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS);
             if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) )
             {
-                l3_pgentry_t *l3tab = alloc_xen_pagetable();
+                mfn_t l3tab_mfn = alloc_xen_pagetable_new();
 
-                if ( l3tab )
+                if ( !mfn_eq(l3tab_mfn, INVALID_MFN) )
                 {
-                    const l3_pgentry_t *l3idle =
-                        l4e_to_l3e(idle_pg_table[l4_table_offset(split_va)]);
+                    l3_pgentry_t *l3idle =
+                        map_xen_pagetable_new(
+                            l4e_get_mfn(idle_pg_table[l4_table_offset(split_va)]));
+                    l3_pgentry_t *l3tab = map_xen_pagetable_new(l3tab_mfn);
 
                     for ( i = 0; i < l3_table_offset(split_va); ++i )
                         l3tab[i] = l3idle[i];
                     for ( ; i < L3_PAGETABLE_ENTRIES; ++i )
                         l3tab[i] = l3e_empty();
-                    split_l4e = l4e_from_mfn(virt_to_mfn(l3tab),
-                                             __PAGE_HYPERVISOR_RW);
+                    split_l4e = l4e_from_mfn(l3tab_mfn, __PAGE_HYPERVISOR_RW);
+                    UNMAP_XEN_PAGETABLE_NEW(l3idle);
+                    UNMAP_XEN_PAGETABLE_NEW(l3tab);
                 }
                 else
                     ++root_pgt_pv_xen_slots;
-- 
2.11.0



* [PATCH RFC 21/55] x86_64/mm: introduce pl2e in paging_init
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (19 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 20/55] x86/mm: switch to new APIs in arch_init_memory Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 22/55] x86_64/mm: switch to new APIs " Wei Liu
                   ` (35 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Introduce pl2e as the iteration cursor so that l2_ro_mpt can keep
pointing at the start of the page table itself.

No functional change.
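
Sketching the effect (names as in the hunk below): l2_ro_mpt always
holds the base of the current L2 table, which a later patch needs in
order to unmap it, while pl2e is the write cursor.

    pl2e = l2_ro_mpt;                  /* base of the L2 table         */
    pl2e += l2_table_offset(va);       /* pl2e is the iteration cursor */

    if ( l1_pg )
        l2e_write(pl2e, l2e_from_page(l1_pg,
                                      _PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
    pl2e++;                            /* only the cursor advances */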

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d8f558bc3a..83d62674c0 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -497,7 +497,7 @@ void __init paging_init(void)
     unsigned long i, mpt_size, va;
     unsigned int n, memflags;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
     struct page_info *l1_pg;
 
     /*
@@ -547,7 +547,7 @@ void __init paging_init(void)
             (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)));
 
         if ( cpu_has_page1gb &&
-             !((unsigned long)l2_ro_mpt & ~PAGE_MASK) &&
+             !((unsigned long)pl2e & ~PAGE_MASK) &&
              (mpt_size >> L3_PAGETABLE_SHIFT) > (i >> PAGETABLE_ORDER) )
         {
             unsigned int k, holes;
@@ -606,7 +606,7 @@ void __init paging_init(void)
             memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)),
                    0xFF, 1UL << L2_PAGETABLE_SHIFT);
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
+        if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
             if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
                 goto nomem;
@@ -614,13 +614,14 @@ void __init paging_init(void)
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_paddr(__pa(l2_ro_mpt),
                                      __PAGE_HYPERVISOR_RO | _PAGE_USER));
+            pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
         /* NB. Cannot be GLOBAL: guest user mode should not see it. */
         if ( l1_pg )
-            l2e_write(l2_ro_mpt, l2e_from_page(
+            l2e_write(pl2e, l2e_from_page(
                 l1_pg, /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
-        l2_ro_mpt++;
+        pl2e++;
     }
 #undef CNT
 #undef MFN
@@ -636,7 +637,8 @@ void __init paging_init(void)
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
               l3e_from_paddr(__pa(l2_ro_mpt), __PAGE_HYPERVISOR_RO));
-    l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
+    pl2e = l2_ro_mpt;
+    pl2e += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
@@ -649,7 +651,7 @@ void __init paging_init(void)
              sizeof(*compat_machine_to_phys_mapping))
     BUILD_BUG_ON((sizeof(*frame_table) & ~sizeof(*frame_table)) % \
                  sizeof(*compat_machine_to_phys_mapping));
-    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, l2_ro_mpt++ )
+    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, pl2e++ )
     {
         memflags = MEMF_node(phys_to_nid(i <<
             (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)));
@@ -671,7 +673,7 @@ void __init paging_init(void)
                0x55,
                1UL << L2_PAGETABLE_SHIFT);
         /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */
-        l2e_write(l2_ro_mpt, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
+        l2e_write(pl2e, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
     }
 #undef CNT
 #undef MFN
-- 
2.11.0



* [PATCH RFC 22/55] x86_64/mm: switch to new APIs in paging_init
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (20 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 21/55] x86_64/mm: introduce pl2e in paging_init Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Wei Liu
                   ` (34 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 48 ++++++++++++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 83d62674c0..02919481e4 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -496,9 +496,10 @@ void __init paging_init(void)
 {
     unsigned long i, mpt_size, va;
     unsigned int n, memflags;
-    l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
+    l3_pgentry_t *l3_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     struct page_info *l1_pg;
+    mfn_t l3_ro_mpt_mfn, l2_ro_mpt_mfn;
 
     /*
      * We setup the L3s for 1:1 mapping if host support memory hotplug
@@ -511,22 +512,29 @@ void __init paging_init(void)
         if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
               _PAGE_PRESENT) )
         {
-            l3_pgentry_t *pl3t = alloc_xen_pagetable();
+            l3_pgentry_t *pl3t;
+            mfn_t mfn;
 
-            if ( !pl3t )
+            mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(mfn, INVALID_MFN) )
                 goto nomem;
+
+            pl3t = map_xen_pagetable_new(mfn);
             clear_page(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
-                      l4e_from_paddr(__pa(pl3t), __PAGE_HYPERVISOR_RW));
+                      l4e_from_mfn(mfn, __PAGE_HYPERVISOR_RW));
+            UNMAP_XEN_PAGETABLE_NEW(pl3t);
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    if ( (l3_ro_mpt = alloc_xen_pagetable()) == NULL )
+    l3_ro_mpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(l3_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
+    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
     clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
-              l4e_from_paddr(__pa(l3_ro_mpt), __PAGE_HYPERVISOR_RO | _PAGE_USER));
+              l4e_from_mfn(l3_ro_mpt_mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
 
     /*
      * Allocate and map the machine-to-phys table.
@@ -608,12 +616,21 @@ void __init paging_init(void)
         }
         if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
-            if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+            /*
+             * Unmap l2_ro_mpt, which could've been mapped in previous
+             * iteration.
+             */
+            unmap_xen_pagetable_new(l2_ro_mpt);
+
+            l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 goto nomem;
+
+            l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
             clear_page(l2_ro_mpt);
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                      l3e_from_paddr(__pa(l2_ro_mpt),
-                                     __PAGE_HYPERVISOR_RO | _PAGE_USER));
+                      l3e_from_mfn(l2_ro_mpt_mfn,
+                                   __PAGE_HYPERVISOR_RO | _PAGE_USER));
             pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
@@ -625,18 +642,23 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
                  l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
     l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
         HIRO_COMPAT_MPT_VIRT_START)]);
-    if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+
+    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
+    l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
-              l3e_from_paddr(__pa(l2_ro_mpt), __PAGE_HYPERVISOR_RO));
+              l3e_from_mfn(l2_ro_mpt_mfn, __PAGE_HYPERVISOR_RO));
     pl2e = l2_ro_mpt;
     pl2e += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
     /* Allocate and map the compatibility mode machine-to-phys table. */
@@ -678,6 +700,8 @@ void __init paging_init(void)
 #undef CNT
 #undef MFN
 
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+
     machine_to_phys_mapping_valid = 1;
 
     /* Set up linear page table mapping. */
-- 
2.11.0



* [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (21 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 22/55] x86_64/mm: switch to new APIs " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table Wei Liu
                   ` (33 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 02919481e4..094c609c8c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -648,8 +648,10 @@ void __init paging_init(void)
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
                  l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
-        HIRO_COMPAT_MPT_VIRT_START)]);
+
+    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
+                                        HIRO_COMPAT_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
 
     l2_ro_mpt_mfn = alloc_xen_pagetable_new();
     if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
@@ -701,6 +703,7 @@ void __init paging_init(void)
 #undef MFN
 
     UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
 
     machine_to_phys_mapping_valid = 1;
 
-- 
2.11.0



* [PATCH RFC 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (22 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 25/55] x86_64/mm: introduce pl2e " Wei Liu
                   ` (32 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 094c609c8c..55fa338d71 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -480,8 +480,6 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
                    /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
-            l2_ro_mpt = NULL;
         i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));
     }
 #undef CNT
-- 
2.11.0



* [PATCH RFC 25/55] x86_64/mm: introduce pl2e in setup_m2p_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (23 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 26/55] x86_64/mm: switch to new APIs " Wei Liu
                   ` (31 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 55fa338d71..d3e2398b6c 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -397,7 +397,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 {
     unsigned long i, va, smap, emap;
     unsigned int n;
-    l2_pgentry_t *l2_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
     l3_pgentry_t *l3_ro_mpt = NULL;
     int ret = 0;
 
@@ -458,7 +458,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                   _PAGE_PSE));
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
               _PAGE_PRESENT )
-                l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
+                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
                   l2_table_offset(va);
             else
             {
@@ -473,11 +473,12 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                           l3e_from_paddr(__pa(l2_ro_mpt),
                                          __PAGE_HYPERVISOR_RO | _PAGE_USER));
-                l2_ro_mpt += l2_table_offset(va);
+                pl2e = l2_ro_mpt;
+                pl2e += l2_table_offset(va);
             }
 
             /* NB. Cannot be GLOBAL: guest user mode should not see it. */
-            l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
+            l2e_write(pl2e, l2e_from_mfn(mfn,
                    /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
         }
         i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));
-- 
2.11.0



* [PATCH RFC 26/55] x86_64/mm: switch to new APIs in setup_m2p_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (24 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 25/55] x86_64/mm: introduce pl2e " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Wei Liu
                   ` (30 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d3e2398b6c..0b85961105 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -397,9 +397,10 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 {
     unsigned long i, va, smap, emap;
     unsigned int n;
-    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     l3_pgentry_t *l3_ro_mpt = NULL;
     int ret = 0;
+    mfn_t l2_ro_mpt_mfn;
 
     ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
             & _PAGE_PRESENT);
@@ -462,17 +463,19 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                   l2_table_offset(va);
             else
             {
-                l2_ro_mpt = alloc_xen_pagetable();
-                if ( !l2_ro_mpt )
+                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
+                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
                 clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                          l3e_from_paddr(__pa(l2_ro_mpt),
-                                         __PAGE_HYPERVISOR_RO | _PAGE_USER));
+                          l3e_from_mfn(l2_ro_mpt_mfn,
+                                       __PAGE_HYPERVISOR_RO | _PAGE_USER));
                 pl2e = l2_ro_mpt;
                 pl2e += l2_table_offset(va);
             }
@@ -488,6 +491,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ret = setup_compat_m2p_table(info);
 error:
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
     return ret;
 }
 
-- 
2.11.0



* [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (25 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 26/55] x86_64/mm: switch to new APIs " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 28/55] efi: use new page table APIs in copy_mapping Wei Liu
                   ` (29 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 0b85961105..216f97c95f 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -400,11 +400,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
     l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     l3_pgentry_t *l3_ro_mpt = NULL;
     int ret = 0;
-    mfn_t l2_ro_mpt_mfn;
+    mfn_t l2_ro_mpt_mfn, l3_ro_mpt_mfn;
 
     ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
             & _PAGE_PRESENT);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
+                                        RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -459,8 +461,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
                   _PAGE_PSE));
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
               _PAGE_PRESENT )
-                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
-                  l2_table_offset(va);
+            {
+                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
+                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                ASSERT(l2_ro_mpt);
+                pl2e = l2_ro_mpt + l2_table_offset(va);
+            }
             else
             {
                 UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
@@ -492,6 +499,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
     ret = setup_compat_m2p_table(info);
 error:
     UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
     return ret;
 }
 
-- 
2.11.0



* [PATCH RFC 28/55] efi: use new page table APIs in copy_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (26 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 29/55] efi: avoid using global variable " Wei Liu
                   ` (28 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Wei Liu, Jan Beulich

Upon inspection, ARM doesn't have alloc_xen_pagetable, so this function
is x86-only, which means it is safe for us to change.
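
As an illustration only (not part of this patch), the allocate/map/unmap
pattern the new APIs imply looks roughly like this, assuming the
signatures introduced earlier in the series; example_install_l3() is a
made-up helper name:

static int example_install_l3(l4_pgentry_t *l4t, unsigned long va)
{
    /* The new allocator hands back an MFN, not a linear address. */
    mfn_t l3t_mfn = alloc_xen_pagetable_new();
    l3_pgentry_t *l3t;

    if ( mfn_eq(l3t_mfn, INVALID_MFN) )
        return -ENOMEM;

    /* Map transiently, initialise, link by MFN, then unmap again. */
    l3t = map_xen_pagetable_new(l3t_mfn);
    clear_page(l3t);
    l4e_write(&l4t[l4_table_offset(va)],
              l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR));
    UNMAP_XEN_PAGETABLE_NEW(l3t);

    return 0;
}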

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
XXX test this in GitLab CI to be sure.
---
 xen/common/efi/boot.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 79193784ff..62b5944e61 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1440,16 +1440,22 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
             continue;
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            l3dst = alloc_xen_pagetable();
-            BUG_ON(!l3dst);
+            mfn_t l3t_mfn;
+
+            l3t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
+            l3dst = map_xen_pagetable_new(l3t_mfn);
             clear_page(l3dst);
             efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
-                l4e_from_paddr(virt_to_maddr(l3dst), __PAGE_HYPERVISOR);
+                l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            l3dst = l4e_to_l3e(l4e);
-        l3src = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+            l3dst = map_xen_pagetable_new(l4e_get_mfn(l4e));
+        l3src = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
+        UNMAP_XEN_PAGETABLE_NEW(l3src);
+        UNMAP_XEN_PAGETABLE_NEW(l3dst);
     }
 }
 
-- 
2.11.0



* [PATCH RFC 29/55] efi: avoid using global variable in copy_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (27 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 28/55] efi: use new page table APIs in copy_mapping Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 30/55] efi: use new page table APIs in efi_init_memory Wei Liu
                   ` (27 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Wei Liu, Jan Beulich

We will soon switch efi_l4_pgtable to use an ephemeral mapping. Make
copy_mapping take a pointer to the mapping instead of using the global
variable.

No functional change intended.
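
For illustration, a sketch of the intended calling convention once the
L4 table is referenced by MFN (efi_l4_mfn only appears in a later patch
of this series, so this is not code from this patch):

    l4_pgentry_t *l4t = map_xen_pagetable_new(efi_l4_mfn);

    /* The caller owns the transient mapping and passes it down. */
    copy_mapping(l4t, 0, max_page, ram_range_valid);
    UNMAP_XEN_PAGETABLE_NEW(l4t);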

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/efi/boot.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 62b5944e61..64a287690a 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1423,7 +1423,8 @@ static int __init parse_efi_param(const char *s)
 custom_param("efi", parse_efi_param);
 
 #ifndef USE_SET_VIRTUAL_ADDRESS_MAP
-static __init void copy_mapping(unsigned long mfn, unsigned long end,
+static __init void copy_mapping(l4_pgentry_t *l4,
+                                unsigned long mfn, unsigned long end,
                                 bool (*is_valid)(unsigned long smfn,
                                                  unsigned long emfn))
 {
@@ -1431,7 +1432,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
 
     for ( ; mfn < end; mfn = next )
     {
-        l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
+        l4_pgentry_t l4e = l4[l4_table_offset(mfn << PAGE_SHIFT)];
         l3_pgentry_t *l3src, *l3dst;
         unsigned long va = (unsigned long)mfn_to_virt(mfn);
 
@@ -1446,7 +1447,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
             l3dst = map_xen_pagetable_new(l3t_mfn);
             clear_page(l3dst);
-            efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
+            l4[l4_table_offset(mfn << PAGE_SHIFT)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
@@ -1606,7 +1607,7 @@ void __init efi_init_memory(void)
     BUG_ON(!efi_l4_pgtable);
     clear_page(efi_l4_pgtable);
 
-    copy_mapping(0, max_page, ram_range_valid);
+    copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
 
     /* Insert non-RAM runtime mappings inside the direct map. */
     for ( i = 0; i < efi_memmap_size; i += efi_mdesc_size )
@@ -1619,7 +1620,7 @@ void __init efi_init_memory(void)
                 desc->Type == EfiBootServicesData))) &&
              desc->VirtualStart != INVALID_VIRTUAL_ADDRESS &&
              desc->VirtualStart != desc->PhysicalStart )
-            copy_mapping(PFN_DOWN(desc->PhysicalStart),
+            copy_mapping(efi_l4_pgtable, PFN_DOWN(desc->PhysicalStart),
                          PFN_UP(desc->PhysicalStart +
                                 (desc->NumberOfPages << EFI_PAGE_SHIFT)),
                          rt_range_valid);
-- 
2.11.0



* [PATCH RFC 30/55] efi: use new page table APIs in efi_init_memory
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (28 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 29/55] efi: avoid using global variable " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 31/55] efi: add emacs block to boot.c Wei Liu
                   ` (26 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Wei Liu, Jan Beulich

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/efi/boot.c | 39 +++++++++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 64a287690a..1d1420f02c 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1637,39 +1637,50 @@ void __init efi_init_memory(void)
 
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            pl3e = alloc_xen_pagetable();
-            BUG_ON(!pl3e);
+            mfn_t l3t_mfn;
+
+            l3t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
+            pl3e = map_xen_pagetable_new(l3t_mfn);
             clear_page(pl3e);
             efi_l4_pgtable[l4_table_offset(addr)] =
-                l4e_from_paddr(virt_to_maddr(pl3e), __PAGE_HYPERVISOR);
+                l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = l4e_to_l3e(l4e);
+            pl3e = map_xen_pagetable_new(l4e_get_mfn(l4e));
         pl3e += l3_table_offset(addr);
+
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            pl2e = alloc_xen_pagetable();
-            BUG_ON(!pl2e);
+            mfn_t l2t_mfn;
+
+            l2t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l2t_mfn, INVALID_MFN));
+            pl2e = map_xen_pagetable_new(l2t_mfn);
             clear_page(pl2e);
-            *pl3e = l3e_from_paddr(virt_to_maddr(pl2e), __PAGE_HYPERVISOR);
+            *pl3e = l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = l3e_to_l2e(*pl3e);
+            pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
         }
         pl2e += l2_table_offset(addr);
+
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l1t = alloc_xen_pagetable();
-            BUG_ON(!l1t);
+            mfn_t l1t_mfn;
+
+            l1t_mfn = alloc_xen_pagetable_new();
+            BUG_ON(mfn_eq(l1t_mfn, INVALID_MFN));
+            l1t = map_xen_pagetable_new(l1t_mfn);
             clear_page(l1t);
-            *pl2e = l2e_from_paddr(virt_to_maddr(l1t), __PAGE_HYPERVISOR);
+            *pl2e = l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = l2e_to_l1e(*pl2e);
+            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
         }
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
@@ -1681,6 +1692,10 @@ void __init efi_init_memory(void)
             extra_head = extra->next;
             xfree(extra);
         }
+
+        UNMAP_XEN_PAGETABLE_NEW(l1t);
+        UNMAP_XEN_PAGETABLE_NEW(pl2e);
+        UNMAP_XEN_PAGETABLE_NEW(pl3e);
     }
 
     /* Insert Xen mappings. */
-- 
2.11.0



* [PATCH RFC 31/55] efi: add emacs block to boot.c
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (29 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 30/55] efi: use new page table APIs in efi_init_memory Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs Wei Liu
                   ` (25 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Wei Liu, Jan Beulich

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/common/efi/boot.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 1d1420f02c..3868293d06 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1705,3 +1705,13 @@ void __init efi_init_memory(void)
 #endif
 }
 #endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.11.0



* [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (30 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 31/55] efi: add emacs block to boot.c Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 33/55] x86/smpboot: add emacs block Wei Liu
                   ` (24 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This requires storing the MFN instead of the linear address of the L4
table. Adjust the code accordingly.
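
Roughly speaking (sketch only, mirroring the hunks below), with just the
MFN stored any code that needs a linear address of the EFI L4 table maps
it on demand and unmaps it straight away:

    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
    {
        l4_pgentry_t *l4t = map_xen_pagetable_new(efi_l4_mfn);

        /* ... read or update entries through l4t ... */
        UNMAP_XEN_PAGETABLE_NEW(l4t);
    }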

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/efi/runtime.h | 12 +++++++++---
 xen/common/efi/boot.c      |  8 ++++++--
 xen/common/efi/efi.h       |  3 ++-
 xen/common/efi/runtime.c   |  8 ++++----
 4 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index d9eb8f5c27..277d237953 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -2,11 +2,17 @@
 #include <asm/mc146818rtc.h>
 
 #ifndef COMPAT
-l4_pgentry_t *__read_mostly efi_l4_pgtable;
+mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
 
 void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
 {
-    if ( efi_l4_pgtable )
-        l4e_write(efi_l4_pgtable + l4idx, l4e);
+    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
+    {
+        l4_pgentry_t *l4t;
+
+        l4t = map_xen_pagetable_new(efi_l4_mfn);
+        l4e_write(l4t + l4idx, l4e);
+        UNMAP_XEN_PAGETABLE_NEW(l4t);
+    }
 }
 #endif
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 3868293d06..f55d6a6d76 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1488,6 +1488,7 @@ void __init efi_init_memory(void)
         unsigned int prot;
     } *extra, *extra_head = NULL;
 #endif
+    l4_pgentry_t *efi_l4_pgtable;
 
     free_ebmalloc_unused_mem();
 
@@ -1603,8 +1604,9 @@ void __init efi_init_memory(void)
                                  mdesc_ver, efi_memmap);
 #else
     /* Set up 1:1 page tables to do runtime calls in "physical" mode. */
-    efi_l4_pgtable = alloc_xen_pagetable();
-    BUG_ON(!efi_l4_pgtable);
+    efi_l4_mfn = alloc_xen_pagetable_new();
+    BUG_ON(mfn_eq(efi_l4_mfn, INVALID_MFN));
+    efi_l4_pgtable = map_xen_pagetable_new(efi_l4_mfn);
     clear_page(efi_l4_pgtable);
 
     copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
@@ -1703,6 +1705,8 @@ void __init efi_init_memory(void)
           i < l4_table_offset(DIRECTMAP_VIRT_END); ++i )
         efi_l4_pgtable[i] = idle_pg_table[i];
 #endif
+
+    UNMAP_XEN_PAGETABLE_NEW(efi_l4_pgtable);
 }
 #endif
 
diff --git a/xen/common/efi/efi.h b/xen/common/efi/efi.h
index 6b9c56ead1..139b660ed7 100644
--- a/xen/common/efi/efi.h
+++ b/xen/common/efi/efi.h
@@ -6,6 +6,7 @@
 #include <efi/eficapsule.h>
 #include <efi/efiapi.h>
 #include <xen/efi.h>
+#include <xen/mm.h>
 #include <xen/spinlock.h>
 #include <asm/page.h>
 
@@ -29,7 +30,7 @@ extern UINTN efi_memmap_size, efi_mdesc_size;
 extern void *efi_memmap;
 
 #ifdef CONFIG_X86
-extern l4_pgentry_t *efi_l4_pgtable;
+extern mfn_t efi_l4_mfn;
 #endif
 
 extern const struct efi_pci_rom *efi_pci_roms;
diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
index 3d118d571d..8263f1d863 100644
--- a/xen/common/efi/runtime.c
+++ b/xen/common/efi/runtime.c
@@ -85,7 +85,7 @@ struct efi_rs_state efi_rs_enter(void)
     static const u32 mxcsr = MXCSR_DEFAULT;
     struct efi_rs_state state = { .cr3 = 0 };
 
-    if ( !efi_l4_pgtable )
+    if ( mfn_eq(efi_l4_mfn, INVALID_MFN) )
         return state;
 
     state.cr3 = read_cr3();
@@ -111,7 +111,7 @@ struct efi_rs_state efi_rs_enter(void)
         lgdt(&gdt_desc);
     }
 
-    switch_cr3_cr4(virt_to_maddr(efi_l4_pgtable), read_cr4());
+    switch_cr3_cr4(mfn_to_maddr(efi_l4_mfn), read_cr4());
 
     return state;
 }
@@ -140,9 +140,9 @@ void efi_rs_leave(struct efi_rs_state *state)
 
 bool efi_rs_using_pgtables(void)
 {
-    return efi_l4_pgtable &&
+    return !mfn_eq(efi_l4_mfn, INVALID_MFN) &&
            (smp_processor_id() == efi_rs_on_cpu) &&
-           (read_cr3() == virt_to_maddr(efi_l4_pgtable));
+           (read_cr3() == mfn_to_maddr(efi_l4_mfn));
 }
 
 unsigned long efi_get_time(void)
-- 
2.11.0



* [PATCH RFC 33/55] x86/smpboot: add emacs block
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (31 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 34/55] x86/smpboot: clone_mapping should have one exit path Wei Liu
                   ` (23 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 7d1226d7bc..4a0982272d 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1384,3 +1384,13 @@ void __init smp_intr_init(void)
     set_direct_apic_vector(INVALIDATE_TLB_VECTOR, invalidate_interrupt);
     set_direct_apic_vector(CALL_FUNCTION_VECTOR, call_function_interrupt);
 }
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.11.0



* [PATCH RFC 34/55] x86/smpboot: clone_mapping should have one exit path
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (32 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 33/55] x86/smpboot: add emacs block Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping Wei Liu
                   ` (22 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

We will soon need to clean up page table mappings in the exit path.

No functional change.
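
Sketch of the resulting structure (illustrative only, with a made-up
function name): every return is funnelled through a single label so that
later patches can unmap transient page table mappings there.

static int example_clone(unsigned long linear)
{
    int rc;

    if ( linear < XEN_VIRT_START )
    {
        rc = -EINVAL;
        goto out;
    }

    /* ... page table walk and allocations ... */

    rc = 0;
 out:
    /* Later patches add UNMAP_XEN_PAGETABLE_NEW() calls here. */
    return rc;
}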

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 35 ++++++++++++++++++++++++++++-------
 1 file changed, 28 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 4a0982272d..cb38f31465 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -675,6 +675,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     l3_pgentry_t *pl3e;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
+    int rc;
 
     /*
      * Sanity check 'linear'.  We only allow cloning from the Xen virtual
@@ -682,11 +683,17 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
      */
     if ( root_table_offset(linear) > ROOT_PAGETABLE_LAST_XEN_SLOT ||
          root_table_offset(linear) < ROOT_PAGETABLE_FIRST_XEN_SLOT )
-        return -EINVAL;
+    {
+        rc = -EINVAL;
+        goto out;
+    }
 
     if ( linear < XEN_VIRT_START ||
          (linear >= XEN_VIRT_END && linear < DIRECTMAP_VIRT_START) )
-        return -EINVAL;
+    {
+        rc = -EINVAL;
+        goto out;
+    }
 
     pl3e = l4e_to_l3e(idle_pg_table[root_table_offset(linear)]) +
         l3_table_offset(linear);
@@ -715,7 +722,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
-                return 0;
+            {
+                rc = 0;
+                goto out;
+            }
             pfn = l1e_get_pfn(*pl1e);
         }
     }
@@ -724,7 +734,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     {
         pl3e = alloc_xen_pagetable();
         if ( !pl3e )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto out;
+        }
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
                   l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
@@ -738,7 +751,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     {
         pl2e = alloc_xen_pagetable();
         if ( !pl2e )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto out;
+        }
         clear_page(pl2e);
         l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
     }
@@ -754,7 +770,10 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     {
         pl1e = alloc_xen_pagetable();
         if ( !pl1e )
-            return -ENOMEM;
+        {
+            rc = -ENOMEM;
+            goto out;
+        }
         clear_page(pl1e);
         l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
     }
@@ -775,7 +794,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     else
         l1e_write(pl1e, l1e_from_pfn(pfn, flags));
 
-    return 0;
+    rc = 0;
+ out:
+    return rc;
 }
 
 DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
-- 
2.11.0



* [PATCH RFC 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (33 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 34/55] x86/smpboot: clone_mapping should have one exit path Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 36/55] x86/smpboot: switch pl2e " Wei Liu
                   ` (21 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index cb38f31465..f74a6c245f 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -672,7 +672,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 {
     unsigned long linear = (unsigned long)ptr, pfn;
     unsigned int flags;
-    l3_pgentry_t *pl3e;
+    l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     int rc;
@@ -695,8 +695,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         goto out;
     }
 
-    pl3e = l4e_to_l3e(idle_pg_table[root_table_offset(linear)]) +
-        l3_table_offset(linear);
+    pl3e = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[root_table_offset(linear)]));
+    pl3e += l3_table_offset(linear);
 
     flags = l3e_get_flags(*pl3e);
     ASSERT(flags & _PAGE_PRESENT);
@@ -730,20 +731,26 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
-        pl3e = alloc_xen_pagetable();
-        if ( !pl3e )
+        mfn_t l3t_mfn = alloc_xen_pagetable_new();
+
+        if ( mfn_eq(l3t_mfn, INVALID_MFN) )
         {
             rc = -ENOMEM;
             goto out;
         }
+
+        pl3e = map_xen_pagetable_new(l3t_mfn);
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
-                  l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
+                  l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR));
     }
     else
-        pl3e = l4e_to_l3e(rpt[root_table_offset(linear)]);
+        pl3e = map_xen_pagetable_new(
+            l4e_get_mfn(rpt[root_table_offset(linear)]));
 
     pl3e += l3_table_offset(linear);
 
@@ -796,6 +803,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
 
-- 
2.11.0



* [PATCH RFC 36/55] x86/smpboot: switch pl2e to use new APIs in clone_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (34 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 37/55] x86/smpboot: switch pl1e " Wei Liu
                   ` (20 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index f74a6c245f..e14e48d823 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -673,7 +673,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     unsigned long linear = (unsigned long)ptr, pfn;
     unsigned int flags;
     l3_pgentry_t *pl3e = NULL;
-    l2_pgentry_t *pl2e;
+    l2_pgentry_t *pl2e = NULL;
     l1_pgentry_t *pl1e;
     int rc;
 
@@ -709,7 +709,8 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     }
     else
     {
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(linear);
+        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e += l2_table_offset(linear);
         flags = l2e_get_flags(*pl2e);
         ASSERT(flags & _PAGE_PRESENT);
         if ( flags & _PAGE_PSE )
@@ -731,6 +732,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
 
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
@@ -756,19 +758,22 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
-        pl2e = alloc_xen_pagetable();
-        if ( !pl2e )
+        mfn_t l2t_mfn = alloc_xen_pagetable_new();
+
+        if ( mfn_eq(l2t_mfn, INVALID_MFN) )
         {
             rc = -ENOMEM;
             goto out;
         }
+
+        pl2e = map_xen_pagetable_new(l2t_mfn);
         clear_page(pl2e);
-        l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
+        l3e_write(pl3e, l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE));
-        pl2e = l3e_to_l2e(*pl3e);
+        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
     }
 
     pl2e += l2_table_offset(linear);
@@ -803,6 +808,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
 }
-- 
2.11.0



* [PATCH RFC 37/55] x86/smpboot: switch pl1e to use new APIs in clone_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (35 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 36/55] x86/smpboot: switch pl2e " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt Wei Liu
                   ` (19 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index e14e48d823..7436799d80 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -674,7 +674,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     unsigned int flags;
     l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e = NULL;
-    l1_pgentry_t *pl1e;
+    l1_pgentry_t *pl1e = NULL;
     int rc;
 
     /*
@@ -721,7 +721,8 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
         else
         {
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
+            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e += l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
             {
@@ -732,6 +733,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
 
@@ -780,19 +782,22 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
-        pl1e = alloc_xen_pagetable();
-        if ( !pl1e )
+        mfn_t l1t_mfn = alloc_xen_pagetable_new();
+
+        if ( mfn_eq(l1t_mfn, INVALID_MFN) )
         {
             rc = -ENOMEM;
             goto out;
         }
+
+        pl1e = map_xen_pagetable_new(l1t_mfn);
         clear_page(pl1e);
-        l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
+        l2e_write(pl2e, l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE));
-        pl1e = l2e_to_l1e(*pl2e);
+        pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
     }
 
     pl1e += l1_table_offset(linear);
@@ -808,6 +813,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
     UNMAP_XEN_PAGETABLE_NEW(pl2e);
     UNMAP_XEN_PAGETABLE_NEW(pl3e);
     return rc;
-- 
2.11.0



* [PATCH RFC 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (36 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 37/55] x86/smpboot: switch pl1e " Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs Wei Liu
                   ` (18 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 7436799d80..a9a39cea6e 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -893,23 +893,27 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
     {
         l3_pgentry_t *l3t;
+        mfn_t l3t_mfn;
         unsigned int i3;
 
         if ( !(root_get_flags(rpt[r]) & _PAGE_PRESENT) )
             continue;
 
-        l3t = l4e_to_l3e(rpt[r]);
+        l3t_mfn = l4e_get_mfn(rpt[r]);
+        l3t = map_xen_pagetable_new(l3t_mfn);
 
         for ( i3 = 0; i3 < L3_PAGETABLE_ENTRIES; ++i3 )
         {
             l2_pgentry_t *l2t;
+            mfn_t l2t_mfn;
             unsigned int i2;
 
             if ( !(l3e_get_flags(l3t[i3]) & _PAGE_PRESENT) )
                 continue;
 
             ASSERT(!(l3e_get_flags(l3t[i3]) & _PAGE_PSE));
-            l2t = l3e_to_l2e(l3t[i3]);
+            l2t_mfn = l3e_get_mfn(l3t[i3]);
+            l2t = map_xen_pagetable_new(l2t_mfn);
 
             for ( i2 = 0; i2 < L2_PAGETABLE_ENTRIES; ++i2 )
             {
@@ -917,13 +921,15 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable(l2e_to_l1e(l2t[i2]));
+                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
             }
 
-            free_xen_pagetable(l2t);
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            free_xen_pagetable_new(l2t_mfn);
         }
 
-        free_xen_pagetable(l3t);
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        free_xen_pagetable_new(l3t_mfn);
     }
 
     free_xen_pagetable(rpt);
-- 
2.11.0



* [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (37 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-19 16:45   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 40/55] x86/shim: map and unmap page tables in replace_va_mapping Wei Liu
                   ` (17 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This then requires moving the declaration of the root page table MFN
into mm.h and modifying setup_cpu_root_pgt to have a single exit path.

We also need to force map_domain_page to use the direct map when
switching per-domain mappings. This is contrary to our end goal of
removing the direct map, but the override will be removed once we make
map_domain_page context-switch safe in another (large) patch series.
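
Condensed sketch of the context-switch path after this change (it only
restates the paravirt_ctxt_switch_to() hunk below, where v is the vcpu
being switched to):

    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);

    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
    {
        root_pgentry_t *rpt;

        /*
         * Force the mapcache onto the direct map while the per-cpu root
         * page table is transiently mapped and the per-domain slot
         * updated.
         */
        mapcache_override_current(INVALID_VCPU);
        rpt = map_xen_pagetable_new(rpt_mfn);
        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
            l4e_from_page(v->domain->arch.perdomain_l3_pg,
                          __PAGE_HYPERVISOR_RW);
        UNMAP_XEN_PAGETABLE_NEW(rpt);
        mapcache_override_current(NULL);
    }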

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/domain.c           | 15 ++++++++++++---
 xen/arch/x86/domain_page.c      |  2 +-
 xen/arch/x86/mm.c               |  2 +-
 xen/arch/x86/pv/domain.c        |  2 +-
 xen/arch/x86/smpboot.c          | 40 +++++++++++++++++++++++++++-------------
 xen/include/asm-x86/mm.h        |  2 ++
 xen/include/asm-x86/processor.h |  1 -
 7 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 32dc4253ff..603495e55a 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -68,6 +68,7 @@
 #include <asm/pv/domain.h>
 #include <asm/pv/mm.h>
 #include <asm/spec_ctrl.h>
+#include <asm/setup.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
@@ -1589,12 +1590,20 @@ void paravirt_ctxt_switch_from(struct vcpu *v)
 
 void paravirt_ctxt_switch_to(struct vcpu *v)
 {
-    root_pgentry_t *root_pgt = this_cpu(root_pgt);
+    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
 
-    if ( root_pgt )
-        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
+    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        root_pgentry_t *rpt;
+
+        mapcache_override_current(INVALID_VCPU);
+        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
+        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        mapcache_override_current(NULL);
+    }
 
     if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
         activate_debugregs(v);
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 24083e9a86..cfcffd35f3 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -57,7 +57,7 @@ static inline struct vcpu *mapcache_current_vcpu(void)
     return v;
 }
 
-void __init mapcache_override_current(struct vcpu *v)
+void mapcache_override_current(struct vcpu *v)
 {
     this_cpu(override) = v;
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9e115ef0b8..44c9df5c9e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -564,7 +564,7 @@ void write_ptbase(struct vcpu *v)
     if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn));
         if ( new_cr4 & X86_CR4_PCIDE )
             cpu_info->pv_cr3 |= get_pcid_bits(v, true);
         switch_cr3_cr4(v->arch.cr3, new_cr4);
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 7e84b04082..2fd944b7e3 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -303,7 +303,7 @@ static void _toggle_guest_pt(struct vcpu *v)
         struct cpu_info *cpu_info = get_cpu_info();
 
         cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
+        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn)) |
                            (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
     }
 
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index a9a39cea6e..32dce00d10 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -819,7 +819,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     return rc;
 }
 
-DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
+DEFINE_PER_CPU(mfn_t, root_pgt_mfn);
 
 static root_pgentry_t common_pgt;
 
@@ -827,19 +827,27 @@ extern const char _stextentry[], _etextentry[];
 
 static int setup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt;
+    root_pgentry_t *rpt = NULL;
+    mfn_t rpt_mfn;
     unsigned int off;
     int rc;
 
     if ( !opt_xpti_hwdom && !opt_xpti_domu )
-        return 0;
+    {
+        rc = 0;
+        goto out;
+    }
 
-    rpt = alloc_xen_pagetable();
-    if ( !rpt )
-        return -ENOMEM;
+    rpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
+    {
+        rc = -ENOMEM;
+        goto out;
+    }
 
+    rpt = map_xen_pagetable_new(rpt_mfn);
     clear_page(rpt);
-    per_cpu(root_pgt, cpu) = rpt;
+    per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
 
     rpt[root_table_offset(RO_MPT_VIRT_START)] =
         idle_pg_table[root_table_offset(RO_MPT_VIRT_START)];
@@ -856,7 +864,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
             rc = clone_mapping(ptr, rpt);
 
         if ( rc )
-            return rc;
+            goto out;
 
         common_pgt = rpt[root_table_offset(XEN_VIRT_START)];
     }
@@ -875,19 +883,24 @@ static int setup_cpu_root_pgt(unsigned int cpu)
     if ( !rc )
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
     return rc;
 }
 
 static void cleanup_cpu_root_pgt(unsigned int cpu)
 {
-    root_pgentry_t *rpt = per_cpu(root_pgt, cpu);
+    mfn_t rpt_mfn = per_cpu(root_pgt_mfn, cpu);
+    root_pgentry_t *rpt;
     unsigned int r;
     unsigned long stub_linear = per_cpu(stubs.addr, cpu);
 
-    if ( !rpt )
+    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
         return;
 
-    per_cpu(root_pgt, cpu) = NULL;
+    per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
+
+    rpt = map_xen_pagetable_new(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -932,7 +945,8 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
         free_xen_pagetable_new(l3t_mfn);
     }
 
-    free_xen_pagetable(rpt);
+    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    free_xen_pagetable_new(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
@@ -1138,7 +1152,7 @@ void __init smp_prepare_cpus(void)
     rc = setup_cpu_root_pgt(0);
     if ( rc )
         panic("Error %d setting up PV root page table\n", rc);
-    if ( per_cpu(root_pgt, 0) )
+    if ( !mfn_eq(per_cpu(root_pgt_mfn, 0), INVALID_MFN) )
     {
         get_cpu_info()->pv_cr3 = 0;
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 4378d9f815..708b84bb89 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -657,4 +657,6 @@ void free_xen_pagetable_new(mfn_t mfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
+DECLARE_PER_CPU(mfn_t, root_pgt_mfn);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index df01ae30d7..9f98ac96f5 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -449,7 +449,6 @@ extern idt_entry_t idt_table[];
 extern idt_entry_t *idt_tables[];
 
 DECLARE_PER_CPU(struct tss_struct, init_tss);
-DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
 
 extern void write_ptbase(struct vcpu *v);
 
-- 
2.11.0



* [PATCH RFC 40/55] x86/shim: map and unmap page tables in replace_va_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (38 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Wei Liu
                   ` (16 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/pv/shim.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 324ca27f93..cf638fa965 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -167,15 +167,25 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
                                       unsigned long va, mfn_t mfn)
 {
     l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
-    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
-    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
-    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
-    struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
+    l3_pgentry_t *pl3e;
+    l2_pgentry_t *pl2e;
+    l1_pgentry_t *pl1e;
 
-    put_page_and_type(page);
+    pl3e = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    pl3e += l3_table_offset(va);
+    pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    pl2e += l2_table_offset(va);
+    pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    pl1e += l1_table_offset(va);
+
+    put_page_and_type(mfn_to_page(l1e_get_mfn(*pl1e)));
 
     *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
                                                       : COMPAT_L1_PROT));
+
+    UNMAP_XEN_PAGETABLE_NEW(pl1e);
+    UNMAP_XEN_PAGETABLE_NEW(pl2e);
+    UNMAP_XEN_PAGETABLE_NEW(pl3e);
 }
 
 static void evtchn_reserve(struct domain *d, unsigned int port)
-- 
2.11.0



* [PATCH RFC 41/55] x86_64/mm: map and unmap page tables in m2p_mapped
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (39 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 40/55] x86/shim: map and unmap page tables in replace_va_mapping Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-19 16:45   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Wei Liu
                   ` (15 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 216f97c95f..2b88a1af37 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -130,28 +130,36 @@ static int m2p_mapped(unsigned long spfn)
 {
     unsigned long va;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt;
+    l2_pgentry_t *l2_ro_mpt = NULL;
+    int rc = M2P_NO_MAPPED;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
              (_PAGE_PRESENT |_PAGE_PSE))
     {
         case _PAGE_PSE|_PAGE_PRESENT:
-            return M2P_1G_MAPPED;
+            rc = M2P_1G_MAPPED;
+            goto out;
         /* Check for next level */
         case _PAGE_PRESENT:
             break;
         default:
-            return M2P_NO_MAPPED;
+            rc = M2P_NO_MAPPED;
+            goto out;
     }
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2_ro_mpt = map_xen_pagetable_new(
+        l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
     if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
-        return M2P_2M_MAPPED;
+        rc = M2P_2M_MAPPED;
 
-    return M2P_NO_MAPPED;
+ out:
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    return rc;
 }
 
 static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
-- 
2.11.0



* [PATCH RFC 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (40 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-19 16:45   ` Nuernberger, Stefan
  2019-02-07 16:44 ` [PATCH RFC 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Wei Liu
                   ` (14 subsequent siblings)
  56 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 2b88a1af37..597d8e9ed8 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -166,8 +166,8 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
 {
     unsigned long i, n, v;
     mfn_t m2p_start_mfn = INVALID_MFN;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
 
     /* M2P table is mappable read-only by privileged domains. */
     for ( v  = RDWR_MPT_VIRT_START;
@@ -175,14 +175,22 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v += n << PAGE_SHIFT )
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
         if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
         {
             n = L1_PAGETABLE_ENTRIES;
-            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2e = l2t[l2_table_offset(v)];
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
             m2p_start_mfn = l2e_get_mfn(l2e);
@@ -203,11 +211,18 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2e = l2t[l2_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
+
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
         m2p_start_mfn = l2e_get_mfn(l2e);
-- 
2.11.0



* [PATCH RFC 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (41 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Wei Liu
                   ` (13 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 597d8e9ed8..bd298fff1b 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -252,11 +252,13 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
+    l2_ro_mpt = map_xen_pagetable_new(
+        l3e_get_mfn(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     for ( i = smap; i < emap; )
     {
@@ -278,6 +280,9 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
         i += 1UL << (L2_PAGETABLE_SHIFT - 2);
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+
     return;
 }
 
-- 
2.11.0



* [PATCH RFC 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (42 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table Wei Liu
                   ` (12 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index bd298fff1b..36f25583f2 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -292,7 +292,8 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]));
 
     /*
      * No need to clean m2p structure existing before the hotplug
@@ -314,26 +315,35 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+        l2_ro_mpt = map_xen_pagetable_new(
+            l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
         if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                     (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
+            UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
             continue;
         }
 
         pt_pfn = l2e_get_pfn(l2_ro_mpt[l2_table_offset(va)]);
         if ( hotadd_mem_valid(pt_pfn, info) )
         {
+            l2_pgentry_t *l2t;
+
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
-            l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_empty());
+            l2t = map_xen_pagetable_new(
+                l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
+            l2e_write(&l2t[l2_table_offset(va)], l2e_empty());
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
         }
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
+        UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
     }
 
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+
     destroy_compat_m2p_mapping(info);
 
     /* Brute-Force flush all TLB */
-- 
2.11.0



* [PATCH RFC 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (43 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table Wei Liu
                   ` (11 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 36f25583f2..6087851e69 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -382,11 +382,13 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 
     va = HIRO_COMPAT_MPT_VIRT_START +
          smap * sizeof(*compat_machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+    l3_ro_mpt = map_xen_pagetable_new(
+        l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2_ro_mpt = map_xen_pagetable_new(
+        l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
 #define CNT ((sizeof(*frame_table) & -sizeof(*frame_table)) / \
@@ -424,6 +426,9 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
     }
 #undef CNT
 #undef MFN
+
+    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
     return err;
 }
 
-- 
2.11.0



* [PATCH RFC 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (44 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory Wei Liu
                   ` (10 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 24 +++++++++++++++++-------
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 6087851e69..cbd1f829cf 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -800,8 +800,8 @@ void free_compat_arg_xlat(struct vcpu *v)
 static void cleanup_frame_table(struct mem_hotadd_info *info)
 {
     unsigned long sva, eva;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
     mfn_t spfn, epfn;
 
     spfn = _mfn(info->spfn);
@@ -815,8 +815,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
     while (sva < eva)
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(sva)])[
-          l3_table_offset(sva)];
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(sva)]));
+        l3e = l3t[l3_table_offset(sva)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -825,7 +827,9 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(sva)];
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2e = l2t[l2_table_offset(sva)];
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
         ASSERT(l2e_get_flags(l2e) & _PAGE_PRESENT);
 
         if ( (l2e_get_flags(l2e) & (_PAGE_PRESENT | _PAGE_PSE)) ==
@@ -841,8 +845,14 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        ASSERT(l1e_get_flags(l2e_to_l1e(l2e)[l1_table_offset(sva)]) &
-                _PAGE_PRESENT);
+#ifndef NDEBUG
+        {
+            l1_pgentry_t *l1t = map_xen_pagetable_new(l2e_get_mfn(l2e));
+            ASSERT(l1e_get_flags(l1t[l1_table_offset(sva)]) &
+                   _PAGE_PRESENT);
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
+        }
+#endif
          sva = (sva & ~((1UL << PAGE_SHIFT) - 1)) +
                     (1UL << PAGE_SHIFT);
     }
-- 
2.11.0



* [PATCH RFC 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (45 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op Wei Liu
                   ` (9 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cbd1f829cf..9dd2ecad4a 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -929,8 +929,8 @@ static int extend_frame_table(struct mem_hotadd_info *info)
 void __init subarch_init_memory(void)
 {
     unsigned long i, n, v, m2p_start_mfn;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
 
     BUILD_BUG_ON(RDWR_MPT_VIRT_START & ((1UL << L3_PAGETABLE_SHIFT) - 1));
     BUILD_BUG_ON(RDWR_MPT_VIRT_END   & ((1UL << L3_PAGETABLE_SHIFT) - 1));
@@ -940,14 +940,22 @@ void __init subarch_init_memory(void)
           v += n << PAGE_SHIFT )
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
         if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
         {
             n = L1_PAGETABLE_ENTRIES;
-            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2e = l2t[l2_table_offset(v)];
+            UNMAP_XEN_PAGETABLE_NEW(l2t);
+
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
             m2p_start_mfn = l2e_get_pfn(l2e);
@@ -966,11 +974,18 @@ void __init subarch_init_memory(void)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3t = map_xen_pagetable_new(
+            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+        l3e = l3t[l3_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
+
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2e = l2t[l2_table_offset(v)];
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
+
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
         m2p_start_mfn = l2e_get_pfn(l2e);
-- 
2.11.0



* [PATCH RFC 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (46 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt Wei Liu
                   ` (8 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/x86_64/mm.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 9dd2ecad4a..cac06b782d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1015,8 +1015,8 @@ void __init subarch_init_memory(void)
 long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct xen_machphys_mfn_list xmml;
-    l3_pgentry_t l3e;
-    l2_pgentry_t l2e;
+    l3_pgentry_t l3e, *l3t;
+    l2_pgentry_t l2e, *l2t;
     unsigned long v, limit;
     xen_pfn_t mfn, last_mfn;
     unsigned int i;
@@ -1035,13 +1035,18 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
               (v < (unsigned long)(machine_to_phys_mapping + max_page));
               i++, v += 1UL << L2_PAGETABLE_SHIFT )
         {
-            l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-                l3_table_offset(v)];
+            l3t = map_xen_pagetable_new(
+                l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
+            l3e = l3t[l3_table_offset(v)];
+            UNMAP_XEN_PAGETABLE_NEW(l3t);
+
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 mfn = last_mfn;
             else if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
             {
-                l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2e = l2t[l2_table_offset(v)];
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
                 if ( l2e_get_flags(l2e) & _PAGE_PRESENT )
                     mfn = l2e_get_pfn(l2e);
                 else
-- 
2.11.0



* [PATCH RFC 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (47 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly Wei Liu
                   ` (7 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/smpboot.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 32dce00d10..3ac1924391 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -951,11 +951,17 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
     {
-        l3_pgentry_t *l3t = l4e_to_l3e(common_pgt);
-        l2_pgentry_t *l2t = l3e_to_l2e(l3t[l3_table_offset(stub_linear)]);
-        l1_pgentry_t *l1t = l2e_to_l1e(l2t[l2_table_offset(stub_linear)]);
+        l3_pgentry_t *l3t = map_xen_pagetable_new(l4e_get_mfn(common_pgt));
+        l2_pgentry_t *l2t = map_xen_pagetable_new(
+            l3e_get_mfn(l3t[l3_table_offset(stub_linear)]));
+        l1_pgentry_t *l1t = map_xen_pagetable_new(
+            l2e_get_mfn(l2t[l2_table_offset(stub_linear)]));
 
         l1t[l1_table_offset(stub_linear)] = l1e_empty();
+
+        UNMAP_XEN_PAGETABLE_NEW(l1t);
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE_NEW(l3t);
     }
 }
 
-- 
2.11.0



* [PATCH RFC 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (48 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv Wei Liu
                   ` (6 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/pv/dom0_build.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 837ef7bca1..293be076d9 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -50,17 +50,17 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
     unsigned long count;
     struct page_info *page;
     l4_pgentry_t *pl4e;
-    l3_pgentry_t *pl3e;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
+    l3_pgentry_t *pl3e, *l3t;
+    l2_pgentry_t *pl2e, *l2t;
+    l1_pgentry_t *pl1e, *l1t;
 
     pl4e = l4start + l4_table_offset(vpt_start);
-    pl3e = l4e_to_l3e(*pl4e);
-    pl3e += l3_table_offset(vpt_start);
-    pl2e = l3e_to_l2e(*pl3e);
-    pl2e += l2_table_offset(vpt_start);
-    pl1e = l2e_to_l1e(*pl2e);
-    pl1e += l1_table_offset(vpt_start);
+    l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    pl3e = l3t + l3_table_offset(vpt_start);
+    l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    pl2e = l2t + l2_table_offset(vpt_start);
+    l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    pl1e = l1t + l1_table_offset(vpt_start);
     for ( count = 0; count < nr_pt_pages; count++ )
     {
         l1e_remove_flags(*pl1e, _PAGE_RW);
@@ -85,12 +85,23 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
             if ( !((unsigned long)++pl2e & (PAGE_SIZE - 1)) )
             {
                 if ( !((unsigned long)++pl3e & (PAGE_SIZE - 1)) )
-                    pl3e = l4e_to_l3e(*++pl4e);
-                pl2e = l3e_to_l2e(*pl3e);
+                {
+                    UNMAP_XEN_PAGETABLE_NEW(l3t);
+                    l3t = map_xen_pagetable_new(l4e_get_mfn(*++pl4e));
+                    pl3e = l3t;
+                }
+                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+                pl2e = l2t;
             }
-            pl1e = l2e_to_l1e(*pl2e);
+            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e = l1t;
         }
     }
+    UNMAP_XEN_PAGETABLE_NEW(l1t);
+    UNMAP_XEN_PAGETABLE_NEW(l2t);
+    UNMAP_XEN_PAGETABLE_NEW(l3t);
 }
 
 static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
-- 
2.11.0



* [PATCH RFC 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (49 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 52/55] x86: remove lXe_to_lYe in __start_xen Wei Liu
                   ` (5 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/pv/dom0_build.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 293be076d9..a07d2138a2 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -690,6 +690,8 @@ int __init dom0_construct_pv(struct domain *d,
 
     if ( is_pv_32bit_domain(d) )
     {
+        l2_pgentry_t *l2t;
+
         /* Ensure the first four L3 entries are all populated. */
         for ( i = 0, l3tab = l3start; i < 4; ++i, ++l3tab )
         {
@@ -704,7 +706,9 @@ int __init dom0_construct_pv(struct domain *d,
                 l3e_get_page(*l3tab)->u.inuse.type_info |= PGT_pae_xen_l2;
         }
 
-        init_xen_pae_l2_slots(l3e_to_l2e(l3start[3]), d);
+        l2t = map_xen_pagetable_new(l3e_get_mfn(l3start[3]));
+        init_xen_pae_l2_slots(l2t, d);
+        UNMAP_XEN_PAGETABLE_NEW(l2t);
     }
 
     /* Pages that are part of page tables must be read only. */
-- 
2.11.0



* [PATCH RFC 52/55] x86: remove lXe_to_lYe in __start_xen
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (50 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 53/55] x86/mm: drop old page table APIs Wei Liu
                   ` (4 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Properly map and unmap page tables where necessary.
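
As the hunks below do, the mapping base (l3t/l2t) is kept separate from the
walking pointer (pl3e/pl2e), so the whole table can still be unmapped via its
base once the inner loop has advanced the cursor. A condensed, excerpt-style
sketch of the idiom used in this patch:

    /* Cursor (pl3e) and base (l3t) start identical; only the cursor advances. */
    pl3e = l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
    for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
    {
        /* ... relocate *pl3e ... */
    }
    UNMAP_XEN_PAGETABLE_NEW(l3t);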

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/setup.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 92da060915..7b6420f95a 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1079,13 +1079,17 @@ void __init noreturn __start_xen(unsigned long mbi_p)
             pl4e = __va(__pa(idle_pg_table));
             for ( i = 0 ; i < L4_PAGETABLE_ENTRIES; i++, pl4e++ )
             {
+                l3_pgentry_t *l3t;
+
                 if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
                     continue;
                 *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) +
                                         xen_phys_start);
-                pl3e = l4e_to_l3e(*pl4e);
+                pl3e = l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
                 for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
                 {
+                    l2_pgentry_t *l2t;
+
                     /* Not present, 1GB mapping, or already relocated? */
                     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) ||
                          (l3e_get_flags(*pl3e) & _PAGE_PSE) ||
@@ -1093,7 +1097,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         continue;
                     *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) +
                                             xen_phys_start);
-                    pl2e = l3e_to_l2e(*pl3e);
+                    pl2e = l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
                     for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ )
                     {
                         /* Not present, PSE, or already relocated? */
@@ -1104,7 +1108,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         *pl2e = l2e_from_intpte(l2e_get_intpte(*pl2e) +
                                                 xen_phys_start);
                     }
+                    UNMAP_XEN_PAGETABLE_NEW(l2t);
                 }
+                UNMAP_XEN_PAGETABLE_NEW(l3t);
             }
 
             /* The only data mappings to be relocated are in the Xen area. */
-- 
2.11.0



* [PATCH RFC 53/55] x86/mm: drop old page table APIs
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (51 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 52/55] x86: remove lXe_to_lYe in __start_xen Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 54/55] x86: switch to use domheap page for page tables Wei Liu
                   ` (3 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Now that we've switched all users to the new APIs, the old ones aren't
needed anymore.
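
For out-of-tree callers, the mechanical shape of the conversion, taken from
the hunks earlier in this series, is roughly:

    /* Before, relying on the direct map: */
    l2_pgentry_t *l2t = l3e_to_l2e(*pl3e);
    /* ... read or write l2t[...] ... */

    /* After, with an explicit map/unmap pair: */
    l2_pgentry_t *l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
    /* ... read or write l2t[...] ... */
    UNMAP_XEN_PAGETABLE_NEW(l2t);

The same pattern applies to l4e_to_l3e() and l2e_to_l1e(), via l4e_get_mfn()
and l2e_get_mfn() respectively.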

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c          | 16 ----------------
 xen/include/asm-x86/mm.h   |  2 --
 xen/include/asm-x86/page.h |  5 -----
 3 files changed, 23 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 44c9df5c9e..2a3442d881 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4723,22 +4723,6 @@ int mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
-void *alloc_xen_pagetable(void)
-{
-    mfn_t mfn;
-
-    mfn = alloc_xen_pagetable_new();
-    ASSERT(!mfn_eq(mfn, INVALID_MFN));
-
-    return map_xen_pagetable_new(mfn);
-}
-
-void free_xen_pagetable(void *v)
-{
-    if ( system_state != SYS_STATE_early_boot )
-        free_xen_pagetable_new(virt_to_mfn(v));
-}
-
 mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 708b84bb89..7fae4bb311 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -642,8 +642,6 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int *flags);
 
 /* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
 mfn_t alloc_xen_pagetable_new(void);
 void *map_xen_pagetable_new(mfn_t mfn);
 void unmap_xen_pagetable_new(void *v);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 05a8b1efa6..906ec701a3 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -187,11 +187,6 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define l4e_has_changed(x,y,flags) \
     ( !!(((x).l4 ^ (y).l4) & ((PADDR_MASK&PAGE_MASK)|put_pte_flags(flags))) )
 
-/* Pagetable walking. */
-#define l2e_to_l1e(x)              ((l1_pgentry_t *)__va(l2e_get_paddr(x)))
-#define l3e_to_l2e(x)              ((l2_pgentry_t *)__va(l3e_get_paddr(x)))
-#define l4e_to_l3e(x)              ((l3_pgentry_t *)__va(l4e_get_paddr(x)))
-
 #define map_l1t_from_l2e(x)        (l1_pgentry_t *)map_domain_page(l2e_get_mfn(x))
 #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
 #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
-- 
2.11.0



* [PATCH RFC 54/55] x86: switch to use domheap page for page tables
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (52 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 53/55] x86/mm: drop old page table APIs Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-02-07 16:44 ` [PATCH RFC 55/55] x86/mm: drop _new suffix from page table APIs Wei Liu
                   ` (2 subsequent siblings)
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Modify all the _new APIs to handle domheap pages.
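
Since map_xen_pagetable_new() now hands back a transient domain-page mapping
rather than a direct-map address, every caller must pair it with
UNMAP_XEN_PAGETABLE_NEW(), which the preceding patches already arranged. A
minimal sketch of the intended lifecycle, using only the APIs from this
series:

    mfn_t mfn = alloc_xen_pagetable_new();

    if ( !mfn_eq(mfn, INVALID_MFN) )
    {
        l1_pgentry_t *l1t = map_xen_pagetable_new(mfn);

        clear_page(l1t);
        /* ... fill in entries with l1e_write() ... */
        UNMAP_XEN_PAGETABLE_NEW(l1t);

        /* ... install and use the table ... */

        /* Once the table is no longer referenced: */
        free_xen_pagetable_new(mfn);
    }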

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2a3442d881..97dd6a7f63 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4727,10 +4727,10 @@ mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
-        void *ptr = alloc_xenheap_page();
+        struct page_info *pg = alloc_domheap_page(NULL, 0);
 
-        BUG_ON(!hardware_domain && !ptr);
-        return virt_to_mfn(ptr);
+        BUG_ON(!hardware_domain && !pg);
+        return pg ? page_to_mfn(pg) : INVALID_MFN;
     }
 
     return alloc_boot_pages(1, 1);
@@ -4738,20 +4738,21 @@ mfn_t alloc_xen_pagetable_new(void)
 
 void *map_xen_pagetable_new(mfn_t mfn)
 {
-    return mfn_to_virt(mfn_x(mfn));
+    return map_domain_page(mfn);
 }
 
 /* v can point to an entry within a table or be NULL */
 void unmap_xen_pagetable_new(void *v)
 {
-    /* XXX still using xenheap page, no need to do anything.  */
+    if ( v )
+        unmap_domain_page((const void *)((unsigned long)v & PAGE_MASK));
 }
 
 /* mfn can be INVALID_MFN */
 void free_xen_pagetable_new(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
-        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
+        free_domheap_page(mfn_to_page(mfn));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
-- 
2.11.0



* [PATCH RFC 55/55] x86/mm: drop _new suffix from page table APIs
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (53 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 54/55] x86: switch to use domheap page for page tables Wei Liu
@ 2019-02-07 16:44 ` Wei Liu
  2019-03-18 21:14 ` [PATCH RFC 00/55] x86: use domheap page for xen page tables Nuernberger, Stefan
  2019-03-28 12:52 ` Nuernberger, Stefan
  56 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-02-07 16:44 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
Patch generated with

find . -name '*.[ch]' -exec sed -i 's/$OLD/$NEW/g' {} +
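
Judging by the diff below, the $OLD/$NEW pairs are simply the _new/_NEW names
introduced earlier in the series and their suffix-less counterparts, i.e.
something along the lines of:

    find . -name '*.[ch]' -exec sed -i \
        -e 's/_xen_pagetable_new/_xen_pagetable/g' \
        -e 's/UNMAP_XEN_PAGETABLE_NEW/UNMAP_XEN_PAGETABLE/g' {} +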
---
 xen/arch/x86/domain.c        |   4 +-
 xen/arch/x86/domain_page.c   |   2 +-
 xen/arch/x86/efi/runtime.h   |   4 +-
 xen/arch/x86/mm.c            | 164 +++++++++++++++++++++----------------------
 xen/arch/x86/pv/dom0_build.c |  28 ++++----
 xen/arch/x86/pv/shim.c       |  12 ++--
 xen/arch/x86/setup.c         |   8 +--
 xen/arch/x86/smpboot.c       |  74 +++++++++----------
 xen/arch/x86/x86_64/mm.c     | 136 +++++++++++++++++------------------
 xen/common/efi/boot.c        |  42 +++++------
 xen/include/asm-x86/mm.h     |  18 ++---
 11 files changed, 246 insertions(+), 246 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 603495e55a..e0ac74a914 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1597,11 +1597,11 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
         root_pgentry_t *rpt;
 
         mapcache_override_current(INVALID_VCPU);
-        rpt = map_xen_pagetable_new(rpt_mfn);
+        rpt = map_xen_pagetable(rpt_mfn);
         rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
             l4e_from_page(v->domain->arch.perdomain_l3_pg,
                           __PAGE_HYPERVISOR_RW);
-        UNMAP_XEN_PAGETABLE_NEW(rpt);
+        UNMAP_XEN_PAGETABLE(rpt);
         mapcache_override_current(NULL);
     }
 
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index cfcffd35f3..9ea74b456c 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -343,7 +343,7 @@ mfn_t domain_page_map_to_mfn(const void *ptr)
         l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
         BUG_ON(!pl1e);
         l1e = *pl1e;
-        UNMAP_XEN_PAGETABLE_NEW(pl1e);
+        UNMAP_XEN_PAGETABLE(pl1e);
     }
     else
     {
diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index 277d237953..ca15c5aab7 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -10,9 +10,9 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
     {
         l4_pgentry_t *l4t;
 
-        l4t = map_xen_pagetable_new(efi_l4_mfn);
+        l4t = map_xen_pagetable(efi_l4_mfn);
         l4e_write(l4t + l4idx, l4e);
-        UNMAP_XEN_PAGETABLE_NEW(l4t);
+        UNMAP_XEN_PAGETABLE(l4t);
     }
 }
 #endif
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 97dd6a7f63..941460a94e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -366,22 +366,22 @@ void __init arch_init_memory(void)
             ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS);
             if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) )
             {
-                mfn_t l3tab_mfn = alloc_xen_pagetable_new();
+                mfn_t l3tab_mfn = alloc_xen_pagetable();
 
                 if ( !mfn_eq(l3tab_mfn, INVALID_MFN) )
                 {
                     l3_pgentry_t *l3idle =
-                        map_xen_pagetable_new(
+                        map_xen_pagetable(
                             l4e_get_mfn(idle_pg_table[l4_table_offset(split_va)]));
-                    l3_pgentry_t *l3tab = map_xen_pagetable_new(l3tab_mfn);
+                    l3_pgentry_t *l3tab = map_xen_pagetable(l3tab_mfn);
 
                     for ( i = 0; i < l3_table_offset(split_va); ++i )
                         l3tab[i] = l3idle[i];
                     for ( ; i < L3_PAGETABLE_ENTRIES; ++i )
                         l3tab[i] = l3e_empty();
                     split_l4e = l4e_from_mfn(l3tab_mfn, __PAGE_HYPERVISOR_RW);
-                    UNMAP_XEN_PAGETABLE_NEW(l3idle);
-                    UNMAP_XEN_PAGETABLE_NEW(l3tab);
+                    UNMAP_XEN_PAGETABLE(l3idle);
+                    UNMAP_XEN_PAGETABLE(l3tab);
                 }
                 else
                     ++root_pgt_pv_xen_slots;
@@ -4723,7 +4723,7 @@ int mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
-mfn_t alloc_xen_pagetable_new(void)
+mfn_t alloc_xen_pagetable(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
@@ -4736,20 +4736,20 @@ mfn_t alloc_xen_pagetable_new(void)
     return alloc_boot_pages(1, 1);
 }
 
-void *map_xen_pagetable_new(mfn_t mfn)
+void *map_xen_pagetable(mfn_t mfn)
 {
     return map_domain_page(mfn);
 }
 
 /* v can point to an entry within a table or be NULL */
-void unmap_xen_pagetable_new(void *v)
+void unmap_xen_pagetable(void *v)
 {
     if ( v )
         unmap_domain_page((const void *)((unsigned long)v & PAGE_MASK));
 }
 
 /* mfn can be INVALID_MFN */
-void free_xen_pagetable_new(mfn_t mfn)
+void free_xen_pagetable(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
         free_domheap_page(mfn_to_page(mfn));
@@ -4773,11 +4773,11 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
         l3_pgentry_t *l3t;
         mfn_t mfn;
 
-        mfn = alloc_xen_pagetable_new();
+        mfn = alloc_xen_pagetable();
         if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
-        l3t = map_xen_pagetable_new(mfn);
+        l3t = map_xen_pagetable(mfn);
 
         if ( locking )
             spin_lock(&map_pgdir_lock);
@@ -4797,15 +4797,15 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
         {
             ASSERT(!pl3e);
             ASSERT(!mfn_eq(mfn, INVALID_MFN));
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
-            free_xen_pagetable_new(mfn);
+            UNMAP_XEN_PAGETABLE(l3t);
+            free_xen_pagetable(mfn);
         }
     }
 
     if ( !pl3e )
     {
         ASSERT(l4e_get_flags(*pl4e) & _PAGE_PRESENT);
-        pl3e = (l3_pgentry_t *)map_xen_pagetable_new(l4e_get_mfn(*pl4e))
+        pl3e = (l3_pgentry_t *)map_xen_pagetable(l4e_get_mfn(*pl4e))
             + l3_table_offset(v);
     }
 
@@ -4832,11 +4832,11 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         l2_pgentry_t *l2t;
         mfn_t mfn;
 
-        mfn = alloc_xen_pagetable_new();
+        mfn = alloc_xen_pagetable();
         if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
-        l2t = map_xen_pagetable_new(mfn);
+        l2t = map_xen_pagetable(mfn);
 
         if ( locking )
             spin_lock(&map_pgdir_lock);
@@ -4854,8 +4854,8 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         {
             ASSERT(!pl2e);
             ASSERT(!mfn_eq(mfn, INVALID_MFN));
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
-            free_xen_pagetable_new(mfn);
+            UNMAP_XEN_PAGETABLE(l2t);
+            free_xen_pagetable(mfn);
         }
     }
 
@@ -4864,12 +4864,12 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !pl2e )
     {
         ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
-        pl2e = (l2_pgentry_t *)map_xen_pagetable_new(l3e_get_mfn(*pl3e))
+        pl2e = (l2_pgentry_t *)map_xen_pagetable(l3e_get_mfn(*pl3e))
             + l2_table_offset(v);
     }
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return pl2e;
 }
 
@@ -4888,11 +4888,11 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         l1_pgentry_t *l1t;
         mfn_t mfn;
 
-        mfn = alloc_xen_pagetable_new();
+        mfn = alloc_xen_pagetable();
         if ( mfn_eq(mfn, INVALID_MFN) )
             goto out;
 
-        l1t = map_xen_pagetable_new(mfn);
+        l1t = map_xen_pagetable(mfn);
 
         if ( locking )
             spin_lock(&map_pgdir_lock);
@@ -4910,8 +4910,8 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         {
             ASSERT(!pl1e);
             ASSERT(!mfn_eq(mfn, INVALID_MFN));
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
-            free_xen_pagetable_new(mfn);
+            UNMAP_XEN_PAGETABLE(l1t);
+            free_xen_pagetable(mfn);
         }
     }
 
@@ -4920,12 +4920,12 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !pl1e )
     {
         ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
-        pl1e = (l1_pgentry_t *)map_xen_pagetable_new(l2e_get_mfn(*pl2e))
+        pl1e = (l1_pgentry_t *)map_xen_pagetable(l2e_get_mfn(*pl2e))
             + l1_table_offset(v);
     }
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
+    UNMAP_XEN_PAGETABLE(pl2e);
     return pl1e;
 }
 
@@ -5004,7 +5004,7 @@ int map_pages_to_xen(
                     l2_pgentry_t *l2t;
                     mfn_t l2t_mfn = l3e_get_mfn(ol3e);
 
-                    l2t = map_xen_pagetable_new(l2t_mfn);
+                    l2t = map_xen_pagetable(l2t_mfn);
 
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
@@ -5019,10 +5019,10 @@ int map_pages_to_xen(
                             l1_pgentry_t *l1t;
                             mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                            l1t = map_xen_pagetable_new(l1t_mfn);
+                            l1t = map_xen_pagetable(l1t_mfn);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
                                 flush_flags(l1e_get_flags(l1t[j]));
-                            UNMAP_XEN_PAGETABLE_NEW(l1t);
+                            UNMAP_XEN_PAGETABLE(l1t);
                         }
                     }
                     flush_area(virt, flush_flags);
@@ -5031,9 +5031,9 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                            free_xen_pagetable(l2e_get_mfn(ol2e));
                     }
-                    free_xen_pagetable_new(l2t_mfn);
+                    free_xen_pagetable(l2t_mfn);
                 }
             }
 
@@ -5072,14 +5072,14 @@ int map_pages_to_xen(
                 goto end_of_loop;
             }
 
-            mfn = alloc_xen_pagetable_new();
+            mfn = alloc_xen_pagetable();
             if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
-            l2t = map_xen_pagetable_new(mfn);
+            l2t = map_xen_pagetable(mfn);
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5096,7 +5096,7 @@ int map_pages_to_xen(
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
                 l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 l2t = NULL;
             }
             if ( locking )
@@ -5104,8 +5104,8 @@ int map_pages_to_xen(
             flush_area(virt, flush_flags);
             if ( l2t )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
-                free_xen_pagetable_new(mfn);
+                UNMAP_XEN_PAGETABLE(l2t);
+                free_xen_pagetable(mfn);
             }
         }
 
@@ -5140,12 +5140,12 @@ int map_pages_to_xen(
                     l1_pgentry_t *l1t;
                     mfn_t l1t_mfn = l2e_get_mfn(ol2e);
 
-                    l1t = map_xen_pagetable_new(l1t_mfn);
+                    l1t = map_xen_pagetable(l1t_mfn);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
-                    free_xen_pagetable_new(l1t_mfn);
+                    UNMAP_XEN_PAGETABLE(l1t);
+                    free_xen_pagetable(l1t_mfn);
                 }
             }
 
@@ -5166,7 +5166,7 @@ int map_pages_to_xen(
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
-                UNMAP_XEN_PAGETABLE_NEW(pl1e);
+                UNMAP_XEN_PAGETABLE(pl1e);
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5193,14 +5193,14 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                mfn = alloc_xen_pagetable_new();
+                mfn = alloc_xen_pagetable();
                 if ( mfn_eq(mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
-                l1t = map_xen_pagetable_new(mfn);
+                l1t = map_xen_pagetable(mfn);
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5217,7 +5217,7 @@ int map_pages_to_xen(
                 {
                     l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
                                                         __PAGE_HYPERVISOR));
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    UNMAP_XEN_PAGETABLE(l1t);
                     l1t = NULL;
                 }
                 if ( locking )
@@ -5225,16 +5225,16 @@ int map_pages_to_xen(
                 flush_area(virt, flush_flags);
                 if ( l1t )
                 {
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
-                    free_xen_pagetable_new(mfn);
+                    UNMAP_XEN_PAGETABLE(l1t);
+                    free_xen_pagetable(mfn);
                 }
             }
 
-            pl1e  = map_xen_pagetable_new(l2e_get_mfn((*pl2e)));
+            pl1e  = map_xen_pagetable(l2e_get_mfn((*pl2e)));
             pl1e += l1_table_offset(virt);
             ol1e  = *pl1e;
             l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
-            UNMAP_XEN_PAGETABLE_NEW(pl1e);
+            UNMAP_XEN_PAGETABLE(pl1e);
             if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
             {
                 unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0);
@@ -5280,14 +5280,14 @@ int map_pages_to_xen(
                 }
 
                 l1t_mfn = l2e_get_mfn(ol2e);
-                l1t = map_xen_pagetable_new(l1t_mfn);
+                l1t = map_xen_pagetable(l1t_mfn);
 
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
-                UNMAP_XEN_PAGETABLE_NEW(l1t);
+                UNMAP_XEN_PAGETABLE(l1t);
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
                     l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
@@ -5297,7 +5297,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable_new(l1t_mfn);
+                    free_xen_pagetable(l1t_mfn);
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5332,7 +5332,7 @@ int map_pages_to_xen(
             }
 
             l2t_mfn = l3e_get_mfn(ol3e);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
 
             base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
@@ -5341,7 +5341,7 @@ int map_pages_to_xen(
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
                      (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
@@ -5351,15 +5351,15 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable_new(l2t_mfn);
+                free_xen_pagetable(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
-        UNMAP_XEN_PAGETABLE_NEW(pl1e);
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(pl1e);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
 #undef flush_flags
@@ -5367,9 +5367,9 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return rc;
 }
 
@@ -5440,14 +5440,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            mfn = alloc_xen_pagetable_new();
+            mfn = alloc_xen_pagetable();
             if ( mfn_eq(mfn, INVALID_MFN) )
             {
                 ASSERT(rc == -ENOMEM);
                 goto out;
             }
 
-            l2t = map_xen_pagetable_new(mfn);
+            l2t = map_xen_pagetable(mfn);
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5460,15 +5460,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
                 l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             if ( l2t )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
-                free_xen_pagetable_new(mfn);
+                UNMAP_XEN_PAGETABLE(l2t);
+                free_xen_pagetable(mfn);
             }
         }
 
@@ -5476,7 +5476,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          * The L3 entry has been verified to be present, and we've dealt with
          * 1G pages as well, so the L2 table cannot require allocation.
          */
-        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         pl2e += l2_table_offset(v);
 
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
@@ -5508,14 +5508,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 mfn_t mfn;
 
                 /* PSE: shatter the superpage and try again. */
-                mfn = alloc_xen_pagetable_new();
+                mfn = alloc_xen_pagetable();
                 if ( mfn_eq(mfn, INVALID_MFN) )
                 {
                     ASSERT(rc == -ENOMEM);
                     goto out;
                 }
 
-                l1t = map_xen_pagetable_new(mfn);
+                l1t = map_xen_pagetable(mfn);
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5528,15 +5528,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 {
                     l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
                                                         __PAGE_HYPERVISOR));
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
+                    UNMAP_XEN_PAGETABLE(l1t);
                     l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 if ( l1t )
                 {
-                    UNMAP_XEN_PAGETABLE_NEW(l1t);
-                    free_xen_pagetable_new(mfn);
+                    UNMAP_XEN_PAGETABLE(l1t);
+                    free_xen_pagetable(mfn);
                 }
             }
         }
@@ -5550,7 +5550,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
              * present, and we've dealt with 2M pages as well, so the L1 table
              * cannot require allocation.
              */
-            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
             pl1e += l1_table_offset(v);
 
             /* Confirm the caller isn't trying to create new mappings. */
@@ -5562,7 +5562,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
-            UNMAP_XEN_PAGETABLE_NEW(pl1e);
+            UNMAP_XEN_PAGETABLE(pl1e);
             v += PAGE_SIZE;
 
             /*
@@ -5593,11 +5593,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             l1t_mfn = l2e_get_mfn(*pl2e);
-            l1t = map_xen_pagetable_new(l1t_mfn);
+            l1t = map_xen_pagetable(l1t_mfn);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                 if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            UNMAP_XEN_PAGETABLE(l1t);
             if ( i == L1_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L2E and free the L1 page. */
@@ -5605,7 +5605,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l1t_mfn);
+                free_xen_pagetable(l1t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5639,11 +5639,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             mfn_t l2t_mfn;
 
             l2t_mfn = l3e_get_mfn(*pl3e);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 if ( l2e_get_intpte(l2t[i]) != 0 )
                     break;
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L3E and free the L2 page. */
@@ -5651,14 +5651,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l2t_mfn);
+                free_xen_pagetable(l2t_mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
     end_of_loop:
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
@@ -5667,8 +5667,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return rc;
 }
 
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index a07d2138a2..4d46d773c1 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -55,11 +55,11 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
     l1_pgentry_t *pl1e, *l1t;
 
     pl4e = l4start + l4_table_offset(vpt_start);
-    l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    l3t = map_xen_pagetable(l4e_get_mfn(*pl4e));
     pl3e = l3t + l3_table_offset(vpt_start);
-    l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    l2t = map_xen_pagetable(l3e_get_mfn(*pl3e));
     pl2e = l2t + l2_table_offset(vpt_start);
-    l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
     pl1e = l1t + l1_table_offset(vpt_start);
     for ( count = 0; count < nr_pt_pages; count++ )
     {
@@ -86,22 +86,22 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
             {
                 if ( !((unsigned long)++pl3e & (PAGE_SIZE - 1)) )
                 {
-                    UNMAP_XEN_PAGETABLE_NEW(l3t);
-                    l3t = map_xen_pagetable_new(l4e_get_mfn(*++pl4e));
+                    UNMAP_XEN_PAGETABLE(l3t);
+                    l3t = map_xen_pagetable(l4e_get_mfn(*++pl4e));
                     pl3e = l3t;
                 }
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
-                l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+                UNMAP_XEN_PAGETABLE(l2t);
+                l2t = map_xen_pagetable(l3e_get_mfn(*pl3e));
                 pl2e = l2t;
             }
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
-            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            UNMAP_XEN_PAGETABLE(l1t);
+            l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
             pl1e = l1t;
         }
     }
-    UNMAP_XEN_PAGETABLE_NEW(l1t);
-    UNMAP_XEN_PAGETABLE_NEW(l2t);
-    UNMAP_XEN_PAGETABLE_NEW(l3t);
+    UNMAP_XEN_PAGETABLE(l1t);
+    UNMAP_XEN_PAGETABLE(l2t);
+    UNMAP_XEN_PAGETABLE(l3t);
 }
 
 static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
@@ -706,9 +706,9 @@ int __init dom0_construct_pv(struct domain *d,
                 l3e_get_page(*l3tab)->u.inuse.type_info |= PGT_pae_xen_l2;
         }
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3start[3]));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3start[3]));
         init_xen_pae_l2_slots(l2t, d);
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
     }
 
     /* Pages that are part of page tables must be read only. */
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index cf638fa965..09c7766ec5 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -171,11 +171,11 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
 
-    pl3e = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+    pl3e = map_xen_pagetable(l4e_get_mfn(*pl4e));
     pl3e += l3_table_offset(va);
-    pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+    pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
     pl2e += l2_table_offset(va);
-    pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+    pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
     pl1e += l1_table_offset(va);
 
     put_page_and_type(mfn_to_page(l1e_get_mfn(*pl1e)));
@@ -183,9 +183,9 @@ static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
     *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
                                                       : COMPAT_L1_PROT));
 
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
 }
 
 static void evtchn_reserve(struct domain *d, unsigned int port)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 7b6420f95a..9339e285ed 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1085,7 +1085,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                     continue;
                 *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) +
                                         xen_phys_start);
-                pl3e = l3t = map_xen_pagetable_new(l4e_get_mfn(*pl4e));
+                pl3e = l3t = map_xen_pagetable(l4e_get_mfn(*pl4e));
                 for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
                 {
                     l2_pgentry_t *l2t;
@@ -1097,7 +1097,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         continue;
                     *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) +
                                             xen_phys_start);
-                    pl2e = l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+                    pl2e = l2t = map_xen_pagetable(l3e_get_mfn(*pl3e));
                     for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ )
                     {
                         /* Not present, PSE, or already relocated? */
@@ -1108,9 +1108,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         *pl2e = l2e_from_intpte(l2e_get_intpte(*pl2e) +
                                                 xen_phys_start);
                     }
-                    UNMAP_XEN_PAGETABLE_NEW(l2t);
+                    UNMAP_XEN_PAGETABLE(l2t);
                 }
-                UNMAP_XEN_PAGETABLE_NEW(l3t);
+                UNMAP_XEN_PAGETABLE(l3t);
             }
 
             /* The only data mappings to be relocated are in the Xen area. */
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 3ac1924391..e245b42cf7 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -695,7 +695,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         goto out;
     }
 
-    pl3e = map_xen_pagetable_new(
+    pl3e = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[root_table_offset(linear)]));
     pl3e += l3_table_offset(linear);
 
@@ -709,7 +709,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     }
     else
     {
-        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         pl2e += l2_table_offset(linear);
         flags = l2e_get_flags(*pl2e);
         ASSERT(flags & _PAGE_PRESENT);
@@ -721,7 +721,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
         else
         {
-            pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
             pl1e += l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
@@ -733,13 +733,13 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
 
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
-        mfn_t l3t_mfn = alloc_xen_pagetable_new();
+        mfn_t l3t_mfn = alloc_xen_pagetable();
 
         if ( mfn_eq(l3t_mfn, INVALID_MFN) )
         {
@@ -747,20 +747,20 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             goto out;
         }
 
-        pl3e = map_xen_pagetable_new(l3t_mfn);
+        pl3e = map_xen_pagetable(l3t_mfn);
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
                   l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR));
     }
     else
-        pl3e = map_xen_pagetable_new(
+        pl3e = map_xen_pagetable(
             l4e_get_mfn(rpt[root_table_offset(linear)]));
 
     pl3e += l3_table_offset(linear);
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
-        mfn_t l2t_mfn = alloc_xen_pagetable_new();
+        mfn_t l2t_mfn = alloc_xen_pagetable();
 
         if ( mfn_eq(l2t_mfn, INVALID_MFN) )
         {
@@ -768,21 +768,21 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             goto out;
         }
 
-        pl2e = map_xen_pagetable_new(l2t_mfn);
+        pl2e = map_xen_pagetable(l2t_mfn);
         clear_page(pl2e);
         l3e_write(pl3e, l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE));
-        pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+        pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
     }
 
     pl2e += l2_table_offset(linear);
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
-        mfn_t l1t_mfn = alloc_xen_pagetable_new();
+        mfn_t l1t_mfn = alloc_xen_pagetable();
 
         if ( mfn_eq(l1t_mfn, INVALID_MFN) )
         {
@@ -790,14 +790,14 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             goto out;
         }
 
-        pl1e = map_xen_pagetable_new(l1t_mfn);
+        pl1e = map_xen_pagetable(l1t_mfn);
         clear_page(pl1e);
         l2e_write(pl2e, l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE));
-        pl1e = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+        pl1e = map_xen_pagetable(l2e_get_mfn(*pl2e));
     }
 
     pl1e += l1_table_offset(linear);
@@ -813,9 +813,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
-    UNMAP_XEN_PAGETABLE_NEW(pl1e);
-    UNMAP_XEN_PAGETABLE_NEW(pl2e);
-    UNMAP_XEN_PAGETABLE_NEW(pl3e);
+    UNMAP_XEN_PAGETABLE(pl1e);
+    UNMAP_XEN_PAGETABLE(pl2e);
+    UNMAP_XEN_PAGETABLE(pl3e);
     return rc;
 }
 
@@ -838,14 +838,14 @@ static int setup_cpu_root_pgt(unsigned int cpu)
         goto out;
     }
 
-    rpt_mfn = alloc_xen_pagetable_new();
+    rpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(rpt_mfn, INVALID_MFN) )
     {
         rc = -ENOMEM;
         goto out;
     }
 
-    rpt = map_xen_pagetable_new(rpt_mfn);
+    rpt = map_xen_pagetable(rpt_mfn);
     clear_page(rpt);
     per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
 
@@ -884,7 +884,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
         rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(rpt);
+    UNMAP_XEN_PAGETABLE(rpt);
     return rc;
 }
 
@@ -900,7 +900,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
 
     per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
 
-    rpt = map_xen_pagetable_new(rpt_mfn);
+    rpt = map_xen_pagetable(rpt_mfn);
 
     for ( r = root_table_offset(DIRECTMAP_VIRT_START);
           r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
@@ -913,7 +913,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
             continue;
 
         l3t_mfn = l4e_get_mfn(rpt[r]);
-        l3t = map_xen_pagetable_new(l3t_mfn);
+        l3t = map_xen_pagetable(l3t_mfn);
 
         for ( i3 = 0; i3 < L3_PAGETABLE_ENTRIES; ++i3 )
         {
@@ -926,7 +926,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
 
             ASSERT(!(l3e_get_flags(l3t[i3]) & _PAGE_PSE));
             l2t_mfn = l3e_get_mfn(l3t[i3]);
-            l2t = map_xen_pagetable_new(l2t_mfn);
+            l2t = map_xen_pagetable(l2t_mfn);
 
             for ( i2 = 0; i2 < L2_PAGETABLE_ENTRIES; ++i2 )
             {
@@ -934,34 +934,34 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
+                free_xen_pagetable(l2e_get_mfn(l2t[i2]));
             }
 
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
-            free_xen_pagetable_new(l2t_mfn);
+            UNMAP_XEN_PAGETABLE(l2t);
+            free_xen_pagetable(l2t_mfn);
         }
 
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
-        free_xen_pagetable_new(l3t_mfn);
+        UNMAP_XEN_PAGETABLE(l3t);
+        free_xen_pagetable(l3t_mfn);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(rpt);
-    free_xen_pagetable_new(rpt_mfn);
+    UNMAP_XEN_PAGETABLE(rpt);
+    free_xen_pagetable(rpt_mfn);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
     {
-        l3_pgentry_t *l3t = map_xen_pagetable_new(l4e_get_mfn(common_pgt));
-        l2_pgentry_t *l2t = map_xen_pagetable_new(
+        l3_pgentry_t *l3t = map_xen_pagetable(l4e_get_mfn(common_pgt));
+        l2_pgentry_t *l2t = map_xen_pagetable(
             l3e_get_mfn(l3t[l3_table_offset(stub_linear)]));
-        l1_pgentry_t *l1t = map_xen_pagetable_new(
+        l1_pgentry_t *l1t = map_xen_pagetable(
             l2e_get_mfn(l2t[l2_table_offset(stub_linear)]));
 
         l1t[l1_table_offset(stub_linear)] = l1e_empty();
 
-        UNMAP_XEN_PAGETABLE_NEW(l1t);
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l1t);
+        UNMAP_XEN_PAGETABLE(l2t);
+        UNMAP_XEN_PAGETABLE(l3t);
     }
 }
 
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cac06b782d..60cfc962c3 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -134,7 +134,7 @@ static int m2p_mapped(unsigned long spfn)
     int rc = M2P_NO_MAPPED;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
@@ -150,15 +150,15 @@ static int m2p_mapped(unsigned long spfn)
             rc = M2P_NO_MAPPED;
             goto out;
     }
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
     if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
         rc = M2P_2M_MAPPED;
 
  out:
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return rc;
 }
 
@@ -176,10 +176,10 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
 
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
@@ -187,9 +187,9 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
         {
             n = L1_PAGETABLE_ENTRIES;
 
-            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2t = map_xen_pagetable(l3e_get_mfn(l3e));
             l2e = l2t[l2_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
 
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
@@ -211,17 +211,17 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
 
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
@@ -252,12 +252,12 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]));
 
     for ( i = smap; i < emap; )
@@ -280,8 +280,8 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
         i += 1UL << (L2_PAGETABLE_SHIFT - 2);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     return;
 }
@@ -292,7 +292,7 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]));
 
     /*
@@ -315,13 +315,13 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = map_xen_pagetable_new(
+        l2_ro_mpt = map_xen_pagetable(
             l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
         if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                     (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
-            UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+            UNMAP_XEN_PAGETABLE(l2_ro_mpt);
             continue;
         }
 
@@ -332,17 +332,17 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
 
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2t = map_xen_pagetable_new(
+            l2t = map_xen_pagetable(
                 l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
             l2e_write(&l2t[l2_table_offset(va)], l2e_empty());
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
         }
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
-        UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+        UNMAP_XEN_PAGETABLE(l2_ro_mpt);
     }
 
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     destroy_compat_m2p_mapping(info);
 
@@ -382,12 +382,12 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 
     va = HIRO_COMPAT_MPT_VIRT_START +
          smap * sizeof(*compat_machine_to_phys_mapping);
-    l3_ro_mpt = map_xen_pagetable_new(
+    l3_ro_mpt = map_xen_pagetable(
         l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
 
     ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT);
 
-    l2_ro_mpt = map_xen_pagetable_new(
+    l2_ro_mpt = map_xen_pagetable(
         l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
 
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
@@ -427,8 +427,8 @@ static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 #undef CNT
 #undef MFN
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return err;
 }
 
@@ -449,7 +449,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             & _PAGE_PRESENT);
     l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
                                         RO_MPT_VIRT_START)]);
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -505,23 +505,23 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
             if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
               _PAGE_PRESENT )
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
+                UNMAP_XEN_PAGETABLE(l2_ro_mpt);
                 l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
-                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
                 ASSERT(l2_ro_mpt);
                 pl2e = l2_ro_mpt + l2_table_offset(va);
             }
             else
             {
-                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-                l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+                UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+                l2_ro_mpt_mfn = alloc_xen_pagetable();
                 if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
-                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+                l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
                 clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                           l3e_from_mfn(l2_ro_mpt_mfn,
@@ -541,8 +541,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ret = setup_compat_m2p_table(info);
 error:
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
     return ret;
 }
 
@@ -569,23 +569,23 @@ void __init paging_init(void)
             l3_pgentry_t *pl3t;
             mfn_t mfn;
 
-            mfn = alloc_xen_pagetable_new();
+            mfn = alloc_xen_pagetable();
             if ( mfn_eq(mfn, INVALID_MFN) )
                 goto nomem;
 
-            pl3t = map_xen_pagetable_new(mfn);
+            pl3t = map_xen_pagetable(mfn);
             clear_page(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
                       l4e_from_mfn(mfn, __PAGE_HYPERVISOR_RW));
-            UNMAP_XEN_PAGETABLE_NEW(pl3t);
+            UNMAP_XEN_PAGETABLE(pl3t);
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    l3_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l3_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l3_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
     clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
               l4e_from_mfn(l3_ro_mpt_mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
@@ -674,13 +674,13 @@ void __init paging_init(void)
              * Unmap l2_ro_mpt, which could've been mapped in previous
              * iteration.
              */
-            unmap_xen_pagetable_new(l2_ro_mpt);
+            unmap_xen_pagetable(l2_ro_mpt);
 
-            l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+            l2_ro_mpt_mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
                 goto nomem;
 
-            l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+            l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
             clear_page(l2_ro_mpt);
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_mfn(l2_ro_mpt_mfn,
@@ -696,8 +696,8 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
     BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
@@ -705,12 +705,12 @@ void __init paging_init(void)
 
     l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
                                         HIRO_COMPAT_MPT_VIRT_START)]);
-    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
+    l3_ro_mpt = map_xen_pagetable(l3_ro_mpt_mfn);
 
-    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l2_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
-    l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
+    l2_ro_mpt = map_xen_pagetable(l2_ro_mpt_mfn);
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
     l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
@@ -756,8 +756,8 @@ void __init paging_init(void)
 #undef CNT
 #undef MFN
 
-    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
-    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l2_ro_mpt);
+    UNMAP_XEN_PAGETABLE(l3_ro_mpt);
 
     machine_to_phys_mapping_valid = 1;
 
@@ -815,10 +815,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
     while (sva < eva)
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(sva)]));
         l3e = l3t[l3_table_offset(sva)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) ||
              (l3e_get_flags(l3e) & _PAGE_PSE) )
         {
@@ -827,9 +827,9 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(sva)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
         ASSERT(l2e_get_flags(l2e) & _PAGE_PRESENT);
 
         if ( (l2e_get_flags(l2e) & (_PAGE_PRESENT | _PAGE_PSE)) ==
@@ -847,10 +847,10 @@ static void cleanup_frame_table(struct mem_hotadd_info *info)
 
 #ifndef NDEBUG
         {
-            l1_pgentry_t *l1t = map_xen_pagetable_new(l2e_get_mfn(l2e));
+            l1_pgentry_t *l1t = map_xen_pagetable(l2e_get_mfn(l2e));
             ASSERT(l1e_get_flags(l1t[l1_table_offset(sva)]) &
                    _PAGE_PRESENT);
-            UNMAP_XEN_PAGETABLE_NEW(l1t);
+            UNMAP_XEN_PAGETABLE(l1t);
         }
 #endif
          sva = (sva & ~((1UL << PAGE_SHIFT) - 1)) +
@@ -941,10 +941,10 @@ void __init subarch_init_memory(void)
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
 
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
@@ -952,9 +952,9 @@ void __init subarch_init_memory(void)
         {
             n = L1_PAGETABLE_ENTRIES;
 
-            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+            l2t = map_xen_pagetable(l3e_get_mfn(l3e));
             l2e = l2t[l2_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l2t);
+            UNMAP_XEN_PAGETABLE(l2t);
 
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
@@ -974,17 +974,17 @@ void __init subarch_init_memory(void)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3t = map_xen_pagetable_new(
+        l3t = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
         l3e = l3t[l3_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l3t);
+        UNMAP_XEN_PAGETABLE(l3t);
 
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
 
-        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+        l2t = map_xen_pagetable(l3e_get_mfn(l3e));
         l2e = l2t[l2_table_offset(v)];
-        UNMAP_XEN_PAGETABLE_NEW(l2t);
+        UNMAP_XEN_PAGETABLE(l2t);
 
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
@@ -1035,18 +1035,18 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
               (v < (unsigned long)(machine_to_phys_mapping + max_page));
               i++, v += 1UL << L2_PAGETABLE_SHIFT )
         {
-            l3t = map_xen_pagetable_new(
+            l3t = map_xen_pagetable(
                 l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
             l3e = l3t[l3_table_offset(v)];
-            UNMAP_XEN_PAGETABLE_NEW(l3t);
+            UNMAP_XEN_PAGETABLE(l3t);
 
             if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
                 mfn = last_mfn;
             else if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
             {
-                l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
+                l2t = map_xen_pagetable(l3e_get_mfn(l3e));
                 l2e = l2t[l2_table_offset(v)];
-                UNMAP_XEN_PAGETABLE_NEW(l2t);
+                UNMAP_XEN_PAGETABLE(l2t);
                 if ( l2e_get_flags(l2e) & _PAGE_PRESENT )
                     mfn = l2e_get_pfn(l2e);
                 else
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index f55d6a6d76..d47067c998 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1443,20 +1443,20 @@ static __init void copy_mapping(l4_pgentry_t *l4,
         {
             mfn_t l3t_mfn;
 
-            l3t_mfn = alloc_xen_pagetable_new();
+            l3t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
-            l3dst = map_xen_pagetable_new(l3t_mfn);
+            l3dst = map_xen_pagetable(l3t_mfn);
             clear_page(l3dst);
             l4[l4_table_offset(mfn << PAGE_SHIFT)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            l3dst = map_xen_pagetable_new(l4e_get_mfn(l4e));
-        l3src = map_xen_pagetable_new(
+            l3dst = map_xen_pagetable(l4e_get_mfn(l4e));
+        l3src = map_xen_pagetable(
             l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
-        UNMAP_XEN_PAGETABLE_NEW(l3src);
-        UNMAP_XEN_PAGETABLE_NEW(l3dst);
+        UNMAP_XEN_PAGETABLE(l3src);
+        UNMAP_XEN_PAGETABLE(l3dst);
     }
 }
 
@@ -1604,9 +1604,9 @@ void __init efi_init_memory(void)
                                  mdesc_ver, efi_memmap);
 #else
     /* Set up 1:1 page tables to do runtime calls in "physical" mode. */
-    efi_l4_mfn = alloc_xen_pagetable_new();
+    efi_l4_mfn = alloc_xen_pagetable();
     BUG_ON(mfn_eq(efi_l4_mfn, INVALID_MFN));
-    efi_l4_pgtable = map_xen_pagetable_new(efi_l4_mfn);
+    efi_l4_pgtable = map_xen_pagetable(efi_l4_mfn);
     clear_page(efi_l4_pgtable);
 
     copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
@@ -1641,31 +1641,31 @@ void __init efi_init_memory(void)
         {
             mfn_t l3t_mfn;
 
-            l3t_mfn = alloc_xen_pagetable_new();
+            l3t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l3t_mfn, INVALID_MFN));
-            pl3e = map_xen_pagetable_new(l3t_mfn);
+            pl3e = map_xen_pagetable(l3t_mfn);
             clear_page(pl3e);
             efi_l4_pgtable[l4_table_offset(addr)] =
                 l4e_from_mfn(l3t_mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = map_xen_pagetable_new(l4e_get_mfn(l4e));
+            pl3e = map_xen_pagetable(l4e_get_mfn(l4e));
         pl3e += l3_table_offset(addr);
 
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
             mfn_t l2t_mfn;
 
-            l2t_mfn = alloc_xen_pagetable_new();
+            l2t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l2t_mfn, INVALID_MFN));
-            pl2e = map_xen_pagetable_new(l2t_mfn);
+            pl2e = map_xen_pagetable(l2t_mfn);
             clear_page(pl2e);
             *pl3e = l3e_from_mfn(l2t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = map_xen_pagetable_new(l3e_get_mfn(*pl3e));
+            pl2e = map_xen_pagetable(l3e_get_mfn(*pl3e));
         }
         pl2e += l2_table_offset(addr);
 
@@ -1673,16 +1673,16 @@ void __init efi_init_memory(void)
         {
             mfn_t l1t_mfn;
 
-            l1t_mfn = alloc_xen_pagetable_new();
+            l1t_mfn = alloc_xen_pagetable();
             BUG_ON(mfn_eq(l1t_mfn, INVALID_MFN));
-            l1t = map_xen_pagetable_new(l1t_mfn);
+            l1t = map_xen_pagetable(l1t_mfn);
             clear_page(l1t);
             *pl2e = l2e_from_mfn(l1t_mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));
+            l1t = map_xen_pagetable(l2e_get_mfn(*pl2e));
         }
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
@@ -1695,9 +1695,9 @@ void __init efi_init_memory(void)
             xfree(extra);
         }
 
-        UNMAP_XEN_PAGETABLE_NEW(l1t);
-        UNMAP_XEN_PAGETABLE_NEW(pl2e);
-        UNMAP_XEN_PAGETABLE_NEW(pl3e);
+        UNMAP_XEN_PAGETABLE(l1t);
+        UNMAP_XEN_PAGETABLE(pl2e);
+        UNMAP_XEN_PAGETABLE(pl3e);
     }
 
     /* Insert Xen mappings. */
@@ -1706,7 +1706,7 @@ void __init efi_init_memory(void)
         efi_l4_pgtable[i] = idle_pg_table[i];
 #endif
 
-    UNMAP_XEN_PAGETABLE_NEW(efi_l4_pgtable);
+    UNMAP_XEN_PAGETABLE(efi_l4_pgtable);
 }
 #endif
 
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7fae4bb311..ec5f5573a9 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -642,15 +642,15 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int *flags);
 
 /* Allocator functions for Xen pagetables. */
-mfn_t alloc_xen_pagetable_new(void);
-void *map_xen_pagetable_new(mfn_t mfn);
-void unmap_xen_pagetable_new(void *v);
-void free_xen_pagetable_new(mfn_t mfn);
-
-#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
-    do {                                \
-        unmap_xen_pagetable_new((ptr)); \
-        (ptr) = NULL;                   \
+mfn_t alloc_xen_pagetable(void);
+void *map_xen_pagetable(mfn_t mfn);
+void unmap_xen_pagetable(void *v);
+void free_xen_pagetable(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE(ptr)    \
+    do {                            \
+        unmap_xen_pagetable((ptr)); \
+        (ptr) = NULL;               \
     } while (0)
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-- 
2.11.0



* Re: [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen
  2019-02-07 16:44 ` [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen Wei Liu
@ 2019-02-08 17:58   ` Wei Liu
  2019-03-18 21:14     ` Nuernberger, Stefan
  0 siblings, 1 reply; 119+ messages in thread
From: Wei Liu @ 2019-02-08 17:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

On Thu, Feb 07, 2019 at 04:44:17PM +0000, Wei Liu wrote:
> Page tables allocated in that function should be mapped and unmapped
> now.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/mm.c | 31 ++++++++++++++++++++++---------
>  1 file changed, 22 insertions(+), 9 deletions(-)
> 

Gitlab CI has discovered ...

> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 356d561a06..c4cb6fbb60 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5058,6 +5058,7 @@ int map_pages_to_xen(
>              unsigned int flush_flags =
>                  FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
>              l2_pgentry_t *l2t;
> +            mfn_t mfn;

this and ...

>          pl2e = virt_to_xen_l2e(virt);
> @@ -5171,6 +5177,7 @@ int map_pages_to_xen(
>                  unsigned int flush_flags =
>                      FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
>                  l1_pgentry_t *l1t;
> +                mfn_t mfn;

... this shadowed the mfn variable from the outer scope. I have fixed these
two issues in my local branch by turning them into l2t_mfn and l1t_mfn
respectively.
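
For illustration only, here is a minimal standalone C example of the
kind of shadowing being fixed, and the rename that avoids it (generic
code, not the actual map_pages_to_xen() hunks; the l2t_mfn name below
merely mirrors the fix described above):

    #include <stdio.h>

    static void walk(unsigned long mfn)         /* outer 'mfn' (parameter) */
    {
        {
            unsigned long mfn = 0;              /* shadows the parameter */
            printf("inner mfn = %lu\n", mfn);   /* parameter unreachable here */
        }

        {
            unsigned long l2t_mfn = 0;          /* renamed: no shadowing */
            printf("l2t_mfn = %lu, mfn = %lu\n", l2t_mfn, mfn);
        }
    }

    int main(void)
    {
        walk(42);
        return 0;
    }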

Interestingly, my local build environment didn't complain and Xen worked
fine (presumably because this particular path was never hit).

Wei.


* Re: [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe
  2019-02-07 16:44 ` [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Wei Liu
@ 2019-03-15 14:38   ` Jan Beulich
  2019-04-09 12:21       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Jan Beulich @ 2019-03-15 14:38 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4752,13 +4752,13 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
>  
>          if ( !pl3e )
>              return NULL;
> -        clear_page(pl3e);
>          if ( locking )
>              spin_lock(&map_pgdir_lock);
>          if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
>          {
>              l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
>  
> +            clear_page(pl3e);

Is this really an optimization? You trade avoiding the clearing of the
page in the hopefully infrequent case of a race for holding the spin
lock for quite a bit longer.
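
To make the trade-off concrete, here is a simplified standalone model
(a pthread mutex and memset() stand in for map_pgdir_lock and
clear_page(); this is not the actual Xen code). Variant A clears the
new page before taking the lock, so the lock is held only briefly but
a losing racer has cleared a page for nothing; variant B, which is
what the patch does, clears only when the entry is still empty, at the
price of holding the lock across a full page clear:

    #include <pthread.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    static pthread_mutex_t map_pgdir_lock = PTHREAD_MUTEX_INITIALIZER;
    static void *l4_entry;                      /* NULL == not present */

    static void install_clear_first(void *newpg)
    {
        memset(newpg, 0, PAGE_SIZE);            /* may turn out to be wasted */
        pthread_mutex_lock(&map_pgdir_lock);
        if ( !l4_entry )
            l4_entry = newpg;
        pthread_mutex_unlock(&map_pgdir_lock);
    }

    static void install_clear_under_lock(void *newpg)
    {
        pthread_mutex_lock(&map_pgdir_lock);
        if ( !l4_entry )
        {
            memset(newpg, 0, PAGE_SIZE);        /* lengthens the critical section */
            l4_entry = newpg;
        }
        pthread_mutex_unlock(&map_pgdir_lock);
    }

    int main(void)
    {
        static char page_a[PAGE_SIZE], page_b[PAGE_SIZE];

        install_clear_first(page_a);
        install_clear_under_lock(page_b);       /* loses the "race" here */
        return 0;
    }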

Jan




* Re: [PATCH RFC 02/55] x86: move some xen mm function declarations
  2019-02-07 16:44 ` [PATCH RFC 02/55] x86: move some xen mm function declarations Wei Liu
@ 2019-03-15 14:42   ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-03-15 14:42 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> They were put into page.h but mm.h is more appropriate.

I'm not fully convinced of this, but ...

> The real reason is that I will be adding some new functions which
> takes mfn_t. It turns out it is a bit difficult to do in page.h.

... this is good enough a reason.

> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
  2019-02-07 16:44 ` [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Wei Liu
@ 2019-03-15 15:36   ` Jan Beulich
  2019-04-09 12:22       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Jan Beulich @ 2019-03-15 15:36 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> The pl2e and pl1e variables are heavily (ab)used in that function. It
> is fine at the moment because all page tables are always mapped so
> there is no need to track the life time of each variable.
> 
> We will soon have the requirement to map and unmap page tables. We
> need to track the life time of each variable to avoid leakage.
> 
> Introduce some l{1,2}t variables with limited scope so that we can
> track life time of pointers to xen page tables more easily.

But you retain some uses of the old variables, and to be honest it's
not really clear to me by what criteria (and having multiple instances
of a variable name in a single function isn't necessarily less
confusing). I think we either stick to what's there (it doesn't look
too bad to me), or you switch to scope-restricted page table pointers
throughout the function, such that the function-scope symbols can go
away altogether.
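
To illustrate the fully scope-restricted variant being suggested, a
small self-contained sketch (map_table()/unmap_table() are made-up
stand-ins for the new map/unmap_xen_pagetable APIs, and the structure
is generic rather than the real map_pages_to_xen() code):

    #include <stdlib.h>

    typedef struct { unsigned long e[512]; } pagetable_t;

    /* Made-up stand-ins for map/unmap_xen_pagetable_new(). */
    static pagetable_t *map_table(unsigned long mfn)
    {
        (void)mfn;
        return calloc(1, sizeof(pagetable_t));
    }

    static void unmap_table(pagetable_t *t)
    {
        free(t);
    }

    static unsigned long read_entry(unsigned long mfn, unsigned int idx)
    {
        unsigned long val = 0;

        {   /* pointer lives only as long as the mapping is needed */
            pagetable_t *l2t = map_table(mfn);

            if ( l2t )
            {
                val = l2t->e[idx];
                unmap_table(l2t);
            }
        }   /* no function-scope l2t can leak past this point */

        return val;
    }

    int main(void)
    {
        return (int)read_entry(0, 0);
    }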

Jan




* Re: [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen
  2019-02-07 16:44 ` [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Wei Liu
@ 2019-03-15 15:40   ` Jan Beulich
  2019-04-09 12:22       ` [Xen-devel] " Wei Liu
  2019-03-18 21:14   ` Nuernberger, Stefan
  1 sibling, 1 reply; 119+ messages in thread
From: Jan Beulich @ 2019-03-15 15:40 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> We will soon need to clean up mappings whenever the out most loop is
> ended. Add a new label and turn relevant continue's into goto's.

To be honest, I was already on the verge of suggesting less use of
goto in the previous patch. This one definitely goes too far for my
taste - I can somehow live with gotos used for error handling, but
please not much more. Is there really no better way?
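
One conceivable alternative shape, purely illustrative (the helper
names below are made up and this is not what the series currently
does): hoist the body of the outer loop into a helper, so the
per-iteration cleanup sits at a single point after the call and no
label is needed for either the early-exit or the continue cases:

    #include <stdbool.h>
    #include <stddef.h>

    struct pt_maps { void *pl2e; void *pl1e; };

    /* Made-up helper: the real code would do one iteration's mapping work
     * and record any page-table mappings it takes in *m; this stub only
     * advances the cursor. */
    static bool map_one_chunk(struct pt_maps *m, unsigned long *virt,
                              unsigned long *nr_left)
    {
        (void)m;
        *virt += 4096;
        *nr_left -= 1;
        return true;                    /* false would mean "stop with error" */
    }

    static void unmap_chunk_maps(struct pt_maps *m)
    {
        /* stand-in for UNMAP_XEN_PAGETABLE_NEW() on each live pointer */
        m->pl2e = NULL;
        m->pl1e = NULL;
    }

    static int map_range(unsigned long virt, unsigned long nr)
    {
        while ( nr )
        {
            struct pt_maps m = { NULL, NULL };
            bool ok = map_one_chunk(&m, &virt, &nr);

            unmap_chunk_maps(&m);       /* single cleanup point, no label */
            if ( !ok )
                return -1;
        }
        return 0;
    }

    int main(void)
    {
        return map_range(0, 4);
    }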

Jan




* Re: [PATCH RFC 00/55] x86: use domheap page for xen page tables
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (54 preceding siblings ...)
  2019-02-07 16:44 ` [PATCH RFC 55/55] x86/mm: drop _new suffix from page table APIs Wei Liu
@ 2019-03-18 21:14 ` Nuernberger, Stefan
  2019-03-28 12:52 ` Nuernberger, Stefan
  56 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> This series switches xen page tables from xenheap page to domheap
> page.
> 
> This is required so that when we implement xenheap on top of vmap
> there won't
> be a loop.
> 
> It is done in roughly three steps:
> 
> 1. Introduce a new set of APIs, implement the old APIs on top of the
> new ones.
>    New APIs still use xenheap pages.
> 2. Switch each site which manipulate page tables to use the new APIs.
> 3. Switch new APIs to use domheap page.
> 
> You can find the series at:
> 
>   https://xenbits.xen.org/git-http/people/liuw/xen.git xen-pt-
> allocation-1
> 
> Wei.
> 

Thanks for starting this work, Wei. I tested and reviewed most of the
series and haven't found any issues with the functionality so far. See
my remarks on the individual patches. I am not finished reviewing all
of it, though; more reviewing to come over the next few days.

- Stefan






* Re: [PATCH RFC 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings
  2019-02-07 16:44 ` [PATCH RFC 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> The pl2e and pl1e variables are heavily (ab)used in that
> function.  It
> is fine at the moment because all page tables are always mapped so
> there is no need to track the life time of each variable.
> 
> We will soon have the requirement to map and unmap page tables. We
> need to track the life time of each variable to avoid leakage.
> 
> Introduce some l{1,2}t variables with limited scope so that we can
> track life time of pointers to xen page tables more easily.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Same remark as Jan's on the previous patch: some of the original
pl1e/pl2e usage is retained. I don't yet see how this scoping will
later help with map/unmap, but I'd also prefer that all usage be
scoped and the function-scope pl1e/pl2e removed.

- Stefan





* Re: [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path
  2019-02-07 16:44 ` [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:22       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> We will soon rewrite the function to handle dynamically mapping and
> unmapping of page tables.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/mm.c | 34 +++++++++++++++++++++++++++-------
>  1 file changed, 27 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 4147a71c5d..3ab222c8ea 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4887,9 +4887,11 @@ int map_pages_to_xen(
>      unsigned int flags)
>  {
>      bool locking = system_state > SYS_STATE_boot;
> +    l3_pgentry_t *pl3e, ol3e;

After limiting the scope of other variables in the previous patches,
you now widen the scope for this one. Is this so you can handle unmap
in a common exit/error path later?
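
For reference, a common-exit pattern of this kind is already visible
in clone_mapping() in the rename patch quoted earlier in this thread.
A rough sketch of how map_pages_to_xen()'s single exit might
eventually look once the later patches add the unmaps (a guess based
on that pattern, using the _NEW names still in use at this point in
the series; not a quote from the actual later patches):

        rc = 0;
     out:
        UNMAP_XEN_PAGETABLE_NEW(pl1e);
        UNMAP_XEN_PAGETABLE_NEW(pl2e);
        UNMAP_XEN_PAGETABLE_NEW(pl3e);
        return rc;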

>      l2_pgentry_t *pl2e, ol2e;
>      l1_pgentry_t *pl1e, ol1e;
>      unsigned int  i;
> +    int rc = -ENOMEM;
>  
>  #define flush_flags(oldf) do {                 \
>      unsigned int o_ = (oldf);                  \
> @@ -4907,10 +4909,13 @@ int map_pages_to_xen(
>  
>      while ( nr_mfns != 0 )
>      {
> -        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
> +        pl3e = virt_to_xen_l3e(virt);
>  
>          if ( !pl3e )
> -            return -ENOMEM;
> +        {
> +            ASSERT(rc == -ENOMEM);
> +            goto out;
> +        }
>          ol3e = *pl3e;
>  
>          if ( cpu_has_page1gb &&
> @@ -5002,7 +5007,10 @@ int map_pages_to_xen(
>  
>              l2t = alloc_xen_pagetable();
>              if ( l2t == NULL )
> -                return -ENOMEM;
> +            {
> +                ASSERT(rc == -ENOMEM);
> +                goto out;
> +            }
>  
>              for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
>                  l2e_write(l2t + i,
> @@ -5031,7 +5039,10 @@ int map_pages_to_xen(
>  
>          pl2e = virt_to_xen_l2e(virt);
>          if ( !pl2e )
> -            return -ENOMEM;
> +        {
> +            ASSERT(rc == -ENOMEM);
> +            goto out;
> +        }
>  
>          if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
>                 ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
> @@ -5076,7 +5087,10 @@ int map_pages_to_xen(
>              {
>                  pl1e = virt_to_xen_l1e(virt);
>                  if ( pl1e == NULL )
> -                    return -ENOMEM;
> +                {
> +                    ASSERT(rc == -ENOMEM);
> +                    goto out;
> +                }
>              }
>              else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
>              {
> @@ -5104,7 +5118,10 @@ int map_pages_to_xen(
>  
>                  l1t = alloc_xen_pagetable();
>                  if ( l1t == NULL )
> -                    return -ENOMEM;
> +                {
> +                    ASSERT(rc == -ENOMEM);
> +                    goto out;
> +                }
>  
>                  for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
>                      l1e_write(&l1t[i],
> @@ -5250,7 +5267,10 @@ int map_pages_to_xen(
>  
>  #undef flush_flags
>  
> -    return 0;
> +    rc = 0;
> +
> + out:
> +    return rc;
>  }
>  
>  int populate_pt_range(unsigned long virt, unsigned long nr_mfns)





* Re: [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen
  2019-02-07 16:44 ` [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Wei Liu
  2019-03-15 15:40   ` Jan Beulich
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  1 sibling, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> We will soon need to clean up mappings whenever the out most loop is
> ended. Add a new label and turn relevant continue's into goto's.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---

I'm not as opposed to this use of goto as Jan is, but I'd prefer to
at least call the label 'cleanup' or 'cleanup_mappings' to make it
clear what needs to happen.

Stefan





* Re: [PATCH RFC 14/55] x86/mm: rewrite xen_to_virt_l2e
  2019-02-07 16:44 ` [PATCH RFC 14/55] x86/mm: rewrite xen_to_virt_l2e Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:27       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Rewrite that function to use the new APIs. Modify its callers to
> unmap
> the pointer returned.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---

Nit: the commit title should be 'virt_to_xen_l2e', not 'xen_to_virt...'.






* Re: [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen
  2019-02-08 17:58   ` Wei Liu
@ 2019-03-18 21:14     ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Fri, 2019-02-08 at 17:58 +0000, Wei Liu wrote:
> On Thu, Feb 07, 2019 at 04:44:17PM +0000, Wei Liu wrote:
> > 
> > Page tables allocated in that function should be mapped and
> > unmapped
> > now.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/arch/x86/mm.c | 31 ++++++++++++++++++++++---------
> >  1 file changed, 22 insertions(+), 9 deletions(-)
> > 
> Gitlab CI has discovered ...
> 
> > 
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 356d561a06..c4cb6fbb60 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -5058,6 +5058,7 @@ int map_pages_to_xen(
> >              unsigned int flush_flags =
> >                  FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
> >              l2_pgentry_t *l2t;
> > +            mfn_t mfn;
> this and ...
> 
> > 
> >          pl2e = virt_to_xen_l2e(virt);
> > @@ -5171,6 +5177,7 @@ int map_pages_to_xen(
> >                  unsigned int flush_flags =
> >                      FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
> >                  l1_pgentry_t *l1t;
> > +                mfn_t mfn;
> ... this shadowed the mfn variable from outer scope. I have fixed
> these
> two issues in my local branch by turning them into l2t_mfn and
> l1t_mfn
> respectively.

I checked the fixup on your stash and that looks good.

- Stefan

> 
> Interestingly my local build environment didn't complain and Xen
> worked
> fine (presumably due to this particular path was never hit).
> 
> Wei.
> 





* Re: [PATCH RFC 17/55] x86/mm: drop lXe_to_lYe invocations in map_pages_to_xen
  2019-02-07 16:44 ` [PATCH RFC 17/55] x86/mm: drop lXe_to_lYe invocations " Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Map and unmap page tables where necessary.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/mm.c | 40 +++++++++++++++++++++++++++++-----------
>  1 file changed, 29 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index c4cb6fbb60..1ea2974c1f 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5014,8 +5014,10 @@ int map_pages_to_xen(
>                  else
>                  {
>                      l2_pgentry_t *l2t;
> +                    mfn_t l2t_mfn = l3e_get_mfn(ol3e);
> +
> +                    l2t = map_xen_pagetable_new(l2t_mfn);
>  
> -                    l2t = l3e_to_l2e(ol3e);
>                      for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
>                      {
>                          ol2e = l2t[i];
> @@ -5027,10 +5029,12 @@ int map_pages_to_xen(
>                          {
>                              unsigned int j;
>                              l1_pgentry_t *l1t;
> +                            mfn_t l1t_mfn = l2e_get_mfn(ol2e);
>  
> -                            l1t = l2e_to_l1e(ol2e);
> +                            l1t = map_xen_pagetable_new(l1t_mfn);
>                              for ( j = 0; j < L1_PAGETABLE_ENTRIES;
> j++ )
>                                  flush_flags(l1e_get_flags(l1t[j]));
> +                            UNMAP_XEN_PAGETABLE_NEW(l1t);
>                          }
>                      }
>                      flush_area(virt, flush_flags);
> @@ -5039,9 +5043,9 @@ int map_pages_to_xen(
>                          ol2e = l2t[i];
>                          if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT)
> &&
>                               !(l2e_get_flags(ol2e) & _PAGE_PSE) )
> -                            free_xen_pagetable(l2e_to_l1e(ol2e));
> +                            free_xen_pagetable_new(l2e_get_mfn(ol2e)
> );
>                      }
> -                    free_xen_pagetable(l2t);
> +                    free_xen_pagetable_new(l2t_mfn);
>                  }
>              }
>  
> @@ -5146,12 +5150,14 @@ int map_pages_to_xen(
>                  else
>                  {
>                      l1_pgentry_t *l1t;
> +                    mfn_t l1t_mfn = l2e_get_mfn(ol2e);
>  
> -                    l1t = l2e_to_l1e(ol2e);
> +                    l1t = map_xen_pagetable_new(l1t_mfn);
>                      for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
>                          flush_flags(l1e_get_flags(l1t[i]));
>                      flush_area(virt, flush_flags);
> -                    free_xen_pagetable(l1t);
> +                    UNMAP_XEN_PAGETABLE_NEW(l1t);
> +                    free_xen_pagetable_new(l1t_mfn);
>                  }
>              }
>  
> @@ -5165,12 +5171,14 @@ int map_pages_to_xen(
>              /* Normal page mapping. */
>              if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
>              {
> +                /* XXX This forces page table to be populated */
>                  pl1e = virt_to_xen_l1e(virt);
>                  if ( pl1e == NULL )
>                  {
>                      ASSERT(rc == -ENOMEM);
>                      goto out;
>                  }
> +                UNMAP_XEN_PAGETABLE_NEW(pl1e);

This should be part of patch 15/55 "x86/mm: rewrite virt_to_xen_l1e"
where you adapt the callers of virt_to_xen_l1e to unmap the returned
pointer.

- Stefan

>              }
>              else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
>              {
> @@ -5234,9 +5242,11 @@ int map_pages_to_xen(
>                  }
>              }
>  
> -            pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
> +            pl1e  = map_xen_pagetable_new(l2e_get_mfn((*pl2e)));
> +            pl1e += l1_table_offset(virt);
>              ol1e  = *pl1e;
>              l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
> +            UNMAP_XEN_PAGETABLE_NEW(pl1e);
>              if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
>              {
>                  unsigned int flush_flags = FLUSH_TLB |
> FLUSH_ORDER(0);
> @@ -5257,6 +5267,7 @@ int map_pages_to_xen(
>              {
>                  unsigned long base_mfn;
>                  l1_pgentry_t *l1t;
> +                mfn_t l1t_mfn;
>  
>                  if ( locking )
>                      spin_lock(&map_pgdir_lock);
> @@ -5280,12 +5291,15 @@ int map_pages_to_xen(
>                      goto check_l3;
>                  }
>  
> -                l1t = l2e_to_l1e(ol2e);
> +                l1t_mfn = l2e_get_mfn(ol2e);
> +                l1t = map_xen_pagetable_new(l1t_mfn);
> +
>                  base_mfn = l1e_get_pfn(l1t[0]) &
> ~(L1_PAGETABLE_ENTRIES - 1);
>                  for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
>                      if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
>                           (l1e_get_flags(l1t[i]) != flags) )
>                          break;
> +                UNMAP_XEN_PAGETABLE_NEW(l1t);
>                  if ( i == L1_PAGETABLE_ENTRIES )
>                  {
>                      l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
> @@ -5295,7 +5309,7 @@ int map_pages_to_xen(
>                      flush_area(virt - PAGE_SIZE,
>                                 FLUSH_TLB_GLOBAL |
>                                 FLUSH_ORDER(PAGETABLE_ORDER));
> -                    free_xen_pagetable(l2e_to_l1e(ol2e));
> +                    free_xen_pagetable_new(l1t_mfn);
>                  }
>                  else if ( locking )
>                      spin_unlock(&map_pgdir_lock);
> @@ -5311,6 +5325,7 @@ int map_pages_to_xen(
>          {
>              unsigned long base_mfn;
>              l2_pgentry_t *l2t;
> +            mfn_t l2t_mfn;
>  
>              if ( locking )
>                  spin_lock(&map_pgdir_lock);
> @@ -5328,7 +5343,9 @@ int map_pages_to_xen(
>                  goto end_of_loop;
>              }
>  
> -            l2t = l3e_to_l2e(ol3e);
> +            l2t_mfn = l3e_get_mfn(ol3e);
> +            l2t = map_xen_pagetable_new(l2t_mfn);
> +
>              base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES
> *
>                                                L1_PAGETABLE_ENTRIES -
> 1);
>              for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
> @@ -5336,6 +5353,7 @@ int map_pages_to_xen(
>                        (base_mfn + (i << PAGETABLE_ORDER))) ||
>                       (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
>                      break;
> +            UNMAP_XEN_PAGETABLE_NEW(l2t);
>              if ( i == L2_PAGETABLE_ENTRIES )
>              {
>                  l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
> @@ -5345,7 +5363,7 @@ int map_pages_to_xen(
>                  flush_area(virt - PAGE_SIZE,
>                             FLUSH_TLB_GLOBAL |
>                             FLUSH_ORDER(2*PAGETABLE_ORDER));
> -                free_xen_pagetable(l3e_to_l2e(ol3e));
> +                free_xen_pagetable_new(l2t_mfn);
>              }
>              else if ( locking )
>                  spin_unlock(&map_pgdir_lock);





* Re: [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings
  2019-02-07 16:44 ` [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:22       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Page tables allocated in that function should be mapped and unmapped
> now.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/mm.c | 31 ++++++++++++++++++++++---------
>  1 file changed, 22 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 1ea2974c1f..18c7b43705 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5436,6 +5436,7 @@ int modify_xen_mappings(unsigned long s,
> unsigned long e, unsigned int nf)
>          if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
>          {
>              l2_pgentry_t *l2t;
> +            mfn_t mfn;

Nit: I wouldn't mind if these scoped mfns were called l2t_mfn /
l1t_mfn in this patch, too.

>  
>              if ( l2_table_offset(v) == 0 &&
>                   l1_table_offset(v) == 0 &&
> @@ -5452,13 +5453,15 @@ int modify_xen_mappings(unsigned long s,
> unsigned long e, unsigned int nf)
>              }
>  
>              /* PAGE1GB: shatter the superpage and fall through. */
> -            l2t = alloc_xen_pagetable();
> -            if ( !l2t )
> +            mfn = alloc_xen_pagetable_new();
> +            if ( mfn_eq(mfn, INVALID_MFN) )
>              {
>                  ASSERT(rc == -ENOMEM);
>                  goto out;
>              }
>  
> +            l2t = map_xen_pagetable_new(mfn);

Is map_xen_pagetable always guaranteed to succeed on a valid mfn (also
in the future)? Otherwise the validity check should be done on l2t as
before instead of mfn. But it looks like map_xen_pagetable{_new} does
not deal with invalid mfns.

> +
>              for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
>                  l2e_write(l2t + i,
>                            l2e_from_pfn(l3e_get_pfn(*pl3e) +
> @@ -5469,14 +5472,17 @@ int modify_xen_mappings(unsigned long s,
> unsigned long e, unsigned int nf)
>              if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
>                   (l3e_get_flags(*pl3e) & _PAGE_PSE) )
>              {
> -                l3e_write_atomic(pl3e,
> l3e_from_mfn(virt_to_mfn(l2t),
> -                                                    __PAGE_HYPERVISO
> R));
> +                l3e_write_atomic(pl3e, l3e_from_mfn(mfn,
> __PAGE_HYPERVISOR));
> +                UNMAP_XEN_PAGETABLE_NEW(l2t);
>                  l2t = NULL;

That NULL assignment is redundant now. It's done by the UNMAP macro.
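
For reference, the macro presumably expands to something along these lines
(a sketch of its assumed shape, not the actual definition from the series):

    /*
     * Hypothetical expansion of UNMAP_XEN_PAGETABLE_NEW: unmap and also
     * clear the pointer, which is what makes the explicit "l2t = NULL"
     * above redundant.
     */
    #define UNMAP_XEN_PAGETABLE_NEW(ptr)      \
        do {                                  \
            unmap_xen_pagetable_new(ptr);     \
            (ptr) = NULL;                     \
        } while ( false )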

>              }
>              if ( locking )
>                  spin_unlock(&map_pgdir_lock);
>              if ( l2t )
> -                free_xen_pagetable(l2t);
> +            {
> +                UNMAP_XEN_PAGETABLE_NEW(l2t);
> +                free_xen_pagetable_new(mfn);
> +            }
>          }
>  
>          /*
> @@ -5511,15 +5517,18 @@ int modify_xen_mappings(unsigned long s,
> unsigned long e, unsigned int nf)
>              else
>              {
>                  l1_pgentry_t *l1t;
> +                mfn_t mfn;
>  
>                  /* PSE: shatter the superpage and try again. */
> -                l1t = alloc_xen_pagetable();
> -                if ( !l1t )
> +                mfn = alloc_xen_pagetable_new();
> +                if ( mfn_eq(mfn, INVALID_MFN) )
>                  {
>                      ASSERT(rc == -ENOMEM);
>                      goto out;
>                  }
>  
> +                l1t = map_xen_pagetable_new(mfn);
> +
>                  for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
>                      l1e_write(&l1t[i],
>                                l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
> @@ -5529,14 +5538,18 @@ int modify_xen_mappings(unsigned long s,
> unsigned long e, unsigned int nf)
>                  if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
>                       (l2e_get_flags(*pl2e) & _PAGE_PSE) )
>                  {
> -                    l2e_write_atomic(pl2e,
> l2e_from_mfn(virt_to_mfn(l1t),
> +                    l2e_write_atomic(pl2e, l2e_from_mfn(mfn,
>                                                          __PAGE_HYPER
> VISOR));
> +                    UNMAP_XEN_PAGETABLE_NEW(l1t);
>                      l1t = NULL;
>                  }
>                  if ( locking )
>                      spin_unlock(&map_pgdir_lock);
>                  if ( l1t )
> -                    free_xen_pagetable(l1t);
> +                {
> +                    UNMAP_XEN_PAGETABLE_NEW(l1t);
> +                    free_xen_pagetable_new(mfn);
> +                }
>              }
>          }
>          else




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 21/55] x86_64/mm: introduce pl2e in paging_init
  2019-02-07 16:44 ` [PATCH RFC 21/55] x86_64/mm: introduce pl2e in paging_init Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Introduce pl2e so that we can use l2_ro_mpt to point to the page
> table
> itself.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/x86_64/mm.c | 18 ++++++++++--------
>  1 file changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index d8f558bc3a..83d62674c0 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -497,7 +497,7 @@ void __init paging_init(void)
>      unsigned long i, mpt_size, va;
>      unsigned int n, memflags;
>      l3_pgentry_t *l3_ro_mpt;
> -    l2_pgentry_t *l2_ro_mpt = NULL;
> +    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;

nit: In the next patch the NULL initialization for l2_ro_mpt is added
back in. No need to remove it here.

>      struct page_info *l1_pg;
>  
>      /*
> @@ -547,7 +547,7 @@ void __init paging_init(void)
>              (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)));
>  
>          if ( cpu_has_page1gb &&
> -             !((unsigned long)l2_ro_mpt & ~PAGE_MASK) &&
> +             !((unsigned long)pl2e & ~PAGE_MASK) &&
>               (mpt_size >> L3_PAGETABLE_SHIFT) > (i >>
> PAGETABLE_ORDER) )
>          {
>              unsigned int k, holes;
> @@ -606,7 +606,7 @@ void __init paging_init(void)
>              memset((void *)(RDWR_MPT_VIRT_START + (i <<
> L2_PAGETABLE_SHIFT)),
>                     0xFF, 1UL << L2_PAGETABLE_SHIFT);
>          }
> -        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
> +        if ( !((unsigned long)pl2e & ~PAGE_MASK) )
>          {
>              if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
>                  goto nomem;
> @@ -614,13 +614,14 @@ void __init paging_init(void)
>              l3e_write(&l3_ro_mpt[l3_table_offset(va)],
>                        l3e_from_paddr(__pa(l2_ro_mpt),
>                                       __PAGE_HYPERVISOR_RO |
> _PAGE_USER));
> +            pl2e = l2_ro_mpt;
>              ASSERT(!l2_table_offset(va));
>          }
>          /* NB. Cannot be GLOBAL: guest user mode should not see it.
> */
>          if ( l1_pg )
> -            l2e_write(l2_ro_mpt, l2e_from_page(
> +            l2e_write(pl2e, l2e_from_page(
>                  l1_pg,
> /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
> -        l2_ro_mpt++;
> +        pl2e++;
>      }
>  #undef CNT
>  #undef MFN
> @@ -636,7 +637,8 @@ void __init paging_init(void)
>      clear_page(l2_ro_mpt);
>      l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)
> ],
>                l3e_from_paddr(__pa(l2_ro_mpt),
> __PAGE_HYPERVISOR_RO));
> -    l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
> +    pl2e = l2_ro_mpt;
> +    pl2e += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);

nit: Those two lines could be combined.
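
That is, for illustration, the combined form of the two quoted lines would
simply be:

    pl2e = l2_ro_mpt + l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);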

- Stefan

>      /* Allocate and map the compatibility mode machine-to-phys
> table. */
>      mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
>      if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END -
> RDWR_COMPAT_MPT_VIRT_START )
> @@ -649,7 +651,7 @@ void __init paging_init(void)
>               sizeof(*compat_machine_to_phys_mapping))
>      BUILD_BUG_ON((sizeof(*frame_table) & ~sizeof(*frame_table)) % \
>                   sizeof(*compat_machine_to_phys_mapping));
> -    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++,
> l2_ro_mpt++ )
> +    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, pl2e++ )
>      {
>          memflags = MEMF_node(phys_to_nid(i <<
>              (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)));
> @@ -671,7 +673,7 @@ void __init paging_init(void)
>                 0x55,
>                 1UL << L2_PAGETABLE_SHIFT);
>          /* NB. Cannot be GLOBAL as the ptes get copied into per-VM
> space. */
> -        l2e_write(l2_ro_mpt, l2e_from_page(l1_pg,
> _PAGE_PSE|_PAGE_PRESENT));
> +        l2e_write(pl2e, l2e_from_page(l1_pg,
> _PAGE_PSE|_PAGE_PRESENT));
>      }
>  #undef CNT
>  #undef MFN




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init
  2019-02-07 16:44 ` [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:22       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Any reason why this isn't squashed with the previous patch?

- Stefan

> ---
>  xen/arch/x86/x86_64/mm.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 02919481e4..094c609c8c 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -648,8 +648,10 @@ void __init paging_init(void)
>      /* Create user-accessible L2 directory to map the MPT for compat
> guests. */
>      BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
>                   l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
> -        HIRO_COMPAT_MPT_VIRT_START)]);
> +
> +    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
> +                                        HIRO_COMPAT_MPT_VIRT_START)]
> );
> +    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
>  
>      l2_ro_mpt_mfn = alloc_xen_pagetable_new();
>      if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
> @@ -701,6 +703,7 @@ void __init paging_init(void)
>  #undef MFN
>  
>      UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
> +    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
>  
>      machine_to_phys_mapping_valid = 1;
>  




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 25/55] x86_64/mm: introduce pl2e in setup_m2p_table
  2019-02-07 16:44 ` [PATCH RFC 25/55] x86_64/mm: introduce pl2e " Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/x86_64/mm.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 55fa338d71..d3e2398b6c 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -397,7 +397,7 @@ static int setup_m2p_table(struct mem_hotadd_info
> *info)
>  {
>      unsigned long i, va, smap, emap;
>      unsigned int n;
> -    l2_pgentry_t *l2_ro_mpt = NULL;
> +    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt;

nit: Here too, the next patch will add the NULL initialization back in.

>      l3_pgentry_t *l3_ro_mpt = NULL;
>      int ret = 0;
>  
> @@ -458,7 +458,7 @@ static int setup_m2p_table(struct mem_hotadd_info
> *info)
>                    _PAGE_PSE));
>              if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
>                _PAGE_PRESENT )
> -                l2_ro_mpt =
> l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
> +                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
>                    l2_table_offset(va);
>              else
>              {
> @@ -473,11 +473,12 @@ static int setup_m2p_table(struct
> mem_hotadd_info *info)
>                  l3e_write(&l3_ro_mpt[l3_table_offset(va)],
>                            l3e_from_paddr(__pa(l2_ro_mpt),
>                                           __PAGE_HYPERVISOR_RO |
> _PAGE_USER));
> -                l2_ro_mpt += l2_table_offset(va);
> +                pl2e = l2_ro_mpt;
> +                pl2e += l2_table_offset(va);

nit: These could also be on a single line.
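
That is, something like:

    pl2e = l2_ro_mpt + l2_table_offset(va);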

- Stefan

>              }
>  
>              /* NB. Cannot be GLOBAL: guest user mode should not see
> it. */
> -            l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
> +            l2e_write(pl2e, l2e_from_mfn(mfn,
>                     /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESE
> NT));
>          }
>          i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table
  2019-02-07 16:44 ` [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:30       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/x86_64/mm.c | 16 ++++++++++++----
>  1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 0b85961105..216f97c95f 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -400,11 +400,13 @@ static int setup_m2p_table(struct
> mem_hotadd_info *info)
>      l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
>      l3_pgentry_t *l3_ro_mpt = NULL;
>      int ret = 0;
> -    mfn_t l2_ro_mpt_mfn;
> +    mfn_t l2_ro_mpt_mfn, l3_ro_mpt_mfn;
>  
>      ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_S
> TART)])
>              & _PAGE_PRESENT);
> -    l3_ro_mpt =
> l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
> +    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
> +                                        RO_MPT_VIRT_START)]);
> +    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
>  
>      smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
>      emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 ))
> &
> @@ -459,8 +461,13 @@ static int setup_m2p_table(struct
> mem_hotadd_info *info)
>                    _PAGE_PSE));
>              if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
>                _PAGE_PRESENT )
> -                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
> -                  l2_table_offset(va);
> +            {
> +                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
> +                l2_ro_mpt_mfn =
> l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
> +                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
> +                ASSERT(l2_ro_mpt);

Do we need this assert here? What are the possibilities to recover from
that situation? I think this should be BUG_ON or it should at least
return through the error path.
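
To illustrate the second option, the error-path variant might look roughly
like this (the error code is only a guess for the sketch, not taken from the
patch):

    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
    l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
    l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
    if ( !l2_ro_mpt )
    {
        /* Bail out through the existing error label instead of asserting. */
        ret = -ENOMEM;
        goto error;
    }
    pl2e = l2_ro_mpt + l2_table_offset(va);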

- Stefan

> +                pl2e = l2_ro_mpt + l2_table_offset(va);
> +            }
>              else
>              {
>                  UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
> @@ -492,6 +499,7 @@ static int setup_m2p_table(struct mem_hotadd_info
> *info)
>      ret = setup_compat_m2p_table(info);
>  error:
>      UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
> +    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
>      return ret;
>  }
>  




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 31/55] efi: add emacs block to boot.c
  2019-02-07 16:44 ` [PATCH RFC 31/55] efi: add emacs block to boot.c Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:23       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: jbeulich

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/common/efi/boot.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
> index 1d1420f02c..3868293d06 100644
> --- a/xen/common/efi/boot.c
> +++ b/xen/common/efi/boot.c
> @@ -1705,3 +1705,13 @@ void __init efi_init_memory(void)
>  #endif
>  }
>  #endif
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */

Not relevant to the patch series. What's the upstream position on these
changes? Should they be introduced in separate 'cleanup' series or is
it common to include that as part of unrelated functional change
series? I personally don't mind either way. (And same applies to the
other emacs block commit in the series, of course.)

- Stefan





^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs
  2019-02-07 16:44 ` [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs Wei Liu
@ 2019-03-18 21:14   ` Nuernberger, Stefan
  2019-04-09 12:23       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-18 21:14 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> This requires storing the MFN instead of linear address of the L4
> table. Adjust code accordingly.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/efi/runtime.h | 12 +++++++++---
>  xen/common/efi/boot.c      |  8 ++++++--
>  xen/common/efi/efi.h       |  3 ++-
>  xen/common/efi/runtime.c   |  8 ++++----
>  4 files changed, 21 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
> index d9eb8f5c27..277d237953 100644
> --- a/xen/arch/x86/efi/runtime.h
> +++ b/xen/arch/x86/efi/runtime.h
> @@ -2,11 +2,17 @@
>  #include <asm/mc146818rtc.h>
>  
>  #ifndef COMPAT
> -l4_pgentry_t *__read_mostly efi_l4_pgtable;
> +mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
>  
>  void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
>  {
> -    if ( efi_l4_pgtable )
> -        l4e_write(efi_l4_pgtable + l4idx, l4e);
> +    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
> +    {
> +        l4_pgentry_t *l4t;
> +
> +        l4t = map_xen_pagetable_new(efi_l4_mfn);
> +        l4e_write(l4t + l4idx, l4e);
> +        UNMAP_XEN_PAGETABLE_NEW(l4t);

nit: This doesn't need the implicit NULL assignment. The non-macro
unmap_xen_pagetable_new is sufficient here. Do you have a guideline on
when to use the macro over the function call? I assume the compiler
eliminates most of the dead stores on function return.
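
For what it's worth, the difference under discussion is roughly this
(a sketch, assuming the macro clears its argument as described earlier in
the thread):

    UNMAP_XEN_PAGETABLE_NEW(l4t);   /* unmap and set l4t = NULL */
    unmap_xen_pagetable_new(l4t);   /* unmap only; l4t keeps a stale value */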

- Stefan

> +    }
>  }
>  #endif
> diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
> index 3868293d06..f55d6a6d76 100644
> --- a/xen/common/efi/boot.c
> +++ b/xen/common/efi/boot.c
> @@ -1488,6 +1488,7 @@ void __init efi_init_memory(void)
>          unsigned int prot;
>      } *extra, *extra_head = NULL;
>  #endif
> +    l4_pgentry_t *efi_l4_pgtable;
>  
>      free_ebmalloc_unused_mem();
>  
> @@ -1603,8 +1604,9 @@ void __init efi_init_memory(void)
>                                   mdesc_ver, efi_memmap);
>  #else
>      /* Set up 1:1 page tables to do runtime calls in "physical"
> mode. */
> -    efi_l4_pgtable = alloc_xen_pagetable();
> -    BUG_ON(!efi_l4_pgtable);
> +    efi_l4_mfn = alloc_xen_pagetable_new();
> +    BUG_ON(mfn_eq(efi_l4_mfn, INVALID_MFN));
> +    efi_l4_pgtable = map_xen_pagetable_new(efi_l4_mfn);
>      clear_page(efi_l4_pgtable);
>  
>      copy_mapping(efi_l4_pgtable, 0, max_page, ram_range_valid);
> @@ -1703,6 +1705,8 @@ void __init efi_init_memory(void)
>            i < l4_table_offset(DIRECTMAP_VIRT_END); ++i )
>          efi_l4_pgtable[i] = idle_pg_table[i];
>  #endif
> +
> +    UNMAP_XEN_PAGETABLE_NEW(efi_l4_pgtable);
>  }
>  #endif
>  
> diff --git a/xen/common/efi/efi.h b/xen/common/efi/efi.h
> index 6b9c56ead1..139b660ed7 100644
> --- a/xen/common/efi/efi.h
> +++ b/xen/common/efi/efi.h
> @@ -6,6 +6,7 @@
>  #include <efi/eficapsule.h>
>  #include <efi/efiapi.h>
>  #include <xen/efi.h>
> +#include <xen/mm.h>
>  #include <xen/spinlock.h>
>  #include <asm/page.h>
>  
> @@ -29,7 +30,7 @@ extern UINTN efi_memmap_size, efi_mdesc_size;
>  extern void *efi_memmap;
>  
>  #ifdef CONFIG_X86
> -extern l4_pgentry_t *efi_l4_pgtable;
> +extern mfn_t efi_l4_mfn;
>  #endif
>  
>  extern const struct efi_pci_rom *efi_pci_roms;
> diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
> index 3d118d571d..8263f1d863 100644
> --- a/xen/common/efi/runtime.c
> +++ b/xen/common/efi/runtime.c
> @@ -85,7 +85,7 @@ struct efi_rs_state efi_rs_enter(void)
>      static const u32 mxcsr = MXCSR_DEFAULT;
>      struct efi_rs_state state = { .cr3 = 0 };
>  
> -    if ( !efi_l4_pgtable )
> +    if ( mfn_eq(efi_l4_mfn, INVALID_MFN) )
>          return state;
>  
>      state.cr3 = read_cr3();
> @@ -111,7 +111,7 @@ struct efi_rs_state efi_rs_enter(void)
>          lgdt(&gdt_desc);
>      }
>  
> -    switch_cr3_cr4(virt_to_maddr(efi_l4_pgtable), read_cr4());
> +    switch_cr3_cr4(mfn_to_maddr(efi_l4_mfn), read_cr4());
>  
>      return state;
>  }
> @@ -140,9 +140,9 @@ void efi_rs_leave(struct efi_rs_state *state)
>  
>  bool efi_rs_using_pgtables(void)
>  {
> -    return efi_l4_pgtable &&
> +    return !mfn_eq(efi_l4_mfn, INVALID_MFN) &&
>             (smp_processor_id() == efi_rs_on_cpu) &&
> -           (read_cr3() == virt_to_maddr(efi_l4_pgtable));
> +           (read_cr3() == mfn_to_maddr(efi_l4_mfn));
>  }
>  
>  unsigned long efi_get_time(void)




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs
  2019-02-07 16:44 ` [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs Wei Liu
@ 2019-03-19 16:45   ` Nuernberger, Stefan
  2019-04-09 12:23       ` [Xen-devel] " Wei Liu
  0 siblings, 1 reply; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-19 16:45 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> This then requires moving declaration of root page table mfn into
> mm.h
> and modify setup_cpu_root_pgt to have a single exit path.
> 
> We also need to force map_domain_page to use direct map when
> switching
> per-domain mappings. This is contrary to our end goal of removing
> direct map, but this will be removed once we make map_domain_page
> context-switch safe in another (large) patch series.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/domain.c           | 15 ++++++++++++---
>  xen/arch/x86/domain_page.c      |  2 +-
>  xen/arch/x86/mm.c               |  2 +-
>  xen/arch/x86/pv/domain.c        |  2 +-
>  xen/arch/x86/smpboot.c          | 40 +++++++++++++++++++++++++++--
> -----------
>  xen/include/asm-x86/mm.h        |  2 ++
>  xen/include/asm-x86/processor.h |  1 -
>  7 files changed, 44 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 32dc4253ff..603495e55a 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -68,6 +68,7 @@
>  #include <asm/pv/domain.h>
>  #include <asm/pv/mm.h>
>  #include <asm/spec_ctrl.h>
> +#include <asm/setup.h>
>  
>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>  
> @@ -1589,12 +1590,20 @@ void paravirt_ctxt_switch_from(struct vcpu
> *v)
>  
>  void paravirt_ctxt_switch_to(struct vcpu *v)
>  {
> -    root_pgentry_t *root_pgt = this_cpu(root_pgt);
> +    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
>  
> -    if ( root_pgt )
> -        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
> +    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
> +    {
> +        root_pgentry_t *rpt;
> +
> +        mapcache_override_current(INVALID_VCPU);

Can you elaborate why this is required? A comment might help. I assume
this forces the root page table to be mapped in the idle domain's context
instead of the current vcpu's?

- Stefan

> +        rpt = map_xen_pagetable_new(rpt_mfn);
> +        rpt[root_table_offset(PERDOMAIN_VIRT_START)] =
>              l4e_from_page(v->domain->arch.perdomain_l3_pg,
>                            __PAGE_HYPERVISOR_RW);
> +        UNMAP_XEN_PAGETABLE_NEW(rpt);
> +        mapcache_override_current(NULL);
> +    }
>  
>      if ( unlikely(v->arch.dr7 & DR7_ACTIVE_MASK) )
>          activate_debugregs(v);
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index 24083e9a86..cfcffd35f3 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -57,7 +57,7 @@ static inline struct vcpu
> *mapcache_current_vcpu(void)
>      return v;
>  }
>  
> -void __init mapcache_override_current(struct vcpu *v)
> +void mapcache_override_current(struct vcpu *v)
>  {
>      this_cpu(override) = v;
>  }
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 9e115ef0b8..44c9df5c9e 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -564,7 +564,7 @@ void write_ptbase(struct vcpu *v)
>      if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
>      {
>          cpu_info->root_pgt_changed = true;
> -        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
> +        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn));
>          if ( new_cr4 & X86_CR4_PCIDE )
>              cpu_info->pv_cr3 |= get_pcid_bits(v, true);
>          switch_cr3_cr4(v->arch.cr3, new_cr4);
> diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
> index 7e84b04082..2fd944b7e3 100644
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -303,7 +303,7 @@ static void _toggle_guest_pt(struct vcpu *v)
>          struct cpu_info *cpu_info = get_cpu_info();
>  
>          cpu_info->root_pgt_changed = true;
> -        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
> +        cpu_info->pv_cr3 = mfn_to_maddr(this_cpu(root_pgt_mfn)) |
>                             (d->arch.pv.pcid ? get_pcid_bits(v, true)
> : 0);
>      }
>  
> diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
> index a9a39cea6e..32dce00d10 100644
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -819,7 +819,7 @@ static int clone_mapping(const void *ptr,
> root_pgentry_t *rpt)
>      return rc;
>  }
>  
> -DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
> +DEFINE_PER_CPU(mfn_t, root_pgt_mfn);
>  
>  static root_pgentry_t common_pgt;
>  
> @@ -827,19 +827,27 @@ extern const char _stextentry[], _etextentry[];
>  
>  static int setup_cpu_root_pgt(unsigned int cpu)
>  {
> -    root_pgentry_t *rpt;
> +    root_pgentry_t *rpt = NULL;
> +    mfn_t rpt_mfn;
>      unsigned int off;
>      int rc;
>  
>      if ( !opt_xpti_hwdom && !opt_xpti_domu )
> -        return 0;
> +    {
> +        rc = 0;
> +        goto out;
> +    }
>  
> -    rpt = alloc_xen_pagetable();
> -    if ( !rpt )
> -        return -ENOMEM;
> +    rpt_mfn = alloc_xen_pagetable_new();
> +    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
> +    {
> +        rc = -ENOMEM;
> +        goto out;
> +    }
>  
> +    rpt = map_xen_pagetable_new(rpt_mfn);
>      clear_page(rpt);
> -    per_cpu(root_pgt, cpu) = rpt;
> +    per_cpu(root_pgt_mfn, cpu) = rpt_mfn;
>  
>      rpt[root_table_offset(RO_MPT_VIRT_START)] =
>          idle_pg_table[root_table_offset(RO_MPT_VIRT_START)];
> @@ -856,7 +864,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
>              rc = clone_mapping(ptr, rpt);
>  
>          if ( rc )
> -            return rc;
> +            goto out;
>  
>          common_pgt = rpt[root_table_offset(XEN_VIRT_START)];
>      }
> @@ -875,19 +883,24 @@ static int setup_cpu_root_pgt(unsigned int cpu)
>      if ( !rc )
>          rc = clone_mapping((void *)per_cpu(stubs.addr, cpu), rpt);
>  
> + out:
> +    UNMAP_XEN_PAGETABLE_NEW(rpt);
>      return rc;
>  }
>  
>  static void cleanup_cpu_root_pgt(unsigned int cpu)
>  {
> -    root_pgentry_t *rpt = per_cpu(root_pgt, cpu);
> +    mfn_t rpt_mfn = per_cpu(root_pgt_mfn, cpu);
> +    root_pgentry_t *rpt;
>      unsigned int r;
>      unsigned long stub_linear = per_cpu(stubs.addr, cpu);
>  
> -    if ( !rpt )
> +    if ( mfn_eq(rpt_mfn, INVALID_MFN) )
>          return;
>  
> -    per_cpu(root_pgt, cpu) = NULL;
> +    per_cpu(root_pgt_mfn, cpu) = INVALID_MFN;
> +
> +    rpt = map_xen_pagetable_new(rpt_mfn);
>  
>      for ( r = root_table_offset(DIRECTMAP_VIRT_START);
>            r < root_table_offset(HYPERVISOR_VIRT_END); ++r )
> @@ -932,7 +945,8 @@ static void cleanup_cpu_root_pgt(unsigned int
> cpu)
>          free_xen_pagetable_new(l3t_mfn);
>      }
>  
> -    free_xen_pagetable(rpt);
> +    UNMAP_XEN_PAGETABLE_NEW(rpt);
> +    free_xen_pagetable_new(rpt_mfn);
>  
>      /* Also zap the stub mapping for this CPU. */
>      if ( stub_linear )
> @@ -1138,7 +1152,7 @@ void __init smp_prepare_cpus(void)
>      rc = setup_cpu_root_pgt(0);
>      if ( rc )
>          panic("Error %d setting up PV root page table\n", rc);
> -    if ( per_cpu(root_pgt, 0) )
> +    if ( !mfn_eq(per_cpu(root_pgt_mfn, 0), INVALID_MFN) )
>      {
>          get_cpu_info()->pv_cr3 = 0;
>  
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index 4378d9f815..708b84bb89 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -657,4 +657,6 @@ void free_xen_pagetable_new(mfn_t mfn);
>  
>  l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
>  
> +DECLARE_PER_CPU(mfn_t, root_pgt_mfn);
> +
>  #endif /* __ASM_X86_MM_H__ */
> diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-
> x86/processor.h
> index df01ae30d7..9f98ac96f5 100644
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -449,7 +449,6 @@ extern idt_entry_t idt_table[];
>  extern idt_entry_t *idt_tables[];
>  
>  DECLARE_PER_CPU(struct tss_struct, init_tss);
> -DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
>  
>  extern void write_ptbase(struct vcpu *v);
>  




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 41/55] x86_64/mm: map and unmap page tables in m2p_mapped
  2019-02-07 16:44 ` [PATCH RFC 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Wei Liu
@ 2019-03-19 16:45   ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-19 16:45 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/x86_64/mm.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 216f97c95f..2b88a1af37 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -130,28 +130,36 @@ static int m2p_mapped(unsigned long spfn)
>  {
>      unsigned long va;
>      l3_pgentry_t *l3_ro_mpt;
> -    l2_pgentry_t *l2_ro_mpt;
> +    l2_pgentry_t *l2_ro_mpt = NULL;
> +    int rc = M2P_NO_MAPPED;
>  
>      va = RO_MPT_VIRT_START + spfn *
> sizeof(*machine_to_phys_mapping);
> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
> +    l3_ro_mpt = map_xen_pagetable_new(
> +        l4e_get_mfn(idle_pg_table[l4_table_offset(va)]));
>  
>      switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
>               (_PAGE_PRESENT |_PAGE_PSE))
>      {
>          case _PAGE_PSE|_PAGE_PRESENT:
> -            return M2P_1G_MAPPED;
> +            rc = M2P_1G_MAPPED;
> +            goto out;
>          /* Check for next level */
>          case _PAGE_PRESENT:
>              break;
>          default:
> -            return M2P_NO_MAPPED;
> +            rc = M2P_NO_MAPPED;

nit: This assignment is redundant now, but it might stay for clarity.

- Stefan

> +            goto out;
>      }
> -    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
> +    l2_ro_mpt = map_xen_pagetable_new(
> +        l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]));
>  
>      if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) &
> _PAGE_PRESENT)
> -        return M2P_2M_MAPPED;
> +        rc = M2P_2M_MAPPED;
>  
> -    return M2P_NO_MAPPED;
> + out:
> +    UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
> +    UNMAP_XEN_PAGETABLE_NEW(l3_ro_mpt);
> +    return rc;
>  }
>  
>  static int share_hotadd_m2p_table(struct mem_hotadd_info *info)




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  2019-02-07 16:44 ` [PATCH RFC 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Wei Liu
@ 2019-03-19 16:45   ` Nuernberger, Stefan
  0 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-19 16:45 UTC (permalink / raw)
  To: xen-devel, wei.liu2; +Cc: andrew.cooper3, jbeulich, roger.pau

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  xen/arch/x86/x86_64/mm.c | 31 +++++++++++++++++++++++--------
>  1 file changed, 23 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> index 2b88a1af37..597d8e9ed8 100644
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -166,8 +166,8 @@ static int share_hotadd_m2p_table(struct
> mem_hotadd_info *info)
>  {
>      unsigned long i, n, v;
>      mfn_t m2p_start_mfn = INVALID_MFN;
> -    l3_pgentry_t l3e;
> -    l2_pgentry_t l2e;
> +    l3_pgentry_t l3e, *l3t;
> +    l2_pgentry_t l2e, *l2t;
>  
>      /* M2P table is mappable read-only by privileged domains. */
>      for ( v  = RDWR_MPT_VIRT_START;
> @@ -175,14 +175,22 @@ static int share_hotadd_m2p_table(struct
> mem_hotadd_info *info)
>            v += n << PAGE_SHIFT )
>      {
>          n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
> -        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
> -            l3_table_offset(v)];
> +
> +        l3t = map_xen_pagetable_new(
> +            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
> +        l3e = l3t[l3_table_offset(v)];
> +        UNMAP_XEN_PAGETABLE_NEW(l3t);
> +

This pattern of mapping the table, retrieving the entry and then
immediately unmapping the table is repeated a couple of times here and
in later patches. This looks like it could use a convenience wrapper.
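
For illustration, such a wrapper might look roughly like this (name and
exact shape are hypothetical, not from the series):

    /* Map a page table, read one L3 entry, and unmap again. */
    static l3_pgentry_t read_l3e(l4_pgentry_t l4e, unsigned long v)
    {
        l3_pgentry_t *l3t = map_xen_pagetable_new(l4e_get_mfn(l4e));
        l3_pgentry_t l3e = l3t[l3_table_offset(v)];

        UNMAP_XEN_PAGETABLE_NEW(l3t);
        return l3e;
    }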

- Stefan


>          if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
>              continue;
>          if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
>          {
>              n = L1_PAGETABLE_ENTRIES;
> -            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
> +
> +            l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
> +            l2e = l2t[l2_table_offset(v)];
> +            UNMAP_XEN_PAGETABLE_NEW(l2t);
> +
>              if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
>                  continue;
>              m2p_start_mfn = l2e_get_mfn(l2e);
> @@ -203,11 +211,18 @@ static int share_hotadd_m2p_table(struct
> mem_hotadd_info *info)
>            v != RDWR_COMPAT_MPT_VIRT_END;
>            v += 1 << L2_PAGETABLE_SHIFT )
>      {
> -        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
> -            l3_table_offset(v)];
> +        l3t = map_xen_pagetable_new(
> +            l4e_get_mfn(idle_pg_table[l4_table_offset(v)]));
> +        l3e = l3t[l3_table_offset(v)];
> +        UNMAP_XEN_PAGETABLE_NEW(l3t);
> +
>          if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
>              continue;
> -        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
> +
> +        l2t = map_xen_pagetable_new(l3e_get_mfn(l3e));
> +        l2e = l2t[l2_table_offset(v)];
> +        UNMAP_XEN_PAGETABLE_NEW(l2t);
> +
>          if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
>              continue;
>          m2p_start_mfn = l2e_get_mfn(l2e);




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 00/55] x86: use domheap page for xen page tables
  2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
                   ` (55 preceding siblings ...)
  2019-03-18 21:14 ` [PATCH RFC 00/55] x86: use domheap page for xen page tables Nuernberger, Stefan
@ 2019-03-28 12:52 ` Nuernberger, Stefan
  56 siblings, 0 replies; 119+ messages in thread
From: Nuernberger, Stefan @ 2019-03-28 12:52 UTC (permalink / raw)
  To: xen-devel, wei.liu2

On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> This series switches xen page tables from xenheap page to domheap
> page.
> 
> This is required so that when we implement xenheap on top of vmap
> there won't
> be a loop.
> 
> It is done in roughly three steps:
> 
> 1. Introduce a new set of APIs, implement the old APIs on top of the
> new ones.
>    New APIs still use xenheap pages.
> 2. Switch each site which manipulate page tables to use the new APIs.
> 3. Switch new APIs to use domheap page.
> 
> You can find the series at:
> 
>   https://xenbits.xen.org/git-http/people/liuw/xen.git xen-pt-
> allocation-1
> 
> Wei.

Tested-by: Stefan Nuernberger <snu@amazon.de>

I backported the latest series to an internal version based on the
4.11.1 release.

I gave it a good spin on a variety of hardware generations. It's stable
with no regressions found. I did not do any performance measurements,
though. I did not test PV guests, only HVM domU with PV dom0.

- Stefan




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e
@ 2019-04-08 15:55     ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-04-08 15:55 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> @@ -4769,45 +4769,70 @@ void free_xen_pagetable_new(mfn_t mfn)
>  
>  static DEFINE_SPINLOCK(map_pgdir_lock);
>  
> +/*
> + * Given a virtual address, return a pointer to xen's L3 entry. Caller
> + * needs to unmap the pointer.
> + */
>  static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
>  {
>      l4_pgentry_t *pl4e;
> +    l3_pgentry_t *pl3e = NULL;
>  
>      pl4e = &idle_pg_table[l4_table_offset(v)];
>      if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
>      {
>          bool locking = system_state > SYS_STATE_boot;
> -        l3_pgentry_t *l3t = alloc_xen_pagetable();
> +        l3_pgentry_t *l3t;
> +        mfn_t mfn;
> +
> +        mfn = alloc_xen_pagetable_new();
> +        if ( mfn_eq(mfn, INVALID_MFN) )
> +            goto out;

Any reason not to use "return NULL" here, avoiding the goto
and label altogether in this function?
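
That is, for illustration:

    mfn = alloc_xen_pagetable_new();
    if ( mfn_eq(mfn, INVALID_MFN) )
        return NULL;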

>  static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
>  {
>      l3_pgentry_t *pl3e;
> +    l2_pgentry_t *pl2e = NULL;
>  
>      pl3e = virt_to_xen_l3e(v);
>      if ( !pl3e )
> -        return NULL;
> +        goto out;

Why not keep this one the way it is?

Jan




^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe
@ 2019-04-09 12:21       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Fri, Mar 15, 2019 at 08:38:38AM -0600, Jan Beulich wrote:
> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -4752,13 +4752,13 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
> >  
> >          if ( !pl3e )
> >              return NULL;
> > -        clear_page(pl3e);
> >          if ( locking )
> >              spin_lock(&map_pgdir_lock);
> >          if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
> >          {
> >              l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
> >  
> > +            clear_page(pl3e);
> 
> Is this really an optimization? You trade avoiding clearing the page
> in a hopefully infrequent case of a race for holding the spin lock
> for quite a bit longer.

I was actually in two minds while writing this patch. It is difficult to
prove one way or the other. To avoid distracting from the main issue at
hand, dropping this patch is the best way forward.

Wei.

> 
> Jan
> 
> 


^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
@ 2019-04-09 12:22       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Roger Pau Monne, Wei Liu, xen-devel

On Fri, Mar 15, 2019 at 09:36:37AM -0600, Jan Beulich wrote:
> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> > The pl2e and pl1e variables are heavily (ab)used in that function. It
> > is fine at the moment because all page tables are always mapped so
> > there is no need to track the life time of each variable.
> > 
> > We will soon have the requirement to map and unmap page tables. We
> > need to track the life time of each variable to avoid leakage.
> > 
> > Introduce some l{1,2}t variables with limited scope so that we can
> > track life time of pointers to xen page tables more easily.
> 
> But you retain some uses of the old variables, and to be honest it's
> not really clear to me by what criteria (and having multiple instances
> of a variable name in a single function isn't necessarily less confusing).
> I think we either stick to what's there (doesn't look too bad to me),
> or you switch to scope-restricted page table pointers throughout the
> function, such that the function-scope symbols can go away
> altogether.

I thought my commit message was clear enough: it makes tracking the
lifetime of pointers easier. There is nothing new in this trick:
the fewer variables of the same name, the better.

In this patch, places where it is clear that using local scope variables
suffices are changed to use local variables.

In the end, pl*e are only used to point to entries which point to the
page tables related to the linear address being mapped. They won't be
used as intermediate pointers for manipulating page tables.

We can't eliminate the function-scope variables because they need to stay
at function scope -- see later patches where they are unmapped in the out
path.
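
To illustrate the split being described (a simplified sketch, not code from
the patch):

    /* Function scope: points at the entry for the address being mapped. */
    l2_pgentry_t *pl2e = virt_to_xen_l2e(virt);

    if ( (l2e_get_flags(*pl2e) & (_PAGE_PRESENT | _PAGE_PSE)) ==
         _PAGE_PRESENT )
    {
        /*
         * Block scope: intermediate table pointer, mapped and unmapped
         * within this block only.
         */
        l1_pgentry_t *l1t = map_xen_pagetable_new(l2e_get_mfn(*pl2e));

        /* ... walk or modify l1t[] here ... */

        UNMAP_XEN_PAGETABLE_NEW(l1t);
    }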

Wei.

> 
> Jan
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xenproject.org
> https://lists.xenproject.org/mailman/listinfo/xen-devel


^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path
@ 2019-04-09 12:22       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:22 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, andrew.cooper3, wei.liu2, jbeulich, roger.pau

On Mon, Mar 18, 2019 at 09:14:14PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > We will soon rewrite the function to handle dynamically mapping and
> > unmapping of page tables.
> > 
> > No functional change.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/arch/x86/mm.c | 34 +++++++++++++++++++++++++++-------
> >  1 file changed, 27 insertions(+), 7 deletions(-)
> > 
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 4147a71c5d..3ab222c8ea 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -4887,9 +4887,11 @@ int map_pages_to_xen(
> >      unsigned int flags)
> >  {
> >      bool locking = system_state > SYS_STATE_boot;
> > +    l3_pgentry_t *pl3e, ol3e;
> 
> After limiting the scope of other variables in the previous patches,
> you now widen the scope for this one. Is this so you can handle unmap
> in a common exit/error path later?

Yes.

Wei.


^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen
@ 2019-04-09 12:22       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:22 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Fri, Mar 15, 2019 at 09:40:39AM -0600, Jan Beulich wrote:
> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> > We will soon need to clean up mappings whenever the outermost loop is
> > ended. Add a new label and turn relevant continue's into goto's.
> 
> To be honest, I was on the edge of already suggesting less use
> of goto in the previous patch. This one definitely goes too far
> for my taste - I can somehow live with goto-s used for error
> handling, but please not much more. Is there really no better
> way?
> 

It may be possible to call UNMAP at the beginning of the loop instead.
How does that sound to you?
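
Something along these lines, as a simplified sketch (not the real function
body, and assuming the unmap helper tolerates a NULL pointer so the first
pass is a no-op):

    while ( nr_mfns != 0 )
    {
        /*
         * Unmap whatever the previous iteration left mapped, so plain
         * "continue" statements no longer need to jump to an
         * end_of_loop label.
         */
        UNMAP_XEN_PAGETABLE_NEW(pl3e);

        pl3e = virt_to_xen_l3e(virt);
        if ( !pl3e )
        {
            rc = -ENOMEM;
            goto out;          /* single exit path cleans up */
        }

        /* ... per-iteration mapping work, with bare "continue"s ... */
    }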

Wei.

> Jan
> 
> 


^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings
@ 2019-04-09 12:22       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:22 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, andrew.cooper3, wei.liu2, jbeulich, roger.pau

On Mon, Mar 18, 2019 at 09:14:28PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > Page tables allocated in that function should be mapped and unmapped
> > now.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/arch/x86/mm.c | 31 ++++++++++++++++++++++---------
> >  1 file changed, 22 insertions(+), 9 deletions(-)
> > 
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 1ea2974c1f..18c7b43705 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -5436,6 +5436,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
> >          if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
> >          {
> >              l2_pgentry_t *l2t;
> > +            mfn_t mfn;
> 
> nit: I wouldn't mind if these scoped mfns were called l2t_mfn / l1t_mfn
> in this patch, too.
> 
> >  
> >              if ( l2_table_offset(v) == 0 &&
> >                   l1_table_offset(v) == 0 &&
> > @@ -5452,13 +5453,15 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
> >              }
> >  
> >              /* PAGE1GB: shatter the superpage and fall through. */
> > -            l2t = alloc_xen_pagetable();
> > -            if ( !l2t )
> > +            mfn = alloc_xen_pagetable_new();
> > +            if ( mfn_eq(mfn, INVALID_MFN) )
> >              {
> >                  ASSERT(rc == -ENOMEM);
> >                  goto out;
> >              }
> >  
> > +            l2t = map_xen_pagetable_new(mfn);
> 
> Is map_xen_pagetable always guaranteed to succeed on a valid mfn (also
> in the future)? Otherwise the validity check should be done on l2t as
> before instead of mfn. But it looks like map_xen_pagetable{_new} does
> not deal with invalid mfns.

It is guaranteed to succeed by design.
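
For what it's worth, it is essentially a thin wrapper -- something like
the sketch below, modulo the exact form in the patch introducing it --
and map_domain_page() does not fail for a valid MFN:

    void *map_xen_pagetable_new(mfn_t mfn)
    {
        return map_domain_page(mfn);
    }

So there is nothing for the callers to check.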

> 
> > +
> >              for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
> >                  l2e_write(l2t + i,
> >                            l2e_from_pfn(l3e_get_pfn(*pl3e) +
> > @@ -5469,14 +5472,17 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
> >              if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
> >                   (l3e_get_flags(*pl3e) & _PAGE_PSE) )
> >              {
> > -                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
> > -                                                    __PAGE_HYPERVISOR));
> > +                l3e_write_atomic(pl3e, l3e_from_mfn(mfn, __PAGE_HYPERVISOR));
> > +                UNMAP_XEN_PAGETABLE_NEW(l2t);
> >                  l2t = NULL;
> 
> That NULL assignment is redundant now. It's done by the UNMAP macro.

Not yet. This is left as-is intentionally. It depends on what we will do
regarding UNMAP_XEN_PAGETABLE.

Wei.

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init
@ 2019-04-09 12:22       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:22 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, andrew.cooper3, wei.liu2, jbeulich, roger.pau

On Mon, Mar 18, 2019 at 09:14:34PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> 
> Any reason why this isn't squashed with the previous patch?

Because of how I classified the changes -- the lXe_to_lYe conversions belong
in their own group.

I can squash a lot of stuff together but I decided this way was better.

Wei.

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 31/55] efi: add emacs block to boot.c
@ 2019-04-09 12:23       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:23 UTC (permalink / raw)
  To: Nuernberger, Stefan; +Cc: xen-devel, wei.liu2, jbeulich

On Mon, Mar 18, 2019 at 09:14:41PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/common/efi/boot.c | 10 ++++++++++
> >  1 file changed, 10 insertions(+)
> > 
> > diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
> > index 1d1420f02c..3868293d06 100644
> > --- a/xen/common/efi/boot.c
> > +++ b/xen/common/efi/boot.c
> > @@ -1705,3 +1705,13 @@ void __init efi_init_memory(void)
> >  #endif
> >  }
> >  #endif
> > +
> > +/*
> > + * Local variables:
> > + * mode: C
> > + * c-file-style: "BSD"
> > + * c-basic-offset: 4
> > + * tab-width: 4
> > + * indent-tabs-mode: nil
> > + * End:
> > + */
> 
> Not relevant to the patch series. What's the upstream position on these
> changes? Should they be introduced in separate 'cleanup' series or is
> it common to include that as part of unrelated functional change
> series? I personally don't mind either way. (And same applies to the
> other emacs block commit in the series, of course.)
> 

I normally do cleanups as I change existing code to avoid too much code
churn in one go.

Wei.

> - Stefan
> 
> 
> 
> 
> Amazon Development Center Germany GmbH
> Krausenstr. 38
> 10117 Berlin
> Geschaeftsfuehrer: Christian Schlaeger, Ralf Herbrich
> Ust-ID: DE 289 237 879
> Eingetragen am Amtsgericht Charlottenburg HRB 149173 B
> 

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs
@ 2019-04-09 12:23       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:23 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, andrew.cooper3, wei.liu2, jbeulich, roger.pau

On Mon, Mar 18, 2019 at 09:14:44PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > This requires storing the MFN instead of linear address of the L4
> > table. Adjust code accordingly.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/arch/x86/efi/runtime.h | 12 +++++++++---
> >  xen/common/efi/boot.c      |  8 ++++++--
> >  xen/common/efi/efi.h       |  3 ++-
> >  xen/common/efi/runtime.c   |  8 ++++----
> >  4 files changed, 21 insertions(+), 10 deletions(-)
> > 
> > diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
> > index d9eb8f5c27..277d237953 100644
> > --- a/xen/arch/x86/efi/runtime.h
> > +++ b/xen/arch/x86/efi/runtime.h
> > @@ -2,11 +2,17 @@
> >  #include <asm/mc146818rtc.h>
> >  
> >  #ifndef COMPAT
> > -l4_pgentry_t *__read_mostly efi_l4_pgtable;
> > +mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
> >  
> >  void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
> >  {
> > -    if ( efi_l4_pgtable )
> > -        l4e_write(efi_l4_pgtable + l4idx, l4e);
> > +    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
> > +    {
> > +        l4_pgentry_t *l4t;
> > +
> > +        l4t = map_xen_pagetable_new(efi_l4_mfn);
> > +        l4e_write(l4t + l4idx, l4e);
> > +        UNMAP_XEN_PAGETABLE_NEW(l4t);
> 
> nit: This doesn't need the implicit NULL assignment. The non-macro
> unmap_xen_pagetable_new is sufficient here. Do you have a guideline on
> when to use the macro over the function call? I assume the compiler
> eliminates most of the dead stores on function return.
> 

I introduced the macro such that any misuse of a stale pointer could be
caught as early as possible. We've got a similar construct -- see XFREE.
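
That is, the intent is an XFREE-style wrapper, roughly (sketch; the
exact definition is in the patch introducing the new APIs):

    #define UNMAP_XEN_PAGETABLE_NEW(ptr)     \
        do {                                 \
            unmap_xen_pagetable_new(ptr);    \
            (ptr) = NULL;                    \
        } while ( false )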

I certainly hope compilers can become smart enough. We can also
eliminate useless assignment in a later series. I didn't spend much time
going over all the places during the development of this series.

Wei.

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs
@ 2019-04-09 12:23       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:23 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, roger.pau, wei.liu2, jbeulich, andrew.cooper3

On Tue, Mar 19, 2019 at 04:45:24PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > This then requires moving the declaration of the root page table mfn
> > into mm.h and modifying setup_cpu_root_pgt to have a single exit path.
> > 
> > We also need to force map_domain_page to use direct map when switching
> > per-domain mappings. This is contrary to our end goal of removing
> > direct map, but this will be removed once we make map_domain_page
> > context-switch safe in another (large) patch series.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/arch/x86/domain.c           | 15 ++++++++++++---
> >  xen/arch/x86/domain_page.c      |  2 +-
> >  xen/arch/x86/mm.c               |  2 +-
> >  xen/arch/x86/pv/domain.c        |  2 +-
> >  xen/arch/x86/smpboot.c          | 40 +++++++++++++++++++++++++++-------------
> >  xen/include/asm-x86/mm.h        |  2 ++
> >  xen/include/asm-x86/processor.h |  1 -
> >  7 files changed, 44 insertions(+), 20 deletions(-)
> > 
> > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > index 32dc4253ff..603495e55a 100644
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -68,6 +68,7 @@
> >  #include <asm/pv/domain.h>
> >  #include <asm/pv/mm.h>
> >  #include <asm/spec_ctrl.h>
> > +#include <asm/setup.h>
> >  
> >  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
> >  
> > @@ -1589,12 +1590,20 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
> >  
> >  void paravirt_ctxt_switch_to(struct vcpu *v)
> >  {
> > -    root_pgentry_t *root_pgt = this_cpu(root_pgt);
> > +    mfn_t rpt_mfn = this_cpu(root_pgt_mfn);
> >  
> > -    if ( root_pgt )
> > -        root_pgt[root_table_offset(PERDOMAIN_VIRT_START)] =
> > +    if ( !mfn_eq(rpt_mfn, INVALID_MFN) )
> > +    {
> > +        root_pgentry_t *rpt;
> > +
> > +        mapcache_override_current(INVALID_VCPU);
> 
> Can you elaborate why this is required? A comment might help. I assume
> this forces the root pt to be mapped in idle domain context instead of
> current vcpu?
> 


Sure. Frankly this change is not something I like.


  /*
   * XXX We're manipulating live page tables here. map_domain_page
   * relies on this exact snippet to work. For now, force
   * map_domain_page to use direct map. This hack can be removed once we
   * have per-cpu map_domain_page implementation.
   */
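
In context the switch path then looks roughly like this (sketch; the
value written into the PERDOMAIN_VIRT_START slot is unchanged, and I'm
quoting the restore of the override from memory):

    mapcache_override_current(INVALID_VCPU);

    rpt = map_xen_pagetable_new(rpt_mfn);
    /* update rpt[root_table_offset(PERDOMAIN_VIRT_START)] as before */
    UNMAP_XEN_PAGETABLE_NEW(rpt);

    mapcache_override_current(NULL);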

If we would rather not have this for now, we can apply this series up to
the patch to switch from xenheap to domheap.

Wei.

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e
@ 2019-04-09 12:27       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:27 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Mon, Apr 08, 2019 at 09:55:47AM -0600, Jan Beulich wrote:
> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> > @@ -4769,45 +4769,70 @@ void free_xen_pagetable_new(mfn_t mfn)
> >  
> >  static DEFINE_SPINLOCK(map_pgdir_lock);
> >  
> > +/*
> > + * Given a virtual address, return a pointer to xen's L3 entry. Caller
> > + * needs to unmap the pointer.
> > + */
> >  static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
> >  {
> >      l4_pgentry_t *pl4e;
> > +    l3_pgentry_t *pl3e = NULL;
> >  
> >      pl4e = &idle_pg_table[l4_table_offset(v)];
> >      if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
> >      {
> >          bool locking = system_state > SYS_STATE_boot;
> > -        l3_pgentry_t *l3t = alloc_xen_pagetable();
> > +        l3_pgentry_t *l3t;
> > +        mfn_t mfn;
> > +
> > +        mfn = alloc_xen_pagetable_new();
> > +        if ( mfn_eq(mfn, INVALID_MFN) )
> > +            goto out;
> 
> Any reason not to use "return NULL" here, avoiding the goto
> and label altogether in this function?

This can be done, but I wanted to have all virt_to_xen_lXe have the same
structure.

> 
> >  static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
> >  {
> >      l3_pgentry_t *pl3e;
> > +    l2_pgentry_t *pl2e = NULL;
> >  
> >      pl3e = virt_to_xen_l3e(v);
> >      if ( !pl3e )
> > -        return NULL;
> > +        goto out;
> 
> Why not keep this one the way it is?

Because there will be an out label anyway, why not use it to have a
single exit path?
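
I.e. the function ends up shaped like this (sketch, details elided):

    static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
    {
        l3_pgentry_t *pl3e;
        l2_pgentry_t *pl2e = NULL;

        pl3e = virt_to_xen_l3e(v);
        if ( !pl3e )
            goto out;

        /* ... allocate and hook up an L2 table if the slot is empty ... */
        /* ... map the L2 table and point pl2e at the relevant entry ... */

     out:
        /* Unmap whatever got mapped; the unmap helper tolerates NULL. */
        UNMAP_XEN_PAGETABLE_NEW(pl3e);
        return pl2e;
    }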

Wei.

> 
> Jan
> 
> 

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 14/55] x86/mm: rewrite xen_to_virt_l2e
@ 2019-04-09 12:27       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:27 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, andrew.cooper3, wei.liu2, jbeulich, roger.pau

On Mon, Mar 18, 2019 at 09:14:19PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > Rewrite that function to use the new APIs. Modify its callers to
> > unmap
> > the pointer returned.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> 
> nit: the commit title should be 'virt_to_xen_l2e' not 'xen_to_virt...'

Fixed. Thanks.

Wei.

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table
@ 2019-04-09 12:30       ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:30 UTC (permalink / raw)
  To: Nuernberger, Stefan
  Cc: xen-devel, roger.pau, wei.liu2, jbeulich, andrew.cooper3

On Mon, Mar 18, 2019 at 09:14:38PM +0000, Nuernberger, Stefan wrote:
> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > ---
> >  xen/arch/x86/x86_64/mm.c | 16 ++++++++++++----
> >  1 file changed, 12 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
> > index 0b85961105..216f97c95f 100644
> > --- a/xen/arch/x86/x86_64/mm.c
> > +++ b/xen/arch/x86/x86_64/mm.c
> > @@ -400,11 +400,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
> >      l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
> >      l3_pgentry_t *l3_ro_mpt = NULL;
> >      int ret = 0;
> > -    mfn_t l2_ro_mpt_mfn;
> > +    mfn_t l2_ro_mpt_mfn, l3_ro_mpt_mfn;
> >  
> >      ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
> >              & _PAGE_PRESENT);
> > -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
> > +    l3_ro_mpt_mfn = l4e_get_mfn(idle_pg_table[l4_table_offset(
> > +                                        RO_MPT_VIRT_START)]);
> > +    l3_ro_mpt = map_xen_pagetable_new(l3_ro_mpt_mfn);
> >  
> >      smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
> >      emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
> > @@ -459,8 +461,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
> >                    _PAGE_PSE));
> >              if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
> >                _PAGE_PRESENT )
> > -                pl2e = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
> > -                  l2_table_offset(va);
> > +            {
> > +                UNMAP_XEN_PAGETABLE_NEW(l2_ro_mpt);
> > +                l2_ro_mpt_mfn = l3e_get_mfn(l3_ro_mpt[l3_table_offset(va)]);
> > +                l2_ro_mpt = map_xen_pagetable_new(l2_ro_mpt_mfn);
> > +                ASSERT(l2_ro_mpt);
> 
> Do we need this assert here? What are the possibilities to recover from
> that situation? I think this should be BUG_ON or it should at least
> return through the error path.

I'm not too fussed. This is early initialisation of xen. If this doesn't
work, all bets are off.

Wei.

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
@ 2019-04-09 12:49         ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-04-09 12:49 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 09.04.19 at 14:22, <wei.liu2@citrix.com> wrote:
> On Fri, Mar 15, 2019 at 09:36:37AM -0600, Jan Beulich wrote:
>> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
>> > The pl2e and pl1e variables are heavily (ab)used in that function. It
>> > is fine at the moment because all page tables are always mapped so
>> > there is no need to track the life time of each variable.
>> > 
>> > We will soon have the requirement to map and unmap page tables. We
>> > need to track the life time of each variable to avoid leakage.
>> > 
>> > Introduce some l{1,2}t variables with limited scope so that we can
>> > track life time of pointers to xen page tables more easily.
>> 
>> But you retain some uses of the old variables, and to be honest it's
>> not really clear to me by what criteria (and having multiple instances
>> of a variable name in a single function isn't necessarily less confusing).
>> I think we either stick to what's there (doesn't look too bad to me),
>> or you switch to scope restricted page table pointers throughout the
>> function, such that the function scope symbols can go away
>> altogether.
> 
> I thought my commit message was clear enough: it helped tracking the
> lifetime of pointers more easily. There is nothing new in this trick:
> the fewer variables of the same name the better.
> 
> In this patch, places where it is clear that using local scope variables
> suffices are changed to use local variables.
> 
> In the end, pl*e are only used to point to entries which point to the
> page tables related to the linear address being mapped. They won't be
> used to point to any intermediate pointers which are used to manipulate
> page tables.
> 
> We can't eliminate function scope variables because they need to stay
> function scope -- see later patches where they are unmapped in the out
> path.

Except that I'm not really happy with that approach either (not
only but also because of the proliferation of goto-s).

Jan



^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen
@ 2019-04-09 12:50         ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-04-09 12:50 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 09.04.19 at 14:22, <wei.liu2@citrix.com> wrote:
> On Fri, Mar 15, 2019 at 09:40:39AM -0600, Jan Beulich wrote:
>> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
>> > We will soon need to clean up mappings whenever the outermost loop is
>> > ended. Add a new label and turn relevant continue's into goto's.
>> 
>> To be honest, I was on the edge of already suggesting less use
>> of goto in the previous patch. This one definitely goes too far
>> for my taste - I can somehow live with goto-s used for error
>> handling, but please not much more. Is there really no better
>> way?
>> 
> 
> It may be possible to call UNMAP at the beginning of the loop instead.
> How does that sound to you?

I think that's going to be better.

Jan



^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e
@ 2019-04-09 12:52         ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-04-09 12:52 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 09.04.19 at 14:27, <wei.liu2@citrix.com> wrote:
> On Mon, Apr 08, 2019 at 09:55:47AM -0600, Jan Beulich wrote:
>> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
>> > @@ -4769,45 +4769,70 @@ void free_xen_pagetable_new(mfn_t mfn)
>> >  
>> >  static DEFINE_SPINLOCK(map_pgdir_lock);
>> >  
>> > +/*
>> > + * Given a virtual address, return a pointer to xen's L3 entry. Caller
>> > + * needs to unmap the pointer.
>> > + */
>> >  static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
>> >  {
>> >      l4_pgentry_t *pl4e;
>> > +    l3_pgentry_t *pl3e = NULL;
>> >  
>> >      pl4e = &idle_pg_table[l4_table_offset(v)];
>> >      if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
>> >      {
>> >          bool locking = system_state > SYS_STATE_boot;
>> > -        l3_pgentry_t *l3t = alloc_xen_pagetable();
>> > +        l3_pgentry_t *l3t;
>> > +        mfn_t mfn;
>> > +
>> > +        mfn = alloc_xen_pagetable_new();
>> > +        if ( mfn_eq(mfn, INVALID_MFN) )
>> > +            goto out;
>> 
>> Any reason not to use "return NULL" here, avoiding the goto
>> and label altogether in this function?
> 
> This can be done, but I wanted to have all virt_to_xen_lXe have the same
> structure.

I sort of guessed this but for both this and ...

>> >  static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
>> >  {
>> >      l3_pgentry_t *pl3e;
>> > +    l2_pgentry_t *pl2e = NULL;
>> >  
>> >      pl3e = virt_to_xen_l3e(v);
>> >      if ( !pl3e )
>> > -        return NULL;
>> > +        goto out;
>> 
>> Why not keep this one the way it is?
> 
> Because there will be an out label anyway, why not use it to have a
> single exit path?

.. this: To me, the fewer goto-s the better, especially when - like
here - they can be replaced with just a single (non-block) statement.
But yes, this is personal taste ...

Jan



^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs
@ 2019-04-09 12:56         ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-04-09 12:56 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Stefan Nuernberger, Roger Pau Monne

>>> On 09.04.19 at 14:23, <wei.liu2@citrix.com> wrote:
> On Mon, Mar 18, 2019 at 09:14:44PM +0000, Nuernberger, Stefan wrote:
>> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
>> > This requires storing the MFN instead of linear address of the L4
>> > table. Adjust code accordingly.
>> > 
>> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> > ---
>> >  xen/arch/x86/efi/runtime.h | 12 +++++++++---
>> >  xen/common/efi/boot.c      |  8 ++++++--
>> >  xen/common/efi/efi.h       |  3 ++-
>> >  xen/common/efi/runtime.c   |  8 ++++----
>> >  4 files changed, 21 insertions(+), 10 deletions(-)
>> > 
>> > diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
>> > index d9eb8f5c27..277d237953 100644
>> > --- a/xen/arch/x86/efi/runtime.h
>> > +++ b/xen/arch/x86/efi/runtime.h
>> > @@ -2,11 +2,17 @@
>> >  #include <asm/mc146818rtc.h>
>> >  
>> >  #ifndef COMPAT
>> > -l4_pgentry_t *__read_mostly efi_l4_pgtable;
>> > +mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
>> >  
>> >  void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
>> >  {
>> > -    if ( efi_l4_pgtable )
>> > -        l4e_write(efi_l4_pgtable + l4idx, l4e);
>> > +    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
>> > +    {
>> > +        l4_pgentry_t *l4t;
>> > +
>> > +        l4t = map_xen_pagetable_new(efi_l4_mfn);
>> > +        l4e_write(l4t + l4idx, l4e);
>> > +        UNMAP_XEN_PAGETABLE_NEW(l4t);
>> 
>> nit: This doesn't need the implicit NULL assignment. The non-macro
>> unmap_xen_pagetable_new is sufficient here. Do you have a guideline on
>> when to use the macro over the function call? I assume the compiler
>> eliminates most of the dead stores on function return.
>> 
> 
> I introduced the macro such that any misuse of a stale pointer could be
> caught as early as possible. We've got a similar construct -- see XFREE.
> 
> I certainly hope compilers can become smart enough. We can also
> eliminate useless assignment in a later series. I didn't spend much time
> going over all the places during the development of this series.

But it surely wouldn't be too hard to switch to the lower-case
sibling function in cases where the NULL assignment is pointless?
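
I.e. in the quoted hunk simply (sketch):

        l4_pgentry_t *l4t = map_xen_pagetable_new(efi_l4_mfn);

        l4e_write(l4t + l4idx, l4e);
        unmap_xen_pagetable_new(l4t); /* l4t goes out of scope right here */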

Jan



^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
@ 2019-04-09 12:59           ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 12:59 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Wei Liu, xen-devel, Roger Pau Monne

On Tue, Apr 09, 2019 at 06:49:34AM -0600, Jan Beulich wrote:
> >>> On 09.04.19 at 14:22, <wei.liu2@citrix.com> wrote:
> > On Fri, Mar 15, 2019 at 09:36:37AM -0600, Jan Beulich wrote:
> >> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
> >> > The pl2e and pl1e variables are heavily (ab)used in that function. It
> >> > is fine at the moment because all page tables are always mapped so
> >> > there is no need to track the life time of each variable.
> >> > 
> >> > We will soon have the requirement to map and unmap page tables. We
> >> > need to track the life time of each variable to avoid leakage.
> >> > 
> >> > Introduce some l{1,2}t variables with limited scope so that we can
> >> > track life time of pointers to xen page tables more easily.
> >> 
> >> But you retain some uses of the old variables, and to be honest it's
> >> not really clear to me by what criteria (and having multiple instances
> >> of a variable name in a single function isn't necessarily less confusing).
> >> I think we either stick to what's there (doesn't look too bad to me),
> >> or you switch to scope restricted page table pointers throughout the
> >> function, such that the function scope symbols can go away
> >> altogether.
> > 
> > I thought my commit message was clear enough: it helped tracking the
> > lifetime of pointers more easily. There is nothing new in this trick:
> > the fewer variables of the same name the better.
> > 
> > In this patch, places where it is clear that using local scope variables
> > suffices are changed to use local variables.
> > 
> > In the end, pl*e are only used to point to entries which point to the
> > page tables related to the linear address being mapped. They won't be
> > used to point to any intermediate pointers which are used to manipulate
> > page tables.
> > 
> > We can't eliminate function scope variables because they need to stay
> > function scope -- see later patches where they are unmapped in the out
> > path.
> 
> Except that I'm not really happy with that approach either (not
> only but also because of the proliferation of goto-s).

The root cause is there are far too many exit paths in this function.
Previously it was okay because we didn't need to clean up. It won't be
okay once we have to.

I picked the option to unify them into one or two places.

One option is to duplicate cleanup code in each exit path. That's
repetitive and error-prone IMHO.

Another option is to break up map_pages_to_xen into smaller functions which
don't require gotos but require more mapping / unmapping. I thought
about that but it was too much faff without getting us to where we
wanted to be.

Pick your poison. There isn't an easy solution to this...
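
For the record, option 3 would look something like this per level
(sketch only; locking, the race with a concurrent shatter and the
actual entry values are all elided):

    /* Shatter a 1G slot: allocate, fill and install an L2 table. */
    static int shatter_l3e(l3_pgentry_t *pl3e)
    {
        mfn_t mfn = alloc_xen_pagetable_new();
        l2_pgentry_t *l2t;

        if ( mfn_eq(mfn, INVALID_MFN) )
            return -ENOMEM;

        l2t = map_xen_pagetable_new(mfn);
        /* ... fill l2t[] from *pl3e as the open-coded version does ... */
        /* ... write the new L3e, or free the L2 table if we raced ... */
        UNMAP_XEN_PAGETABLE_NEW(l2t);

        return 0;
    }

The caller then has to map the L2 table again if it needs it afterwards,
which is the extra mapping / unmapping I mentioned.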

Wei.

> 
> Jan
> 
> 

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs
@ 2019-04-09 13:00           ` Wei Liu
  0 siblings, 0 replies; 119+ messages in thread
From: Wei Liu @ 2019-04-09 13:00 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Wei Liu, xen-devel, Stefan Nuernberger, Roger Pau Monne

On Tue, Apr 09, 2019 at 06:56:56AM -0600, Jan Beulich wrote:
> >>> On 09.04.19 at 14:23, <wei.liu2@citrix.com> wrote:
> > On Mon, Mar 18, 2019 at 09:14:44PM +0000, Nuernberger, Stefan wrote:
> >> On Thu, 2019-02-07 at 16:44 +0000, Wei Liu wrote:
> >> > This requires storing the MFN instead of linear address of the L4
> >> > table. Adjust code accordingly.
> >> > 
> >> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >> > ---
> >> >  xen/arch/x86/efi/runtime.h | 12 +++++++++---
> >> >  xen/common/efi/boot.c      |  8 ++++++--
> >> >  xen/common/efi/efi.h       |  3 ++-
> >> >  xen/common/efi/runtime.c   |  8 ++++----
> >> >  4 files changed, 21 insertions(+), 10 deletions(-)
> >> > 
> >> > diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
> >> > index d9eb8f5c27..277d237953 100644
> >> > --- a/xen/arch/x86/efi/runtime.h
> >> > +++ b/xen/arch/x86/efi/runtime.h
> >> > @@ -2,11 +2,17 @@
> >> >  #include <asm/mc146818rtc.h>
> >> >  
> >> >  #ifndef COMPAT
> >> > -l4_pgentry_t *__read_mostly efi_l4_pgtable;
> >> > +mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
> >> >  
> >> >  void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
> >> >  {
> >> > -    if ( efi_l4_pgtable )
> >> > -        l4e_write(efi_l4_pgtable + l4idx, l4e);
> >> > +    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
> >> > +    {
> >> > +        l4_pgentry_t *l4t;
> >> > +
> >> > +        l4t = map_xen_pagetable_new(efi_l4_mfn);
> >> > +        l4e_write(l4t + l4idx, l4e);
> >> > +        UNMAP_XEN_PAGETABLE_NEW(l4t);
> >> 
> >> nit: This doesn't need the implicit NULL assignment. The non-macro
> >> unmap_xen_pagetable_new is sufficient here. Do you have a guideline on
> >> when to use the macro over the function call? I assume the compiler
> >> eliminates most of the dead stores on function return.
> >> 
> > 
> > I introduced the macro such that any misuse of stale pointer could be
> > caught as early as possible. We've got similar construct -- see XFREE.
> > 
> > I certainly hope compilers can become smart enough. We can also
> > eliminate useless assignment in a later series. I didn't spend much time
> > going over all the places during the development of this series.
> 
> But it surely wouldn't be too hard to switch to the lower-case
> sibling function in cases where the NULL assignment is pointless?

It can be done when I'm reasonably sure about the final incarnation of
this series.
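
For reference, the macro is only a thin wrapper in the spirit of XFREE.
Modulo the exact spelling in the patch, it amounts to:

    #define UNMAP_XEN_PAGETABLE_NEW(ptr)      \
        do {                                  \
            unmap_xen_pagetable_new(ptr);     \
            (ptr) = NULL;                     \
        } while ( false )

So dropping the NULL store is just a matter of using the lower-case
function at call sites where the pointer goes out of scope right away.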

Wei.

> 
> Jan
> 
> 

^ permalink raw reply	[flat|nested] 119+ messages in thread

* Re: [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
@ 2019-04-09 13:20             ` Jan Beulich
  0 siblings, 0 replies; 119+ messages in thread
From: Jan Beulich @ 2019-04-09 13:20 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, xen-devel, Roger Pau Monne

>>> On 09.04.19 at 14:59, <wei.liu2@citrix.com> wrote:
> On Tue, Apr 09, 2019 at 06:49:34AM -0600, Jan Beulich wrote:
>> >>> On 09.04.19 at 14:22, <wei.liu2@citrix.com> wrote:
>> > On Fri, Mar 15, 2019 at 09:36:37AM -0600, Jan Beulich wrote:
>> >> >>> On 07.02.19 at 17:44, <wei.liu2@citrix.com> wrote:
>> >> > The pl2e and pl1e variables are heavily (ab)used in that function. It
>> >> > is fine at the moment because all page tables are always mapped so
>> >> > there is no need to track the life time of each variable.
>> >> > 
>> >> > We will soon have the requirement to map and unmap page tables. We
>> >> > need to track the life time of each variable to avoid leakage.
>> >> > 
>> >> > Introduce some l{1,2}t variables with limited scope so that we can
>> >> > track life time of pointers to xen page tables more easily.
>> >> 
>> >> But you retain some uses of the old variables, and to be honest it's
>> >> not really clear to me by what criteria (and having multiple instances
>> >> of a variable name in a single function isn't necessarily less confusing).
>> >> I think we either stick to what's there (doesn't look too bad to me),
>> >> or you switch to scope restricted page table pointers throughout the
>> >> function, such that the function scope symbols can go away
>> >> altogether).
>> > 
>> > I thought my commit message was clear enough: it helped tracking the
>> > lifetime of pointers more easily. There is nothing new in this trick:
>> > the less variables of the same name the better.
>> > 
>> > In this patch, places where it is clear that using local scope variables
>> > suffices are changed to use local variables.
>> > 
>> > In the end, pl*e are only used to point to entries which point to the
>> > page tables related to the linear address being mapped. They won't be
>> > used to point to any intermediate pointers which are used to manipulate
>> > page tables.
>> > 
>> > We can't eliminate function scope variables because they need to stay
>> > function scope -- see later patches where they are unmapped in the out
>> > path.
>> 
>> Except that I'm not really happy with that approach either (not
>> only but also because of the proliferation of goto-s).
> 
> The root cause is that there are far too many exit paths in this
> function. Previously it was okay because we didn't need to clean up;
> it won't be okay once we have to.
> 
> I picked the option of unifying the cleanup into one or two places.
> 
> Another option is to duplicate the cleanup code in each exit path.
> That's repetitive and error-prone IMHO.
> 
> A third option is to break up map_pages_to_xen into smaller functions
> which don't require gotos but do require more mapping / unmapping. I
> thought about that, but it was too much faff without getting us to
> where we wanted to be.
> 
> Pick your poison. There isn't an easy solution to this...

Breaking up the function is quite likely going to become very desirable
for 5-level page tables anyway. But I'm not going to insist you do this
work as a prereq here.

Jan



^ permalink raw reply	[flat|nested] 119+ messages in thread

end of thread, other threads:[~2019-04-09 13:21 UTC | newest]

Thread overview: 119+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-07 16:44 [PATCH RFC 00/55] x86: use domheap page for xen page tables Wei Liu
2019-02-07 16:44 ` [PATCH RFC 01/55] x86/mm: defer clearing page in virt_to_xen_lXe Wei Liu
2019-03-15 14:38   ` Jan Beulich
2019-04-09 12:21     ` Wei Liu
2019-04-09 12:21       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 02/55] x86: move some xen mm function declarations Wei Liu
2019-03-15 14:42   ` Jan Beulich
2019-02-07 16:44 ` [PATCH RFC 03/55] x86: introduce a new set of APIs to manage Xen page tables Wei Liu
2019-02-07 16:44 ` [PATCH RFC 04/55] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Wei Liu
2019-03-15 15:36   ` Jan Beulich
2019-04-09 12:22     ` Wei Liu
2019-04-09 12:22       ` [Xen-devel] " Wei Liu
2019-04-09 12:49       ` Jan Beulich
2019-04-09 12:49         ` [Xen-devel] " Jan Beulich
2019-04-09 12:59         ` Wei Liu
2019-04-09 12:59           ` [Xen-devel] " Wei Liu
2019-04-09 13:20           ` Jan Beulich
2019-04-09 13:20             ` [Xen-devel] " Jan Beulich
2019-02-07 16:44 ` [PATCH RFC 05/55] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 06/55] x86/mm: map_pages_to_xen should have one exit path Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:22     ` Wei Liu
2019-04-09 12:22       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 07/55] x86/mm: add an end_of_loop label in map_pages_to_xen Wei Liu
2019-03-15 15:40   ` Jan Beulich
2019-04-09 12:22     ` Wei Liu
2019-04-09 12:22       ` [Xen-devel] " Wei Liu
2019-04-09 12:50       ` Jan Beulich
2019-04-09 12:50         ` [Xen-devel] " Jan Beulich
2019-03-18 21:14   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 08/55] x86/mm: make sure there is one exit path for modify_xen_mappings Wei Liu
2019-02-07 16:44 ` [PATCH RFC 09/55] x86/mm: add an end_of_loop label in modify_xen_mappings Wei Liu
2019-02-07 16:44 ` [PATCH RFC 10/55] x86/mm: change pl2e to l2t in virt_to_xen_l2e Wei Liu
2019-02-07 16:44 ` [PATCH RFC 11/55] x86/mm: change pl1e to l1t in virt_to_xen_l1e Wei Liu
2019-02-07 16:44 ` [PATCH RFC 12/55] x86/mm: change pl3e to l3t in virt_to_xen_l3e Wei Liu
2019-02-07 16:44 ` [PATCH RFC 13/55] x86/mm: rewrite virt_to_xen_l3e Wei Liu
2019-04-08 15:55   ` Jan Beulich
2019-04-08 15:55     ` [Xen-devel] " Jan Beulich
2019-04-09 12:27     ` Wei Liu
2019-04-09 12:27       ` [Xen-devel] " Wei Liu
2019-04-09 12:52       ` Jan Beulich
2019-04-09 12:52         ` [Xen-devel] " Jan Beulich
2019-02-07 16:44 ` [PATCH RFC 14/55] x86/mm: rewrite xen_to_virt_l2e Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:27     ` Wei Liu
2019-04-09 12:27       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 15/55] x86/mm: rewrite virt_to_xen_l1e Wei Liu
2019-02-07 16:44 ` [PATCH RFC 16/55] x86/mm: switch to new APIs in map_pages_to_xen Wei Liu
2019-02-08 17:58   ` Wei Liu
2019-03-18 21:14     ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 17/55] x86/mm: drop lXe_to_lYe invocations " Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 18/55] x86/mm: switch to new APIs in modify_xen_mappings Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:22     ` Wei Liu
2019-04-09 12:22       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 19/55] x86/mm: drop lXe_to_lYe invocations from modify_xen_mappings Wei Liu
2019-02-07 16:44 ` [PATCH RFC 20/55] x86/mm: switch to new APIs in arch_init_memory Wei Liu
2019-02-07 16:44 ` [PATCH RFC 21/55] x86_64/mm: introduce pl2e in paging_init Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 22/55] x86_64/mm: switch to new APIs " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 23/55] x86_64/mm: drop l4e_to_l3e invocation from paging_init Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:22     ` Wei Liu
2019-04-09 12:22       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 24/55] x86_64/mm.c: remove code that serves no purpose in setup_m2p_table Wei Liu
2019-02-07 16:44 ` [PATCH RFC 25/55] x86_64/mm: introduce pl2e " Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 26/55] x86_64/mm: switch to new APIs " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 27/55] x86_64/mm: drop lXe_to_lYe invocations from setup_m2p_table Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:30     ` Wei Liu
2019-04-09 12:30       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 28/55] efi: use new page table APIs in copy_mapping Wei Liu
2019-02-07 16:44 ` [PATCH RFC 29/55] efi: avoid using global variable " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 30/55] efi: use new page table APIs in efi_init_memory Wei Liu
2019-02-07 16:44 ` [PATCH RFC 31/55] efi: add emacs block to boot.c Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:23     ` Wei Liu
2019-04-09 12:23       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 32/55] efi: switch EFI L4 table to use new APIs Wei Liu
2019-03-18 21:14   ` Nuernberger, Stefan
2019-04-09 12:23     ` Wei Liu
2019-04-09 12:23       ` [Xen-devel] " Wei Liu
2019-04-09 12:56       ` Jan Beulich
2019-04-09 12:56         ` [Xen-devel] " Jan Beulich
2019-04-09 13:00         ` Wei Liu
2019-04-09 13:00           ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 33/55] x86/smpboot: add emacs block Wei Liu
2019-02-07 16:44 ` [PATCH RFC 34/55] x86/smpboot: clone_mapping should have one exit path Wei Liu
2019-02-07 16:44 ` [PATCH RFC 35/55] x86/smpboot: switch pl3e to use new APIs in clone_mapping Wei Liu
2019-02-07 16:44 ` [PATCH RFC 36/55] x86/smpboot: switch pl2e " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 37/55] x86/smpboot: switch pl1e " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 38/55] x86/smpboot: drop lXe_to_lYe invocations from cleanup_cpu_root_pgt Wei Liu
2019-02-07 16:44 ` [PATCH RFC 39/55] x86: switch root_pgt to mfn_t and use new APIs Wei Liu
2019-03-19 16:45   ` Nuernberger, Stefan
2019-04-09 12:23     ` Wei Liu
2019-04-09 12:23       ` [Xen-devel] " Wei Liu
2019-02-07 16:44 ` [PATCH RFC 40/55] x86/shim: map and unmap page tables in replace_va_mapping Wei Liu
2019-02-07 16:44 ` [PATCH RFC 41/55] x86_64/mm: map and unmap page tables in m2p_mapped Wei Liu
2019-03-19 16:45   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 42/55] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Wei Liu
2019-03-19 16:45   ` Nuernberger, Stefan
2019-02-07 16:44 ` [PATCH RFC 43/55] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Wei Liu
2019-02-07 16:44 ` [PATCH RFC 44/55] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Wei Liu
2019-02-07 16:44 ` [PATCH RFC 45/55] x86_64/mm: map and unmap page tables in setup_compat_m2p_table Wei Liu
2019-02-07 16:44 ` [PATCH RFC 46/55] x86_64/mm: map and unmap page tables in cleanup_frame_table Wei Liu
2019-02-07 16:44 ` [PATCH RFC 47/55] x86_64/mm: map and unmap page tables in subarch_init_memory Wei Liu
2019-02-07 16:44 ` [PATCH RFC 48/55] x86_64/mm: map and unmap page tables in subarch_memory_op Wei Liu
2019-02-07 16:44 ` [PATCH RFC 49/55] x86/smpboot: remove lXe_to_lYe in cleanup_cpu_root_pgt Wei Liu
2019-02-07 16:44 ` [PATCH RFC 50/55] x86/pv: properly map and unmap page tables in mark_pv_pt_pages_rdonly Wei Liu
2019-02-07 16:44 ` [PATCH RFC 51/55] x86/pv: properly map and unmap page table in dom0_construct_pv Wei Liu
2019-02-07 16:44 ` [PATCH RFC 52/55] x86: remove lXe_to_lYe in __start_xen Wei Liu
2019-02-07 16:44 ` [PATCH RFC 53/55] x86/mm: drop old page table APIs Wei Liu
2019-02-07 16:44 ` [PATCH RFC 54/55] x86: switch to use domheap page for page tables Wei Liu
2019-02-07 16:44 ` [PATCH RFC 55/55] x86/mm: drop _new suffix from page table APIs Wei Liu
2019-03-18 21:14 ` [PATCH RFC 00/55] x86: use domheap page for xen page tables Nuernberger, Stefan
2019-03-28 12:52 ` Nuernberger, Stefan
