* [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs
@ 2019-10-02 17:16 Hongyan Xia
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 1/9] x86: move some xen mm function declarations Hongyan Xia
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This batch adds an alternative alloc-map-unmap-free API for Xen PTEs
alongside the normal alloc-free API on the xenheap, in preparation for
switching to the domheap for Xen page tables. Since map and unmap are
basically no-ops for now, and the other changes are cosmetic to ease
future patches, this batch does not introduce any functional changes.
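
As an illustration, a call site that needs a fresh page table is
expected to follow this pattern with the new API (just a sketch, with
error handling and locking elided; map and unmap only become meaningful
after the domheap switch):

    mfn_t mfn = alloc_xen_pagetable_new();
    l1_pgentry_t *l1t;

    if ( mfn_eq(mfn, INVALID_MFN) )
        return -ENOMEM;

    l1t = map_xen_pagetable_new(mfn);
    clear_page(l1t);
    /* ... fill in entries, install mfn into the parent table ... */
    UNMAP_XEN_PAGETABLE_NEW(l1t);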

tree:
https://xenbits.xen.org/git-http/people/hx242/xen.git xen_pte_map-v3

v2: 
https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg03221.html

---
Changed since v2:
- split into a smaller series
- drop the clear_page optimisation as Wei suggests
- rebase

Changed since v1:
- squash some commits
- merge bug fixes into this first batch
- rebase against latest master

Wei Liu (9):
  x86: move some xen mm function declarations
  x86: introduce a new set of APIs to manage Xen page tables
  x86/mm: introduce l{1,2}t local variables to map_pages_to_xen
  x86/mm: introduce l{1,2}t local variables to modify_xen_mappings
  x86/mm: map_pages_to_xen should have one exit path
  x86/mm: add an end_of_loop label in map_pages_to_xen
  x86/mm: make sure there is one exit path for modify_xen_mappings
  x86/mm: add an end_of_loop label in modify_xen_mappings
  x86/mm: change pl*e to l*t in virt_to_xen_l*e

 xen/arch/x86/mm.c          | 300 +++++++++++++++++++++++--------------
 xen/include/asm-x86/mm.h   |  16 ++
 xen/include/asm-x86/page.h |   5 -
 3 files changed, 206 insertions(+), 115 deletions(-)

-- 
2.17.1



* [Xen-devel] [PATCH v3 1/9] x86: move some xen mm function declarations
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-15 15:12   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

They were put into page.h but mm.h is more appropriate.

The real reason is that I will be adding some new functions which
take mfn_t. It turns out that is a bit difficult to do in page.h.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/include/asm-x86/mm.h   | 5 +++++
 xen/include/asm-x86/page.h | 5 -----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 3863e4ce57..2800106327 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -630,4 +630,9 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int id, unsigned long frame,
                           unsigned int nr_frames, xen_pfn_t mfn_list[]);
 
+/* Allocator functions for Xen pagetables. */
+void *alloc_xen_pagetable(void);
+void free_xen_pagetable(void *v);
+l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
+
 #endif /* __ASM_X86_MM_H__ */
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index c1e92937c0..05a8b1efa6 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -345,11 +345,6 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 
 #ifndef __ASSEMBLY__
 
-/* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
-l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-
 /* Convert between PAT/PCD/PWT embedded in PTE flags and 3-bit cacheattr. */
 static inline unsigned int pte_flags_to_cacheattr(unsigned int flags)
 {
-- 
2.17.1



* [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 1/9] x86: move some xen mm function declarations Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-15 15:23   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 3/9] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We are going to switch to using domheap pages for page tables.
A new set of APIs is introduced to allocate, map, unmap and free pages
for page tables.

The allocation and deallocation work on mfn_t rather than page_info,
because they are required to work even before the frame table is set up.

Implement the old functions with the new ones. We will rewrite, site
by site, other mm functions that manipulate page tables to use the new
APIs.

Note that these new APIs still use xenheap pages underneath, and no
actual mapping or unmapping is done, so that we don't break Xen halfway.
They will be switched to use the domheap and dynamic mappings once usage
of the old APIs is eliminated.

No functional change intended in this patch.
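
As a rough illustration (not itself part of this patch), a shattering
path such as the one in map_pages_to_xen is expected to end up looking
roughly like this once converted, reusing the local variables of the
surrounding function (locking and the re-check of the L3 entry elided):

    mfn_t l2mfn = alloc_xen_pagetable_new();
    l2_pgentry_t *l2t;

    if ( mfn_eq(l2mfn, INVALID_MFN) )
        return -ENOMEM;

    l2t = map_xen_pagetable_new(l2mfn);
    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
        l2e_write(l2t + i,
                  l2e_from_pfn(l3e_get_pfn(ol3e) + (i << PAGETABLE_ORDER),
                               l3e_get_flags(ol3e)));
    l3e_write_atomic(pl3e, l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
    UNMAP_XEN_PAGETABLE_NEW(l2t);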

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c        | 39 ++++++++++++++++++++++++++++++++++-----
 xen/include/asm-x86/mm.h | 11 +++++++++++
 2 files changed, 45 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 99816fc67c..88a15ecdf2 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -119,6 +119,7 @@
 #include <xen/efi.h>
 #include <xen/grant_table.h>
 #include <xen/hypercall.h>
+#include <xen/mm.h>
 #include <asm/paging.h>
 #include <asm/shadow.h>
 #include <asm/page.h>
@@ -4847,22 +4848,50 @@ int mmcfg_intercept_write(
 }
 
 void *alloc_xen_pagetable(void)
+{
+    mfn_t mfn;
+
+    mfn = alloc_xen_pagetable_new();
+    ASSERT(!mfn_eq(mfn, INVALID_MFN));
+
+    return map_xen_pagetable_new(mfn);
+}
+
+void free_xen_pagetable(void *v)
+{
+    if ( system_state != SYS_STATE_early_boot )
+        free_xen_pagetable_new(virt_to_mfn(v));
+}
+
+mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
         void *ptr = alloc_xenheap_page();
 
         BUG_ON(!hardware_domain && !ptr);
-        return ptr;
+        return virt_to_mfn(ptr);
     }
 
-    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
+    return alloc_boot_pages(1, 1);
 }
 
-void free_xen_pagetable(void *v)
+void *map_xen_pagetable_new(mfn_t mfn)
 {
-    if ( system_state != SYS_STATE_early_boot )
-        free_xenheap_page(v);
+    return mfn_to_virt(mfn_x(mfn));
+}
+
+/* v can point to an entry within a table or be NULL */
+void unmap_xen_pagetable_new(void *v)
+{
+    /* XXX still using xenheap page, no need to do anything.  */
+}
+
+/* mfn can be INVALID_MFN */
+void free_xen_pagetable_new(mfn_t mfn)
+{
+    if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
+        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2800106327..80173eb4c3 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -633,6 +633,17 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
 /* Allocator functions for Xen pagetables. */
 void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
+mfn_t alloc_xen_pagetable_new(void);
+void *map_xen_pagetable_new(mfn_t mfn);
+void unmap_xen_pagetable_new(void *v);
+void free_xen_pagetable_new(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
+    do {                                \
+        unmap_xen_pagetable_new((ptr)); \
+        (ptr) = NULL;                   \
+    } while (0)
+
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
 #endif /* __ASM_X86_MM_H__ */
-- 
2.17.1



* [Xen-devel] [PATCH v3 3/9] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 1/9] x86: move some xen mm function declarations Hongyan Xia
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-20 16:08   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 4/9] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

The pl2e and pl1e variables are heavily (ab)used in that function. It
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon have the requirement to map and unmap page tables. We
need to track the lifetime of each variable to avoid leakage.

Introduce some l{1,2}t variables with limited scope so that we can
track the lifetime of pointers to Xen page tables more easily.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 75 ++++++++++++++++++++++++++---------------------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 88a15ecdf2..23066c492e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5061,10 +5061,12 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl2e = l3e_to_l2e(ol3e);
+                    l2_pgentry_t *l2t;
+
+                    l2t = l3e_to_l2e(ol3e);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( !(l2e_get_flags(ol2e) & _PAGE_PRESENT) )
                             continue;
                         if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -5072,21 +5074,22 @@ int map_pages_to_xen(
                         else
                         {
                             unsigned int j;
+                            l1_pgentry_t *l1t;
 
-                            pl1e = l2e_to_l1e(ol2e);
+                            l1t = l2e_to_l1e(ol2e);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
-                                flush_flags(l1e_get_flags(pl1e[j]));
+                                flush_flags(l1e_get_flags(l1t[j]));
                         }
                     }
                     flush_area(virt, flush_flags);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
                             free_xen_pagetable(l2e_to_l1e(ol2e));
                     }
-                    free_xen_pagetable(pl2e);
+                    free_xen_pagetable(l2t);
                 }
             }
 
@@ -5102,6 +5105,7 @@ int map_pages_to_xen(
         {
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
+            l2_pgentry_t *l2t;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5123,12 +5127,12 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = alloc_xen_pagetable();
-            if ( pl2e == NULL )
+            l2t = alloc_xen_pagetable();
+            if ( l2t == NULL )
                 return -ENOMEM;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(ol3e)));
@@ -5141,15 +5145,15 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5177,11 +5181,13 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl1e = l2e_to_l1e(ol2e);
+                    l1_pgentry_t *l1t;
+
+                    l1t = l2e_to_l1e(ol2e);
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                        flush_flags(l1e_get_flags(pl1e[i]));
+                        flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(pl1e);
+                    free_xen_pagetable(l1t);
                 }
             }
 
@@ -5203,6 +5209,7 @@ int map_pages_to_xen(
             {
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
+                l1_pgentry_t *l1t;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5222,12 +5229,12 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = alloc_xen_pagetable();
-                if ( pl1e == NULL )
+                l1t = alloc_xen_pagetable();
+                if ( l1t == NULL )
                     return -ENOMEM;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
 
@@ -5239,15 +5246,15 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
 
             pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
@@ -5272,6 +5279,7 @@ int map_pages_to_xen(
                     ((1u << PAGETABLE_ORDER) - 1)) == 0)) )
             {
                 unsigned long base_mfn;
+                l1_pgentry_t *l1t;
 
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5295,11 +5303,11 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = l2e_to_l1e(ol2e);
-                base_mfn = l1e_get_pfn(*pl1e) & ~(L1_PAGETABLE_ENTRIES - 1);
-                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++, pl1e++ )
-                    if ( (l1e_get_pfn(*pl1e) != (base_mfn + i)) ||
-                         (l1e_get_flags(*pl1e) != flags) )
+                l1t = l2e_to_l1e(ol2e);
+                base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
+                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+                    if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
+                         (l1e_get_flags(l1t[i]) != flags) )
                         break;
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
@@ -5325,6 +5333,7 @@ int map_pages_to_xen(
                 ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1))) )
         {
             unsigned long base_mfn;
+            l2_pgentry_t *l2t;
 
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5342,13 +5351,13 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = l3e_to_l2e(ol3e);
-            base_mfn = l2e_get_pfn(*pl2e) & ~(L2_PAGETABLE_ENTRIES *
+            l2t = l3e_to_l2e(ol3e);
+            base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
-            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ )
-                if ( (l2e_get_pfn(*pl2e) !=
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( (l2e_get_pfn(l2t[i]) !=
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
-                     (l2e_get_flags(*pl2e) != l1f_to_lNf(flags)) )
+                     (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
             if ( i == L2_PAGETABLE_ENTRIES )
             {
-- 
2.17.1



* [Xen-devel] [PATCH v3 4/9] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
                   ` (2 preceding siblings ...)
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 3/9] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-20 16:16   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 5/9] x86/mm: map_pages_to_xen should have one exit path Hongyan Xia
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

The pl2e and pl1e variables are heavily (ab)used in that function. It
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon have the requirement to map and unmap page tables. We
need to track the lifetime of each variable to avoid leakage.

Introduce some l{1,2}t variables with limited scope so that we can
track the lifetime of pointers to Xen page tables more easily.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 68 ++++++++++++++++++++++++++---------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 23066c492e..2b8e192e26 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5428,6 +5428,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
+            l2_pgentry_t *l2t;
+
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
                  ((e - v) >= (1UL << L3_PAGETABLE_SHIFT)) )
@@ -5443,11 +5445,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            pl2e = alloc_xen_pagetable();
-            if ( !pl2e )
+            l2t = alloc_xen_pagetable();
+            if ( !l2t )
                 return -ENOMEM;
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(*pl3e)));
@@ -5456,14 +5458,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         /*
@@ -5497,12 +5499,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
             else
             {
+                l1_pgentry_t *l1t;
+
                 /* PSE: shatter the superpage and try again. */
-                pl1e = alloc_xen_pagetable();
-                if ( !pl1e )
+                l1t = alloc_xen_pagetable();
+                if ( !l1t )
                     return -ENOMEM;
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
                 if ( locking )
@@ -5510,19 +5514,19 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
         }
         else
         {
-            l1_pgentry_t nl1e;
+            l1_pgentry_t nl1e, *l1t;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
@@ -5569,9 +5573,9 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 continue;
             }
 
-            pl1e = l2e_to_l1e(*pl2e);
+            l1t = l2e_to_l1e(*pl2e);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                if ( l1e_get_intpte(pl1e[i]) != 0 )
+                if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
             if ( i == L1_PAGETABLE_ENTRIES )
             {
@@ -5580,7 +5584,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(pl1e);
+                free_xen_pagetable(l1t);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5609,21 +5613,25 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             continue;
         }
 
-        pl2e = l3e_to_l2e(*pl3e);
-        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-            if ( l2e_get_intpte(pl2e[i]) != 0 )
-                break;
-        if ( i == L2_PAGETABLE_ENTRIES )
         {
-            /* Empty: zap the L3E and free the L2 page. */
-            l3e_write_atomic(pl3e, l3e_empty());
-            if ( locking )
+            l2_pgentry_t *l2t;
+
+            l2t = l3e_to_l2e(*pl3e);
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( l2e_get_intpte(l2t[i]) != 0 )
+                    break;
+            if ( i == L2_PAGETABLE_ENTRIES )
+            {
+                /* Empty: zap the L3E and free the L2 page. */
+                l3e_write_atomic(pl3e, l3e_empty());
+                if ( locking )
+                    spin_unlock(&map_pgdir_lock);
+                flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
+                free_xen_pagetable(l2t);
+            }
+            else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-            free_xen_pagetable(pl2e);
         }
-        else if ( locking )
-            spin_unlock(&map_pgdir_lock);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.17.1



* [Xen-devel] [PATCH v3 5/9] x86/mm: map_pages_to_xen should have one exit path
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
                   ` (3 preceding siblings ...)
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 4/9] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-20 16:24   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 6/9] x86/mm: add an end_of_loop label in map_pages_to_xen Hongyan Xia
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon rewrite the function to handle dynamically mapping and
unmapping of page tables.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2b8e192e26..26fcb2709b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5014,9 +5014,11 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
+    int rc = -ENOMEM;
 
 #define flush_flags(oldf) do {                 \
     unsigned int o_ = (oldf);                  \
@@ -5034,10 +5036,13 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
-        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+        pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
-            return -ENOMEM;
+        {
+            ASSERT(rc == -ENOMEM);
+            goto out;
+        }
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
@@ -5129,7 +5134,10 @@ int map_pages_to_xen(
 
             l2t = alloc_xen_pagetable();
             if ( l2t == NULL )
-                return -ENOMEM;
+            {
+                ASSERT(rc == -ENOMEM);
+                goto out;
+            }
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5158,7 +5166,10 @@ int map_pages_to_xen(
 
         pl2e = virt_to_xen_l2e(virt);
         if ( !pl2e )
-            return -ENOMEM;
+        {
+            ASSERT(rc == -ENOMEM);
+            goto out;
+        }
 
         if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
                ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
@@ -5203,7 +5214,10 @@ int map_pages_to_xen(
             {
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5231,7 +5245,10 @@ int map_pages_to_xen(
 
                 l1t = alloc_xen_pagetable();
                 if ( l1t == NULL )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5377,7 +5394,10 @@ int map_pages_to_xen(
 
 #undef flush_flags
 
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-- 
2.17.1



* [Xen-devel] [PATCH v3 6/9] x86/mm: add an end_of_loop label in map_pages_to_xen
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
                   ` (4 preceding siblings ...)
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 5/9] x86/mm: map_pages_to_xen should have one exit path Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-22 13:55   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 7/9] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up mappings whenever the outermost loop
ends. Add a new label and turn the relevant continue statements into
gotos.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 26fcb2709b..5a5f0685cc 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5102,7 +5102,7 @@ int map_pages_to_xen(
             if ( !mfn_eq(mfn, INVALID_MFN) )
                 mfn  = mfn_add(mfn, 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
             nr_mfns -= 1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT);
-            continue;
+            goto end_of_loop;
         }
 
         if ( (l3e_get_flags(ol3e) & _PAGE_PRESENT) &&
@@ -5129,7 +5129,7 @@ int map_pages_to_xen(
                 if ( !mfn_eq(mfn, INVALID_MFN) )
                     mfn = mfn_add(mfn, i);
                 nr_mfns -= i;
-                continue;
+                goto end_of_loop;
             }
 
             l2t = alloc_xen_pagetable();
@@ -5310,7 +5310,7 @@ int map_pages_to_xen(
                 {
                     if ( locking )
                         spin_unlock(&map_pgdir_lock);
-                    continue;
+                    goto end_of_loop;
                 }
 
                 if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -5365,7 +5365,7 @@ int map_pages_to_xen(
             {
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                continue;
+                goto end_of_loop;
             }
 
             l2t = l3e_to_l2e(ol3e);
@@ -5390,6 +5390,7 @@ int map_pages_to_xen(
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
+    end_of_loop:;
     }
 
 #undef flush_flags
-- 
2.17.1



* [Xen-devel] [PATCH v3 7/9] x86/mm: make sure there is one exit path for modify_xen_mappings
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
                   ` (5 preceding siblings ...)
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 6/9] x86/mm: add an end_of_loop label in map_pages_to_xen Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-20 16:26   ` Jan Beulich
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 8/9] x86/mm: add an end_of_loop label in modify_xen_mappings Hongyan Xia
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 9/9] x86/mm: change pl*e to l*t in virt_to_xen_l*e Hongyan Xia
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to handle dynamic mapping and unmapping of page
tables in said function.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5a5f0685cc..3838343b87 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5425,6 +5425,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
+    int rc = -ENOMEM;
 
     /* Set of valid PTE bits which may be altered. */
 #define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
@@ -5468,7 +5469,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             /* PAGE1GB: shatter the superpage and fall through. */
             l2t = alloc_xen_pagetable();
             if ( !l2t )
-                return -ENOMEM;
+            {
+                ASSERT(rc == -ENOMEM);
+                goto out;
+            }
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5525,7 +5530,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 /* PSE: shatter the superpage and try again. */
                 l1t = alloc_xen_pagetable();
                 if ( !l1t )
-                    return -ENOMEM;
+                {
+                    ASSERT(rc == -ENOMEM);
+                    goto out;
+                }
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5658,7 +5667,10 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     flush_area(NULL, FLUSH_TLB_GLOBAL);
 
 #undef FLAGS_MASK
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 #undef flush_area
-- 
2.17.1



* [Xen-devel] [PATCH v3 8/9] x86/mm: add an end_of_loop label in modify_xen_mappings
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
                   ` (6 preceding siblings ...)
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 7/9] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 9/9] x86/mm: change pl*e to l*t in virt_to_xen_l*e Hongyan Xia
  8 siblings, 0 replies; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up mappings whenever the outermost loop
ends. Add a new label and turn the relevant continue statements into
gotos.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 xen/arch/x86/mm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3838343b87..d7eb804f06 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5445,7 +5445,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
             v += 1UL << L3_PAGETABLE_SHIFT;
             v &= ~((1UL << L3_PAGETABLE_SHIFT) - 1);
-            continue;
+            goto end_of_loop;
         }
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
@@ -5463,7 +5463,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
                 l3e_write_atomic(pl3e, nl3e);
                 v += 1UL << L3_PAGETABLE_SHIFT;
-                continue;
+                goto end_of_loop;
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
@@ -5507,7 +5507,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
             v += 1UL << L2_PAGETABLE_SHIFT;
             v &= ~((1UL << L2_PAGETABLE_SHIFT) - 1);
-            continue;
+            goto end_of_loop;
         }
 
         if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
@@ -5581,7 +5581,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
              * skip the empty&free check.
              */
             if ( (nf & _PAGE_PRESENT) || ((v != e) && (l1_table_offset(v) != 0)) )
-                continue;
+                goto end_of_loop;
             if ( locking )
                 spin_lock(&map_pgdir_lock);
 
@@ -5600,7 +5600,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                continue;
+                goto end_of_loop;
             }
 
             l1t = l2e_to_l1e(*pl2e);
@@ -5627,7 +5627,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
          */
         if ( (nf & _PAGE_PRESENT) ||
              ((v != e) && (l2_table_offset(v) + l1_table_offset(v) != 0)) )
-            continue;
+            goto end_of_loop;
         if ( locking )
             spin_lock(&map_pgdir_lock);
 
@@ -5640,7 +5640,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         {
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            continue;
+            goto end_of_loop;
         }
 
         {
@@ -5662,6 +5662,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
         }
+    end_of_loop:;
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.17.1



* [Xen-devel] [PATCH v3 9/9] x86/mm: change pl*e to l*t in virt_to_xen_l*e
  2019-10-02 17:16 [Xen-devel] [PATCH v3 0/9] Add alternative API for Xen PTEs Hongyan Xia
                   ` (7 preceding siblings ...)
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 8/9] x86/mm: add an end_of_loop label in modify_xen_mappings Hongyan Xia
@ 2019-10-02 17:16 ` Hongyan Xia
  2019-11-20 17:14   ` Jan Beulich
  8 siblings, 1 reply; 21+ messages in thread
From: Hongyan Xia @ 2019-10-02 17:16 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Wei Liu, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will need to have a variable named pl*e when we rewrite
virt_to_xen_l*e. Change pl*e to l*t to better reflect its purpose.
This will make reviewing later patches easier.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyax@amazon.com>
---
 xen/arch/x86/mm.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d7eb804f06..79c65b15d5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4904,25 +4904,25 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *pl3e = alloc_xen_pagetable();
+        l3_pgentry_t *l3t = alloc_xen_pagetable();
 
-        if ( !pl3e )
+        if ( !l3t )
             return NULL;
-        clear_page(pl3e);
+        clear_page(l3t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
 
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
-            pl3e = NULL;
+            l3t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl3e )
-            free_xen_pagetable(pl3e);
+        if ( l3t )
+            free_xen_pagetable(l3t);
     }
 
     return l4e_to_l3e(*pl4e) + l3_table_offset(v);
@@ -4939,22 +4939,22 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *pl2e = alloc_xen_pagetable();
+        l2_pgentry_t *l2t = alloc_xen_pagetable();
 
-        if ( !pl2e )
+        if ( !l2t )
             return NULL;
-        clear_page(pl2e);
+        clear_page(l2t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
-            pl2e = NULL;
+            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
+            l2t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl2e )
-            free_xen_pagetable(pl2e);
+        if ( l2t )
+            free_xen_pagetable(l2t);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
@@ -4972,22 +4972,22 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *pl1e = alloc_xen_pagetable();
+        l1_pgentry_t *l1t = alloc_xen_pagetable();
 
-        if ( !pl1e )
+        if ( !l1t )
             return NULL;
-        clear_page(pl1e);
+        clear_page(l1t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
-            pl1e = NULL;
+            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
+            l1t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl1e )
-            free_xen_pagetable(pl1e);
+        if ( l1t )
+            free_xen_pagetable(l1t);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-- 
2.17.1



* Re: [Xen-devel] [PATCH v3 1/9] x86: move some xen mm function declarations
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 1/9] x86: move some xen mm function declarations Hongyan Xia
@ 2019-11-15 15:12   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-15 15:12 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> They were put into page.h but mm.h is more appropriate.
> 
> The real reason is that I will be adding some new functions which
> take mfn_t. It turns out that is a bit difficult to do in page.h.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
with one further request:

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -630,4 +630,9 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
>                            unsigned int id, unsigned long frame,
>                            unsigned int nr_frames, xen_pfn_t mfn_list[]);
>  
> +/* Allocator functions for Xen pagetables. */
> +void *alloc_xen_pagetable(void);
> +void free_xen_pagetable(void *v);
> +l1_pgentry_t *virt_to_xen_l1e(unsigned long v);

Can these please be put next to e.g. do_page_walk()?

Jan


* Re: [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
@ 2019-11-15 15:23   ` Jan Beulich
  2019-11-15 17:16     ` Wei Liu
  0 siblings, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2019-11-15 15:23 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> @@ -4847,22 +4848,50 @@ int mmcfg_intercept_write(
>  }
>  
>  void *alloc_xen_pagetable(void)
> +{
> +    mfn_t mfn;
> +
> +    mfn = alloc_xen_pagetable_new();
> +    ASSERT(!mfn_eq(mfn, INVALID_MFN));
> +
> +    return map_xen_pagetable_new(mfn);
> +}
> +
> +void free_xen_pagetable(void *v)
> +{
> +    if ( system_state != SYS_STATE_early_boot )
> +        free_xen_pagetable_new(virt_to_mfn(v));
> +}
> +
> +mfn_t alloc_xen_pagetable_new(void)
>  {
>      if ( system_state != SYS_STATE_early_boot )
>      {
>          void *ptr = alloc_xenheap_page();
>  
>          BUG_ON(!hardware_domain && !ptr);
> -        return ptr;
> +        return virt_to_mfn(ptr);
>      }
>  
> -    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
> +    return alloc_boot_pages(1, 1);
>  }
>  
> -void free_xen_pagetable(void *v)
> +void *map_xen_pagetable_new(mfn_t mfn)

There's no need for the map/unmap functions to have a _new
suffix, is there?

>  {
> -    if ( system_state != SYS_STATE_early_boot )
> -        free_xenheap_page(v);
> +    return mfn_to_virt(mfn_x(mfn));
> +}
> +
> +/* v can point to an entry within a table or be NULL */
> +void unmap_xen_pagetable_new(void *v)

Can this please take const void *, such that callers needing
mappings just for read purposes can have their pointer const-
qualified as well?

> +{
> +    /* XXX still using xenheap page, no need to do anything.  */

I wonder if it wouldn't be a good idea to at least have some
leak detection here temporarily, such that we have a chance of
noticing paths not properly doing the necessary unmapping.

Then again, a question is why you introduce such a map/unmap pair
in the first place. This is going to be a thin wrapper around
{,un}map_domain_page() in the end anyway, and hence callers
could as well be switched to calling those functions directly,
as they're no-ops on Xen heap pages.
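
As for the leak detection, something as simple as a temporary counter
of outstanding mappings would already catch unbalanced call sites, e.g.
(only a sketch; the counter name is picked arbitrarily):

    static atomic_t pt_mappings;

    void *map_xen_pagetable_new(mfn_t mfn)
    {
        atomic_inc(&pt_mappings);
        return mfn_to_virt(mfn_x(mfn));
    }

    void unmap_xen_pagetable_new(void *v)
    {
        if ( v )
            atomic_dec(&pt_mappings);
    }

plus e.g. an ASSERT(!atomic_read(&pt_mappings)) at suitable checkpoints.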

> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -633,6 +633,17 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
>  /* Allocator functions for Xen pagetables. */
>  void *alloc_xen_pagetable(void);
>  void free_xen_pagetable(void *v);
> +mfn_t alloc_xen_pagetable_new(void);
> +void *map_xen_pagetable_new(mfn_t mfn);
> +void unmap_xen_pagetable_new(void *v);
> +void free_xen_pagetable_new(mfn_t mfn);
> +
> +#define UNMAP_XEN_PAGETABLE_NEW(ptr)    \
> +    do {                                \
> +        unmap_xen_pagetable_new((ptr)); \

Stray double parentheses.

Jan


* Re: [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables
  2019-11-15 15:23   ` Jan Beulich
@ 2019-11-15 17:16     ` Wei Liu
  2019-11-18  9:50       ` Jan Beulich
  0 siblings, 1 reply; 21+ messages in thread
From: Wei Liu @ 2019-11-15 17:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Andrew Cooper, Roger Pau Monné, Wei Liu, Hongyan Xia

On Fri, Nov 15, 2019 at 04:23:30PM +0100, Jan Beulich wrote:
> On 02.10.2019 19:16, Hongyan Xia wrote:
> > @@ -4847,22 +4848,50 @@ int mmcfg_intercept_write(
> >  }
> >  
> >  void *alloc_xen_pagetable(void)
> > +{
> > +    mfn_t mfn;
> > +
> > +    mfn = alloc_xen_pagetable_new();
> > +    ASSERT(!mfn_eq(mfn, INVALID_MFN));
> > +
> > +    return map_xen_pagetable_new(mfn);
> > +}
> > +
> > +void free_xen_pagetable(void *v)
> > +{
> > +    if ( system_state != SYS_STATE_early_boot )
> > +        free_xen_pagetable_new(virt_to_mfn(v));
> > +}
> > +
> > +mfn_t alloc_xen_pagetable_new(void)
> >  {
> >      if ( system_state != SYS_STATE_early_boot )
> >      {
> >          void *ptr = alloc_xenheap_page();
> >  
> >          BUG_ON(!hardware_domain && !ptr);
> > -        return ptr;
> > +        return virt_to_mfn(ptr);
> >      }
> >  
> > -    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
> > +    return alloc_boot_pages(1, 1);
> >  }
> >  
> > -void free_xen_pagetable(void *v)
> > +void *map_xen_pagetable_new(mfn_t mfn)
> 
> There's no need for the map/unmap functions to have a _new
> suffix, is there?
> 

It is more consistent.

> >  {
> > -    if ( system_state != SYS_STATE_early_boot )
> > -        free_xenheap_page(v);
> > +    return mfn_to_virt(mfn_x(mfn));
> > +}
> > +
[...]
> 
> > +{
> > +    /* XXX still using xenheap page, no need to do anything.  */
> 
> I wonder if it wouldn't be a good idea to at least have some
> leak detection here temporarily, such that we have a chance of
> noticing paths not properly doing the necessary unmapping.
> 
> Then again, a question is why you introduce such a map/unmap pair
> in the first place. This is going to be a thin wrapper around
> {,un}map_domain_page() in the end anyway, and hence callers
> could as well be switched to calling those functions directly,
> as they're no-ops on Xen heap pages.


All roads lead to Rome, but some roads are easier than others.  Having a
set of APIs to deal with page tables makes the code easier to follow IMO.

And we can potentially do more stuff in this function, for example, make
the unmap function test if the page is really a page table to avoid
misuse; or like you said, have some leak detection check there.

Also, please consider there are dozens of patches that are built on top
of this set of new APIs.  Having to rewrite them just for mechanical
changes is not fun for Hongyan. I would suggest we be more pragmatic
here.

Wei.


* Re: [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables
  2019-11-15 17:16     ` Wei Liu
@ 2019-11-18  9:50       ` Jan Beulich
  2019-11-18 13:02         ` Wei Liu
  0 siblings, 1 reply; 21+ messages in thread
From: Jan Beulich @ 2019-11-18  9:50 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, Roger Pau Monné, xen-devel, Hongyan Xia

On 15.11.2019 18:16, Wei Liu wrote:
> On Fri, Nov 15, 2019 at 04:23:30PM +0100, Jan Beulich wrote:
>> On 02.10.2019 19:16, Hongyan Xia wrote:
>>> @@ -4847,22 +4848,50 @@ int mmcfg_intercept_write(
>>>  }
>>>  
>>>  void *alloc_xen_pagetable(void)
>>> +{
>>> +    mfn_t mfn;
>>> +
>>> +    mfn = alloc_xen_pagetable_new();
>>> +    ASSERT(!mfn_eq(mfn, INVALID_MFN));
>>> +
>>> +    return map_xen_pagetable_new(mfn);
>>> +}
>>> +
>>> +void free_xen_pagetable(void *v)
>>> +{
>>> +    if ( system_state != SYS_STATE_early_boot )
>>> +        free_xen_pagetable_new(virt_to_mfn(v));
>>> +}
>>> +
>>> +mfn_t alloc_xen_pagetable_new(void)
>>>  {
>>>      if ( system_state != SYS_STATE_early_boot )
>>>      {
>>>          void *ptr = alloc_xenheap_page();
>>>  
>>>          BUG_ON(!hardware_domain && !ptr);
>>> -        return ptr;
>>> +        return virt_to_mfn(ptr);
>>>      }
>>>  
>>> -    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
>>> +    return alloc_boot_pages(1, 1);
>>>  }
>>>  
>>> -void free_xen_pagetable(void *v)
>>> +void *map_xen_pagetable_new(mfn_t mfn)
>>
>> There's no need for the map/unmap functions to have a _new
>> suffix, is there?
>>
> 
> It is more consistent.

But will require touching all callers again when the _new suffixes
get dropped.

>>>  {
>>> -    if ( system_state != SYS_STATE_early_boot )
>>> -        free_xenheap_page(v);
>>> +    return mfn_to_virt(mfn_x(mfn));
>>> +}
>>> +
> [...]
>>
>>> +{
>>> +    /* XXX still using xenheap page, no need to do anything.  */
>>
>> I wonder if it wouldn't be a good idea to at least have some
>> leak detection here temporarily, such that we have a chance of
>> noticing paths not properly doing the necessary unmapping.
>>
>> Then again, a question is why you introduce such a map/unmap pair
>> in the first place. This is going to be a thin wrapper around
>> {,un}map_domain_page() in the end anyway, and hence callers
>> could as well be switched to calling those functions directly,
>> as they're no-ops on Xen heap pages.
> 
> 
> All roads lead to Rome, but some roads are easier than others.  Having a
> set of APIs to deal with page tables makes the code easier to follow IMO.
> 
> And we can potentially do more stuff in this function, for example, make
> the unmap function test if the page is really a page table to avoid
> misuse; or like you said, have some leak detection check there.
> 
> Also, please consider there are dozens of patches that are built on top
> of this set of new APIs.  Having to rewrite them just for mechanical
> changes is not fun for Hongyan. I would suggest we be more pragmatic
> here.

Whether to use separate functions depends - as you say - on the
longer term plans. If there's a good reason to have these separate
(and that reason is stated in the description), then yes, I'll be
fine with having them. But introducing them just for the sake of
doing so isn't appropriate imo.

As to dozens of patches on top - I'm sorry to say it this bluntly,
but that's the risk anyone takes when compiling large series
without sufficient up front agreement. I too have been suffering from
such a penalty in a few cases; that's simply the way it is.

Jan


* Re: [Xen-devel] [PATCH v3 2/9] x86: introduce a new set of APIs to manage Xen page tables
  2019-11-18  9:50       ` Jan Beulich
@ 2019-11-18 13:02         ` Wei Liu
  0 siblings, 0 replies; 21+ messages in thread
From: Wei Liu @ 2019-11-18 13:02 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Roger Pau Monné, xen-devel, Wei Liu, Hongyan Xia

On Mon, Nov 18, 2019 at 10:50:47AM +0100, Jan Beulich wrote:
> On 15.11.2019 18:16, Wei Liu wrote:
> > On Fri, Nov 15, 2019 at 04:23:30PM +0100, Jan Beulich wrote:
> >> On 02.10.2019 19:16, Hongyan Xia wrote:
> >>> @@ -4847,22 +4848,50 @@ int mmcfg_intercept_write(
> >>>  }
> >>>  
> >>>  void *alloc_xen_pagetable(void)
> >>> +{
> >>> +    mfn_t mfn;
> >>> +
> >>> +    mfn = alloc_xen_pagetable_new();
> >>> +    ASSERT(!mfn_eq(mfn, INVALID_MFN));
> >>> +
> >>> +    return map_xen_pagetable_new(mfn);
> >>> +}
> >>> +
> >>> +void free_xen_pagetable(void *v)
> >>> +{
> >>> +    if ( system_state != SYS_STATE_early_boot )
> >>> +        free_xen_pagetable_new(virt_to_mfn(v));
> >>> +}
> >>> +
> >>> +mfn_t alloc_xen_pagetable_new(void)
> >>>  {
> >>>      if ( system_state != SYS_STATE_early_boot )
> >>>      {
> >>>          void *ptr = alloc_xenheap_page();
> >>>  
> >>>          BUG_ON(!hardware_domain && !ptr);
> >>> -        return ptr;
> >>> +        return virt_to_mfn(ptr);
> >>>      }
> >>>  
> >>> -    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
> >>> +    return alloc_boot_pages(1, 1);
> >>>  }
> >>>  
> >>> -void free_xen_pagetable(void *v)
> >>> +void *map_xen_pagetable_new(mfn_t mfn)
> >>
> >> There's no need for the map/unmap functions to have a _new
> >> suffix, is there?
> >>
> > 
> > It is more consistent.
> 
> But will require touching all callers again when the _new suffixes
> get dropped.

Yes, but that's just a mechanical change that's very easy to review, so I
don't think it's a big deal.

> 
> >>>  {
> >>> -    if ( system_state != SYS_STATE_early_boot )
> >>> -        free_xenheap_page(v);
> >>> +    return mfn_to_virt(mfn_x(mfn));
> >>> +}
> >>> +
> > [...]
> >>
> >>> +{
> >>> +    /* XXX still using xenheap page, no need to do anything.  */
> >>
> >> I wonder if it wouldn't be a good idea to at least have some
> >> leak detection here temporarily, such that we have a chance of
> >> noticing paths not properly doing the necessary unmapping.
> >>
> >> Then again, a question is why you introduce such a map/unmap pair
> >> in the first place. This is going to be a thin wrapper around
> >> {,un}map_domain_page() in the end anyway, and hence callers
> >> could as well be switched to calling those functions directly,
> >> as they're no-ops on Xen heap pages.
> > 
> > 
> > All roads lead to Rome, but some roads are easier than others.  Having a
> > set of APIs to deal with page tables makes the code easier to follow IMO.
> > 
> > And we can potentially do more stuff in this function, for example, make
> > the unmap function test if the page is really a page table to avoid
> > misuse; or like you said, have some leak detection check there.
> > 
> > Also, please consider there are dozens of patches that are built on top
> > of this set of new APIs.  Having to rewrite them just for mechanical
> > changes is not fun for Hongyan. I would suggest we be more pragmatic
> > here.
> 
> Whether to use separate functions depends - as you say - on the
> > longer term plans. If there's a good reason to have these separate
> (and that reason is stated in the description), then yes, I'll be
> fine with having them. But introducing them just for the sake of
> doing so isn't appropriate imo.
> 
> As to dozens of patches on top - I'm sorry to say it this bluntly,
> but that's the risk anyone takes when compiling large series
> > without sufficient up-front agreement. I too have been suffering from
> such a penalty in a few cases; that's simply the way it is.

The first paragraph illustrates why it is difficult to get sufficient
agreement up front. There will always be changes that require weighing
benefits against long-term and short-term goals.  There will always be
differences of opinion about what is worthwhile.

Also, it is often said that a decision can't be made until patches are
written and posted, so everyone takes on significant risk when working
on something complex. You're well aware of this, given your second
paragraph.

You've been on both sides of this. I'm sure you don't think that sort of
experience is nice. I just want to say, things don't have to be that
way.

My high-level modus operandi has been:

 * If a patch (series) achieves no apparent short term or long term
   goal, or it achieves one but actively works against the other, it
   should be rejected.
 * If a patch (series) achieves one of the two and doesn't hinder
   further progress of the other, it should be accepted.
 * If a patch (series) achieves both at the same time, it should be
   accepted with open arms.

This applies regardless of how _I_ think things should be. If I have
very strong opinions, I will of course express them appropriately. But
if things are only internal to a component, I will be more relaxed
and let the contributors take the driver's seat. Sometimes I even
actively look beyond what is said in the commit message for positive
impact of the code that the patch author is not aware of.

I understand everyone has their own working style. That's fine. But
please consider what I said above. If that looks workable to you, it can
potentially make your life easier both as a reviewer and a
contributor. ;-)

Wei.

> 
> Jan


* Re: [Xen-devel] [PATCH v3 3/9] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 3/9] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
@ 2019-11-20 16:08   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-20 16:08 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> The pl2e and pl1e variables are heavily (ab)used in that function. It
> is fine at the moment because all page tables are always mapped so
> there is no need to track the lifetime of each variable.
> 
> We will soon have the requirement to map and unmap page tables. We
> need to track the lifetime of each variable to avoid leakage.
> 
> Introduce some l{1,2}t variables with limited scope so that we can
> track the lifetime of pointers to xen page tables more easily.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with a couple of remarks:

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5061,10 +5061,12 @@ int map_pages_to_xen(
>                  }
>                  else
>                  {
> -                    pl2e = l3e_to_l2e(ol3e);
> +                    l2_pgentry_t *l2t;
> +
> +                    l2t = l3e_to_l2e(ol3e);

Here and elsewhere assignments could have become variable
initializers.
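That is, presumably something along the lines of (illustrative only):

    l2_pgentry_t *l2t = l3e_to_l2e(ol3e);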

> @@ -5123,12 +5127,12 @@ int map_pages_to_xen(
>                  continue;
>              }
>  
> -            pl2e = alloc_xen_pagetable();
> -            if ( pl2e == NULL )
> +            l2t = alloc_xen_pagetable();
> +            if ( l2t == NULL )
>                  return -ENOMEM;
>  
>              for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
> -                l2e_write(pl2e + i,
> +                l2e_write(l2t + i,

The style here and ...

> @@ -5222,12 +5229,12 @@ int map_pages_to_xen(
>                      goto check_l3;
>                  }
>  
> -                pl1e = alloc_xen_pagetable();
> -                if ( pl1e == NULL )
> +                l1t = alloc_xen_pagetable();
> +                if ( l1t == NULL )
>                      return -ENOMEM;
>  
>                  for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
> -                    l1e_write(&pl1e[i],
> +                    l1e_write(&l1t[i],

... here (and perhaps elsewhere when touched anyway) would have
been nice if it was brought in sync (I guess I'm guilty of the
difference).

> @@ -5272,6 +5279,7 @@ int map_pages_to_xen(
>                      ((1u << PAGETABLE_ORDER) - 1)) == 0)) )
>              {
>                  unsigned long base_mfn;
> +                l1_pgentry_t *l1t;

const, as this looks to be used for lookup only?

> @@ -5325,6 +5333,7 @@ int map_pages_to_xen(
>                  ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1))) )
>          {
>              unsigned long base_mfn;
> +            l2_pgentry_t *l2t;

Same here then. There also appear to be more cases further up that
I missed initially.

Whether you address the style aspects further up I'll leave to you,
but I'd really like to see the const-ness added wherever possible.
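For instance, where a table is only read, something like (variable names
are taken from context and purely illustrative):

    const l1_pgentry_t *l1t = l2e_to_l1e(ol2e);   /* lookup only */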

Jan


* Re: [Xen-devel] [PATCH v3 4/9] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 4/9] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
@ 2019-11-20 16:16   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-20 16:16 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> The pl2e and pl1e variables are heavily (ab)used in that function.  It
> is fine at the moment because all page tables are always mapped so
> there is no need to track the lifetime of each variable.
> 
> We will soon have the requirement to map and unmap page tables. We
> need to track the lifetime of each variable to avoid leakage.
> 
> Introduce some l{1,2}t variables with limited scope so that we can
> track the lifetime of pointers to xen page tables more easily.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

With adjustments similar to whatever gets done for patch 3
(i.e. minimally const-ness)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


* Re: [Xen-devel] [PATCH v3 5/9] x86/mm: map_pages_to_xen should have one exit path
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 5/9] x86/mm: map_pages_to_xen should have one exit path Hongyan Xia
@ 2019-11-20 16:24   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-20 16:24 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> @@ -5034,10 +5036,13 @@ int map_pages_to_xen(
>  
>      while ( nr_mfns != 0 )
>      {
> -        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
> +        pl3e = virt_to_xen_l3e(virt);
>  
>          if ( !pl3e )
> -            return -ENOMEM;
> +        {
> +            ASSERT(rc == -ENOMEM);
> +            goto out;
> +        }

rc never gets changed to any other error code, and I also can't
foresee this happening in the future. Are all these ASSERT()s
(and the associated brace pairs) really helpful?
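One plainer shape, if the assertions were dropped, might simply be
(sketch only, setting rc explicitly at the failure site):

        pl3e = virt_to_xen_l3e(virt);

        if ( !pl3e )
        {
            rc = -ENOMEM;
            goto out;
        }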

Also, I think I'd prefer a less strongly worded title, e.g. "would better"
instead of "should".

Jan


* Re: [Xen-devel] [PATCH v3 7/9] x86/mm: make sure there is one exit path for modify_xen_mappings
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 7/9] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
@ 2019-11-20 16:26   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-20 16:26 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> @@ -5468,7 +5469,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
>              /* PAGE1GB: shatter the superpage and fall through. */
>              l2t = alloc_xen_pagetable();
>              if ( !l2t )
> -                return -ENOMEM;
> +            {
> +                ASSERT(rc == -ENOMEM);
> +                goto out;
> +            }

Same as for patch 5 - I think these ASSERT()s aren't very helpful.

Jan


* Re: [Xen-devel] [PATCH v3 9/9] x86/mm: change pl*e to l*t in virt_to_xen_l*e
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 9/9] x86/mm: change pl*e to l*t in virt_to_xen_l*e Hongyan Xia
@ 2019-11-20 17:14   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-20 17:14 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> We will need to have a variable named pl*e when we rewrite
> virt_to_xen_l*e. Change pl*e to l*t to better reflect its purpose.
> This will make reviewing the later patch easier.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyax@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


* Re: [Xen-devel] [PATCH v3 6/9] x86/mm: add an end_of_loop label in map_pages_to_xen
  2019-10-02 17:16 ` [Xen-devel] [PATCH v3 6/9] x86/mm: add an end_of_loop label in map_pages_to_xen Hongyan Xia
@ 2019-11-22 13:55   ` Jan Beulich
  0 siblings, 0 replies; 21+ messages in thread
From: Jan Beulich @ 2019-11-22 13:55 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu; +Cc: xen-devel, Roger Pau Monné, Andrew Cooper

On 02.10.2019 19:16, Hongyan Xia wrote:
> We will soon need to clean up mappings whenever the outermost loop
> ends. Add a new label and turn the relevant continues into gotos.

I think I already indicated back when this was still an RFC that I'm
not happy about the introduction of these labels (this also covers
patch 8). I realize it's quite a lot to ask, but both functions would
benefit from being split into per-level helper functions, which -
afaict - would avoid the need for such labels, and which would at the
same time likely make it quite a bit easier to extend them to the
5-level page table case down the road.
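For illustration only, the kind of shape being suggested might look
roughly like this (helper name, signature and the amount of work per
level are invented, not taken from the series):

    /*
     * Hypothetical per-level helper: it maps the L2 table it needs, does
     * its work (delegating to an L1 helper where necessary), and unmaps
     * before returning, so cleanup stays local and no end_of_loop label
     * is required; a 5th level would just add one more helper.
     */
    static int map_l2_range(l3_pgentry_t *pl3e, unsigned long virt,
                            mfn_t mfn, unsigned long nr_mfns,
                            unsigned int flags)
    {
        int rc = 0;
        l2_pgentry_t *l2t = map_xen_pagetable_new(l3e_get_mfn(*pl3e));

        /* ... walk/populate the L2 entries covering the range ... */

        unmap_xen_pagetable_new(l2t);

        return rc;
    }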

Thoughts?

Jan

