* [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs
@ 2020-01-07 12:06 Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations Hongyan Xia
                   ` (6 more replies)
  0 siblings, 7 replies; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, jgrall, Wei Liu, Jan Beulich, Roger Pau Monné

This batch adds an alternative alloc-map-unmap-free Xen PTE API
alongside the normal alloc-free one on the xenheap, in preparation for
switching to the domheap for Xen page tables. Since map and unmap are
basically no-ops for now, and the other changes are cosmetic ones that
ease future patches, this batch does not introduce any functional
changes.

tree:
https://xenbits.xen.org/git-http/people/hx242/xen.git directnonmap-v3

---
Changed since v4:
- handle INVALID_MFN in new APIs
- drop some goto labels since there could be better options
- const qualify introduced variables
- defer some changes to future patches due to ongoing discussions on
  map_pages_to_xen

Changed since v3:
- change my email address in all patches
- address many style issues in v3
- rebase

Changed since v2:
- split into a smaller series
- drop the clear_page optimisation as Wei suggests
- rebase

Changed since v1:
- squash some commits
- merge bug fixes into this first batch
- rebase against latest master

Wei Liu (7):
  x86: move some xen mm function declarations
  x86: introduce a new set of APIs to manage Xen page tables
  x86/mm: introduce l{1,2}t local variables to map_pages_to_xen
  x86/mm: introduce l{1,2}t local variables to modify_xen_mappings
  x86/mm: map_pages_to_xen would better have one exit path
  x86/mm: make sure there is one exit path for modify_xen_mappings
  x86/mm: change pl*e to l*t in virt_to_xen_l*e

 xen/arch/x86/mm.c          | 258 ++++++++++++++++++++++++++++-----------------
 xen/include/asm-x86/mm.h   |  16 +++
 xen/include/asm-x86/page.h |   5 -
 3 files changed, 175 insertions(+), 104 deletions(-)

-- 
2.15.3.AMZN


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  2020-01-07 12:13   ` Wei Liu
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

They were put into page.h but mm.h is more appropriate.

The real reason is that I will be adding some new functions which
take mfn_t. It turns out that is a bit difficult to do in page.h.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---
Changed since v3:
- move Xen PTE API declarations next to do_page_walk().
---
 xen/include/asm-x86/mm.h   | 5 +++++
 xen/include/asm-x86/page.h | 5 -----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 1479ba6703..2ca8882ad0 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -579,6 +579,11 @@ void update_cr3(struct vcpu *v);
 int vcpu_destroy_pagetables(struct vcpu *);
 void *do_page_walk(struct vcpu *v, unsigned long addr);
 
+/* Allocator functions for Xen pagetables. */
+void *alloc_xen_pagetable(void);
+void free_xen_pagetable(void *v);
+l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
+
 int __sync_local_execstate(void);
 
 /* Arch-specific portion of memory_op hypercall. */
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index c1e92937c0..05a8b1efa6 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -345,11 +345,6 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
 
 #ifndef __ASSEMBLY__
 
-/* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
-l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-
 /* Convert between PAT/PCD/PWT embedded in PTE flags and 3-bit cacheattr. */
 static inline unsigned int pte_flags_to_cacheattr(unsigned int flags)
 {
-- 
2.15.3.AMZN



* [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  2020-01-16 12:04   ` Jan Beulich
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 3/7] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We are going to switch to using domheap pages for page tables.
A new set of APIs is introduced to allocate, map, unmap and free pages
for page tables.

The allocation and deallocation work on mfn_t rather than page_info,
because they are required to work even before the frame table is set
up.

Implement the old functions in terms of the new ones. We will rewrite,
site by site, other mm functions that manipulate page tables to use
the new APIs.

Note that these new APIs still use xenheap pages underneath, and no
actual mapping or unmapping is done, so that we don't break Xen
halfway through the transition. They will be switched to use the
domheap and dynamic mappings once usage of the old APIs is eliminated.

No functional change intended in this patch.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>

---
Changed since v4:
- properly handle INVALID_MFN.
- remove the _new suffix for map/unmap_xen_pagetable because they do not
  have old alternatives.

Changed since v3:
- const qualify unmap_xen_pagetable_new().
- remove redundant parentheses.
---
 xen/arch/x86/mm.c        | 44 +++++++++++++++++++++++++++++++++++++++-----
 xen/include/asm-x86/mm.h | 11 +++++++++++
 2 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index cc0d71996c..22b55390f1 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -119,6 +119,7 @@
 #include <xen/efi.h>
 #include <xen/grant_table.h>
 #include <xen/hypercall.h>
+#include <xen/mm.h>
 #include <asm/paging.h>
 #include <asm/shadow.h>
 #include <asm/page.h>
@@ -4992,22 +4993,55 @@ int mmcfg_intercept_write(
 }
 
 void *alloc_xen_pagetable(void)
+{
+    mfn_t mfn = alloc_xen_pagetable_new();
+
+    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn));
+}
+
+void free_xen_pagetable(void *v)
+{
+    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
+
+    if ( system_state != SYS_STATE_early_boot )
+        free_xen_pagetable_new(mfn);
+}
+
+/*
+ * For these PTE APIs, the caller must follow the alloc-map-unmap-free
+ * lifecycle, which means explicitly mapping the PTE pages before accessing
+ * them. The caller must check whether the allocation has succeeded, and only
+ * pass valid MFNs to map_xen_pagetable().
+ */
+mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
         void *ptr = alloc_xenheap_page();
 
         BUG_ON(!hardware_domain && !ptr);
-        return ptr;
+        return ptr ? virt_to_mfn(ptr) : INVALID_MFN;
     }
 
-    return mfn_to_virt(mfn_x(alloc_boot_pages(1, 1)));
+    return alloc_boot_pages(1, 1);
 }
 
-void free_xen_pagetable(void *v)
+void *map_xen_pagetable(mfn_t mfn)
 {
-    if ( system_state != SYS_STATE_early_boot )
-        free_xenheap_page(v);
+    return mfn_to_virt(mfn_x(mfn));
+}
+
+/* v can point to an entry within a table or be NULL */
+void unmap_xen_pagetable(const void *v)
+{
+    /* XXX still using xenheap page, no need to do anything.  */
+}
+
+/* mfn can be INVALID_MFN */
+void free_xen_pagetable_new(mfn_t mfn)
+{
+    if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
+        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
 static DEFINE_SPINLOCK(map_pgdir_lock);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ca8882ad0..861edba34e 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -582,6 +582,17 @@ void *do_page_walk(struct vcpu *v, unsigned long addr);
 /* Allocator functions for Xen pagetables. */
 void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
+mfn_t alloc_xen_pagetable_new(void);
+void *map_xen_pagetable(mfn_t mfn);
+void unmap_xen_pagetable(const void *v);
+void free_xen_pagetable_new(mfn_t mfn);
+
+#define UNMAP_XEN_PAGETABLE(ptr)    \
+    do {                            \
+        unmap_xen_pagetable(ptr);   \
+        (ptr) = NULL;               \
+    } while (0)
+
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
 int __sync_local_execstate(void);
-- 
2.15.3.AMZN



* [Xen-devel] [PATCH v5 3/7] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 4/7] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

The pl2e and pl1e variables are heavily (ab)used in that function. It
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon have the requirement to map and unmap page tables, so we
need to track the lifetime of each variable to avoid leaking mappings.

Introduce some l{1,2}t variables with limited scope so that we can
track the lifetime of pointers to Xen page tables more easily.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed since v4:
- style fixes.
- const qualify introduced variables.
---
 xen/arch/x86/mm.c | 72 ++++++++++++++++++++++++++++++-------------------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 22b55390f1..699aa6bbdf 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5211,10 +5211,11 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl2e = l3e_to_l2e(ol3e);
+                    l2_pgentry_t *l2t = l3e_to_l2e(ol3e);
+
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( !(l2e_get_flags(ol2e) & _PAGE_PRESENT) )
                             continue;
                         if ( l2e_get_flags(ol2e) & _PAGE_PSE )
@@ -5222,21 +5223,21 @@ int map_pages_to_xen(
                         else
                         {
                             unsigned int j;
+                            const l1_pgentry_t *l1t = l2e_to_l1e(ol2e);
 
-                            pl1e = l2e_to_l1e(ol2e);
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
-                                flush_flags(l1e_get_flags(pl1e[j]));
+                                flush_flags(l1e_get_flags(l1t[j]));
                         }
                     }
                     flush_area(virt, flush_flags);
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
-                        ol2e = pl2e[i];
+                        ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
                             free_xen_pagetable(l2e_to_l1e(ol2e));
                     }
-                    free_xen_pagetable(pl2e);
+                    free_xen_pagetable(l2t);
                 }
             }
 
@@ -5252,6 +5253,7 @@ int map_pages_to_xen(
         {
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
+            l2_pgentry_t *l2t;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5273,12 +5275,12 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = alloc_xen_pagetable();
-            if ( pl2e == NULL )
+            l2t = alloc_xen_pagetable();
+            if ( l2t == NULL )
                 return -ENOMEM;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(ol3e)));
@@ -5291,15 +5293,15 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5327,11 +5329,12 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    pl1e = l2e_to_l1e(ol2e);
+                    l1_pgentry_t *l1t = l2e_to_l1e(ol2e);
+
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                        flush_flags(l1e_get_flags(pl1e[i]));
+                        flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(pl1e);
+                    free_xen_pagetable(l1t);
                 }
             }
 
@@ -5353,6 +5356,7 @@ int map_pages_to_xen(
             {
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
+                l1_pgentry_t *l1t;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5372,12 +5376,12 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = alloc_xen_pagetable();
-                if ( pl1e == NULL )
+                l1t = alloc_xen_pagetable();
+                if ( l1t == NULL )
                     return -ENOMEM;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
 
@@ -5389,15 +5393,15 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
 
             pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
@@ -5422,6 +5426,7 @@ int map_pages_to_xen(
                     ((1u << PAGETABLE_ORDER) - 1)) == 0)) )
             {
                 unsigned long base_mfn;
+                const l1_pgentry_t *l1t;
 
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
@@ -5445,11 +5450,11 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                pl1e = l2e_to_l1e(ol2e);
-                base_mfn = l1e_get_pfn(*pl1e) & ~(L1_PAGETABLE_ENTRIES - 1);
-                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++, pl1e++ )
-                    if ( (l1e_get_pfn(*pl1e) != (base_mfn + i)) ||
-                         (l1e_get_flags(*pl1e) != flags) )
+                l1t = l2e_to_l1e(ol2e);
+                base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
+                for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
+                    if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
+                         (l1e_get_flags(l1t[i]) != flags) )
                         break;
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
@@ -5475,6 +5480,7 @@ int map_pages_to_xen(
                 ((1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1))) )
         {
             unsigned long base_mfn;
+            const l2_pgentry_t *l2t;
 
             if ( locking )
                 spin_lock(&map_pgdir_lock);
@@ -5492,13 +5498,13 @@ int map_pages_to_xen(
                 continue;
             }
 
-            pl2e = l3e_to_l2e(ol3e);
-            base_mfn = l2e_get_pfn(*pl2e) & ~(L2_PAGETABLE_ENTRIES *
+            l2t = l3e_to_l2e(ol3e);
+            base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
-            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++, pl2e++ )
-                if ( (l2e_get_pfn(*pl2e) !=
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( (l2e_get_pfn(l2t[i]) !=
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
-                     (l2e_get_flags(*pl2e) != l1f_to_lNf(flags)) )
+                     (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
             if ( i == L2_PAGETABLE_ENTRIES )
             {
-- 
2.15.3.AMZN



* [Xen-devel] [PATCH v5 4/7] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
                   ` (2 preceding siblings ...)
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 3/7] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 5/7] x86/mm: map_pages_to_xen would better have one exit path Hongyan Xia
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

The pl2e and pl1e variables are heavily (ab)used in that function. It
is fine at the moment because all page tables are always mapped, so
there is no need to track the lifetime of each variable.

We will soon have the requirement to map and unmap page tables, so we
need to track the lifetime of each variable to avoid leaking mappings.

Introduce some l{1,2}t variables with limited scope so that we can
track the lifetime of pointers to Xen page tables more easily.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c | 68 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 38 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 699aa6bbdf..7160ddcb67 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5575,6 +5575,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
+            l2_pgentry_t *l2t;
+
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
                  ((e - v) >= (1UL << L3_PAGETABLE_SHIFT)) )
@@ -5590,11 +5592,11 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            pl2e = alloc_xen_pagetable();
-            if ( !pl2e )
+            l2t = alloc_xen_pagetable();
+            if ( !l2t )
                 return -ENOMEM;
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-                l2e_write(pl2e + i,
+                l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(*pl3e)));
@@ -5603,14 +5605,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(pl2e),
+                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
                                                     __PAGE_HYPERVISOR));
-                pl2e = NULL;
+                l2t = NULL;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            if ( pl2e )
-                free_xen_pagetable(pl2e);
+            if ( l2t )
+                free_xen_pagetable(l2t);
         }
 
         /*
@@ -5644,12 +5646,14 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
             else
             {
+                l1_pgentry_t *l1t;
+
                 /* PSE: shatter the superpage and try again. */
-                pl1e = alloc_xen_pagetable();
-                if ( !pl1e )
+                l1t = alloc_xen_pagetable();
+                if ( !l1t )
                     return -ENOMEM;
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                    l1e_write(&pl1e[i],
+                    l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
                 if ( locking )
@@ -5657,19 +5661,19 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(pl1e),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
                                                         __PAGE_HYPERVISOR));
-                    pl1e = NULL;
+                    l1t = NULL;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                if ( pl1e )
-                    free_xen_pagetable(pl1e);
+                if ( l1t )
+                    free_xen_pagetable(l1t);
             }
         }
         else
         {
-            l1_pgentry_t nl1e;
+            l1_pgentry_t nl1e, *l1t;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
@@ -5716,9 +5720,9 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 continue;
             }
 
-            pl1e = l2e_to_l1e(*pl2e);
+            l1t = l2e_to_l1e(*pl2e);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
-                if ( l1e_get_intpte(pl1e[i]) != 0 )
+                if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
             if ( i == L1_PAGETABLE_ENTRIES )
             {
@@ -5727,7 +5731,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(pl1e);
+                free_xen_pagetable(l1t);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5756,21 +5760,25 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             continue;
         }
 
-        pl2e = l3e_to_l2e(*pl3e);
-        for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
-            if ( l2e_get_intpte(pl2e[i]) != 0 )
-                break;
-        if ( i == L2_PAGETABLE_ENTRIES )
         {
-            /* Empty: zap the L3E and free the L2 page. */
-            l3e_write_atomic(pl3e, l3e_empty());
-            if ( locking )
+            l2_pgentry_t *l2t;
+
+            l2t = l3e_to_l2e(*pl3e);
+            for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+                if ( l2e_get_intpte(l2t[i]) != 0 )
+                    break;
+            if ( i == L2_PAGETABLE_ENTRIES )
+            {
+                /* Empty: zap the L3E and free the L2 page. */
+                l3e_write_atomic(pl3e, l3e_empty());
+                if ( locking )
+                    spin_unlock(&map_pgdir_lock);
+                flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
+                free_xen_pagetable(l2t);
+            }
+            else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-            free_xen_pagetable(pl2e);
         }
-        else if ( locking )
-            spin_unlock(&map_pgdir_lock);
     }
 
     flush_area(NULL, FLUSH_TLB_GLOBAL);
-- 
2.15.3.AMZN



* [Xen-devel] [PATCH v5 5/7] x86/mm: map_pages_to_xen would better have one exit path
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
                   ` (3 preceding siblings ...)
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 4/7] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 6/7] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 7/7] x86/mm: change pl*e to l*t in virt_to_xen_l*e Hongyan Xia
  6 siblings, 0 replies; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon rewrite the function to handle dynamic mapping and
unmapping of page tables.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed since v4:
- drop the end_of_loop goto label since this function may be refactored
  in the future and there are options to do things without the goto.

Changed since v3:
- remove asserts on rc since rc never gets changed to anything else.
- reword commit message.
---
 xen/arch/x86/mm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7160ddcb67..71e9c4b19e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5164,9 +5164,11 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
+    int rc = -ENOMEM;
 
 #define flush_flags(oldf) do {                 \
     unsigned int o_ = (oldf);                  \
@@ -5184,10 +5186,11 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
-        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+        pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
-            return -ENOMEM;
+            goto out;
+
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
@@ -5277,7 +5280,7 @@ int map_pages_to_xen(
 
             l2t = alloc_xen_pagetable();
             if ( l2t == NULL )
-                return -ENOMEM;
+                goto out;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5306,7 +5309,7 @@ int map_pages_to_xen(
 
         pl2e = virt_to_xen_l2e(virt);
         if ( !pl2e )
-            return -ENOMEM;
+            goto out;
 
         if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
                ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
@@ -5350,7 +5353,7 @@ int map_pages_to_xen(
             {
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
-                    return -ENOMEM;
+                    goto out;
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5378,7 +5381,7 @@ int map_pages_to_xen(
 
                 l1t = alloc_xen_pagetable();
                 if ( l1t == NULL )
-                    return -ENOMEM;
+                    goto out;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5524,7 +5527,10 @@ int map_pages_to_xen(
 
 #undef flush_flags
 
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-- 
2.15.3.AMZN



* [Xen-devel] [PATCH v5 6/7] x86/mm: make sure there is one exit path for modify_xen_mappings
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
                   ` (4 preceding siblings ...)
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 5/7] x86/mm: map_pages_to_xen would better have one exit path Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 7/7] x86/mm: change pl*e to l*t in virt_to_xen_l*e Hongyan Xia
  6 siblings, 0 replies; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to handle dynamically mapping / unmapping page
tables in said function.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed since v4:
- drop the end_of_loop goto label since this function may be refactored
  in the future and there are options to do things without the goto.

Changed since v3:
- remove asserts on rc since it never gets changed to anything else.
---
 xen/arch/x86/mm.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 71e9c4b19e..6b589762b1 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5557,6 +5557,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
+    int rc = -ENOMEM;
 
     /* Set of valid PTE bits which may be altered. */
 #define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
@@ -5600,7 +5601,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             /* PAGE1GB: shatter the superpage and fall through. */
             l2t = alloc_xen_pagetable();
             if ( !l2t )
-                return -ENOMEM;
+                goto out;
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5657,7 +5659,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 /* PSE: shatter the superpage and try again. */
                 l1t = alloc_xen_pagetable();
                 if ( !l1t )
-                    return -ENOMEM;
+                    goto out;
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5790,7 +5793,10 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     flush_area(NULL, FLUSH_TLB_GLOBAL);
 
 #undef FLAGS_MASK
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 #undef flush_area
-- 
2.15.3.AMZN
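Both shatter sites in this patch follow the same fill-then-install pattern: allocate a fresh L1 table and write one entry per small page of the former superpage. Here is a simplified, self-contained sketch of the fill step; the constants and entry encoding are made up for illustration, not Xen's real PTE format:

```c
#include <assert.h>
#include <stdint.h>

#define L1_ENTRIES 512          /* entries per x86 page table */
#define PSE_FLAG   0x80u        /* toy stand-in for _PAGE_PSE */

/*
 * Replace a superpage mapping of frame `base_pfn` by an L1 table
 * whose i-th entry maps base_pfn + i, with PSE cleared from the
 * inherited flags -- mirroring the l1e_write() loop in the patch.
 */
static void shatter_l2e(uint64_t base_pfn, unsigned int flags,
                        uint64_t l1t[L1_ENTRIES])
{
    unsigned int i;

    for ( i = 0; i < L1_ENTRIES; i++ )
        l1t[i] = ((base_pfn + i) << 12) | (flags & ~PSE_FLAG);
}

static int shatter_selftest(void)
{
    static uint64_t l1t[L1_ENTRIES];

    shatter_l2e(0x1000, 0x83, l1t);      /* 0x83: present|rw|pse (toy) */
    return l1t[0] == (((uint64_t)0x1000 << 12) | 0x3) &&
           l1t[511] == (((uint64_t)(0x1000 + 511) << 12) | 0x3);
}
```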



* [Xen-devel] [PATCH v5 7/7] x86/mm: change pl*e to l*t in virt_to_xen_l*e
  2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
                   ` (5 preceding siblings ...)
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 6/7] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
@ 2020-01-07 12:06 ` Hongyan Xia
  6 siblings, 0 replies; 20+ messages in thread
From: Hongyan Xia @ 2020-01-07 12:06 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich,
	Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

We will need a variable named pl*e when we rewrite virt_to_xen_l*e.
Change pl*e to l*t to better reflect its purpose. This will make
reviewing a later patch easier.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 6b589762b1..d594d6abfb 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5054,25 +5054,25 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *pl3e = alloc_xen_pagetable();
+        l3_pgentry_t *l3t = alloc_xen_pagetable();
 
-        if ( !pl3e )
+        if ( !l3t )
             return NULL;
-        clear_page(pl3e);
+        clear_page(l3t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
 
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
-            pl3e = NULL;
+            l3t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl3e )
-            free_xen_pagetable(pl3e);
+        if ( l3t )
+            free_xen_pagetable(l3t);
     }
 
     return l4e_to_l3e(*pl4e) + l3_table_offset(v);
@@ -5089,22 +5089,22 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *pl2e = alloc_xen_pagetable();
+        l2_pgentry_t *l2t = alloc_xen_pagetable();
 
-        if ( !pl2e )
+        if ( !l2t )
             return NULL;
-        clear_page(pl2e);
+        clear_page(l2t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
-            pl2e = NULL;
+            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
+            l2t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl2e )
-            free_xen_pagetable(pl2e);
+        if ( l2t )
+            free_xen_pagetable(l2t);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
@@ -5122,22 +5122,22 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *pl1e = alloc_xen_pagetable();
+        l1_pgentry_t *l1t = alloc_xen_pagetable();
 
-        if ( !pl1e )
+        if ( !l1t )
             return NULL;
-        clear_page(pl1e);
+        clear_page(l1t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
-            pl1e = NULL;
+            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
+            l1t = NULL;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( pl1e )
-            free_xen_pagetable(pl1e);
+        if ( l1t )
+            free_xen_pagetable(l1t);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-- 
2.15.3.AMZN
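All three renamed functions share the same allocate / re-check / install-or-discard shape, which is what makes the pl*e vs l*t distinction worth keeping clear. A minimal single-threaded sketch of that shape follows; the lock is elided and all names are invented for illustration:

```c
#include <assert.h>
#include <stdlib.h>

static int installs, discards;

/*
 * Populate *slot if still empty: allocate outside the lock,
 * re-check under it, and discard our copy if another CPU won the
 * race -- the same dance as virt_to_xen_l3e() and friends.
 */
static void install_subtable(void **slot)
{
    void *t = malloc(64);        /* alloc_xen_pagetable() stand-in */

    if ( t == NULL )
        return;
    /* spin_lock(&map_pgdir_lock); */
    if ( *slot == NULL )         /* re-check: still not present? */
    {
        *slot = t;               /* install; ownership transferred */
        t = NULL;
        installs++;
    }
    /* spin_unlock(&map_pgdir_lock); */
    if ( t != NULL )             /* lost the race: free our table */
    {
        free(t);
        discards++;
    }
}

static int install_selftest(void)
{
    void *slot = NULL;

    install_subtable(&slot);     /* installs a table */
    install_subtable(&slot);     /* finds the table, discards its own */
    free(slot);
    return installs == 1 && discards == 1;
}
```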



* Re: [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations Hongyan Xia
@ 2020-01-07 12:13   ` Wei Liu
  2020-01-07 13:09     ` Jan Beulich
  0 siblings, 1 reply; 20+ messages in thread
From: Wei Liu @ 2020-01-07 12:13 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Jan Beulich, xen-devel,
	Roger Pau Monné

On Tue, Jan 07, 2020 at 12:06:43PM +0000, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> They were put into page.h but mm.h is more appropriate.
> 
> The real reason is that I will be adding some new functions which
> takes mfn_t. It turns out it is a bit difficult to do in page.h.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

I will commit this trivial patch soon-ish to reduce Hongyan's patch queue
length.

Shout if you disagree.

Wei.


* Re: [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations
  2020-01-07 12:13   ` Wei Liu
@ 2020-01-07 13:09     ` Jan Beulich
  2020-01-07 13:37       ` Wei Liu
  2020-01-07 13:48       ` Xia, Hongyan
  0 siblings, 2 replies; 20+ messages in thread
From: Jan Beulich @ 2020-01-07 13:09 UTC (permalink / raw)
  To: Wei Liu
  Cc: Wei Liu, Andrew Cooper, jgrall, Hongyan Xia, xen-devel,
	Roger Pau Monné

On 07.01.2020 13:13, Wei Liu wrote:
> On Tue, Jan 07, 2020 at 12:06:43PM +0000, Hongyan Xia wrote:
>> From: Wei Liu <wei.liu2@citrix.com>
>>
>> They were put into page.h but mm.h is more appropriate.
>>
>> The real reason is that I will be adding some new functions which
>> takes mfn_t. It turns out it is a bit difficult to do in page.h.
>>
>> No functional change.
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> I will commit this trivial patch soon-ish to reduce Honyan's patch queue
> length.

Looks like I simply forgot every time I went through my list of
pending (for the various stages of processing) patches. I guess
patches 3 and 4 are also independent of patch 2 and hence could
go in as well.

Jan


* Re: [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations
  2020-01-07 13:09     ` Jan Beulich
@ 2020-01-07 13:37       ` Wei Liu
  2020-01-07 13:48       ` Xia, Hongyan
  1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2020-01-07 13:37 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Wei Liu, Andrew Cooper, jgrall, Hongyan Xia, xen-devel,
	Roger Pau Monné

On Tue, Jan 07, 2020 at 02:09:05PM +0100, Jan Beulich wrote:
> On 07.01.2020 13:13, Wei Liu wrote:
> > On Tue, Jan 07, 2020 at 12:06:43PM +0000, Hongyan Xia wrote:
> >> From: Wei Liu <wei.liu2@citrix.com>
> >>
> >> They were put into page.h but mm.h is more appropriate.
> >>
> >> The real reason is that I will be adding some new functions which
> >> takes mfn_t. It turns out it is a bit difficult to do in page.h.
> >>
> >> No functional change.
> >>
> >> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >> Acked-by: Jan Beulich <jbeulich@suse.com>
> > 
> > I will commit this trivial patch soon-ish to reduce Honyan's patch queue
> > length.
> 
> Looks like I simply forgot every time I went through my list of
> pending (for the various stages of processing) patches. I guess
> patches 3 and 4 are also independent of patch 2 and hence could
> go in as well.

Sure. I pushed all three patches (1, 3 and 4).

Wei.


* Re: [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations
  2020-01-07 13:09     ` Jan Beulich
  2020-01-07 13:37       ` Wei Liu
@ 2020-01-07 13:48       ` Xia, Hongyan
  2020-01-07 14:03         ` Wei Liu
  1 sibling, 1 reply; 20+ messages in thread
From: Xia, Hongyan @ 2020-01-07 13:48 UTC (permalink / raw)
  To: jbeulich, wl; +Cc: andrew.cooper3, Grall, Julien, xen-devel, roger.pau

On Tue, 2020-01-07 at 14:09 +0100, Jan Beulich wrote:
> ...
> 
> Looks like I simply forgot every time I went through my list of
> pending (for the various stages of processing) patches. I guess
> patches 3 and 4 are also independent of patch 2 and hence could
> go in as well.

If so, looks like patch 7/7 is also in a committable state?

Hongyan

* Re: [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations
  2020-01-07 13:48       ` Xia, Hongyan
@ 2020-01-07 14:03         ` Wei Liu
  0 siblings, 0 replies; 20+ messages in thread
From: Wei Liu @ 2020-01-07 14:03 UTC (permalink / raw)
  To: Xia, Hongyan
  Cc: wl, andrew.cooper3, Grall, Julien, jbeulich, xen-devel, roger.pau

On Tue, Jan 07, 2020 at 01:48:41PM +0000, Xia, Hongyan wrote:
> On Tue, 2020-01-07 at 14:09 +0100, Jan Beulich wrote:
> > ...
> > 
> > Looks like I simply forgot every time I went through my list of
> > pending (for the various stages of processing) patches. I guess
> > patches 3 and 4 are also independent of patch 2 and hence could
> > go in as well.
> 
> If so, looks like patch 7/7 is also in a committable state?

Looks like so. I will commit that one as well.

Thanks for putting in the effort to upstream these patches, Hongyan.

Wei.


* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-07 12:06 ` [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
@ 2020-01-16 12:04   ` Jan Beulich
  2020-01-27 14:33     ` Xia, Hongyan
  2020-01-27 17:49     ` Wei Liu
  0 siblings, 2 replies; 20+ messages in thread
From: Jan Beulich @ 2020-01-16 12:04 UTC (permalink / raw)
  To: Hongyan Xia, Wei Liu
  Cc: xen-devel, jgrall, Roger Pau Monné, Wei Liu, Andrew Cooper

On 07.01.2020 13:06, Hongyan Xia wrote:
> @@ -4992,22 +4993,55 @@ int mmcfg_intercept_write(
>  }
>  
>  void *alloc_xen_pagetable(void)
> +{
> +    mfn_t mfn = alloc_xen_pagetable_new();
> +
> +    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn));
> +}

Isn't it more dangerous to do the mapping this way round than the
opposite (new calls old)? Done the opposite way the new functions
could be switched to their new implementations ahead of the
removal of the old ones, and - if suitably isolated - perhaps
some of their users. Anyway, perhaps more a theoretical remark.

> +void free_xen_pagetable(void *v)
> +{
> +    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
> +
> +    if ( system_state != SYS_STATE_early_boot )
> +        free_xen_pagetable_new(mfn);

The condition is (partly) redundant with what
free_xen_pagetable_new() does. Therefore I'd like to ask that
either the if() be dropped here, or be completed by also
checking v to be non-NULL, at which point this would likely
become just

void free_xen_pagetable(void *v)
{
    if ( v && system_state != SYS_STATE_early_boot )
        free_xen_pagetable_new(virt_to_mfn(v));
}

> +/* v can point to an entry within a table or be NULL */
> +void unmap_xen_pagetable(const void *v)

Why "entry" in the comment?

Jan
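For reference, the wrapper direction under discussion (the old virtual-address API forwarding to the new MFN-based one) can be modelled with a toy frame table. Everything below is illustrative -- the fixed-size table and linear search stand in for Xen's real allocator and mfn/virt conversion:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define FRAME_COUNT 16
#define INVALID_MFN ((uint64_t)~0ULL)

static void *frames[FRAME_COUNT];    /* fake frame table: mfn -> vaddr */

/* New-style API: hands out an MFN, INVALID_MFN on failure. */
static uint64_t alloc_pagetable_new(void)
{
    uint64_t mfn;

    for ( mfn = 0; mfn < FRAME_COUNT; mfn++ )
        if ( frames[mfn] == NULL )
        {
            frames[mfn] = malloc(64);
            return frames[mfn] ? mfn : INVALID_MFN;
        }
    return INVALID_MFN;
}

/* Old-style API as a wrapper, mirroring the hunk quoted above. */
static void *alloc_pagetable(void)
{
    uint64_t mfn = alloc_pagetable_new();

    return mfn == INVALID_MFN ? NULL : frames[mfn];
}

static int wrapper_selftest(void)
{
    void *p = alloc_pagetable();

    return p != NULL && p == frames[0];
}
```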


* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-16 12:04   ` Jan Beulich
@ 2020-01-27 14:33     ` Xia, Hongyan
  2020-01-27 14:43       ` Jan Beulich
  2020-01-27 17:49     ` Wei Liu
  1 sibling, 1 reply; 20+ messages in thread
From: Xia, Hongyan @ 2020-01-27 14:33 UTC (permalink / raw)
  To: jbeulich; +Cc: andrew.cooper3, Grall, Julien, xen-devel, wl, roger.pau

On Thu, 2020-01-16 at 13:04 +0100, Jan Beulich wrote:
> ...
> 
> > +void free_xen_pagetable(void *v)
> > +{
> > +    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
> > +
> > +    if ( system_state != SYS_STATE_early_boot )
> > +        free_xen_pagetable_new(mfn);
> 
> The condition is (partly) redundant with what
> free_xen_pagetable_new() does. Therefore I'd like to ask that
> either the if() be dropped here, or be completed by also
> checking v to be non-NULL, at which point this would likely
> become just
> 
> void free_xen_pagetable(void *v)
> {
>     if ( v && system_state != SYS_STATE_early_boot )
>         free_xen_pagetable_new(virt_to_mfn(v));
> }

You are right. Will change in the next revision.

> > +/* v can point to an entry within a table or be NULL */
> > +void unmap_xen_pagetable(const void *v)
> 
> Why "entry" in the comment?

I think the comment originally meant pointing to the entry in its
parent table, which then meant pointing to this table. It's a bit
confusing. Will reword.

Reflecting back to your comment in v3 about whether the new Xen page
table mapping APIs (map/unmap_xen_pagetable) are really necessary, I
agree in the end they will just do the same thing as
map/unmap_domain_page, although one thing is that the latter suggests
it is trying to map a `domain' page, whose definition probably does not
include Xen page tables, so the name could be a bit confusing (well, we
could argue that Xen page tables are just idle `domain' pages so the
name still holds). If we are happy with using map_domain_page on Xen
PTE tables then I am okay with dropping the new mapping APIs and only
include the new alloc APIs.

Hongyan

* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-27 14:33     ` Xia, Hongyan
@ 2020-01-27 14:43       ` Jan Beulich
  2020-01-27 15:07         ` Xia, Hongyan
  2020-01-27 17:51         ` Wei Liu
  0 siblings, 2 replies; 20+ messages in thread
From: Jan Beulich @ 2020-01-27 14:43 UTC (permalink / raw)
  To: andrew.cooper3, roger.pau, wl, George Dunlap
  Cc: xen-devel, Grall, Julien, Xia, Hongyan

On 27.01.2020 15:33, Xia, Hongyan wrote:
> On Thu, 2020-01-16 at 13:04 +0100, Jan Beulich wrote:
>> ...
>>
>>> +void free_xen_pagetable(void *v)
>>> +{
>>> +    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
>>> +
>>> +    if ( system_state != SYS_STATE_early_boot )
>>> +        free_xen_pagetable_new(mfn);
>>
>> The condition is (partly) redundant with what
>> free_xen_pagetable_new() does. Therefore I'd like to ask that
>> either the if() be dropped here, or be completed by also
>> checking v to be non-NULL, at which point this would likely
>> become just
>>
>> void free_xen_pagetable(void *v)
>> {
>>     if ( v && system_state != SYS_STATE_early_boot )
>>         free_xen_pagetable_new(virt_to_mfn(v));
>> }
> 
> You are right. Will change in the next revision.
> 
>>> +/* v can point to an entry within a table or be NULL */
>>> +void unmap_xen_pagetable(const void *v)
>>
>> Why "entry" in the comment?
> 
> I think the comment originally meant pointing to the entry in its
> parent table, which then meant pointing to this table. It's a bit
> confusing. Will reword.
> 
> Reflecting back to your comment in v3 about whether the new Xen page
> table mapping APIs (map/unmap_xen_pagetable) are really necessary, I
> agree in the end they will just do the same thing as
> map/unmap_domain_page, although one thing is that the latter suggests
> it is trying to map a `domain' page, whose definition probably does not
> include Xen page tables, so the name could be a bit confusing (well, we
> could argue that Xen page tables are just idle `domain' pages so the
> name still holds). If we are happy with using map_domain_page on Xen
> PTE tables then I am okay with dropping the new mapping APIs and only
> include the new alloc APIs.

Anyone on the To: list - opinions?

Thanks, Jan


* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-27 14:43       ` Jan Beulich
@ 2020-01-27 15:07         ` Xia, Hongyan
  2020-01-27 17:13           ` George Dunlap
  2020-01-27 17:51         ` Wei Liu
  1 sibling, 1 reply; 20+ messages in thread
From: Xia, Hongyan @ 2020-01-27 15:07 UTC (permalink / raw)
  To: George.Dunlap, jbeulich, roger.pau, wl, andrew.cooper3
  Cc: xen-devel, Grall,  Julien

On Mon, 2020-01-27 at 15:43 +0100, Jan Beulich wrote:
> On 27.01.2020 15:33, Xia, Hongyan wrote:
> > ...
> > 
> > Reflecting back to your comment in v3 about whether the new Xen
> > page
> > table mapping APIs (map/unmap_xen_pagetable) are really necessary,
> > I
> > agree in the end they will just do the same thing as
> > map/unmap_domain_page, although one thing is that the latter
> > suggests
> > it is trying to map a `domain' page, whose definition probably does
> > not
> > include Xen page tables, so the name could be a bit confusing
> > (well, we
> > could argue that Xen page tables are just idle `domain' pages so
> > the
> > name still holds). If we are happy with using map_domain_page on
> > Xen
> > PTE tables then I am okay with dropping the new mapping APIs and
> > only
> > include the new alloc APIs.
> 
> Anyone on the To: list - opinions?

There is one argument. We already use map_domain_page on non-domain
pages and outside the domheap. For example, the trap handler walks page
tables and print traces via map_domain_page regardless of where the
underlying page is from. The mapped page could just be from Xen.

Hongyan

* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-27 15:07         ` Xia, Hongyan
@ 2020-01-27 17:13           ` George Dunlap
  0 siblings, 0 replies; 20+ messages in thread
From: George Dunlap @ 2020-01-27 17:13 UTC (permalink / raw)
  To: Xia, Hongyan, George.Dunlap, jbeulich, roger.pau, wl, andrew.cooper3
  Cc: xen-devel, Grall,  Julien

On 1/27/20 3:07 PM, Xia, Hongyan wrote:
> On Mon, 2020-01-27 at 15:43 +0100, Jan Beulich wrote:
>> On 27.01.2020 15:33, Xia, Hongyan wrote:
>>> ...
>>>
>>> Reflecting back to your comment in v3 about whether the new Xen
>>> page
>>> table mapping APIs (map/unmap_xen_pagetable) are really necessary,
>>> I
>>> agree in the end they will just do the same thing as
>>> map/unmap_domain_page, although one thing is that the latter
>>> suggests
>>> it is trying to map a `domain' page, whose definition probably does
>>> not
>>> include Xen page tables, so the name could be a bit confusing
>>> (well, we
>>> could argue that Xen page tables are just idle `domain' pages so
>>> the
>>> name still holds). If we are happy with using map_domain_page on
>>> Xen
>>> PTE tables then I am okay with dropping the new mapping APIs and
>>> only
>>> include the new alloc APIs.
>>
>> Anyone on the To: list - opinions?
> 
> There is one argument. We already use map_domain_page on non-domain
> pages and outside the domheap. For example, the trap handler walks page
> tables and print traces via map_domain_page regardless of where the
> underlying page is from. The mapped page could just be from Xen.

Yes, I've long sort of mentally filtered out the 'domain' part of
"map_domain_page()"; it's really an artifact of a bygone era, when there
were separate xen and domain heaps.  (In fact, it's really "map domheap
page", and at the moment we only have a domheap.)

It might be worth thinking about doing a global
s/map_domain_page/map_page/; but until then, I think it's fine to use
"map_domain_page" for "map pages in the domheap", given that there *is*
only the domheap.

 -George


* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-16 12:04   ` Jan Beulich
  2020-01-27 14:33     ` Xia, Hongyan
@ 2020-01-27 17:49     ` Wei Liu
  1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2020-01-27 17:49 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Andrew Cooper, jgrall, Hongyan Xia, xen-devel,
	Roger Pau Monné

On Thu, Jan 16, 2020 at 01:04:16PM +0100, Jan Beulich wrote:
[...]
> > +/* v can point to an entry within a table or be NULL */
> > +void unmap_xen_pagetable(const void *v)
> 
> Why "entry" in the comment?

IIRC there would be cases where unmap_xen_pagetable would be called with a
pointer to a PTE.

The comment was mostly a note to remind me that when I got around to
implementing that function it would need to do v&PAGE_MASK.

This may or may not apply to this patch series in its current form.

Wei.

> 
> Jan


* Re: [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables
  2020-01-27 14:43       ` Jan Beulich
  2020-01-27 15:07         ` Xia, Hongyan
@ 2020-01-27 17:51         ` Wei Liu
  1 sibling, 0 replies; 20+ messages in thread
From: Wei Liu @ 2020-01-27 17:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: wl, George Dunlap, andrew.cooper3, Grall, Julien, Xia, Hongyan,
	xen-devel, roger.pau

On Mon, Jan 27, 2020 at 03:43:12PM +0100, Jan Beulich wrote:
> On 27.01.2020 15:33, Xia, Hongyan wrote:
> > On Thu, 2020-01-16 at 13:04 +0100, Jan Beulich wrote:
> >> ...
> >>
> >>> +void free_xen_pagetable(void *v)
> >>> +{
> >>> +    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
> >>> +
> >>> +    if ( system_state != SYS_STATE_early_boot )
> >>> +        free_xen_pagetable_new(mfn);
> >>
> >> The condition is (partly) redundant with what
> >> free_xen_pagetable_new() does. Therefore I'd like to ask that
> >> either the if() be dropped here, or be completed by also
> >> checking v to be non-NULL, at which point this would likely
> >> become just
> >>
> >> void free_xen_pagetable(void *v)
> >> {
> >>     if ( v && system_state != SYS_STATE_early_boot )
> >>         free_xen_pagetable_new(virt_to_mfn(v));
> >> }
> > 
> > You are right. Will change in the next revision.
> > 
> >>> +/* v can point to an entry within a table or be NULL */
> >>> +void unmap_xen_pagetable(const void *v)
> >>
> >> Why "entry" in the comment?
> > 
> > I think the comment originally meant pointing to the entry in its
> > parent table, which then meant pointing to this table. It's a bit
> > confusing. Will reword.
> > 
> > Reflecting back to your comment in v3 about whether the new Xen page
> > table mapping APIs (map/unmap_xen_pagetable) are really necessary, I
> > agree in the end they will just do the same thing as
> > map/unmap_domain_page, although one thing is that the latter suggests
> > it is trying to map a `domain' page, whose definition probably does not
> > include Xen page tables, so the name could be a bit confusing (well, we
> > could argue that Xen page tables are just idle `domain' pages so the
> > name still holds). If we are happy with using map_domain_page on Xen
> > PTE tables then I am okay with dropping the new mapping APIs and only
> > include the new alloc APIs.
> 
> Anyone on the To: list - opinions?

I'm not too fussed about this.

I think eventually we will just change map_domain_page to map_page and
use that directly. The intermediate steps aren't terribly interesting to
me.

Wei.

> 
> Thanks, Jan


end of thread, other threads:[~2020-01-27 17:52 UTC | newest]

Thread overview: 20+ messages
2020-01-07 12:06 [Xen-devel] [PATCH v5 0/7] Add alternative API for XEN PTEs Hongyan Xia
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 1/7] x86: move some xen mm function declarations Hongyan Xia
2020-01-07 12:13   ` Wei Liu
2020-01-07 13:09     ` Jan Beulich
2020-01-07 13:37       ` Wei Liu
2020-01-07 13:48       ` Xia, Hongyan
2020-01-07 14:03         ` Wei Liu
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 2/7] x86: introduce a new set of APIs to manage Xen page tables Hongyan Xia
2020-01-16 12:04   ` Jan Beulich
2020-01-27 14:33     ` Xia, Hongyan
2020-01-27 14:43       ` Jan Beulich
2020-01-27 15:07         ` Xia, Hongyan
2020-01-27 17:13           ` George Dunlap
2020-01-27 17:51         ` Wei Liu
2020-01-27 17:49     ` Wei Liu
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 3/7] x86/mm: introduce l{1, 2}t local variables to map_pages_to_xen Hongyan Xia
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 4/7] x86/mm: introduce l{1, 2}t local variables to modify_xen_mappings Hongyan Xia
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 5/7] x86/mm: map_pages_to_xen would better have one exit path Hongyan Xia
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 6/7] x86/mm: make sure there is one exit path for modify_xen_mappings Hongyan Xia
2020-01-07 12:06 ` [Xen-devel] [PATCH v5 7/7] x86/mm: change pl*e to l*t in virt_to_xen_l*e Hongyan Xia
