* [PATCH v2 0/5] use new API for Xen page tables
@ 2020-04-08 13:36 Hongyan Xia
  2020-04-08 13:36 ` [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
                   ` (4 more replies)
  0 siblings, 5 replies; 22+ messages in thread
From: Hongyan Xia @ 2020-04-08 13:36 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Hongyan Xia <hongyxia@amazon.com>

This small series basically just rewrites functions to use the new
API for mapping and unmapping PTEs. Each patch is independent.

Apart from mapping and unmapping page tables, no functional change is
intended.

---
Changed in v2:
- I kept UNMAP_DOMAIN_PAGE() for now in v2, but if people say it is
  overkill in some cases and unmap_domain_page() should be used, I am
  okay with that and can make the change. (See the sketch after this
  list for the difference between the two.)
- code cleanup and style fixes.
- unmap as early as possible.
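
For reference, the difference between the two unmap flavours is roughly
the following; this is a sketch from memory of the helpers in
xen/include/xen/domain_page.h, not a verbatim quote:

    /* Tears down the mapping; the pointer is left dangling. */
    void unmap_domain_page(const void *va);

    /*
     * Convenience wrapper which additionally NULLs the pointer, so a
     * stale mapping cannot be used afterwards. Arguably overkill for a
     * local variable that immediately goes out of scope.
     */
    #define UNMAP_DOMAIN_PAGE(p) do { \
        unmap_domain_page(p);         \
        (p) = NULL;                   \
    } while ( false )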

Wei Liu (5):
  x86/shim: map and unmap page tables in replace_va_mapping
  x86_64/mm: map and unmap page tables in m2p_mapped
  x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  x86_64/mm: map and unmap page tables in destroy_m2p_mapping

 xen/arch/x86/pv/shim.c     |  9 ++++---
 xen/arch/x86/x86_64/mm.c   | 55 ++++++++++++++++++++++----------------
 xen/include/asm-x86/page.h | 13 +++++++++
 3 files changed, 50 insertions(+), 27 deletions(-)

-- 
2.24.1.AMZN




* [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping
  2020-04-08 13:36 [PATCH v2 0/5] use new API for Xen page tables Hongyan Xia
@ 2020-04-08 13:36 ` Hongyan Xia
  2020-04-09  9:42   ` Jan Beulich
  2020-04-08 13:36 ` [PATCH v2 2/5] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Hongyan Xia @ 2020-04-08 13:36 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Also, introduce lYe_from_lXe() macros which do not rely on the direct
map when walking page tables. Unfortunately, they cannot be inline
functions due to the header dependency on domain_page.h, so keep them as
macros just like map_lYt_from_lXe().
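
For illustration, the inline-function form ruled out above would look
roughly like the sketch below (hypothetical, not part of the patch); it
needs the map/unmap machinery from domain_page.h, which page.h cannot
include:

    static inline l3_pgentry_t l3e_from_l4e(l4_pgentry_t l4e,
                                            unsigned int offset)
    {
        const l3_pgentry_t *l3t = map_l3t_from_l4e(l4e);
        l3_pgentry_t l3e = l3t[offset];

        unmap_domain_page(l3t);
        return l3e;
    }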

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- unmap as early as possible instead of unmapping all at the end.
- use lYe_from_lXe() macros and lift them from a later patch to this
  patch.
- const qualify pointers in new macros.
---
 xen/arch/x86/pv/shim.c     |  9 +++++----
 xen/include/asm-x86/page.h | 13 +++++++++++++
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index ed2ece8a8a..28d2076065 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -168,16 +168,17 @@ const struct platform_bad_page *__init pv_shim_reserved_pages(unsigned int *size
 static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
                                       unsigned long va, mfn_t mfn)
 {
-    l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
-    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
-    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
-    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
+    l4_pgentry_t l4e = l4start[l4_table_offset(va)];
+    l3_pgentry_t l3e = l3e_from_l4e(l4e, l3_table_offset(va));
+    l2_pgentry_t l2e = l2e_from_l3e(l3e, l2_table_offset(va));
+    l1_pgentry_t *pl1e = map_l1t_from_l2e(l2e) + l1_table_offset(va);
     struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
 
     put_page_and_type(page);
 
     *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
                                                       : COMPAT_L1_PROT));
+    UNMAP_DOMAIN_PAGE(pl1e);
 }
 
 static void evtchn_reserve(struct domain *d, unsigned int port)
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index c98d8f5ede..1e16f3980d 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -196,6 +196,19 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
 #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
 
+/* Unlike lYe_to_lXe(), lXe_from_lYe() do not rely on the direct map. */
+#define l2e_from_l3e(l3e, offset) ({                        \
+        const l2_pgentry_t *l2t = map_l2t_from_l3e(l3e);    \
+        l2_pgentry_t l2e = l2t[offset];                     \
+        UNMAP_DOMAIN_PAGE(l2t);                             \
+        l2e; })
+
+#define l3e_from_l4e(l4e, offset) ({                        \
+        const l3_pgentry_t *l3t = map_l3t_from_l4e(l4e);    \
+        l3_pgentry_t l3e = l3t[offset];                     \
+        UNMAP_DOMAIN_PAGE(l3t);                             \
+        l3e; })
+
 /* Given a virtual address, get an entry offset into a page table. */
 #define l1_table_offset(a)         \
     (((a) >> L1_PAGETABLE_SHIFT) & (L1_PAGETABLE_ENTRIES - 1))
-- 
2.24.1.AMZN




* [PATCH v2 2/5] x86_64/mm: map and unmap page tables in m2p_mapped
  2020-04-08 13:36 [PATCH v2 0/5] use new API for Xen page tables Hongyan Xia
  2020-04-08 13:36 ` [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
@ 2020-04-08 13:36 ` Hongyan Xia
  2020-04-15  7:54   ` Jan Beulich
  2020-04-08 13:36 ` [PATCH v2 3/5] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Hongyan Xia @ 2020-04-08 13:36 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- avoid adding goto labels; simply get the PTE and unmap quickly.
- code style fixes.
---
 xen/arch/x86/x86_64/mm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cee836ec37..8e0caa7327 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -129,14 +129,14 @@ static mfn_t alloc_hotadd_mfn(struct mem_hotadd_info *info)
 static int m2p_mapped(unsigned long spfn)
 {
     unsigned long va;
-    l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt;
+    l3_pgentry_t l3e_ro_mpt;
+    l2_pgentry_t l2e_ro_mpt;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+    l3e_ro_mpt = l3e_from_l4e(idle_pg_table[l4_table_offset(va)],
+                              l3_table_offset(va));
 
-    switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
-             (_PAGE_PRESENT |_PAGE_PSE))
+    switch ( l3e_get_flags(l3e_ro_mpt) & (_PAGE_PRESENT | _PAGE_PSE) )
     {
         case _PAGE_PSE|_PAGE_PRESENT:
             return M2P_1G_MAPPED;
@@ -146,9 +146,9 @@ static int m2p_mapped(unsigned long spfn)
         default:
             return M2P_NO_MAPPED;
     }
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2e_ro_mpt = l2e_from_l3e(l3e_ro_mpt, l2_table_offset(va));
 
-    if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
+    if ( l2e_get_flags(l2e_ro_mpt) & _PAGE_PRESENT )
         return M2P_2M_MAPPED;
 
     return M2P_NO_MAPPED;
-- 
2.24.1.AMZN




* [PATCH v2 3/5] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  2020-04-08 13:36 [PATCH v2 0/5] use new API for Xen page tables Hongyan Xia
  2020-04-08 13:36 ` [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
  2020-04-08 13:36 ` [PATCH v2 2/5] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
@ 2020-04-08 13:36 ` Hongyan Xia
  2020-04-15  7:55   ` Jan Beulich
  2020-04-08 13:36 ` [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
  2020-04-08 13:36 ` [PATCH v2 5/5] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
  4 siblings, 1 reply; 22+ messages in thread
From: Hongyan Xia @ 2020-04-08 13:36 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Fetch the lYe by mapping and unmapping the lXe instead of using the
direct map; this is now done via the lYe_from_lXe() helpers.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- the introduction of the macros is now lifted to a previous patch.
---
 xen/arch/x86/x86_64/mm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 8e0caa7327..d23c7e4f5b 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -167,14 +167,14 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v += n << PAGE_SHIFT )
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3e = l3e_from_l4e(idle_pg_table[l4_table_offset(v)],
+                           l3_table_offset(v));
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
         if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
         {
             n = L1_PAGETABLE_ENTRIES;
-            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+            l2e = l2e_from_l3e(l3e, l2_table_offset(v));
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
             m2p_start_mfn = l2e_get_mfn(l2e);
@@ -195,11 +195,11 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3e = l3e_from_l4e(idle_pg_table[l4_table_offset(v)],
+                           l3_table_offset(v));
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+        l2e = l2e_from_l3e(l3e, l2_table_offset(v));
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
         m2p_start_mfn = l2e_get_mfn(l2e);
-- 
2.24.1.AMZN




* [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  2020-04-08 13:36 [PATCH v2 0/5] use new API for Xen page tables Hongyan Xia
                   ` (2 preceding siblings ...)
  2020-04-08 13:36 ` [PATCH v2 3/5] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
@ 2020-04-08 13:36 ` Hongyan Xia
  2020-04-15  8:21   ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
  2020-04-15  8:25   ` [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Jan Beulich
  2020-04-08 13:36 ` [PATCH v2 5/5] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
  4 siblings, 2 replies; 22+ messages in thread
From: Hongyan Xia @ 2020-04-08 13:36 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- there is pretty much no point in keeping l3_ro_mpt mapped; just fetch
  the l3e instead, which also cleans up the code.
---
 xen/arch/x86/x86_64/mm.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d23c7e4f5b..ae8f4dd0b9 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -220,7 +220,7 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva, pt_pfn;
     unsigned long smap = info->spfn, emap = info->spfn;
 
-    l3_pgentry_t *l3_ro_mpt;
+    l3_pgentry_t l3e_ro_mpt;
     l2_pgentry_t *l2_ro_mpt;
 
     if ( smap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
@@ -229,11 +229,13 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
+    l3e_ro_mpt = l3e_from_l4e(idle_pg_table[
+                                  l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
+                              l3_table_offset(HIRO_COMPAT_MPT_VIRT_START));
 
-    ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
+    ASSERT(l3e_get_flags(l3e_ro_mpt) & _PAGE_PRESENT);
 
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
+    l2_ro_mpt = map_l2t_from_l3e(l3e_ro_mpt);
 
     for ( i = smap; i < emap; )
     {
@@ -255,6 +257,8 @@ static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
         i += 1UL << (L2_PAGETABLE_SHIFT - 2);
     }
 
+    UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+
     return;
 }
 
-- 
2.24.1.AMZN




* [PATCH v2 5/5] x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  2020-04-08 13:36 [PATCH v2 0/5] use new API for Xen page tables Hongyan Xia
                   ` (3 preceding siblings ...)
  2020-04-08 13:36 ` [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
@ 2020-04-08 13:36 ` Hongyan Xia
  2020-04-15  8:32   ` Jan Beulich
  4 siblings, 1 reply; 22+ messages in thread
From: Hongyan Xia @ 2020-04-08 13:36 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- no point in re-mapping l2t because it is exactly the same as
  l2_ro_mpt.
- point l2_ro_mpt to the entry instead of doing l2_table_offset() all
  the time.
---
 xen/arch/x86/x86_64/mm.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index ae8f4dd0b9..ec5269e7fc 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -268,7 +268,8 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_l3t_from_l4e(
+                    idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
 
     /*
      * No need to clean m2p structure existing before the hotplug
@@ -290,26 +291,30 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
-        if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
+        l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]) +
+                    l2_table_offset(va);
+        if ( !(l2e_get_flags(*l2_ro_mpt) & _PAGE_PRESENT) )
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                     (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
+            UNMAP_DOMAIN_PAGE(l2_ro_mpt);
             continue;
         }
 
-        pt_pfn = l2e_get_pfn(l2_ro_mpt[l2_table_offset(va)]);
+        pt_pfn = l2e_get_pfn(*l2_ro_mpt);
         if ( hotadd_mem_valid(pt_pfn, info) )
         {
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
-            l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_empty());
+            l2e_write(l2_ro_mpt, l2e_empty());
         }
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
+        UNMAP_DOMAIN_PAGE(l2_ro_mpt);
     }
 
+    UNMAP_DOMAIN_PAGE(l3_ro_mpt);
+
     destroy_compat_m2p_mapping(info);
 
     /* Brute-Force flush all TLB */
-- 
2.24.1.AMZN




* Re: [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping
  2020-04-08 13:36 ` [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
@ 2020-04-09  9:42   ` Jan Beulich
  2020-04-14 16:53     ` Hongyan Xia
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2020-04-09  9:42 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 08.04.2020 15:36, Hongyan Xia wrote:
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -168,16 +168,17 @@ const struct platform_bad_page *__init pv_shim_reserved_pages(unsigned int *size
>  static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
>                                        unsigned long va, mfn_t mfn)
>  {
> -    l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
> -    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
> -    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
> -    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
> +    l4_pgentry_t l4e = l4start[l4_table_offset(va)];
> +    l3_pgentry_t l3e = l3e_from_l4e(l4e, l3_table_offset(va));
> +    l2_pgentry_t l2e = l2e_from_l3e(l3e, l2_table_offset(va));
> +    l1_pgentry_t *pl1e = map_l1t_from_l2e(l2e) + l1_table_offset(va);
>      struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
>  
>      put_page_and_type(page);
>  
>      *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
>                                                        : COMPAT_L1_PROT));
> +    UNMAP_DOMAIN_PAGE(pl1e);
>  }

As said before, here and below I think it should be unmap_domain_page().

> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -196,6 +196,19 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
>  #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
>  #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
>  
> +/* Unlike lYe_to_lXe(), lXe_from_lYe() do not rely on the direct map. */
> +#define l2e_from_l3e(l3e, offset) ({                        \
> +        const l2_pgentry_t *l2t = map_l2t_from_l3e(l3e);    \
> +        l2_pgentry_t l2e = l2t[offset];                     \
> +        UNMAP_DOMAIN_PAGE(l2t);                             \
> +        l2e; })
> +
> +#define l3e_from_l4e(l4e, offset) ({                        \
> +        const l3_pgentry_t *l3t = map_l3t_from_l4e(l4e);    \
> +        l3_pgentry_t l3e = l3t[offset];                     \
> +        UNMAP_DOMAIN_PAGE(l3t);                             \
> +        l3e; })

I think l1e_from_l2e() should be introduced at the same time, even
if for now it's unused. I also think, like we do elsewhere, that
macro-local variables would better have _ suffixes, to avoid
possible variable aliasing issues.
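
For instance, a sketch combining both suggestions (hypothetical, not
the final code) could read:

    #define l1e_from_l2e(l2e, offset) ({                        \
            const l1_pgentry_t *l1t_ = map_l1t_from_l2e(l2e);   \
            l1_pgentry_t l1e_ = l1t_[offset];                   \
            UNMAP_DOMAIN_PAGE(l1t_);                            \
            l1e_; })

With the _ suffixes, an l1t or l1e appearing in the macro arguments can
no longer be captured by the macro's own locals.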

Jan



* Re: [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping
  2020-04-09  9:42   ` Jan Beulich
@ 2020-04-14 16:53     ` Hongyan Xia
  2020-04-15  6:13       ` Jan Beulich
  0 siblings, 1 reply; 22+ messages in thread
From: Hongyan Xia @ 2020-04-14 16:53 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On Thu, 2020-04-09 at 11:42 +0200, Jan Beulich wrote:
> On 08.04.2020 15:36, Hongyan Xia wrote:
> > --- a/xen/arch/x86/pv/shim.c
> > +++ b/xen/arch/x86/pv/shim.c
> > @@ -168,16 +168,17 @@ const struct platform_bad_page *__init
> > pv_shim_reserved_pages(unsigned int *size
> >  static void __init replace_va_mapping(struct domain *d,
> > l4_pgentry_t *l4start,
> >                                        unsigned long va, mfn_t mfn)
> >  {
> > -    l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
> > -    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
> > -    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
> > -    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
> > +    l4_pgentry_t l4e = l4start[l4_table_offset(va)];
> > +    l3_pgentry_t l3e = l3e_from_l4e(l4e, l3_table_offset(va));
> > +    l2_pgentry_t l2e = l2e_from_l3e(l3e, l2_table_offset(va));
> > +    l1_pgentry_t *pl1e = map_l1t_from_l2e(l2e) +
> > l1_table_offset(va);
> >      struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
> >  
> >      put_page_and_type(page);
> >  
> >      *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
> >                                                        :
> > COMPAT_L1_PROT));
> > +    UNMAP_DOMAIN_PAGE(pl1e);
> >  }
> 
> As said before, here and below I think it should be
> unmap_domain_page().
> 
> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -196,6 +196,19 @@ static inline l4_pgentry_t
> > l4e_from_paddr(paddr_t pa, unsigned int flags)
> >  #define map_l2t_from_l3e(x)        (l2_pgentry_t
> > *)map_domain_page(l3e_get_mfn(x))
> >  #define map_l3t_from_l4e(x)        (l3_pgentry_t
> > *)map_domain_page(l4e_get_mfn(x))
> >  
> > +/* Unlike lYe_to_lXe(), lXe_from_lYe() do not rely on the direct
> > map. */
> > +#define l2e_from_l3e(l3e, offset) ({                        \
> > +        const l2_pgentry_t *l2t = map_l2t_from_l3e(l3e);    \
> > +        l2_pgentry_t l2e = l2t[offset];                     \
> > +        UNMAP_DOMAIN_PAGE(l2t);                             \
> > +        l2e; })
> > +
> > +#define l3e_from_l4e(l4e, offset) ({                        \
> > +        const l3_pgentry_t *l3t = map_l3t_from_l4e(l4e);    \
> > +        l3_pgentry_t l3e = l3t[offset];                     \
> > +        UNMAP_DOMAIN_PAGE(l3t);                             \
> > +        l3e; })
> 
> I think l1e_from_l2e() should be introduced at the same time, even
> if for now it's unused. I also think, like we do elsewhere, that
> macro-local variables would better have _ suffixes, to avoid
> possible variable aliasing issues.

Shall I address the comments and send a new rev now, or is this small
series still being reviewed?

Hongyan




* Re: [PATCH v2 1/5] x86/shim: map and unmap page tables in replace_va_mapping
  2020-04-14 16:53     ` Hongyan Xia
@ 2020-04-15  6:13       ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  6:13 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 14.04.2020 18:53, Hongyan Xia wrote:
> On Thu, 2020-04-09 at 11:42 +0200, Jan Beulich wrote:
>> On 08.04.2020 15:36, Hongyan Xia wrote:
>>> --- a/xen/arch/x86/pv/shim.c
>>> +++ b/xen/arch/x86/pv/shim.c
>>> @@ -168,16 +168,17 @@ const struct platform_bad_page *__init
>>> pv_shim_reserved_pages(unsigned int *size
>>>  static void __init replace_va_mapping(struct domain *d,
>>> l4_pgentry_t *l4start,
>>>                                        unsigned long va, mfn_t mfn)
>>>  {
>>> -    l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
>>> -    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
>>> -    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
>>> -    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
>>> +    l4_pgentry_t l4e = l4start[l4_table_offset(va)];
>>> +    l3_pgentry_t l3e = l3e_from_l4e(l4e, l3_table_offset(va));
>>> +    l2_pgentry_t l2e = l2e_from_l3e(l3e, l2_table_offset(va));
>>> +    l1_pgentry_t *pl1e = map_l1t_from_l2e(l2e) +
>>> l1_table_offset(va);
>>>      struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
>>>  
>>>      put_page_and_type(page);
>>>  
>>>      *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
>>>                                                        :
>>> COMPAT_L1_PROT));
>>> +    UNMAP_DOMAIN_PAGE(pl1e);
>>>  }
>>
>> As said before, here and below I think it should be
>> unmap_domain_page().
>>
>>> --- a/xen/include/asm-x86/page.h
>>> +++ b/xen/include/asm-x86/page.h
>>> @@ -196,6 +196,19 @@ static inline l4_pgentry_t
>>> l4e_from_paddr(paddr_t pa, unsigned int flags)
>>>  #define map_l2t_from_l3e(x)        (l2_pgentry_t
>>> *)map_domain_page(l3e_get_mfn(x))
>>>  #define map_l3t_from_l4e(x)        (l3_pgentry_t
>>> *)map_domain_page(l4e_get_mfn(x))
>>>  
>>> +/* Unlike lYe_to_lXe(), lXe_from_lYe() do not rely on the direct
>>> map. */
>>> +#define l2e_from_l3e(l3e, offset) ({                        \
>>> +        const l2_pgentry_t *l2t = map_l2t_from_l3e(l3e);    \
>>> +        l2_pgentry_t l2e = l2t[offset];                     \
>>> +        UNMAP_DOMAIN_PAGE(l2t);                             \
>>> +        l2e; })
>>> +
>>> +#define l3e_from_l4e(l4e, offset) ({                        \
>>> +        const l3_pgentry_t *l3t = map_l3t_from_l4e(l4e);    \
>>> +        l3_pgentry_t l3e = l3t[offset];                     \
>>> +        UNMAP_DOMAIN_PAGE(l3t);                             \
>>> +        l3e; })
>>
>> I think l1e_from_l2e() should be introduced at the same time, even
>> if for now it's unused. I also think, like we do elsewhere, that
>> macro-local variables would better have _ suffixes, to avoid
>> possible variable aliasing issues.
> 
> Shall I address the comments and send a new rev now, or is this small
> series still being reviewed?

I didn't get to look at patches 2 thru 5 yet, if this (partly)
answers the question.

Jan



* Re: [PATCH v2 2/5] x86_64/mm: map and unmap page tables in m2p_mapped
  2020-04-08 13:36 ` [PATCH v2 2/5] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
@ 2020-04-15  7:54   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  7:54 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 08.04.2020 15:36, Hongyan Xia wrote:
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -129,14 +129,14 @@ static mfn_t alloc_hotadd_mfn(struct mem_hotadd_info *info)
>  static int m2p_mapped(unsigned long spfn)
>  {
>      unsigned long va;
> -    l3_pgentry_t *l3_ro_mpt;
> -    l2_pgentry_t *l2_ro_mpt;
> +    l3_pgentry_t l3e_ro_mpt;
> +    l2_pgentry_t l2e_ro_mpt;

Preferably with the _ro_mpt tags here dropped
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 3/5] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  2020-04-08 13:36 ` [PATCH v2 3/5] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
@ 2020-04-15  7:55   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  7:55 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 08.04.2020 15:36, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Fetch the lYe by mapping and unmapping the lXe instead of using the
> direct map; this is now done via the lYe_from_lXe() helpers.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* [PATCH 0/2] x86: high compat r/o M2P table handling adjustments
  2020-04-08 13:36 ` [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
@ 2020-04-15  8:21   ` Jan Beulich
  2020-04-15  8:23     ` [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling Jan Beulich
                       ` (2 more replies)
  2020-04-15  8:25   ` [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Jan Beulich
  1 sibling, 3 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  8:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Hongyan Xia, Andrew Cooper, julien, Wei Liu, Roger Pau Monné

While looking at "x86_64/mm: map and unmap page tables in
destroy_compat_m2p_mapping" it occurred to me that the mappings
changed there can be dropped altogether, as can the linear
address range used for this. Note that both patches have only
been lightly tested so far; I guess the 2nd one in particular
may still have issues.

1: x86: drop unnecessary page table walking in compat r/o M2P handling
2: x86: drop high compat r/o M2P table address range

Jan



* [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling
  2020-04-15  8:21   ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
@ 2020-04-15  8:23     ` Jan Beulich
  2020-04-15  9:59       ` Hongyan Xia
  2020-04-15 11:16       ` Wei Liu
  2020-04-15  8:23     ` [PATCH 2/2] x86: drop high compat r/o M2P table address range Jan Beulich
  2020-04-28  6:14     ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
  2 siblings, 2 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  8:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Hongyan Xia, Andrew Cooper, julien, Wei Liu, Roger Pau Monné

We have a global variable where the necessary L2 table is recorded; no
need to inspect L4 and L3 tables (and this way a few less places will
eventually need adjustment when we want to support 5-level page tables).
Also avoid setting up the L3 entry, as the address range never gets used
anyway (it'll be dropped altogether in a subsequent patch).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -219,9 +219,7 @@ static void destroy_compat_m2p_mapping(s
 {
     unsigned long i, va, rwva, pt_pfn;
     unsigned long smap = info->spfn, emap = info->spfn;
-
-    l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt;
+    l2_pgentry_t *l2_ro_mpt = compat_idle_pg_table_l2;
 
     if ( smap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         return;
@@ -229,12 +227,6 @@ static void destroy_compat_m2p_mapping(s
     if ( emap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         emap = (RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
-
-    ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]) & _PAGE_PRESENT);
-
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
-
     for ( i = smap; i < emap; )
     {
         va = HIRO_COMPAT_MPT_VIRT_START +
@@ -323,7 +315,6 @@ static int setup_compat_m2p_table(struct
     unsigned long i, va, smap, emap, rwva, epfn = info->epfn;
     mfn_t mfn;
     unsigned int n;
-    l3_pgentry_t *l3_ro_mpt = NULL;
     l2_pgentry_t *l2_ro_mpt = NULL;
     int err = 0;
 
@@ -342,13 +333,7 @@ static int setup_compat_m2p_table(struct
     emap = ( (epfn + ((1UL << (L2_PAGETABLE_SHIFT - 2)) - 1 )) &
                 ~((1UL << (L2_PAGETABLE_SHIFT - 2)) - 1) );
 
-    va = HIRO_COMPAT_MPT_VIRT_START +
-         smap * sizeof(*compat_machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
-
-    ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) & _PAGE_PRESENT);
-
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2_ro_mpt = compat_idle_pg_table_l2;
 
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
 #define CNT ((sizeof(*frame_table) & -sizeof(*frame_table)) / \
@@ -627,16 +612,10 @@ void __init paging_init(void)
 #undef MFN
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
-    BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
-                 l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
-        HIRO_COMPAT_MPT_VIRT_START)]);
     if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
         goto nomem;
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
-    l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
-              l3e_from_paddr(__pa(l2_ro_mpt), __PAGE_HYPERVISOR_RO));
     l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));




* [PATCH 2/2] x86: drop high compat r/o M2P table address range
  2020-04-15  8:21   ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
  2020-04-15  8:23     ` [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling Jan Beulich
@ 2020-04-15  8:23     ` Jan Beulich
  2020-04-27 19:52       ` Wei Liu
  2020-04-28  6:14     ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
  2 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  8:23 UTC (permalink / raw)
  To: xen-devel
  Cc: Hongyan Xia, Andrew Cooper, julien, Wei Liu, Roger Pau Monné

Now that we don't properly hook things up into the page tables anymore
we also don't need to set aside an address range. Drop it, using
compat_idle_pg_table_l2[] simply (explicitly) from slot 0.

While doing the re-arrangement, which is accompanied by the dropping or
replacing of some local variables, restrict the scopes of some further
ones at the same time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: With the changed usage perhaps we want to also rename
     compat_idle_pg_table_l2[] (to e.g. compat_idle_l2_entries[])?

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1454,8 +1454,7 @@ static bool pae_xen_mappings_check(const
 void init_xen_pae_l2_slots(l2_pgentry_t *l2t, const struct domain *d)
 {
     memcpy(&l2t[COMPAT_L2_PAGETABLE_FIRST_XEN_SLOT(d)],
-           &compat_idle_pg_table_l2[
-               l2_table_offset(HIRO_COMPAT_MPT_VIRT_START)],
+           compat_idle_pg_table_l2,
            COMPAT_L2_PAGETABLE_XEN_SLOTS(d) * sizeof(*l2t));
 }
 
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -217,9 +217,7 @@ static int share_hotadd_m2p_table(struct
 
 static void destroy_compat_m2p_mapping(struct mem_hotadd_info *info)
 {
-    unsigned long i, va, rwva, pt_pfn;
-    unsigned long smap = info->spfn, emap = info->spfn;
-    l2_pgentry_t *l2_ro_mpt = compat_idle_pg_table_l2;
+    unsigned long i, smap = info->spfn, emap = info->spfn;
 
     if ( smap > ((RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START) >> 2) )
         return;
@@ -229,18 +227,19 @@ static void destroy_compat_m2p_mapping(s
 
     for ( i = smap; i < emap; )
     {
-        va = HIRO_COMPAT_MPT_VIRT_START +
-              i * sizeof(*compat_machine_to_phys_mapping);
-        rwva = RDWR_COMPAT_MPT_VIRT_START +
-             i * sizeof(*compat_machine_to_phys_mapping);
-        if ( l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT )
+        unsigned int off = i * sizeof(*compat_machine_to_phys_mapping);
+        l2_pgentry_t *pl2e = compat_idle_pg_table_l2 + l2_table_offset(off);
+
+        if ( l2e_get_flags(*pl2e) & _PAGE_PRESENT )
         {
-            pt_pfn = l2e_get_pfn(l2_ro_mpt[l2_table_offset(va)]);
+            unsigned long pt_pfn = l2e_get_pfn(*pl2e);
+
             if ( hotadd_mem_valid(pt_pfn, info) )
             {
-                destroy_xen_mappings(rwva, rwva +
-                        (1UL << L2_PAGETABLE_SHIFT));
-                l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_empty());
+                unsigned long rwva = RDWR_COMPAT_MPT_VIRT_START + off;
+
+                destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
+                l2e_write(pl2e, l2e_empty());
             }
         }
 
@@ -312,10 +311,9 @@ static void destroy_m2p_mapping(struct m
  */
 static int setup_compat_m2p_table(struct mem_hotadd_info *info)
 {
-    unsigned long i, va, smap, emap, rwva, epfn = info->epfn;
+    unsigned long i, smap, emap, epfn = info->epfn;
     mfn_t mfn;
     unsigned int n;
-    l2_pgentry_t *l2_ro_mpt = NULL;
     int err = 0;
 
     smap = info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 2)) -1));
@@ -333,8 +331,6 @@ static int setup_compat_m2p_table(struct
     emap = ( (epfn + ((1UL << (L2_PAGETABLE_SHIFT - 2)) - 1 )) &
                 ~((1UL << (L2_PAGETABLE_SHIFT - 2)) - 1) );
 
-    l2_ro_mpt = compat_idle_pg_table_l2;
-
 #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
 #define CNT ((sizeof(*frame_table) & -sizeof(*frame_table)) / \
              sizeof(*compat_machine_to_phys_mapping))
@@ -343,13 +339,11 @@ static int setup_compat_m2p_table(struct
 
     for ( i = smap; i < emap; i += (1UL << (L2_PAGETABLE_SHIFT - 2)) )
     {
-        va = HIRO_COMPAT_MPT_VIRT_START +
-              i * sizeof(*compat_machine_to_phys_mapping);
-
-        rwva = RDWR_COMPAT_MPT_VIRT_START +
-                i * sizeof(*compat_machine_to_phys_mapping);
+        unsigned int off = i * sizeof(*compat_machine_to_phys_mapping);
+        l2_pgentry_t *pl2e = compat_idle_pg_table_l2 + l2_table_offset(off);
+        unsigned long rwva = RDWR_COMPAT_MPT_VIRT_START + off;
 
-        if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
+        if ( l2e_get_flags(*pl2e) & _PAGE_PRESENT )
             continue;
 
         for ( n = 0; n < CNT; ++n)
@@ -366,8 +360,7 @@ static int setup_compat_m2p_table(struct
         /* Fill with INVALID_M2P_ENTRY. */
         memset((void *)rwva, 0xFF, 1UL << L2_PAGETABLE_SHIFT);
         /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */
-        l2e_write(&l2_ro_mpt[l2_table_offset(va)],
-                  l2e_from_mfn(mfn, _PAGE_PSE|_PAGE_PRESENT));
+        l2e_write(pl2e, l2e_from_mfn(mfn, _PAGE_PSE|_PAGE_PRESENT));
     }
 #undef CNT
 #undef MFN
@@ -616,7 +609,6 @@ void __init paging_init(void)
         goto nomem;
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
-    l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -215,11 +215,8 @@ extern unsigned char boot_edid_info[128]
 /* Slot 261: compatibility machine-to-phys conversion table (1GB). */
 #define RDWR_COMPAT_MPT_VIRT_START VMAP_VIRT_END
 #define RDWR_COMPAT_MPT_VIRT_END (RDWR_COMPAT_MPT_VIRT_START + GB(1))
-/* Slot 261: high read-only compat machine-to-phys conversion table (1GB). */
-#define HIRO_COMPAT_MPT_VIRT_START RDWR_COMPAT_MPT_VIRT_END
-#define HIRO_COMPAT_MPT_VIRT_END (HIRO_COMPAT_MPT_VIRT_START + GB(1))
 /* Slot 261: xen text, static data, bss, per-cpu stubs and executable fixmap (1GB). */
-#define XEN_VIRT_START          (HIRO_COMPAT_MPT_VIRT_END)
+#define XEN_VIRT_START          RDWR_COMPAT_MPT_VIRT_END
 #define XEN_VIRT_END            (XEN_VIRT_START + GB(1))
 
 #ifndef CONFIG_BIGMEM




* Re: [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping
  2020-04-08 13:36 ` [PATCH v2 4/5] x86_64/mm: map and unmap page tables in destroy_compat_m2p_mapping Hongyan Xia
  2020-04-15  8:21   ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
@ 2020-04-15  8:25   ` Jan Beulich
  1 sibling, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  8:25 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 08.04.2020 15:36, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> 
> ---
> Changed in v2:
> - there is pretty much no point in keeping l3_ro_mpt mapped; just fetch
>   the l3e instead, which also cleans up the code.

Even better, there's no need to do any mapping here at all. I've
posted a pair of patches, the first of which can be seen as a
suggested replacement for the one here.

Jan



* Re: [PATCH v2 5/5] x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  2020-04-08 13:36 ` [PATCH v2 5/5] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
@ 2020-04-15  8:32   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-15  8:32 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 08.04.2020 15:36, Hongyan Xia wrote:
> @@ -290,26 +291,30 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
>              continue;
>          }
>  
> -        l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
> -        if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
> +        l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]) +
> +                    l2_table_offset(va);

Either the name of the variable should be changed or you shouldn't
add in l2_table_offset(va) here. Personally I'd go with renaming
to just pl2e.
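
I.e. (a sketch of the suggested shape, not the final code):

    l2_pgentry_t *pl2e = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]) +
                         l2_table_offset(va);

    if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
        ...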

Jan



* Re: [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling
  2020-04-15  8:23     ` [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling Jan Beulich
@ 2020-04-15  9:59       ` Hongyan Xia
  2020-04-15 10:34         ` Jan Beulich
  2020-04-15 11:16       ` Wei Liu
  1 sibling, 1 reply; 22+ messages in thread
From: Hongyan Xia @ 2020-04-15  9:59 UTC (permalink / raw)
  To: Jan Beulich, xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Roger Pau Monné

On Wed, 2020-04-15 at 10:23 +0200, Jan Beulich wrote:
> We have a global variable where the necessary L2 table is recorded;
> no
> need to inspect L4 and L3 tables (and this way a few less places will
> eventually need adjustment when we want to support 5-level page
> tables).
> Also avoid setting up the L3 entry, as the address range never gets
> used
> anyway (it'll be dropped altogether in a subsequent patch).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -219,9 +219,7 @@ static void destroy_compat_m2p_mapping(s
>  {
>      unsigned long i, va, rwva, pt_pfn;
>      unsigned long smap = info->spfn, emap = info->spfn;
> -
> -    l3_pgentry_t *l3_ro_mpt;
> -    l2_pgentry_t *l2_ro_mpt;
> +    l2_pgentry_t *l2_ro_mpt = compat_idle_pg_table_l2;
>  
>      if ( smap > ((RDWR_COMPAT_MPT_VIRT_END -
> RDWR_COMPAT_MPT_VIRT_START) >> 2) )
>          return;
> @@ -229,12 +227,6 @@ static void destroy_compat_m2p_mapping(s
>      if ( emap > ((RDWR_COMPAT_MPT_VIRT_END -
> RDWR_COMPAT_MPT_VIRT_START) >> 2) )
>          emap = (RDWR_COMPAT_MPT_VIRT_END -
> RDWR_COMPAT_MPT_VIRT_START) >> 2;
>  
> -    l3_ro_mpt =
> l4e_to_l3e(idle_pg_table[l4_table_offset(HIRO_COMPAT_MPT_VIRT_START)]
> );
> -
> -    ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_V
> IRT_START)]) & _PAGE_PRESENT);
> -
> -    l2_ro_mpt =
> l3e_to_l2e(l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)]);
> -
>      for ( i = smap; i < emap; )
>      {
>          va = HIRO_COMPAT_MPT_VIRT_START +
> @@ -323,7 +315,6 @@ static int setup_compat_m2p_table(struct
>      unsigned long i, va, smap, emap, rwva, epfn = info->epfn;
>      mfn_t mfn;
>      unsigned int n;
> -    l3_pgentry_t *l3_ro_mpt = NULL;
>      l2_pgentry_t *l2_ro_mpt = NULL;
>      int err = 0;
>  
> @@ -342,13 +333,7 @@ static int setup_compat_m2p_table(struct
>      emap = ( (epfn + ((1UL << (L2_PAGETABLE_SHIFT - 2)) - 1 )) &
>                  ~((1UL << (L2_PAGETABLE_SHIFT - 2)) - 1) );
>  
> -    va = HIRO_COMPAT_MPT_VIRT_START +
> -         smap * sizeof(*compat_machine_to_phys_mapping);
> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
> -
> -    ASSERT(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
> _PAGE_PRESENT);
> -
> -    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
> +    l2_ro_mpt = compat_idle_pg_table_l2;
>  
>  #define MFN(x) (((x) << L2_PAGETABLE_SHIFT) / sizeof(unsigned int))
>  #define CNT ((sizeof(*frame_table) & -sizeof(*frame_table)) / \
> @@ -627,16 +612,10 @@ void __init paging_init(void)
>  #undef MFN
>  
>      /* Create user-accessible L2 directory to map the MPT for compat
> guests. */
> -    BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
> -                 l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
> -        HIRO_COMPAT_MPT_VIRT_START)]);
>      if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
>          goto nomem;
>      compat_idle_pg_table_l2 = l2_ro_mpt;
>      clear_page(l2_ro_mpt);
> -    l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)
> ],
> -              l3e_from_paddr(__pa(l2_ro_mpt),
> __PAGE_HYPERVISOR_RO));
>      l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
>      /* Allocate and map the compatibility mode machine-to-phys
> table. */
>      mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));

For the code around here, I am wondering if there is a reason to put
it in this patch. If we bisect, we can end up at a commit where
config.h says the compat address range is still there when it is
actually gone, so this probably belongs in the 2nd patch.

Apart from that,
Reviewed-by: Hongyan Xia <hongyxia@amazon.com>

I would like to drop the relevant map/unmap patches and replace them
with the new clean-up ones (and place them at the beginning of the
series), if there is no objection to that.

Hongyan




* Re: [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling
  2020-04-15  9:59       ` Hongyan Xia
@ 2020-04-15 10:34         ` Jan Beulich
  2020-04-15 10:50           ` Hongyan Xia
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2020-04-15 10:34 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 15.04.2020 11:59, Hongyan Xia wrote:
> On Wed, 2020-04-15 at 10:23 +0200, Jan Beulich wrote:
>> @@ -627,16 +612,10 @@ void __init paging_init(void)
>>  #undef MFN
>>  
>>      /* Create user-accessible L2 directory to map the MPT for compat
>> guests. */
>> -    BUILD_BUG_ON(l4_table_offset(RDWR_MPT_VIRT_START) !=
>> -                 l4_table_offset(HIRO_COMPAT_MPT_VIRT_START));
>> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(
>> -        HIRO_COMPAT_MPT_VIRT_START)]);
>>      if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
>>          goto nomem;
>>      compat_idle_pg_table_l2 = l2_ro_mpt;
>>      clear_page(l2_ro_mpt);
>> -    l3e_write(&l3_ro_mpt[l3_table_offset(HIRO_COMPAT_MPT_VIRT_START)
>> ],
>> -              l3e_from_paddr(__pa(l2_ro_mpt),
>> __PAGE_HYPERVISOR_RO));
>>      l2_ro_mpt += l2_table_offset(HIRO_COMPAT_MPT_VIRT_START);
>>      /* Allocate and map the compatibility mode machine-to-phys
>> table. */
>>      mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
> 
> For the code around here, I am wondering if there is a reason to put
> it in this patch. If we bisect, we can end up at a commit where
> config.h says the compat address range is still there when it is
> actually gone, so this probably belongs in the 2nd patch.

It could be done either way, I guess. Note though how patch 2's
description starts with "Now that we don't properly hook things
up into the page tables anymore". I.e. after this patch the
address range continues to exist solely for calculation purposes.

> Apart from that,
> Reviewed-by: Hongyan Xia <hongyxia@amazon.com>

Thanks.

> I would like to drop the relevant map/unmap patches and replace them
> with the new clean-up ones (and place them at the beginning of the
> series), if there is no objection to that.

Depending on turnaround, I'd much rather see this go in before
you re-post. But if you feel like making it part of your series,
go ahead.

Jan



* Re: [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling
  2020-04-15 10:34         ` Jan Beulich
@ 2020-04-15 10:50           ` Hongyan Xia
  0 siblings, 0 replies; 22+ messages in thread
From: Hongyan Xia @ 2020-04-15 10:50 UTC (permalink / raw)
  To: Jan Beulich
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On Wed, 2020-04-15 at 12:34 +0200, Jan Beulich wrote:
> On 15.04.2020 11:59, Hongyan Xia wrote:
> > ...
> > I would like to drop the relevant map/unmap patches and replace them
> > with the new clean-up ones (and place them at the beginning of the
> > series), if there is no objection to that.
> 
> Depending on turnaround, I'd much rather see this go in before
> you re-post. But if you feel like making it part of your series,
> go ahead.

I actually also very much prefer to see those clean-up patches go in
before I post the next revision, so please go ahead.

I will then post the next revision minus patch 4 to avoid conflicts.

Hongyan




* Re: [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling
  2020-04-15  8:23     ` [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling Jan Beulich
  2020-04-15  9:59       ` Hongyan Xia
@ 2020-04-15 11:16       ` Wei Liu
  1 sibling, 0 replies; 22+ messages in thread
From: Wei Liu @ 2020-04-15 11:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Hongyan Xia, julien, Wei Liu, Andrew Cooper, xen-devel,
	Roger Pau Monné

On Wed, Apr 15, 2020 at 10:23:31AM +0200, Jan Beulich wrote:
> We have a global variable where the necessary L2 table is recorded; no
> need to inspect L4 and L3 tables (and this way a few less places will
> eventually need adjustment when we want to support 5-level page tables).
> Also avoid setting up the L3 entry, as the address range never gets used
> anyway (it'll be dropped altogether in a subsequent patch).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>



* Re: [PATCH 2/2] x86: drop high compat r/o M2P table address range
  2020-04-15  8:23     ` [PATCH 2/2] x86: drop high compat r/o M2P table address range Jan Beulich
@ 2020-04-27 19:52       ` Wei Liu
  0 siblings, 0 replies; 22+ messages in thread
From: Wei Liu @ 2020-04-27 19:52 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Hongyan Xia, julien, Wei Liu, Andrew Cooper, xen-devel,
	Roger Pau Monné

On Wed, Apr 15, 2020 at 10:23:58AM +0200, Jan Beulich wrote:
> Now that we don't properly hook things up into the page tables anymore
> we also don't need to set aside an address range. Drop it, using
> compat_idle_pg_table_l2[] simply (explicitly) from slot 0.
> 
> While doing the re-arrangement, which is accompanied by the dropping or
> replacing of some local variables, restrict the scopes of some further
> ones at the same time.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>

> ---
> TBD: With the changed usage perhaps we want to also rename
>      compat_idle_pg_table_l2[] (to e.g. compat_idle_l2_entries[])?

No opinion on this.



* Re: [PATCH 0/2] x86: high compat r/o M2P table handling adjustments
  2020-04-15  8:21   ` [PATCH 0/2] x86: high compat r/o M2P table handling adjustments Jan Beulich
  2020-04-15  8:23     ` [PATCH 1/2] x86: drop unnecessary page table walking in compat r/o M2P handling Jan Beulich
  2020-04-15  8:23     ` [PATCH 2/2] x86: drop high compat r/o M2P table address range Jan Beulich
@ 2020-04-28  6:14     ` Jan Beulich
  2 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2020-04-28  6:14 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Hongyan Xia, xen-devel, julien, Wei Liu, Roger Pau Monné

On 15.04.2020 10:21, Jan Beulich wrote:
> While looking at "x86_64/mm: map and unmap page tables in
> destroy_compat_m2p_mapping" it occurred to me that the mappings
> changed there can be dropped altogether, as can the linear
> address range used for this. Note that both patches have only
> been lightly tested so far; I guess the 2nd one in particular
> may still have issues.
> 
> 1: x86: drop unnecessary page table walking in compat r/o M2P handling
> 2: x86: drop high compat r/o M2P table address range

Just fyi - with the reviews I've got I'm intending to commit
this later this week, unless I hear otherwise soon.

Jan


