* [PATCH v4 0/4] use new API for Xen page tables
@ 2020-04-15 18:37 Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 1/4] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Hongyan Xia @ 2020-04-15 18:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Hongyan Xia <hongyxia@amazon.com>

This small series rewrites a handful of functions to use the new API
for mapping and unmapping PTEs. Each patch is independent.

Apart from mapping and unmapping page tables, no other functional change
is intended.

---
Changed in v4:
- use _ suffix instead of prefix in macros.
- use normal unmap_domain_page() for a variable right before end-of-scope.

Changed in v3:
- address all comments in v2.
- drop patch 4, since other clean-ups will make it unnecessary.

Changed in v2:
- I kept UNMAP_DOMAIN_PAGE() for now in v2, but if people feel it is
  overkill in some cases and unmap_domain_page() should be used, I am
  okay with that and can make the change.
- code cleanup and style fixes.
- unmap as early as possible.

Wei Liu (4):
  x86/shim: map and unmap page tables in replace_va_mapping
  x86_64/mm: map and unmap page tables in m2p_mapped
  x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  x86_64/mm: map and unmap page tables in destroy_m2p_mapping

 xen/arch/x86/pv/shim.c     |  9 ++++----
 xen/arch/x86/x86_64/mm.c   | 44 +++++++++++++++++++++-----------------
 xen/include/asm-x86/page.h | 19 ++++++++++++++++
 3 files changed, 48 insertions(+), 24 deletions(-)

-- 
2.24.1.AMZN




* [PATCH v4 1/4] x86/shim: map and unmap page tables in replace_va_mapping
  2020-04-15 18:37 [PATCH v4 0/4] use new API for Xen page tables Hongyan Xia
@ 2020-04-15 18:37 ` Hongyan Xia
  2020-04-16  7:08   ` Jan Beulich
  2020-04-15 18:37 ` [PATCH v4 2/4] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Hongyan Xia @ 2020-04-15 18:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Also, introduce lYe_from_lXe() macros which do not rely on the direct
map when walking page tables. Unfortunately, they cannot be inline
functions due to the header dependency on domain_page.h, so keep them as
macros just like map_lYt_from_lXe().

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v4:
- use _ suffixes instead of prefixes.

Changed in v3:
- use unmap_domain_page() instead of the macro in several places.
- also introduce l1e_from_l2e().
- add _ prefix in macros to avoid aliasing.

Changed in v2:
- instead of map, map, map, read/write, unmap, unmap, unmap, do map,
  read PTE, unmap for each level.
- use lYe_from_lXe() macros and lift them from a later patch to this
  patch.
- const qualify pointers in new macros.
---
 xen/arch/x86/pv/shim.c     |  9 +++++----
 xen/include/asm-x86/page.h | 19 +++++++++++++++++++
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index ed2ece8a8a..31264582cc 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -168,16 +168,17 @@ const struct platform_bad_page *__init pv_shim_reserved_pages(unsigned int *size
 static void __init replace_va_mapping(struct domain *d, l4_pgentry_t *l4start,
                                       unsigned long va, mfn_t mfn)
 {
-    l4_pgentry_t *pl4e = l4start + l4_table_offset(va);
-    l3_pgentry_t *pl3e = l4e_to_l3e(*pl4e) + l3_table_offset(va);
-    l2_pgentry_t *pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(va);
-    l1_pgentry_t *pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(va);
+    l4_pgentry_t l4e = l4start[l4_table_offset(va)];
+    l3_pgentry_t l3e = l3e_from_l4e(l4e, l3_table_offset(va));
+    l2_pgentry_t l2e = l2e_from_l3e(l3e, l2_table_offset(va));
+    l1_pgentry_t *pl1e = map_l1t_from_l2e(l2e) + l1_table_offset(va);
     struct page_info *page = mfn_to_page(l1e_get_mfn(*pl1e));
 
     put_page_and_type(page);
 
     *pl1e = l1e_from_mfn(mfn, (!is_pv_32bit_domain(d) ? L1_PROT
                                                       : COMPAT_L1_PROT));
+    unmap_domain_page(pl1e);
 }
 
 static void evtchn_reserve(struct domain *d, unsigned int port)
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index eb73a0fc23..5acf3d3d5a 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -197,6 +197,25 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
 #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
 
+/* Unlike lYe_to_lXe(), lXe_from_lYe() do not rely on the direct map. */
+#define l1e_from_l2e(l2e_, offset_) ({                      \
+        const l1_pgentry_t *l1t_ = map_l1t_from_l2e(l2e_);  \
+        l1_pgentry_t l1e_ = l1t_[offset_];                  \
+        unmap_domain_page(l1t_);                            \
+        l1e_; })
+
+#define l2e_from_l3e(l3e_, offset_) ({                      \
+        const l2_pgentry_t *l2t_ = map_l2t_from_l3e(l3e_);  \
+        l2_pgentry_t l2e_ = l2t_[offset_];                  \
+        unmap_domain_page(l2t_);                            \
+        l2e_; })
+
+#define l3e_from_l4e(l4e_, offset_) ({                      \
+        const l3_pgentry_t *l3t_ = map_l3t_from_l4e(l4e_);  \
+        l3_pgentry_t l3e_ = l3t_[offset_];                  \
+        unmap_domain_page(l3t_);                            \
+        l3e_; })
+
 /* Given a virtual address, get an entry offset into a page table. */
 #define l1_table_offset(a)         \
     (((a) >> L1_PAGETABLE_SHIFT) & (L1_PAGETABLE_ENTRIES - 1))
-- 
2.24.1.AMZN




* [PATCH v4 2/4] x86_64/mm: map and unmap page tables in m2p_mapped
  2020-04-15 18:37 [PATCH v4 0/4] use new API for Xen page tables Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 1/4] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
@ 2020-04-15 18:37 ` Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 3/4] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 4/4] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
  3 siblings, 0 replies; 7+ messages in thread
From: Hongyan Xia @ 2020-04-15 18:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v3:
- rename l3e_ro_mpt and l2e_ro_mpt, just call them l3e and l2e.

Changed in v2:
- avoid adding goto labels, simply get the PTE and unmap quickly.
- code style fixes.
---
 xen/arch/x86/x86_64/mm.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cee836ec37..41755ded26 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -129,14 +129,13 @@ static mfn_t alloc_hotadd_mfn(struct mem_hotadd_info *info)
 static int m2p_mapped(unsigned long spfn)
 {
     unsigned long va;
-    l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt;
+    l3_pgentry_t l3e;
+    l2_pgentry_t l2e;
 
     va = RO_MPT_VIRT_START + spfn * sizeof(*machine_to_phys_mapping);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+    l3e = l3e_from_l4e(idle_pg_table[l4_table_offset(va)], l3_table_offset(va));
 
-    switch ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
-             (_PAGE_PRESENT |_PAGE_PSE))
+    switch ( l3e_get_flags(l3e) & (_PAGE_PRESENT | _PAGE_PSE) )
     {
         case _PAGE_PSE|_PAGE_PRESENT:
             return M2P_1G_MAPPED;
@@ -146,9 +145,9 @@ static int m2p_mapped(unsigned long spfn)
         default:
             return M2P_NO_MAPPED;
     }
-    l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
+    l2e = l2e_from_l3e(l3e, l2_table_offset(va));
 
-    if (l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT)
+    if ( l2e_get_flags(l2e) & _PAGE_PRESENT )
         return M2P_2M_MAPPED;
 
     return M2P_NO_MAPPED;
-- 
2.24.1.AMZN




* [PATCH v4 3/4] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table
  2020-04-15 18:37 [PATCH v4 0/4] use new API for Xen page tables Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 1/4] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 2/4] x86_64/mm: map and unmap page tables in m2p_mapped Hongyan Xia
@ 2020-04-15 18:37 ` Hongyan Xia
  2020-04-15 18:37 ` [PATCH v4 4/4] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
  3 siblings, 0 replies; 7+ messages in thread
From: Hongyan Xia @ 2020-04-15 18:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Fetch each lYe by mapping, reading and unmapping the table the lXe
points at instead of going through the direct map, which is now done
via the lYe_from_lXe() helpers.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v2:
- the introduction of the macros is now lifted to a previous patch.
---
 xen/arch/x86/x86_64/mm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 41755ded26..cfaeae84e9 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -166,14 +166,14 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v += n << PAGE_SHIFT )
     {
         n = L2_PAGETABLE_ENTRIES * L1_PAGETABLE_ENTRIES;
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3e = l3e_from_l4e(idle_pg_table[l4_table_offset(v)],
+                           l3_table_offset(v));
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
         if ( !(l3e_get_flags(l3e) & _PAGE_PSE) )
         {
             n = L1_PAGETABLE_ENTRIES;
-            l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+            l2e = l2e_from_l3e(l3e, l2_table_offset(v));
             if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
                 continue;
             m2p_start_mfn = l2e_get_mfn(l2e);
@@ -194,11 +194,11 @@ static int share_hotadd_m2p_table(struct mem_hotadd_info *info)
           v != RDWR_COMPAT_MPT_VIRT_END;
           v += 1 << L2_PAGETABLE_SHIFT )
     {
-        l3e = l4e_to_l3e(idle_pg_table[l4_table_offset(v)])[
-            l3_table_offset(v)];
+        l3e = l3e_from_l4e(idle_pg_table[l4_table_offset(v)],
+                           l3_table_offset(v));
         if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
             continue;
-        l2e = l3e_to_l2e(l3e)[l2_table_offset(v)];
+        l2e = l2e_from_l3e(l3e, l2_table_offset(v));
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
             continue;
         m2p_start_mfn = l2e_get_mfn(l2e);
-- 
2.24.1.AMZN




* [PATCH v4 4/4] x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  2020-04-15 18:37 [PATCH v4 0/4] use new API for Xen page tables Hongyan Xia
                   ` (2 preceding siblings ...)
  2020-04-15 18:37 ` [PATCH v4 3/4] x86_64/mm: map and unmap page tables in share_hotadd_m2p_table Hongyan Xia
@ 2020-04-15 18:37 ` Hongyan Xia
  2020-04-16  7:09   ` Jan Beulich
  3 siblings, 1 reply; 7+ messages in thread
From: Hongyan Xia @ 2020-04-15 18:37 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, julien, Wei Liu, Jan Beulich, Roger Pau Monné

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v4:
- switch to normal unmap_domain_page() for a variable right before
  end-of-scope.

Changed in v3:
- rename l2_ro_mpt into pl2e to avoid confusion.

Changed in v2:
- no point in re-mapping l2t because it is exactly the same as
  l2_ro_mpt.
- point l2_ro_mpt to the entry instead of doing l2_table_offset() all
  the time.
---
 xen/arch/x86/x86_64/mm.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cfaeae84e9..e85ef449f3 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -263,7 +263,8 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     unsigned long i, va, rwva;
     unsigned long smap = info->spfn, emap = info->epfn;
 
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_l3t_from_l4e(
+                    idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
 
     /*
      * No need to clean m2p structure existing before the hotplug
@@ -271,7 +272,7 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
     for (i = smap; i < emap;)
     {
         unsigned long pt_pfn;
-        l2_pgentry_t *l2_ro_mpt;
+        l2_pgentry_t *pl2e;
 
         va = RO_MPT_VIRT_START + i * sizeof(*machine_to_phys_mapping);
         rwva = RDWR_MPT_VIRT_START + i * sizeof(*machine_to_phys_mapping);
@@ -285,26 +286,30 @@ static void destroy_m2p_mapping(struct mem_hotadd_info *info)
             continue;
         }
 
-        l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
-        if (!(l2e_get_flags(l2_ro_mpt[l2_table_offset(va)]) & _PAGE_PRESENT))
+        pl2e = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]) +
+                    l2_table_offset(va);
+        if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
             i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
                     (1UL << (L2_PAGETABLE_SHIFT - 3)) ;
+            UNMAP_DOMAIN_PAGE(pl2e);
             continue;
         }
 
-        pt_pfn = l2e_get_pfn(l2_ro_mpt[l2_table_offset(va)]);
+        pt_pfn = l2e_get_pfn(*pl2e);
         if ( hotadd_mem_valid(pt_pfn, info) )
         {
             destroy_xen_mappings(rwva, rwva + (1UL << L2_PAGETABLE_SHIFT));
 
-            l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]);
-            l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_empty());
+            l2e_write(pl2e, l2e_empty());
         }
         i = ( i & ~((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1)) +
               (1UL << (L2_PAGETABLE_SHIFT - 3));
+        unmap_domain_page(pl2e);
     }
 
+    UNMAP_DOMAIN_PAGE(l3_ro_mpt);
+
     destroy_compat_m2p_mapping(info);
 
     /* Brute-Force flush all TLB */
-- 
2.24.1.AMZN




* Re: [PATCH v4 1/4] x86/shim: map and unmap page tables in replace_va_mapping
  2020-04-15 18:37 ` [PATCH v4 1/4] x86/shim: map and unmap page tables in replace_va_mapping Hongyan Xia
@ 2020-04-16  7:08   ` Jan Beulich
  0 siblings, 0 replies; 7+ messages in thread
From: Jan Beulich @ 2020-04-16  7:08 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 15.04.2020 20:37, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Also, introduce lYe_from_lXe() macros which do not rely on the direct
> map when walking page tables. Unfortunately, they cannot be inline
> functions due to the header dependency on domain_page.h, so keep them as
> macros just like map_lYt_from_lXe().
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v4 4/4] x86_64/mm: map and unmap page tables in destroy_m2p_mapping
  2020-04-15 18:37 ` [PATCH v4 4/4] x86_64/mm: map and unmap page tables in destroy_m2p_mapping Hongyan Xia
@ 2020-04-16  7:09   ` Jan Beulich
  0 siblings, 0 replies; 7+ messages in thread
From: Jan Beulich @ 2020-04-16  7:09 UTC (permalink / raw)
  To: Hongyan Xia
  Cc: xen-devel, Roger Pau Monné, julien, Wei Liu, Andrew Cooper

On 15.04.2020 20:37, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



