* [PATCH v4 0/4] pvh/dom0/shadow/amd fixes
@ 2019-02-18 17:27 Roger Pau Monne
  2019-02-18 17:27 ` [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
                   ` (3 more replies)
  0 siblings, 4 replies; 18+ messages in thread
From: Roger Pau Monne @ 2019-02-18 17:27 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne

The remaining patches contain changes to the p2m code that could
affect HVM guests. Note that without these changes a PVH dom0 running on
AMD hardware will be unable to create guests. Overall the patches are
IMO a nice cleanup of the handling of the p2m_ioreq_server and
p2m_map_foreign types.

The series can also be found at:

git://xenbits.xen.org/people/royger/xen.git fixes-v4

Thanks, Roger.

Roger Pau Monne (4):
  x86/mm: split p2m ioreq server pages special handling into helper
  p2m: change write_p2m_entry to return an error code
  x86/mm: handle foreign mappings in p2m_entry_modify
  npt/shadow: allow getting foreign page table entries

 xen/arch/x86/mm/hap/hap.c        |  15 ++++-
 xen/arch/x86/mm/hap/nested_hap.c |   4 +-
 xen/arch/x86/mm/p2m-ept.c        | 107 ++++++-------------------------
 xen/arch/x86/mm/p2m-pt.c         |  92 +++++++++++++-------------
 xen/arch/x86/mm/paging.c         |  12 ++--
 xen/arch/x86/mm/shadow/common.c  |   8 ++-
 xen/arch/x86/mm/shadow/none.c    |   7 +-
 xen/arch/x86/mm/shadow/private.h |   6 +-
 xen/include/asm-x86/p2m.h        |  62 +++++++++++++++++-
 xen/include/asm-x86/paging.h     |   8 +--
 10 files changed, 171 insertions(+), 150 deletions(-)

-- 
2.17.2 (Apple Git-113)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper
  2019-02-18 17:27 [PATCH v4 0/4] pvh/dom0/shadow/amd fixes Roger Pau Monne
@ 2019-02-18 17:27 ` Roger Pau Monne
  2019-02-19  5:47   ` Tian, Kevin
  2019-02-19  8:49   ` Jan Beulich
  2019-02-18 17:27 ` [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code Roger Pau Monne
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 18+ messages in thread
From: Roger Pau Monne @ 2019-02-18 17:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper,
	Paul Durrant, Jun Nakajima, Roger Pau Monne

So that it can be shared by the EPT, NPT and shadow code, instead of
being duplicated in each of them.

No change in functionality intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
---
Changes since v1:
 - Remove unused p2mt_old from p2m_pt_set_entry.
---
 xen/arch/x86/mm/hap/hap.c       |  3 ++
 xen/arch/x86/mm/p2m-ept.c       | 55 +++++++++++----------------------
 xen/arch/x86/mm/p2m-pt.c        | 24 --------------
 xen/arch/x86/mm/shadow/common.c |  3 ++
 xen/include/asm-x86/p2m.h       | 32 +++++++++++++++++++
 5 files changed, 56 insertions(+), 61 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d651b94c3..dc46d5e14f 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -734,6 +734,9 @@ hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
             && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
     }
 
+    p2m_entry_modify(p2m_get_hostp2m(d), p2m_flags_to_type(l1e_get_flags(new)),
+                     p2m_flags_to_type(old_flags), level);
+
     safe_write_pte(p, new);
     if ( old_flags & _PAGE_PRESENT )
         flush_tlb_mask(d->dirty_cpumask);
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index bb562607f7..0ece6608cb 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -46,7 +46,8 @@ static inline bool_t is_epte_valid(ept_entry_t *e)
 }
 
 /* returns : 0 for success, -errno otherwise */
-static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
+static int atomic_write_ept_entry(struct p2m_domain *p2m,
+                                  ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
     int rc;
@@ -89,6 +90,8 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
     if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
         oldmfn = entryptr->mfn;
 
+    p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt, level);
+
     write_atomic(&entryptr->epte, new.epte);
 
     if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
@@ -390,7 +393,8 @@ static int ept_next_level(struct p2m_domain *p2m, bool_t read_only,
  * present entries in the given page table, optionally marking the entries
  * also for their subtrees needing P2M type re-calculation.
  */
-static bool_t ept_invalidate_emt(mfn_t mfn, bool_t recalc, int level)
+static bool_t ept_invalidate_emt(struct p2m_domain *p2m, mfn_t mfn,
+                                 bool_t recalc, int level)
 {
     int rc;
     ept_entry_t *epte = map_domain_page(mfn);
@@ -408,7 +412,7 @@ static bool_t ept_invalidate_emt(mfn_t mfn, bool_t recalc, int level)
         e.emt = MTRR_NUM_TYPES;
         if ( recalc )
             e.recalc = 1;
-        rc = atomic_write_ept_entry(&epte[i], e, level);
+        rc = atomic_write_ept_entry(p2m, &epte[i], e, level);
         ASSERT(rc == 0);
         changed = 1;
     }
@@ -459,7 +463,7 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
             rc = -ENOMEM;
             goto out;
         }
-        wrc = atomic_write_ept_entry(&table[index], split_ept_entry, i);
+        wrc = atomic_write_ept_entry(p2m, &table[index], split_ept_entry, i);
         ASSERT(wrc == 0);
 
         for ( ; i > target; --i )
@@ -479,7 +483,7 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
         {
             e.emt = MTRR_NUM_TYPES;
             e.recalc = 1;
-            wrc = atomic_write_ept_entry(&table[index], e, target);
+            wrc = atomic_write_ept_entry(p2m, &table[index], e, target);
             ASSERT(wrc == 0);
             rc = 1;
         }
@@ -549,17 +553,11 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                     nt = p2m_recalc_type(e.recalc, e.sa_p2mt, p2m, gfn + i);
                     if ( nt != e.sa_p2mt )
                     {
-                        if ( e.sa_p2mt == p2m_ioreq_server )
-                        {
-                            ASSERT(p2m->ioreq.entry_count > 0);
-                            p2m->ioreq.entry_count--;
-                        }
-
                         e.sa_p2mt = nt;
                         ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
                     }
                     e.recalc = 0;
-                    wrc = atomic_write_ept_entry(&epte[i], e, level);
+                    wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
                     ASSERT(wrc == 0);
                 }
             }
@@ -595,7 +593,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                 {
                     if ( ept_split_super_page(p2m, &e, level, level - 1) )
                     {
-                        wrc = atomic_write_ept_entry(&epte[i], e, level);
+                        wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
                         ASSERT(wrc == 0);
                         unmap_domain_page(epte);
                         mfn = e.mfn;
@@ -610,7 +608,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                 e.recalc = 0;
                 if ( recalc && p2m_is_changeable(e.sa_p2mt) )
                     ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
-                wrc = atomic_write_ept_entry(&epte[i], e, level);
+                wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
                 ASSERT(wrc == 0);
             }
 
@@ -621,11 +619,11 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
         if ( e.emt == MTRR_NUM_TYPES )
         {
             ASSERT(is_epte_present(&e));
-            ept_invalidate_emt(_mfn(e.mfn), e.recalc, level);
+            ept_invalidate_emt(p2m, _mfn(e.mfn), e.recalc, level);
             smp_wmb();
             e.emt = 0;
             e.recalc = 0;
-            wrc = atomic_write_ept_entry(&epte[i], e, level);
+            wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
             ASSERT(wrc == 0);
             unmap_domain_page(epte);
             rc = 1;
@@ -786,7 +784,7 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
         /* now install the newly split ept sub-tree */
         /* NB: please make sure domian is paused and no in-fly VT-d DMA. */
-        rc = atomic_write_ept_entry(ept_entry, split_ept_entry, i);
+        rc = atomic_write_ept_entry(p2m, ept_entry, split_ept_entry, i);
         ASSERT(rc == 0);
 
         /* then move to the level we want to make real changes */
@@ -833,24 +831,7 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         new_entry.suppress_ve = is_epte_valid(&old_entry) ?
                                     old_entry.suppress_ve : 1;
 
-    /*
-     * p2m_ioreq_server is only used for 4K pages, so the
-     * count is only done on ept page table entries.
-     */
-    if ( p2mt == p2m_ioreq_server )
-    {
-        ASSERT(i == 0);
-        p2m->ioreq.entry_count++;
-    }
-
-    if ( ept_entry->sa_p2mt == p2m_ioreq_server )
-    {
-        ASSERT(i == 0);
-        ASSERT(p2m->ioreq.entry_count > 0);
-        p2m->ioreq.entry_count--;
-    }
-
-    rc = atomic_write_ept_entry(ept_entry, new_entry, target);
+    rc = atomic_write_ept_entry(p2m, ept_entry, new_entry, target);
     if ( unlikely(rc) )
         old_entry.epte = 0;
     else
@@ -1070,7 +1051,7 @@ static void ept_change_entry_type_global(struct p2m_domain *p2m,
     if ( !mfn )
         return;
 
-    if ( ept_invalidate_emt(_mfn(mfn), 1, p2m->ept.wl) )
+    if ( ept_invalidate_emt(p2m, _mfn(mfn), 1, p2m->ept.wl) )
         ept_sync_domain(p2m);
 }
 
@@ -1128,7 +1109,7 @@ static void ept_memory_type_changed(struct p2m_domain *p2m)
     if ( !mfn )
         return;
 
-    if ( ept_invalidate_emt(_mfn(mfn), 0, p2m->ept.wl) )
+    if ( ept_invalidate_emt(p2m, _mfn(mfn), 0, p2m->ept.wl) )
         ept_sync_domain(p2m);
 }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 52eaa24b18..04e9d81cf6 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -436,13 +436,6 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                 flags |= _PAGE_PSE;
             }
 
-            if ( ot == p2m_ioreq_server )
-            {
-                ASSERT(p2m->ioreq.entry_count > 0);
-                ASSERT(level == 0);
-                p2m->ioreq.entry_count--;
-            }
-
             e = l1e_from_pfn(mfn, flags);
             p2m_add_iommu_flags(&e, level,
                                 (nt == p2m_ram_rw)
@@ -616,8 +609,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( page_order == PAGE_ORDER_4K )
     {
-        p2m_type_t p2mt_old;
-
         rc = p2m_next_level(p2m, &table, &gfn_remainder, gfn,
                             L2_PAGETABLE_SHIFT - PAGE_SHIFT,
                             L2_PAGETABLE_ENTRIES, 1, 1);
@@ -641,21 +632,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
-        p2mt_old = p2m_flags_to_type(l1e_get_flags(*p2m_entry));
-
-        /*
-         * p2m_ioreq_server is only used for 4K pages, so
-         * the count is only done for level 1 entries.
-         */
-        if ( p2mt == p2m_ioreq_server )
-            p2m->ioreq.entry_count++;
-
-        if ( p2mt_old == p2m_ioreq_server )
-        {
-            ASSERT(p2m->ioreq.entry_count > 0);
-            p2m->ioreq.entry_count--;
-        }
-
         /* level 1 entry */
         p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 07840ff727..0576c3c2d2 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3188,6 +3188,9 @@ shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
     if ( likely(d->arch.paging.shadow.total_pages != 0) )
          sh_unshadow_for_p2m_change(d, gfn, p, new, level);
 
+    p2m_entry_modify(p2m_get_hostp2m(d), p2m_flags_to_type(l1e_get_flags(new)),
+                     p2m_flags_to_type(l1e_get_flags(*p)), level);
+
     /* Update the entry with new content */
     safe_write_pte(p, new);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2095076556..834d49d2d4 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -932,6 +932,38 @@ int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
 struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
                                               unsigned int *flags);
 
+static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
+                                    p2m_type_t ot, unsigned int level)
+{
+    if ( level != 1 || nt == ot )
+        return;
+
+    switch ( nt )
+    {
+    case p2m_ioreq_server:
+        /*
+         * p2m_ioreq_server is only used for 4K pages, so
+         * the count is only done for level 1 entries.
+         */
+        p2m->ioreq.entry_count++;
+        break;
+
+    default:
+        break;
+    }
+
+    switch ( ot )
+    {
+    case p2m_ioreq_server:
+        ASSERT(p2m->ioreq.entry_count > 0);
+        p2m->ioreq.entry_count--;
+        break;
+
+    default:
+        break;
+    }
+}
+
 #endif /* _XEN_ASM_X86_P2M_H */
 
 /*
-- 
2.17.2 (Apple Git-113)



* [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code
  2019-02-18 17:27 [PATCH v4 0/4] pvh/dom0/shadow/amd fixes Roger Pau Monne
  2019-02-18 17:27 ` [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
@ 2019-02-18 17:27 ` Roger Pau Monne
  2019-02-19  9:05   ` Jan Beulich
  2019-02-18 17:27 ` [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
  2019-02-18 17:27 ` [PATCH v4 4/4] npt/shadow: allow getting foreign page table entries Roger Pau Monne
  3 siblings, 1 reply; 18+ messages in thread
From: Roger Pau Monne @ 2019-02-18 17:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	Roger Pau Monne

This is in preparation for also changing p2m_entry_modify to return an
error code.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since v3:
 - Use ASSERTs instead of BUG_ONs to check the return code of
   write_p2m_entry in callers that don't support or expect
   write_p2m_entry to fail.

Changes since v2:
 - New in this version.
---
 xen/arch/x86/mm/hap/hap.c        |  4 ++-
 xen/arch/x86/mm/hap/nested_hap.c |  4 ++-
 xen/arch/x86/mm/p2m-pt.c         | 57 +++++++++++++++++++++++++-------
 xen/arch/x86/mm/paging.c         | 12 ++++---
 xen/arch/x86/mm/shadow/common.c  |  4 ++-
 xen/arch/x86/mm/shadow/none.c    |  7 ++--
 xen/arch/x86/mm/shadow/private.h |  6 ++--
 xen/include/asm-x86/p2m.h        |  4 +--
 xen/include/asm-x86/paging.h     |  8 ++---
 9 files changed, 75 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index dc46d5e14f..5b507376bc 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -708,7 +708,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     put_gfn(d, cr3_gfn);
 }
 
-static void
+static int
 hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
                     l1_pgentry_t new, unsigned int level)
 {
@@ -745,6 +745,8 @@ hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
 
     if ( flush_nestedp2m )
         p2m_flush_nestedp2m(d);
+
+    return 0;
 }
 
 static unsigned long hap_gva_to_gfn_real_mode(
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index d2a07a5c79..abe5958a52 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -71,7 +71,7 @@
 /*        NESTED VIRT P2M FUNCTIONS         */
 /********************************************/
 
-void
+int
 nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
 {
@@ -87,6 +87,8 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
         flush_tlb_mask(p2m->dirty_cpumask);
 
     paging_unlock(d);
+
+    return 0;
 }
 
 /********************************************/
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 04e9d81cf6..3a8dc04efc 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -184,6 +184,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
     l1_pgentry_t *p2m_entry, new_entry;
     void *next;
     unsigned int flags;
+    int rc;
 
     if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
                                       shift, max)) )
@@ -202,7 +203,13 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
 
         p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        if ( rc )
+        {
+            ASSERT_UNREACHABLE();
+            p2m_free_ptp(p2m, mfn_to_page(mfn));
+            return rc;
+        }
     }
     else if ( flags & _PAGE_PSE )
     {
@@ -250,14 +257,27 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
                                      flags);
-            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            if ( rc )
+            {
+                ASSERT_UNREACHABLE();
+                unmap_domain_page(l1_entry);
+                p2m_free_ptp(p2m, mfn_to_page(mfn));
+                return rc;
+            }
         }
 
         unmap_domain_page(l1_entry);
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
         p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        if ( rc )
+        {
+            ASSERT_UNREACHABLE();
+            p2m_free_ptp(p2m, mfn_to_page(mfn));
+            return rc;
+        }
     }
     else
         ASSERT(flags & _PAGE_PRESENT);
@@ -321,7 +341,8 @@ static int p2m_pt_set_recalc_range(struct p2m_domain *p2m,
             if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
             {
                 set_recalc(l1, e);
-                p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
+                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
+                ASSERT(!err);
             }
             first_gfn += 1UL << (i * PAGETABLE_ORDER);
         }
@@ -392,14 +413,16 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                      !needs_recalc(l1, ent) )
                 {
                     set_recalc(l1, ent);
-                    p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
-                                         ent, level);
+                    err = p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
+                                               ent, level);
+                    ASSERT(!err);
                 }
                 remainder -= 1UL << ((level - 1) * PAGETABLE_ORDER);
             }
             smp_wmb();
             clear_recalc(l1, e);
-            p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+            err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+            ASSERT(!err);
         }
         unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
     }
@@ -444,7 +467,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
         }
         else
             clear_recalc(l1, e);
-        p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+        err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+        ASSERT(!err);
     }
 
  out:
@@ -595,8 +619,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        if ( rc )
+            goto out;
     }
     else 
     {
@@ -633,8 +659,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
         /* level 1 entry */
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        if ( rc )
+            goto out;
     }
     else if ( page_order == PAGE_ORDER_2M )
     {
@@ -669,8 +697,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        if ( rc )
+            goto out;
     }
 
     /* Track the highest gfn for which we have ever had a valid mapping */
@@ -894,8 +924,11 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
         if ( (l1e_get_flags(e) & _PAGE_PRESENT) &&
              !needs_recalc(l1, e) )
         {
+            int rc;
+
             set_recalc(l1, e);
-            p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
+            rc = p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
+            ASSERT(!rc);
             ++changed;
         }
         gfn += 1UL << (L4_PAGETABLE_SHIFT - PAGE_SHIFT);
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index d5836eb688..6d46aa967b 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -932,18 +932,22 @@ void paging_update_nestedmode(struct vcpu *v)
 }
 #endif
 
-void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level)
+int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level)
 {
     struct domain *d = p2m->domain;
     struct vcpu *v = current;
+    int rc = 0;
+
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
     if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v) != NULL) )
-        paging_get_hostmode(v)->write_p2m_entry(d, gfn, p, new, level);
+        rc = paging_get_hostmode(v)->write_p2m_entry(d, gfn, p, new, level);
     else
         safe_write_pte(p, new);
+
+    return rc;
 }
 
 int paging_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0576c3c2d2..fe48c4a02b 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3176,7 +3176,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
     }
 }
 
-void
+int
 shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
                        l1_pgentry_t *p, l1_pgentry_t new,
                        unsigned int level)
@@ -3209,6 +3209,8 @@ shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
 #endif
 
     paging_unlock(d);
+
+    return 0;
 }
 
 /**************************************************************************/
diff --git a/xen/arch/x86/mm/shadow/none.c b/xen/arch/x86/mm/shadow/none.c
index 4de645a433..6344d755ab 100644
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -60,11 +60,12 @@ static void _update_paging_modes(struct vcpu *v)
     ASSERT_UNREACHABLE();
 }
 
-static void _write_p2m_entry(struct domain *d, unsigned long gfn,
-                             l1_pgentry_t *p, l1_pgentry_t new,
-                             unsigned int level)
+static int _write_p2m_entry(struct domain *d, unsigned long gfn,
+                            l1_pgentry_t *p, l1_pgentry_t new,
+                            unsigned int level)
 {
     ASSERT_UNREACHABLE();
+    return -ENOSYS;
 }
 
 static const struct paging_mode sh_paging_none = {
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index e8ed7ac714..e331771d3e 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -372,9 +372,9 @@ extern int sh_remove_write_access(struct domain *d, mfn_t readonly_mfn,
                                   unsigned long fault_addr);
 
 /* Functions that atomically write PT/P2M entries and update state */
-void shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level);
+int shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level);
 
 /* Update all the things that are derived from the guest's CR0/CR3/CR4.
  * Called to initialize paging structures if the paging mode
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 834d49d2d4..f4ec2becbd 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -265,7 +265,7 @@ struct p2m_domain {
                                                   unsigned long last_gfn);
     void               (*memory_type_changed)(struct p2m_domain *p2m);
     
-    void               (*write_p2m_entry)(struct p2m_domain *p2m,
+    int                (*write_p2m_entry)(struct p2m_domain *p2m,
                                           unsigned long gfn, l1_pgentry_t *p,
                                           l1_pgentry_t new, unsigned int level);
     long               (*audit_p2m)(struct p2m_domain *p2m);
@@ -837,7 +837,7 @@ void p2m_flush_nestedp2m(struct domain *d);
 /* Flushes the np2m specified by np2m_base (if it exists) */
 void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
 
-void nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+int nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
 
 /*
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index fdcc22844b..81031bb79b 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -124,7 +124,7 @@ struct paging_mode {
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
-    void          (*write_p2m_entry       )(struct domain *d, unsigned long gfn,
+    int           (*write_p2m_entry       )(struct domain *d, unsigned long gfn,
                                             l1_pgentry_t *p, l1_pgentry_t new,
                                             unsigned int level);
 
@@ -339,9 +339,9 @@ static inline void safe_write_pte(l1_pgentry_t *p, l1_pgentry_t new)
  * we are writing. */
 struct p2m_domain;
 
-void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level);
+int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level);
 
 /*
  * Called from the guest to indicate that the a process is being
-- 
2.17.2 (Apple Git-113)



* [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-18 17:27 [PATCH v4 0/4] pvh/dom0/shadow/amd fixes Roger Pau Monne
  2019-02-18 17:27 ` [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
  2019-02-18 17:27 ` [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code Roger Pau Monne
@ 2019-02-18 17:27 ` Roger Pau Monne
  2019-02-19  6:14   ` Tian, Kevin
  2019-02-18 17:27 ` [PATCH v4 4/4] npt/shadow: allow getting foreign page table entries Roger Pau Monne
  3 siblings, 1 reply; 18+ messages in thread
From: Roger Pau Monne @ 2019-02-18 17:27 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper,
	Tim Deegan, Jun Nakajima, Roger Pau Monne

So that the specific handling can be removed from
atomic_write_ept_entry and be shared with npt and shadow code.

This commit also removes the check that prevented a non-EPT PVH dom0
from mapping foreign pages.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since v3:
 - Replace the mfn_valid BUG_ONs with an assert & return.

Changes since v2:
 - Return an error code from p2m_entry_modify and propagate it to the
   callers.

Changes since v1:
 - Simplify code since mfn_to_page cannot return NULL.
 - Check if the mfn is valid before getting/dropping the page reference.
 - Use BUG_ON instead of ASSERTs, since getting the reference counting
   wrong is more dangerous than a DoS.
---
 xen/arch/x86/mm/hap/hap.c       | 12 +++++--
 xen/arch/x86/mm/p2m-ept.c       | 56 +++------------------------------
 xen/arch/x86/mm/p2m-pt.c        |  7 -----
 xen/arch/x86/mm/shadow/common.c |  3 +-
 xen/include/asm-x86/p2m.h       | 34 +++++++++++++++++---
 5 files changed, 47 insertions(+), 65 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 5b507376bc..2daf8424f6 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -714,6 +714,7 @@ hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
 {
     uint32_t old_flags;
     bool_t flush_nestedp2m = 0;
+    int rc;
 
     /* We know always use the host p2m here, regardless if the vcpu
      * is in host or guest mode. The vcpu can be in guest mode by
@@ -734,8 +735,15 @@ hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
             && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
     }
 
-    p2m_entry_modify(p2m_get_hostp2m(d), p2m_flags_to_type(l1e_get_flags(new)),
-                     p2m_flags_to_type(old_flags), level);
+    rc = p2m_entry_modify(p2m_get_hostp2m(d),
+                          p2m_flags_to_type(l1e_get_flags(new)),
+                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
+                          l1e_get_mfn(*p), level);
+    if ( rc )
+    {
+        paging_unlock(d);
+        return rc;
+    }
 
     safe_write_pte(p, new);
     if ( old_flags & _PAGE_PRESENT )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 0ece6608cb..83bd602fc4 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -45,65 +45,19 @@ static inline bool_t is_epte_valid(ept_entry_t *e)
     return ((e->epte & ~(1ul << 63)) != 0 && e->sa_p2mt != p2m_invalid);
 }
 
-/* returns : 0 for success, -errno otherwise */
 static int atomic_write_ept_entry(struct p2m_domain *p2m,
                                   ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
-    int rc;
-    unsigned long oldmfn = mfn_x(INVALID_MFN);
-    bool_t check_foreign = (new.mfn != entryptr->mfn ||
-                            new.sa_p2mt != entryptr->sa_p2mt);
-
-    if ( level )
-    {
-        ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
-        write_atomic(&entryptr->epte, new.epte);
-        return 0;
-    }
-
-    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
-    {
-        rc = -EINVAL;
-        if ( !is_epte_present(&new) )
-                goto out;
-
-        if ( check_foreign )
-        {
-            struct domain *fdom;
-
-            if ( !mfn_valid(_mfn(new.mfn)) )
-                goto out;
-
-            rc = -ESRCH;
-            fdom = page_get_owner(mfn_to_page(_mfn(new.mfn)));
-            if ( fdom == NULL )
-                goto out;
+    int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
+                              _mfn(new.mfn), _mfn(entryptr->mfn), level);
 
-            /* get refcount on the page */
-            rc = -EBUSY;
-            if ( !get_page(mfn_to_page(_mfn(new.mfn)), fdom) )
-                goto out;
-        }
-    }
-
-    if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
-        oldmfn = entryptr->mfn;
-
-    p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt, level);
+    if ( rc )
+        return rc;
 
     write_atomic(&entryptr->epte, new.epte);
 
-    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
-        put_page(mfn_to_page(_mfn(oldmfn)));
-
-    rc = 0;
-
- out:
-    if ( rc )
-        gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
-                 entryptr->epte, new.epte, rc);
-    return rc;
+    return 0;
 }
 
 static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3a8dc04efc..57afa37c71 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -564,13 +564,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         __trace_var(TRC_MEM_SET_P2M_ENTRY, 0, sizeof(t), &t);
     }
 
-    if ( unlikely(p2m_is_foreign(p2mt)) )
-    {
-        /* hvm fixme: foreign types are only supported on ept at present */
-        gdprintk(XENLOG_WARNING, "Unimplemented foreign p2m type.\n");
-        return -EINVAL;
-    }
-
     /* Carry out any eventually pending earlier changes first. */
     rc = do_recalc(p2m, gfn);
     if ( rc < 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index fe48c4a02b..ad670de515 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3189,7 +3189,8 @@ shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
          sh_unshadow_for_p2m_change(d, gfn, p, new, level);
 
     p2m_entry_modify(p2m_get_hostp2m(d), p2m_flags_to_type(l1e_get_flags(new)),
-                     p2m_flags_to_type(l1e_get_flags(*p)), level);
+                     p2m_flags_to_type(l1e_get_flags(*p)), l1e_get_mfn(new),
+                     l1e_get_mfn(*p), level);
 
     /* Update the entry with new content */
     safe_write_pte(p, new);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index f4ec2becbd..2801a8ccca 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -932,11 +932,14 @@ int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
 struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
                                               unsigned int *flags);
 
-static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
-                                    p2m_type_t ot, unsigned int level)
+static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
+                                   p2m_type_t ot, mfn_t nfn, mfn_t ofn,
+                                   unsigned int level)
 {
-    if ( level != 1 || nt == ot )
-        return;
+    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt == p2m_map_foreign));
+
+    if ( level != 1 || (nt == ot && mfn_eq(nfn, ofn)) )
+        return 0;
 
     switch ( nt )
     {
@@ -948,6 +951,18 @@ static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
         p2m->ioreq.entry_count++;
         break;
 
+    case p2m_map_foreign:
+        if ( !mfn_valid(nfn) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+
+        if ( !page_get_owner_and_reference(mfn_to_page(nfn)) )
+            return -EBUSY;
+
+        break;
+
     default:
         break;
     }
@@ -959,9 +974,20 @@ static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
         p2m->ioreq.entry_count--;
         break;
 
+    case p2m_map_foreign:
+        if ( !mfn_valid(ofn) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+        put_page(mfn_to_page(ofn));
+        break;
+
     default:
         break;
     }
+
+    return 0;
 }
 
 #endif /* _XEN_ASM_X86_P2M_H */
-- 
2.17.2 (Apple Git-113)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH v4 4/4] npt/shadow: allow getting foreign page table entries
  2019-02-18 17:27 [PATCH v4 0/4] pvh/dom0/shadow/amd fixes Roger Pau Monne
                   ` (2 preceding siblings ...)
  2019-02-18 17:27 ` [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
@ 2019-02-18 17:27 ` Roger Pau Monne
  2019-02-19  8:40   ` Jan Beulich
  3 siblings, 1 reply; 18+ messages in thread
From: Roger Pau Monne @ 2019-02-18 17:27 UTC (permalink / raw)
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The current npt and shadow code for getting an entry always returns
INVALID_MFN for foreign entries. Allow returning the entry mfn for
foreign entries, as is done for grant table entries.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
Changes since v2:
 - Use p2m_is_any_ram.
---
 xen/arch/x86/mm/p2m-pt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 57afa37c71..17969643b8 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -893,8 +893,8 @@ pod_retry_l1:
     *t = p2m_recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
     unmap_domain_page(l1e);
 
-    ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
-    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
+    ASSERT(mfn_valid(mfn) || !p2m_is_any_ram(*t) || p2m_is_paging(*t));
+    return (p2m_is_valid(*t) || p2m_is_any_ram(*t)) ? mfn : INVALID_MFN;
 }
 
 static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
-- 
2.17.2 (Apple Git-113)



* Re: [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper
  2019-02-18 17:27 ` [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
@ 2019-02-19  5:47   ` Tian, Kevin
  2019-02-19 10:15     ` Roger Pau Monné
  2019-02-19  8:49   ` Jan Beulich
  1 sibling, 1 reply; 18+ messages in thread
From: Tian, Kevin @ 2019-02-19  5:47 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper, Paul Durrant,
	Nakajima, Jun

> From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> Sent: Tuesday, February 19, 2019 1:27 AM
> 
> So that it can be shared by both ept, npt and shadow code, instead of
> duplicating it.
> 
> No change in functionality intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>. with one
small comment:

> +static inline void p2m_entry_modify(struct p2m_domain *p2m,
> p2m_type_t nt,
> +                                    p2m_type_t ot, unsigned int level)
> +{
> +    if ( level != 1 || nt == ot )
> +        return;

based on the type check, would it make more sense to call it
p2m_entry_modify_type?

Thanks
Kevin

* Re: [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-18 17:27 ` [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
@ 2019-02-19  6:14   ` Tian, Kevin
  2019-02-19  8:39     ` Jan Beulich
                       ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Tian, Kevin @ 2019-02-19  6:14 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper, Tim Deegan,
	Nakajima, Jun

> From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> Sent: Tuesday, February 19, 2019 1:27 AM
> 
> So that the specific handling can be removed from
> atomic_write_ept_entry and be shared with npt and shadow code.
> 
> This commit also removes the check that prevent non-ept PVH dom0 from
> mapping foreign pages.

ah, so please ignore my comment on [1/4]. p2m_entry_modify
is used for more than type changes now. :-)

> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: Tim Deegan <tim@xen.org>
> ---
> Changes since v3:
>  - Replace the mfn_valid BUG_ONs with an assert & return.
> 
> Changes since v2:
>  - Return an error code from p2m_entry_modify and propagate it to the
>    callers.
> 
> Changes since v1:
>  - Simply code since mfn_to_page cannot return NULL.
>  - Check if the mfn is valid before getting/dropping the page reference.
>  - Use BUG_ON instead of ASSERTs, since getting the reference counting
>    wrong is more dangerous than a DoS.
> ---
>  xen/arch/x86/mm/hap/hap.c       | 12 +++++--
>  xen/arch/x86/mm/p2m-ept.c       | 56 +++------------------------------
>  xen/arch/x86/mm/p2m-pt.c        |  7 -----
>  xen/arch/x86/mm/shadow/common.c |  3 +-
>  xen/include/asm-x86/p2m.h       | 34 +++++++++++++++++---
>  5 files changed, 47 insertions(+), 65 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index 5b507376bc..2daf8424f6 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -714,6 +714,7 @@ hap_write_p2m_entry(struct domain *d, unsigned
> long gfn, l1_pgentry_t *p,
>  {
>      uint32_t old_flags;
>      bool_t flush_nestedp2m = 0;
> +    int rc;
> 
>      /* We know always use the host p2m here, regardless if the vcpu
>       * is in host or guest mode. The vcpu can be in guest mode by
> @@ -734,8 +735,15 @@ hap_write_p2m_entry(struct domain *d, unsigned
> long gfn, l1_pgentry_t *p,
>              && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
>      }
> 
> -    p2m_entry_modify(p2m_get_hostp2m(d),
> p2m_flags_to_type(l1e_get_flags(new)),
> -                     p2m_flags_to_type(old_flags), level);
> +    rc = p2m_entry_modify(p2m_get_hostp2m(d),
> +                          p2m_flags_to_type(l1e_get_flags(new)),
> +                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
> +                          l1e_get_mfn(*p), level);

why not pass the old/new PTEs to reduce the number of parameters and
thus stay stable against future changes?

> +    if ( rc )
> +    {
> +        paging_unlock(d);
> +        return rc;
> +    }
> 
>      safe_write_pte(p, new);
>      if ( old_flags & _PAGE_PRESENT )
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 0ece6608cb..83bd602fc4 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -45,65 +45,19 @@ static inline bool_t is_epte_valid(ept_entry_t *e)
>      return ((e->epte & ~(1ul << 63)) != 0 && e->sa_p2mt != p2m_invalid);
>  }
> 
> -/* returns : 0 for success, -errno otherwise */
>  static int atomic_write_ept_entry(struct p2m_domain *p2m,
>                                    ept_entry_t *entryptr, ept_entry_t new,
>                                    int level)
>  {
> -    int rc;
> -    unsigned long oldmfn = mfn_x(INVALID_MFN);
> -    bool_t check_foreign = (new.mfn != entryptr->mfn ||
> -                            new.sa_p2mt != entryptr->sa_p2mt);
> -
> -    if ( level )
> -    {
> -        ASSERT(!is_epte_superpage(&new)
> || !p2m_is_foreign(new.sa_p2mt));
> -        write_atomic(&entryptr->epte, new.epte);
> -        return 0;
> -    }
> -
> -    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
> -    {
> -        rc = -EINVAL;
> -        if ( !is_epte_present(&new) )
> -                goto out;
> -
> -        if ( check_foreign )
> -        {
> -            struct domain *fdom;
> -
> -            if ( !mfn_valid(_mfn(new.mfn)) )
> -                goto out;
> -
> -            rc = -ESRCH;
> -            fdom = page_get_owner(mfn_to_page(_mfn(new.mfn)));
> -            if ( fdom == NULL )
> -                goto out;
> +    int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
> +                              _mfn(new.mfn), _mfn(entryptr->mfn), level);
> 
> -            /* get refcount on the page */
> -            rc = -EBUSY;
> -            if ( !get_page(mfn_to_page(_mfn(new.mfn)), fdom) )
> -                goto out;
> -        }
> -    }
> -
> -    if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
> -        oldmfn = entryptr->mfn;
> -
> -    p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt, level);
> +    if ( rc )
> +        return rc;
> 
>      write_atomic(&entryptr->epte, new.epte);
> 
> -    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
> -        put_page(mfn_to_page(_mfn(oldmfn)));
> -
> -    rc = 0;
> -
> - out:
> -    if ( rc )
> -        gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
> -                 entryptr->epte, new.epte, rc);
> -    return rc;
> +    return 0;
>  }
> 
>  static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t
> *entry,
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 3a8dc04efc..57afa37c71 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -564,13 +564,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
> gfn_t gfn_, mfn_t mfn,
>          __trace_var(TRC_MEM_SET_P2M_ENTRY, 0, sizeof(t), &t);
>      }
> 
> -    if ( unlikely(p2m_is_foreign(p2mt)) )
> -    {
> -        /* hvm fixme: foreign types are only supported on ept at present */
> -        gdprintk(XENLOG_WARNING, "Unimplemented foreign p2m type.\n");
> -        return -EINVAL;
> -    }
> -
>      /* Carry out any eventually pending earlier changes first. */
>      rc = do_recalc(p2m, gfn);
>      if ( rc < 0 )
> diff --git a/xen/arch/x86/mm/shadow/common.c
> b/xen/arch/x86/mm/shadow/common.c
> index fe48c4a02b..ad670de515 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -3189,7 +3189,8 @@ shadow_write_p2m_entry(struct domain *d,
> unsigned long gfn,
>           sh_unshadow_for_p2m_change(d, gfn, p, new, level);
> 
>      p2m_entry_modify(p2m_get_hostp2m(d),
> p2m_flags_to_type(l1e_get_flags(new)),
> -                     p2m_flags_to_type(l1e_get_flags(*p)), level);
> +                     p2m_flags_to_type(l1e_get_flags(*p)), l1e_get_mfn(new),
> +                     l1e_get_mfn(*p), level);
> 
>      /* Update the entry with new content */
>      safe_write_pte(p, new);
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index f4ec2becbd..2801a8ccca 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -932,11 +932,14 @@ int p2m_set_ioreq_server(struct domain *d,
> unsigned int flags,
>  struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
>                                                unsigned int *flags);
> 
> -static inline void p2m_entry_modify(struct p2m_domain *p2m,
> p2m_type_t nt,
> -                                    p2m_type_t ot, unsigned int level)
> +static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t
> nt,
> +                                   p2m_type_t ot, mfn_t nfn, mfn_t ofn,
> +                                   unsigned int level)
>  {
> -    if ( level != 1 || nt == ot )
> -        return;
> +    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt ==
> p2m_map_foreign));
> +
> +    if ( level != 1 || (nt == ot && mfn_eq(nfn, ofn)) )
> +        return 0;

we could also return here in case of nt == ot == p2m_ioreq_server;
otherwise p2m->ioreq.entry_count might be unnecessarily incremented
and decremented if the mfn changes while the type stays
p2m_ioreq_server...

> 
>      switch ( nt )
>      {
> @@ -948,6 +951,18 @@ static inline void p2m_entry_modify(struct
> p2m_domain *p2m, p2m_type_t nt,
>          p2m->ioreq.entry_count++;
>          break;
> 
> +    case p2m_map_foreign:
> +        if ( !mfn_valid(nfn) )
> +        {
> +            ASSERT_UNREACHABLE();
> +            return -EINVAL;
> +        }
> +
> +        if ( !page_get_owner_and_reference(mfn_to_page(nfn)) )
> +            return -EBUSY;
> +
> +        break;
> +
>      default:
>          break;
>      }
> @@ -959,9 +974,20 @@ static inline void p2m_entry_modify(struct
> p2m_domain *p2m, p2m_type_t nt,
>          p2m->ioreq.entry_count--;
>          break;
> 
> +    case p2m_map_foreign:
> +        if ( !mfn_valid(ofn) )
> +        {
> +            ASSERT_UNREACHABLE();
> +            return -EINVAL;
> +        }
> +        put_page(mfn_to_page(ofn));
> +        break;
> +
>      default:
>          break;
>      }
> +
> +    return 0;
>  }
> 
>  #endif /* _XEN_ASM_X86_P2M_H */
> --
> 2.17.2 (Apple Git-113)


* Re: [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-19  6:14   ` Tian, Kevin
@ 2019-02-19  8:39     ` Jan Beulich
  2019-02-19 13:53     ` George Dunlap
  2019-02-19 14:00     ` Roger Pau Monné
  2 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2019-02-19  8:39 UTC (permalink / raw)
  To: Kevin Tian
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jun Nakajima,
	xen-devel, Roger Pau Monne

>>> On 19.02.19 at 07:14, <kevin.tian@intel.com> wrote:
>>  From: Roger Pau Monne [mailto:roger.pau@citrix.com]
>> Sent: Tuesday, February 19, 2019 1:27 AM
>> 
>> @@ -734,8 +735,15 @@ hap_write_p2m_entry(struct domain *d, unsigned
>> long gfn, l1_pgentry_t *p,
>>              && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
>>      }
>> 
>> -    p2m_entry_modify(p2m_get_hostp2m(d),
>> p2m_flags_to_type(l1e_get_flags(new)),
>> -                     p2m_flags_to_type(old_flags), level);
>> +    rc = p2m_entry_modify(p2m_get_hostp2m(d),
>> +                          p2m_flags_to_type(l1e_get_flags(new)),
>> +                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
>> +                          l1e_get_mfn(*p), level);
> 
> why not passing old/new pte to reduce #parameters and thus stable
> against future changes?

The PTE layout is different between EPT and shadow/NPT, yet
the function here is generic. It's not impossible to do what you
suggest, but it would require field extraction abstraction
functions, which I don't think is worth it.

Jan
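
The layout difference Jan refers to can be illustrated with a toy
example. The bit layouts below are invented for the illustration and
do not match Xen's real EPT or NPT entries:

```c
#include <assert.h>
#include <stdint.h>

/* Invented layouts, for illustration only: the point is that the p2m
 * type lives in different fields in the two entry formats, so a
 * generic helper taking raw PTEs would need per-format extraction
 * functions like these. */
struct toy_ept_entry { uint64_t type : 6, mfn : 58; };
struct toy_npt_entry { uint64_t flags : 12, mfn : 52; };

unsigned int toy_ept_type(struct toy_ept_entry e)
{
    return e.type;                /* type is a dedicated field */
}

unsigned int toy_npt_type(struct toy_npt_entry e)
{
    return (e.flags >> 9) & 7;    /* type packed into available flag bits */
}
```

With extraction helpers of this shape a common p2m_entry_modify()
could stay format-agnostic; Jan's judgement above is that the extra
abstraction isn't worth it for just two formats.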




* Re: [PATCH v4 4/4] npt/shadow: allow getting foreign page table entries
  2019-02-18 17:27 ` [PATCH v4 4/4] npt/shadow: allow getting foreign page table entries Roger Pau Monne
@ 2019-02-19  8:40   ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2019-02-19  8:40 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: George Dunlap, Andrew Cooper, Wei Liu, xen-devel

>>> On 18.02.19 at 18:27, <roger.pau@citrix.com> wrote:
> Current npt and shadow code to get an entry will always return
> INVALID_MFN for foreign entries. Allow to return the entry mfn for
> foreign entries, like it's done for grant table entries.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper
  2019-02-18 17:27 ` [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
  2019-02-19  5:47   ` Tian, Kevin
@ 2019-02-19  8:49   ` Jan Beulich
  2019-02-19 11:22     ` Roger Pau Monné
  1 sibling, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2019-02-19  8:49 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Kevin Tian, Wei Liu, George Dunlap, Andrew Cooper, Paul Durrant,
	Jun Nakajima, xen-devel

>>> On 18.02.19 at 18:27, <roger.pau@citrix.com> wrote:
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -3188,6 +3188,9 @@ shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
>      if ( likely(d->arch.paging.shadow.total_pages != 0) )
>           sh_unshadow_for_p2m_change(d, gfn, p, new, level);
>  
> +    p2m_entry_modify(p2m_get_hostp2m(d), p2m_flags_to_type(l1e_get_flags(new)),
> +                     p2m_flags_to_type(l1e_get_flags(*p)), level);
> +
>      /* Update the entry with new content */
>      safe_write_pte(p, new);

Strictly speaking you should Cc Tim for this change.

Also at this example (a possible issue elsewhere as well) - is
acting on the host P2M (only) really the right thing to do here?

Jan




* Re: [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code
  2019-02-18 17:27 ` [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code Roger Pau Monne
@ 2019-02-19  9:05   ` Jan Beulich
  2019-02-19 11:56     ` Roger Pau Monné
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2019-02-19  9:05 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, xen-devel

>>> On 18.02.19 at 18:27, <roger.pau@citrix.com> wrote:
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -184,6 +184,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>      l1_pgentry_t *p2m_entry, new_entry;
>      void *next;
>      unsigned int flags;
> +    int rc;
>  
>      if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
>                                        shift, max)) )
> @@ -202,7 +203,13 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>  
>          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        if ( rc )
> +        {
> +            ASSERT_UNREACHABLE();
> +            p2m_free_ptp(p2m, mfn_to_page(mfn));
> +            return rc;
> +        }
>      }
>      else if ( flags & _PAGE_PSE )
>      {
> @@ -250,14 +257,27 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>          {
>              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>                                       flags);
> -            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +            if ( rc )
> +            {
> +                ASSERT_UNREACHABLE();
> +                unmap_domain_page(l1_entry);
> +                p2m_free_ptp(p2m, mfn_to_page(mfn));
> +                return rc;
> +            }
>          }
>  
>          unmap_domain_page(l1_entry);
>  
>          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        if ( rc )
> +        {
> +            ASSERT_UNREACHABLE();
> +            p2m_free_ptp(p2m, mfn_to_page(mfn));
> +            return rc;
> +        }
>      }
>      else
>          ASSERT(flags & _PAGE_PRESENT);

Personally I would find it quite desirable if there was just one instance of
this error handling you add, which doesn't look overly difficult to arrange
for.

> @@ -321,7 +341,8 @@ static int p2m_pt_set_recalc_range(struct p2m_domain *p2m,
>              if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
>              {
>                  set_recalc(l1, e);
> -                p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
> +                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
> +                ASSERT(!err);
>              }
>              first_gfn += 1UL << (i * PAGETABLE_ORDER);
>          }

So on a release build, the result the caller observes in case of an
(unexpected) error depends on whether it occurs on the last iteration.
I don't consider this very helpful behavior in terms of debuggability.
(Along these lines also again further down.)

Jan
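
Jan's suggestion of keeping just one instance of the error handling in
p2m_next_level() can be sketched as a control-flow toy model. This is
not the actual Xen code; the function names and the failure-injection
mechanism are invented for the illustration:

```c
#include <assert.h>

int freed;                        /* counts p2m_free_ptp() calls in the model */
void free_ptp(void) { freed++; }

/* Stand-in for write_p2m_entry(): fails once the injected countdown
 * reaches zero. */
int write_entry(int *countdown)
{
    return (*countdown)-- == 0 ? -1 : 0;
}

/* All three write sites funnel into a single cleanup path, instead of
 * repeating the free/return sequence at each call site. */
int next_level(int fail_at)
{
    int rc;

    if ( (rc = write_entry(&fail_at)) != 0 )
        goto fail;
    if ( (rc = write_entry(&fail_at)) != 0 )
        goto fail;
    if ( (rc = write_entry(&fail_at)) != 0 )
        goto fail;
    return 0;

 fail:
    free_ptp();                   /* the one instance of the cleanup */
    return rc;
}
```

Whichever write fails, the cleanup runs exactly once and the error code
is propagated, which is the behaviour the duplicated blocks in the
patch implement three times over.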




* Re: [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper
  2019-02-19  5:47   ` Tian, Kevin
@ 2019-02-19 10:15     ` Roger Pau Monné
  0 siblings, 0 replies; 18+ messages in thread
From: Roger Pau Monné @ 2019-02-19 10:15 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper, Paul Durrant,
	Nakajima, Jun, xen-devel

On Tue, Feb 19, 2019 at 05:47:34AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> > Sent: Tuesday, February 19, 2019 1:27 AM
> > 
> > So that it can be shared by both ept, npt and shadow code, instead of
> > duplicating it.
> > 
> > No change in functionality intended.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> > Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>. with one
> small comment:
> 
> > +static inline void p2m_entry_modify(struct p2m_domain *p2m,
> > p2m_type_t nt,
> > +                                    p2m_type_t ot, unsigned int level)
> > +{
> > +    if ( level != 1 || nt == ot )
> > +        return;
> 
> based on type check, does it make more sense to call it
> p2m_entry_modify_type?

Later on the function will be expanded to also take a new/old mfn pair
of parameters, so I think the name is fine taking into account the
expansion of the function later in the patch series.

Thanks, Roger.
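
The planned expansion (taking a new/old mfn pair) boils down to
reference counting of foreign pages. A standalone toy model of that
behaviour, with simplified stand-in types rather than the real Xen
API, might look like:

```c
#include <assert.h>

/* Toy stand-ins, not Xen's types: this models only the refcounting
 * behaviour of the expanded helper discussed in this thread. */
typedef unsigned long mfn_t;
enum p2m_type { p2m_ram, p2m_map_foreign };

int refcount[16];                 /* fake per-page reference counts */

/* Take a reference on the new page when it becomes a foreign mapping,
 * drop the reference on the old page when it stops being one.  A no-op
 * when neither the type nor the mfn changes. */
int entry_modify(enum p2m_type nt, enum p2m_type ot, mfn_t nfn, mfn_t ofn)
{
    if ( nt == ot && nfn == ofn )
        return 0;

    if ( nt == p2m_map_foreign )
        refcount[nfn]++;          /* get_page() in the real code */

    if ( ot == p2m_map_foreign )
        refcount[ofn]--;          /* put_page() in the real code */

    return 0;
}
```

Replacing one foreign mapping with another then nets out to a single
reference moving from the old mfn to the new one.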


* Re: [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper
  2019-02-19  8:49   ` Jan Beulich
@ 2019-02-19 11:22     ` Roger Pau Monné
  0 siblings, 0 replies; 18+ messages in thread
From: Roger Pau Monné @ 2019-02-19 11:22 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Wei Liu, George Dunlap, Andrew Cooper, Paul Durrant,
	Jun Nakajima, xen-devel

On Tue, Feb 19, 2019 at 01:49:18AM -0700, Jan Beulich wrote:
> >>> On 18.02.19 at 18:27, <roger.pau@citrix.com> wrote:
> > --- a/xen/arch/x86/mm/shadow/common.c
> > +++ b/xen/arch/x86/mm/shadow/common.c
> > @@ -3188,6 +3188,9 @@ shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
> >      if ( likely(d->arch.paging.shadow.total_pages != 0) )
> >           sh_unshadow_for_p2m_change(d, gfn, p, new, level);
> >  
> > +    p2m_entry_modify(p2m_get_hostp2m(d), p2m_flags_to_type(l1e_get_flags(new)),
> > +                     p2m_flags_to_type(l1e_get_flags(*p)), level);
> > +
> >      /* Update the entry with new content */
> >      safe_write_pte(p, new);
> 
> Strictly speaking you should Cc Tim for this change.

sorry, I guess I didn't rerun get_maintainer after changing the patch.

> Also at this example (a possible issue elsewhere as well) - is
> acting on the host P2M (only) really the right thing to do here?

I've wondered the same while working on this. What's even worse is
that paging_write_p2m_entry takes a p2m as its first argument, but the
write_p2m_entry handlers expect a domain instead of a p2m.

I have the following patch that pushes the p2m up to the
write_p2m_entry handlers, thus removing the need to do a
p2m_get_hostp2m in the write_p2m_entry handlers.

I'm planning to add this as a pre-patch to my series.

Thanks, Roger.
---8<---
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d651b94c3..28fe48d158 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -709,9 +709,10 @@ static void hap_update_paging_modes(struct vcpu *v)
 }
 
 static void
-hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
+hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
                     l1_pgentry_t new, unsigned int level)
 {
+    struct domain *d = p2m->domain;
     uint32_t old_flags;
     bool_t flush_nestedp2m = 0;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index d5836eb688..e6ed3006fe 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -941,7 +941,7 @@ void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
     if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v) != NULL) )
-        paging_get_hostmode(v)->write_p2m_entry(d, gfn, p, new, level);
+        paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
     else
         safe_write_pte(p, new);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 07840ff727..6c67ef4996 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3177,10 +3177,12 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
 }
 
 void
-shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
+shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                        l1_pgentry_t *p, l1_pgentry_t new,
                        unsigned int level)
 {
+    struct domain *d = p2m->domain;
+
     paging_lock(d);
 
     /* If there are any shadows, update them.  But if shadow_teardown()
diff --git a/xen/arch/x86/mm/shadow/none.c b/xen/arch/x86/mm/shadow/none.c
index 4de645a433..316002771d 100644
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -60,7 +60,7 @@ static void _update_paging_modes(struct vcpu *v)
     ASSERT_UNREACHABLE();
 }
 
-static void _write_p2m_entry(struct domain *d, unsigned long gfn,
+static void _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                              l1_pgentry_t *p, l1_pgentry_t new,
                              unsigned int level)
 {
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index e8ed7ac714..0aaed1edfc 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -372,7 +372,7 @@ extern int sh_remove_write_access(struct domain *d, mfn_t readonly_mfn,
                                   unsigned long fault_addr);
 
 /* Functions that atomically write PT/P2M entries and update state */
-void shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
+void shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                             l1_pgentry_t *p, l1_pgentry_t new,
                             unsigned int level);
 
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index fdcc22844b..7ec09d7b11 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -124,7 +124,8 @@ struct paging_mode {
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
-    void          (*write_p2m_entry       )(struct domain *d, unsigned long gfn,
+    void          (*write_p2m_entry       )(struct p2m_domain *p2m,
+                                            unsigned long gfn,
                                             l1_pgentry_t *p, l1_pgentry_t new,
                                             unsigned int level);
 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code
  2019-02-19  9:05   ` Jan Beulich
@ 2019-02-19 11:56     ` Roger Pau Monné
  2019-02-19 12:28       ` Jan Beulich
  0 siblings, 1 reply; 18+ messages in thread
From: Roger Pau Monné @ 2019-02-19 11:56 UTC (permalink / raw)
  To: Jan Beulich; +Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, xen-devel

On Tue, Feb 19, 2019 at 02:05:37AM -0700, Jan Beulich wrote:
> >>> On 18.02.19 at 18:27, <roger.pau@citrix.com> wrote:
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -184,6 +184,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >      l1_pgentry_t *p2m_entry, new_entry;
> >      void *next;
> >      unsigned int flags;
> > +    int rc;
> >  
> >      if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
> >                                        shift, max)) )
> > @@ -202,7 +203,13 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> >  
> >          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> > -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        if ( rc )
> > +        {
> > +            ASSERT_UNREACHABLE();
> > +            p2m_free_ptp(p2m, mfn_to_page(mfn));
> > +            return rc;
> > +        }
> >      }
> >      else if ( flags & _PAGE_PSE )
> >      {
> > @@ -250,14 +257,27 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >          {
> >              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
> >                                       flags);
> > -            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> > +            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> > +            if ( rc )
> > +            {
> > +                ASSERT_UNREACHABLE();
> > +                unmap_domain_page(l1_entry);
> > +                p2m_free_ptp(p2m, mfn_to_page(mfn));
> > +                return rc;
> > +            }
> >          }
> >  
> >          unmap_domain_page(l1_entry);
> >  
> >          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> >          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> > -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        if ( rc )
> > +        {
> > +            ASSERT_UNREACHABLE();
> > +            p2m_free_ptp(p2m, mfn_to_page(mfn));
> > +            return rc;
> > +        }
> >      }
> >      else
> >          ASSERT(flags & _PAGE_PRESENT);
> 
> Personally I would find it quite desirable if there was just one instance of
> this error handling you add, which doesn't look overly difficult to arrange
> for.

Sure. Would you be fine with me adding a label?

> > @@ -321,7 +341,8 @@ static int p2m_pt_set_recalc_range(struct p2m_domain *p2m,
> >              if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
> >              {
> >                  set_recalc(l1, e);
> > -                p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
> > +                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
> > +                ASSERT(!err);
> >              }
> >              first_gfn += 1UL << (i * PAGETABLE_ORDER);
> >          }
> 
> So on a release build what result the caller would observe in case
> there is an (unexpected) error depends on whether this occurs on
> the last iteration. I don't consider this very helpful behavior in
> terms of debuggability. (Along these lines also again further down).

Done, thanks for spotting those.

Roger.
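[Editorial note: the debuggability issue Jan points out above — `err =
write(...); ASSERT(!err);` inside a loop — can be modelled with a small
hedged C sketch. All names here are hypothetical stand-ins, not the
actual Xen code: on a release build the ASSERT compiles away, so only
the last iteration's return value is observable unless the first error
is kept sticky.]

```c
/* Stand-in for write_p2m_entry: fails (-22) on entry 1 only. */
static int write_entry(int i)
{
    return i == 1 ? -22 : 0;
}

/*
 * Mimics "err = write(...); ASSERT(!err);" in a loop: with the ASSERT
 * compiled out, only the last iteration's rc survives, so an earlier
 * failure is silently masked.
 */
static int last_error_only(int n)
{
    int i, err = 0;

    for ( i = 0; i < n; i++ )
        err = write_entry(i);
    return err;
}

/* Keeps the first error instead, so the caller can observe the failure. */
static int first_error_sticky(int n)
{
    int i, rc = 0;

    for ( i = 0; i < n; i++ )
    {
        int err = write_entry(i);

        if ( !rc )
            rc = err;
    }
    return rc;
}
```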


* Re: [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code
  2019-02-19 11:56     ` Roger Pau Monné
@ 2019-02-19 12:28       ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2019-02-19 12:28 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, xen-devel

>>> On 19.02.19 at 12:56, <roger.pau@citrix.com> wrote:
> On Tue, Feb 19, 2019 at 02:05:37AM -0700, Jan Beulich wrote:
>> >>> On 18.02.19 at 18:27, <roger.pau@citrix.com> wrote:
>> > --- a/xen/arch/x86/mm/p2m-pt.c
>> > +++ b/xen/arch/x86/mm/p2m-pt.c
>> > @@ -184,6 +184,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>> >      l1_pgentry_t *p2m_entry, new_entry;
>> >      void *next;
>> >      unsigned int flags;
>> > +    int rc;
>> >  
>> >      if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
>> >                                        shift, max)) )
>> > @@ -202,7 +203,13 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>> >          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>> >  
>> >          p2m_add_iommu_flags(&new_entry, level, 
> IOMMUF_readable|IOMMUF_writable);
>> > -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>> > +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 
> 1);
>> > +        if ( rc )
>> > +        {
>> > +            ASSERT_UNREACHABLE();
>> > +            p2m_free_ptp(p2m, mfn_to_page(mfn));
>> > +            return rc;
>> > +        }
>> >      }
>> >      else if ( flags & _PAGE_PSE )
>> >      {
>> > @@ -250,14 +257,27 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>> >          {
>> >              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * 
> PAGETABLE_ORDER)),
>> >                                       flags);
>> > -            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
>> > +            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, 
> level);
>> > +            if ( rc )
>> > +            {
>> > +                ASSERT_UNREACHABLE();
>> > +                unmap_domain_page(l1_entry);
>> > +                p2m_free_ptp(p2m, mfn_to_page(mfn));
>> > +                return rc;
>> > +            }
>> >          }
>> >  
>> >          unmap_domain_page(l1_entry);
>> >  
>> >          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>> >          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
>> > -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>> > +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>> > +        if ( rc )
>> > +        {
>> > +            ASSERT_UNREACHABLE();
>> > +            p2m_free_ptp(p2m, mfn_to_page(mfn));
>> > +            return rc;
>> > +        }
>> >      }
>> >      else
>> >          ASSERT(flags & _PAGE_PRESENT);
>> 
>> Personally I would find it quite desirable if there was just one instance of
>> this error handling you add, which doesn't look overly difficult to arrange
>> for.
> 
> Sure. Would you be fine with me adding a label?

Personally (again) I'd prefer if you got away without, which (as said before)
doesn't look overly difficult to arrange for.

Jan
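[Editorial note: the consolidation Jan is suggesting — a single
instance of the write/ASSERT_UNREACHABLE/cleanup error path rather
than three copies — can be sketched as below. These are hypothetical
stand-in names, not the actual Xen functions; the point is only the
shape of routing every write through one helper that owns the cleanup.]

```c
static int fail_write;   /* test knob: force the write to fail */
static int freed;        /* counts p2m_free_ptp()-style cleanups */

/* Stand-in for p2m->write_p2m_entry(). */
static int write_p2m_entry(unsigned long entry)
{
    (void)entry;
    return fail_write ? -22 : 0;
}

/* Stand-in for p2m_free_ptp(). */
static void free_ptp(void)
{
    freed++;
}

/*
 * One consolidated error path: every write site goes through this
 * helper, so the cleanup-and-return sequence exists exactly once
 * instead of being copied after each write_p2m_entry() call.
 */
static int write_or_cleanup(unsigned long entry)
{
    int rc = write_p2m_entry(entry);

    if ( rc )
        free_ptp();   /* single cleanup site */
    return rc;
}
```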




* Re: [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-19  6:14   ` Tian, Kevin
  2019-02-19  8:39     ` Jan Beulich
@ 2019-02-19 13:53     ` George Dunlap
  2019-02-19 14:00     ` Roger Pau Monné
  2 siblings, 0 replies; 18+ messages in thread
From: George Dunlap @ 2019-02-19 13:53 UTC (permalink / raw)
  To: Tian, Kevin, Roger Pau Monne, xen-devel
  Cc: Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper, Tim Deegan,
	Nakajima, Jun

On 2/19/19 6:14 AM, Tian, Kevin wrote:
>> From: Roger Pau Monne [mailto:roger.pau@citrix.com]
>> Sent: Tuesday, February 19, 2019 1:27 AM
>>
>> So that the specific handling can be removed from
>> atomic_write_ept_entry and be shared with npt and shadow code.
>>
>> This commit also removes the check that prevent non-ept PVH dom0 from
>> mapping foreign pages.
> 
> ah, so please ignore my comment to [1/4]. p2m_entry_modify
> is used more than type change now. :-)
> 
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>> Cc: Kevin Tian <kevin.tian@intel.com>
>> Cc: Tim Deegan <tim@xen.org>
>> ---
>> Changes since v3:
>>  - Replace the mfn_valid BUG_ONs with an assert & return.
>>
>> Changes since v2:
>>  - Return an error code from p2m_entry_modify and propagate it to the
>>    callers.
>>
>> Changes since v1:
>>  - Simply code since mfn_to_page cannot return NULL.
>>  - Check if the mfn is valid before getting/dropping the page reference.
>>  - Use BUG_ON instead of ASSERTs, since getting the reference counting
>>    wrong is more dangerous than a DoS.
>> ---
>>  xen/arch/x86/mm/hap/hap.c       | 12 +++++--
>>  xen/arch/x86/mm/p2m-ept.c       | 56 +++------------------------------
>>  xen/arch/x86/mm/p2m-pt.c        |  7 -----
>>  xen/arch/x86/mm/shadow/common.c |  3 +-
>>  xen/include/asm-x86/p2m.h       | 34 +++++++++++++++++---
>>  5 files changed, 47 insertions(+), 65 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
>> index 5b507376bc..2daf8424f6 100644
>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -714,6 +714,7 @@ hap_write_p2m_entry(struct domain *d, unsigned
>> long gfn, l1_pgentry_t *p,
>>  {
>>      uint32_t old_flags;
>>      bool_t flush_nestedp2m = 0;
>> +    int rc;
>>
>>      /* We know always use the host p2m here, regardless if the vcpu
>>       * is in host or guest mode. The vcpu can be in guest mode by
>> @@ -734,8 +735,15 @@ hap_write_p2m_entry(struct domain *d, unsigned
>> long gfn, l1_pgentry_t *p,
>>              && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
>>      }
>>
>> -    p2m_entry_modify(p2m_get_hostp2m(d),
>> p2m_flags_to_type(l1e_get_flags(new)),
>> -                     p2m_flags_to_type(old_flags), level);
>> +    rc = p2m_entry_modify(p2m_get_hostp2m(d),
>> +                          p2m_flags_to_type(l1e_get_flags(new)),
>> +                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
>> +                          l1e_get_mfn(*p), level);
> 
> why not passing old/new pte to reduce #parameters and thus stable
> against future changes?
> 
>> +    if ( rc )
>> +    {
>> +        paging_unlock(d);
>> +        return rc;
>> +    }
>>
>>      safe_write_pte(p, new);
>>      if ( old_flags & _PAGE_PRESENT )
>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>> index 0ece6608cb..83bd602fc4 100644
>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -45,65 +45,19 @@ static inline bool_t is_epte_valid(ept_entry_t *e)
>>      return ((e->epte & ~(1ul << 63)) != 0 && e->sa_p2mt != p2m_invalid);
>>  }
>>
>> -/* returns : 0 for success, -errno otherwise */
>>  static int atomic_write_ept_entry(struct p2m_domain *p2m,
>>                                    ept_entry_t *entryptr, ept_entry_t new,
>>                                    int level)
>>  {
>> -    int rc;
>> -    unsigned long oldmfn = mfn_x(INVALID_MFN);
>> -    bool_t check_foreign = (new.mfn != entryptr->mfn ||
>> -                            new.sa_p2mt != entryptr->sa_p2mt);
>> -
>> -    if ( level )
>> -    {
>> -        ASSERT(!is_epte_superpage(&new)
>> || !p2m_is_foreign(new.sa_p2mt));
>> -        write_atomic(&entryptr->epte, new.epte);
>> -        return 0;
>> -    }
>> -
>> -    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
>> -    {
>> -        rc = -EINVAL;
>> -        if ( !is_epte_present(&new) )
>> -                goto out;
>> -
>> -        if ( check_foreign )
>> -        {
>> -            struct domain *fdom;
>> -
>> -            if ( !mfn_valid(_mfn(new.mfn)) )
>> -                goto out;
>> -
>> -            rc = -ESRCH;
>> -            fdom = page_get_owner(mfn_to_page(_mfn(new.mfn)));
>> -            if ( fdom == NULL )
>> -                goto out;
>> +    int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
>> +                              _mfn(new.mfn), _mfn(entryptr->mfn), level);
>>
>> -            /* get refcount on the page */
>> -            rc = -EBUSY;
>> -            if ( !get_page(mfn_to_page(_mfn(new.mfn)), fdom) )
>> -                goto out;
>> -        }
>> -    }
>> -
>> -    if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
>> -        oldmfn = entryptr->mfn;
>> -
>> -    p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt, level);
>> +    if ( rc )
>> +        return rc;
>>
>>      write_atomic(&entryptr->epte, new.epte);
>>
>> -    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
>> -        put_page(mfn_to_page(_mfn(oldmfn)));
>> -
>> -    rc = 0;
>> -
>> - out:
>> -    if ( rc )
>> -        gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
>> -                 entryptr->epte, new.epte, rc);
>> -    return rc;
>> +    return 0;
>>  }
>>
>>  static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t
>> *entry,
>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>> index 3a8dc04efc..57afa37c71 100644
>> --- a/xen/arch/x86/mm/p2m-pt.c
>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> @@ -564,13 +564,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
>> gfn_t gfn_, mfn_t mfn,
>>          __trace_var(TRC_MEM_SET_P2M_ENTRY, 0, sizeof(t), &t);
>>      }
>>
>> -    if ( unlikely(p2m_is_foreign(p2mt)) )
>> -    {
>> -        /* hvm fixme: foreign types are only supported on ept at present */
>> -        gdprintk(XENLOG_WARNING, "Unimplemented foreign p2m type.\n");
>> -        return -EINVAL;
>> -    }
>> -
>>      /* Carry out any eventually pending earlier changes first. */
>>      rc = do_recalc(p2m, gfn);
>>      if ( rc < 0 )
>> diff --git a/xen/arch/x86/mm/shadow/common.c
>> b/xen/arch/x86/mm/shadow/common.c
>> index fe48c4a02b..ad670de515 100644
>> --- a/xen/arch/x86/mm/shadow/common.c
>> +++ b/xen/arch/x86/mm/shadow/common.c
>> @@ -3189,7 +3189,8 @@ shadow_write_p2m_entry(struct domain *d,
>> unsigned long gfn,
>>           sh_unshadow_for_p2m_change(d, gfn, p, new, level);
>>
>>      p2m_entry_modify(p2m_get_hostp2m(d),
>> p2m_flags_to_type(l1e_get_flags(new)),
>> -                     p2m_flags_to_type(l1e_get_flags(*p)), level);
>> +                     p2m_flags_to_type(l1e_get_flags(*p)), l1e_get_mfn(new),
>> +                     l1e_get_mfn(*p), level);
>>
>>      /* Update the entry with new content */
>>      safe_write_pte(p, new);
>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>> index f4ec2becbd..2801a8ccca 100644
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -932,11 +932,14 @@ int p2m_set_ioreq_server(struct domain *d,
>> unsigned int flags,
>>  struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
>>                                                unsigned int *flags);
>>
>> -static inline void p2m_entry_modify(struct p2m_domain *p2m,
>> p2m_type_t nt,
>> -                                    p2m_type_t ot, unsigned int level)
>> +static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t
>> nt,
>> +                                   p2m_type_t ot, mfn_t nfn, mfn_t ofn,
>> +                                   unsigned int level)
>>  {
>> -    if ( level != 1 || nt == ot )
>> -        return;
>> +    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt ==
>> p2m_map_foreign));
>> +
>> +    if ( level != 1 || (nt == ot && mfn_eq(nfn, ofn)) )
>> +        return 0;
> 
> could also return here in case of nt==ot==p2m_ioreq_server,
> otherwise p2m->ioreq.entry_count might be unnecessarily
> inc/dec if mfn changes while type stays as p2m_ioreq_server...

That's an interesting idea.  But it would have you do an extra check for
all operations, rather than doing an extra increment / decrement in a
very unusual corner case.  Probably faster just to let the uncommon case
be a tiny bit slower.

 -George
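[Editorial note: the trade-off George describes can be modelled with a
small hedged sketch of the early-return check being discussed. The
names below are hypothetical stand-ins for p2m_entry_modify() and
p2m->ioreq.entry_count: a same-type/same-mfn write short-circuits,
while an mfn change with the type staying p2m_ioreq_server takes a
redundant decrement-then-increment — but the count still ends up
correct, which is why the extra check is arguably not worth it.]

```c
enum p2m_type { p2m_ram_rw, p2m_ioreq_server };

static int entry_count;   /* models p2m->ioreq.entry_count */

/* Hypothetical model of the early-return check in p2m_entry_modify(). */
static void entry_modify(enum p2m_type nt, enum p2m_type ot,
                         unsigned long nfn, unsigned long ofn)
{
    if ( nt == ot && nfn == ofn )
        return;

    /* Redundant dec/inc when nt == ot == p2m_ioreq_server, but harmless. */
    if ( ot == p2m_ioreq_server )
        entry_count--;
    if ( nt == p2m_ioreq_server )
        entry_count++;
}
```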


* Re: [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-19  6:14   ` Tian, Kevin
  2019-02-19  8:39     ` Jan Beulich
  2019-02-19 13:53     ` George Dunlap
@ 2019-02-19 14:00     ` Roger Pau Monné
  2019-02-21  6:26       ` Tian, Kevin
  2 siblings, 1 reply; 18+ messages in thread
From: Roger Pau Monné @ 2019-02-19 14:00 UTC (permalink / raw)
  To: Tian, Kevin
  Cc: Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper, Tim Deegan,
	Nakajima, Jun, xen-devel

On Tue, Feb 19, 2019 at 06:14:00AM +0000, Tian, Kevin wrote:
> > From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> > Sent: Tuesday, February 19, 2019 1:27 AM
> > diff --git a/xen/arch/x86/mm/shadow/common.c
> > b/xen/arch/x86/mm/shadow/common.c
> > index fe48c4a02b..ad670de515 100644
> > --- a/xen/arch/x86/mm/shadow/common.c
> > +++ b/xen/arch/x86/mm/shadow/common.c
> > @@ -3189,7 +3189,8 @@ shadow_write_p2m_entry(struct domain *d,
> > unsigned long gfn,
> >           sh_unshadow_for_p2m_change(d, gfn, p, new, level);
> > 
> >      p2m_entry_modify(p2m_get_hostp2m(d),
> > p2m_flags_to_type(l1e_get_flags(new)),
> > -                     p2m_flags_to_type(l1e_get_flags(*p)), level);
> > +                     p2m_flags_to_type(l1e_get_flags(*p)), l1e_get_mfn(new),
> > +                     l1e_get_mfn(*p), level);
> > 
> >      /* Update the entry with new content */
> >      safe_write_pte(p, new);
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index f4ec2becbd..2801a8ccca 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -932,11 +932,14 @@ int p2m_set_ioreq_server(struct domain *d,
> > unsigned int flags,
> >  struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
> >                                                unsigned int *flags);
> > 
> > -static inline void p2m_entry_modify(struct p2m_domain *p2m,
> > p2m_type_t nt,
> > -                                    p2m_type_t ot, unsigned int level)
> > +static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t
> > nt,
> > +                                   p2m_type_t ot, mfn_t nfn, mfn_t ofn,
> > +                                   unsigned int level)
> >  {
> > -    if ( level != 1 || nt == ot )
> > -        return;
> > +    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt ==
> > p2m_map_foreign));
> > +
> > +    if ( level != 1 || (nt == ot && mfn_eq(nfn, ofn)) )
> > +        return 0;
> 
> could also return here in case of nt==ot==p2m_ioreq_server,
> otherwise p2m->ioreq.entry_count might be unnecessarily
> inc/dec if mfn changes while type stays as p2m_ioreq_server...

The original code that open-coded the handling of p2m_ioreq_server
didn't have this optimization, and as George said, I'm not sure the
extra check is going to be worth it. I expect that changing the mfn of
an entry with type p2m_ioreq_server is not something very common.

Thanks, Roger.


* Re: [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-19 14:00     ` Roger Pau Monné
@ 2019-02-21  6:26       ` Tian, Kevin
  0 siblings, 0 replies; 18+ messages in thread
From: Tian, Kevin @ 2019-02-21  6:26 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper, Tim Deegan,
	Nakajima, Jun, xen-devel

> From: Roger Pau Monné [mailto:roger.pau@citrix.com]
> Sent: Tuesday, February 19, 2019 10:01 PM
> 
> On Tue, Feb 19, 2019 at 06:14:00AM +0000, Tian, Kevin wrote:
> > > From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> > > Sent: Tuesday, February 19, 2019 1:27 AM
> > > diff --git a/xen/arch/x86/mm/shadow/common.c
> > > b/xen/arch/x86/mm/shadow/common.c
> > > index fe48c4a02b..ad670de515 100644
> > > --- a/xen/arch/x86/mm/shadow/common.c
> > > +++ b/xen/arch/x86/mm/shadow/common.c
> > > @@ -3189,7 +3189,8 @@ shadow_write_p2m_entry(struct domain *d,
> > > unsigned long gfn,
> > >           sh_unshadow_for_p2m_change(d, gfn, p, new, level);
> > >
> > >      p2m_entry_modify(p2m_get_hostp2m(d),
> > > p2m_flags_to_type(l1e_get_flags(new)),
> > > -                     p2m_flags_to_type(l1e_get_flags(*p)), level);
> > > +                     p2m_flags_to_type(l1e_get_flags(*p)), l1e_get_mfn(new),
> > > +                     l1e_get_mfn(*p), level);
> > >
> > >      /* Update the entry with new content */
> > >      safe_write_pte(p, new);
> > > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > > index f4ec2becbd..2801a8ccca 100644
> > > --- a/xen/include/asm-x86/p2m.h
> > > +++ b/xen/include/asm-x86/p2m.h
> > > @@ -932,11 +932,14 @@ int p2m_set_ioreq_server(struct domain *d,
> > > unsigned int flags,
> > >  struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
> > >                                                unsigned int *flags);
> > >
> > > -static inline void p2m_entry_modify(struct p2m_domain *p2m,
> > > p2m_type_t nt,
> > > -                                    p2m_type_t ot, unsigned int level)
> > > +static inline int p2m_entry_modify(struct p2m_domain *p2m,
> p2m_type_t
> > > nt,
> > > +                                   p2m_type_t ot, mfn_t nfn, mfn_t ofn,
> > > +                                   unsigned int level)
> > >  {
> > > -    if ( level != 1 || nt == ot )
> > > -        return;
> > > +    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt ==
> > > p2m_map_foreign));
> > > +
> > > +    if ( level != 1 || (nt == ot && mfn_eq(nfn, ofn)) )
> > > +        return 0;
> >
> > could also return here in case of nt==ot==p2m_ioreq_server,
> > otherwise p2m->ioreq.entry_count might be unnecessarily
> > inc/dec if mfn changes while type stays as p2m_ioreq_server...
> 
> The original code that open-coded the handling of p2m_ioreq_server
> didn't have this optimization, and as said by George I'm not sure the
> extra check is going to be worth it. I expect changing the mfn of an
> entry with type p2m_ioreq_server is not something very common.
> 

OK. Reviewed-by: Kevin Tian <kevin.tian@intel.com>


end of thread, other threads:[~2019-02-21  6:26 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
2019-02-18 17:27 [PATCH v4 0/4] pvh/dom0/shadow/amd fixes Roger Pau Monne
2019-02-18 17:27 ` [PATCH v4 1/4] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
2019-02-19  5:47   ` Tian, Kevin
2019-02-19 10:15     ` Roger Pau Monné
2019-02-19  8:49   ` Jan Beulich
2019-02-19 11:22     ` Roger Pau Monné
2019-02-18 17:27 ` [PATCH v4 2/4] p2m: change write_p2m_entry to return an error code Roger Pau Monne
2019-02-19  9:05   ` Jan Beulich
2019-02-19 11:56     ` Roger Pau Monné
2019-02-19 12:28       ` Jan Beulich
2019-02-18 17:27 ` [PATCH v4 3/4] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
2019-02-19  6:14   ` Tian, Kevin
2019-02-19  8:39     ` Jan Beulich
2019-02-19 13:53     ` George Dunlap
2019-02-19 14:00     ` Roger Pau Monné
2019-02-21  6:26       ` Tian, Kevin
2019-02-18 17:27 ` [PATCH v4 4/4] npt/shadow: allow getting foreign page table entries Roger Pau Monne
2019-02-19  8:40   ` Jan Beulich
