* [PATCH v5 0/5] pvh/dom0/shadow/amd fixes
@ 2019-02-21 16:50 Roger Pau Monne
  2019-02-21 16:50 ` [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers Roger Pau Monne
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Roger Pau Monne @ 2019-02-21 16:50 UTC (permalink / raw)
  To: xen-devel; +Cc: Roger Pau Monne

The remaining set of patches contains changes to the p2m code that could
affect HVM guests. Note that without those changes a PVH dom0 running on
AMD hardware will be unable to create guests. Overall the patches are a
nice cleanup to the handling of the p2m_ioreq_server and p2m_map_foreign
types, IMO.

The series can also be found at:

git://xenbits.xen.org/people/royger/xen.git fixes-v5

Thanks, Roger.

Roger Pau Monne (5):
  x86/p2m: pass the p2m to write_p2m_entry handlers
  x86/mm: split p2m ioreq server pages special handling into helper
  p2m: change write_p2m_entry to return an error code
  x86/mm: handle foreign mappings in p2m_entry_modify
  npt/shadow: allow getting foreign page table entries

 xen/arch/x86/mm/hap/hap.c        |  17 ++++-
 xen/arch/x86/mm/hap/nested_hap.c |   4 +-
 xen/arch/x86/mm/p2m-ept.c        | 106 ++++++---------------------
 xen/arch/x86/mm/p2m-pt.c         | 118 ++++++++++++++++++-------------
 xen/arch/x86/mm/paging.c         |  12 ++--
 xen/arch/x86/mm/shadow/common.c  |  18 ++++-
 xen/arch/x86/mm/shadow/none.c    |   7 +-
 xen/arch/x86/mm/shadow/private.h |   6 +-
 xen/include/asm-x86/p2m.h        |  62 +++++++++++++++-
 xen/include/asm-x86/paging.h     |   9 +--
 10 files changed, 204 insertions(+), 155 deletions(-)

-- 
2.17.2 (Apple Git-113)



* [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers
  2019-02-21 16:50 [PATCH v5 0/5] pvh/dom0/shadow/amd fixes Roger Pau Monne
@ 2019-02-21 16:50 ` Roger Pau Monne
  2019-02-21 16:55   ` Wei Liu
  2019-02-25 17:02   ` George Dunlap
  2019-02-21 16:50 ` [PATCH v5 2/5] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 14+ messages in thread
From: Roger Pau Monne @ 2019-02-21 16:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	Roger Pau Monne

Current callers pass the p2m to paging_write_p2m_entry, but the
implementation-specific handlers of the write_p2m_entry hook receive a
domain struct instead of a p2m, due to the handling done in
paging_write_p2m_entry.

Change the code so that the implementations of write_p2m_entry take a
p2m instead of a domain.

This is a non-functional change, but will be used by follow-up
patches.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since v4:
 - New in this version.
---
 xen/arch/x86/mm/hap/hap.c        | 3 ++-
 xen/arch/x86/mm/paging.c         | 2 +-
 xen/arch/x86/mm/shadow/common.c  | 4 +++-
 xen/arch/x86/mm/shadow/none.c    | 2 +-
 xen/arch/x86/mm/shadow/private.h | 2 +-
 xen/include/asm-x86/paging.h     | 3 ++-
 6 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d651b94c3..28fe48d158 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -709,9 +709,10 @@ static void hap_update_paging_modes(struct vcpu *v)
 }
 
 static void
-hap_write_p2m_entry(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
+hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
                     l1_pgentry_t new, unsigned int level)
 {
+    struct domain *d = p2m->domain;
     uint32_t old_flags;
     bool_t flush_nestedp2m = 0;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index d5836eb688..e6ed3006fe 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -941,7 +941,7 @@ void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
     if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v) != NULL) )
-        paging_get_hostmode(v)->write_p2m_entry(d, gfn, p, new, level);
+        paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
     else
         safe_write_pte(p, new);
 }
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 07840ff727..6c67ef4996 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3177,10 +3177,12 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
 }
 
 void
-shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
+shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                        l1_pgentry_t *p, l1_pgentry_t new,
                        unsigned int level)
 {
+    struct domain *d = p2m->domain;
+
     paging_lock(d);
 
     /* If there are any shadows, update them.  But if shadow_teardown()
diff --git a/xen/arch/x86/mm/shadow/none.c b/xen/arch/x86/mm/shadow/none.c
index 4de645a433..316002771d 100644
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -60,7 +60,7 @@ static void _update_paging_modes(struct vcpu *v)
     ASSERT_UNREACHABLE();
 }
 
-static void _write_p2m_entry(struct domain *d, unsigned long gfn,
+static void _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                              l1_pgentry_t *p, l1_pgentry_t new,
                              unsigned int level)
 {
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index e8ed7ac714..0aaed1edfc 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -372,7 +372,7 @@ extern int sh_remove_write_access(struct domain *d, mfn_t readonly_mfn,
                                   unsigned long fault_addr);
 
 /* Functions that atomically write PT/P2M entries and update state */
-void shadow_write_p2m_entry(struct domain *d, unsigned long gfn,
+void shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                             l1_pgentry_t *p, l1_pgentry_t new,
                             unsigned int level);
 
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index fdcc22844b..7ec09d7b11 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -124,7 +124,8 @@ struct paging_mode {
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
-    void          (*write_p2m_entry       )(struct domain *d, unsigned long gfn,
+    void          (*write_p2m_entry       )(struct p2m_domain *p2m,
+                                            unsigned long gfn,
                                             l1_pgentry_t *p, l1_pgentry_t new,
                                             unsigned int level);
 
-- 
2.17.2 (Apple Git-113)



* [PATCH v5 2/5] x86/mm: split p2m ioreq server pages special handling into helper
  2019-02-21 16:50 [PATCH v5 0/5] pvh/dom0/shadow/amd fixes Roger Pau Monne
  2019-02-21 16:50 ` [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers Roger Pau Monne
@ 2019-02-21 16:50 ` Roger Pau Monne
  2019-02-21 16:50 ` [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code Roger Pau Monne
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Roger Pau Monne @ 2019-02-21 16:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper,
	Tim Deegan, Paul Durrant, Jun Nakajima, Roger Pau Monne

So that it can be shared by the ept, npt and shadow code, instead of
being duplicated in each.

No change in functionality intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since v4:
 - Remove the p2m_get_hostp2m from callers of p2m_entry_modify.

Changes since v1:
 - Remove unused p2mt_old from p2m_pt_set_entry.
---
 xen/arch/x86/mm/hap/hap.c       |  3 ++
 xen/arch/x86/mm/p2m-ept.c       | 55 +++++++++++----------------------
 xen/arch/x86/mm/p2m-pt.c        | 24 --------------
 xen/arch/x86/mm/shadow/common.c |  3 ++
 xen/include/asm-x86/p2m.h       | 32 +++++++++++++++++++
 5 files changed, 56 insertions(+), 61 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 28fe48d158..2db7f2c04a 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -735,6 +735,9 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
             && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
     }
 
+    p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
+                     p2m_flags_to_type(old_flags), level);
+
     safe_write_pte(p, new);
     if ( old_flags & _PAGE_PRESENT )
         flush_tlb_mask(d->dirty_cpumask);
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index bb562607f7..0ece6608cb 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -46,7 +46,8 @@ static inline bool_t is_epte_valid(ept_entry_t *e)
 }
 
 /* returns : 0 for success, -errno otherwise */
-static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
+static int atomic_write_ept_entry(struct p2m_domain *p2m,
+                                  ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
     int rc;
@@ -89,6 +90,8 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
     if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
         oldmfn = entryptr->mfn;
 
+    p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt, level);
+
     write_atomic(&entryptr->epte, new.epte);
 
     if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
@@ -390,7 +393,8 @@ static int ept_next_level(struct p2m_domain *p2m, bool_t read_only,
  * present entries in the given page table, optionally marking the entries
  * also for their subtrees needing P2M type re-calculation.
  */
-static bool_t ept_invalidate_emt(mfn_t mfn, bool_t recalc, int level)
+static bool_t ept_invalidate_emt(struct p2m_domain *p2m, mfn_t mfn,
+                                 bool_t recalc, int level)
 {
     int rc;
     ept_entry_t *epte = map_domain_page(mfn);
@@ -408,7 +412,7 @@ static bool_t ept_invalidate_emt(mfn_t mfn, bool_t recalc, int level)
         e.emt = MTRR_NUM_TYPES;
         if ( recalc )
             e.recalc = 1;
-        rc = atomic_write_ept_entry(&epte[i], e, level);
+        rc = atomic_write_ept_entry(p2m, &epte[i], e, level);
         ASSERT(rc == 0);
         changed = 1;
     }
@@ -459,7 +463,7 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
             rc = -ENOMEM;
             goto out;
         }
-        wrc = atomic_write_ept_entry(&table[index], split_ept_entry, i);
+        wrc = atomic_write_ept_entry(p2m, &table[index], split_ept_entry, i);
         ASSERT(wrc == 0);
 
         for ( ; i > target; --i )
@@ -479,7 +483,7 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
         {
             e.emt = MTRR_NUM_TYPES;
             e.recalc = 1;
-            wrc = atomic_write_ept_entry(&table[index], e, target);
+            wrc = atomic_write_ept_entry(p2m, &table[index], e, target);
             ASSERT(wrc == 0);
             rc = 1;
         }
@@ -549,17 +553,11 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                     nt = p2m_recalc_type(e.recalc, e.sa_p2mt, p2m, gfn + i);
                     if ( nt != e.sa_p2mt )
                     {
-                        if ( e.sa_p2mt == p2m_ioreq_server )
-                        {
-                            ASSERT(p2m->ioreq.entry_count > 0);
-                            p2m->ioreq.entry_count--;
-                        }
-
                         e.sa_p2mt = nt;
                         ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
                     }
                     e.recalc = 0;
-                    wrc = atomic_write_ept_entry(&epte[i], e, level);
+                    wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
                     ASSERT(wrc == 0);
                 }
             }
@@ -595,7 +593,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                 {
                     if ( ept_split_super_page(p2m, &e, level, level - 1) )
                     {
-                        wrc = atomic_write_ept_entry(&epte[i], e, level);
+                        wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
                         ASSERT(wrc == 0);
                         unmap_domain_page(epte);
                         mfn = e.mfn;
@@ -610,7 +608,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                 e.recalc = 0;
                 if ( recalc && p2m_is_changeable(e.sa_p2mt) )
                     ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
-                wrc = atomic_write_ept_entry(&epte[i], e, level);
+                wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
                 ASSERT(wrc == 0);
             }
 
@@ -621,11 +619,11 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
         if ( e.emt == MTRR_NUM_TYPES )
         {
             ASSERT(is_epte_present(&e));
-            ept_invalidate_emt(_mfn(e.mfn), e.recalc, level);
+            ept_invalidate_emt(p2m, _mfn(e.mfn), e.recalc, level);
             smp_wmb();
             e.emt = 0;
             e.recalc = 0;
-            wrc = atomic_write_ept_entry(&epte[i], e, level);
+            wrc = atomic_write_ept_entry(p2m, &epte[i], e, level);
             ASSERT(wrc == 0);
             unmap_domain_page(epte);
             rc = 1;
@@ -786,7 +784,7 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
         /* now install the newly split ept sub-tree */
         /* NB: please make sure domian is paused and no in-fly VT-d DMA. */
-        rc = atomic_write_ept_entry(ept_entry, split_ept_entry, i);
+        rc = atomic_write_ept_entry(p2m, ept_entry, split_ept_entry, i);
         ASSERT(rc == 0);
 
         /* then move to the level we want to make real changes */
@@ -833,24 +831,7 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         new_entry.suppress_ve = is_epte_valid(&old_entry) ?
                                     old_entry.suppress_ve : 1;
 
-    /*
-     * p2m_ioreq_server is only used for 4K pages, so the
-     * count is only done on ept page table entries.
-     */
-    if ( p2mt == p2m_ioreq_server )
-    {
-        ASSERT(i == 0);
-        p2m->ioreq.entry_count++;
-    }
-
-    if ( ept_entry->sa_p2mt == p2m_ioreq_server )
-    {
-        ASSERT(i == 0);
-        ASSERT(p2m->ioreq.entry_count > 0);
-        p2m->ioreq.entry_count--;
-    }
-
-    rc = atomic_write_ept_entry(ept_entry, new_entry, target);
+    rc = atomic_write_ept_entry(p2m, ept_entry, new_entry, target);
     if ( unlikely(rc) )
         old_entry.epte = 0;
     else
@@ -1070,7 +1051,7 @@ static void ept_change_entry_type_global(struct p2m_domain *p2m,
     if ( !mfn )
         return;
 
-    if ( ept_invalidate_emt(_mfn(mfn), 1, p2m->ept.wl) )
+    if ( ept_invalidate_emt(p2m, _mfn(mfn), 1, p2m->ept.wl) )
         ept_sync_domain(p2m);
 }
 
@@ -1128,7 +1109,7 @@ static void ept_memory_type_changed(struct p2m_domain *p2m)
     if ( !mfn )
         return;
 
-    if ( ept_invalidate_emt(_mfn(mfn), 0, p2m->ept.wl) )
+    if ( ept_invalidate_emt(p2m, _mfn(mfn), 0, p2m->ept.wl) )
         ept_sync_domain(p2m);
 }
 
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 52eaa24b18..04e9d81cf6 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -436,13 +436,6 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                 flags |= _PAGE_PSE;
             }
 
-            if ( ot == p2m_ioreq_server )
-            {
-                ASSERT(p2m->ioreq.entry_count > 0);
-                ASSERT(level == 0);
-                p2m->ioreq.entry_count--;
-            }
-
             e = l1e_from_pfn(mfn, flags);
             p2m_add_iommu_flags(&e, level,
                                 (nt == p2m_ram_rw)
@@ -616,8 +609,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( page_order == PAGE_ORDER_4K )
     {
-        p2m_type_t p2mt_old;
-
         rc = p2m_next_level(p2m, &table, &gfn_remainder, gfn,
                             L2_PAGETABLE_SHIFT - PAGE_SHIFT,
                             L2_PAGETABLE_ENTRIES, 1, 1);
@@ -641,21 +632,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
-        p2mt_old = p2m_flags_to_type(l1e_get_flags(*p2m_entry));
-
-        /*
-         * p2m_ioreq_server is only used for 4K pages, so
-         * the count is only done for level 1 entries.
-         */
-        if ( p2mt == p2m_ioreq_server )
-            p2m->ioreq.entry_count++;
-
-        if ( p2mt_old == p2m_ioreq_server )
-        {
-            ASSERT(p2m->ioreq.entry_count > 0);
-            p2m->ioreq.entry_count--;
-        }
-
         /* level 1 entry */
         p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6c67ef4996..de7abf7150 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3190,6 +3190,9 @@ shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     if ( likely(d->arch.paging.shadow.total_pages != 0) )
          sh_unshadow_for_p2m_change(d, gfn, p, new, level);
 
+    p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
+                     p2m_flags_to_type(l1e_get_flags(*p)), level);
+
     /* Update the entry with new content */
     safe_write_pte(p, new);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 2095076556..834d49d2d4 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -932,6 +932,38 @@ int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
 struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
                                               unsigned int *flags);
 
+static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
+                                    p2m_type_t ot, unsigned int level)
+{
+    if ( level != 1 || nt == ot )
+        return;
+
+    switch ( nt )
+    {
+    case p2m_ioreq_server:
+        /*
+         * p2m_ioreq_server is only used for 4K pages, so
+         * the count is only done for level 1 entries.
+         */
+        p2m->ioreq.entry_count++;
+        break;
+
+    default:
+        break;
+    }
+
+    switch ( ot )
+    {
+    case p2m_ioreq_server:
+        ASSERT(p2m->ioreq.entry_count > 0);
+        p2m->ioreq.entry_count--;
+        break;
+
+    default:
+        break;
+    }
+}
+
 #endif /* _XEN_ASM_X86_P2M_H */
 
 /*
-- 
2.17.2 (Apple Git-113)



* [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code
  2019-02-21 16:50 [PATCH v5 0/5] pvh/dom0/shadow/amd fixes Roger Pau Monne
  2019-02-21 16:50 ` [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers Roger Pau Monne
  2019-02-21 16:50 ` [PATCH v5 2/5] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
@ 2019-02-21 16:50 ` Roger Pau Monne
  2019-02-21 17:04   ` Wei Liu
                     ` (2 more replies)
  2019-02-21 16:50 ` [PATCH v5 4/5] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
  2019-02-21 16:50 ` [PATCH v5 5/5] npt/shadow: allow getting foreign page table entries Roger Pau Monne
  4 siblings, 3 replies; 14+ messages in thread
From: Roger Pau Monne @ 2019-02-21 16:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	Roger Pau Monne

This is in preparation for also changing p2m_entry_modify to return an
error code.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since v4:
 - Handle errors in loops to avoid overwriting the variable
   containing the error code in non-debug builds.
 - Change error handling in p2m_next_level so it's done in a single
   place.

Changes since v3:
 - Use asserts instead of bugs to check return code from
   write_p2m_entry from callers that don't support or expect
   write_p2m_entry to fail.

Changes since v2:
 - New in this version.
---
 xen/arch/x86/mm/hap/hap.c        |  4 +-
 xen/arch/x86/mm/hap/nested_hap.c |  4 +-
 xen/arch/x86/mm/p2m-pt.c         | 83 ++++++++++++++++++++++++++------
 xen/arch/x86/mm/paging.c         | 12 +++--
 xen/arch/x86/mm/shadow/common.c  |  4 +-
 xen/arch/x86/mm/shadow/none.c    |  7 +--
 xen/arch/x86/mm/shadow/private.h |  6 +--
 xen/include/asm-x86/p2m.h        |  4 +-
 xen/include/asm-x86/paging.h     |  8 +--
 9 files changed, 97 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 2db7f2c04a..fdf77c59a5 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -708,7 +708,7 @@ static void hap_update_paging_modes(struct vcpu *v)
     put_gfn(d, cr3_gfn);
 }
 
-static void
+static int
 hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
                     l1_pgentry_t new, unsigned int level)
 {
@@ -746,6 +746,8 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
 
     if ( flush_nestedp2m )
         p2m_flush_nestedp2m(d);
+
+    return 0;
 }
 
 static unsigned long hap_gva_to_gfn_real_mode(
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index d2a07a5c79..abe5958a52 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -71,7 +71,7 @@
 /*        NESTED VIRT P2M FUNCTIONS         */
 /********************************************/
 
-void
+int
 nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
 {
@@ -87,6 +87,8 @@ nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
         flush_tlb_mask(p2m->dirty_cpumask);
 
     paging_unlock(d);
+
+    return 0;
 }
 
 /********************************************/
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 04e9d81cf6..254c5dfd19 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
     l1_pgentry_t *p2m_entry, new_entry;
     void *next;
     unsigned int flags;
+    int rc;
+    mfn_t mfn = INVALID_MFN;
 
     if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
                                       shift, max)) )
@@ -194,7 +196,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
     /* PoD/paging: Not present doesn't imply empty. */
     if ( !flags )
     {
-        mfn_t mfn = p2m_alloc_ptp(p2m, level);
+        mfn = p2m_alloc_ptp(p2m, level);
 
         if ( mfn_eq(mfn, INVALID_MFN) )
             return -ENOMEM;
@@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
 
         p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        if ( rc )
+            ASSERT_UNREACHABLE();
     }
     else if ( flags & _PAGE_PSE )
     {
         /* Split superpages pages into smaller ones. */
         unsigned long pfn = l1e_get_pfn(*p2m_entry);
-        mfn_t mfn;
         l1_pgentry_t *l1_entry;
         unsigned int i;
 
@@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
                                      flags);
-            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            if ( rc )
+            {
+                ASSERT_UNREACHABLE();
+                break;
+            }
         }
 
         unmap_domain_page(l1_entry);
 
-        new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-        p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        if ( !rc )
+        {
+            new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
+            p2m_add_iommu_flags(&new_entry, level,
+                                IOMMUF_readable|IOMMUF_writable);
+            rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
+                                      level + 1);
+            if ( rc )
+                ASSERT_UNREACHABLE();
+        }
     }
     else
         ASSERT(flags & _PAGE_PRESENT);
 
+    if ( rc )
+    {
+        ASSERT(mfn_valid(mfn));
+        p2m_free_ptp(p2m, mfn_to_page(mfn));
+        return rc;
+    }
+
     next = map_domain_page(l1e_get_mfn(*p2m_entry));
     if ( unmap )
         unmap_domain_page(*table);
@@ -321,7 +343,12 @@ static int p2m_pt_set_recalc_range(struct p2m_domain *p2m,
             if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
             {
                 set_recalc(l1, e);
-                p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
+                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
+                if ( err )
+                {
+                    ASSERT_UNREACHABLE();
+                    goto out;
+                }
             }
             first_gfn += 1UL << (i * PAGETABLE_ORDER);
         }
@@ -392,14 +419,24 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                      !needs_recalc(l1, ent) )
                 {
                     set_recalc(l1, ent);
-                    p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
-                                         ent, level);
+                    err = p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
+                                               ent, level);
+                    if ( err )
+                    {
+                        ASSERT_UNREACHABLE();
+                        goto out;
+                    }
                 }
                 remainder -= 1UL << ((level - 1) * PAGETABLE_ORDER);
             }
             smp_wmb();
             clear_recalc(l1, e);
-            p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+            err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+            if ( err )
+            {
+                ASSERT_UNREACHABLE();
+                goto out;
+            }
         }
         unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
     }
@@ -444,7 +481,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
         }
         else
             clear_recalc(l1, e);
-        p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+        err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+        ASSERT(!err);
     }
 
  out:
@@ -595,8 +633,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        if ( rc )
+            goto out;
     }
     else 
     {
@@ -633,8 +673,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
         /* level 1 entry */
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        if ( rc )
+            goto out;
     }
     else if ( page_order == PAGE_ORDER_2M )
     {
@@ -669,8 +711,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         if ( entry_content.l1 != 0 )
             p2m_add_iommu_flags(&entry_content, 0, iommu_pte_flags);
 
-        p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
         /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        if ( rc )
+            goto out;
     }
 
     /* Track the highest gfn for which we have ever had a valid mapping */
@@ -894,8 +938,15 @@ static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
         if ( (l1e_get_flags(e) & _PAGE_PRESENT) &&
              !needs_recalc(l1, e) )
         {
+            int rc;
+
             set_recalc(l1, e);
-            p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
+            rc = p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
+            if ( rc )
+            {
+                ASSERT_UNREACHABLE();
+                break;
+            }
             ++changed;
         }
         gfn += 1UL << (L4_PAGETABLE_SHIFT - PAGE_SHIFT);
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index e6ed3006fe..21db3eceb6 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -932,18 +932,22 @@ void paging_update_nestedmode(struct vcpu *v)
 }
 #endif
 
-void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level)
+int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level)
 {
     struct domain *d = p2m->domain;
     struct vcpu *v = current;
+    int rc = 0;
+
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
     if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v) != NULL) )
-        paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
+        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
     else
         safe_write_pte(p, new);
+
+    return rc;
 }
 
 int paging_set_allocation(struct domain *d, unsigned int pages, bool *preempted)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index de7abf7150..c818112360 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3176,7 +3176,7 @@ static void sh_unshadow_for_p2m_change(struct domain *d, unsigned long gfn,
     }
 }
 
-void
+int
 shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                        l1_pgentry_t *p, l1_pgentry_t new,
                        unsigned int level)
@@ -3211,6 +3211,8 @@ shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
 #endif
 
     paging_unlock(d);
+
+    return 0;
 }
 
 /**************************************************************************/
diff --git a/xen/arch/x86/mm/shadow/none.c b/xen/arch/x86/mm/shadow/none.c
index 316002771d..1b54ec05a8 100644
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -60,11 +60,12 @@ static void _update_paging_modes(struct vcpu *v)
     ASSERT_UNREACHABLE();
 }
 
-static void _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                             l1_pgentry_t *p, l1_pgentry_t new,
-                             unsigned int level)
+static int _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                            l1_pgentry_t *p, l1_pgentry_t new,
+                            unsigned int level)
 {
     ASSERT_UNREACHABLE();
+    return -ENOSYS;
 }
 
 static const struct paging_mode sh_paging_none = {
diff --git a/xen/arch/x86/mm/shadow/private.h b/xen/arch/x86/mm/shadow/private.h
index 0aaed1edfc..580ef3e29e 100644
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -372,9 +372,9 @@ extern int sh_remove_write_access(struct domain *d, mfn_t readonly_mfn,
                                   unsigned long fault_addr);
 
 /* Functions that atomically write PT/P2M entries and update state */
-void shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level);
+int shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level);
 
 /* Update all the things that are derived from the guest's CR0/CR3/CR4.
  * Called to initialize paging structures if the paging mode
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 834d49d2d4..f4ec2becbd 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -265,7 +265,7 @@ struct p2m_domain {
                                                   unsigned long last_gfn);
     void               (*memory_type_changed)(struct p2m_domain *p2m);
     
-    void               (*write_p2m_entry)(struct p2m_domain *p2m,
+    int                (*write_p2m_entry)(struct p2m_domain *p2m,
                                           unsigned long gfn, l1_pgentry_t *p,
                                           l1_pgentry_t new, unsigned int level);
     long               (*audit_p2m)(struct p2m_domain *p2m);
@@ -837,7 +837,7 @@ void p2m_flush_nestedp2m(struct domain *d);
 /* Flushes the np2m specified by np2m_base (if it exists) */
 void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
 
-void nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+int nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
 
 /*
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 7ec09d7b11..18a7eaeca4 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -124,7 +124,7 @@ struct paging_mode {
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
-    void          (*write_p2m_entry       )(struct p2m_domain *p2m,
+    int           (*write_p2m_entry       )(struct p2m_domain *p2m,
                                             unsigned long gfn,
                                             l1_pgentry_t *p, l1_pgentry_t new,
                                             unsigned int level);
@@ -340,9 +340,9 @@ static inline void safe_write_pte(l1_pgentry_t *p, l1_pgentry_t new)
  * we are writing. */
 struct p2m_domain;
 
-void paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level);
+int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level);
 
 /*
  * Called from the guest to indicate that the a process is being
-- 
2.17.2 (Apple Git-113)



* [PATCH v5 4/5] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-21 16:50 [PATCH v5 0/5] pvh/dom0/shadow/amd fixes Roger Pau Monne
                   ` (2 preceding siblings ...)
  2019-02-21 16:50 ` [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code Roger Pau Monne
@ 2019-02-21 16:50 ` Roger Pau Monne
  2019-02-21 17:09   ` Wei Liu
  2019-02-21 16:50 ` [PATCH v5 5/5] npt/shadow: allow getting foreign page table entries Roger Pau Monne
  4 siblings, 1 reply; 14+ messages in thread
From: Roger Pau Monne @ 2019-02-21 16:50 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper,
	Tim Deegan, Jun Nakajima, Roger Pau Monne

So that the specific handling can be removed from
atomic_write_ept_entry and shared with the npt and shadow code.

This commit also removes the check that prevents a non-ept PVH dom0
from mapping foreign pages.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Tim Deegan <tim@xen.org>
---
Changes since v3:
 - Replace the mfn_valid BUG_ONs with an assert & return.

Changes since v2:
 - Return an error code from p2m_entry_modify and propagate it to the
   callers.

Changes since v1:
 - Simplify the code since mfn_to_page cannot return NULL.
 - Check if the mfn is valid before getting/dropping the page reference.
 - Use BUG_ON instead of ASSERTs, since getting the reference counting
   wrong is more dangerous than a DoS.
---
 xen/arch/x86/mm/hap/hap.c       | 11 +++++--
 xen/arch/x86/mm/p2m-ept.c       | 55 +++------------------------------
 xen/arch/x86/mm/p2m-pt.c        |  9 +-----
 xen/arch/x86/mm/shadow/common.c | 11 +++++--
 xen/include/asm-x86/p2m.h       | 34 +++++++++++++++++---
 5 files changed, 54 insertions(+), 66 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index fdf77c59a5..412a442b6a 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -715,6 +715,7 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
     struct domain *d = p2m->domain;
     uint32_t old_flags;
     bool_t flush_nestedp2m = 0;
+    int rc;
 
     /* We know always use the host p2m here, regardless if the vcpu
      * is in host or guest mode. The vcpu can be in guest mode by
@@ -735,8 +736,14 @@ hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
             && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
     }
 
-    p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
-                     p2m_flags_to_type(old_flags), level);
+    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
+                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
+                          l1e_get_mfn(*p), level);
+    if ( rc )
+    {
+        paging_unlock(d);
+        return rc;
+    }
 
     safe_write_pte(p, new);
     if ( old_flags & _PAGE_PRESENT )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 0ece6608cb..e3044bee2e 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -50,60 +50,15 @@ static int atomic_write_ept_entry(struct p2m_domain *p2m,
                                   ept_entry_t *entryptr, ept_entry_t new,
                                   int level)
 {
-    int rc;
-    unsigned long oldmfn = mfn_x(INVALID_MFN);
-    bool_t check_foreign = (new.mfn != entryptr->mfn ||
-                            new.sa_p2mt != entryptr->sa_p2mt);
-
-    if ( level )
-    {
-        ASSERT(!is_epte_superpage(&new) || !p2m_is_foreign(new.sa_p2mt));
-        write_atomic(&entryptr->epte, new.epte);
-        return 0;
-    }
-
-    if ( unlikely(p2m_is_foreign(new.sa_p2mt)) )
-    {
-        rc = -EINVAL;
-        if ( !is_epte_present(&new) )
-                goto out;
-
-        if ( check_foreign )
-        {
-            struct domain *fdom;
-
-            if ( !mfn_valid(_mfn(new.mfn)) )
-                goto out;
-
-            rc = -ESRCH;
-            fdom = page_get_owner(mfn_to_page(_mfn(new.mfn)));
-            if ( fdom == NULL )
-                goto out;
+    int rc = p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt,
+                              _mfn(new.mfn), _mfn(entryptr->mfn), level);
 
-            /* get refcount on the page */
-            rc = -EBUSY;
-            if ( !get_page(mfn_to_page(_mfn(new.mfn)), fdom) )
-                goto out;
-        }
-    }
-
-    if ( unlikely(p2m_is_foreign(entryptr->sa_p2mt)) && check_foreign )
-        oldmfn = entryptr->mfn;
-
-    p2m_entry_modify(p2m, new.sa_p2mt, entryptr->sa_p2mt, level);
+    if ( rc )
+        return rc;
 
     write_atomic(&entryptr->epte, new.epte);
 
-    if ( unlikely(oldmfn != mfn_x(INVALID_MFN)) )
-        put_page(mfn_to_page(_mfn(oldmfn)));
-
-    rc = 0;
-
- out:
-    if ( rc )
-        gdprintk(XENLOG_ERR, "epte o:%"PRIx64" n:%"PRIx64" rc:%d\n",
-                 entryptr->epte, new.epte, rc);
-    return rc;
+    return 0;
 }
 
 static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 254c5dfd19..7f94fdb6a3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -184,7 +184,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
     l1_pgentry_t *p2m_entry, new_entry;
     void *next;
     unsigned int flags;
-    int rc;
+    int rc = 0;
     mfn_t mfn = INVALID_MFN;
 
     if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
@@ -578,13 +578,6 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         __trace_var(TRC_MEM_SET_P2M_ENTRY, 0, sizeof(t), &t);
     }
 
-    if ( unlikely(p2m_is_foreign(p2mt)) )
-    {
-        /* hvm fixme: foreign types are only supported on ept at present */
-        gdprintk(XENLOG_WARNING, "Unimplemented foreign p2m type.\n");
-        return -EINVAL;
-    }
-
     /* Carry out any eventually pending earlier changes first. */
     rc = do_recalc(p2m, gfn);
     if ( rc < 0 )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index c818112360..025071a163 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3182,6 +3182,7 @@ shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                        unsigned int level)
 {
     struct domain *d = p2m->domain;
+    int rc;
 
     paging_lock(d);
 
@@ -3190,8 +3191,14 @@ shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     if ( likely(d->arch.paging.shadow.total_pages != 0) )
          sh_unshadow_for_p2m_change(d, gfn, p, new, level);
 
-    p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
-                     p2m_flags_to_type(l1e_get_flags(*p)), level);
+    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
+                          p2m_flags_to_type(l1e_get_flags(*p)),
+                          l1e_get_mfn(new), l1e_get_mfn(*p), level);
+    if ( rc )
+    {
+        paging_unlock(d);
+        return rc;
+    }
 
     /* Update the entry with new content */
     safe_write_pte(p, new);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index f4ec2becbd..2801a8ccca 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -932,11 +932,14 @@ int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
 struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
                                               unsigned int *flags);
 
-static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
-                                    p2m_type_t ot, unsigned int level)
+static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
+                                   p2m_type_t ot, mfn_t nfn, mfn_t ofn,
+                                   unsigned int level)
 {
-    if ( level != 1 || nt == ot )
-        return;
+    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt == p2m_map_foreign));
+
+    if ( level != 1 || (nt == ot && mfn_eq(nfn, ofn)) )
+        return 0;
 
     switch ( nt )
     {
@@ -948,6 +951,18 @@ static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
         p2m->ioreq.entry_count++;
         break;
 
+    case p2m_map_foreign:
+        if ( !mfn_valid(nfn) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+
+        if ( !page_get_owner_and_reference(mfn_to_page(nfn)) )
+            return -EBUSY;
+
+        break;
+
     default:
         break;
     }
@@ -959,9 +974,20 @@ static inline void p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
         p2m->ioreq.entry_count--;
         break;
 
+    case p2m_map_foreign:
+        if ( !mfn_valid(ofn) )
+        {
+            ASSERT_UNREACHABLE();
+            return -EINVAL;
+        }
+        put_page(mfn_to_page(ofn));
+        break;
+
     default:
         break;
     }
+
+    return 0;
 }
 
 #endif /* _XEN_ASM_X86_P2M_H */
-- 
2.17.2 (Apple Git-113)



* [PATCH v5 5/5] npt/shadow: allow getting foreign page table entries
  2019-02-21 16:50 [PATCH v5 0/5] pvh/dom0/shadow/amd fixes Roger Pau Monne
                   ` (3 preceding siblings ...)
  2019-02-21 16:50 ` [PATCH v5 4/5] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
@ 2019-02-21 16:50 ` Roger Pau Monne
  4 siblings, 0 replies; 14+ messages in thread
From: Roger Pau Monne @ 2019-02-21 16:50 UTC (permalink / raw)
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monne

The current npt and shadow code for getting an entry always returns
INVALID_MFN for foreign entries. Allow returning the entry's mfn for
foreign entries, as is done for grant table entries.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
---
Changes since v2:
 - Use p2m_is_any_ram.
---
 xen/arch/x86/mm/p2m-pt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 7f94fdb6a3..bbc9a3e278 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -907,8 +907,8 @@ pod_retry_l1:
     *t = p2m_recalc_type(recalc || _needs_recalc(flags), l1t, p2m, gfn);
     unmap_domain_page(l1e);
 
-    ASSERT(mfn_valid(mfn) || !p2m_is_ram(*t) || p2m_is_paging(*t));
-    return (p2m_is_valid(*t) || p2m_is_grant(*t)) ? mfn : INVALID_MFN;
+    ASSERT(mfn_valid(mfn) || !p2m_is_any_ram(*t) || p2m_is_paging(*t));
+    return (p2m_is_valid(*t) || p2m_is_any_ram(*t)) ? mfn : INVALID_MFN;
 }
 
 static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
-- 
2.17.2 (Apple Git-113)



* Re: [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers
  2019-02-21 16:50 ` [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers Roger Pau Monne
@ 2019-02-21 16:55   ` Wei Liu
  2019-02-25 17:02   ` George Dunlap
  1 sibling, 0 replies; 14+ messages in thread
From: Wei Liu @ 2019-02-21 16:55 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	xen-devel

On Thu, Feb 21, 2019 at 05:50:37PM +0100, Roger Pau Monne wrote:
> Current callers pass the p2m to paging_write_p2m_entry, but the
> implementation-specific handlers of the write_p2m_entry hook receive a
> domain struct instead of a p2m, due to the handling done in
> paging_write_p2m_entry.
> 
> Change the code so that the implementations of write_p2m_entry take a
> p2m instead of a domain.
> 
> This is a non-functional change, but will be used by follow-up
> patches.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code
  2019-02-21 16:50 ` [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code Roger Pau Monne
@ 2019-02-21 17:04   ` Wei Liu
  2019-02-22 16:36   ` Jan Beulich
  2019-02-25 17:42   ` George Dunlap
  2 siblings, 0 replies; 14+ messages in thread
From: Wei Liu @ 2019-02-21 17:04 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	xen-devel

On Thu, Feb 21, 2019 at 05:50:39PM +0100, Roger Pau Monne wrote:
[...]
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 04e9d81cf6..254c5dfd19 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>      l1_pgentry_t *p2m_entry, new_entry;
>      void *next;
>      unsigned int flags;
> +    int rc;

Shouldn't rc be initialised to 0 here? There seems to be a path which
can leave rc uninitialised without calling write_p2m_entry.
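
For illustration, a rough sketch (not the exact code) of the flow I have
in mind in p2m_next_level() after this patch:

    int rc;                                 /* not initialised */
    ...
    if ( !flags )
        rc = p2m->write_p2m_entry(...);     /* sets rc */
    else if ( flags & _PAGE_PSE )
        rc = p2m->write_p2m_entry(...);     /* sets rc (in the split path) */
    else
        ASSERT(flags & _PAGE_PRESENT);      /* rc left untouched here */

    if ( rc )                               /* possibly uninitialised read */
    {
        ASSERT(mfn_valid(mfn));
        p2m_free_ptp(p2m, mfn_to_page(mfn));
        return rc;
    }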

Wei.


* Re: [PATCH v5 4/5] x86/mm: handle foreign mappings in p2m_entry_modify
  2019-02-21 16:50 ` [PATCH v5 4/5] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
@ 2019-02-21 17:09   ` Wei Liu
  0 siblings, 0 replies; 14+ messages in thread
From: Wei Liu @ 2019-02-21 17:09 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper,
	Tim Deegan, Jun Nakajima, xen-devel

On Thu, Feb 21, 2019 at 05:50:40PM +0100, Roger Pau Monne wrote:
>  
>  static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 254c5dfd19..7f94fdb6a3 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -184,7 +184,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>      l1_pgentry_t *p2m_entry, new_entry;
>      void *next;
>      unsigned int flags;
> -    int rc;
> +    int rc = 0;

Oh, here it is. It should be moved to the previous patch.

Wei.


* Re: [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code
  2019-02-21 16:50 ` [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code Roger Pau Monne
  2019-02-21 17:04   ` Wei Liu
@ 2019-02-22 16:36   ` Jan Beulich
  2019-02-25 17:42   ` George Dunlap
  2 siblings, 0 replies; 14+ messages in thread
From: Jan Beulich @ 2019-02-22 16:36 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, xen-devel

>>> On 21.02.19 at 17:50, <roger.pau@citrix.com> wrote:
> @@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>  
>          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        if ( rc )
> +            ASSERT_UNREACHABLE();

Why not simply ASSERT(!rc)? Also further down then.

> --- a/xen/arch/x86/mm/shadow/none.c
> +++ b/xen/arch/x86/mm/shadow/none.c
> @@ -60,11 +60,12 @@ static void _update_paging_modes(struct vcpu *v)
>      ASSERT_UNREACHABLE();
>  }
>  
> -static void _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> -                             l1_pgentry_t *p, l1_pgentry_t new,
> -                             unsigned int level)
> +static int _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
> +                            l1_pgentry_t *p, l1_pgentry_t new,
> +                            unsigned int level)
>  {
>      ASSERT_UNREACHABLE();
> +    return -ENOSYS;

-EOPNOTSUPP or basically anything else, but not -ENOSYS please.
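
I.e., just as a sketch of what I mean (-ENOSYS being generally reserved
for unimplemented hypercalls):

    static int _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                                l1_pgentry_t *p, l1_pgentry_t new,
                                unsigned int level)
    {
        ASSERT_UNREACHABLE();
        return -EOPNOTSUPP;
    }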

Jan




* Re: [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers
  2019-02-21 16:50 ` [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers Roger Pau Monne
  2019-02-21 16:55   ` Wei Liu
@ 2019-02-25 17:02   ` George Dunlap
  1 sibling, 0 replies; 14+ messages in thread
From: George Dunlap @ 2019-02-25 17:02 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Tim Deegan

On 2/21/19 4:50 PM, Roger Pau Monne wrote:
> Current callers pass the p2m to paging_write_p2m_entry, but the
> implementation-specific handlers of the write_p2m_entry hook receive a
> domain struct instead of a p2m, due to the handling done in
> paging_write_p2m_entry.
> 
> Change the code so that the implementations of write_p2m_entry take a
> p2m instead of a domain.
> 
> This is a non-functional change, but will be used by follow-up
> patches.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code
  2019-02-21 16:50 ` [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code Roger Pau Monne
  2019-02-21 17:04   ` Wei Liu
  2019-02-22 16:36   ` Jan Beulich
@ 2019-02-25 17:42   ` George Dunlap
  2019-02-26  9:23     ` Roger Pau Monné
  2019-02-26 10:40     ` Jan Beulich
  2 siblings, 2 replies; 14+ messages in thread
From: George Dunlap @ 2019-02-25 17:42 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: George Dunlap, Andrew Cooper, Wei Liu, Jan Beulich, Tim Deegan

[-- Attachment #1: Type: text/plain, Size: 3567 bytes --]

On 2/21/19 4:50 PM, Roger Pau Monne wrote:
> This is in preparation for also changing p2m_entry_modify to return an
> error code.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
[snip]
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 04e9d81cf6..254c5dfd19 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>      l1_pgentry_t *p2m_entry, new_entry;
>      void *next;
>      unsigned int flags;
> +    int rc;
> +    mfn_t mfn = INVALID_MFN;
>  
>      if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
>                                        shift, max)) )
> @@ -194,7 +196,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>      /* PoD/paging: Not present doesn't imply empty. */
>      if ( !flags )
>      {
> -        mfn_t mfn = p2m_alloc_ptp(p2m, level);
> +        mfn = p2m_alloc_ptp(p2m, level);
>  
>          if ( mfn_eq(mfn, INVALID_MFN) )
>              return -ENOMEM;
> @@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>  
>          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        if ( rc )
> +            ASSERT_UNREACHABLE();
>      }
>      else if ( flags & _PAGE_PSE )
>      {
>          /* Split superpages pages into smaller ones. */
>          unsigned long pfn = l1e_get_pfn(*p2m_entry);
> -        mfn_t mfn;
>          l1_pgentry_t *l1_entry;
>          unsigned int i;
>  
> @@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>          {
>              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>                                       flags);
> -            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> +            if ( rc )
> +            {
> +                ASSERT_UNREACHABLE();
> +                break;
> +            }
>          }
>  
>          unmap_domain_page(l1_entry);
>  
> -        new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> -        p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> +        if ( !rc )
> +        {
> +            new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> +            p2m_add_iommu_flags(&new_entry, level,
> +                                IOMMUF_readable|IOMMUF_writable);
> +            rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
> +                                      level + 1);
> +            if ( rc )
> +                ASSERT_UNREACHABLE();
> +        }
>      }
>      else
>          ASSERT(flags & _PAGE_PRESENT);
>  
> +    if ( rc )
> +    {
> +        ASSERT(mfn_valid(mfn));
> +        p2m_free_ptp(p2m, mfn_to_page(mfn));
> +        return rc;
> +    }
> +

I think the idiomatic way to deal with this would be to have a label at
the end that everything jumps to, something like the attached?  That way
you don't have to spend mental effort making sure that nothing happens
between the error and the clean-up call.

Everything else looks good to me; thanks for doing this.

 -George

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: free_ptp.diff --]
[-- Type: text/x-patch; name="free_ptp.diff", Size: 2179 bytes --]

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 254c5dfd19..1eb22bd06e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -206,7 +206,10 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
         p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
         if ( rc )
+        {
             ASSERT_UNREACHABLE();
+            goto free_ptp;
+        }
     }
     else if ( flags & _PAGE_PSE )
     {
@@ -257,39 +260,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
             if ( rc )
             {
                 ASSERT_UNREACHABLE();
-                break;
+                unmap_domain_page(l1_entry);
+                goto free_ptp;
             }
         }
 
         unmap_domain_page(l1_entry);
 
-        if ( !rc )
-        {
-            new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-            p2m_add_iommu_flags(&new_entry, level,
-                                IOMMUF_readable|IOMMUF_writable);
-            rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
-                                      level + 1);
-            if ( rc )
-                ASSERT_UNREACHABLE();
+        new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
+        p2m_add_iommu_flags(&new_entry, level,
+                            IOMMUF_readable|IOMMUF_writable);
+        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
+                                  level + 1);
+        if ( rc ) {
+            ASSERT_UNREACHABLE();
+            goto free_ptp;
         }
     }
     else
         ASSERT(flags & _PAGE_PRESENT);
 
-    if ( rc )
-    {
-        ASSERT(mfn_valid(mfn));
-        p2m_free_ptp(p2m, mfn_to_page(mfn));
-        return rc;
-    }
-
     next = map_domain_page(l1e_get_mfn(*p2m_entry));
     if ( unmap )
         unmap_domain_page(*table);
     *table = next;
 
     return 0;
+
+ free_ptp:
+    ASSERT(mfn_valid(mfn));
+    p2m_free_ptp(p2m, mfn_to_page(mfn));
+    return rc;
 }
 
 /*


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code
  2019-02-25 17:42   ` George Dunlap
@ 2019-02-26  9:23     ` Roger Pau Monné
  2019-02-26 10:40     ` Jan Beulich
  1 sibling, 0 replies; 14+ messages in thread
From: Roger Pau Monné @ 2019-02-26  9:23 UTC (permalink / raw)
  To: George Dunlap
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	xen-devel

On Mon, Feb 25, 2019 at 05:42:02PM +0000, George Dunlap wrote:
> On 2/21/19 4:50 PM, Roger Pau Monne wrote:
> > This is in preparation for also changing p2m_entry_modify to return an
> > error code.
> > 
> > No functional change intended.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> [snip]
> > diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> > index 04e9d81cf6..254c5dfd19 100644
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -184,6 +184,8 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >      l1_pgentry_t *p2m_entry, new_entry;
> >      void *next;
> >      unsigned int flags;
> > +    int rc;
> > +    mfn_t mfn = INVALID_MFN;
> >  
> >      if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
> >                                        shift, max)) )
> > @@ -194,7 +196,7 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >      /* PoD/paging: Not present doesn't imply empty. */
> >      if ( !flags )
> >      {
> > -        mfn_t mfn = p2m_alloc_ptp(p2m, level);
> > +        mfn = p2m_alloc_ptp(p2m, level);
> >  
> >          if ( mfn_eq(mfn, INVALID_MFN) )
> >              return -ENOMEM;
> > @@ -202,13 +204,14 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >          new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> >  
> >          p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> > -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        if ( rc )
> > +            ASSERT_UNREACHABLE();
> >      }
> >      else if ( flags & _PAGE_PSE )
> >      {
> >          /* Split superpages pages into smaller ones. */
> >          unsigned long pfn = l1e_get_pfn(*p2m_entry);
> > -        mfn_t mfn;
> >          l1_pgentry_t *l1_entry;
> >          unsigned int i;
> >  
> > @@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
> >          {
> >              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
> >                                       flags);
> > -            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> > +            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
> > +            if ( rc )
> > +            {
> > +                ASSERT_UNREACHABLE();
> > +                break;
> > +            }
> >          }
> >  
> >          unmap_domain_page(l1_entry);
> >  
> > -        new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> > -        p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
> > -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
> > +        if ( !rc )
> > +        {
> > +            new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
> > +            p2m_add_iommu_flags(&new_entry, level,
> > +                                IOMMUF_readable|IOMMUF_writable);
> > +            rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
> > +                                      level + 1);
> > +            if ( rc )
> > +                ASSERT_UNREACHABLE();
> > +        }
> >      }
> >      else
> >          ASSERT(flags & _PAGE_PRESENT);
> >  
> > +    if ( rc )
> > +    {
> > +        ASSERT(mfn_valid(mfn));
> > +        p2m_free_ptp(p2m, mfn_to_page(mfn));
> > +        return rc;
> > +    }
> > +
> 
> I think the idiomatic way to deal with this would be to have a label at
> the end that everything jumps to, something like the attached?  That way
> you don't have to spend mental effort making sure that nothing happens
> between the error and the clean-up call.

Jan explicitly asked to not use a label. I can change it, but I would
like to have consensus first.

Thanks, Roger.


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code
  2019-02-25 17:42   ` George Dunlap
  2019-02-26  9:23     ` Roger Pau Monné
@ 2019-02-26 10:40     ` Jan Beulich
  1 sibling, 0 replies; 14+ messages in thread
From: Jan Beulich @ 2019-02-26 10:40 UTC (permalink / raw)
  To: george.dunlap
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, xen-devel,
	Roger Pau Monne

>>> On 25.02.19 at 18:42, <george.dunlap@citrix.com> wrote:
> On 2/21/19 4:50 PM, Roger Pau Monne wrote:
>> @@ -250,18 +253,37 @@ p2m_next_level(struct p2m_domain *p2m, void **table,
>>          {
>>              new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
>>                                       flags);
>> -            p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
>> +            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
>> +            if ( rc )
>> +            {
>> +                ASSERT_UNREACHABLE();
>> +                break;
>> +            }
>>          }
>>  
>>          unmap_domain_page(l1_entry);
>>  
>> -        new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>> -        p2m_add_iommu_flags(&new_entry, level, IOMMUF_readable|IOMMUF_writable);
>> -        p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
>> +        if ( !rc )
>> +        {
>> +            new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
>> +            p2m_add_iommu_flags(&new_entry, level,
>> +                                IOMMUF_readable|IOMMUF_writable);
>> +            rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
>> +                                      level + 1);
>> +            if ( rc )
>> +                ASSERT_UNREACHABLE();
>> +        }
>>      }
>>      else
>>          ASSERT(flags & _PAGE_PRESENT);
>>  
>> +    if ( rc )
>> +    {
>> +        ASSERT(mfn_valid(mfn));
>> +        p2m_free_ptp(p2m, mfn_to_page(mfn));
>> +        return rc;
>> +    }
>> +
> 
> I think the idiomatic way to deal with this would be to have a label at
> the end that everything jumps to, something like the attached?  That way
> you don't have to spend mental effort making sure that nothing happens
> between the error and the clean-up call.

Well, as Roger has said already, since this is not a classical
end-of-function error path, I did ask to get away without a label.
But you're the maintainer of the code, so if you want it that way,
then I guess I misguided Roger.

Jan

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2019-02-26 10:40 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-21 16:50 [PATCH v5 0/5] pvh/dom0/shadow/amd fixes Roger Pau Monne
2019-02-21 16:50 ` [PATCH v5 1/5] x86/p2m: pass the p2m to write_p2m_entry handlers Roger Pau Monne
2019-02-21 16:55   ` Wei Liu
2019-02-25 17:02   ` George Dunlap
2019-02-21 16:50 ` [PATCH v5 2/5] x86/mm: split p2m ioreq server pages special handling into helper Roger Pau Monne
2019-02-21 16:50 ` [PATCH v5 3/5] p2m: change write_p2m_entry to return an error code Roger Pau Monne
2019-02-21 17:04   ` Wei Liu
2019-02-22 16:36   ` Jan Beulich
2019-02-25 17:42   ` George Dunlap
2019-02-26  9:23     ` Roger Pau Monné
2019-02-26 10:40     ` Jan Beulich
2019-02-21 16:50 ` [PATCH v5 4/5] x86/mm: handle foreign mappings in p2m_entry_modify Roger Pau Monne
2019-02-21 17:09   ` Wei Liu
2019-02-21 16:50 ` [PATCH v5 5/5] npt/shadow: allow getting foreign page table entries Roger Pau Monne
