* [PATCH 0/2] x86/shadow: two tiny further bits of PV/HVM separation
From: Jan Beulich @ 2019-03-11 16:56 UTC
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, Roger Pau Monne

1: sh_validate_guest_pt_write() is HVM-only
2: sh_{write,cmpxchg}_guest_entry() are PV-only

Jan




* [PATCH 1/2] x86/shadow: sh_validate_guest_pt_write() is HVM-only
From: Jan Beulich @ 2019-03-11 16:58 UTC
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, Roger Pau Monne

Move the function to hvm.c, make it static, and drop its sh_ prefix.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
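
For reference, the control flow being moved is small: validate the
emulated write against the shadows, flush TLBs if any shadow entries
changed, and tear the page out of the shadows if validation reported an
error.  The stand-alone sketch below models only that flow; the
SHADOW_SET_* values and every type and helper in it are simplified
stand-ins for the real Xen interfaces (sh_validate_guest_entry(),
flush_tlb_mask(), sh_remove_shadows()), not the actual code.

/* Toy model of the validate_guest_pt_write() flow -- not Xen code. */
#include <stdint.h>
#include <stdio.h>

#define SHADOW_SET_FLUSH  (1u << 0)  /* shadow PTEs changed; flush TLBs */
#define SHADOW_SET_ERROR  (1u << 1)  /* page no longer looks like a PT */

/* Stand-in for sh_validate_guest_entry(). */
static unsigned int validate_entry(const void *entry, unsigned int size)
{
    (void)entry; (void)size;
    return 0;                        /* pretend the write was benign */
}

/* Stand-ins for flush_tlb_mask() and sh_remove_shadows(). */
static void flush_tlbs(void)         { puts("flush TLBs"); }
static void unshadow_page(void)      { puts("unshadow page"); }

static void validate_pt_write(void *entry, unsigned int size)
{
    unsigned int rc = validate_entry(entry, size);

    if ( rc & SHADOW_SET_FLUSH )     /* pick up shadow PT changes */
        flush_tlbs();

    if ( rc & SHADOW_SET_ERROR )     /* a safe (zero) shadow entry was
                                        installed; drop the shadows */
        unshadow_page();
}

int main(void)
{
    uint64_t pte = 0;

    validate_pt_write(&pte, sizeof(pte));
    return 0;
}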

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -886,33 +886,6 @@ sh_validate_guest_entry(struct vcpu *v,
 }
 
 
-void
-sh_validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
-                           void *entry, u32 size)
-/* This is the entry point for emulated writes to pagetables in HVM guests and
- * PV translated guests.
- */
-{
-    struct domain *d = v->domain;
-    int rc;
-
-    ASSERT(paging_locked_by_me(v->domain));
-    rc = sh_validate_guest_entry(v, gmfn, entry, size);
-    if ( rc & SHADOW_SET_FLUSH )
-        /* Need to flush TLBs to pick up shadow PT changes */
-        flush_tlb_mask(d->dirty_cpumask);
-    if ( rc & SHADOW_SET_ERROR )
-    {
-        /* This page is probably not a pagetable any more: tear it out of the
-         * shadows, along with any tables that reference it.
-         * Since the validate call above will have made a "safe" (i.e. zero)
-         * shadow entry, we can let the domain live even if we can't fully
-         * unshadow the page. */
-        sh_remove_shadows(d, gmfn, 0, 0);
-    }
-}
-
-
 /**************************************************************************/
 /* Memory management for shadow pages. */
 
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -493,6 +493,34 @@ static inline void check_for_early_unsha
 #endif
 }
 
+/* This is the entry point for emulated writes to pagetables in HVM guests */
+static void validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
+                                    void *entry, unsigned int size)
+{
+    struct domain *d = v->domain;
+    int rc;
+
+    ASSERT(paging_locked_by_me(v->domain));
+
+    rc = sh_validate_guest_entry(v, gmfn, entry, size);
+
+    if ( rc & SHADOW_SET_FLUSH )
+        /* Need to flush TLBs to pick up shadow PT changes */
+        flush_tlb_mask(d->dirty_cpumask);
+
+    if ( rc & SHADOW_SET_ERROR )
+    {
+        /*
+         * This page is probably not a pagetable any more: tear it out of the
+         * shadows, along with any tables that reference it.
+         * Since the validate call above will have made a "safe" (i.e. zero)
+         * shadow entry, we can let the domain live even if we can't fully
+         * unshadow the page.
+         */
+        sh_remove_shadows(d, gmfn, 0, 0);
+    }
+}
+
 /*
  * Tidy up after the emulated write: mark pages dirty, verify the new
  * contents, and undo the mapping.
@@ -558,9 +586,9 @@ static void sh_emulate_unmap_dest(struct
             ASSERT(b2 < bytes);
         }
         if ( likely(b1 > 0) )
-            sh_validate_guest_pt_write(v, sh_ctxt->mfn[0], addr, b1);
+            validate_guest_pt_write(v, sh_ctxt->mfn[0], addr, b1);
         if ( unlikely(b2 > 0) )
-            sh_validate_guest_pt_write(v, sh_ctxt->mfn[1], addr + b1, b2);
+            validate_guest_pt_write(v, sh_ctxt->mfn[1], addr + b1, b2);
     }
 
     paging_mark_dirty(v->domain, sh_ctxt->mfn[0]);
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -359,10 +359,6 @@ void sh_install_xen_entries_in_l4(struct
 /* Update the shadows in response to a pagetable write from Xen */
 int sh_validate_guest_entry(struct vcpu *v, mfn_t gmfn, void *entry, u32 size);
 
-/* Update the shadows in response to a pagetable write from a HVM guest */
-void sh_validate_guest_pt_write(struct vcpu *v, mfn_t gmfn,
-                                void *entry, u32 size);
-
 /* Remove all writeable mappings of a guest frame from the shadows.
  * Returns non-zero if we need to flush TLBs.
  * level and fault_addr desribe how we found this to be a pagetable;






* [PATCH 2/2] x86/shadow: sh_{write, cmpxchg}_guest_entry() are PV-only
From: Jan Beulich @ 2019-03-11 16:58 UTC
  To: xen-devel
  Cc: George Dunlap, Andrew Cooper, Tim Deegan, Wei Liu, Roger Pau Monne

Move them to a new pv.c. Make the respective struct shadow_paging_mode
fields, as well as the paging.h wrappers, PV-only too.

Take the liberty and switch both functions' "failed" local variables to
more appropriate types.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
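
The type change reflects how the two underlying primitives report
failure: __copy_to_user() hands back the number of bytes it could not
copy, i.e. an unsigned byte count, while cmpxchg_user() yields a
fault/no-fault indication, i.e. effectively a boolean.  The fragment
below is a plain user-space illustration of that distinction only;
copy_to_guest() and cmpxchg_guest() are made-up stand-ins for the Xen
primitives, and the shadow validation step is reduced to a printf().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t intpte_t;

/* Stand-in for __copy_to_user(): returns bytes NOT copied (0 = success). */
static unsigned int copy_to_guest(intpte_t *dst, const intpte_t *src)
{
    memcpy(dst, src, sizeof(*dst));
    return 0;
}

/* Stand-in for cmpxchg_user(): returns true on fault; *old is updated to
 * the value actually observed. */
static bool cmpxchg_guest(intpte_t *p, intpte_t *old, intpte_t new)
{
    intpte_t seen = *p;

    if ( seen == *old )
        *p = new;
    *old = seen;
    return false;
}

static bool write_entry(intpte_t *p, intpte_t new)
{
    unsigned int failed = copy_to_guest(p, &new);   /* a byte count */

    if ( failed != sizeof(new) )    /* at least partly written: shadows
                                       would need revalidating */
        printf("validate entry at %p\n", (void *)p);

    return !failed;                 /* success only if nothing was left */
}

static bool cmpxchg_entry(intpte_t *p, intpte_t *old, intpte_t new)
{
    intpte_t t = *old;
    bool failed = cmpxchg_guest(p, &t, new);        /* a fault indicator */

    if ( t == *old )                /* value matched: the write went in,
                                       so the shadows need revalidating */
        printf("validate entry at %p\n", (void *)p);
    *old = t;

    return !failed;                 /* caller compares *old for the swap */
}

int main(void)
{
    intpte_t pte = 1, old = 1;

    printf("write ok:   %d\n", write_entry(&pte, 2));
    printf("cmpxchg ok: %d\n", cmpxchg_entry(&pte, &old, 3));
    return 0;
}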

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -146,7 +146,9 @@
 #include <asm/pv/grant_table.h>
 #include <asm/pv/mm.h>
 
+#ifdef CONFIG_PV
 #include "pv/mm.h"
+#endif
 
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
--- a/xen/arch/x86/mm/shadow/Makefile
+++ b/xen/arch/x86/mm/shadow/Makefile
@@ -1,6 +1,7 @@
 ifeq ($(CONFIG_SHADOW_PAGING),y)
 obj-y += common.o guest_2.o guest_3.o guest_4.o
 obj-$(CONFIG_HVM) += hvm.o
+obj-$(CONFIG_PV) += pv.o
 else
 obj-y += none.o
 endif
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -372,55 +372,6 @@ static void sh_audit_gw(struct vcpu *v,
 #endif /* SHADOW_AUDIT & SHADOW_AUDIT_ENTRIES */
 }
 
-/*
- * Write a new value into the guest pagetable, and update the shadows
- * appropriately.  Returns false if we page-faulted, true for success.
- */
-static bool
-sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new, mfn_t gmfn)
-{
-#if CONFIG_PAGING_LEVELS == GUEST_PAGING_LEVELS
-    int failed;
-
-    paging_lock(v->domain);
-    failed = __copy_to_user(p, &new, sizeof(new));
-    if ( failed != sizeof(new) )
-        sh_validate_guest_entry(v, gmfn, p, sizeof(new));
-    paging_unlock(v->domain);
-
-    return !failed;
-#else
-    return false;
-#endif
-}
-
-/*
- * Cmpxchg a new value into the guest pagetable, and update the shadows
- * appropriately. Returns false if we page-faulted, true if not.
- * N.B. caller should check the value of "old" to see if the cmpxchg itself
- * was successful.
- */
-static bool
-sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
-                       intpte_t new, mfn_t gmfn)
-{
-#if CONFIG_PAGING_LEVELS == GUEST_PAGING_LEVELS
-    int failed;
-    guest_intpte_t t = *old;
-
-    paging_lock(v->domain);
-    failed = cmpxchg_user(p, t, new);
-    if ( t == *old )
-        sh_validate_guest_entry(v, gmfn, p, sizeof(new));
-    *old = t;
-    paging_unlock(v->domain);
-
-    return !failed;
-#else
-    return false;
-#endif
-}
-
 /**************************************************************************/
 /* Functions to compute the correct index into a shadow page, given an
  * index into the guest page (as returned by guest_get_index()).
@@ -4925,8 +4876,10 @@ const struct paging_mode sh_paging_mode
     .write_p2m_entry               = shadow_write_p2m_entry,
     .guest_levels                  = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables      = sh_detach_old_tables,
+#ifdef CONFIG_PV
     .shadow.write_guest_entry      = sh_write_guest_entry,
     .shadow.cmpxchg_guest_entry    = sh_cmpxchg_guest_entry,
+#endif
     .shadow.make_monitor_table     = sh_make_monitor_table,
     .shadow.destroy_monitor_table  = sh_destroy_monitor_table,
 #if SHADOW_OPTIMIZATIONS & SHOPT_WRITABLE_HEURISTIC
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -372,6 +372,12 @@ int shadow_write_p2m_entry(struct p2m_do
                            l1_pgentry_t *p, l1_pgentry_t new,
                            unsigned int level);
 
+/* Functions that atomically write PV guest PT entries */
+bool sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new,
+                          mfn_t gmfn);
+bool sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
+                            intpte_t new, mfn_t gmfn);
+
 /* Update all the things that are derived from the guest's CR0/CR3/CR4.
  * Called to initialize paging structures if the paging mode
  * has changed, and when bringing up a VCPU for the first time. */
--- /dev/null
+++ b/xen/arch/x86/mm/shadow/pv.c
@@ -0,0 +1,75 @@
+/******************************************************************************
+ * arch/x86/mm/shadow/pv.c
+ *
+ * PV-only shadow code (which hence does not need to be multiply compiled).
+ * Parts of this code are Copyright (c) 2006 by XenSource Inc.
+ * Parts of this code are Copyright (c) 2006 by Michael A Fetterman
+ * Parts based on earlier work by Michael A Fetterman, Ian Pratt et al.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/types.h>
+#include <asm/shadow.h>
+#include "private.h"
+
+/*
+ * Write a new value into the guest pagetable, and update the shadows
+ * appropriately.  Returns false if we page-faulted, true for success.
+ */
+bool
+sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new, mfn_t gmfn)
+{
+    unsigned int failed;
+
+    paging_lock(v->domain);
+    failed = __copy_to_user(p, &new, sizeof(new));
+    if ( failed != sizeof(new) )
+        sh_validate_guest_entry(v, gmfn, p, sizeof(new));
+    paging_unlock(v->domain);
+
+    return !failed;
+}
+
+/*
+ * Cmpxchg a new value into the guest pagetable, and update the shadows
+ * appropriately. Returns false if we page-faulted, true if not.
+ * N.B. caller should check the value of "old" to see if the cmpxchg itself
+ * was successful.
+ */
+bool
+sh_cmpxchg_guest_entry(struct vcpu *v, intpte_t *p, intpte_t *old,
+                       intpte_t new, mfn_t gmfn)
+{
+    bool failed;
+    intpte_t t = *old;
+
+    paging_lock(v->domain);
+    failed = cmpxchg_user(p, t, new);
+    if ( t == *old )
+        sh_validate_guest_entry(v, gmfn, p, sizeof(new));
+    *old = t;
+    paging_unlock(v->domain);
+
+    return !failed;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -86,11 +86,13 @@ struct sh_emulate_ctxt;
 struct shadow_paging_mode {
 #ifdef CONFIG_SHADOW_PAGING
     void          (*detach_old_tables     )(struct vcpu *v);
+#ifdef CONFIG_PV
     bool          (*write_guest_entry     )(struct vcpu *v, intpte_t *p,
                                             intpte_t new, mfn_t gmfn);
     bool          (*cmpxchg_guest_entry   )(struct vcpu *v, intpte_t *p,
                                             intpte_t *old, intpte_t new,
                                             mfn_t gmfn);
+#endif
     mfn_t         (*make_monitor_table    )(struct vcpu *v);
     void          (*destroy_monitor_table )(struct vcpu *v, mfn_t mmfn);
     int           (*guess_wrmap           )(struct vcpu *v, 
@@ -290,6 +292,7 @@ static inline void paging_update_paging_
     paging_get_hostmode(v)->update_paging_modes(v);
 }
 
+#ifdef CONFIG_PV
 
 /*
  * Write a new value into the guest pagetable, and update the
@@ -325,6 +328,8 @@ static inline bool paging_cmpxchg_guest_
     return !cmpxchg_user(p, *old, new);
 }
 
+#endif /* CONFIG_PV */
+
 /* Helper function that writes a pte in such a way that a concurrent read 
  * never sees a half-written entry that has _PAGE_PRESENT set */
 static inline void safe_write_pte(l1_pgentry_t *p, l1_pgentry_t new)





* Re: [PATCH 1/2] x86/shadow: sh_validate_guest_pt_write() is HVM-only
From: Andrew Cooper @ 2019-03-11 17:07 UTC
  To: Jan Beulich, xen-devel
  Cc: George Dunlap, Tim Deegan, Wei Liu, Roger Pau Monne

On 11/03/2019 16:58, Jan Beulich wrote:
> Move the function to hvm.c, make it static, and drop its sh_ prefix.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 2/2] x86/shadow: sh_{write, cmpxchg}_guest_entry() are PV-only
From: Andrew Cooper @ 2019-03-11 17:24 UTC
  To: Jan Beulich, xen-devel
  Cc: George Dunlap, Tim Deegan, Wei Liu, Roger Pau Monne

On 11/03/2019 16:58, Jan Beulich wrote:
> Move them to a new pv.c. Make the respective struct shadow_paging_mode
> fields, as well as the paging.h wrappers, PV-only too.
>
> Take the liberty and switch both functions' "failed" local variables to
> more appropriate types.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 0/2] x86/shadow: two tiny further bits of PV/HVM separation
From: Tim Deegan @ 2019-03-11 21:20 UTC
  To: Jan Beulich
  Cc: George Dunlap, xen-devel, Roger Pau Monne, Wei Liu, Andrew Cooper

At 10:56 -0600 on 11 Mar (1552301785), Jan Beulich wrote:
> 1: sh_validate_guest_pt_write() is HVM-only
> 2: sh_{write,cmpxchg}_guest_entry() are PV-only

Acked-by: Tim Deegan <tim@xen.org>

Thanks,

Tim.

