* [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs
@ 2017-09-04  8:14 Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m() Sergey Dyasli
                   ` (14 more replies)
  0 siblings, 15 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

A nested p2m (shadow EPT) is an object that stores memory address
translations from L2 GPA directly to L0 HPA. It is built by combining
the L1 EPT with the L0 EPT during L2 EPT violations.

In the usual case, L1 uses the same EPTP value in VMCS12 for all vCPUs
of an L2 guest. Unfortunately, in Xen's current implementation, each
vCPU has its own np2m object, which cannot be shared with other vCPUs.
This leads to the following issues if a nested guest has multiple vCPUs:

    1. There will be multiple np2m objects (1 per nested vCPU) with
       the same np2m_base (L1 EPTP value in VMCS12).

    2. The same EPT violations will be processed independently by each vCPU.

    3. Since MAX_NESTEDP2M is defined as 10, if a domain has more than
       10 nested vCPUs, performance will be severely degraded by
       constant np2m LRU-list thrashing and np2m flushing.

This patch series makes it possible to share one np2m object between
different vCPUs that have the same np2m_base. Sharing np2m objects
improves the scalability of a domain from 10 nested vCPUs to 10 nested
guests (with an arbitrary number of vCPUs per guest).
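
For illustration only (not part of the series): below is a minimal,
self-contained model of the effect described above. The round-robin
recycling, the constants and the helper names are stand-ins invented for
the sketch; locking, IPIs and the actual page-table contents are ignored.

#include <stdint.h>
#include <stdio.h>

#define MAX_NESTEDP2M 10
#define NR_VCPUS      16   /* more nested vCPUs than np2m slots */

static int slot_owner[MAX_NESTEDP2M];      /* per-vCPU model: owning vCPU ID */
static uint64_t slot_base[MAX_NESTEDP2M];  /* shared model: owning np2m_base */
static unsigned int lru, flushes;

/* Current behaviour: every vCPU needs its own slot, even for equal bases. */
static int get_np2m_per_vcpu(int v, int cur)
{
    if ( cur >= 0 && slot_owner[cur] == v )
        return cur;                        /* our np2m is still in place */
    cur = lru++ % MAX_NESTEDP2M;           /* recycle the "least recently used" slot */
    if ( slot_owner[cur] >= 0 )
        flushes++;                         /* the previous owner's np2m is flushed */
    slot_owner[cur] = v;
    return cur;
}

/* Proposed behaviour: a slot is looked up by np2m_base and shared. */
static void get_np2m_shared(uint64_t base)
{
    for ( unsigned int i = 0; i < MAX_NESTEDP2M; i++ )
        if ( slot_base[i] == base )
            return;                        /* reuse the existing np2m */
    if ( slot_base[lru % MAX_NESTEDP2M] )
        flushes++;
    slot_base[lru++ % MAX_NESTEDP2M] = base;
}

int main(void)
{
    const uint64_t base = 0x123000;        /* all vCPUs share one L1 EPTP */
    int cur[NR_VCPUS];

    for ( int v = 0; v < NR_VCPUS; v++ )
        cur[v] = -1;
    for ( int i = 0; i < MAX_NESTEDP2M; i++ )
        slot_owner[i] = -1;

    for ( int round = 0; round < 1000; round++ )
        for ( int v = 0; v < NR_VCPUS; v++ )
            cur[v] = get_np2m_per_vcpu(v, cur[v]);
    printf("per-vCPU np2m: %u flushes\n", flushes);

    flushes = 0;
    for ( int round = 0; round < 1000; round++ )
        for ( int v = 0; v < NR_VCPUS; v++ )
            get_np2m_shared(base);
    printf("shared np2m:   %u flushes\n", flushes);

    return 0;
}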

RFC --> v1:
- Some commit messages are updated based on George's comments
- Replaced VMX-specific terminology in common code with HVM terminology
- Patch "x86/vvmx: add stale_eptp flag" is split into
  "x86/np2m: add stale_np2m flag" and
  "x86/vvmx: restart nested vmentry in case of stale_np2m"
- Added "x86/np2m: refactor p2m_get_nestedp2m_locked()" patch
- I've done some light nested SVM testing and fixed 1 regression
  (see patch #4)

Sergey Dyasli (14):
  x86/np2m: refactor p2m_get_nestedp2m()
  x86/np2m: add np2m_flush_base()
  x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT
  x86/np2m: remove np2m_base from p2m_get_nestedp2m()
  x86/np2m: add np2m_generation
  x86/np2m: add stale_np2m flag
  x86/vvmx: restart nested vmentry in case of stale_np2m
  x86/np2m: add np2m_schedule()
  x86/np2m: add p2m_get_nestedp2m_locked()
  x86/np2m: improve nestedhvm_hap_nested_page_fault()
  x86/np2m: implement sharing of np2m between vCPUs
  x86/np2m: refactor p2m_get_nestedp2m_locked()
  x86/np2m: add break to np2m_flush_eptp()
  x86/vvmx: remove EPTP write from ept_handle_violation()

 xen/arch/x86/domain.c            |   2 +
 xen/arch/x86/hvm/nestedhvm.c     |   3 +
 xen/arch/x86/hvm/svm/nestedsvm.c |   6 +-
 xen/arch/x86/hvm/vmx/entry.S     |   6 ++
 xen/arch/x86/hvm/vmx/vmx.c       |  14 ++--
 xen/arch/x86/hvm/vmx/vvmx.c      |  28 +++++--
 xen/arch/x86/mm/hap/nested_hap.c |  29 +++----
 xen/arch/x86/mm/p2m.c            | 174 ++++++++++++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/vcpu.h   |   2 +
 xen/include/asm-x86/p2m.h        |  17 +++-
 10 files changed, 211 insertions(+), 70 deletions(-)

-- 
2.11.0



* [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-28 14:00   ` George Dunlap
  2017-09-04  8:14 ` [PATCH v1 02/14] x86/np2m: add np2m_flush_base() Sergey Dyasli
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

1. Add a helper function assign_np2m()
2. Remove useless volatile
3. Update function's comment in the header
4. Minor style fixes ('\n' and d)

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/mm/p2m.c     | 31 ++++++++++++++++++-------------
 xen/include/asm-x86/p2m.h |  6 +++---
 2 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e8a57d118c..b8c8bba421 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1773,14 +1773,24 @@ p2m_flush_nestedp2m(struct domain *d)
         p2m_flush_table(d->arch.nested_p2m[i]);
 }
 
+static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
+{
+    struct nestedvcpu *nv = &vcpu_nestedhvm(v);
+    struct domain *d = v->domain;
+
+    /* Bring this np2m to the top of the LRU list */
+    p2m_getlru_nestedp2m(d, p2m);
+
+    nv->nv_flushp2m = 0;
+    nv->nv_p2m = p2m;
+    cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
+}
+
 struct p2m_domain *
 p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
 {
-    /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
-     * this may change within the loop by an other (v)cpu.
-     */
-    volatile struct nestedvcpu *nv = &vcpu_nestedhvm(v);
-    struct domain *d;
+    struct nestedvcpu *nv = &vcpu_nestedhvm(v);
+    struct domain *d = v->domain;
     struct p2m_domain *p2m;
 
     /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
@@ -1790,7 +1800,6 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
         nv->nv_p2m = NULL;
     }
 
-    d = v->domain;
     nestedp2m_lock(d);
     p2m = nv->nv_p2m;
     if ( p2m ) 
@@ -1798,15 +1807,13 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
         p2m_lock(p2m);
         if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
-            nv->nv_flushp2m = 0;
-            p2m_getlru_nestedp2m(d, p2m);
-            nv->nv_p2m = p2m;
             if ( p2m->np2m_base == P2M_BASE_EADDR )
                 hvm_asid_flush_vcpu(v);
             p2m->np2m_base = np2m_base;
-            cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
+            assign_np2m(v, p2m);
             p2m_unlock(p2m);
             nestedp2m_unlock(d);
+
             return p2m;
         }
         p2m_unlock(p2m);
@@ -1817,11 +1824,9 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
     p2m = p2m_getlru_nestedp2m(d, NULL);
     p2m_flush_table(p2m);
     p2m_lock(p2m);
-    nv->nv_p2m = p2m;
     p2m->np2m_base = np2m_base;
-    nv->nv_flushp2m = 0;
     hvm_asid_flush_vcpu(v);
-    cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
+    assign_np2m(v, p2m);
     p2m_unlock(p2m);
     nestedp2m_unlock(d);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 6395e8fd1d..9086bb35dc 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -359,9 +359,9 @@ struct p2m_domain {
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
-/* Get p2m table (re)usable for specified np2m base.
- * Automatically destroys and re-initializes a p2m if none found.
- * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
+/*
+ * Assigns an np2m with the specified np2m_base to the specified vCPU
+ * and returns that np2m.
  */
 struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
 
-- 
2.11.0



* [PATCH v1 02/14] x86/np2m: add np2m_flush_base()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-28 14:01   ` George Dunlap
  2017-09-04  8:14 ` [PATCH v1 03/14] x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT Sergey Dyasli
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

The new function finds all np2m objects with the specified np2m_base
and flushes them.

Convert p2m_flush_table() into p2m_flush_table_locked() in order not to
release the p2m_lock after np2m_base check.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
RFC --> v1:
- p2m_unlock(p2m) is moved from p2m_flush_table_locked() to
  p2m_flush_table() for balanced lock/unlock
- np2m_flush_eptp() is renamed to np2m_flush_base()

 xen/arch/x86/mm/p2m.c     | 35 +++++++++++++++++++++++++++++------
 xen/include/asm-x86/p2m.h |  2 ++
 2 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b8c8bba421..94a42400ad 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1708,15 +1708,14 @@ p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
     return p2m;
 }
 
-/* Reset this p2m table to be empty */
 static void
-p2m_flush_table(struct p2m_domain *p2m)
+p2m_flush_table_locked(struct p2m_domain *p2m)
 {
     struct page_info *top, *pg;
     struct domain *d = p2m->domain;
     mfn_t mfn;
 
-    p2m_lock(p2m);
+    ASSERT(p2m_locked_by_me(p2m));
 
     /*
      * "Host" p2m tables can have shared entries &c that need a bit more care
@@ -1729,10 +1728,7 @@ p2m_flush_table(struct p2m_domain *p2m)
 
     /* No need to flush if it's already empty */
     if ( p2m_is_nestedp2m(p2m) && p2m->np2m_base == P2M_BASE_EADDR )
-    {
-        p2m_unlock(p2m);
         return;
-    }
 
     /* This is no longer a valid nested p2m for any address space */
     p2m->np2m_base = P2M_BASE_EADDR;
@@ -1752,7 +1748,14 @@ p2m_flush_table(struct p2m_domain *p2m)
             d->arch.paging.free_page(d, pg);
     }
     page_list_add(top, &p2m->pages);
+}
 
+/* Reset this p2m table to be empty */
+static void
+p2m_flush_table(struct p2m_domain *p2m)
+{
+    p2m_lock(p2m);
+    p2m_flush_table_locked(p2m);
     p2m_unlock(p2m);
 }
 
@@ -1773,6 +1776,26 @@ p2m_flush_nestedp2m(struct domain *d)
         p2m_flush_table(d->arch.nested_p2m[i]);
 }
 
+void np2m_flush_base(struct vcpu *v, unsigned long np2m_base)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *p2m;
+    unsigned int i;
+
+    np2m_base &= ~(0xfffull);
+
+    nestedp2m_lock(d);
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        p2m = d->arch.nested_p2m[i];
+        p2m_lock(p2m);
+        if ( p2m->np2m_base == np2m_base )
+            p2m_flush_table_locked(p2m);
+        p2m_unlock(p2m);
+    }
+    nestedp2m_unlock(d);
+}
+
 static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9086bb35dc..cfb00591cd 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -779,6 +779,8 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa);
 void p2m_flush(struct vcpu *v, struct p2m_domain *p2m);
 /* Flushes all nested p2m tables */
 void p2m_flush_nestedp2m(struct domain *d);
+/* Flushes all np2m objects with the specified np2m_base */
+void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
 
 void nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
-- 
2.11.0



* [PATCH v1 03/14] x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m() Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 02/14] x86/np2m: add np2m_flush_base() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-26 16:05   ` George Dunlap
  2017-09-04  8:14 ` [PATCH v1 04/14] x86/np2m: remove np2m_base from p2m_get_nestedp2m() Sergey Dyasli
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

nvmx_handle_invept() updates current's np2m just to flush it. Instead,
use the new np2m_flush_base() directly for this purpose.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e2361a1394..3c5f560aec 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1910,12 +1910,7 @@ int nvmx_handle_invept(struct cpu_user_regs *regs)
     {
     case INVEPT_SINGLE_CONTEXT:
     {
-        struct p2m_domain *p2m = p2m_get_nestedp2m(current, eptp);
-        if ( p2m )
-        {
-            p2m_flush(current, p2m);
-            ept_sync_domain(p2m);
-        }
+        np2m_flush_base(current, eptp);
         break;
     }
     case INVEPT_ALL_CONTEXT:
-- 
2.11.0



* [PATCH v1 04/14] x86/np2m: remove np2m_base from p2m_get_nestedp2m()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (2 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 03/14] x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-26 16:06   ` George Dunlap
  2017-09-04  8:14 ` [PATCH v1 05/14] x86/np2m: add np2m_generation Sergey Dyasli
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

Remove np2m_base parameter as it should always match the value of
np2m_base in VMCX12.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
RFC --> v1:
- Nested SVM: added early update of ns_vmcb_hostcr3

 xen/arch/x86/hvm/svm/nestedsvm.c | 6 +++++-
 xen/arch/x86/hvm/vmx/vvmx.c      | 3 +--
 xen/arch/x86/mm/hap/nested_hap.c | 2 +-
 xen/arch/x86/mm/p2m.c            | 8 ++++----
 xen/include/asm-x86/p2m.h        | 5 ++---
 5 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 8fd9c23a02..629d5ea497 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -411,7 +411,11 @@ static void nestedsvm_vmcb_set_nestedp2m(struct vcpu *v,
     ASSERT(v != NULL);
     ASSERT(vvmcb != NULL);
     ASSERT(n2vmcb != NULL);
-    p2m = p2m_get_nestedp2m(v, vvmcb->_h_cr3);
+
+    /* This will allow nsvm_vcpu_hostcr3() to return correct np2m_base */
+    vcpu_nestedsvm(v).ns_vmcb_hostcr3 = vvmcb->_h_cr3;
+
+    p2m = p2m_get_nestedp2m(v);
     n2vmcb->_h_cr3 = pagetable_get_paddr(p2m_get_pagetable(p2m));
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3c5f560aec..ea2da14489 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1109,8 +1109,7 @@ static void load_shadow_guest_state(struct vcpu *v)
 
 uint64_t get_shadow_eptp(struct vcpu *v)
 {
-    uint64_t np2m_base = nvmx_vcpu_eptp_base(v);
-    struct p2m_domain *p2m = p2m_get_nestedp2m(v, np2m_base);
+    struct p2m_domain *p2m = p2m_get_nestedp2m(v);
     struct ept_data *ept = &p2m->ept;
 
     ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 162afed46b..ed137fa784 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -212,7 +212,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     uint8_t p2ma_21 = p2m_access_rwx;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
+    nested_p2m = p2m_get_nestedp2m(v);
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 94a42400ad..b735950349 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1810,11 +1810,12 @@ static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
+p2m_get_nestedp2m(struct vcpu *v)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
     struct domain *d = v->domain;
     struct p2m_domain *p2m;
+    uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
 
     /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
     np2m_base &= ~(0xfffull);
@@ -1862,7 +1863,7 @@ p2m_get_p2m(struct vcpu *v)
     if (!nestedhvm_is_n2(v))
         return p2m_get_hostp2m(v->domain);
 
-    return p2m_get_nestedp2m(v, nhvm_vcpu_p2m_base(v));
+    return p2m_get_nestedp2m(v);
 }
 
 unsigned long paging_gva_to_gfn(struct vcpu *v,
@@ -1877,13 +1878,12 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         unsigned long l2_gfn, l1_gfn;
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
-        uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
         uint8_t l1_p2ma;
         unsigned int l1_page_order;
         int rv;
 
         /* translate l2 guest va into l2 guest gfn */
-        p2m = p2m_get_nestedp2m(v, np2m_base);
+        p2m = p2m_get_nestedp2m(v);
         mode = paging_get_nestedmode(v);
         l2_gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index cfb00591cd..1d17fd5f97 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -360,10 +360,9 @@ struct p2m_domain {
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
 /*
- * Assigns an np2m with the specified np2m_base to the specified vCPU
- * and returns that np2m.
+ * Updates vCPU's n2pm to match its np2m_base in VMCX12 and returns that np2m.
  */
-struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
2.11.0



* [PATCH v1 05/14] x86/np2m: add np2m_generation
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (3 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 04/14] x86/np2m: remove np2m_base from p2m_get_nestedp2m() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 06/14] x86/np2m: add stale_np2m flag Sergey Dyasli
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

Add an np2m_generation field to both p2m_domain and nestedvcpu.

The np2m's generation will be incremented each time the np2m is flushed.
This allows detecting whether a nested vCPU holds a stale np2m.
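
A minimal sketch (for illustration, not taken from the patch) of how the
generation check is meant to work; the structures are reduced to the two
fields involved, and locking and the rest of the flush are omitted.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct p2m_domain { uint64_t np2m_generation; };
struct nestedvcpu { struct p2m_domain *nv_p2m; uint64_t np2m_generation; };

/* A flush bumps the generation, invalidating every earlier snapshot. */
static void np2m_flush(struct p2m_domain *p2m)
{
    p2m->np2m_generation++;
}

/* A vCPU's np2m is current only if its snapshot still matches. */
static bool np2m_is_current(const struct nestedvcpu *nv)
{
    return nv->nv_p2m && nv->np2m_generation == nv->nv_p2m->np2m_generation;
}

int main(void)
{
    struct p2m_domain p2m = { .np2m_generation = 0 };
    struct nestedvcpu nv = { .nv_p2m = &p2m, .np2m_generation = 0 };

    printf("before flush: %s\n", np2m_is_current(&nv) ? "current" : "stale");
    np2m_flush(&p2m);   /* what p2m_flush_table_locked() does on another pCPU */
    printf("after flush:  %s\n", np2m_is_current(&nv) ? "current" : "stale");

    return 0;
}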

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/nestedhvm.c   | 1 +
 xen/arch/x86/mm/p2m.c          | 3 +++
 xen/include/asm-x86/hvm/vcpu.h | 1 +
 xen/include/asm-x86/p2m.h      | 1 +
 4 files changed, 6 insertions(+)

diff --git a/xen/arch/x86/hvm/nestedhvm.c b/xen/arch/x86/hvm/nestedhvm.c
index f2f7469d86..32b8acca6a 100644
--- a/xen/arch/x86/hvm/nestedhvm.c
+++ b/xen/arch/x86/hvm/nestedhvm.c
@@ -56,6 +56,7 @@ nestedhvm_vcpu_reset(struct vcpu *v)
     nv->nv_vvmcxaddr = INVALID_PADDR;
     nv->nv_flushp2m = 0;
     nv->nv_p2m = NULL;
+    nv->np2m_generation = 0;
 
     hvm_asid_flush_vcpu_asid(&nv->nv_n2asid);
 
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b735950349..2999b858e4 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -73,6 +73,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m->p2m_class = p2m_host;
 
     p2m->np2m_base = P2M_BASE_EADDR;
+    p2m->np2m_generation = 0;
 
     for ( i = 0; i < ARRAY_SIZE(p2m->pod.mrp.list); ++i )
         p2m->pod.mrp.list[i] = gfn_x(INVALID_GFN);
@@ -1732,6 +1733,7 @@ p2m_flush_table_locked(struct p2m_domain *p2m)
 
     /* This is no longer a valid nested p2m for any address space */
     p2m->np2m_base = P2M_BASE_EADDR;
+    p2m->np2m_generation++;
 
     /* Make sure nobody else is using this p2m table */
     nestedhvm_vmcx_flushtlb(p2m);
@@ -1806,6 +1808,7 @@ static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
 
     nv->nv_flushp2m = 0;
     nv->nv_p2m = p2m;
+    nv->np2m_generation = p2m->np2m_generation;
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
 }
 
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c54773f1c..91651581db 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -115,6 +115,7 @@ struct nestedvcpu {
 
     bool_t nv_flushp2m; /* True, when p2m table must be flushed */
     struct p2m_domain *nv_p2m; /* used p2m table for this vcpu */
+    uint64_t np2m_generation;
 
     struct hvm_vcpu_asid nv_n2asid;
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 1d17fd5f97..1a7002cbcd 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -209,6 +209,7 @@ struct p2m_domain {
      * to set it to any other value. */
 #define P2M_BASE_EADDR     (~0ULL)
     uint64_t           np2m_base;
+    uint64_t           np2m_generation;
 
     /* Nested p2ms: linked list of n2pms allocated to this domain. 
      * The host p2m hasolds the head of the list and the np2ms are 
-- 
2.11.0



* [PATCH v1 06/14] x86/np2m: add stale_np2m flag
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (4 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 05/14] x86/np2m: add np2m_generation Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m Sergey Dyasli
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

The new field indicates whether the shadow p2m_base needs to be updated
prior to vmentry. An update is required if a nested vCPU gets a new np2m
or if its np2m was flushed by an IPI.

Add an nvcpu_flush() helper function.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/nestedhvm.c   |  2 ++
 xen/arch/x86/mm/p2m.c          | 10 ++++++++--
 xen/include/asm-x86/hvm/vcpu.h |  1 +
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/nestedhvm.c b/xen/arch/x86/hvm/nestedhvm.c
index 32b8acca6a..5b012568c4 100644
--- a/xen/arch/x86/hvm/nestedhvm.c
+++ b/xen/arch/x86/hvm/nestedhvm.c
@@ -57,6 +57,7 @@ nestedhvm_vcpu_reset(struct vcpu *v)
     nv->nv_flushp2m = 0;
     nv->nv_p2m = NULL;
     nv->np2m_generation = 0;
+    nv->stale_np2m = false;
 
     hvm_asid_flush_vcpu_asid(&nv->nv_n2asid);
 
@@ -108,6 +109,7 @@ nestedhvm_flushtlb_ipi(void *info)
      */
     hvm_asid_flush_core();
     vcpu_nestedhvm(v).nv_p2m = NULL;
+    vcpu_nestedhvm(v).stale_np2m = true;
 }
 
 void
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 2999b858e4..053df0c9aa 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1812,6 +1812,12 @@ static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
     cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
 }
 
+static void nvcpu_flush(struct vcpu *v)
+{
+    hvm_asid_flush_vcpu(v);
+    vcpu_nestedhvm(v).stale_np2m = true;
+}
+
 struct p2m_domain *
 p2m_get_nestedp2m(struct vcpu *v)
 {
@@ -1835,7 +1841,7 @@ p2m_get_nestedp2m(struct vcpu *v)
         if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
         {
             if ( p2m->np2m_base == P2M_BASE_EADDR )
-                hvm_asid_flush_vcpu(v);
+                nvcpu_flush(v);
             p2m->np2m_base = np2m_base;
             assign_np2m(v, p2m);
             p2m_unlock(p2m);
@@ -1852,7 +1858,7 @@ p2m_get_nestedp2m(struct vcpu *v)
     p2m_flush_table(p2m);
     p2m_lock(p2m);
     p2m->np2m_base = np2m_base;
-    hvm_asid_flush_vcpu(v);
+    nvcpu_flush(v);
     assign_np2m(v, p2m);
     p2m_unlock(p2m);
     nestedp2m_unlock(d);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 91651581db..16af97942f 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -116,6 +116,7 @@ struct nestedvcpu {
     bool_t nv_flushp2m; /* True, when p2m table must be flushed */
     struct p2m_domain *nv_p2m; /* used p2m table for this vcpu */
     uint64_t np2m_generation;
+    bool stale_np2m; /* True when p2m_base in VMCX02 is no longer valid */
 
     struct hvm_vcpu_asid nv_n2asid;
 
-- 
2.11.0



* [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (5 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 06/14] x86/np2m: add stale_np2m flag Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-29 10:53   ` George Dunlap
  2017-09-04  8:14 ` [PATCH v1 08/14] x86/np2m: add np2m_schedule() Sergey Dyasli
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

If an IPI flushes a vCPU's np2m object just before nested vmentry, a
stale shadow EPTP value will be left in VMCS02. Allow the vmentry to be
restarted in such cases and add nvmx_eptp_update() to perform the update.
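
For reference, a hedged C rendering (a sketch, not part of the patch) of
the retry flow that the entry.S hunk below implements; the *_model
helpers and the simulated IPI are invented for the illustration, and only
the shape of the restart loop reflects the patch.

#include <stdbool.h>
#include <stdio.h>

static bool stale_np2m;             /* vcpu_nestedhvm(v).stale_np2m */
static bool ipi_pending = true;     /* one flush IPI racing with the vmentry */

static void maybe_deliver_flush_ipi(void)
{
    if ( ipi_pending )
    {
        ipi_pending = false;
        stale_np2m = true;          /* what nestedhvm_flushtlb_ipi() sets */
        printf("flush IPI: shadow EPTP in VMCS02 is now stale\n");
    }
}

static void nvmx_switch_guest_model(void)   /* runs with interrupts enabled */
{
    if ( stale_np2m )
    {
        printf("nvmx_eptp_update(): writing a fresh shadow EPTP\n");
        stale_np2m = false;
    }
}

static int vmx_vmenter_helper_model(void)   /* runs with interrupts disabled */
{
    /* The EPTP can't be rewritten here, so ask for a vmentry restart. */
    return stale_np2m ? 1 : 0;
}

int main(void)
{
    /* .Lvmx_do_vmentry / .Lvmx_vmentry_restart reduced to a retry loop */
    do {
        nvmx_switch_guest_model();  /* interrupts enabled: safe to update EPTP */
        maybe_deliver_flush_ipi();  /* the race: IPI lands before irqs go off */
    } while ( vmx_vmenter_helper_model() != 0 );

    printf("vmentry proceeds with an up-to-date shadow EPTP\n");
    return 0;
}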

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/entry.S |  6 ++++++
 xen/arch/x86/hvm/vmx/vmx.c   |  8 +++++++-
 xen/arch/x86/hvm/vmx/vvmx.c  | 14 ++++++++++++++
 3 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 53eedc6363..9fb8f89220 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -79,6 +79,8 @@ UNLIKELY_END(realmode)
 
         mov  %rsp,%rdi
         call vmx_vmenter_helper
+        cmp  $0,%eax
+        jne .Lvmx_vmentry_restart
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
 
         pop  %r15
@@ -117,6 +119,10 @@ ENTRY(vmx_asm_do_vmentry)
         GET_CURRENT(bx)
         jmp  .Lvmx_do_vmentry
 
+.Lvmx_vmentry_restart:
+        sti
+        jmp  .Lvmx_do_vmentry
+
 .Lvmx_goto_emulator:
         sti
         mov  %rsp,%rdi
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f6da119c9f..06509590b7 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4223,13 +4223,17 @@ static void lbr_fixup(void)
         bdw_erratum_bdf14_fixup();
 }
 
-void vmx_vmenter_helper(const struct cpu_user_regs *regs)
+int vmx_vmenter_helper(const struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
     u32 new_asid, old_asid;
     struct hvm_vcpu_asid *p_asid;
     bool_t need_flush;
 
+    /* Shadow EPTP can't be updated here because irqs are disabled */
+    if ( nestedhvm_vcpu_in_guestmode(curr) && vcpu_nestedhvm(curr).stale_np2m )
+        return 1;
+
     if ( curr->domain->arch.hvm_domain.pi_ops.do_resume )
         curr->domain->arch.hvm_domain.pi_ops.do_resume(curr);
 
@@ -4290,6 +4294,8 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
     __vmwrite(GUEST_RIP,    regs->rip);
     __vmwrite(GUEST_RSP,    regs->rsp);
     __vmwrite(GUEST_RFLAGS, regs->rflags | X86_EFLAGS_MBS);
+
+    return 0;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index ea2da14489..26ce349c76 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1405,12 +1405,26 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     vmsucceed(regs);
 }
 
+static void nvmx_eptp_update(void)
+{
+    if ( !nestedhvm_vcpu_in_guestmode(current) ||
+          vcpu_nestedhvm(current).nv_vmexit_pending ||
+         !vcpu_nestedhvm(current).stale_np2m ||
+         !nestedhvm_paging_mode_hap(current) )
+        return;
+
+    __vmwrite(EPT_POINTER, get_shadow_eptp(current));
+    vcpu_nestedhvm(current).stale_np2m = false;
+}
+
 void nvmx_switch_guest(void)
 {
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
 
+    nvmx_eptp_update();
+
     /*
      * A pending IO emulation may still be not finished. In this case, no
      * virtual vmswitch is allowed. Or else, the following IO emulation will
-- 
2.11.0



* [PATCH v1 08/14] x86/np2m: add np2m_schedule()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (6 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 09/14] x86/np2m: add p2m_get_nestedp2m_locked() Sergey Dyasli
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

np2m maintenance is required for a nested vCPU during scheduling:

    1. On schedule-out: clear the pCPU's bit in p2m->dirty_cpumask
                        to prevent useless flush IPIs.

    2. On schedule-in: check that the np2m is still up to date, i.e.
                       that it wasn't flushed while the vCPU was
                       descheduled.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
RFC --> v1:
- np2m_schedule() now accepts NP2M_SCHEDLE_IN/OUT

 xen/arch/x86/mm/p2m.c     | 43 +++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/p2m.h |  5 +++++
 2 files changed, 48 insertions(+)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 053df0c9aa..e5d2fed361 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1875,6 +1875,49 @@ p2m_get_p2m(struct vcpu *v)
     return p2m_get_nestedp2m(v);
 }
 
+void np2m_schedule(int dir)
+{
+    struct nestedvcpu *nv = &vcpu_nestedhvm(current);
+    struct p2m_domain *p2m;
+
+    ASSERT(dir == NP2M_SCHEDLE_IN || dir == NP2M_SCHEDLE_OUT);
+
+    if ( !nestedhvm_enabled(current->domain) ||
+         !nestedhvm_vcpu_in_guestmode(current) ||
+         !nestedhvm_paging_mode_hap(current) )
+        return;
+
+    p2m = nv->nv_p2m;
+    if ( p2m )
+    {
+        bool np2m_valid;
+
+        p2m_lock(p2m);
+        np2m_valid = p2m->np2m_base == nhvm_vcpu_p2m_base(current) &&
+                     nv->np2m_generation == p2m->np2m_generation;
+        if ( dir == NP2M_SCHEDLE_OUT && np2m_valid )
+        {
+            /*
+             * The np2m is up to date but this vCPU will no longer use it,
+             * which means there are no reasons to send a flush IPI.
+             */
+            cpumask_clear_cpu(current->processor, p2m->dirty_cpumask);
+        }
+        else if ( dir == NP2M_SCHEDLE_IN )
+        {
+            if ( !np2m_valid )
+            {
+                /* This vCPU's np2m was flushed while it was not runnable */
+                hvm_asid_flush_core();
+                vcpu_nestedhvm(current).nv_p2m = NULL;
+            }
+            else
+                cpumask_set_cpu(current->processor, p2m->dirty_cpumask);
+        }
+        p2m_unlock(p2m);
+    }
+}
+
 unsigned long paging_gva_to_gfn(struct vcpu *v,
                                 unsigned long va,
                                 uint32_t *pfec)
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 1a7002cbcd..f873dc4fd9 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -370,6 +370,11 @@ struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v);
  */
 struct p2m_domain *p2m_get_p2m(struct vcpu *v);
 
+#define NP2M_SCHEDLE_IN  0
+#define NP2M_SCHEDLE_OUT 1
+
+void np2m_schedule(int dir);
+
 static inline bool_t p2m_is_hostp2m(const struct p2m_domain *p2m)
 {
     return p2m->p2m_class == p2m_host;
-- 
2.11.0



* [PATCH v1 09/14] x86/np2m: add p2m_get_nestedp2m_locked()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (7 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 08/14] x86/np2m: add np2m_schedule() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 10/14] x86/np2m: improve nestedhvm_hap_nested_page_fault() Sergey Dyasli
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

The new function returns an np2m that is still write-locked.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/mm/p2m.c     | 12 +++++++++---
 xen/include/asm-x86/p2m.h |  2 ++
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e5d2fed361..15dedef33b 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1819,7 +1819,7 @@ static void nvcpu_flush(struct vcpu *v)
 }
 
 struct p2m_domain *
-p2m_get_nestedp2m(struct vcpu *v)
+p2m_get_nestedp2m_locked(struct vcpu *v)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
     struct domain *d = v->domain;
@@ -1844,7 +1844,6 @@ p2m_get_nestedp2m(struct vcpu *v)
                 nvcpu_flush(v);
             p2m->np2m_base = np2m_base;
             assign_np2m(v, p2m);
-            p2m_unlock(p2m);
             nestedp2m_unlock(d);
 
             return p2m;
@@ -1860,12 +1859,19 @@ p2m_get_nestedp2m(struct vcpu *v)
     p2m->np2m_base = np2m_base;
     nvcpu_flush(v);
     assign_np2m(v, p2m);
-    p2m_unlock(p2m);
     nestedp2m_unlock(d);
 
     return p2m;
 }
 
+struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v)
+{
+    struct p2m_domain *p2m = p2m_get_nestedp2m_locked(v);
+    p2m_unlock(p2m);
+
+    return p2m;
+}
+
 struct p2m_domain *
 p2m_get_p2m(struct vcpu *v)
 {
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index f873dc4fd9..790635ec0b 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -364,6 +364,8 @@ struct p2m_domain {
  * Updates vCPU's n2pm to match its np2m_base in VMCX12 and returns that np2m.
  */
 struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v);
+/* Similar to the above except that returned p2m is still write-locked */
+struct p2m_domain *p2m_get_nestedp2m_locked(struct vcpu *v);
 
 /* If vcpu is in host mode then behaviour matches p2m_get_hostp2m().
  * If vcpu is in guest mode then behaviour matches p2m_get_nestedp2m().
-- 
2.11.0



* [PATCH v1 10/14] x86/np2m: improve nestedhvm_hap_nested_page_fault()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (8 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 09/14] x86/np2m: add p2m_get_nestedp2m_locked() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 11/14] x86/np2m: implement sharing of np2m between vCPUs Sergey Dyasli
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

There is a possibility for the nested_p2m to become stale between
nestedhvm_hap_nested_page_fault() and nestedhap_fix_p2m(). Simply
use p2m_get_nestedp2m_locked() to guarantee that the correct np2m is used.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/mm/hap/nested_hap.c | 29 +++++++++++------------------
 1 file changed, 11 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index ed137fa784..96afe632b5 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -101,28 +101,21 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
     int rc = 0;
+    unsigned long gfn, mask;
+    mfn_t mfn;
+
     ASSERT(p2m);
     ASSERT(p2m->set_entry);
+    ASSERT(p2m_locked_by_me(p2m));
 
-    p2m_lock(p2m);
-
-    /* If this p2m table has been flushed or recycled under our feet, 
-     * leave it alone.  We'll pick up the right one as we try to 
-     * vmenter the guest. */
-    if ( p2m->np2m_base == nhvm_vcpu_p2m_base(v) )
-    {
-        unsigned long gfn, mask;
-        mfn_t mfn;
-
-        /* If this is a superpage mapping, round down both addresses
-         * to the start of the superpage. */
-        mask = ~((1UL << page_order) - 1);
+    /* If this is a superpage mapping, round down both addresses
+     * to the start of the superpage. */
+    mask = ~((1UL << page_order) - 1);
 
-        gfn = (L2_gpa >> PAGE_SHIFT) & mask;
-        mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);
+    gfn = (L2_gpa >> PAGE_SHIFT) & mask;
+    mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);
 
-        rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
-    }
+    rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
 
     p2m_unlock(p2m);
 
@@ -212,7 +205,6 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     uint8_t p2ma_21 = p2m_access_rwx;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
-    nested_p2m = p2m_get_nestedp2m(v);
 
     /* walk the L1 P2M table */
     rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21, &p2ma_21,
@@ -278,6 +270,7 @@ nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     p2ma_10 &= (p2m_access_t)p2ma_21;
 
     /* fix p2m_get_pagetable(nested_p2m) */
+    nested_p2m = p2m_get_nestedp2m_locked(v);
     nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
         p2mt_10, p2ma_10);
 
-- 
2.11.0



* [PATCH v1 11/14] x86/np2m: implement sharing of np2m between vCPUs
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (9 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 10/14] x86/np2m: improve nestedhvm_hap_nested_page_fault() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 12/14] x86/np2m: refactor p2m_get_nestedp2m_locked() Sergey Dyasli
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

Modify p2m_get_nestedp2m() to allow sharing an np2m between multiple
vCPUs that have the same np2m_base (the L1 np2m_base value in VMCX12).

np2m_schedule() calls are added to context_switch(), and a pseudo
schedule-out is performed during vvmx's virtual_vmexit().

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/domain.c       |  2 ++
 xen/arch/x86/hvm/vmx/vvmx.c |  4 ++++
 xen/arch/x86/mm/p2m.c       | 29 +++++++++++++++++++++++++++--
 3 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index dbddc536d3..c8c26dad4e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1647,6 +1647,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         _update_runstate_area(prev);
         vpmu_switch_from(prev);
+        np2m_schedule(NP2M_SCHEDLE_OUT);
     }
 
     if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
@@ -1695,6 +1696,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
         /* Must be done with interrupts enabled */
         vpmu_switch_to(next);
+        np2m_schedule(NP2M_SCHEDLE_IN);
     }
 
     /* Ensure that the vcpu has an up-to-date time base. */
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 26ce349c76..49733af62b 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1201,6 +1201,7 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
 
     /* Setup virtual ETP for L2 guest*/
     if ( nestedhvm_paging_mode_hap(v) )
+        /* This will setup the initial np2m for the nested vCPU */
         __vmwrite(EPT_POINTER, get_shadow_eptp(v));
     else
         __vmwrite(EPT_POINTER, get_host_eptp(v));
@@ -1367,6 +1368,9 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
          !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
         shadow_to_vvmcs_bulk(v, ARRAY_SIZE(gpdpte_fields), gpdpte_fields);
 
+    /* This will clear current pCPU bit in p2m->dirty_cpumask */
+    np2m_schedule(NP2M_SCHEDLE_OUT);
+
     vmx_vmcs_switch(v->arch.hvm_vmx.vmcs_pa, nvcpu->nv_n1vmcx_pa);
 
     nestedhvm_vcpu_exit_guestmode(v);
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 15dedef33b..d6a474fa20 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1825,6 +1825,7 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     struct domain *d = v->domain;
     struct p2m_domain *p2m;
     uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
+    unsigned int i;
 
     /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
     np2m_base &= ~(0xfffull);
@@ -1838,10 +1839,34 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     if ( p2m ) 
     {
         p2m_lock(p2m);
-        if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
+        if ( p2m->np2m_base == np2m_base )
         {
-            if ( p2m->np2m_base == P2M_BASE_EADDR )
+            /* Check if np2m was flushed just before the lock */
+            if ( nv->np2m_generation != p2m->np2m_generation )
                 nvcpu_flush(v);
+            /* np2m is up-to-date */
+            p2m->np2m_base = np2m_base;
+            assign_np2m(v, p2m);
+            nestedp2m_unlock(d);
+
+            return p2m;
+        }
+        else if ( p2m->np2m_base != P2M_BASE_EADDR )
+        {
+            /* vCPU is switching from some other valid np2m */
+            cpumask_clear_cpu(v->processor, p2m->dirty_cpumask);
+        }
+        p2m_unlock(p2m);
+    }
+
+    /* Share a np2m if possible */
+    for ( i = 0; i < MAX_NESTEDP2M; i++ )
+    {
+        p2m = d->arch.nested_p2m[i];
+        p2m_lock(p2m);
+        if ( p2m->np2m_base == np2m_base )
+        {
+            nvcpu_flush(v);
             p2m->np2m_base = np2m_base;
             assign_np2m(v, p2m);
             nestedp2m_unlock(d);
-- 
2.11.0



* [PATCH v1 12/14] x86/np2m: refactor p2m_get_nestedp2m_locked()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (10 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 11/14] x86/np2m: implement sharing of np2m between vCPUs Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 13/14] x86/np2m: add break to np2m_flush_eptp() Sergey Dyasli
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

Remove some code duplication.

Suggested-by: George Dunlap <george.dunlap@citrix.com>
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/mm/p2m.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index d6a474fa20..f783f25fa8 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1826,6 +1826,7 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     struct p2m_domain *p2m;
     uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
     unsigned int i;
+    bool needs_flush = true;
 
     /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
     np2m_base &= ~(0xfffull);
@@ -1842,14 +1843,10 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
         if ( p2m->np2m_base == np2m_base )
         {
             /* Check if np2m was flushed just before the lock */
-            if ( nv->np2m_generation != p2m->np2m_generation )
-                nvcpu_flush(v);
+            if ( nv->np2m_generation == p2m->np2m_generation )
+                needs_flush = false;
             /* np2m is up-to-date */
-            p2m->np2m_base = np2m_base;
-            assign_np2m(v, p2m);
-            nestedp2m_unlock(d);
-
-            return p2m;
+            goto found;
         }
         else if ( p2m->np2m_base != P2M_BASE_EADDR )
         {
@@ -1864,15 +1861,10 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     {
         p2m = d->arch.nested_p2m[i];
         p2m_lock(p2m);
+
         if ( p2m->np2m_base == np2m_base )
-        {
-            nvcpu_flush(v);
-            p2m->np2m_base = np2m_base;
-            assign_np2m(v, p2m);
-            nestedp2m_unlock(d);
+            goto found;
 
-            return p2m;
-        }
         p2m_unlock(p2m);
     }
 
@@ -1881,8 +1873,11 @@ p2m_get_nestedp2m_locked(struct vcpu *v)
     p2m = p2m_getlru_nestedp2m(d, NULL);
     p2m_flush_table(p2m);
     p2m_lock(p2m);
+
+ found:
+    if ( needs_flush )
+        nvcpu_flush(v);
     p2m->np2m_base = np2m_base;
-    nvcpu_flush(v);
     assign_np2m(v, p2m);
     nestedp2m_unlock(d);
 
-- 
2.11.0



* [PATCH v1 13/14] x86/np2m: add break to np2m_flush_eptp()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (11 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 12/14] x86/np2m: refactor p2m_get_nestedp2m_locked() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-04  8:14 ` [PATCH v1 14/14] x86/vvmx: remove EPTP write from ept_handle_violation() Sergey Dyasli
  2017-09-29 15:01 ` [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs George Dunlap
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

Now that np2m sharing is implemented, there can be only one np2m object
with a given np2m_base. Break out of the loop in np2m_flush_base()
(formerly np2m_flush_eptp()) once the required np2m has been found.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/mm/p2m.c     | 4 ++++
 xen/include/asm-x86/p2m.h | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index f783f25fa8..f11355b0d1 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1792,7 +1792,11 @@ void np2m_flush_base(struct vcpu *v, unsigned long np2m_base)
         p2m = d->arch.nested_p2m[i];
         p2m_lock(p2m);
         if ( p2m->np2m_base == np2m_base )
+        {
             p2m_flush_table_locked(p2m);
+            p2m_unlock(p2m);
+            break;
+        }
         p2m_unlock(p2m);
     }
     nestedp2m_unlock(d);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 790635ec0b..a17e589c07 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -786,7 +786,7 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa);
 void p2m_flush(struct vcpu *v, struct p2m_domain *p2m);
 /* Flushes all nested p2m tables */
 void p2m_flush_nestedp2m(struct domain *d);
-/* Flushes all np2m objects with the specified np2m_base */
+/* Flushes the np2m specified by np2m_base (if it exists) */
 void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
 
 void nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-- 
2.11.0



* [PATCH v1 14/14] x86/vvmx: remove EPTP write from ept_handle_violation()
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (12 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 13/14] x86/np2m: add break to np2m_flush_eptp() Sergey Dyasli
@ 2017-09-04  8:14 ` Sergey Dyasli
  2017-09-29 15:01 ` [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs George Dunlap
  14 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-04  8:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Jan Beulich, Boris Ostrovsky,
	Suravee Suthikulpanit

There is no longer any need to update the shadow EPTP after handling an
L2 EPT violation, since all EPTP updates are now handled by
nvmx_eptp_update().

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 06509590b7..6a2553cc58 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3269,12 +3269,6 @@ static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
     case 0:         // Unhandled L1 EPT violation
         break;
     case 1:         // This violation is handled completly
-        /*Current nested EPT maybe flushed by other vcpus, so need
-         * to re-set its shadow EPTP pointer.
-         */
-        if ( nestedhvm_vcpu_in_guestmode(current) &&
-                        nestedhvm_paging_mode_hap(current ) )
-            __vmwrite(EPT_POINTER, get_shadow_eptp(current));
         return;
     case -1:        // This vioaltion should be injected to L1 VMM
         vcpu_nestedhvm(current).nv_vmexit_pending = 1;
-- 
2.11.0



* Re: [PATCH v1 03/14] x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT
  2017-09-04  8:14 ` [PATCH v1 03/14] x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT Sergey Dyasli
@ 2017-09-26 16:05   ` George Dunlap
  0 siblings, 0 replies; 22+ messages in thread
From: George Dunlap @ 2017-09-26 16:05 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
	Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit

On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> nvmx_handle_invept() updates current's np2m just to flush it. Instead,
> use the new np2m_flush_base() directly for this purpose.

This one and the previous one look good, but it seems like it would be
better to have them as a single patch.

 -George


* Re: [PATCH v1 04/14] x86/np2m: remove np2m_base from p2m_get_nestedp2m()
  2017-09-04  8:14 ` [PATCH v1 04/14] x86/np2m: remove np2m_base from p2m_get_nestedp2m() Sergey Dyasli
@ 2017-09-26 16:06   ` George Dunlap
  0 siblings, 0 replies; 22+ messages in thread
From: George Dunlap @ 2017-09-26 16:06 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
	Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit

On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> Remove np2m_base parameter as it should always match the value of
> np2m_base in VMCX12.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


* Re: [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m()
  2017-09-04  8:14 ` [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m() Sergey Dyasli
@ 2017-09-28 14:00   ` George Dunlap
  0 siblings, 0 replies; 22+ messages in thread
From: George Dunlap @ 2017-09-28 14:00 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
	Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit

On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> 1. Add a helper function assign_np2m()
> 2. Remove useless volatile
> 3. Update function's comment in the header
> 4. Minor style fixes ('\n' and d)
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> ---
>  xen/arch/x86/mm/p2m.c     | 31 ++++++++++++++++++-------------
>  xen/include/asm-x86/p2m.h |  6 +++---
>  2 files changed, 21 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index e8a57d118c..b8c8bba421 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1773,14 +1773,24 @@ p2m_flush_nestedp2m(struct domain *d)
>          p2m_flush_table(d->arch.nested_p2m[i]);
>  }
>  
> +static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
> +{
> +    struct nestedvcpu *nv = &vcpu_nestedhvm(v);
> +    struct domain *d = v->domain;
> +
> +    /* Bring this np2m to the top of the LRU list */
> +    p2m_getlru_nestedp2m(d, p2m);
> +
> +    nv->nv_flushp2m = 0;
> +    nv->nv_p2m = p2m;
> +    cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
> +}
> +
>  struct p2m_domain *
>  p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
>  {
> -    /* Use volatile to prevent gcc to cache nv->nv_p2m in a cpu register as
> -     * this may change within the loop by an other (v)cpu.
> -     */
> -    volatile struct nestedvcpu *nv = &vcpu_nestedhvm(v);
> -    struct domain *d;
> +    struct nestedvcpu *nv = &vcpu_nestedhvm(v);
> +    struct domain *d = v->domain;
>      struct p2m_domain *p2m;
>  
>      /* Mask out low bits; this avoids collisions with P2M_BASE_EADDR */
> @@ -1790,7 +1800,6 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
>          nv->nv_p2m = NULL;
>      }
>  
> -    d = v->domain;
>      nestedp2m_lock(d);
>      p2m = nv->nv_p2m;
>      if ( p2m ) 
> @@ -1798,15 +1807,13 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
>          p2m_lock(p2m);
>          if ( p2m->np2m_base == np2m_base || p2m->np2m_base == P2M_BASE_EADDR )
>          {
> -            nv->nv_flushp2m = 0;
> -            p2m_getlru_nestedp2m(d, p2m);
> -            nv->nv_p2m = p2m;
>              if ( p2m->np2m_base == P2M_BASE_EADDR )
>                  hvm_asid_flush_vcpu(v);
>              p2m->np2m_base = np2m_base;
> -            cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
> +            assign_np2m(v, p2m);
>              p2m_unlock(p2m);
>              nestedp2m_unlock(d);
> +
>              return p2m;
>          }
>          p2m_unlock(p2m);
> @@ -1817,11 +1824,9 @@ p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base)
>      p2m = p2m_getlru_nestedp2m(d, NULL);
>      p2m_flush_table(p2m);
>      p2m_lock(p2m);
> -    nv->nv_p2m = p2m;
>      p2m->np2m_base = np2m_base;
> -    nv->nv_flushp2m = 0;
>      hvm_asid_flush_vcpu(v);
> -    cpumask_set_cpu(v->processor, p2m->dirty_cpumask);
> +    assign_np2m(v, p2m);
>      p2m_unlock(p2m);
>      nestedp2m_unlock(d);
>  
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 6395e8fd1d..9086bb35dc 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -359,9 +359,9 @@ struct p2m_domain {
>  /* get host p2m table */
>  #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
>  
> -/* Get p2m table (re)usable for specified np2m base.
> - * Automatically destroys and re-initializes a p2m if none found.
> - * If np2m_base == 0 then v->arch.hvm_vcpu.guest_cr[3] is used.
> +/*
> + * Assigns an np2m with the specified np2m_base to the specified vCPU
> + * and returns that np2m.
>   */
>  struct p2m_domain *p2m_get_nestedp2m(struct vcpu *v, uint64_t np2m_base);
>  
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 02/14] x86/np2m: add np2m_flush_base()
  2017-09-04  8:14 ` [PATCH v1 02/14] x86/np2m: add np2m_flush_base() Sergey Dyasli
@ 2017-09-28 14:01   ` George Dunlap
  0 siblings, 0 replies; 22+ messages in thread
From: George Dunlap @ 2017-09-28 14:01 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
	Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit

On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> The new function finds all np2m objects with the specified np2m_base
> and flushes them.
> 
> Convert p2m_flush_table() into p2m_flush_table_locked() in order not to
> release the p2m_lock after the np2m_base check.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

> ---
> RFC --> v1:
> - p2m_unlock(p2m) is moved from p2m_flush_table_locked() to
>   p2m_flush_table() for balanced lock/unlock
> - np2m_flush_eptp() is renamed to np2m_flush_base()
> 
>  xen/arch/x86/mm/p2m.c     | 35 +++++++++++++++++++++++++++++------
>  xen/include/asm-x86/p2m.h |  2 ++
>  2 files changed, 31 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index b8c8bba421..94a42400ad 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1708,15 +1708,14 @@ p2m_getlru_nestedp2m(struct domain *d, struct p2m_domain *p2m)
>      return p2m;
>  }
>  
> -/* Reset this p2m table to be empty */
>  static void
> -p2m_flush_table(struct p2m_domain *p2m)
> +p2m_flush_table_locked(struct p2m_domain *p2m)
>  {
>      struct page_info *top, *pg;
>      struct domain *d = p2m->domain;
>      mfn_t mfn;
>  
> -    p2m_lock(p2m);
> +    ASSERT(p2m_locked_by_me(p2m));
>  
>      /*
>       * "Host" p2m tables can have shared entries &c that need a bit more care
> @@ -1729,10 +1728,7 @@ p2m_flush_table(struct p2m_domain *p2m)
>  
>      /* No need to flush if it's already empty */
>      if ( p2m_is_nestedp2m(p2m) && p2m->np2m_base == P2M_BASE_EADDR )
> -    {
> -        p2m_unlock(p2m);
>          return;
> -    }
>  
>      /* This is no longer a valid nested p2m for any address space */
>      p2m->np2m_base = P2M_BASE_EADDR;
> @@ -1752,7 +1748,14 @@ p2m_flush_table(struct p2m_domain *p2m)
>              d->arch.paging.free_page(d, pg);
>      }
>      page_list_add(top, &p2m->pages);
> +}
>  
> +/* Reset this p2m table to be empty */
> +static void
> +p2m_flush_table(struct p2m_domain *p2m)
> +{
> +    p2m_lock(p2m);
> +    p2m_flush_table_locked(p2m);
>      p2m_unlock(p2m);
>  }
>  
> @@ -1773,6 +1776,26 @@ p2m_flush_nestedp2m(struct domain *d)
>          p2m_flush_table(d->arch.nested_p2m[i]);
>  }
>  
> +void np2m_flush_base(struct vcpu *v, unsigned long np2m_base)
> +{
> +    struct domain *d = v->domain;
> +    struct p2m_domain *p2m;
> +    unsigned int i;
> +
> +    np2m_base &= ~(0xfffull);
> +
> +    nestedp2m_lock(d);
> +    for ( i = 0; i < MAX_NESTEDP2M; i++ )
> +    {
> +        p2m = d->arch.nested_p2m[i];
> +        p2m_lock(p2m);
> +        if ( p2m->np2m_base == np2m_base )
> +            p2m_flush_table_locked(p2m);
> +        p2m_unlock(p2m);
> +    }
> +    nestedp2m_unlock(d);
> +}
> +
>  static void assign_np2m(struct vcpu *v, struct p2m_domain *p2m)
>  {
>      struct nestedvcpu *nv = &vcpu_nestedhvm(v);
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 9086bb35dc..cfb00591cd 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -779,6 +779,8 @@ int p2m_pt_handle_deferred_changes(uint64_t gpa);
>  void p2m_flush(struct vcpu *v, struct p2m_domain *p2m);
>  /* Flushes all nested p2m tables */
>  void p2m_flush_nestedp2m(struct domain *d);
> +/* Flushes all np2m objects with the specified np2m_base */
> +void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
>  
>  void nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
>      l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m
  2017-09-04  8:14 ` [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m Sergey Dyasli
@ 2017-09-29 10:53   ` George Dunlap
  2017-09-29 13:39     ` Sergey Dyasli
  0 siblings, 1 reply; 22+ messages in thread
From: George Dunlap @ 2017-09-29 10:53 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
	Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit

On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> If an IPI flushes vCPU's np2m object just before nested vmentry, there
> will be a stale shadow EPTP value in VMCS02. Allow vmentry to be
> restarted in such cases and add nvmx_eptp_update() to perform an update.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> ---
>  xen/arch/x86/hvm/vmx/entry.S |  6 ++++++
>  xen/arch/x86/hvm/vmx/vmx.c   |  8 +++++++-
>  xen/arch/x86/hvm/vmx/vvmx.c  | 14 ++++++++++++++
>  3 files changed, 27 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
> index 53eedc6363..9fb8f89220 100644
> --- a/xen/arch/x86/hvm/vmx/entry.S
> +++ b/xen/arch/x86/hvm/vmx/entry.S
> @@ -79,6 +79,8 @@ UNLIKELY_END(realmode)
>  
>          mov  %rsp,%rdi
>          call vmx_vmenter_helper
> +        cmp  $0,%eax
> +        jne .Lvmx_vmentry_restart
>          mov  VCPU_hvm_guest_cr2(%rbx),%rax
>  
>          pop  %r15
> @@ -117,6 +119,10 @@ ENTRY(vmx_asm_do_vmentry)
>          GET_CURRENT(bx)
>          jmp  .Lvmx_do_vmentry
>  
> +.Lvmx_vmentry_restart:
> +        sti
> +        jmp  .Lvmx_do_vmentry
> +
>  .Lvmx_goto_emulator:
>          sti
>          mov  %rsp,%rdi
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index f6da119c9f..06509590b7 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -4223,13 +4223,17 @@ static void lbr_fixup(void)
>          bdw_erratum_bdf14_fixup();
>  }
>  
> -void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> +int vmx_vmenter_helper(const struct cpu_user_regs *regs)
>  {
>      struct vcpu *curr = current;
>      u32 new_asid, old_asid;
>      struct hvm_vcpu_asid *p_asid;
>      bool_t need_flush;
>  
> +    /* Shadow EPTP can't be updated here because irqs are disabled */
> +     if ( nestedhvm_vcpu_in_guestmode(curr) && vcpu_nestedhvm(curr).stale_np2m )
> +         return 1;
> +
>      if ( curr->domain->arch.hvm_domain.pi_ops.do_resume )
>          curr->domain->arch.hvm_domain.pi_ops.do_resume(curr);
>  
> @@ -4290,6 +4294,8 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
>      __vmwrite(GUEST_RIP,    regs->rip);
>      __vmwrite(GUEST_RSP,    regs->rsp);
>      __vmwrite(GUEST_RFLAGS, regs->rflags | X86_EFLAGS_MBS);
> +
> +    return 0;
>  }
>  
>  /*
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index ea2da14489..26ce349c76 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1405,12 +1405,26 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
>      vmsucceed(regs);
>  }
>  
> +static void nvmx_eptp_update(void)
> +{
> +    if ( !nestedhvm_vcpu_in_guestmode(current) ||
> +          vcpu_nestedhvm(current).nv_vmexit_pending ||
> +         !vcpu_nestedhvm(current).stale_np2m ||
> +         !nestedhvm_paging_mode_hap(current) )
> +        return;
> +
> +    __vmwrite(EPT_POINTER, get_shadow_eptp(current));
> +    vcpu_nestedhvm(current).stale_np2m = false;

Hmm, so interrupts are enabled here.  What happens if a flush IPI occurs
between these two lines of code?  Won't we do the vmenter with a stale np2m?

It seems like we should clear stale_np2m first.  If an IPI occurs then,
we'll end up re-executing the vmenter unnecessarily, but it's better to
do that than to not re-execute it when we need to.
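
A sketch of that ordering, reusing the two statements from the hunk
above:

    /*
     * Clear the flag first: a flush IPI arriving after this point
     * re-sets stale_np2m and merely forces one more vmentry restart,
     * instead of being lost behind an already-written EPT_POINTER.
     */
    vcpu_nestedhvm(current).stale_np2m = false;
    __vmwrite(EPT_POINTER, get_shadow_eptp(current));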

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m
  2017-09-29 10:53   ` George Dunlap
@ 2017-09-29 13:39     ` Sergey Dyasli
  0 siblings, 0 replies; 22+ messages in thread
From: Sergey Dyasli @ 2017-09-29 13:39 UTC (permalink / raw)
  Cc: Sergey Dyasli, Kevin Tian, jbeulich, jun.nakajima, Andrew Cooper,
	Tim (Xen.org),
	George Dunlap, xen-devel, suravee.suthikulpanit, boris.ostrovsky

On Fri, 2017-09-29 at 11:53 +0100, George Dunlap wrote:
> On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> > If an IPI flushes vCPU's np2m object just before nested vmentry, there
> > will be a stale shadow EPTP value in VMCS02. Allow vmentry to be
> > restarted in such cases and add nvmx_eptp_update() to perform an update.
> > 
> > Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> > ---
> >  xen/arch/x86/hvm/vmx/entry.S |  6 ++++++
> >  xen/arch/x86/hvm/vmx/vmx.c   |  8 +++++++-
> >  xen/arch/x86/hvm/vmx/vvmx.c  | 14 ++++++++++++++
> >  3 files changed, 27 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
> > index 53eedc6363..9fb8f89220 100644
> > --- a/xen/arch/x86/hvm/vmx/entry.S
> > +++ b/xen/arch/x86/hvm/vmx/entry.S
> > @@ -79,6 +79,8 @@ UNLIKELY_END(realmode)
> >  
> >          mov  %rsp,%rdi
> >          call vmx_vmenter_helper
> > +        cmp  $0,%eax
> > +        jne .Lvmx_vmentry_restart
> >          mov  VCPU_hvm_guest_cr2(%rbx),%rax
> >  
> >          pop  %r15
> > @@ -117,6 +119,10 @@ ENTRY(vmx_asm_do_vmentry)
> >          GET_CURRENT(bx)
> >          jmp  .Lvmx_do_vmentry
> >  
> > +.Lvmx_vmentry_restart:
> > +        sti
> > +        jmp  .Lvmx_do_vmentry
> > +
> >  .Lvmx_goto_emulator:
> >          sti
> >          mov  %rsp,%rdi
> > diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> > index f6da119c9f..06509590b7 100644
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -4223,13 +4223,17 @@ static void lbr_fixup(void)
> >          bdw_erratum_bdf14_fixup();
> >  }
> >  
> > -void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> > +int vmx_vmenter_helper(const struct cpu_user_regs *regs)
> >  {
> >      struct vcpu *curr = current;
> >      u32 new_asid, old_asid;
> >      struct hvm_vcpu_asid *p_asid;
> >      bool_t need_flush;
> >  
> > +    /* Shadow EPTP can't be updated here because irqs are disabled */
> > +     if ( nestedhvm_vcpu_in_guestmode(curr) && vcpu_nestedhvm(curr).stale_np2m )
> > +         return 1;
> > +
> >      if ( curr->domain->arch.hvm_domain.pi_ops.do_resume )
> >          curr->domain->arch.hvm_domain.pi_ops.do_resume(curr);
> >  
> > @@ -4290,6 +4294,8 @@ void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> >      __vmwrite(GUEST_RIP,    regs->rip);
> >      __vmwrite(GUEST_RSP,    regs->rsp);
> >      __vmwrite(GUEST_RFLAGS, regs->rflags | X86_EFLAGS_MBS);
> > +
> > +    return 0;
> >  }
> >  
> >  /*
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> > index ea2da14489..26ce349c76 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1405,12 +1405,26 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
> >      vmsucceed(regs);
> >  }
> >  
> > +static void nvmx_eptp_update(void)
> > +{
> > +    if ( !nestedhvm_vcpu_in_guestmode(current) ||
> > +          vcpu_nestedhvm(current).nv_vmexit_pending ||
> > +         !vcpu_nestedhvm(current).stale_np2m ||
> > +         !nestedhvm_paging_mode_hap(current) )
> > +        return;
> > +
> > +    __vmwrite(EPT_POINTER, get_shadow_eptp(current));
> > +    vcpu_nestedhvm(current).stale_np2m = false;
> 
> Hmm, so interrupts are enabled here.  What happens if a flush IPI occurs
> between these two lines of code?  Won't we do the vmenter with a stale np2m?
> 
> It seems like we should clear stale_np2m first.  If an IPI occurs then,
> we'll end up re-executing the vmenter unnecessarily, but it's better to
> do that than to not re-execute it when we need to.

Good catch! Clearing of stale_np2m must indeed happen before updating
a shadow EPTP.

-- 
Thanks,
Sergey
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs
  2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
                   ` (13 preceding siblings ...)
  2017-09-04  8:14 ` [PATCH v1 14/14] x86/vvmx: remove EPTP write from ept_handle_violation() Sergey Dyasli
@ 2017-09-29 15:01 ` George Dunlap
  14 siblings, 0 replies; 22+ messages in thread
From: George Dunlap @ 2017-09-29 15:01 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel
  Cc: Kevin Tian, Jun Nakajima, George Dunlap, Andrew Cooper,
	Tim Deegan, Jan Beulich, Boris Ostrovsky, Suravee Suthikulpanit

On 09/04/2017 09:14 AM, Sergey Dyasli wrote:
> Nested p2m (shadow EPT) is an object that stores memory address
> translations from L2 GPA directly to L0 HPA. This is achieved by
> combining together L1 EPT with L0 EPT during L2 EPT violations.
> 
> In the usual case, L1 uses the same EPTP value in VMCS12 for all vCPUs
> of a L2 guest. But unfortunately, in current Xen's implementation, each
> vCPU has its own n2pm object which cannot be shared with other vCPUs.
> This leads to the following issues if a nested guest has SMP:
> 
>     1. There will be multiple np2m objects (1 per nested vCPU) with
>        the same np2m_base (L1 EPTP value in VMCS12).
> 
>     2. Same EPT violations will be processed independently by each vCPU.
> 
>     3. Since MAX_NESTEDP2M is defined as 10, if a domain has more than
>        10 nested vCPUs, performance will be extremely degraded due to
>        constant np2m LRU list thrashing and np2m flushing.
> 
> This patch series makes it possible to share one np2m object between
> different vCPUs that have the same np2m_base. Sharing of np2m objects
> improves scalability of a domain from 10 nested vCPUs to 10 nested
> guests (with arbitrary number of vCPUs per guest).

Sergey,

With the exception of the ordering issue in patch 7, I think this series is
largely correct.

However, the way the series was laid out made it fairly difficult to
understand what the code was meant to be doing; it was often difficult to
see the forest for the trees, because changes were scattered across
several patches.  The worst of this was the dirty_cpumask / flushing
improvement, which was scattered across patches 5, 8, and 11.

In an effort to make sure I understood what was going on, I reorganized
the series in my own tree, merging many patches and re-writing the
commit messages in a format which makes it easier to verify the patch
(What's the situation, why is that a problem, what do we do to fix it).
I'll send this series as v2 -- could you read through it and make sure
I've gotten the main point of all the patches?

Thanks,
 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2017-09-29 15:01 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-04  8:14 [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 01/14] x86/np2m: refactor p2m_get_nestedp2m() Sergey Dyasli
2017-09-28 14:00   ` George Dunlap
2017-09-04  8:14 ` [PATCH v1 02/14] x86/np2m: add np2m_flush_base() Sergey Dyasli
2017-09-28 14:01   ` George Dunlap
2017-09-04  8:14 ` [PATCH v1 03/14] x86/vvmx: use np2m_flush_base() for INVEPT_SINGLE_CONTEXT Sergey Dyasli
2017-09-26 16:05   ` George Dunlap
2017-09-04  8:14 ` [PATCH v1 04/14] x86/np2m: remove np2m_base from p2m_get_nestedp2m() Sergey Dyasli
2017-09-26 16:06   ` George Dunlap
2017-09-04  8:14 ` [PATCH v1 05/14] x86/np2m: add np2m_generation Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 06/14] x86/np2m: add stale_np2m flag Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 07/14] x86/vvmx: restart nested vmentry in case of stale_np2m Sergey Dyasli
2017-09-29 10:53   ` George Dunlap
2017-09-29 13:39     ` Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 08/14] x86/np2m: add np2m_schedule() Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 09/14] x86/np2m: add p2m_get_nestedp2m_locked() Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 10/14] x86/np2m: improve nestedhvm_hap_nested_page_fault() Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 11/14] x86/np2m: implement sharing of np2m between vCPUs Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 12/14] x86/np2m: refactor p2m_get_nestedp2m_locked() Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 13/14] x86/np2m: add break to np2m_flush_eptp() Sergey Dyasli
2017-09-04  8:14 ` [PATCH v1 14/14] x86/vvmx: remove EPTP write from ept_handle_violation() Sergey Dyasli
2017-09-29 15:01 ` [PATCH v1 00/14] Nested p2m: allow sharing between vCPUs George Dunlap
