* [PATCH] x86/hvm/viridian: fix the TLB flush hypercall
@ 2016-03-16 13:00 Paul Durrant
  2016-03-16 13:20 ` Andrew Cooper
  2016-03-16 13:31 ` Jan Beulich
  0 siblings, 2 replies; 9+ messages in thread
From: Paul Durrant @ 2016-03-16 13:00 UTC
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Keir Fraser, Jan Beulich

Commit b38d426a "flush remote tlbs by hypercall" added support to allow
Windows to request a flush of remote TLBs via hypercall rather than by
IPI. Unfortunately this code was broken in a couple of ways:

1) The allocation of the per-vcpu flush mask is gated on whether the
   domain has viridian features enabled, but the call to allocate is
   made before the toolstack has enabled those features. This results
   in a NULL pointer dereference (see the sketch after this list).

2) One of the flush hypercall variants is a rep op, but the code
   does not update the output data with the number of reps completed.
   Hence the guest spins, repeatedly making the hypercall, because it
   believes it still has uncompleted reps (see the ABI sketch below).
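
As an illustrative sketch of the fix for (1) — not the exact code in
viridian.c, and the flush_cpumask field name here is assumed — the
per-vcpu mask is now allocated unconditionally at vCPU initialisation,
since the toolstack only enables viridian features afterwards:

    int viridian_vcpu_init(struct vcpu *v)
    {
        /*
         * Allocate unconditionally: gating this on is_viridian_domain()
         * leaves the mask NULL, because the toolstack enables viridian
         * features only after vCPU initialisation has completed.
         */
        if ( !alloc_cpumask_var(&v->arch.hvm_vcpu.viridian.flush_cpumask) )
            return -ENOMEM;

        return 0;
    }

    void viridian_vcpu_deinit(struct vcpu *v)
    {
        /* Freeing is safe even if the mask was never allocated. */
        free_cpumask_var(v->arch.hvm_vcpu.viridian.flush_cpumask);
    }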

This patch fixes both of these issues and also adds a check to make
sure the current vCPU is not included in the flush mask (since there's
clearly no need for the CPU to IPI itself).
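
For reference on (2), the rep semantics come from the Hyper-V hypercall
ABI: the input value packs a rep count and a rep start index, and the
output value must report the reps completed, otherwise the guest assumes
no progress was made and re-issues the call forever. A sketch of the
packing, per the Hyper-V spec (the union and field names here are
illustrative):

    #include <stdint.h>

    union hypercall_input {
        uint64_t raw;
        struct {
            uint16_t call_code;
            uint16_t fast:1;       /* register-based calling convention? */
            uint16_t rsvd1:15;
            uint16_t rep_count:12; /* total reps requested by the guest */
            uint16_t rsvd2:4;
            uint16_t rep_start:12; /* index of the first rep to process */
            uint16_t rsvd3:4;
        };
    };

    union hypercall_output {
        uint64_t raw;
        struct {
            uint16_t result;          /* HV_STATUS_* code */
            uint16_t rsvd1;
            uint32_t rep_complete:12; /* reps completed; must be filled in */
            uint32_t rsvd2:20;
        };
    };

Since this hypercall processes all requested reps in one go, setting
output.rep_complete = input.rep_count tells the guest that everything
is done.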

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/hvm.c      | 12 ++++--------
 xen/arch/x86/hvm/viridian.c |  4 +++-
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5bc2812..f5c55e1 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2576,12 +2576,9 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc != 0 )
         goto fail6;
 
-    if ( is_viridian_domain(d) )
-    {
-        rc = viridian_vcpu_init(v);
-        if ( rc != 0 )
-            goto fail7;
-    }
+    rc = viridian_vcpu_init(v);
+    if ( rc != 0 )
+        goto fail7;
 
     if ( v->vcpu_id == 0 )
     {
@@ -2615,8 +2612,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
 void hvm_vcpu_destroy(struct vcpu *v)
 {
-    if ( is_viridian_domain(v->domain) )
-        viridian_vcpu_deinit(v);
+    viridian_vcpu_deinit(v);
 
     hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
 
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 6bd844b..6530a67 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -645,7 +645,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                 continue;
 
             hvm_asid_flush_vcpu(v);
-            if ( v->is_running )
+            if ( v != curr && v->is_running )
                 __cpumask_set_cpu(v->processor, pcpu_mask);
         }
 
@@ -658,6 +658,8 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         if ( !cpumask_empty(pcpu_mask) )
             flush_tlb_mask(pcpu_mask);
 
+        output.rep_complete = input.rep_count;
+
         status = HV_STATUS_SUCCESS;
         break;
     }
-- 
2.1.4


