* [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
@ 2016-03-16 14:21 Paul Durrant
  2016-03-16 15:35 ` Jan Beulich
  2016-03-17  8:35 ` Jan Beulich
  0 siblings, 2 replies; 7+ messages in thread
From: Paul Durrant @ 2016-03-16 14:21 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Keir Fraser, Jan Beulich

Commit b38d426a "flush remote tlbs by hypercall" added support to allow
Windows to request a flush of remote TLBs via hypercall rather than IPI.
Unfortunately it seems that this code was broken in a couple of ways:

1) The allocation of the per-vcpu ipi mask is gated on whether the
   domain has viridian features enabled but the call to allocate is
   made before the toolstack has enabled those features. This results
   in a NULL pointer dereference.

2) One of the flush hypercall variants is a rep op, but the code
   does not update the output data with the reps completed. Hence the
   guest will spin repeatedly making the hypercall because it believes
   it has uncompleted reps.

This patch fixes both of these issues as follows:

1) The ipi mask need only be per-pcpu so it is made a per-pcpu static
   to avoid the need for allocation.

2) The rep complete count is updated to the rep count since the single
   flush that Xen does covers all reps anyway.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---

v2:
 - Move to per-pcpu ipi mask.
 - Use smp_send_event_check_mask() to IPI rather than flush_tlb_mask().
---
 xen/arch/x86/hvm/hvm.c             | 12 ------------
 xen/arch/x86/hvm/viridian.c        | 19 ++++++-------------
 xen/include/asm-x86/hvm/viridian.h |  4 ----
 3 files changed, 6 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5bc2812..4ea51d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2576,13 +2576,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc != 0 )
         goto fail6;
 
-    if ( is_viridian_domain(d) )
-    {
-        rc = viridian_vcpu_init(v);
-        if ( rc != 0 )
-            goto fail7;
-    }
-
     if ( v->vcpu_id == 0 )
     {
         /* NB. All these really belong in hvm_domain_initialise(). */
@@ -2597,8 +2590,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
     return 0;
 
- fail7:
-    hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
  fail6:
     nestedhvm_vcpu_destroy(v);
  fail5:
@@ -2615,9 +2606,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
 void hvm_vcpu_destroy(struct vcpu *v)
 {
-    if ( is_viridian_domain(v->domain) )
-        viridian_vcpu_deinit(v);
-
     hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
 
     if ( hvm_altp2m_supported() )
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 6bd844b..1ee22aa 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -521,16 +521,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
     return 1;
 }
 
-int viridian_vcpu_init(struct vcpu *v)
-{
-    return alloc_cpumask_var(&v->arch.hvm_vcpu.viridian.flush_cpumask) ?
-           0 : -ENOMEM;
-}
-
-void viridian_vcpu_deinit(struct vcpu *v)
-{
-    free_cpumask_var(v->arch.hvm_vcpu.viridian.flush_cpumask);
-}
+static DEFINE_PER_CPU(cpumask_t, ipi_cpumask);
 
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
@@ -627,7 +618,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
             input_params.vcpu_mask = ~0ul;
 
-        pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;
+        pcpu_mask = &this_cpu(ipi_cpumask);
         cpumask_clear(pcpu_mask);
 
         /*
@@ -645,7 +636,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                 continue;
 
             hvm_asid_flush_vcpu(v);
-            if ( v->is_running )
+            if ( v != curr && v->is_running )
                 __cpumask_set_cpu(v->processor, pcpu_mask);
         }
 
@@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * so we may unnecessarily IPI some CPUs.
          */
         if ( !cpumask_empty(pcpu_mask) )
-            flush_tlb_mask(pcpu_mask);
+            smp_send_event_check_mask(pcpu_mask);
+
+        output.rep_complete = input.rep_count;
 
         status = HV_STATUS_SUCCESS;
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 2eec85e..c4319d7 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -22,7 +22,6 @@ union viridian_apic_assist
 struct viridian_vcpu
 {
     union viridian_apic_assist apic_assist;
-    cpumask_var_t flush_cpumask;
 };
 
 union viridian_guest_os_id
@@ -118,9 +117,6 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
-int viridian_vcpu_init(struct vcpu *v);
-void viridian_vcpu_deinit(struct vcpu *v);
-
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
 
 /*
-- 
2.1.4



* Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
  2016-03-16 14:21 [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall Paul Durrant
@ 2016-03-16 15:35 ` Jan Beulich
  2016-03-16 17:35   ` Paul Durrant
  2016-03-17  8:35 ` Jan Beulich
  1 sibling, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2016-03-16 15:35 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Keir Fraser, xen-devel

>>> On 16.03.16 at 15:21, <paul.durrant@citrix.com> wrote:
> v2:
>  - Move to per-pcpu ipi mask.
>  - Use smp_send_event_check_mask() to IPI rather than flush_tlb_mask().
> ---
>  xen/arch/x86/hvm/hvm.c             | 12 ------------
>  xen/arch/x86/hvm/viridian.c        | 19 ++++++-------------
>  xen/include/asm-x86/hvm/viridian.h |  4 ----
>  3 files changed, 6 insertions(+), 29 deletions(-)

Quite nice for a bug fix.

> @@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>           * so we may unnecessarily IPI some CPUs.
>           */
>          if ( !cpumask_empty(pcpu_mask) )
> -            flush_tlb_mask(pcpu_mask);
> +            smp_send_event_check_mask(pcpu_mask);
> +
> +        output.rep_complete = input.rep_count;

Questions on this one remain: Why only for this hypercall? And
what does "repeat count" mean in this context?

Jan



* Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
  2016-03-16 15:35 ` Jan Beulich
@ 2016-03-16 17:35   ` Paul Durrant
  2016-03-17  8:11     ` Jan Beulich
  0 siblings, 1 reply; 7+ messages in thread
From: Paul Durrant @ 2016-03-16 17:35 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Keir (Xen.org), xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 16 March 2016 15:36
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@lists.xenproject.org; Keir (Xen.org)
> Subject: Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
> 
> >>> On 16.03.16 at 15:21, <paul.durrant@citrix.com> wrote:
> > v2:
> >  - Move to per-pcpu ipi mask.
> >  - Use smp_send_event_check_mask() to IPI rather than flush_tlb_mask().
> > ---
> >  xen/arch/x86/hvm/hvm.c             | 12 ------------
> >  xen/arch/x86/hvm/viridian.c        | 19 ++++++-------------
> >  xen/include/asm-x86/hvm/viridian.h |  4 ----
> >  3 files changed, 6 insertions(+), 29 deletions(-)
> 
> Quite nice for a bug fix.
> 
> > @@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs *regs)
> >           * so we may unnecessarily IPI some CPUs.
> >           */
> >          if ( !cpumask_empty(pcpu_mask) )
> > -            flush_tlb_mask(pcpu_mask);
> > +            smp_send_event_check_mask(pcpu_mask);
> > +
> > +        output.rep_complete = input.rep_count;
> 
> Questions on this one remain: Why only for this hypercall? And
> what does "repeat count" mean in this context?
> 

It's only for this hypercall because it's the only 'rep' hypercall we implement. For non-rep hypercalls the spec states that the rep count and starting index in the input params must be zero. It does not state what the value of reps complete should be on output for non-rep hypercalls but I think it's safe to assume that zero is correct.
For rep hypercalls the spec says that on output "the reps complete field is the total number of reps complete and not relative to the rep start index. For example, if the caller specified a rep start index of 5, and a rep count of 10, the reps complete field would indicate 10 upon successful completion".

Section 12.4.3 of the spec defines the HvFlushVirtualAddressList hypercall as a rep hypercall and each rep refers to flush of a single guest VA range. Because we invalidate all VA ranges in one go clearly we complete all reps straight away :-)
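
For reference, the rep fields are just bitfields in the hypercall input and output values; a rough sketch (along the lines of the input/output unions already used in viridian_hypercall() -- field widths from memory, so treat as illustrative):

  union hypercall_input {
      uint64_t raw;
      struct {
          uint16_t call_code;
          uint16_t flag_fast:1;
          uint16_t rsvd1:15;
          uint16_t rep_count:12;
          uint16_t rsvd2:4;
          uint16_t rep_start:12;
          uint16_t rsvd3:4;
      };
  };

  union hypercall_output {
      uint64_t raw;
      struct {
          uint16_t result;
          uint16_t rsvd1;
          uint32_t rep_complete:12;
          uint32_t rsvd2:20;
      };
  };

  /* The guest retries until rep_complete accounts for rep_count, so a
   * flush that covers every VA range at once can report all reps done: */
  output.rep_complete = input.rep_count;

In other words the guest keeps re-issuing the hypercall (with an updated rep start index) until the reps complete value it gets back covers everything it asked for, which is why omitting the update made it spin.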

  Paul

> Jan



* Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
  2016-03-16 17:35   ` Paul Durrant
@ 2016-03-17  8:11     ` Jan Beulich
  2016-03-17  8:14       ` Paul Durrant
  0 siblings, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2016-03-17  8:11 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Keir (Xen.org), xen-devel

>>> On 16.03.16 at 18:35, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 16 March 2016 15:36
>> >>> On 16.03.16 at 15:21, <paul.durrant@citrix.com> wrote:
>> > @@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>> >           * so we may unnecessarily IPI some CPUs.
>> >           */
>> >          if ( !cpumask_empty(pcpu_mask) )
>> > -            flush_tlb_mask(pcpu_mask);
>> > +            smp_send_event_check_mask(pcpu_mask);
>> > +
>> > +        output.rep_complete = input.rep_count;
>> 
>> Questions on this one remain: Why only for this hypercall? And
>> what does "repeat count" mean in this context?
>> 
> 
> It's only for this hypercall because it's the only 'rep' hypercall we 
> implement. For non-rep hypercalls the spec states that the rep count and 
> starting index in the input params must be zero. It does not state what the 
> value of reps complete should be on output for non-rep hypercalls but I think 
> it's safe to assume that zero is correct.
> For rep hypercalls the spec says that on output "the reps complete field is 
> the total number of reps complete and not relative to the rep start index. 
> For example, if the caller specified a rep start index of 5, and a rep count 
> of 10, the reps complete field would indicate 10 upon successful completion".
> 
> Section 12.4.3 of the spec defines the HvFlushVirtualAddressList hypercall 
> as a rep hypercall and each rep refers to flush of a single guest VA range. 
> Because we invalidate all VA ranges in one go clearly we complete all reps 
> straight away :-)

Ah, there's an address list associated with it. So if the flush
request was just for a single page, isn't a flush-all then pretty
heavy handed?

Jan



* Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
  2016-03-17  8:11     ` Jan Beulich
@ 2016-03-17  8:14       ` Paul Durrant
  0 siblings, 0 replies; 7+ messages in thread
From: Paul Durrant @ 2016-03-17  8:14 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Keir (Xen.org), xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 17 March 2016 08:12
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@lists.xenproject.org; Keir (Xen.org)
> Subject: RE: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
> 
> >>> On 16.03.16 at 18:35, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 16 March 2016 15:36
> >> >>> On 16.03.16 at 15:21, <paul.durrant@citrix.com> wrote:
> >> > @@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs
> *regs)
> >> >           * so we may unnecessarily IPI some CPUs.
> >> >           */
> >> >          if ( !cpumask_empty(pcpu_mask) )
> >> > -            flush_tlb_mask(pcpu_mask);
> >> > +            smp_send_event_check_mask(pcpu_mask);
> >> > +
> >> > +        output.rep_complete = input.rep_count;
> >>
> >> Questions on this one remain: Why only for this hypercall? And
> >> what does "repeat count" mean in this context?
> >>
> >
> > It's only for this hypercall because it's the only 'rep' hypercall we
> > implement. For non-rep hypercalls the spec states that the rep count and
> > starting index in the input params must be zero. It does not state what the
> > value of reps complete should be on output for non-rep hypercalls but I
> think
> > it's safe to assume that zero is correct.
> > For rep hypercalls the spec says that on output "the reps complete field is
> > the total number of reps complete and not relative to the rep start index.
> > For example, if the caller specified a rep start index of 5, and a rep count
> > of 10, the reps complete field would indicate 10 upon successful
> completion".
> >
> > Section 12.4.3 of the spec defines the HvFlushVirtualAddressList hypercall
> > as a rep hypercall and each rep refers to flush of a single guest VA range.
> > Because we invalidate all VA ranges in one go clearly we complete all reps
> > straight away :-)
> 
> Ah, there's an address list associated with it. So if the flush
> request was just for a single page, isn't a flush-all then pretty
> heavy handed?
> 

Yes, it is overkill, but it's probably still less expensive than waking up a de-scheduled vCPU to flush a single page and possibly still less expensive than an IPI to do the same.

  Paul

> Jan



* Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
  2016-03-16 14:21 [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall Paul Durrant
  2016-03-16 15:35 ` Jan Beulich
@ 2016-03-17  8:35 ` Jan Beulich
  2016-03-17 10:30   ` Andrew Cooper
  1 sibling, 1 reply; 7+ messages in thread
From: Jan Beulich @ 2016-03-17  8:35 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Keir Fraser, xen-devel

>>> On 16.03.16 at 15:21, <paul.durrant@citrix.com> wrote:
> Commit b38d426a "flush remote tlbs by hypercall" added support to allow
> Windows to request a flush of remote TLBs via hypercall rather than IPI.
> Unfortunately it seems that this code was broken in a couple of ways:
> 
> 1) The allocation of the per-vcpu ipi mask is gated on whether the
>    domain has viridian features enabled but the call to allocate is
>    made before the toolstack has enabled those features. This results
>    in a NULL pointer dereference.
> 
> 2) One of the flush hypercall variants is a rep op, but the code
>    does not update the output data with the reps completed. Hence the
>    guest will spin repeatedly making the hypercall because it believes
>    it has uncompleted reps.
> 
> This patch fixes both of these issues as follows:
> 
> 1) The ipi mask need only be per-pcpu so it is made a per-pcpu static
>    to avoid the need for allocation.
> 
> 2) The rep complete count is updated to the rep count since the single
>    flush that Xen does covers all reps anyway.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
  2016-03-17  8:35 ` Jan Beulich
@ 2016-03-17 10:30   ` Andrew Cooper
  0 siblings, 0 replies; 7+ messages in thread
From: Andrew Cooper @ 2016-03-17 10:30 UTC (permalink / raw)
  To: Jan Beulich, Paul Durrant; +Cc: xen-devel, Keir Fraser

On 17/03/16 08:35, Jan Beulich wrote:
>>>> On 16.03.16 at 15:21, <paul.durrant@citrix.com> wrote:
>> Commit b38d426a "flush remote tlbs by hypercall" added support to allow
>> Windows to request a flush of remote TLBs via hypercall rather than IPI.
>> Unfortunately it seems that this code was broken in a couple of ways:
>>
>> 1) The allocation of the per-vcpu ipi mask is gated on whether the
>>    domain has viridian features enabled but the call to allocate is
>>    made before the toolstack has enabled those features. This results
>>    in a NULL pointer dereference.
>>
>> 2) One of the flush hypercall variants is a rep op, but the code
>>    does not update the output data with the reps completed. Hence the
>>    guest will spin repeatedly making the hypercall because it believes
>>    it has uncompleted reps.
>>
>> This patch fixes both of these issues as follows:
>>
>> 1) The ipi mask need only be per-pcpu so it is made a per-pcpu static
>>    to avoid the need for allocation.
>>
>> 2) The rep complete count is updated to the rep count since the single
>>    flush that Xen does covers all reps anyway.
>>
>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

