From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul.durrant@citrix.com>, Keir Fraser <keir@xen.org>, Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86/hvm/viridian: fix the TLB flush hypercall
Date: Wed, 16 Mar 2016 14:21:41 +0000
Message-ID: <1458138101-1466-1-git-send-email-paul.durrant@citrix.com>

Commit b38d426a "flush remote tlbs by hypercall" added support to allow
Windows to request a flush of remote TLBs via hypercall rather than IPI.
Unfortunately this code was broken in a couple of ways:

1) The allocation of the per-vcpu ipi mask is gated on whether the
   domain has viridian features enabled, but the call to allocate is
   made before the toolstack has enabled those features. This results
   in a NULL pointer dereference.

2) One of the flush hypercall variants is a rep op, but the code does
   not update the output data with the reps completed. Hence the guest
   will spin repeatedly making the hypercall because it believes it has
   uncompleted reps.

This patch fixes both of these issues as follows:

1) The ipi mask need only be per-pcpu, so it is made a per-pcpu static
   to avoid the need for allocation.

2) The rep complete count is updated to the rep count, since the single
   flush that Xen does covers all reps anyway.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2:
 - Move to per-pcpu ipi mask.
 - Use smp_send_event_check_mask() to IPI rather than flush_tlb_mask().
---
 xen/arch/x86/hvm/hvm.c             | 12 ------------
 xen/arch/x86/hvm/viridian.c        | 19 ++++++-------------
 xen/include/asm-x86/hvm/viridian.h |  4 ----
 3 files changed, 6 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5bc2812..4ea51d7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2576,13 +2576,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc != 0 )
         goto fail6;
 
-    if ( is_viridian_domain(d) )
-    {
-        rc = viridian_vcpu_init(v);
-        if ( rc != 0 )
-            goto fail7;
-    }
-
     if ( v->vcpu_id == 0 )
     {
         /* NB. All these really belong in hvm_domain_initialise(). */
@@ -2597,8 +2590,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
     return 0;
 
- fail7:
-    hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
  fail6:
     nestedhvm_vcpu_destroy(v);
  fail5:
@@ -2615,9 +2606,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
 void hvm_vcpu_destroy(struct vcpu *v)
 {
-    if ( is_viridian_domain(v->domain) )
-        viridian_vcpu_deinit(v);
-
     hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
 
     if ( hvm_altp2m_supported() )
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 6bd844b..1ee22aa 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -521,16 +521,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
     return 1;
 }
 
-int viridian_vcpu_init(struct vcpu *v)
-{
-    return alloc_cpumask_var(&v->arch.hvm_vcpu.viridian.flush_cpumask) ?
-           0 : -ENOMEM;
-}
-
-void viridian_vcpu_deinit(struct vcpu *v)
-{
-    free_cpumask_var(v->arch.hvm_vcpu.viridian.flush_cpumask);
-}
+static DEFINE_PER_CPU(cpumask_t, ipi_cpumask);
 
 int viridian_hypercall(struct cpu_user_regs *regs)
 {
@@ -627,7 +618,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
         if ( input_params.flags & HV_FLUSH_ALL_PROCESSORS )
             input_params.vcpu_mask = ~0ul;
 
-        pcpu_mask = curr->arch.hvm_vcpu.viridian.flush_cpumask;
+        pcpu_mask = &this_cpu(ipi_cpumask);
         cpumask_clear(pcpu_mask);
 
         /*
@@ -645,7 +636,7 @@ int viridian_hypercall(struct cpu_user_regs *regs)
                 continue;
 
             hvm_asid_flush_vcpu(v);
-            if ( v->is_running )
+            if ( v != curr && v->is_running )
                 __cpumask_set_cpu(v->processor, pcpu_mask);
         }
 
@@ -656,7 +647,9 @@ int viridian_hypercall(struct cpu_user_regs *regs)
          * so we may unnecessarily IPI some CPUs.
          */
         if ( !cpumask_empty(pcpu_mask) )
-            flush_tlb_mask(pcpu_mask);
+            smp_send_event_check_mask(pcpu_mask);
+
+        output.rep_complete = input.rep_count;
 
         status = HV_STATUS_SUCCESS;
         break;
diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
index 2eec85e..c4319d7 100644
--- a/xen/include/asm-x86/hvm/viridian.h
+++ b/xen/include/asm-x86/hvm/viridian.h
@@ -22,7 +22,6 @@ union viridian_apic_assist
 struct viridian_vcpu
 {
     union viridian_apic_assist apic_assist;
-    cpumask_var_t flush_cpumask;
 };
 
 union viridian_guest_os_id
@@ -118,9 +117,6 @@ viridian_hypercall(struct cpu_user_regs *regs);
 void viridian_time_ref_count_freeze(struct domain *d);
 void viridian_time_ref_count_thaw(struct domain *d);
 
-int viridian_vcpu_init(struct vcpu *v);
-void viridian_vcpu_deinit(struct vcpu *v);
-
 #endif /* __ASM_X86_HVM_VIRIDIAN_H__ */
 
 /*
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
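
For readers unfamiliar with the per-pCPU pattern the patch switches to, the
sketch below is a minimal, self-contained C analogue, not Xen code: the toy
cpumask_t, the NR_CPUS value and the current_cpu() helper are stand-ins
invented for illustration. It shows why a statically defined per-CPU scratch
mask needs no per-vCPU allocate/free lifecycle, which is what made the
original allocation racy with the toolstack enabling viridian features.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NR_CPUS 8                              /* illustrative value */

typedef struct { uint64_t bits; } cpumask_t;   /* toy mask, <= 64 pCPUs */

/*
 * Analogue of DEFINE_PER_CPU(cpumask_t, ipi_cpumask): static storage,
 * one scratch mask per physical CPU, nothing to allocate or free.
 */
static cpumask_t per_cpu_ipi_cpumask[NR_CPUS];

/* Hypothetical stand-in for "which pCPU am I running on?". */
static unsigned int current_cpu(void)
{
    return 0;
}

/* Build the set of pCPUs that would need an IPI for one flush request. */
static void build_flush_mask(const unsigned int running_on[], size_t n)
{
    /* this_cpu(ipi_cpumask) analogue: pick this pCPU's scratch mask. */
    cpumask_t *pcpu_mask = &per_cpu_ipi_cpumask[current_cpu()];
    size_t i;

    memset(pcpu_mask, 0, sizeof(*pcpu_mask));  /* cpumask_clear() analogue */
    for ( i = 0; i < n; i++ )
        pcpu_mask->bits |= UINT64_C(1) << running_on[i];

    if ( pcpu_mask->bits )
        printf("would IPI pCPU mask %#llx\n",
               (unsigned long long)pcpu_mask->bits);
}

int main(void)
{
    /* Pretend the targeted vCPUs are currently running on pCPUs 1 and 3. */
    unsigned int running_on[] = { 1, 3 };

    build_flush_mask(running_on, 2);
    return 0;
}

Because the scratch mask belongs to the physical CPU rather than the vCPU,
its lifetime no longer depends on when viridian features are enabled.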
Thread overview: 7+ messages

  2016-03-16 14:21 Paul Durrant [this message]
  2016-03-16 15:35 ` Jan Beulich
  2016-03-16 17:35 ` Paul Durrant
  2016-03-17  8:11 ` Jan Beulich
  2016-03-17  8:14 ` Paul Durrant
  2016-03-17  8:35 ` Jan Beulich
  2016-03-17 10:30 ` Andrew Cooper