Date: Thu, 9 Nov 2017 16:11:02 +0100
From: Radim Krčmář
To: Wanpeng Li
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
 Wanpeng Li
Subject: Re: [PATCH RESEND 2/3] KVM: Add paravirt remote TLB flush
Message-ID: <20171109151101.GB20859@flask>
References: <1510192934-5369-1-git-send-email-wanpeng.li@hotmail.com>
 <1510192934-5369-3-git-send-email-wanpeng.li@hotmail.com>
In-Reply-To: <1510192934-5369-3-git-send-email-wanpeng.li@hotmail.com>

2017-11-08 18:02-0800, Wanpeng Li:
> From: Wanpeng Li
>
> Remote flushing APIs do a busy wait, which is fine on bare metal. But
> within a guest, the vCPUs might have been preempted or blocked, and
> the initiator vCPU would then end up busy-waiting for a long time.
>
> This patch set implements paravirt TLB flushing, making sure that it
> does not wait for vCPUs that are sleeping; all the sleeping vCPUs
> flush the TLB on guest entry instead.
>
> The best result is achieved when we're overcommitting the host by
> running multiple vCPUs on each pCPU. In this case PV TLB flush avoids
> touching vCPUs which are not scheduled and avoids the wait on the
> initiating vCPU.
>
> Tested on a Haswell i7 desktop with 4 cores (2 HT), i.e. 8 pCPUs,
> running ebizzy in one Linux guest.
>
> ebizzy -M
>              vanilla    optimized      boost
>  8 vCPUs       10152        10083     -0.68%
> 16 vCPUs        1224         4866     297.5%
> 24 vCPUs        1109         3871       249%
> 32 vCPUs        1025         3375     229.3%
>
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Signed-off-by: Wanpeng Li
> ---
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> @@ -465,6 +465,33 @@ static void __init kvm_apf_trap_init(void)
>  	update_intr_gate(X86_TRAP_PF, async_page_fault);
>  }
>
> +static void kvm_flush_tlb_others(const struct cpumask *cpumask,
> +			const struct flush_tlb_info *info)
> +{
> +	u8 state;
> +	int cpu;
> +	struct kvm_steal_time *src;
> +	cpumask_t flushmask;
> +
> +
> +	cpumask_copy(&flushmask, cpumask);
> +	/*
> +	 * We have to call flush only on online vCPUs. And
> +	 * queue flush_on_enter for pre-empted vCPUs
> +	 */
> +	for_each_cpu(cpu, cpumask) {
> +		src = &per_cpu(steal_time, cpu);
> +		state = src->preempted;
> +		if ((state & KVM_VCPU_PREEMPTED)) {
> +			if (cmpxchg(&src->preempted, state, state | 1 <<
> +				KVM_VCPU_SHOULD_FLUSH))

We won't be flushing unless the last argument reads
'state | KVM_VCPU_SHOULD_FLUSH'. Also, cmpxchg() returns the original
value, which should be compared with state to avoid a race that would
drop a running vCPU:

  if (cmpxchg(&src->preempted, state, state | KVM_VCPU_SHOULD_FLUSH) == state)

> +				cpumask_clear_cpu(cpu, &flushmask);
> +		}
> +	}
> +
> +	native_flush_tlb_others(&flushmask, info);
> +}
> +
>  void __init kvm_guest_init(void)
>  {
>  	int i;
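
For clarity, here is the whole loop with both changes applied. This is
only a sketch of the intended logic, and it assumes that
KVM_VCPU_SHOULD_FLUSH is defined as a bit mask (like KVM_VCPU_PREEMPTED),
not as a bit number:

	for_each_cpu(cpu, cpumask) {
		src = &per_cpu(steal_time, cpu);
		state = src->preempted;
		if (state & KVM_VCPU_PREEMPTED) {
			/*
			 * Skip the IPI only when the flag was set while the
			 * vCPU was still preempted; a failed cmpxchg() means
			 * the vCPU started running again in the meantime and
			 * has to be flushed the usual way.
			 */
			if (cmpxchg(&src->preempted, state,
				    state | KVM_VCPU_SHOULD_FLUSH) == state)
				cpumask_clear_cpu(cpu, &flushmask);
		}
	}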