Date: Tue, 5 Sep 2017 11:03:06 +0200
From: Peter Zijlstra
To: Juergen Gross
Cc: Davidlohr Bueso, Oscar Salvador, Ingo Molnar, Paolo Bonzini,
	"H. Peter Anvin", Thomas Gleixner, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, linux-doc@vger.kernel.org, Waiman Long
Subject: Re: [PATCH resend] x86,kvm: Add a kernel parameter to disable PV spinlock
Message-ID: <20170905090306.gz52hlo7s2ybogoy@hirez.programming.kicks-ass.net>
References: <20170904142836.15446-1-osalvador@suse.de>
	<20170904144011.gp7hpis6usjehbuf@hirez.programming.kicks-ass.net>
	<20170904222157.GD17982@linux-80c1.suse>
	<0869e8a5-4abd-8f7f-0135-aab3e72e2d01@suse.com>
	<20170905065837.rs767a4os2aumg7h@hirez.programming.kicks-ass.net>
	<924fec17-548a-083d-edce-7adcb662c513@suse.com>
	<20170905081001.hn2276qrhfyqpjdi@hirez.programming.kicks-ass.net>
	<83ac209b-0807-0a72-cd07-d4ccd1d1ed61@suse.com>
	<20170905082801.d3sr7hdjz6lpon5p@hirez.programming.kicks-ass.net>
	<1b3afce5-c95d-5a3a-c36c-953a0654ab2d@suse.com>
In-Reply-To: <1b3afce5-c95d-5a3a-c36c-953a0654ab2d@suse.com>

On Tue, Sep 05, 2017 at 10:52:06AM +0200, Juergen Gross wrote:
> > Hmm, that might work. Could we somehow nop that call when
> > !X86_FEATURE_HYPERVISOR?, that saves native from having to do the call
> > and would be a win for everyone.
>
> So in fact we want a "always false" shortcut for bare metal and for any
> virtualization environment selecting bare metal behavior.
>
> I'll have a try.

Right, so for native I think you can do this:

diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index 11aaf1eaa0e4..4e9839001291 100644
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -21,6 +21,7 @@ DEF_NATIVE(, mov64, "mov %rdi, %rax");
 #if defined(CONFIG_PARAVIRT_SPINLOCKS)
 DEF_NATIVE(pv_lock_ops, queued_spin_unlock, "movb $0, (%rdi)");
 DEF_NATIVE(pv_lock_ops, vcpu_is_preempted, "xor %rax, %rax");
+DEF_NATIVE(pv_lock_ops, virt_spin_lock, "xor %rax, %rax");
 #endif
 
 unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len)
@@ -77,6 +78,13 @@ unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
 			goto patch_site;
 		}
 		goto patch_default;
+	case PARAVIRT_PATCH(pv_lock_ops.virt_spin_lock):
+		if (!this_cpu_has(X86_FEATURE_HYPERVISOR)) {
+			start = start_pv_lock_ops_virt_spin_lock;
+			end   = end_pv_lock_ops_virt_spin_lock;
+			goto patch_site;
+		}
+		goto patch_default;
 #endif
 
 	default:
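
For reference, below is a minimal sketch of the C side such an op could take,
modeled on the existing pv_lock_ops.vcpu_is_preempted callee-save hook. The
names (the virt_spin_lock member, __native_virt_spin_lock(),
pv_virt_spin_lock()) are illustrative assumptions, not something settled in
this thread; the point is only that the "xor %rax, %rax" patching above then
reduces the indirect call to "return false" on bare metal.

/* arch/x86/include/asm/paravirt_types.h: assumed new callee-save member */
struct pv_lock_ops {
	/* ... existing members (queued_spin_lock_slowpath, wait, kick, ...) ... */
	struct paravirt_callee_save vcpu_is_preempted;
	struct paravirt_callee_save virt_spin_lock;
};

/* arch/x86/kernel/paravirt-spinlocks.c: native default, always false */
__visible bool __native_virt_spin_lock(struct qspinlock *lock)
{
	return false;
}
PV_CALLEE_SAVE_REGS_THUNK(__native_virt_spin_lock);

/* arch/x86/include/asm/paravirt.h: the call site the patching collapses */
static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
{
	return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
}

Hypervisor guests would keep the call and point the op either at a
test-and-set implementation or at the native version when bare-metal
behaviour is selected, so only native (per the X86_FEATURE_HYPERVISOR check
above) avoids the call entirely.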