From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
In-Reply-To:
References: <20161019020333.3766-1-khuey@kylehuey.com>
 <20161019020333.3766-7-khuey@kylehuey.com>
From: Andy Lutomirski
Date: Thu, 27 Oct 2016 15:34:53 -0700
Message-ID:
Subject: Re: [PATCH v7 6/6] x86/arch_prctl: Add ARCH_[GET|SET]_CPUID
To: Thomas Gleixner
Cc: Kyle Huey, "Robert O'Callahan", Ingo Molnar, "H. Peter Anvin",
 X86 ML, Jeff Dike, Richard Weinberger, Andy Lutomirski,
 Borislav Petkov, Dmitry Safonov, Peter Zijlstra, Dave Hansen,
 Boris Ostrovsky, Alexander Viro, Shuah Khan, "Rafael J. Wysocki",
 Len Brown, "linux-kernel@vger.kernel.org",
 "user-mode-linux-devel@lists.sourceforge.net",
 "open list:USER-MODE LINUX (UML)",
 "open list:KERNEL SELFTEST FRAMEWORK", Linux API
Content-Type: text/plain; charset=UTF-8

On Thu, Oct 27, 2016 at 4:15 AM, Thomas Gleixner wrote:
> This is insane. The compiler makes that a conditional jump, and then in
> switch_cpuid_faulting() we get another one. Further, switch_cpuid_faulting()
> calls into lib/msr, which adds even more overhead.
>
> msr_set/clear_bit() are nice for random driver code, but complete overkill
> for the context switch hotpath.
>
> That's just not acceptable for switch_to(). We keep adding cruft and then
> wonder why context switches slow down despite machines getting faster.
>
> This can and needs to be done smarter. See the untested patch below. The
> resulting code has a single conditional jump, which is obviously the check
> for a change between prev and next. Everything else is done with straight
> linear shift, add, and, rdmsr, wrmsr instructions.
> ...
>  #define MSR_IA32_SYSENTER_ESP	0x00000175
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -193,12 +193,17 @@ int set_tsc_mode(unsigned int val)
>  	return 0;
>  }
>
> -static void switch_cpuid_faulting(bool on)
> +#define CPUID_FAULT_ON_MASK	(~0ULL)
> +#define CPUID_FAULT_OFF_MASK	(~CPUID_FAULT_ENABLE)
> +
> +static void cpuid_fault_ctrl(u64 msk)
>  {
> -	if (on)
> -		msr_set_bit(MSR_MISC_FEATURES_ENABLES, 0);
> -	else
> -		msr_clear_bit(MSR_MISC_FEATURES_ENABLES, 0);
> +	u64 msrval;
> +
> +	rdmsrl(MSR_MISC_FEATURES_ENABLES, msrval);
> +	msrval |= CPUID_FAULT_ENABLE;
> +	msrval &= msk;
> +	wrmsrl(MSR_MISC_FEATURES_ENABLES, msrval);
>  }

Let's just do this right from day one:

static void set_cpuid_faulting(bool on)
{
	u64 msrval;

	DEBUG_LOCKS_WARN_ON(!irqs_disabled());
	msrval = this_cpu_read(msr_misc_features_enables_shadow);
	msrval &= ~CPUID_FAULT_ENABLE;
	msrval |= (on << CPUID_FAULT_ENABLE_BIT);
	this_cpu_write(msr_misc_features_enables_shadow, msrval);
	wrmsrl(MSR_MISC_FEATURES_ENABLES, msrval);
}

RDMSR may be considerably faster than WRMSR, but that doesn't mean it's
*fast*. Keeping a per-cpu shadow of the MSR value means the context
switch path never has to read the MSR at all.

Obviously this needs some initialization code, but that's fine IMO.

--Andy
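
For reference, the initialization step the mail leaves open could look
roughly like the sketch below. This is a sketch only: the percpu
variable name matches the one set_cpuid_faulting() above reads and
writes, but the hook name and the place it would be called from are
assumptions, not something specified in the thread.

/*
 * Sketch only: per-cpu shadow of MSR_MISC_FEATURES_ENABLES, matching
 * the variable set_cpuid_faulting() reads and writes.
 */
static DEFINE_PER_CPU(u64, msr_misc_features_enables_shadow);

/*
 * Hypothetical bringup hook: run once on each CPU (e.g. from the CPU
 * feature setup path) so the shadow starts out in sync with the
 * hardware, paying the RDMSR cost once here rather than on every
 * context switch.
 */
static void init_cpuid_fault_shadow(void)
{
	u64 msrval = 0;

	if (!rdmsrl_safe(MSR_MISC_FEATURES_ENABLES, &msrval))
		this_cpu_write(msr_misc_features_enables_shadow, msrval);
}

rdmsrl_safe() is used so that CPUs without this MSR simply leave the
shadow at zero instead of taking an unhandled #GP.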
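
The call site in the context-switch path would then keep the single
conditional jump Thomas asks for: the MSR is only touched when the
CPUID-faulting state actually differs between prev and next. A rough
sketch, assuming a TIF_NOCPUID thread flag along the lines of what this
series introduces (the flag name is an assumption here, not taken from
the quoted patch):

/*
 * Sketch of a __switch_to_xtra()-style caller: tifp/tifn are the
 * cached thread_info flags of the previous and next task.
 */
static void switch_cpuid_fault_state(unsigned long tifp, unsigned long tifn)
{
	/* Single conditional branch: do nothing unless the bit differs. */
	if ((tifp ^ tifn) & _TIF_NOCPUID)
		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
}

In the common case where neither task uses CPUID faulting, the XOR
masks to zero and the whole thing costs one untaken branch.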