From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751585AbcAUWWo (ORCPT );
	Thu, 21 Jan 2016 17:22:44 -0500
Received: from terminus.zytor.com ([198.137.202.10]:60651 "EHLO mail.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751083AbcAUWWl (ORCPT );
	Thu, 21 Jan 2016 17:22:41 -0500
User-Agent: K-9 Mail for Android
In-Reply-To: <20160121221442.GB300@pd.tnic>
References: <20160118183921.GH12651@pd.tnic> <569D40CE.5090506@zytor.com> <20160118230554.GJ12651@pd.tnic> <3D4E057B-AB03-4C12-B59D-774E8954C742@zytor.com> <20160118232547.GK12651@pd.tnic> <20160119135714.GD15071@pd.tnic> <569F072B.1020504@zytor.com> <20160120103345.GA23350@pd.tnic> <108BC768-CF19-4F71-BF6D-70FF2252ADB8@zytor.com> <20160121221442.GB300@pd.tnic>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Subject: Re: [PATCH] x86: static_cpu_has_safe: discard dynamic check after init
From: "H. Peter Anvin"
Date: Thu, 21 Jan 2016 14:22:28 -0800
To: Borislav Petkov
CC: Andy Lutomirski , Brian Gerst , the arch/x86 maintainers , Linux Kernel Mailing List , Ingo Molnar , Denys Vlasenko , Linus Torvalds
Message-ID:
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On January 21, 2016 2:14:42 PM PST, Borislav Petkov wrote:
>On Wed, Jan 20, 2016 at 02:41:22AM -0800, H. Peter Anvin wrote:
>> Ah. What would be even more of a win would be to rebias
>> static_cpu_has_bug() so that the fallthrough case is the functional
>> one. Easily done by reversing the labels.
>
>By reversing you mean this:
>
>
>---
>diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
>index 77c51f4c15b7..49fa56f2b083 100644
>--- a/arch/x86/include/asm/cpufeature.h
>+++ b/arch/x86/include/asm/cpufeature.h
>@@ -174,10 +174,10 @@ static __always_inline __pure bool _static_cpu_has(u16 bit)
> 			 [bitnum] "i" (1 << (bit & 7)),
> 			 [cap_word] "m" (((const char *)boot_cpu_data.x86_capability)[bit >> 3])
> 			 : : t_yes, t_no);
>-	t_yes:
>-		return true;
> 	t_no:
> 		return false;
>+	t_yes:
>+		return true;
> #else
> 		return boot_cpu_has(bit);
> #endif /* CC_HAVE_ASM_GOTO */
>---
>
>?
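
(As an aside, the effect of that label swap can be sketched outside the
kernel with plain GNU C asm goto. This is a toy, standalone illustration
rather than the kernel's _static_cpu_has(): the testb/jnz stands in for
the runtime-patched jump, all names are invented, and it needs a compiler
with asm goto support, i.e. gcc >= 4.5 or a recent clang.)

/* toy_cpu_has.c - standalone illustration of the fallthrough bias */
#include <stdbool.h>
#include <stdio.h>

static bool feature_bit;	/* stand-in for the x86_capability bit */

static inline __attribute__((always_inline)) bool toy_cpu_has(void)
{
	/*
	 * In the kernel this asm is a jump that gets patched at boot;
	 * here a testb/jnz stands in for it.  Falling through the asm
	 * lands on t_no, so "return false" stays on the straight-line
	 * path, mirroring the label swap in the hunk above.
	 */
	asm goto("testb $1, %0\n\t"
		 "jnz %l[t_yes]"
		 : : "m" (feature_bit) : "cc" : t_yes, t_no);
t_no:
	return false;
t_yes:
	return true;
}

int main(void)
{
	feature_bit = true;
	printf("toy_cpu_has() = %d\n", toy_cpu_has());
	return 0;
}

With the labels in this order the compiler should keep the "false" return
on the fallthrough path and place the "true" return out of line, which is
the biasing the hunk is after.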
>
>In any case, here's what happens with the current patchset:
>
>vmlinux:
>
>ffffffff8100472a:	e9 50 0e de 00		jmpq   ffffffff81de557f <__alt_instructions_end+0x7aa>
>ffffffff8100472f:	66 8c d0		mov    %ss,%ax
>ffffffff81004732:	66 83 f8 18		cmp    $0x18,%ax
>ffffffff81004736:	74 07			je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738:	b8 18 00 00 00		mov    $0x18,%eax
>ffffffff8100473d:	8e d0			mov    %eax,%ss
>ffffffff8100473f:	48 83 c4 18		add    $0x18,%rsp
>ffffffff81004743:	4c 89 e0		mov    %r12,%rax
>ffffffff81004746:	5b			pop    %rbx
>ffffffff81004747:	41 5c			pop    %r12
>ffffffff81004749:	41 5d			pop    %r13
>ffffffff8100474b:	41 5e			pop    %r14
>ffffffff8100474d:	41 5f			pop    %r15
>ffffffff8100474f:	5d			pop    %rbp
>ffffffff81004750:	c3			retq
>
>That first JMP above sends us to the dynamic section which is in asm
>now:
>
>ffffffff81de557f:	f6 05 8f de d1 ff 01	testb  $0x1,-0x2e2171(%rip)	# ffffffff81b03415
>ffffffff81de5586:	0f 85 a3 f1 21 ff	jne    ffffffff8100472f <__switch_to+0x2df>
>ffffffff81de558c:	e9 ae f1 21 ff		jmpq   ffffffff8100473f <__switch_to+0x2ef>
>
>After X86_FEATURE_ALWAYS patching, that first JMP has become a 2-byte
>JMP:
>
>[ 0.306333] apply_alternatives: feat: 3*32+21, old: (ffffffff8100472a, len: 5), repl: (ffffffff81de4e12, len: 5), pad: 0
>[ 0.308005] ffffffff8100472a: old_insn: e9 50 0e de 00
>[ 0.312012] ffffffff81de4e12: rpl_insn: e9 28 f9 21 ff
>[ 0.318201] recompute_jump: target RIP: ffffffff8100473f, new_displ: 0x15
>[ 0.320007] recompute_jump: final displ: 0x00000013, JMP 0xffffffff8100473f
>[ 0.324005] ffffffff8100472a: final_insn: eb 13 0f 1f 00
>
>so basically we jump over the %ss fixup:
>
>ffffffff8100472a:	eb 13 0f 1f 00		jmp    ffffffff8100473f
>ffffffff8100472f:	66 8c d0		mov    %ss,%ax
>ffffffff81004732:	66 83 f8 18		cmp    $0x18,%ax
>ffffffff81004736:	74 07			je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738:	b8 18 00 00 00		mov    $0x18,%eax
>ffffffff8100473d:	8e d0			mov    %eax,%ss
>ffffffff8100473f:	48 83 c4 18		add    $0x18,%rsp	<----
>ffffffff81004743:	4c 89 e0		mov    %r12,%rax
>ffffffff81004746:	5b			pop    %rbx
>ffffffff81004747:	41 5c			pop    %r12
>ffffffff81004749:	41 5d			pop    %r13
>ffffffff8100474b:	41 5e			pop    %r14
>ffffffff8100474d:	41 5f			pop    %r15
>ffffffff8100474f:	5d			pop    %rbp
>ffffffff81004750:	c3			retq
>
>
>After X86_BUG_SYSRET_SS_ATTRS patching:
>
>[ 0.330367] apply_alternatives: feat: 16*32+8, old: (ffffffff8100472a, len: 5), repl: (ffffffff81de3996, len: 0), pad: 0
>[ 0.332005] ffffffff8100472a: old_insn: eb 13 0f 1f 00
>[ 0.338332] ffffffff8100472a: final_insn: 0f 1f 44 00 00
>
>ffffffff8100472a:	0f 1f 44 00 00		nop
>ffffffff8100472f:	66 8c d0		mov    %ss,%ax
>ffffffff81004732:	66 83 f8 18		cmp    $0x18,%ax
>ffffffff81004736:	74 07			je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738:	b8 18 00 00 00		mov    $0x18,%eax
>ffffffff8100473d:	8e d0			mov    %eax,%ss
>ffffffff8100473f:	48 83 c4 18		add    $0x18,%rsp
>ffffffff81004743:	4c 89 e0		mov    %r12,%rax
>ffffffff81004746:	5b			pop    %rbx
>ffffffff81004747:	41 5c			pop    %r12
>ffffffff81004749:	41 5d			pop    %r13
>ffffffff8100474b:	41 5e			pop    %r14
>ffffffff8100474d:	41 5f			pop    %r15
>ffffffff8100474f:	5d			pop    %rbp
>ffffffff81004750:	c3			retq
>
>So the penalty for the !X86_BUG_SYSRET_SS_ATTRS CPUs is a 2-byte JMP.
>Do we care?
>
>In the case we do, we could do this:
>
>	JMP ss_fixup
>ret:
>	RET			return prev_p;
>ss_fixup:
>
>	jmp ret
>
>and the !X86_BUG_SYSRET_SS_ATTRS CPUs would overwrite that
>"JMP ss_fixup" with a NOP and they're fine. However, the
>X86_BUG_SYSRET_SS_ATTRS CPUs will have to do two jumps, one to the
>fixup code and one back to RET.
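
As a sanity check on the recompute_jump numbers, the displacement
arithmetic can be re-derived in a few lines of C (the addresses are
copied from the dump; a 2-byte JMP, opcode 0xeb, encodes an 8-bit
displacement relative to the end of the instruction):

/* jmp8_displ.c - re-derive the recompute_jump values quoted above */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t insn   = 0xffffffff8100472aULL;	/* patched site */
	uint64_t target = 0xffffffff8100473fULL;	/* jump target  */

	uint64_t new_displ = target - insn;		/* 0x15 */
	/* rel8 is taken from the end of the 2-byte JMP, i.e. insn + 2 */
	uint8_t rel8 = (uint8_t)(new_displ - 2);	/* 0x13 */

	printf("new_displ: %#llx, final insn: eb %02x\n",
	       (unsigned long long)new_displ, rel8);
	return 0;
}

That prints "new_displ: 0x15, final insn: eb 13", matching the final_insn
bytes above; the trailing 0f 1f 00 is just NOP padding for the remaining
three bytes of the original 5-byte JMP.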
>
>Now, how about I convert
>
>	unsigned short ss_sel;
>	savesegment(ss, ss_sel);
>	if (ss_sel != __KERNEL_DS)
>		loadsegment(ss, __KERNEL_DS);
>
>into asm and into an alternative()?
>
>Then, the !X86_BUG_SYSRET_SS_ATTRS CPUs will trade off that JMP with a
>bunch of NOPs which will pollute I$.
>
>Hmmm.

Yes, having t_no as the fallthrough case ought to move the yes code out of line. The current code probably pollutes the I$ too.

-- 
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.
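
P.S. For concreteness, one way the %ss check could be folded into an
alternative is sketched below. This is untested and only illustrates the
trade-off being discussed, not a proposed patch: it leans on the
in-kernel ALTERNATIVE() macro, assumes clobbering %rax and the flags at
that point in __switch_to() is acceptable, and the helper name is made
up.

#include <linux/compiler.h>
#include <asm/alternative.h>
#include <asm/cpufeature.h>
#include <asm/segment.h>

/* hypothetical helper, not in the tree */
static __always_inline void reload_ss_if_needed(void)
{
	asm volatile(ALTERNATIVE(
			/* no bug: nothing to do, site becomes NOP padding */
			"",
			/* buggy CPUs: reload %ss if it is not __KERNEL_DS */
			"movw %%ss, %%ax\n\t"
			"cmpw %[ds], %%ax\n\t"
			"je 1f\n\t"
			"movl %[ds], %%eax\n\t"
			"movl %%eax, %%ss\n"
			"1:",
			X86_BUG_SYSRET_SS_ATTRS)
		     : : [ds] "i" (__KERNEL_DS) : "rax", "cc");
}

On !X86_BUG_SYSRET_SS_ATTRS machines the whole replacement turns into NOP
padding at the call site, which is exactly the I$ pollution trade-off
described above.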