From: "H. Peter Anvin" <hpa@zytor.com>
To: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>,
Brian Gerst <brgerst@gmail.com>,
the arch/x86 maintainers <x86@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@kernel.org>,
Denys Vlasenko <dvlasenk@redhat.com>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [PATCH] x86: static_cpu_has_safe: discard dynamic check after init
Date: Thu, 21 Jan 2016 14:22:28 -0800 [thread overview]
Message-ID: <B2980B8C-9440-4BF6-968B-EA8930B10BA4@zytor.com> (raw)
In-Reply-To: <20160121221442.GB300@pd.tnic>
On January 21, 2016 2:14:42 PM PST, Borislav Petkov <bp@suse.de> wrote:
>On Wed, Jan 20, 2016 at 02:41:22AM -0800, H. Peter Anvin wrote:
>> Ah. What would be even more of a win would be to rebias
>> static_cpu_has_bug() so that the fallthrough case is the functional
>> one. Easily done by reversing the labels.
>
>By reversing you mean this:
>
>
>---
>diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
>index 77c51f4c15b7..49fa56f2b083 100644
>--- a/arch/x86/include/asm/cpufeature.h
>+++ b/arch/x86/include/asm/cpufeature.h
>@@ -174,10 +174,10 @@ static __always_inline __pure bool _static_cpu_has(u16 bit)
> [bitnum] "i" (1 << (bit & 7)), [cap_word] "m" (((const char *)boot_cpu_data.x86_capability)[bit >> 3])
> : : t_yes, t_no);
>- t_yes:
>- return true;
> t_no:
> return false;
>+ t_yes:
>+ return true;
> #else
> return boot_cpu_has(bit);
> #endif /* CC_HAVE_ASM_GOTO */
>---
>
>?
>
>In any case, here's what happens with the current patchset:
>
>vmlinux:
>
>ffffffff8100472a: e9 50 0e de 00        jmpq   ffffffff81de557f <__alt_instructions_end+0x7aa>
>ffffffff8100472f: 66 8c d0              mov    %ss,%ax
>ffffffff81004732: 66 83 f8 18           cmp    $0x18,%ax
>ffffffff81004736: 74 07                 je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738: b8 18 00 00 00 mov $0x18,%eax
>ffffffff8100473d: 8e d0 mov %eax,%ss
>ffffffff8100473f: 48 83 c4 18 add $0x18,%rsp
>ffffffff81004743: 4c 89 e0 mov %r12,%rax
>ffffffff81004746: 5b pop %rbx
>ffffffff81004747: 41 5c pop %r12
>ffffffff81004749: 41 5d pop %r13
>ffffffff8100474b: 41 5e pop %r14
>ffffffff8100474d: 41 5f pop %r15
>ffffffff8100474f: 5d pop %rbp
>ffffffff81004750: c3 retq
>
>That first JMP above sends us to the dynamic section which is in asm
>now:
>
>ffffffff81de557f: f6 05 8f de d1 ff 01  testb  $0x1,-0x2e2171(%rip)    # ffffffff81b03415 <boot_cpu_data+0x55>
>ffffffff81de5586: 0f 85 a3 f1 21 ff     jne    ffffffff8100472f <__switch_to+0x2df>
>ffffffff81de558c: e9 ae f1 21 ff        jmpq   ffffffff8100473f <__switch_to+0x2ef>
>
>After X86_FEATURE_ALWAYS patching, that first JMP has become a 2-byte JMP:
>
>[ 0.306333] apply_alternatives: feat: 3*32+21, old: (ffffffff8100472a, len: 5), repl: (ffffffff81de4e12, len: 5), pad: 0
>[ 0.308005] ffffffff8100472a: old_insn: e9 50 0e de 00
>[ 0.312012] ffffffff81de4e12: rpl_insn: e9 28 f9 21 ff
>[ 0.318201] recompute_jump: target RIP: ffffffff8100473f, new_displ: 0x15
>[ 0.320007] recompute_jump: final displ: 0x00000013, JMP 0xffffffff8100473f
>[ 0.324005] ffffffff8100472a: final_insn: eb 13 0f 1f 00
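
(The recompute_jump numbers in that log can be checked directly: a rel8 JMP is encoded relative to the end of the 2-byte instruction, which is why a displacement of 0x15 from the instruction start becomes 0x13 in the encoding:)

```python
# Addresses taken from the apply_alternatives log above.
insn   = 0xffffffff8100472a   # address of the patched JMP
target = 0xffffffff8100473f   # jump target past the %ss fixup

new_displ   = target - insn         # as printed: new_displ: 0x15
final_displ = target - (insn + 2)   # rel8 is relative to the NEXT insn; JMP rel8 is 2 bytes

print(hex(new_displ), hex(final_displ))   # 0x15 0x13 -> encoded as "eb 13"
```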
>
>so basically we jump over the %ss fixup:
>
>ffffffff8100472a: eb 13 0f 1f 00 jmp ffffffff8100473f
>ffffffff8100472f: 66 8c d0 mov %ss,%ax
>ffffffff81004732: 66 83 f8 18 cmp $0x18,%ax
>ffffffff81004736: 74 07                 je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738: b8 18 00 00 00 mov $0x18,%eax
>ffffffff8100473d: 8e d0 mov %eax,%ss
>ffffffff8100473f: 48 83 c4 18           add    $0x18,%rsp       <----
>ffffffff81004743: 4c 89 e0 mov %r12,%rax
>ffffffff81004746: 5b pop %rbx
>ffffffff81004747: 41 5c pop %r12
>ffffffff81004749: 41 5d pop %r13
>ffffffff8100474b: 41 5e pop %r14
>ffffffff8100474d: 41 5f pop %r15
>ffffffff8100474f: 5d pop %rbp
>ffffffff81004750: c3 retq
>
>
>After X86_BUG_SYSRET_SS_ATTRS patching:
>
>[ 0.330367] apply_alternatives: feat: 16*32+8, old: (ffffffff8100472a, len: 5), repl: (ffffffff81de3996, len: 0), pad: 0
>[ 0.332005] ffffffff8100472a: old_insn: eb 13 0f 1f 00
>[ 0.338332] ffffffff8100472a: final_insn: 0f 1f 44 00 00
>
>ffffffff8100472a: 0f 1f 44 00 00 nop
>ffffffff8100472f: 66 8c d0 mov %ss,%ax
>ffffffff81004732: 66 83 f8 18 cmp $0x18,%ax
>ffffffff81004736: 74 07                 je     ffffffff8100473f <__switch_to+0x2ef>
>ffffffff81004738: b8 18 00 00 00 mov $0x18,%eax
>ffffffff8100473d: 8e d0 mov %eax,%ss
>ffffffff8100473f: 48 83 c4 18 add $0x18,%rsp
>ffffffff81004743: 4c 89 e0 mov %r12,%rax
>ffffffff81004746: 5b pop %rbx
>ffffffff81004747: 41 5c pop %r12
>ffffffff81004749: 41 5d pop %r13
>ffffffff8100474b: 41 5e pop %r14
>ffffffff8100474d: 41 5f pop %r15
>ffffffff8100474f: 5d pop %rbp
>ffffffff81004750: c3 retq
>
>So the penalty for the !X86_BUG_SYSRET_SS_ATTRS CPUs is a 2-byte JMP. Do we care?
>
>In the case we do, we could do this:
>
> JMP ss_fixup
>ret:
> RET return prev_p;
>ss_fixup:
> <fixup SS>
> jmp ret
>
>and the !X86_BUG_SYSRET_SS_ATTRS CPUs would overwrite that
>"JMP ss_fixup" with a NOP and they're fine. However, the
>X86_BUG_SYSRET_SS_ATTRS CPUs will have to do two jumps, one to the
>fixup code and one back to RET.
>
>Now, how about I convert
>
> unsigned short ss_sel;
> savesegment(ss, ss_sel);
> if (ss_sel != __KERNEL_DS)
> loadsegment(ss, __KERNEL_DS);
>
>into asm and into an alternative()?
>
>Then, the !X86_BUG_SYSRET_SS_ATTRS CPUs will trade off that JMP with a
>bunch of NOPs which will pollute I$.
>
>Hmmm.
Yes, having t_no as the fallthrough case ought to move the yes code out of line.
The current code probably pollutes the i$ too.
--
Sent from my Android device with K-9 Mail. Please excuse brevity and formatting.