* [PATCH v2 0/7] x86: retbleed=stuff fixes
@ 2023-01-16 14:25 Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() Peter Zijlstra
` (8 more replies)
0 siblings, 9 replies; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
Hi all,
Patches to address the various callthunk fails reported by Joan.
The first two patches are new (and I've temporarily dropped the
restore_processor_state sealing).
It is my understanding that AP bringup will always use the 16bit trampoline
path; if this is not the case, please holler.
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-17 9:25 ` Ingo Molnar
2023-01-18 9:45 ` Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit Peter Zijlstra
` (7 subsequent siblings)
8 siblings, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin, jroedel
The boot trampolines from trampoline_64.S have code flow like:
  16bit BIOS                  SEV-ES                    64bit EFI
trampoline_start()      sev_es_trampoline_start()    trampoline_start_64()
  verify_cpu()                     |                          |
  switch_to_protected: <-----------'                          v
          |                                        pa_trampoline_compat()
          v                                                   |
startup_32() <------------------------------------------------'
          |
          v
startup_64()
          |
          v
tr_start() := head_64.S:secondary_startup_64()
Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
touch the APs), there is already a verify_cpu() invocation.
Removing the verify_cpu() invocation from secondary_startup_64()
renders the whole secondary_startup_64_no_verify() thing moot, so
remove that too.
Cc: jroedel@suse.de
Cc: hpa@zytor.com
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/include/asm/realmode.h | 1 -
arch/x86/kernel/head_64.S | 16 ----------------
arch/x86/realmode/init.c | 6 ------
3 files changed, 23 deletions(-)
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -73,7 +73,6 @@ extern unsigned char startup_32_smp[];
extern unsigned char boot_gdt[];
#else
extern unsigned char secondary_startup_64[];
-extern unsigned char secondary_startup_64_no_verify[];
#endif
static inline size_t real_mode_size_needed(void)
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -143,22 +143,6 @@ SYM_CODE_START(secondary_startup_64)
* after the boot processor executes this code.
*/
- /* Sanitize CPU configuration */
- call verify_cpu
-
- /*
- * The secondary_startup_64_no_verify entry point is only used by
- * SEV-ES guests. In those guests the call to verify_cpu() would cause
- * #VC exceptions which can not be handled at this stage of secondary
- * CPU bringup.
- *
- * All non SEV-ES systems, especially Intel systems, need to execute
- * verify_cpu() above to make sure NX is enabled.
- */
-SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
- UNWIND_HINT_EMPTY
- ANNOTATE_NOENDBR
-
/*
* Retrieve the modifier (SME encryption mask if SME is active) to be
* added to the initial pgdir entry that will be programmed into CR3.
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -74,12 +74,6 @@ static void __init sme_sev_setup_real_mo
th->flags |= TH_FLAGS_SME_ACTIVE;
if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
- /*
- * Skip the call to verify_cpu() in secondary_startup_64 as it
- * will cause #VC exceptions when the AP can't handle them yet.
- */
- th->start = (u64) secondary_startup_64_no_verify;
-
if (sev_es_setup_ap_jump_table(real_mode_header))
panic("Failed to get/update SEV-ES AP Jump Table");
}
* [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-19 13:18 ` Borislav Petkov
2023-01-16 14:25 ` [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state() Peter Zijlstra
` (6 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
Per the comment it is important to call sev_verify_cbit() before the
first RET instruction, this means we can delay calling this until more
of the CPU state is set up, specifically delay this until GS is
'sane' such that per-cpu variables work.
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/kernel/head_64.S | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -185,19 +185,6 @@ SYM_CODE_START(secondary_startup_64)
addq phys_base(%rip), %rax
/*
- * For SEV guests: Verify that the C-bit is correct. A malicious
- * hypervisor could lie about the C-bit position to perform a ROP
- * attack on the guest by writing to the unencrypted stack and wait for
- * the next RET instruction.
- * %rsi carries pointer to realmode data and is callee-clobbered. Save
- * and restore it.
- */
- pushq %rsi
- movq %rax, %rdi
- call sev_verify_cbit
- popq %rsi
-
- /*
* Switch to new page-table
*
* For the boot CPU this switches to early_top_pgt which still has the
@@ -265,6 +252,19 @@ SYM_CODE_START(secondary_startup_64)
*/
movq initial_stack(%rip), %rsp
+ /*
+ * For SEV guests: Verify that the C-bit is correct. A malicious
+ * hypervisor could lie about the C-bit position to perform a ROP
+ * attack on the guest by writing to the unencrypted stack and wait for
+ * the next RET instruction.
+ * %rsi carries pointer to realmode data and is callee-clobbered. Save
+ * and restore it.
+ */
+ pushq %rsi
+ movq %rax, %rdi
+ call sev_verify_cbit
+ popq %rsi
+
/* Setup and Load IDT */
pushq %rsi
call early_setup_idt
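To make the "GS is 'sane' such that per-cpu variables work" requirement concrete: on x86-64 a per-cpu access is simply a load relative to whatever base MSR_GS_BASE currently holds, so any path that runs through a call/return thunk before that MSR is restored dereferences a bogus base. A minimal userspace sketch of that addressing model (the *_model names and struct layout are hypothetical illustration, not kernel code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a CPU's per-cpu data block (hypothetical layout). */
struct percpu_area {
	uint64_t call_depth;	/* what the SKL return thunk accounts in */
	uint64_t current_task;
};

static uintptr_t gs_base;	/* models MSR_GS_BASE */

/* models wrmsrl(MSR_GS_BASE, base) */
static void wrmsr_gs_base_model(void *base)
{
	gs_base = (uintptr_t)base;
}

/* models "mov %gs:offset, %rax": a blind dereference of base + offset */
static uint64_t percpu_read_model(size_t offset)
{
	return *(uint64_t *)(gs_base + offset);
}
```

Until wrmsr_gs_base_model() has run, percpu_read_model() reads relative to whatever base happens to be loaded, which is why a call whose RET goes through the depth-tracking thunk has to wait until after GS is set up.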
* [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state()
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-20 20:26 ` Borislav Petkov
2023-01-16 14:25 ` [PATCH v2 4/7] x86/power: Inline write_cr[04]() Peter Zijlstra
` (5 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
Since Xen PV doesn't use restore_processor_state(), and we're going to
have to avoid CALL/RET until at least GS is restored, de-paravirt the
easy bits.
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
arch/x86/power/cpu.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -197,25 +197,25 @@ static void notrace __restore_processor_
struct cpuinfo_x86 *c;
if (ctxt->misc_enable_saved)
- wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
+ native_wrmsrl(MSR_IA32_MISC_ENABLE, ctxt->misc_enable);
/*
* control registers
*/
/* cr4 was introduced in the Pentium CPU */
#ifdef CONFIG_X86_32
if (ctxt->cr4)
- __write_cr4(ctxt->cr4);
+ native_write_cr4(ctxt->cr4);
#else
/* CONFIG X86_64 */
- wrmsrl(MSR_EFER, ctxt->efer);
- __write_cr4(ctxt->cr4);
+ native_wrmsrl(MSR_EFER, ctxt->efer);
+ native_write_cr4(ctxt->cr4);
#endif
- write_cr3(ctxt->cr3);
- write_cr2(ctxt->cr2);
- write_cr0(ctxt->cr0);
+ native_write_cr3(ctxt->cr3);
+ native_write_cr2(ctxt->cr2);
+ native_write_cr0(ctxt->cr0);
/* Restore the IDT. */
- load_idt(&ctxt->idt);
+ native_load_idt(&ctxt->idt);
/*
* Just in case the asm code got us here with the SS, DS, or ES
@@ -230,7 +230,7 @@ static void notrace __restore_processor_
* handlers or in complicated helpers like load_gs_index().
*/
#ifdef CONFIG_X86_64
- wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
+ native_wrmsrl(MSR_GS_BASE, ctxt->kernelmode_gs_base);
#else
loadsegment(fs, __KERNEL_PERCPU);
#endif
@@ -246,15 +246,15 @@ static void notrace __restore_processor_
loadsegment(ds, ctxt->es);
loadsegment(es, ctxt->es);
loadsegment(fs, ctxt->fs);
- load_gs_index(ctxt->gs);
+ native_load_gs_index(ctxt->gs);
/*
* Restore FSBASE and GSBASE after restoring the selectors, since
* restoring the selectors clobbers the bases. Keep in mind
* that MSR_KERNEL_GS_BASE is horribly misnamed.
*/
- wrmsrl(MSR_FS_BASE, ctxt->fs_base);
- wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
+ native_wrmsrl(MSR_FS_BASE, ctxt->fs_base);
+ native_wrmsrl(MSR_KERNEL_GS_BASE, ctxt->usermode_gs_base);
#else
loadsegment(gs, ctxt->gs);
#endif
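For readers not steeped in pv_ops: the wrappers replaced above (wrmsrl(), __write_cr4(), load_idt(), ...) can resolve to paravirt call sites, whereas the native_*() variants pin the native implementation. A rough userspace model of that indirection (the *_model names are hypothetical; this assumes nothing about the real pv_ops layout):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t cr4_shadow;	/* stands in for the real CR4 */

/* the "native" implementation: what bare metal always wants */
static void native_write_cr4_model(uint64_t val)
{
	cr4_shadow = val;	/* models "mov %rdi, %cr4" */
}

/* paravirt-style ops table: a hypervisor may install its own handler */
struct pv_cpu_ops_model {
	void (*write_cr4)(uint64_t val);
};

static struct pv_cpu_ops_model pv_ops_model = {
	.write_cr4 = native_write_cr4_model,	/* bare-metal default */
};

/* the generic wrapper: the extra indirection the patch sidesteps */
static void write_cr4_model(uint64_t val)
{
	pv_ops_model.write_cr4(val);
}
```

Since Xen PV never enters restore_processor_state(), nothing in that path ever needs the indirect variant, so calling the native helpers directly loses nothing and keeps extra CALLs out of the window where CPU state is only half-restored.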
* [PATCH v2 4/7] x86/power: Inline write_cr[04]()
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
` (2 preceding siblings ...)
2023-01-16 14:25 ` [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state() Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-20 20:29 ` Borislav Petkov
2023-01-16 14:25 ` [PATCH v2 5/7] x86/callthunk: No callthunk for restore_processor_state() Peter Zijlstra
` (4 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
Since we can't do CALL/RET until GS is restored and CR[04] pinning is
of dubious value in this code path, simply write the stored values.
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
arch/x86/power/cpu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -208,11 +208,11 @@ static void notrace __restore_processor_
#else
/* CONFIG X86_64 */
native_wrmsrl(MSR_EFER, ctxt->efer);
- native_write_cr4(ctxt->cr4);
+ asm volatile("mov %0,%%cr4": "+r" (ctxt->cr4) : : "memory");
#endif
native_write_cr3(ctxt->cr3);
native_write_cr2(ctxt->cr2);
- native_write_cr0(ctxt->cr0);
+ asm volatile("mov %0,%%cr0": "+r" (ctxt->cr0) : : "memory");
/* Restore the IDT. */
native_load_idt(&ctxt->idt);
* [PATCH v2 5/7] x86/callthunk: No callthunk for restore_processor_state()
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
` (3 preceding siblings ...)
2023-01-16 14:25 ` [PATCH v2 4/7] x86/power: Inline write_cr[04]() Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 6/7] x86/power: Sprinkle some noinstr Peter Zijlstra
` (3 subsequent siblings)
8 siblings, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
From: Joan Bruguera <joanbrugueram@gmail.com>
When resuming from suspend we don't have coherent CPU state; trying to
do callthunks here isn't going to work. Specifically, GS isn't set yet.
Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
Signed-off-by: Joan Bruguera <joanbrugueram@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230109040531.7888-1-joanbrugueram@gmail.com
---
arch/x86/kernel/callthunks.c | 5 +++++
arch/x86/power/cpu.c | 3 +++
2 files changed, 8 insertions(+)
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -7,6 +7,7 @@
#include <linux/memory.h>
#include <linux/moduleloader.h>
#include <linux/static_call.h>
+#include <linux/suspend.h>
#include <asm/alternative.h>
#include <asm/asm-offsets.h>
@@ -151,6 +152,10 @@ static bool skip_addr(void *dest)
dest < (void*)hypercall_page + PAGE_SIZE)
return true;
#endif
+#ifdef CONFIG_PM_SLEEP
+ if (dest == restore_processor_state)
+ return true;
+#endif
return false;
}
* [PATCH v2 6/7] x86/power: Sprinkle some noinstr
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
` (4 preceding siblings ...)
2023-01-16 14:25 ` [PATCH v2 5/7] x86/callthunk: No callthunk for restore_processor_state() Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-17 9:31 ` Ingo Molnar
2023-01-16 14:25 ` [PATCH v2 7/7] PM / hibernate: Add minimal noinstr annotations Peter Zijlstra
` (2 subsequent siblings)
8 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
Ensure no compiler instrumentation sneaks in while restoring the CPU
state. Specifically, we can't handle CALL/RET until GS is restored.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
arch/x86/power/cpu.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -192,7 +192,7 @@ static void fix_processor_context(void)
* The asm code that gets us here will have restored a usable GDT, although
* it will be pointing to the wrong alias.
*/
-static void notrace __restore_processor_state(struct saved_context *ctxt)
+static __always_inline void __restore_processor_state(struct saved_context *ctxt)
{
struct cpuinfo_x86 *c;
@@ -235,6 +235,13 @@ static void notrace __restore_processor_
loadsegment(fs, __KERNEL_PERCPU);
#endif
+ /*
+ * Definitely wrong, but at this point we should have at least enough
+ * to do CALL/RET (consider SKL callthunks) and this avoids having
+ * to deal with the noinstr explosion for now :/
+ */
+ instrumentation_begin();
+
/* Restore the TSS, RO GDT, LDT, and usermode-relevant MSRs. */
fix_processor_context();
@@ -276,10 +283,12 @@ static void notrace __restore_processor_
* because some of the MSRs are "emulated" in microcode.
*/
msr_restore_context(ctxt);
+
+ instrumentation_end();
}
/* Needed by apm.c */
-void notrace restore_processor_state(void)
+void noinstr restore_processor_state(void)
{
__restore_processor_state(&saved_context);
}
* [PATCH v2 7/7] PM / hibernate: Add minimal noinstr annotations
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
` (5 preceding siblings ...)
2023-01-16 14:25 ` [PATCH v2 6/7] x86/power: Sprinkle some noinstr Peter Zijlstra
@ 2023-01-16 14:25 ` Peter Zijlstra
2023-01-18 1:54 ` [PATCH v2 0/7] x86: retbleed=stuff fixes Joan Bruguera
2023-05-16 13:59 ` Josh Poimboeuf
8 siblings, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-16 14:25 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, peterz, Juergen Gross, Rafael J. Wysocki,
xen-devel, Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
When resuming, there must not be any code between swsusp_arch_suspend()
and restore_processor_state() since the CPU state is ill-defined at
this point in time.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/power/hibernate.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -280,6 +280,32 @@ __weak int arch_resume_nosmt(void)
return 0;
}
+static noinstr int suspend_and_restore(void)
+{
+ int error;
+
+ /*
+ * Strictly speaking swsusp_arch_suspend() should be noinstr too but it
+ * is typically written in asm, as such, assume it is good and shut up
+ * the validator.
+ */
+ instrumentation_begin();
+ error = swsusp_arch_suspend();
+ instrumentation_end();
+
+ /*
+ * Architecture resume code 'returns' from the swsusp_arch_suspend()
+ * call and resumes execution here with some very dodgy machine state.
+ *
+ * Compiler instrumentation between these two calls (or in
+ * restore_processor_state() for that matter) will make life *very*
+ * interesting indeed.
+ */
+ restore_processor_state();
+
+ return error;
+}
+
/**
* create_image - Create a hibernation image.
* @platform_mode: Whether or not to use the platform driver.
@@ -323,9 +349,7 @@ static int create_image(int platform_mod
in_suspend = 1;
save_processor_state();
trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, true);
- error = swsusp_arch_suspend();
- /* Restore control flow magically appears here */
- restore_processor_state();
+ error = suspend_and_restore();
trace_suspend_resume(TPS("machine_suspend"), PM_EVENT_HIBERNATE, false);
if (error)
pr_err("Error %d creating image\n", error);
* Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
2023-01-16 14:25 ` [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() Peter Zijlstra
@ 2023-01-17 9:25 ` Ingo Molnar
2023-01-18 9:45 ` Peter Zijlstra
1 sibling, 0 replies; 22+ messages in thread
From: Ingo Molnar @ 2023-01-17 9:25 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin, jroedel
* Peter Zijlstra <peterz@infradead.org> wrote:
> The boot trampolines from trampoline_64.S have code flow like:
>
>   16bit BIOS                  SEV-ES                    64bit EFI
>
> trampoline_start()      sev_es_trampoline_start()    trampoline_start_64()
>   verify_cpu()                     |                          |
>   switch_to_protected: <-----------'                          v
>           |                                        pa_trampoline_compat()
>           v                                                   |
> startup_32() <------------------------------------------------'
>           |
>           v
> startup_64()
>           |
>           v
> tr_start() := head_64.S:secondary_startup_64()
oh ... this nice flow chart should move into a prominent C comment I think,
it's far too good to be forgotten in a Git commit changelog.
> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> touch the APs), there is already a verify_cpu() invocation.
>
> Removing the verify_cpu() invocation from secondary_startup_64()
> renders the whole secondary_startup_64_no_verify() thing moot, so
> remove that too.
>
> Cc: jroedel@suse.de
> Cc: hpa@zytor.com
> Fixes: e81dc127ef69 ("x86/callthunks: Add call patching for call depth tracking")
> Reported-by: Joan Bruguera <joanbrugueram@gmail.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Thanks,
Ingo
* Re: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
2023-01-16 14:25 ` [PATCH v2 6/7] x86/power: Sprinkle some noinstr Peter Zijlstra
@ 2023-01-17 9:31 ` Ingo Molnar
2023-01-17 11:29 ` Peter Zijlstra
0 siblings, 1 reply; 22+ messages in thread
From: Ingo Molnar @ 2023-01-17 9:31 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin
* Peter Zijlstra <peterz@infradead.org> wrote:
> + /*
> + * Definitely wrong, but at this point we should have at least enough
> + * to do CALL/RET (consider SKL callthunks) and this avoids having
> + * to deal with the noinstr explosion for now :/
> + */
> + instrumentation_begin();
BTW., readability side note: instrumentation_begin()/end() are the
misnomers of the century - they don't signal the start/end of instrumented
code areas like the name falsely & naively suggests, but the exact
opposite: start/end of *non-*instrumented code areas.
As such they should probably be something like:
noinstr_begin();
...
noinstr_end();
... to reuse the nomenclature of the 'noinstr' attribute?
Thanks,
Ingo
* Re: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
2023-01-17 9:31 ` Ingo Molnar
@ 2023-01-17 11:29 ` Peter Zijlstra
2023-01-17 11:54 ` Ingo Molnar
0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-17 11:29 UTC (permalink / raw)
To: Ingo Molnar
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin
On Tue, Jan 17, 2023 at 10:31:05AM +0100, Ingo Molnar wrote:
>
> * Peter Zijlstra <peterz@infradead.org> wrote:
>
> > + /*
> > + * Definitely wrong, but at this point we should have at least enough
> > + * to do CALL/RET (consider SKL callthunks) and this avoids having
> > + * to deal with the noinstr explosion for now :/
> > + */
> > + instrumentation_begin();
>
> BTW., readability side note: instrumentation_begin()/end() are the
> misnomers of the century - they don't signal the start/end of instrumented
> code areas like the name falsely & naively suggests, but the exact
> opposite: start/end of *non-*instrumented code areas.
Nope, they do as they say on the tin.
noinstr void foo(void)
{
}
declares the whole function as non-instrumented.
Within such functions, we demark regions where instrumentation is
allowed by:
noinstr void foo(void)
{
instrumentation_begin();
/* code that calls non-noinstr functions goes here */
instrumentation_end();
}
(note the double negative in the comment)
* Re: [PATCH v2 6/7] x86/power: Sprinkle some noinstr
2023-01-17 11:29 ` Peter Zijlstra
@ 2023-01-17 11:54 ` Ingo Molnar
0 siblings, 0 replies; 22+ messages in thread
From: Ingo Molnar @ 2023-01-17 11:54 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin
* Peter Zijlstra <peterz@infradead.org> wrote:
> Nope, they do as they say on the tin.
>
> noinstr void foo(void)
> {
> }
>
> declares the whole function as non-instrumented.
>
> Within such functions, we demark regions where instrumentation is
> allowed by:
>
> noinstr void foo(void)
> {
> instrumentation_begin();
> /* code that calls non-noinstr functions goes here */
> instrumentation_end();
Indeed, you are right - I should have gotten more of my morning tea ... :-/
Thanks,
Ingo
* Re: [PATCH v2 0/7] x86: retbleed=stuff fixes
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
` (6 preceding siblings ...)
2023-01-16 14:25 ` [PATCH v2 7/7] PM / hibernate: Add minimal noinstr annotations Peter Zijlstra
@ 2023-01-18 1:54 ` Joan Bruguera
2023-05-16 13:59 ` Josh Poimboeuf
8 siblings, 0 replies; 22+ messages in thread
From: Joan Bruguera @ 2023-01-18 1:54 UTC (permalink / raw)
To: x86, Peter Zijlstra
Cc: linux-kernel, Juergen Gross, Rafael J. Wysocki, xen-devel,
Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin
Thanks, I've been testing those patches on my real system (i5-7200U) for
the last day with no problems so far; waking from s2ram works as well.
I can also no longer see those `sarq $5, %gs:0x1337` with %gs=0 on QEMU.
Regards,
- Joan
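The `sarq $5, %gs:...` Joan refers to is the call-depth accounting emitted for retbleed=stuff: each call thunk arithmetic-shifts a per-cpu counter right by 5, each return thunk shifts it left, and once the counter runs out of tracked calls the RSB gets stuffed. A hedged userspace simulation of that bookkeeping (constants as they appear in arch/x86/include/asm/nospec-branch.h; the *_model names and the exact trigger condition are illustrative assumptions, not a spec):

```c
#include <assert.h>
#include <stdint.h>

#define RET_DEPTH_INIT		0x8000000000000000ULL
#define RET_DEPTH_CREDIT	0xffffffffffffffffULL
#define RET_DEPTH_SHIFT	5

static uint64_t depth = RET_DEPTH_INIT;
static unsigned int rsb_stuffs;	/* times the RSB had to be refilled */

/* call thunk: "sarq $5" pulls in sign bits, one block per tracked call */
static void thunk_call_model(void)
{
	/* relies on arithmetic right shift of negative values (gcc/clang) */
	depth = (uint64_t)((int64_t)depth >> RET_DEPTH_SHIFT);
}

/* return thunk: "shlq $5" consumes one tracked call; on zero, stuff RSB */
static void thunk_ret_model(void)
{
	depth <<= RET_DEPTH_SHIFT;
	if (depth == 0) {
		rsb_stuffs++;	/* models the RSB stuffing loop */
		depth = RET_DEPTH_CREDIT;
	}
}
```

The counter lives in per-cpu memory addressed via %gs, so with %gs=0 every one of those sarq instructions hits a bogus address, which is exactly the failure mode these patches close.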
* Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
2023-01-16 14:25 ` [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() Peter Zijlstra
2023-01-17 9:25 ` Ingo Molnar
@ 2023-01-18 9:45 ` Peter Zijlstra
2023-01-18 11:46 ` kirill.shutemov
2023-01-19 19:35 ` H. Peter Anvin
1 sibling, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-18 9:45 UTC (permalink / raw)
To: x86, Joan Bruguera
Cc: linux-kernel, Juergen Gross, Rafael J. Wysocki, xen-devel,
Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, H. Peter Anvin, jroedel,
kirill.shutemov, dave.hansen, kai.huang
On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> The boot trampolines from trampoline_64.S have code flow like:
>
>   16bit BIOS                  SEV-ES                    64bit EFI
>
> trampoline_start()      sev_es_trampoline_start()    trampoline_start_64()
>   verify_cpu()                     |                          |
>   switch_to_protected: <-----------'                          v
>           |                                        pa_trampoline_compat()
>           v                                                   |
> startup_32() <------------------------------------------------'
>           |
>           v
> startup_64()
>           |
>           v
> tr_start() := head_64.S:secondary_startup_64()
>
> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> touch the APs), there is already a verify_cpu() invocation.
So supposedly TDX/ACPI-6.4 comes in on trampoline_startup64() for APs --
can any of the TDX capable folks tell me if we need verify_cpu() on
these?
Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
force enable SSE on AMD/K7. Surely none of that is needed for these
shiny new chips?
I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
point, but it seems really sad to need that on modern systems.
* Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
2023-01-18 9:45 ` Peter Zijlstra
@ 2023-01-18 11:46 ` kirill.shutemov
2023-01-19 19:35 ` H. Peter Anvin
1 sibling, 0 replies; 22+ messages in thread
From: kirill.shutemov @ 2023-01-18 11:46 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin, jroedel, dave.hansen, kai.huang
On Wed, Jan 18, 2023 at 10:45:44AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> > The boot trampolines from trampoline_64.S have code flow like:
> >
> >   16bit BIOS                  SEV-ES                    64bit EFI
> >
> > trampoline_start()      sev_es_trampoline_start()    trampoline_start_64()
> >   verify_cpu()                     |                          |
> >   switch_to_protected: <-----------'                          v
> >           |                                        pa_trampoline_compat()
> >           v                                                   |
> > startup_32() <------------------------------------------------'
> >           |
> >           v
> > startup_64()
> >           |
> >           v
> > tr_start() := head_64.S:secondary_startup_64()
> >
> > Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> > touch the APs), there is already a verify_cpu() invocation.
>
> So supposedly TDX/ACPI-6.4 comes in on trampoline_startup64() for APs --
> can any of the TDX capable folks tell me if we need verify_cpu() on
> these?
>
> Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
> force enable SSE on AMD/K7. Surely none of that is needed for these
> shiny new chips?
TDX has no XD_DISABLE set and it doesn't allow writes to the
IA32_MISC_ENABLE MSR (triggers #VE), so we should be safe.
--
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit
2023-01-16 14:25 ` [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit Peter Zijlstra
@ 2023-01-19 13:18 ` Borislav Petkov
2023-01-20 12:43 ` Jörg Rödel
0 siblings, 1 reply; 22+ messages in thread
From: Borislav Petkov @ 2023-01-19 13:18 UTC (permalink / raw)
To: Peter Zijlstra, Jörg Rödel
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, H. Peter Anvin
On Mon, Jan 16, 2023 at 03:25:35PM +0100, Peter Zijlstra wrote:
> Per the comment it is important to call sev_verify_cbit() before the
> first RET instruction, this means we can delay calling this until more
Make that "... this means that this can be delayed until... "
And I believe this is not about the first RET insn but about the *next* RET
which will pop poisoned crap from the unencrypted stack and do shits with it.
Also, there's this over sev_verify_cbit():
* sev_verify_cbit() is called before switching to a new long-mode page-table
* at boot.
so you can't move it under the
movq %rax, %cr3
Looking at this more, there's a sme_enable() call on the BSP which is already in
C.
So, can we do that C-bit verification once on the BSP, *in C* which would be a
lot easier, and be done with it?
Once it is verified there, the bit is the same on all APs so all good.
Right?
joro?
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
2023-01-18 9:45 ` Peter Zijlstra
2023-01-18 11:46 ` kirill.shutemov
@ 2023-01-19 19:35 ` H. Peter Anvin
2023-01-26 14:15 ` Peter Zijlstra
1 sibling, 1 reply; 22+ messages in thread
From: H. Peter Anvin @ 2023-01-19 19:35 UTC (permalink / raw)
To: Peter Zijlstra, x86, Joan Bruguera
Cc: linux-kernel, Juergen Gross, Rafael J. Wysocki, xen-devel,
Jan Beulich, Roger Pau Monne, Kees Cook, mark.rutland,
Andrew Cooper, Jörg Rödel, jroedel, kirill.shutemov,
dave.hansen, kai.huang
On January 18, 2023 1:45:44 AM PST, Peter Zijlstra <peterz@infradead.org> wrote:
>On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
>> The boot trampolines from trampoline_64.S have code flow like:
>>
>>   16bit BIOS                  SEV-ES                    64bit EFI
>>
>> trampoline_start()      sev_es_trampoline_start()    trampoline_start_64()
>>   verify_cpu()                     |                          |
>>   switch_to_protected: <-----------'                          v
>>           |                                        pa_trampoline_compat()
>>           v                                                   |
>> startup_32() <------------------------------------------------'
>>           |
>>           v
>> startup_64()
>>           |
>>           v
>> tr_start() := head_64.S:secondary_startup_64()
>>
>> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
>> touch the APs), there is already a verify_cpu() invocation.
>
>So supposedly TDX/ACPI-6.4 comes in on trampoline_startup64() for APs --
>can any of the TDX capable folks tell me if we need verify_cpu() on
>these?
>
>Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
>force enable SSE on AMD/K7. Surely none of that is needed for these
>shiny new chips?
>
>I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
>point, but it seems really sad to need that on modern systems.
Sad, perhaps, but really better for orthogonality – fewer special cases.
* Re: [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit
2023-01-19 13:18 ` Borislav Petkov
@ 2023-01-20 12:43 ` Jörg Rödel
0 siblings, 0 replies; 22+ messages in thread
From: Jörg Rödel @ 2023-01-20 12:43 UTC (permalink / raw)
To: Borislav Petkov
Cc: Peter Zijlstra, x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, H. Peter Anvin
On Thu, Jan 19, 2023 at 02:18:47PM +0100, Borislav Petkov wrote:
> So, can we do that C-bit verification once on the BSP, *in C* which would be a
> lot easier, and be done with it?
>
> Once it is verified there, the bit is the same on all APs so all good.
Yes, I think this is safe to do. The page-table the APs will use to boot
already has the correct C-bit set, and the position is verified on the
BSP. Further, the C-bit position is a hardware capability and there is
no chance the APs will have it at a different position than the BSP.
Even if the HV is lying to the VM by faking CPUID on the APs it wouldn't
matter, because the position is not read again on the APs.
Regards,
Joerg
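[Editorial note: the C-bit position discussed above is reported by CPUID leaf 0x8000001F, EBX bits 5:0, on SEV-capable parts. A minimal sketch of the decoding, using a hypothetical EBX value:]

```python
def c_bit_position(ebx):
    # On SEV-capable parts, CPUID 0x8000001F:EBX[5:0] encodes the
    # position of the C-bit (memory-encryption bit) in a page-table entry.
    return ebx & 0x3F

def c_bit_mask(ebx):
    # Page-table-entry mask for an encrypted mapping at that position.
    return 1 << c_bit_position(ebx)

# Hypothetical EBX value with the position field set to 51:
print(c_bit_position(0x33), hex(c_bit_mask(0x33)))
```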
* Re: [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state()
2023-01-16 14:25 ` [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state() Peter Zijlstra
@ 2023-01-20 20:26 ` Borislav Petkov
0 siblings, 0 replies; 22+ messages in thread
From: Borislav Petkov @ 2023-01-20 20:26 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin
On Mon, Jan 16, 2023 at 03:25:36PM +0100, Peter Zijlstra wrote:
> Since Xen PV doesn't use restore_processor_state(), and we're going to
> have to avoid CALL/RET until at least GS is restored,
Drop the "we":
"..., and CALL/RET cannot happen before GS has been restored, ..."
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v2 4/7] x86/power: Inline write_cr[04]()
2023-01-16 14:25 ` [PATCH v2 4/7] x86/power: Inline write_cr[04]() Peter Zijlstra
@ 2023-01-20 20:29 ` Borislav Petkov
0 siblings, 0 replies; 22+ messages in thread
From: Borislav Petkov @ 2023-01-20 20:29 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin
On Mon, Jan 16, 2023 at 03:25:37PM +0100, Peter Zijlstra wrote:
> Since we can't do CALL/RET until GS is restored and CR[04] pinning is
^^
Ditto like for the previous one.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
* Re: [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64()
2023-01-19 19:35 ` H. Peter Anvin
@ 2023-01-26 14:15 ` Peter Zijlstra
0 siblings, 0 replies; 22+ messages in thread
From: Peter Zijlstra @ 2023-01-26 14:15 UTC (permalink / raw)
To: H. Peter Anvin
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
jroedel, kirill.shutemov, dave.hansen, kai.huang
On Thu, Jan 19, 2023 at 11:35:06AM -0800, H. Peter Anvin wrote:
> On January 18, 2023 1:45:44 AM PST, Peter Zijlstra <peterz@infradead.org> wrote:
> >On Mon, Jan 16, 2023 at 03:25:34PM +0100, Peter Zijlstra wrote:
> >> The boot trampolines from trampoline_64.S have code flow like:
> >>
> >> 16bit BIOS SEV-ES 64bit EFI
> >>
> >> trampoline_start() sev_es_trampoline_start() trampoline_start_64()
> >> verify_cpu() | |
> >> switch_to_protected: <---------------' v
> >> | pa_trampoline_compat()
> >> v |
> >> startup_32() <-----------------------------------------------'
> >> |
> >> v
> >> startup_64()
> >> |
> >> v
> >> tr_start() := head_64.S:secondary_startup_64()
> >>
> >> Since AP bringup always goes through the 16bit BIOS path (EFI doesn't
> >> touch the APs), there is already a verify_cpu() invocation.
> >
> >So supposedly TDX/ACPI-6.4 comes in on trampoline_start64() for APs --
> >can any of the TDX capable folks tell me if we need verify_cpu() on
> >these?
> >
> >Aside from checking for LM, it seems to clear XD_DISABLE on Intel and
> >force enable SSE on AMD/K7. Surely none of that is needed for these
> >shiny new chips?
> >
> >I mean, I can hack up a patch that adds verify_cpu() to the 64bit entry
> >point, but it seems really sad to need that on modern systems.
>
> Sad, perhaps, but really better for orthogonality – fewer special cases.
I'd argue more, but whatever. XD_DISABLE is an abomination and 64bit
entry points should care about it just as much as having LM. And this
way we have 2/3 instead of 1/3 of the entry points doing 'special'
nonsense.
I ended up with this trainwreck; it adds VERIFY_CPU to
pa_trampoline_compat() because for some reason it doesn't want to
assemble when placed in trampoline_start64().
Is this really what we want?
---
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -689,9 +689,14 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmo
jmp 1b
SYM_FUNC_END(.Lno_longmode)
- .globl verify_cpu
#include "../../kernel/verify_cpu.S"
+ .globl verify_cpu
+SYM_FUNC_START_LOCAL(verify_cpu)
+ VERIFY_CPU
+ RET
+SYM_FUNC_END(verify_cpu)
+
.data
SYM_DATA_START_LOCAL(gdt64)
.word gdt_end - gdt - 1
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -321,6 +321,11 @@ SYM_FUNC_END(startup_32_smp)
#include "verify_cpu.S"
+SYM_FUNC_START_LOCAL(verify_cpu)
+ VERIFY_CPU
+ RET
+SYM_FUNC_END(verify_cpu)
+
__INIT
SYM_FUNC_START(early_idt_handler_array)
# 36(%esp) %eflags
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -345,6 +345,12 @@ SYM_CODE_START(secondary_startup_64)
SYM_CODE_END(secondary_startup_64)
#include "verify_cpu.S"
+
+SYM_FUNC_START_LOCAL(verify_cpu)
+ VERIFY_CPU
+ RET
+SYM_FUNC_END(verify_cpu)
+
#include "sev_verify_cbit.S"
#ifdef CONFIG_HOTPLUG_CPU
--- a/arch/x86/kernel/verify_cpu.S
+++ b/arch/x86/kernel/verify_cpu.S
@@ -31,7 +31,7 @@
#include <asm/cpufeatures.h>
#include <asm/msr-index.h>
-SYM_FUNC_START_LOCAL(verify_cpu)
+.macro VERIFY_CPU
pushf # Save caller passed flags
push $0 # Kill any dangerous flags
popf
@@ -46,31 +46,31 @@ SYM_FUNC_START_LOCAL(verify_cpu)
pushfl
popl %eax
cmpl %eax,%ebx
- jz .Lverify_cpu_no_longmode # cpu has no cpuid
+ jz .Lverify_cpu_no_longmode_\@ # cpu has no cpuid
#endif
movl $0x0,%eax # See if cpuid 1 is implemented
cpuid
cmpl $0x1,%eax
- jb .Lverify_cpu_no_longmode # no cpuid 1
+ jb .Lverify_cpu_no_longmode_\@ # no cpuid 1
xor %di,%di
cmpl $0x68747541,%ebx # AuthenticAMD
- jnz .Lverify_cpu_noamd
+ jnz .Lverify_cpu_noamd_\@
cmpl $0x69746e65,%edx
- jnz .Lverify_cpu_noamd
+ jnz .Lverify_cpu_noamd_\@
cmpl $0x444d4163,%ecx
- jnz .Lverify_cpu_noamd
+ jnz .Lverify_cpu_noamd_\@
mov $1,%di # cpu is from AMD
- jmp .Lverify_cpu_check
+ jmp .Lverify_cpu_check_\@
-.Lverify_cpu_noamd:
+.Lverify_cpu_noamd_\@:
cmpl $0x756e6547,%ebx # GenuineIntel?
- jnz .Lverify_cpu_check
+ jnz .Lverify_cpu_check_\@
cmpl $0x49656e69,%edx
- jnz .Lverify_cpu_check
+ jnz .Lverify_cpu_check_\@
cmpl $0x6c65746e,%ecx
- jnz .Lverify_cpu_check
+ jnz .Lverify_cpu_check_\@
# only call IA32_MISC_ENABLE when:
# family > 6 || (family == 6 && model >= 0xd)
@@ -81,60 +81,62 @@ SYM_FUNC_START_LOCAL(verify_cpu)
andl $0x0ff00f00, %eax # mask family and extended family
shrl $8, %eax
cmpl $6, %eax
- ja .Lverify_cpu_clear_xd # family > 6, ok
- jb .Lverify_cpu_check # family < 6, skip
+ ja .Lverify_cpu_clear_xd_\@ # family > 6, ok
+ jb .Lverify_cpu_check_\@ # family < 6, skip
andl $0x000f00f0, %ecx # mask model and extended model
shrl $4, %ecx
cmpl $0xd, %ecx
- jb .Lverify_cpu_check # family == 6, model < 0xd, skip
+ jb .Lverify_cpu_check_\@ # family == 6, model < 0xd, skip
-.Lverify_cpu_clear_xd:
+.Lverify_cpu_clear_xd_\@:
movl $MSR_IA32_MISC_ENABLE, %ecx
rdmsr
btrl $2, %edx # clear MSR_IA32_MISC_ENABLE_XD_DISABLE
- jnc .Lverify_cpu_check # only write MSR if bit was changed
+ jnc .Lverify_cpu_check_\@ # only write MSR if bit was changed
wrmsr
-.Lverify_cpu_check:
+.Lverify_cpu_check_\@:
movl $0x1,%eax # Does the cpu have what it takes
cpuid
andl $REQUIRED_MASK0,%edx
xorl $REQUIRED_MASK0,%edx
- jnz .Lverify_cpu_no_longmode
+ jnz .Lverify_cpu_no_longmode_\@
movl $0x80000000,%eax # See if extended cpuid is implemented
cpuid
cmpl $0x80000001,%eax
- jb .Lverify_cpu_no_longmode # no extended cpuid
+ jb .Lverify_cpu_no_longmode_\@ # no extended cpuid
movl $0x80000001,%eax # Does the cpu have what it takes
cpuid
andl $REQUIRED_MASK1,%edx
xorl $REQUIRED_MASK1,%edx
- jnz .Lverify_cpu_no_longmode
+ jnz .Lverify_cpu_no_longmode_\@
-.Lverify_cpu_sse_test:
+.Lverify_cpu_sse_test_\@:
movl $1,%eax
cpuid
andl $SSE_MASK,%edx
cmpl $SSE_MASK,%edx
- je .Lverify_cpu_sse_ok
+ je .Lverify_cpu_sse_ok_\@
test %di,%di
- jz .Lverify_cpu_no_longmode # only try to force SSE on AMD
+ jz .Lverify_cpu_no_longmode_\@ # only try to force SSE on AMD
movl $MSR_K7_HWCR,%ecx
rdmsr
btr $15,%eax # enable SSE
wrmsr
xor %di,%di # don't loop
- jmp .Lverify_cpu_sse_test # try again
+ jmp .Lverify_cpu_sse_test_\@ # try again
-.Lverify_cpu_no_longmode:
+.Lverify_cpu_no_longmode_\@:
popf # Restore caller passed flags
movl $1,%eax
- RET
-.Lverify_cpu_sse_ok:
+ jmp .Lverify_cpu_ret_\@
+
+.Lverify_cpu_sse_ok_\@:
popf # Restore caller passed flags
xorl %eax, %eax
- RET
-SYM_FUNC_END(verify_cpu)
+
+.Lverify_cpu_ret_\@:
+.endm
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -34,6 +34,8 @@
#include <asm/realmode.h>
#include "realmode.h"
+#include "../kernel/verify_cpu.S"
+
.text
.code16
@@ -52,7 +54,8 @@ SYM_CODE_START(trampoline_start)
# Setup stack
movl $rm_stack_end, %esp
- call verify_cpu # Verify the cpu supports long mode
+ VERIFY_CPU # Verify the cpu supports long mode
+
testl %eax, %eax # Check for return code
jnz no_longmode
@@ -100,8 +103,6 @@ SYM_CODE_START(sev_es_trampoline_start)
SYM_CODE_END(sev_es_trampoline_start)
#endif /* CONFIG_AMD_MEM_ENCRYPT */
-#include "../kernel/verify_cpu.S"
-
.section ".text32","ax"
.code32
.balign 4
@@ -180,6 +181,8 @@ SYM_CODE_START(pa_trampoline_compat)
movl $rm_stack_end, %esp
movw $__KERNEL_DS, %dx
+ VERIFY_CPU
+
movl $(CR0_STATE & ~X86_CR0_PG), %eax
movl %eax, %cr0
ljmpl $__KERNEL32_CS, $pa_startup_32
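[Editorial note: one detail worth spelling out in the XD_DISABLE path of the patch above: rdmsr returns the 64-bit MSR split across EDX:EAX, so the `btrl $2, %edx` clears bit 34 of MSR_IA32_MISC_ENABLE, the XD_DISABLE bit. An illustrative sketch of that bit arithmetic, not kernel code:]

```python
MSR_IA32_MISC_ENABLE_XD_DISABLE_BIT = 34  # architectural bit number

def split_rdmsr(value):
    """Model rdmsr: low 32 bits land in EAX, high 32 bits in EDX."""
    return value & 0xFFFFFFFF, (value >> 32) & 0xFFFFFFFF

def clear_xd_disable(value):
    """Model the btrl $2, %edx; wrmsr sequence from verify_cpu."""
    eax, edx = split_rdmsr(value)
    edx &= ~(1 << 2)          # btrl $2, %edx: MSR bit 34 is EDX bit 2
    return (edx << 32) | eax  # wrmsr takes the value back as EDX:EAX

msr = 1 << MSR_IA32_MISC_ENABLE_XD_DISABLE_BIT
print(hex(clear_xd_disable(msr)))  # 0x0, i.e. XD re-enabled
```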
* Re: [PATCH v2 0/7] x86: retbleed=stuff fixes
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
` (7 preceding siblings ...)
2023-01-18 1:54 ` [PATCH v2 0/7] x86: retbleed=stuff fixes Joan Bruguera
@ 2023-05-16 13:59 ` Josh Poimboeuf
8 siblings, 0 replies; 22+ messages in thread
From: Josh Poimboeuf @ 2023-05-16 13:59 UTC (permalink / raw)
To: Peter Zijlstra
Cc: x86, Joan Bruguera, linux-kernel, Juergen Gross,
Rafael J. Wysocki, xen-devel, Jan Beulich, Roger Pau Monne,
Kees Cook, mark.rutland, Andrew Cooper, Jörg Rödel,
H. Peter Anvin
On Mon, Jan 16, 2023 at 03:25:33PM +0100, Peter Zijlstra wrote:
> Hi all,
>
> Patches to address the various callthunk fails reported by Joan.
>
> The first two patches are new (and I've temporarily dropped the
> restore_processor_state sealing).
>
> It is my understanding that AP bringup will always use the 16bit trampoline
> path, if this is not the case, please holler.
Ping?
--
Josh
Thread overview: 22+ messages
2023-01-16 14:25 [PATCH v2 0/7] x86: retbleed=stuff fixes Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 1/7] x86/boot: Remove verify_cpu() from secondary_startup_64() Peter Zijlstra
2023-01-17 9:25 ` Ingo Molnar
2023-01-18 9:45 ` Peter Zijlstra
2023-01-18 11:46 ` kirill.shutemov
2023-01-19 19:35 ` H. Peter Anvin
2023-01-26 14:15 ` Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 2/7] x86/boot: Delay sev_verify_cbit() a bit Peter Zijlstra
2023-01-19 13:18 ` Borislav Petkov
2023-01-20 12:43 ` Jörg Rödel
2023-01-16 14:25 ` [PATCH v2 3/7] x86/power: De-paravirt restore_processor_state() Peter Zijlstra
2023-01-20 20:26 ` Borislav Petkov
2023-01-16 14:25 ` [PATCH v2 4/7] x86/power: Inline write_cr[04]() Peter Zijlstra
2023-01-20 20:29 ` Borislav Petkov
2023-01-16 14:25 ` [PATCH v2 5/7] x86/callthunk: No callthunk for restore_processor_state() Peter Zijlstra
2023-01-16 14:25 ` [PATCH v2 6/7] x86/power: Sprinkle some noinstr Peter Zijlstra
2023-01-17 9:31 ` Ingo Molnar
2023-01-17 11:29 ` Peter Zijlstra
2023-01-17 11:54 ` Ingo Molnar
2023-01-16 14:25 ` [PATCH v2 7/7] PM / hibernate: Add minimal noinstr annotations Peter Zijlstra
2023-01-18 1:54 ` [PATCH v2 0/7] x86: retbleed=stuff fixes Joan Bruguera
2023-05-16 13:59 ` Josh Poimboeuf