From: Andi Kleen <ak@linux.intel.com>
Subject: [MODERATED] [PATCH v6 15/43] MDSv6
Date: Sun, 24 Feb 2019 07:07:21 -0800
Message-Id: <80f966f2e8e5c82fa364c2ae1969712a497faad7.1551019522.git.ak@linux.intel.com>
To: speck@linutronix.de
Cc: Andi Kleen <ak@linux.intel.com>

From: Andi Kleen <ak@linux.intel.com>
Subject: x86/speculation/mds: Schedule cpu clear on context switch

On context switch, schedule a cpu clear on the next kernel exit when:

- we are switching between different processes, or
- we are switching away from a kernel thread.

Kernel threads, such as work queue workers, are assumed to potentially
hold sensitive data (another user's data, or crypto keys).

switch_mm() already distinguishes these cases: we schedule a clear
either when the mm is different, or when we were in lazy mm mode,
which means a kernel thread ran before us.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/process.h |  1 +
 arch/x86/mm/tlb.c         | 14 ++++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/arch/x86/kernel/process.h b/arch/x86/kernel/process.h
index 320ab978fb1f..976722d8b537 100644
--- a/arch/x86/kernel/process.h
+++ b/arch/x86/kernel/process.h
@@ -2,6 +2,7 @@
 //
 // Code shared between 32 and 64 bit
 
+#include <asm/clearcpu.h>
 #include <asm/spec-ctrl.h>
 
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p);
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 999d6d8f0bef..995420034f57 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -342,6 +342,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 				 !cpumask_test_cpu(cpu, mm_cpumask(next))))
 			cpumask_set_cpu(cpu, mm_cpumask(next));
 
+		/*
+		 * We switched through a kernel thread, so schedule
+		 * a cpu clear to protect the thread.
+		 */
+		if (static_cpu_has_bug(X86_BUG_MDS) && was_lazy)
+			lazy_clear_cpu();
+
 		/*
 		 * If the CPU is not in lazy TLB mode, we are just switching
 		 * from one thread in a process to another thread in the same
@@ -376,6 +383,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		 */
 		cond_ibpb(tsk);
 
+		/*
+		 * We're switching to a different process, so schedule
+		 * a cpu clear.
+		 */
+		if (static_cpu_has_bug(X86_BUG_MDS))
+			lazy_clear_cpu();
+
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
 			/*
 			 * If our current stack is in vmalloc space and isn't
-- 
2.17.2
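
Note: lazy_clear_cpu() is defined elsewhere in this series and is not
shown in this patch. A minimal sketch of the mechanism the commit
message describes (a clear scheduled now, performed on the next kernel
exit), assuming a per-cpu "clear pending" flag consumed by the exit
path, might look roughly like the following. The names
cpu_clear_pending, check_lazy_clear_cpu and clear_cpu_buffers are
hypothetical, not taken from the series:

#include <linux/percpu.h>

/* Hypothetical per-cpu flag: a buffer clear is pending on this CPU. */
static DEFINE_PER_CPU(bool, cpu_clear_pending);

/*
 * Request a CPU buffer clear on the next kernel exit instead of
 * clearing immediately; several context switches can happen before
 * the kernel actually returns to user space, so only one clear is
 * needed at the end.
 */
static inline void lazy_clear_cpu(void)
{
	this_cpu_write(cpu_clear_pending, true);
}

/*
 * Hypothetical hook late on the kernel-exit path: if a clear was
 * scheduled since kernel entry, consume the flag and clear the CPU
 * buffers before returning to user space.
 */
static inline void check_lazy_clear_cpu(void)
{
	if (this_cpu_read(cpu_clear_pending)) {
		this_cpu_write(cpu_clear_pending, false);
		clear_cpu_buffers();	/* assumed clearing primitive, e.g. a VERW sequence */
	}
}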