From mboxrd@z Thu Jan  1 00:00:00 1970
Return-path:
Received: from mga04.intel.com ([192.55.52.120])
	by Galois.linutronix.de with esmtps (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256)
	(Exim 4.80) (envelope-from ) id 1fEMoE-00061g-ER
	for speck@linutronix.de; Fri, 04 May 2018 00:34:42 +0200
From: Dave Hansen
Subject: [MODERATED] [PATCH 3/5] SSB extra 1
Date: Thu, 3 May 2018 15:29:46 -0700
Message-Id: <d4ffdf50f25bca207b3942fc4a390d2273487517.1525383411.git.dave.hansen@intel.com>
In-Reply-To:
References:
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
To: speck@linutronix.de
Cc: Dave Hansen
List-ID:

From: Dave Hansen

The previous patches put in place the infrastructure to tell when BPF
code is running.  Now, we hook into that code to call out to
architecture-specific code which will implement those mitigations.

Signed-off-by: Dave Hansen
Cc: Andi Kleen
Cc: Tim Chen
---
 include/linux/filter.h |  7 +++++++
 include/linux/nospec.h |  9 +++++++++
 net/core/filter.c      | 23 +++++++++++++++--------
 3 files changed, 31 insertions(+), 8 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 83c1298..ad63d9a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include <linux/nospec.h>
 
 #include
 
@@ -496,6 +497,12 @@ static inline void bpf_enter_prog(const struct bpf_prog *fp)
 {
 	int *count = &get_cpu_var(bpf_prog_ran);
 	(*count)++;
+	/*
+	 * Upon the first entry to BPF code, we need to reduce
+	 * memory speculation to mitigate attacks targeting it.
+	 */
+	if (*count == 1)
+		cpu_enter_reduced_memory_speculation();
 }
 
 extern void bpf_leave_prog_deferred(const struct bpf_prog *fp);
diff --git a/include/linux/nospec.h b/include/linux/nospec.h
index 1e63a0a..037ed8e 100644
--- a/include/linux/nospec.h
+++ b/include/linux/nospec.h
@@ -60,4 +60,13 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 int arch_prctl_set_spec_ctrl(unsigned long which, unsigned long ctrl);
 int arch_prctl_get_spec_ctrl(unsigned long which);
 
+#ifndef CONFIG_ARCH_HAS_REDUCED_MEMORY_SPECULATION
+static inline void cpu_enter_reduced_memory_speculation(void)
+{
+}
+static inline void cpu_leave_reduced_memory_speculation(void)
+{
+}
+#endif
+
 #endif /* _LINUX_NOSPEC_H */
diff --git a/net/core/filter.c b/net/core/filter.c
index ffca000..e7d7a29 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -33,6 +33,7 @@
 #include
 #include
 #include
+#include <linux/nospec.h>
 #include
 #include
 #include
@@ -5662,17 +5663,23 @@ DEFINE_PER_CPU(unsigned int, bpf_prog_ran);
 EXPORT_SYMBOL_GPL(bpf_prog_ran);
 
 static void bpf_done_on_this_cpu(struct work_struct *work)
 {
-	if (!this_cpu_dec_return(bpf_prog_ran))
-		return;
+	if (this_cpu_dec_return(bpf_prog_ran)) {
+		/*
+		 * This is unexpected.  The elevated refcount indicates
+		 * being in the *middle* of a BPF program, which should
+		 * be impossible.  They are executed inside
+		 * rcu_read_lock() where we can not sleep and where
+		 * preemption is disabled.
+		 */
+		WARN_ON_ONCE(1);
+	}
 	/*
-	 * This is unexpected.  The elevated refcount indicates
-	 * being in the *middle* of a BPF program, which should
-	 * be impossible.  They are executed inside
-	 * rcu_read_lock() where we can not sleep and where
-	 * preemption is disabled.
+	 * Unsafe BPF code is no longer running, disable mitigations.
+	 * This must be done after bpf_prog_ran is decremented because
+	 * the mitigation code looks at its state.
 	 */
-	WARN_ON_ONCE(1);
+	cpu_leave_reduced_memory_speculation();
 }
 
 DEFINE_PER_CPU(struct delayed_work, bpf_prog_delayed_work);
-- 
2.9.5
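
The architecture-specific implementation of the
cpu_enter/leave_reduced_memory_speculation() pair is supplied by a
separate patch in this series and is not shown in this message.  For
illustration only, here is a minimal, hypothetical sketch of what an
x86 implementation could look like, assuming SSBD is toggled through
the IA32_SPEC_CTRL MSR and borrowing the MSR and feature names later
used in mainline (MSR_IA32_SPEC_CTRL, SPEC_CTRL_SSBD, X86_FEATURE_SSBD);
the real implementation in this series may differ:

#include <linux/percpu.h>
#include <asm/cpufeature.h>
#include <asm/msr.h>

/* Hypothetical sketch, not part of the posted patch. */
static DEFINE_PER_CPU(u64, saved_spec_ctrl);

void cpu_enter_reduced_memory_speculation(void)
{
	u64 val;

	/* Nothing to do if this CPU has no SSBD control. */
	if (!boot_cpu_has(X86_FEATURE_SSBD))
		return;

	/* Save the current value, then set the SSBD bit for this CPU. */
	rdmsrl(MSR_IA32_SPEC_CTRL, val);
	__this_cpu_write(saved_spec_ctrl, val);
	wrmsrl(MSR_IA32_SPEC_CTRL, val | SPEC_CTRL_SSBD);
}

void cpu_leave_reduced_memory_speculation(void)
{
	if (!boot_cpu_has(X86_FEATURE_SSBD))
		return;

	/* Restore whatever SPEC_CTRL value was in place on entry. */
	wrmsrl(MSR_IA32_SPEC_CTRL, __this_cpu_read(saved_spec_ctrl));
}

Saving and restoring the previous MSR value per CPU keeps the sketch
from clobbering other SPEC_CTRL users.  The per-CPU accesses are safe
here because, as the commit message notes, BPF programs run under
rcu_read_lock() with preemption disabled; a real implementation would
also need to coordinate with the kernel's context-switch and KVM
SPEC_CTRL handling.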