From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mga07.intel.com ([134.134.136.100])
	by Galois.linutronix.de with esmtps (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256)
	(Exim 4.80) (envelope-from ) id 1fFpUG-0000KE-7H
	for speck@linutronix.de; Tue, 08 May 2018 01:24:08 +0200
From: Dave Hansen
Subject: [MODERATED] [PATCH 2/5] SSB extra v2 2
Date: Mon, 7 May 2018 16:18:43 -0700
Message-Id: <8e72615bb02e3b433c57659652e0f4c31555eb98.1525734796.git.dave.hansen@intel.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
To: speck@linutronix.de
Cc: Dave Hansen

From: Dave Hansen

BPF programs can potentially do bad things, especially where
side-channels are concerned.  This patch introduces some infrastructure
to help deal with them.

Each BPF program has a function pointer (->bpf_func) which either
points to an interpreter or to the actual JIT'd code.  We override it
when running untrusted programs, but leave it in place for trusted
ones (trust being defined by having CAP_SYS_ADMIN at program load
time).  This way, the runtime overhead is basically zero for programs
which receive no mitigation: callers just call directly into the
original code.

The function we override it with (bpf_untrusted_wrapper()) just turns
on mitigations when entering the program and turns them off before
returning.

Ideally, we would change the ->bpf_func call to actually pass
'bpf_prog' itself instead of deriving it via container_of(), but this
approach means less churn for now.

Why not use the BPF machine itself to do the mitigation by patching in
instructions?  We could attempt to do the mitigation in the eBPF
instructions themselves, but that gets messy, fast.  We would need
something like bpf__gen_prologue(), which I ran away from in terror.
We would also need to ensure that offloaded programs did not receive
the mitigation instructions.

eBPF programs can also "call" other programs, but those calls never
return, so they are more similar to execve().  That means that the
mitigations at the beginning of programs *have* to be conditional in
some way if implemented inside the eBPF machine.  It also means that
we do not get a nice, clean call/return pair.

Signed-off-by: Dave Hansen
---
 include/linux/filter.h |  7 ++++++-
 kernel/bpf/core.c      |  9 +++++++++
 net/core/filter.c      | 25 +++++++++++++++++++++++++
 3 files changed, 40 insertions(+), 1 deletion(-)
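
Note: the hunk that actually sets ->need_mitigation at load time is not
part of this patch; it lives elsewhere in the series.  As a rough sketch
of the intent, assuming it is keyed off capable(CAP_SYS_ADMIN) in the
BPF_PROG_LOAD path (the helper name below is made up purely for
illustration):

	#include <linux/capability.h>
	#include <linux/filter.h>

	/* Illustrative only; the real hunk is in another patch of this series. */
	static void bpf_prog_mark_trust(struct bpf_prog *prog)
	{
		/* CAP_SYS_ADMIN loaders are trusted and keep the direct ->bpf_func call: */
		prog->need_mitigation = !capable(CAP_SYS_ADMIN);
	}

The point is that the decision is made once, at load time, before
bpf_prog_select_runtime() gets a chance to install the wrapper.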
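Similarly, arch_bpf_spec_ctrl_enable()/arch_bpf_spec_ctrl_disable() are
defined outside this patch.  A minimal sketch of what they might look
like on x86, assuming the mitigation is the SSBD bit in
MSR_IA32_SPEC_CTRL and that x86_spec_ctrl_base holds the baseline value
(the real hooks may also need to deal with preemption and with AMD's
LS_CFG-based mechanism):

	#include <asm/msr.h>
	#include <asm/msr-index.h>
	#include <asm/nospec-branch.h>

	/* Sketch only: force SSBD on while an untrusted program runs on this CPU. */
	static inline void arch_bpf_spec_ctrl_enable(void)
	{
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | SPEC_CTRL_SSBD);
	}

	/* Sketch only: return to the baseline SPEC_CTRL value. */
	static inline void arch_bpf_spec_ctrl_disable(void)
	{
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
	}
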
diff --git a/include/linux/filter.h b/include/linux/filter.h
index fc4e8f9..ac10038 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -467,7 +467,8 @@ struct bpf_prog {
 				dst_needed:1,	/* Do we need dst entry? */
 				blinded:1,	/* Was blinded */
 				is_func:1,	/* program is a bpf function */
-				kprobe_override:1; /* Do we override a kprobe? */
+				kprobe_override:1, /* Do we override a kprobe? */
+				need_mitigation:1; /* Need speculation fixes? */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
@@ -477,6 +478,8 @@ struct bpf_prog {
 	struct sock_fprog_kern	*orig_prog;	/* Original BPF program */
 	unsigned int		(*bpf_func)(const void *ctx,
 					    const struct bpf_insn *insn);
+	unsigned int		(*bpf_orig_func)(const void *ctx,
+						 const struct bpf_insn *insn);
 	/* Instructions for interpreter */
 	union {
 		struct sock_filter	insns[0];
@@ -1051,4 +1054,6 @@ struct bpf_sock_ops_kern {
 	 */
 };
 
+unsigned int bpf_untrusted_wrapper(const void *ctx, const struct bpf_insn *insn);
+
 #endif /* __LINUX_FILTER_H__ */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index dd7caa1..15e4865 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1501,6 +1501,15 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 		if (*err)
 			return fp;
 	}
+
+	/* Must be done after JIT has set fp->bpf_func: */
+	if (fp->need_mitigation) {
+		/* Stash the original function: */
+		fp->bpf_orig_func = fp->bpf_func;
+		/* Replace it with the wrapper for untrusted programs: */
+		fp->bpf_func = bpf_untrusted_wrapper;
+	}
+
 	bpf_prog_lock_ro(fp);
 
 	/* The tail call compatibility check can only be done at
diff --git a/net/core/filter.c b/net/core/filter.c
index d31aff9..f8b8099 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -48,6 +48,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -5649,3 +5650,27 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
 	release_sock(sk);
 	return ret;
 }
+
+/*
+ * This function is installed as the ->bpf_func in all bpf_prog's that
+ * are not trusted.  They might have been loaded by an untrusted user
+ * or call potentially untrusted code.  They might be doing something
+ * that reduces kernel hardening, so add mitigations while they are
+ * running.
+ *
+ * This is nice for things like offloaded BPF programs because they
+ * do not use ->bpf_func.
+ */
+unsigned int bpf_untrusted_wrapper(const void *ctx, const struct bpf_insn *insn)
+{
+	struct bpf_prog *prog = container_of(insn, struct bpf_prog, insnsi[0]);
+	unsigned int ret;
+
+	arch_bpf_spec_ctrl_enable();
+	ret = prog->bpf_orig_func(ctx, insn);
+	arch_bpf_spec_ctrl_disable();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(bpf_untrusted_wrapper);
+
-- 
2.9.5