* [PATCH bpf-next v2] bpf, seccomp: fix false positive preemption splat for cbpf->ebpf progs
@ 2019-02-20 23:01 Daniel Borkmann
  2019-02-20 23:59 ` Alexei Starovoitov
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Borkmann @ 2019-02-20 23:01 UTC (permalink / raw)
  To: ast; +Cc: keescook, netdev, Daniel Borkmann

In 568f196756ad ("bpf: check that BPF programs run with preemption disabled")
a check was added to BPF_PROG_RUN() asserting that preemption is disabled on
every invocation, so that eBPF assumptions (e.g. per-CPU maps) are not
broken. This does not hold for seccomp, however: only cBPF -> eBPF migrated
programs are loaded there, and they do not use any functionality that would
require the assertion. Fix the false positive by adding and using a
SECCOMP_RUN() variant that omits the cant_sleep() check.

Fixes: 568f196756ad ("bpf: check that BPF programs run with preemption disabled")
Reported-by: syzbot+8bf19ee2aa580de7a2a7@syzkaller.appspotmail.com
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Kees Cook <keescook@chromium.org>
---
 v1 -> v2:
  - More elaborate comment and added SECCOMP_RUN
  - Added Kees' ACK from earlier v1 patch

 include/linux/filter.h | 22 +++++++++++++++++++++-
 kernel/seccomp.c       |  2 +-
 2 files changed, 22 insertions(+), 2 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index f32b3ec..cd7f957 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -533,7 +533,27 @@ struct sk_filter {
 	struct bpf_prog	*prog;
 };
 
-#define BPF_PROG_RUN(filter, ctx)  ({ cant_sleep(); (*(filter)->bpf_func)(ctx, (filter)->insnsi); })
+#define __bpf_prog_run(prog, ctx)		\
+	(*(prog)->bpf_func)(ctx, (prog)->insnsi)
+#define __bpf_prog_run__may_preempt(prog, ctx)	\
+	({ __bpf_prog_run(prog, ctx); })
+#define __bpf_prog_run__non_preempt(prog, ctx)	\
+	({ cant_sleep(); __bpf_prog_run(prog, ctx); })
+
+/* Preemption must be disabled when native eBPF programs are run in
+ * order not to break per-CPU data structures (e.g. per-CPU maps);
+ * make sure to throw a stack trace under CONFIG_DEBUG_ATOMIC_SLEEP
+ * when we find that preemption is still enabled.
+ *
+ * The only exception today is seccomp, where progs have transitioned
+ * from cBPF to eBPF and native eBPF is _not_ supported; they can
+ * safely run with preemption enabled.
+ */
+#define BPF_PROG_RUN(prog, ctx)			\
+	__bpf_prog_run__non_preempt(prog, ctx)
+
+#define SECCOMP_RUN(prog, ctx)			\
+	__bpf_prog_run__may_preempt(prog, ctx)
 
 #define BPF_SKB_CB_LEN QDISC_CB_PRIV_LEN
 
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e815781..701a3cf 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -268,7 +268,7 @@ static u32 seccomp_run_filters(const struct seccomp_data *sd,
 	 * value always takes priority (ignoring the DATA).
 	 */
 	for (; f; f = f->prev) {
-		u32 cur_ret = BPF_PROG_RUN(f->prog, sd);
+		u32 cur_ret = SECCOMP_RUN(f->prog, sd);
 
 		if (ACTION_ONLY(cur_ret) < ACTION_ONLY(ret)) {
 			ret = cur_ret;
-- 
2.9.5



Thread overview: 11+ messages
2019-02-20 23:01 [PATCH bpf-next v2] bpf, seccomp: fix false positive preemption splat for cbpf->ebpf progs Daniel Borkmann
2019-02-20 23:59 ` Alexei Starovoitov
2019-02-21  4:02   ` Alexei Starovoitov
2019-02-21  5:31     ` Kees Cook
2019-02-21  8:53       ` Daniel Borkmann
2019-02-21 12:56         ` Jann Horn
2019-02-21 19:29           ` Alexei Starovoitov
2019-02-21 19:53             ` Kees Cook
2019-02-21 20:36               ` Alexei Starovoitov
2019-02-21 22:14                 ` Kees Cook
2019-02-22  0:22                   ` Andy Lutomirski
