* [MODERATED] [PATCH 0/5] SSB extra v2 0
@ 2018-05-07 23:18 Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 1/5] SSB extra v2 1 Dave Hansen
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-07 23:18 UTC (permalink / raw)
  To: speck; +Cc: Dave Hansen

This is still highly RFC.  I would mostly like feedback from folks
as to whether the overall method of plugging into BPF is more
acceptable now.  This has not been tested on real hardware yet.

--

BPF is a potential source of gadgets that can be used for memory
disambiguation-based attacks.  To help mitigate these, we set the
SPEC_CTRL bit that enables the reduced (memory) speculation mode on
the processor while running BPF code.

This improves on the last version quite a bit: there is no overhead
for unmitigated BPF programs.  Instead of calling the mitigation
code unconditionally, we stick it in a wrapper function which then
calls the real BPF payload.  In the unmitigated case, the function
pointer just points to the payload directly.
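
Roughly, the hook at program-select time ends up looking like the
sketch below (paraphrased; the real hunk is in patch 2):

	/* Done after the JIT (or interpreter) has set fp->bpf_func: */
	if (fp->need_mitigation) {
		fp->bpf_orig_func = fp->bpf_func;     /* stash the payload */
		fp->bpf_func = bpf_untrusted_wrapper; /* toggles mitigation */
	}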

This also does not do any of the arch-specific bits.  Thomas has
some code to do that.
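
Purely as a strawman for what that might look like on x86 (the
SPEC_CTRL_SSBD bit name, the wrmsrl() usage and the x86_spec_ctrl_base
baseline below are my assumptions about the arch side, not something
from this series):

#include <asm/msr.h>
#include <asm/nospec-branch.h>

/* Hypothetical x86 sketch only -- not part of this series: */
void arch_bpf_spec_ctrl_enable(void)
{
	/* Turn on reduced (memory) speculation while BPF runs. */
	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | SPEC_CTRL_SSBD);
}

void arch_bpf_spec_ctrl_disable(void)
{
	/* Restore the baseline SPEC_CTRL value once BPF is done. */
	wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}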

Big caveat: I've only been looking at BPF for a couple of days.
I'm not deeply familiar with the code and I'm positive there are bugs
and tons of ways to optimize this.

Thomas Gleixner (1):
  bpf: Add speculation control interface

Dave Hansen (4):
  bpf: install mitigation wrapper for untrusted programs
  bpf: set prog->need_mitigation when calling other programs
  bpf: populate prog->need_mitigation for unprivileged programs
  bpf: only mitigate programs that write to memory

 include/linux/bpf_verifier.h |  1 +
 include/linux/filter.h       |  7 ++++++-
 include/linux/nospec.h       |  4 ++++
 kernel/bpf/core.c            | 14 ++++++++++++++
 kernel/bpf/verifier.c        | 13 +++++++++++++
 net/core/filter.c            | 25 +++++++++++++++++++++++++
 6 files changed, 63 insertions(+), 1 deletion(-)

-- 
2.9.5

* [MODERATED] [PATCH 1/5] SSB extra v2 1
  2018-05-07 23:18 [MODERATED] [PATCH 0/5] SSB extra v2 0 Dave Hansen
@ 2018-05-07 23:18 ` Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 2/5] SSB extra v2 2 Dave Hansen
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-07 23:18 UTC (permalink / raw)
  To: speck; +Cc: Dave Hansen

From: Thomas Gleixner <tglx@linutronix.de>

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/nospec.h | 4 ++++
 kernel/bpf/core.c      | 5 +++++
 2 files changed, 9 insertions(+)

diff --git a/include/linux/nospec.h b/include/linux/nospec.h
index a908c95..d04ca98 100644
--- a/include/linux/nospec.h
+++ b/include/linux/nospec.h
@@ -63,4 +63,8 @@ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which);
 int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
 			     unsigned long ctrl);
 
+/* Speculation control for BPF enforced mitigation */
+void arch_bpf_spec_ctrl_disable(void);
+void arch_bpf_spec_ctrl_enable(void);
+
 #endif /* _LINUX_NOSPEC_H */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ba03ec3..dd7caa1 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -31,6 +31,7 @@
 #include <linux/rbtree_latch.h>
 #include <linux/kallsyms.h>
 #include <linux/rcupdate.h>
+#include <linux/nospec.h>
 
 #include <asm/unaligned.h>
 
@@ -1805,6 +1806,10 @@ const struct bpf_func_proto bpf_tail_call_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+/* Stubs for architectures which do not implement BPF speculation control */
+void __weak arch_bpf_spec_ctrl_disable(void) { }
+void __weak arch_bpf_spec_ctrl_enable(void) { }
+
 /* Stub for JITs that only support cBPF. eBPF programs are interpreted.
  * It is encouraged to implement bpf_int_jit_compile() instead, so that
  * eBPF and implicitly also cBPF can get JITed!
-- 
2.9.5

* [MODERATED] [PATCH 2/5] SSB extra v2 2
  2018-05-07 23:18 [MODERATED] [PATCH 0/5] SSB extra v2 0 Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 1/5] SSB extra v2 1 Dave Hansen
@ 2018-05-07 23:18 ` Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 3/5] SSB extra v2 3 Dave Hansen
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-07 23:18 UTC (permalink / raw)
  To: speck; +Cc: Dave Hansen

From: Dave Hansen <dave.hansen@linux.intel.com>

BPF programs can potentially do bad things, especially where
side-channels are concerned.  This introduces some infrastructure to
help deal with them.

Each BPF program has a function pointer (->bpf_func) which either
points to the interpreter or to the actual JIT'd code.  We override it
for untrusted programs (defined as those loaded without CAP_SYS_ADMIN),
but leave it in place for trusted ones.  This way, the runtime overhead
is basically zero for programs which receive no mitigation: the call
goes directly into the original code.

The replacement function (bpf_untrusted_wrapper()) just turns on
mitigations when entering the program and turns them off before
returning.  Ideally, we would change the ->bpf_func call to pass
'bpf_prog' itself instead of deriving it via container_of(), but
deriving it means less churn for now.
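
For context, the existing dispatch is roughly the BPF_PROG_RUN() shape
sketched below; it passes prog->insnsi as the insn argument, which is
what lets the wrapper get back to the bpf_prog:

	/* Rough shape of the existing call site (BPF_PROG_RUN()): */
	ret = (*prog->bpf_func)(ctx, prog->insnsi);

	/* ...so the wrapper can recover the program with: */
	prog = container_of(insn, struct bpf_prog, insnsi[0]);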

Why not use the BPF machine itself to do the mitigation by patching
in instructions?

We could attempt to do the mitigation in the eBPF instructions
themselves, but that gets messy, fast.  We would need something like
bpf__gen_prologue(), which I ran away from in terror.  We would also
need to ensure that offloaded programs did not receive the mitigation
instructions.

eBPF programs can also "call" other programs, but those calls never
return, so they are more similar to execve().  That means that the
mitigations at the beginning of programs *have* to be conditional in
some way if implemented inside the eBPF machine.  It also means that
we do not get a nice, clean call/return pair.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 include/linux/filter.h |  7 ++++++-
 kernel/bpf/core.c      |  9 +++++++++
 net/core/filter.c      | 25 +++++++++++++++++++++++++
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index fc4e8f9..ac10038 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -467,7 +467,8 @@ struct bpf_prog {
 				dst_needed:1,	/* Do we need dst entry? */
 				blinded:1,	/* Was blinded */
 				is_func:1,	/* program is a bpf function */
-				kprobe_override:1; /* Do we override a kprobe? */
+				kprobe_override:1, /* Do we override a kprobe? */
+				need_mitigation:1; /* Need speculation fixes? */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
@@ -477,6 +478,8 @@ struct bpf_prog {
 	struct sock_fprog_kern	*orig_prog;	/* Original BPF program */
 	unsigned int		(*bpf_func)(const void *ctx,
 					    const struct bpf_insn *insn);
+	unsigned int		(*bpf_orig_func)(const void *ctx,
+					    const struct bpf_insn *insn);
 	/* Instructions for interpreter */
 	union {
 		struct sock_filter	insns[0];
@@ -1051,4 +1054,6 @@ struct bpf_sock_ops_kern {
 					 */
 };
 
+unsigned int bpf_untrusted_wrapper(const void *ctx, const struct bpf_insn *insn);
+
 #endif /* __LINUX_FILTER_H__ */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index dd7caa1..15e4865 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1501,6 +1501,15 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 		if (*err)
 			return fp;
 	}
+
+	/* Must be done after JIT has set fp->bpf_func: */
+	if (fp->need_mitigation) {
+		/* Stash the original function: */
+		fp->bpf_orig_func = fp->bpf_func;
+		/* Replace it with the wrapper for untrusted programs: */
+		fp->bpf_func = bpf_untrusted_wrapper;
+	}
+
 	bpf_prog_lock_ro(fp);
 
 	/* The tail call compatibility check can only be done at
diff --git a/net/core/filter.c b/net/core/filter.c
index d31aff9..f8b8099 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -48,6 +48,7 @@
 #include <linux/filter.h>
 #include <linux/ratelimit.h>
 #include <linux/seccomp.h>
+#include <linux/nospec.h>
 #include <linux/if_vlan.h>
 #include <linux/bpf.h>
 #include <net/sch_generic.h>
@@ -5649,3 +5650,27 @@ int sk_get_filter(struct sock *sk, struct sock_filter __user *ubuf,
 	release_sock(sk);
 	return ret;
 }
+
+/*
+ * This function is installed as the ->bpf_func in all bpf_prog's that
+ * are not trusted.  They might have been loaded by an untrusted user
+ * or call potentially untrusted code.  They might be doing something
+ * that reduces kernel hardening, so add mitigations while they are
+ * running.
+ *
+ * This is nice for things like offloaded BPF programs because they
+ * do not use ->bpf_func.
+ */
+unsigned int bpf_untrusted_wrapper(const void *ctx, const struct bpf_insn *insn)
+{
+	struct bpf_prog *prog = container_of(insn, struct bpf_prog, insnsi[0]);
+	unsigned int ret;
+
+	arch_bpf_spec_ctrl_enable();
+	ret = prog->bpf_orig_func(ctx, insn);
+	arch_bpf_spec_ctrl_disable();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(bpf_untrusted_wrapper);
+
-- 
2.9.5

* [MODERATED] [PATCH 3/5] SSB extra v2 3
  2018-05-07 23:18 [MODERATED] [PATCH 0/5] SSB extra v2 0 Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 1/5] SSB extra v2 1 Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 2/5] SSB extra v2 2 Dave Hansen
@ 2018-05-07 23:18 ` Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 4/5] SSB extra v2 4 Dave Hansen
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-07 23:18 UTC (permalink / raw)
  To: speck; +Cc: Dave Hansen

From: Dave Hansen <dave.hansen@linux.intel.com>

When calling other programs, assume that those programs are not
trusted and mark the current program as needing mitigation.

This can obviously be improved by checking the called programs to see
whether they actually need mitigation.  That step is left as an
exercise for the reader (the folks who have actually looked at the
BPF code for more than two days in their lives).

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 kernel/bpf/verifier.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5dd1dcb..ee454b6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5481,6 +5481,8 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 			 * the program array.
 			 */
 			prog->cb_access = 1;
+			prog->need_mitigation = 1;
+
 			env->prog->aux->stack_depth = MAX_BPF_STACK;
 
 			/* mark bpf_tail_call as different opcode to avoid
-- 
2.9.5

* [MODERATED] [PATCH 4/5] SSB extra v2 4
  2018-05-07 23:18 [MODERATED] [PATCH 0/5] SSB extra v2 0 Dave Hansen
                   ` (2 preceding siblings ...)
  2018-05-07 23:18 ` [MODERATED] [PATCH 3/5] SSB extra v2 3 Dave Hansen
@ 2018-05-07 23:18 ` Dave Hansen
  2018-05-07 23:18 ` [MODERATED] [PATCH 5/5] SSB extra v2 5 Dave Hansen
  2018-05-08  0:36 ` [MODERATED] " Andi Kleen
  5 siblings, 0 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-07 23:18 UTC (permalink / raw)
  To: speck; +Cc: Dave Hansen

From: Dave Hansen <dave.hansen@linux.intel.com>

For now, assume that all programs loaded by unprivileged users are
untrusted and need mitigation.

Again, there are lots of optimizations we can do here, but this is
obviously a conservative place to start.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 kernel/bpf/syscall.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4ca46df..fa4d9e3 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1292,6 +1292,8 @@ static int bpf_prog_load(union bpf_attr *attr)
 			   bpf_prog_insn_size(prog)) != 0)
 		goto free_prog;
 
+	/* Mitigate all programs loaded without CAP_SYS_ADMIN: */
+	prog->need_mitigation = !capable(CAP_SYS_ADMIN);
 	prog->orig_prog = NULL;
 	prog->jited = 0;
 
-- 
2.9.5

* [MODERATED] [PATCH 5/5] SSB extra v2 5
  2018-05-07 23:18 [MODERATED] [PATCH 0/5] SSB extra v2 0 Dave Hansen
                   ` (3 preceding siblings ...)
  2018-05-07 23:18 ` [MODERATED] [PATCH 4/5] SSB extra v2 4 Dave Hansen
@ 2018-05-07 23:18 ` Dave Hansen
  2018-05-08  0:36 ` [MODERATED] " Andi Kleen
  5 siblings, 0 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-07 23:18 UTC (permalink / raw)
  To: speck; +Cc: Dave Hansen

From: Dave Hansen <dave.hansen@linux.intel.com>

Think of this more as an example of what we can do than a full-blown
mitigation.  Since we already run BPF programs through the verifier,
and we know that speculative-store-bypass requires stores (aka writes),
we have the verifier record whether it saw a memory write.  If it did
not, for instance because the program operates on registers alone, we
do not bother with the mitigation.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
---
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/syscall.c         |  2 --
 kernel/bpf/verifier.c        | 11 +++++++++++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 7e61c39..396b5f1 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -189,6 +189,7 @@ struct bpf_verifier_env {
 	u32 id_gen;			/* used to generate unique reg IDs */
 	bool allow_ptr_leaks;
 	bool seen_direct_write;
+	bool saw_memory_write;		/* Did the program write to memory? */
 	struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 	struct bpf_verifier_log log;
 	u32 subprog_starts[BPF_MAX_SUBPROGS];
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index fa4d9e3..4ca46df 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1292,8 +1292,6 @@ static int bpf_prog_load(union bpf_attr *attr)
 			   bpf_prog_insn_size(prog)) != 0)
 		goto free_prog;
 
-	/* Mitigate all programs loaded without CAP_SYS_ADMIN: */
-	prog->need_mitigation = !capable(CAP_SYS_ADMIN);
 	prog->orig_prog = NULL;
 	prog->jited = 0;
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ee454b6..83fc611 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1601,6 +1601,9 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	struct bpf_func_state *state;
 	int size, err = 0;
 
+	if (t == BPF_WRITE)
+		env->saw_memory_write = true;
+
 	size = bpf_size_to_bytes(bpf_size);
 	if (size < 0)
 		return size;
@@ -5739,6 +5742,14 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
 		convert_pseudo_ld_imm64(env);
 	}
 
+	/* To mitigate speculative-store-bypass, we only need
+	 * mitigation for programs that write to memory.  Mark that
+	 * the program needs mitigation if loaded without
+	 * CAP_SYS_ADMIN:
+	 */
+	if (env->saw_memory_write && !capable(CAP_SYS_ADMIN))
+		env->prog->need_mitigation = true;
+
 err_release_maps:
 	if (!env->prog->aux->used_maps)
 		/* if we didn't copy map pointers into bpf_prog_info, release
-- 
2.9.5

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-07 23:18 [MODERATED] [PATCH 0/5] SSB extra v2 0 Dave Hansen
                   ` (4 preceding siblings ...)
  2018-05-07 23:18 ` [MODERATED] [PATCH 5/5] SSB extra v2 5 Dave Hansen
@ 2018-05-08  0:36 ` Andi Kleen
  2018-05-08  0:46   ` Dave Hansen
  5 siblings, 1 reply; 16+ messages in thread
From: Andi Kleen @ 2018-05-08  0:36 UTC (permalink / raw)
  To: speck

> +	/* To mitigate speculative-store-bypass, we only need
> +	 * mitigation for programs that write to memory.  Mark that
> +	 * the program needs mitigation if loaded without
> +	 * CAP_SYS_ADMIN:
> +	 */
> +	if (env->saw_memory_write && !capable(CAP_SYS_ADMIN))
> +		env->prog->need_mitigation = true;

Flag should have a more descriptive name specific to SSB. 
I bet this won't be the last mitigation needed for EBPF :|

-Andi

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-08  0:36 ` [MODERATED] " Andi Kleen
@ 2018-05-08  0:46   ` Dave Hansen
  2018-05-09 15:36     ` Thomas Gleixner
  0 siblings, 1 reply; 16+ messages in thread
From: Dave Hansen @ 2018-05-08  0:46 UTC (permalink / raw)
  To: speck

On 05/07/2018 05:36 PM, speck for Andi Kleen wrote:
>> +	/* To mitigate speculative-store-bypass, we only need
>> +	 * mitigation for programs that write to memory.  Mark that
>> +	 * the program needs mitigation if loaded without
>> +	 * CAP_SYS_ADMIN:
>> +	 */
>> +	if (env->saw_memory_write && !capable(CAP_SYS_ADMIN))
>> +		env->prog->need_mitigation = true;
> Flag should have a more descriptive name specific to SSB. 
> I bet this won't be the last mitigation needed for EBPF :|

For now, it *is* generic, though.  It's whether the mitigation function
gets injected into the call path or not.

Once we have multiple things we have to mitigate for at entry/exit, we
may need to add some other flags to the bpf_prog that say exactly what
mitigations we need, like a "bpf_prog->does_memory_write" that
explicitly tells us if we need the SSB mitigation.
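
Purely as a sketch of that direction (nothing actually in this series),
the bitfield in struct bpf_prog could grow something like:

				kprobe_override:1, /* Do we override a kprobe? */
				need_mitigation:1, /* entry/exit wrapper installed? */
				does_memory_write:1; /* saw stores -> SSB-relevant */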

* Re: [PATCH 5/5] SSB extra v2 5
  2018-05-08  0:46   ` Dave Hansen
@ 2018-05-09 15:36     ` Thomas Gleixner
  2018-05-09 15:43       ` [MODERATED] " Dave Hansen
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2018-05-09 15:36 UTC (permalink / raw)
  To: speck

On Mon, 7 May 2018, speck for Dave Hansen wrote:
> On 05/07/2018 05:36 PM, speck for Andi Kleen wrote:
> >> +	/* To mitigate speculative-store-bypass, we only need
> >> +	 * mitigation for programs that write to memory.  Mark that
> >> +	 * the program needs mitigation if loaded without
> >> +	 * CAP_SYS_ADMIN:
> >> +	 */
> >> +	if (env->saw_memory_write && !capable(CAP_SYS_ADMIN))
> >> +		env->prog->need_mitigation = true;
> > Flag should have a more descriptive name specific to SSB. 
> > I bet this won't be the last mitigation needed for EBPF :|
> 
> For now, it *is* generic, though.  It's whether the mitigation function
> gets injected into the call path or not.
> 
> Once we have multiple things we have to mitigate for at entry/exit, we
> may need to add some other flags to the bpf_prog that say exactly what
> mitigations we need, like a "bpf_prog->does_memory_write" that
> explicitly tells us if we need the SSB mitigation.

Now looking at the overhead of this SSB hardware mitigation stuff I really
wonder what's the state on investigating software mitigations for BPF.

The magic PDF talks about that, but has anyone looked at that?

Thanks,

	tglx

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 15:36     ` Thomas Gleixner
@ 2018-05-09 15:43       ` Dave Hansen
  2018-05-09 15:55         ` Greg KH
  2018-05-09 21:04         ` Linus Torvalds
  0 siblings, 2 replies; 16+ messages in thread
From: Dave Hansen @ 2018-05-09 15:43 UTC (permalink / raw)
  To: speck

On 05/09/2018 08:36 AM, speck for Thomas Gleixner wrote:
>> For now, it *is* generic, though.  It's whether the mitigation function
>> gets injected into the call path or not.
>>
>> Once we have multiple things we have to mitigate for at entry/exit, we
>> may need to add some other flags to the bpf_prog that say exactly what
>> mitigations we need, like a "bpf_prog->does_memory_write" that
>> explicitly tells us if we need the SSB mitigation.
> Now looking at the overhead of this SSB hardware mitigation stuff I really
> wonder what's the state on investigating software mitigations for BPF.
> 
> The magic PDF talks about that, but has anyone looked at that?

I don't know of anyone that has.  I'm fairly sure no one at Intel has
looked in detail, at least in the post-Spectre timeframe.

Frankly, I think it's mostly a waste of time to do it without having the
BPF folks deeply involved from day one.  They were not fans of the way
we (Intel) did Spectre V1 mitigations and I don't expect this to be any
different.

They were deeply opposed to LFENCE getting used, for instance.

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 15:43       ` [MODERATED] " Dave Hansen
@ 2018-05-09 15:55         ` Greg KH
  2018-05-09 16:03           ` Dave Hansen
  2018-05-09 21:04         ` Linus Torvalds
  1 sibling, 1 reply; 16+ messages in thread
From: Greg KH @ 2018-05-09 15:55 UTC (permalink / raw)
  To: speck

On Wed, May 09, 2018 at 08:43:30AM -0700, speck for Dave Hansen wrote:
> On 05/09/2018 08:36 AM, speck for Thomas Gleixner wrote:
> >> For now, it *is* generic, though.  It's whether the mitigation function
> >> gets injected into the call path or not.
> >>
> >> Once we have multiple things we have to mitigate for at entry/exit, we
> >> may need to add some other flags to the bpf_prog that say exactly what
> >> mitigations we need, like a "bpf_prog->does_memory_write" that
> >> explicitly tells us if we need the SSB mitigation.
> > Now looking at the overhead of this SSB hardware mitigation stuff I really
> > wonder what's the state on investigating software mitigations for BPF.
> > 
> > The magic PDF talks about that, but has anyone looked at that?
> 
> I don't know of anyone that has.  I'm fairly sure no one at Intel has
> looked in detail, at least in the post-Spectre timeframe.
> 
> Frankly, I think it's mostly a waste of time to do it without having the
> BPF folks deeply involved from day one.  They were not fans of the way
> we (Intel) did Spectre V1 mitigations and I don't expect this to be any
> different.
> 
> They were deeply opposed to LFENCE getting used, for instance.

Great, why can't we drag them into this then?  We obviously need their
help, as anything we come up with is going to just be made fun of by
them :)

thanks,

greg k-h

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 15:55         ` Greg KH
@ 2018-05-09 16:03           ` Dave Hansen
  2018-05-09 16:05             ` Jiri Kosina
  0 siblings, 1 reply; 16+ messages in thread
From: Dave Hansen @ 2018-05-09 16:03 UTC (permalink / raw)
  To: speck

On 05/09/2018 08:55 AM, speck for Greg KH wrote:
>> Frankly, I think it's mostly a waste of time to do it without having the
>> BPF folks deeply involved from day one.  They were not fans of the way
>> we (Intel) did Spectre V1 mitigations and I don't expect this to be any
>> different.
>>
>> They were deeply opposed to LFENCE getting used, for instance.
> Great, why can't we drag them into this then?  We obviously need their
> help, as anything we come up with is going to just be made fun of by
> them :)

I've asked specifically that we get Alexei Starovoitov on here.  I've
been told that it's in the works, but it's a bit opaque to me what's
actually going on.

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 16:03           ` Dave Hansen
@ 2018-05-09 16:05             ` Jiri Kosina
  2018-05-09 16:36               ` Dave Hansen
  0 siblings, 1 reply; 16+ messages in thread
From: Jiri Kosina @ 2018-05-09 16:05 UTC (permalink / raw)
  To: speck

On Wed, 9 May 2018, speck for Dave Hansen wrote:

> I've asked specifically that we get Alexei Starovoitov on here.  I've 
> been told that it's in the works, but it's a bit opaque to me what's 
> actually going on.

I asked (admittedly on this list only) Jon to bring Daniel in here as 
well, but no response to that, unfortunately.

-- 
Jiri Kosina
SUSE Labs

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 16:05             ` Jiri Kosina
@ 2018-05-09 16:36               ` Dave Hansen
  2018-05-16 15:23                 ` Jon Masters
  0 siblings, 1 reply; 16+ messages in thread
From: Dave Hansen @ 2018-05-09 16:36 UTC (permalink / raw)
  To: speck

On 05/09/2018 09:05 AM, speck for Jiri Kosina wrote:
>> I've asked specifically that we get Alexei Starovoitov on here.  I've 
>> been told that it's in the works, but it's a bit opaque to me what's 
>> actually going on.
> I asked (admittedly on this list only) Jon to bring Daniel in here as 
> well, but no response to that, unfortunately.

I just asked about this.  At least from Intel's perspective, Daniel is
a bit harder because he's not attached to a big company that folks
have existing relationships with.

How about we do Alexei first, and then we can work on Daniel if he
really is critical?

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 15:43       ` [MODERATED] " Dave Hansen
  2018-05-09 15:55         ` Greg KH
@ 2018-05-09 21:04         ` Linus Torvalds
  1 sibling, 0 replies; 16+ messages in thread
From: Linus Torvalds @ 2018-05-09 21:04 UTC (permalink / raw)
  To: speck



On Wed, 9 May 2018, speck for Dave Hansen wrote:
> 
> Frankly, I think it's mostly a waste of time to do it without having the
> BPF folks deeply involved from day one.  They were not fans of the way
> we (Intel) did Spectre V1 mitigations and I don't expect this to be any
> different.
> 
> They were deeply opposed to LFENCE getting used, for instance.

If the main worry is using the store buffer bypass to basically bypass 
type safety (by loading a pointer that was previously stored as something 
else), I would assume that it could be done with something much more 
targeted than an lfence. 

But I do suspect that once they see an MSR write on each entry/exit, 
they'll say "we'll do the lfence". You can probably have some simple 
greedy model where you only do an lfence for pointer loads, and only if 
you did a store before, and maybe it won't be too painful.
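
Something like the sketch below is roughly what I mean, purely as an
illustration: it treats every memory load after a store as interesting
rather than just pointer loads, and the function name and the
pr_debug() reporting are made up here, not an existing interface.

#include <linux/filter.h>
#include <linux/printk.h>

/* Walk the instructions and note where a fence would even be
 * considered: only loads that follow at least one store.
 */
static void note_ssb_fence_sites(const struct bpf_insn *insns, int len)
{
	bool store_seen = false;
	int i;

	for (i = 0; i < len; i++) {
		u8 class = BPF_CLASS(insns[i].code);

		if (class == BPF_ST || class == BPF_STX)
			store_seen = true;
		else if (store_seen && class == BPF_LDX)
			pr_debug("lfence candidate before insn %d\n", i);
	}
}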

But yes, we definitely should have Alexei involved. For some reason I 
thought he already was.

              Linus

* [MODERATED] Re: [PATCH 5/5] SSB extra v2 5
  2018-05-09 16:36               ` Dave Hansen
@ 2018-05-16 15:23                 ` Jon Masters
  0 siblings, 0 replies; 16+ messages in thread
From: Jon Masters @ 2018-05-16 15:23 UTC (permalink / raw)
  To: speck

On 05/09/2018 12:36 PM, speck for Dave Hansen wrote:
> On 05/09/2018 09:05 AM, speck for Jiri Kosina wrote:
>>> I've asked specifically that we get Alexei Starovoitov on here.  I've 
>>> been told that it's in the works, but it's a bit opaque to me what's 
>>> actually going on.
>> I asked (admittedly on this list only) Jon to bring Daniel in here as 
>> well, but no response to that, unfortunately.

Just to explicitly follow up (because I missed your second mail, sorry
Jiri): I saw the original ask, but then your next mail said it was ok.
I missed some email last week due to Red Hat Summit and patch
scrambling. I'm currently going over everything I might have missed,
but I am also flying to a funeral in the UK; will you let me know if
you'd still like me to push for additional folks to be read in?
Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop

