* [PATCH bpf-next 00/13] BPF improvements and fixes
@ 2018-01-26 22:33 Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu Daniel Borkmann
                   ` (13 more replies)
  0 siblings, 14 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

This set contains a small cleanup in cBPF prologue generation and
otherwise fixes an outstanding issue related to BPF to BPF calls
and exception handling. For details please see related patches.
Last but not least, the BPF selftests are extended with several new
test cases.

Thanks!

Daniel Borkmann (13):
  bpf: xor of a/x in cbpf can be done in 32 bit alu
  bpf: improve dead code sanitizing
  bpf: make unknown opcode handling more robust
  bpf: fix subprog verifier bypass by div/mod by 0 exception
  bpf, x86_64: remove obsolete exception handling from div/mod
  bpf, arm64: remove obsolete exception handling from div/mod
  bpf, s390x: remove obsolete exception handling from div/mod
  bpf, ppc64: remove obsolete exception handling from div/mod
  bpf, sparc64: remove obsolete exception handling from div/mod
  bpf, mips64: remove obsolete exception handling from div/mod
  bpf, mips64: remove unneeded zero check from div/mod with k
  bpf, arm: remove obsolete exception handling from div/mod
  bpf: add further test cases around div/mod and others

 arch/arm/net/bpf_jit_32.c                   |   8 -
 arch/arm64/net/bpf_jit_comp.c               |  13 --
 arch/mips/net/ebpf_jit.c                    |  29 +--
 arch/powerpc/net/bpf_jit_comp64.c           |   8 -
 arch/s390/net/bpf_jit_comp.c                |  10 -
 arch/sparc/net/bpf_jit_comp_64.c            |  18 --
 arch/x86/net/bpf_jit_comp.c                 |  20 --
 include/linux/filter.h                      |   2 +
 kernel/bpf/core.c                           | 258 ++++++++++++---------
 kernel/bpf/verifier.c                       |  62 +++--
 lib/test_bpf.c                              |   8 +-
 net/core/filter.c                           |  13 +-
 tools/testing/selftests/bpf/test_verifier.c | 343 ++++++++++++++++++++++++++--
 13 files changed, 546 insertions(+), 246 deletions(-)

-- 
2.9.5

* [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-28 18:58   ` [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done Naveen N. Rao
  2018-01-26 22:33 ` [PATCH bpf-next 02/13] bpf: improve dead code sanitizing Daniel Borkmann
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

Very minor optimization; saves 1 byte per program in x86_64
JIT in cBPF prologue.
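
For the curious: the byte comes from the dropped REX.W prefix on
the xor of A, which maps to rax (a sketch of the two encodings;
the xor of X lands in an extended register that needs a REX
prefix either way, so it stays the same size):

  31 c0        xor eax,eax     /* BPF_ALU32 variant, 2 bytes */
  48 31 c0     xor rax,rax     /* BPF_ALU64 variant, 3 bytes */

Since 32 bit ops implicitly zero the upper half of the register
on x86_64, the semantics are unchanged.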

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 net/core/filter.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 18da42a..cba2f73 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -401,8 +401,8 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 		/* Classic BPF expects A and X to be reset first. These need
 		 * to be guaranteed to be the first two instructions.
 		 */
-		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
-		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
+		*new_insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
+		*new_insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
 
 		/* All programs must keep CTX in callee saved BPF_REG_CTX.
 		 * In eBPF case it's done by the compiler, here we need to
-- 
2.9.5

* [PATCH bpf-next 02/13] bpf: improve dead code sanitizing
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 03/13] bpf: make unknown opcode handling more robust Daniel Borkmann
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

Given we recently had c131187db2d3 ("bpf: fix branch pruning
logic") and 95a762e2c8c9 ("bpf: fix incorrect sign extension in
check_alu_op()"), where the verifier wrongly assumed a branch to
be dead and therefore skipped verifying it, we should not just
replace the dead code parts with nops (mov r0,r0). If a bug such
as the one fixed in 95a762e2c8c9 ever occurs again, such that the
runtime could execute those insns, then one of the potential
issues with the current scheme is that the nops could sit at the
end of the program, and we could thus execute out of bounds at
some point.

The best option in such a case would be to just exit the BPF
program altogether and return an exception code. However, given
this would require two instructions, and such a dead code gap
could be just a single insn long, we would need to place an
'r0 = X; ret' snippet at the very end after the user program or
at the start before it (where we'd skip that region on prog
entry), and then place unconditional ja's into the dead code gap.

While more complex but possible, there's still another roadblock
that currently prevents this, namely BPF to BPF calls. The issue
here is that such an exception could be returned from a callee,
but the caller would not know that it's an exception that needs
to be propagated further down. An alternative with little
complexity is to just use a 'ja -1' insn for now, which will trap
the execution here instead of silently doing bad things if we
ever get there due to bugs.
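
For illustration, a two insn dead gap would then look as follows
after sanitization (a sketch in the verifier log's syntax):

  before:              after sanitize_dead_code():
  12: r3 = r2          12: goto pc-1
  13: r0 += r3         13: goto pc-1

The 'ja -1' jump targets the insn itself, so should a buggy path
ever reach the gap, execution gets stuck right there instead of
running past the end of the program.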

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/verifier.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index dfb138b..8365259 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5064,14 +5064,21 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
 	return new_prog;
 }
 
-/* The verifier does more data flow analysis than llvm and will not explore
- * branches that are dead at run time. Malicious programs can have dead code
- * too. Therefore replace all dead at-run-time code with nops.
+/* The verifier does more data flow analysis than llvm and will not
+ * explore branches that are dead at run time. Malicious programs can
+ * have dead code too. Therefore replace all dead at-run-time code
+ * with 'ja -1'.
+ *
+ * Just nops are not optimal, e.g. if they would sit at the end of the
+ * program and through another bug we would manage to jump there, then
+ * we'd execute beyond program memory otherwise. Returning exception
+ * code also wouldn't work since we can have subprogs where the dead
+ * code could be located.
  */
 static void sanitize_dead_code(struct bpf_verifier_env *env)
 {
 	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
-	struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
+	struct bpf_insn trap = BPF_JMP_IMM(BPF_JA, 0, 0, -1);
 	struct bpf_insn *insn = env->prog->insnsi;
 	const int insn_cnt = env->prog->len;
 	int i;
@@ -5079,7 +5086,7 @@ static void sanitize_dead_code(struct bpf_verifier_env *env)
 	for (i = 0; i < insn_cnt; i++) {
 		if (aux_data[i].seen)
 			continue;
-		memcpy(insn + i, &nop, sizeof(nop));
+		memcpy(insn + i, &trap, sizeof(trap));
 	}
 }
 
-- 
2.9.5

* [PATCH bpf-next 03/13] bpf: make unknown opcode handling more robust
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 02/13] bpf: improve dead code sanitizing Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 04/13] bpf: fix subprog verifier bypass by div/mod by 0 exception Daniel Borkmann
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

Recent findings by syzkaller fixed in 7891a87efc71 ("bpf: arsh is
not supported in 32 bit alu thus reject it") triggered a warning
in the interpreter due to unknown opcode not being rejected by
the verifier. The 'return 0' for an unknown opcode is really not
optimal, since with BPF to BPF calls, this would go untracked by
the verifier.

Do two things here to improve the situation: i) perform a basic insn
sanity check early on in the verification phase and reject every
non-uapi insn right there. The bpf_opcode_in_insntable() table
reuses the same mapping as the jumptable in ___bpf_prog_run() sans
the non-public mappings. And ii) in ___bpf_prog_run() we do need
to BUG in the case where the verifier would ever create an unknown
opcode due to some rewrites.

Note that JITs do not have such issues since they would punt to
the interpreter in these situations. Moreover, the BPF_JIT_ALWAYS_ON
option would also help to avoid such unknown opcodes in the first
place.
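
As a sketch of how the single opcode list feeds both consumers,
an entry like INSN_3(ALU, ADD, X) expands once per user:

  /* via BPF_INSN_3_TBL in bpf_opcode_in_insntable(): */
  [BPF_ALU | BPF_ADD | BPF_X] = true,
  /* via BPF_INSN_3_LBL in ___bpf_prog_run()'s jump table: */
  [BPF_ALU | BPF_ADD | BPF_X] = &&ALU_ADD_X,

so the validity table and the interpreter's jump table cannot go
out of sync anymore.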

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/filter.h |   2 +
 kernel/bpf/core.c      | 250 ++++++++++++++++++++++++++++---------------------
 kernel/bpf/verifier.c  |   7 ++
 3 files changed, 154 insertions(+), 105 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 425056c..7bd06b4 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -688,6 +688,8 @@ static inline int sk_filter(struct sock *sk, struct sk_buff *skb)
 struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err);
 void bpf_prog_free(struct bpf_prog *fp);
 
+bool bpf_opcode_in_insntable(u8 code);
+
 struct bpf_prog *bpf_prog_alloc(unsigned int size, gfp_t gfp_extra_flags);
 struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
 				  gfp_t gfp_extra_flags);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 3aa0658..01962c4 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -782,6 +782,137 @@ noinline u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
 }
 EXPORT_SYMBOL_GPL(__bpf_call_base);
 
+/* All UAPI available opcodes. */
+#define BPF_INSN_MAP(INSN_2, INSN_3)		\
+	/* 32 bit ALU operations. */		\
+	/*   Register based. */			\
+	INSN_3(ALU, ADD, X),			\
+	INSN_3(ALU, SUB, X),			\
+	INSN_3(ALU, AND, X),			\
+	INSN_3(ALU, OR,  X),			\
+	INSN_3(ALU, LSH, X),			\
+	INSN_3(ALU, RSH, X),			\
+	INSN_3(ALU, XOR, X),			\
+	INSN_3(ALU, MUL, X),			\
+	INSN_3(ALU, MOV, X),			\
+	INSN_3(ALU, DIV, X),			\
+	INSN_3(ALU, MOD, X),			\
+	INSN_2(ALU, NEG),			\
+	INSN_3(ALU, END, TO_BE),		\
+	INSN_3(ALU, END, TO_LE),		\
+	/*   Immediate based. */		\
+	INSN_3(ALU, ADD, K),			\
+	INSN_3(ALU, SUB, K),			\
+	INSN_3(ALU, AND, K),			\
+	INSN_3(ALU, OR,  K),			\
+	INSN_3(ALU, LSH, K),			\
+	INSN_3(ALU, RSH, K),			\
+	INSN_3(ALU, XOR, K),			\
+	INSN_3(ALU, MUL, K),			\
+	INSN_3(ALU, MOV, K),			\
+	INSN_3(ALU, DIV, K),			\
+	INSN_3(ALU, MOD, K),			\
+	/* 64 bit ALU operations. */		\
+	/*   Register based. */			\
+	INSN_3(ALU64, ADD,  X),			\
+	INSN_3(ALU64, SUB,  X),			\
+	INSN_3(ALU64, AND,  X),			\
+	INSN_3(ALU64, OR,   X),			\
+	INSN_3(ALU64, LSH,  X),			\
+	INSN_3(ALU64, RSH,  X),			\
+	INSN_3(ALU64, XOR,  X),			\
+	INSN_3(ALU64, MUL,  X),			\
+	INSN_3(ALU64, MOV,  X),			\
+	INSN_3(ALU64, ARSH, X),			\
+	INSN_3(ALU64, DIV,  X),			\
+	INSN_3(ALU64, MOD,  X),			\
+	INSN_2(ALU64, NEG),			\
+	/*   Immediate based. */		\
+	INSN_3(ALU64, ADD,  K),			\
+	INSN_3(ALU64, SUB,  K),			\
+	INSN_3(ALU64, AND,  K),			\
+	INSN_3(ALU64, OR,   K),			\
+	INSN_3(ALU64, LSH,  K),			\
+	INSN_3(ALU64, RSH,  K),			\
+	INSN_3(ALU64, XOR,  K),			\
+	INSN_3(ALU64, MUL,  K),			\
+	INSN_3(ALU64, MOV,  K),			\
+	INSN_3(ALU64, ARSH, K),			\
+	INSN_3(ALU64, DIV,  K),			\
+	INSN_3(ALU64, MOD,  K),			\
+	/* Call instruction. */			\
+	INSN_2(JMP, CALL),			\
+	/* Exit instruction. */			\
+	INSN_2(JMP, EXIT),			\
+	/* Jump instructions. */		\
+	/*   Register based. */			\
+	INSN_3(JMP, JEQ,  X),			\
+	INSN_3(JMP, JNE,  X),			\
+	INSN_3(JMP, JGT,  X),			\
+	INSN_3(JMP, JLT,  X),			\
+	INSN_3(JMP, JGE,  X),			\
+	INSN_3(JMP, JLE,  X),			\
+	INSN_3(JMP, JSGT, X),			\
+	INSN_3(JMP, JSLT, X),			\
+	INSN_3(JMP, JSGE, X),			\
+	INSN_3(JMP, JSLE, X),			\
+	INSN_3(JMP, JSET, X),			\
+	/*   Immediate based. */		\
+	INSN_3(JMP, JEQ,  K),			\
+	INSN_3(JMP, JNE,  K),			\
+	INSN_3(JMP, JGT,  K),			\
+	INSN_3(JMP, JLT,  K),			\
+	INSN_3(JMP, JGE,  K),			\
+	INSN_3(JMP, JLE,  K),			\
+	INSN_3(JMP, JSGT, K),			\
+	INSN_3(JMP, JSLT, K),			\
+	INSN_3(JMP, JSGE, K),			\
+	INSN_3(JMP, JSLE, K),			\
+	INSN_3(JMP, JSET, K),			\
+	INSN_2(JMP, JA),			\
+	/* Store instructions. */		\
+	/*   Register based. */			\
+	INSN_3(STX, MEM,  B),			\
+	INSN_3(STX, MEM,  H),			\
+	INSN_3(STX, MEM,  W),			\
+	INSN_3(STX, MEM,  DW),			\
+	INSN_3(STX, XADD, W),			\
+	INSN_3(STX, XADD, DW),			\
+	/*   Immediate based. */		\
+	INSN_3(ST, MEM, B),			\
+	INSN_3(ST, MEM, H),			\
+	INSN_3(ST, MEM, W),			\
+	INSN_3(ST, MEM, DW),			\
+	/* Load instructions. */		\
+	/*   Register based. */			\
+	INSN_3(LDX, MEM, B),			\
+	INSN_3(LDX, MEM, H),			\
+	INSN_3(LDX, MEM, W),			\
+	INSN_3(LDX, MEM, DW),			\
+	/*   Immediate based. */		\
+	INSN_3(LD, IMM, DW),			\
+	/*   Misc (old cBPF carry-over). */	\
+	INSN_3(LD, ABS, B),			\
+	INSN_3(LD, ABS, H),			\
+	INSN_3(LD, ABS, W),			\
+	INSN_3(LD, IND, B),			\
+	INSN_3(LD, IND, H),			\
+	INSN_3(LD, IND, W)
+
+bool bpf_opcode_in_insntable(u8 code)
+{
+#define BPF_INSN_2_TBL(x, y)    [BPF_##x | BPF_##y] = true
+#define BPF_INSN_3_TBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = true
+	static const bool public_insntable[256] = {
+		[0 ... 255] = false,
+		/* Now overwrite non-defaults ... */
+		BPF_INSN_MAP(BPF_INSN_2_TBL, BPF_INSN_3_TBL),
+	};
+#undef BPF_INSN_3_TBL
+#undef BPF_INSN_2_TBL
+	return public_insntable[code];
+}
+
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 /**
  *	__bpf_prog_run - run eBPF program on a given context
@@ -793,115 +924,18 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 {
 	u64 tmp;
+#define BPF_INSN_2_LBL(x, y)    [BPF_##x | BPF_##y] = &&x##_##y
+#define BPF_INSN_3_LBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = &&x##_##y##_##z
 	static const void *jumptable[256] = {
 		[0 ... 255] = &&default_label,
 		/* Now overwrite non-defaults ... */
-		/* 32 bit ALU operations */
-		[BPF_ALU | BPF_ADD | BPF_X] = &&ALU_ADD_X,
-		[BPF_ALU | BPF_ADD | BPF_K] = &&ALU_ADD_K,
-		[BPF_ALU | BPF_SUB | BPF_X] = &&ALU_SUB_X,
-		[BPF_ALU | BPF_SUB | BPF_K] = &&ALU_SUB_K,
-		[BPF_ALU | BPF_AND | BPF_X] = &&ALU_AND_X,
-		[BPF_ALU | BPF_AND | BPF_K] = &&ALU_AND_K,
-		[BPF_ALU | BPF_OR | BPF_X]  = &&ALU_OR_X,
-		[BPF_ALU | BPF_OR | BPF_K]  = &&ALU_OR_K,
-		[BPF_ALU | BPF_LSH | BPF_X] = &&ALU_LSH_X,
-		[BPF_ALU | BPF_LSH | BPF_K] = &&ALU_LSH_K,
-		[BPF_ALU | BPF_RSH | BPF_X] = &&ALU_RSH_X,
-		[BPF_ALU | BPF_RSH | BPF_K] = &&ALU_RSH_K,
-		[BPF_ALU | BPF_XOR | BPF_X] = &&ALU_XOR_X,
-		[BPF_ALU | BPF_XOR | BPF_K] = &&ALU_XOR_K,
-		[BPF_ALU | BPF_MUL | BPF_X] = &&ALU_MUL_X,
-		[BPF_ALU | BPF_MUL | BPF_K] = &&ALU_MUL_K,
-		[BPF_ALU | BPF_MOV | BPF_X] = &&ALU_MOV_X,
-		[BPF_ALU | BPF_MOV | BPF_K] = &&ALU_MOV_K,
-		[BPF_ALU | BPF_DIV | BPF_X] = &&ALU_DIV_X,
-		[BPF_ALU | BPF_DIV | BPF_K] = &&ALU_DIV_K,
-		[BPF_ALU | BPF_MOD | BPF_X] = &&ALU_MOD_X,
-		[BPF_ALU | BPF_MOD | BPF_K] = &&ALU_MOD_K,
-		[BPF_ALU | BPF_NEG] = &&ALU_NEG,
-		[BPF_ALU | BPF_END | BPF_TO_BE] = &&ALU_END_TO_BE,
-		[BPF_ALU | BPF_END | BPF_TO_LE] = &&ALU_END_TO_LE,
-		/* 64 bit ALU operations */
-		[BPF_ALU64 | BPF_ADD | BPF_X] = &&ALU64_ADD_X,
-		[BPF_ALU64 | BPF_ADD | BPF_K] = &&ALU64_ADD_K,
-		[BPF_ALU64 | BPF_SUB | BPF_X] = &&ALU64_SUB_X,
-		[BPF_ALU64 | BPF_SUB | BPF_K] = &&ALU64_SUB_K,
-		[BPF_ALU64 | BPF_AND | BPF_X] = &&ALU64_AND_X,
-		[BPF_ALU64 | BPF_AND | BPF_K] = &&ALU64_AND_K,
-		[BPF_ALU64 | BPF_OR | BPF_X] = &&ALU64_OR_X,
-		[BPF_ALU64 | BPF_OR | BPF_K] = &&ALU64_OR_K,
-		[BPF_ALU64 | BPF_LSH | BPF_X] = &&ALU64_LSH_X,
-		[BPF_ALU64 | BPF_LSH | BPF_K] = &&ALU64_LSH_K,
-		[BPF_ALU64 | BPF_RSH | BPF_X] = &&ALU64_RSH_X,
-		[BPF_ALU64 | BPF_RSH | BPF_K] = &&ALU64_RSH_K,
-		[BPF_ALU64 | BPF_XOR | BPF_X] = &&ALU64_XOR_X,
-		[BPF_ALU64 | BPF_XOR | BPF_K] = &&ALU64_XOR_K,
-		[BPF_ALU64 | BPF_MUL | BPF_X] = &&ALU64_MUL_X,
-		[BPF_ALU64 | BPF_MUL | BPF_K] = &&ALU64_MUL_K,
-		[BPF_ALU64 | BPF_MOV | BPF_X] = &&ALU64_MOV_X,
-		[BPF_ALU64 | BPF_MOV | BPF_K] = &&ALU64_MOV_K,
-		[BPF_ALU64 | BPF_ARSH | BPF_X] = &&ALU64_ARSH_X,
-		[BPF_ALU64 | BPF_ARSH | BPF_K] = &&ALU64_ARSH_K,
-		[BPF_ALU64 | BPF_DIV | BPF_X] = &&ALU64_DIV_X,
-		[BPF_ALU64 | BPF_DIV | BPF_K] = &&ALU64_DIV_K,
-		[BPF_ALU64 | BPF_MOD | BPF_X] = &&ALU64_MOD_X,
-		[BPF_ALU64 | BPF_MOD | BPF_K] = &&ALU64_MOD_K,
-		[BPF_ALU64 | BPF_NEG] = &&ALU64_NEG,
-		/* Call instruction */
-		[BPF_JMP | BPF_CALL] = &&JMP_CALL,
+		BPF_INSN_MAP(BPF_INSN_2_LBL, BPF_INSN_3_LBL),
+		/* Non-UAPI available opcodes. */
 		[BPF_JMP | BPF_CALL_ARGS] = &&JMP_CALL_ARGS,
 		[BPF_JMP | BPF_TAIL_CALL] = &&JMP_TAIL_CALL,
-		/* Jumps */
-		[BPF_JMP | BPF_JA] = &&JMP_JA,
-		[BPF_JMP | BPF_JEQ | BPF_X] = &&JMP_JEQ_X,
-		[BPF_JMP | BPF_JEQ | BPF_K] = &&JMP_JEQ_K,
-		[BPF_JMP | BPF_JNE | BPF_X] = &&JMP_JNE_X,
-		[BPF_JMP | BPF_JNE | BPF_K] = &&JMP_JNE_K,
-		[BPF_JMP | BPF_JGT | BPF_X] = &&JMP_JGT_X,
-		[BPF_JMP | BPF_JGT | BPF_K] = &&JMP_JGT_K,
-		[BPF_JMP | BPF_JLT | BPF_X] = &&JMP_JLT_X,
-		[BPF_JMP | BPF_JLT | BPF_K] = &&JMP_JLT_K,
-		[BPF_JMP | BPF_JGE | BPF_X] = &&JMP_JGE_X,
-		[BPF_JMP | BPF_JGE | BPF_K] = &&JMP_JGE_K,
-		[BPF_JMP | BPF_JLE | BPF_X] = &&JMP_JLE_X,
-		[BPF_JMP | BPF_JLE | BPF_K] = &&JMP_JLE_K,
-		[BPF_JMP | BPF_JSGT | BPF_X] = &&JMP_JSGT_X,
-		[BPF_JMP | BPF_JSGT | BPF_K] = &&JMP_JSGT_K,
-		[BPF_JMP | BPF_JSLT | BPF_X] = &&JMP_JSLT_X,
-		[BPF_JMP | BPF_JSLT | BPF_K] = &&JMP_JSLT_K,
-		[BPF_JMP | BPF_JSGE | BPF_X] = &&JMP_JSGE_X,
-		[BPF_JMP | BPF_JSGE | BPF_K] = &&JMP_JSGE_K,
-		[BPF_JMP | BPF_JSLE | BPF_X] = &&JMP_JSLE_X,
-		[BPF_JMP | BPF_JSLE | BPF_K] = &&JMP_JSLE_K,
-		[BPF_JMP | BPF_JSET | BPF_X] = &&JMP_JSET_X,
-		[BPF_JMP | BPF_JSET | BPF_K] = &&JMP_JSET_K,
-		/* Program return */
-		[BPF_JMP | BPF_EXIT] = &&JMP_EXIT,
-		/* Store instructions */
-		[BPF_STX | BPF_MEM | BPF_B] = &&STX_MEM_B,
-		[BPF_STX | BPF_MEM | BPF_H] = &&STX_MEM_H,
-		[BPF_STX | BPF_MEM | BPF_W] = &&STX_MEM_W,
-		[BPF_STX | BPF_MEM | BPF_DW] = &&STX_MEM_DW,
-		[BPF_STX | BPF_XADD | BPF_W] = &&STX_XADD_W,
-		[BPF_STX | BPF_XADD | BPF_DW] = &&STX_XADD_DW,
-		[BPF_ST | BPF_MEM | BPF_B] = &&ST_MEM_B,
-		[BPF_ST | BPF_MEM | BPF_H] = &&ST_MEM_H,
-		[BPF_ST | BPF_MEM | BPF_W] = &&ST_MEM_W,
-		[BPF_ST | BPF_MEM | BPF_DW] = &&ST_MEM_DW,
-		/* Load instructions */
-		[BPF_LDX | BPF_MEM | BPF_B] = &&LDX_MEM_B,
-		[BPF_LDX | BPF_MEM | BPF_H] = &&LDX_MEM_H,
-		[BPF_LDX | BPF_MEM | BPF_W] = &&LDX_MEM_W,
-		[BPF_LDX | BPF_MEM | BPF_DW] = &&LDX_MEM_DW,
-		[BPF_LD | BPF_ABS | BPF_W] = &&LD_ABS_W,
-		[BPF_LD | BPF_ABS | BPF_H] = &&LD_ABS_H,
-		[BPF_LD | BPF_ABS | BPF_B] = &&LD_ABS_B,
-		[BPF_LD | BPF_IND | BPF_W] = &&LD_IND_W,
-		[BPF_LD | BPF_IND | BPF_H] = &&LD_IND_H,
-		[BPF_LD | BPF_IND | BPF_B] = &&LD_IND_B,
-		[BPF_LD | BPF_IMM | BPF_DW] = &&LD_IMM_DW,
 	};
+#undef BPF_INSN_3_LBL
+#undef BPF_INSN_2_LBL
 	u32 tail_call_cnt = 0;
 	void *ptr;
 	int off;
@@ -1302,8 +1336,14 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 		goto load_byte;
 
 	default_label:
-		/* If we ever reach this, we have a bug somewhere. */
-		WARN_RATELIMIT(1, "unknown opcode %02x\n", insn->code);
+		/* If we ever reach this, we have a bug somewhere. Die hard here
+		 * instead of just returning 0; we could be somewhere in a subprog,
+		 * so execution could continue otherwise which we do /not/ want.
+		 *
+		 * Note, verifier whitelists all opcodes in bpf_opcode_in_insntable().
+		 */
+		pr_warn("BPF interpreter: unknown opcode %02x\n", insn->code);
+		BUG_ON(1);
 		return 0;
 }
 STACK_FRAME_NON_STANDARD(___bpf_prog_run); /* jump table */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8365259..0c52694 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4981,6 +4981,13 @@ static int replace_map_fd_with_map_ptr(struct bpf_verifier_env *env)
 next_insn:
 			insn++;
 			i++;
+			continue;
+		}
+
+		/* Basic sanity check before we invest more work here. */
+		if (!bpf_opcode_in_insntable(insn->code)) {
+			verbose(env, "unknown opcode %02x\n", insn->code);
+			return -EINVAL;
 		}
 	}
 
-- 
2.9.5

* [PATCH bpf-next 04/13] bpf: fix subprog verifier bypass by div/mod by 0 exception
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (2 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 03/13] bpf: make unknown opcode handling more robust Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 05/13] bpf, x86_64: remove obsolete exception handling from div/mod Daniel Borkmann
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

One of the ugly leftovers from the early eBPF days is that div/mod
operations based on registers have a hard-coded src_reg == 0 test
in the interpreter as well as in JIT code generators that would
return from the BPF program with exit code 0. This was basically
adopted from the cBPF interpreter for historical reasons.

There are multiple reasons why this is very suboptimal and prone
to bugs. To name one: an abnormal program exit with return code 0
does not always map onto something suitable in the given program
type's exit code semantics. For example, '0' in tc means action
'ok', where the packet gets passed further up the stack, which is
simply undesirable for such cases (e.g. when implementing policy)
and also does not match other program types.
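
As a reminder, the relevant tc action codes from
include/uapi/linux/pkt_cls.h that such a forced 'return 0' is
interpreted against:

  #define TC_ACT_UNSPEC   (-1)
  #define TC_ACT_OK       0
  #define TC_ACT_SHOT     2

meaning an aborted program silently behaves as if it had accepted
the packet.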

While trying to work out an exception handling scheme, I also
noticed that programs crafted like the following will currently
pass the verifier:

  0: (bf) r6 = r1
  1: (85) call pc+8
  caller:
   R6=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  callee:
   frame1: R1=ctx(id=0,off=0,imm=0) R10=fp0,call_1
  10: (b4) (u32) r2 = (u32) 0
  11: (b4) (u32) r3 = (u32) 1
  12: (3c) (u32) r3 /= (u32) r2
  13: (61) r0 = *(u32 *)(r1 +76)
  14: (95) exit
  returning from callee:
   frame1: R0_w=pkt(id=0,off=0,r=0,imm=0)
           R1=ctx(id=0,off=0,imm=0) R2_w=inv0
           R3_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff))
           R10=fp0,call_1
  to caller at 2:
   R0_w=pkt(id=0,off=0,r=0,imm=0) R6=ctx(id=0,off=0,imm=0)
   R10=fp0,call_-1

  from 14 to 2: R0=pkt(id=0,off=0,r=0,imm=0)
                R6=ctx(id=0,off=0,imm=0) R10=fp0,call_-1
  2: (bf) r1 = r6
  3: (61) r1 = *(u32 *)(r1 +80)
  4: (bf) r2 = r0
  5: (07) r2 += 8
  6: (2d) if r2 > r1 goto pc+1
   R0=pkt(id=0,off=0,r=8,imm=0) R1=pkt_end(id=0,off=0,imm=0)
   R2=pkt(id=0,off=8,r=8,imm=0) R6=ctx(id=0,off=0,imm=0)
   R10=fp0,call_-1
  7: (71) r0 = *(u8 *)(r0 +0)
  8: (b7) r0 = 1
  9: (95) exit

  from 6 to 8: safe
  processed 16 insns (limit 131072), stack depth 0+0

Basically what happens is that in the subprog we make use of a
div/mod by 0 exception and in the 'normal' subprog's exit path
we just return skb->data back to the main prog. This has the
implication that the verifier thinks we always get a pkt pointer
in R0 while we still have the implicit 'return 0' from the div
as an alternative unconditional return path earlier. Thus, R0
then contains 0, meaning back in the parent prog we get the
address range of [0x0, skb->data_end] as read- and writeable.
Similar bypasses can be crafted with other pointer register types.

Since i) BPF_ABS/IND is not allowed in programs that contain
BPF to BPF calls (and generally its use is also discouraged in
native eBPF context), ii) unknown opcodes don't return zero
anymore, and iii) we don't return an exception code in dead
branches, the last remaining case to fix is the div/mod
handling.

What we would really need is some infrastructure to propagate
exceptions all the way to the original prog, unwinding the
current stack and returning that code to the caller of the
BPF program. In user space, such exception handling for similar
runtimes is typically implemented with setjmp(3) and longjmp(3),
which is not available in the kernel, though (kgdb used to
implement it in the kernel a long time ago). I implemented a PoC
exception handling mechanism in the BPF interpreter by porting
setjmp()/longjmp() to x86_64 and adding a new internal BPF_ABRT
opcode that can use a program specific exception code for all
exception cases we have (e.g. div/mod by 0, unknown opcodes,
etc). While this seems to work in the constrained BPF environment
(meaning, here, we don't need to deal with state e.g. from memory
allocations that we would need to undo before going into
exception state), it still has various drawbacks: i) we would
need to implement setjmp()/longjmp() for every arch supported in
the kernel, including the x86_64, arm64 and sparc64 JITs that
currently support calls, ii) it has an unconditional additional
cost on main program entry to store the CPU register state in the
initial setjmp() call, and we would need some way to pass the
jmp_buf down into ___bpf_prog_run() for the main prog and all
subprogs; storing it on the stack is not really nice either (the
other option would be per-cpu storage, which has the drawback
that we'd need to disable preemption for every BPF program type).
All in all, this approach would add a lot of complexity.
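
In user space, the PoC scheme would boil down to something like
the following sketch (user space only; the kernel has no
setjmp()/longjmp(), and interpret()/BPF_ABRT are hypothetical
names from the PoC):

  #include <setjmp.h>

  static jmp_buf env;

  int run_prog(void)
  {
          int code = setjmp(env); /* main prog entry: saves CPU reg state */

          if (code)               /* longjmp(env, code) landed here, e.g. */
                  return code;    /* from a BPF_ABRT deep in some subprog */
          return interpret();     /* runs main prog and all subprogs */
  }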

Another poor-man's solution would be to have some sort of
additional shared register or scratch buffer to hold state
for exceptions, and to test it after every call return in order
to chain returns and pass R0 all the way down to the BPF prog
caller. This is also problematic in various ways: i) an
additional register doesn't map well into JITs, and other scratch
space would have to live either in per-cpu storage, which, again,
only works when we disable preemption, or somewhere in the input
context, which is not available everywhere either, and ii) this
adds significant runtime overhead by placing conditionals after
each and every call, as well as implementation complexity.

Yet another option would be to teach the verifier that div/mod
can return an integer, which however is also complex to implement
as the verifier would need to walk such a fake 'mov r0,<code>;
exit;' sequence, and there would still be no guarantee that this
gets propagated further down to the BPF caller as a proper
exception code. For the parent prog, it is also not
distinguishable from a normal return of a constant scalar value.

The approach taken here is a completely different one with
little complexity and no additional overhead: we make use of
the fact that a div/mod by 0 is undefined behavior. Instead of
bailing out, we adopt into eBPF the same behavior that some major
archs like ARMv8 [0] implement: X div 0 results in 0, and
X mod 0 results in X. The aarch64 and aarch32 ISAs do not
generate any traps or other aborts of program execution for
unsigned divides. I verified this also with a test program
compiled by gcc and clang, and the behavior matches the spec.
Going forward, we adapt the eBPF verifier to emit such rewrites
once a div/mod by register is seen. cBPF is not touched and keeps
the existing 'return 0' semantics. Given the options, this seems
the most suitable of them all, also since major archs have
similar schemes in place. Given this is all in the realm of
undefined behavior, we still have the option to adapt if deemed
necessary, and this way we also gain more flexibility on the LLVM
code generation side (which is then fully visible to the
verifier). Thus, this patch i) fixes the panic seen with the
above program and ii) doesn't bypass the verifier's observations.

  [0] ARM Architecture Reference Manual, ARMv8 [ARM DDI 0487B.b]
      http://infocenter.arm.com/help/topic/com.arm.doc.ddi0487b.b/DDI0487B_b_armv8_arm.pdf
      1) aarch64 instruction set: section C3.4.7 and C6.2.279 (UDIV)
         "A division by zero results in a zero being written to
          the destination register, without any indication that
          the division by zero occurred."
      2) aarch32 instruction set: section F1.4.8 and F5.1.263 (UDIV)
         "For the SDIV and UDIV instructions, division by zero
          always returns a zero result."
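
Spelled out, a 32 bit 'r3 /= r2' is thus rewritten by the
verifier into the following sequence (see the mask_and_div
patchlet in the diff below; for the 64 bit variant the leading
masking mov is skipped):

  (u32) r2 = (u32) r2      /* mask upper 32 bits (JIT div bug workaround) */
  if (u32) r2 != 0 goto +2 /* divisor non-zero: run the original insn */
  (u32) r3 ^= (u32) r3     /* Rx div 0 -> 0 */
  goto +1
  (u32) r3 /= (u32) r2     /* original insn */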

Fixes: f4d7e40a5b71 ("bpf: introduce function calls (verification)")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/core.c     |  8 --------
 kernel/bpf/verifier.c | 38 ++++++++++++++++++++++++++++++--------
 net/core/filter.c     |  9 ++++++++-
 3 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 01962c4..610ac04 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -999,14 +999,10 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 		(*(s64 *) &DST) >>= IMM;
 		CONT;
 	ALU64_MOD_X:
-		if (unlikely(SRC == 0))
-			return 0;
 		div64_u64_rem(DST, SRC, &tmp);
 		DST = tmp;
 		CONT;
 	ALU_MOD_X:
-		if (unlikely((u32)SRC == 0))
-			return 0;
 		tmp = (u32) DST;
 		DST = do_div(tmp, (u32) SRC);
 		CONT;
@@ -1019,13 +1015,9 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
 		DST = do_div(tmp, (u32) IMM);
 		CONT;
 	ALU64_DIV_X:
-		if (unlikely(SRC == 0))
-			return 0;
 		DST = div64_u64(DST, SRC);
 		CONT;
 	ALU_DIV_X:
-		if (unlikely((u32)SRC == 0))
-			return 0;
 		tmp = (u32) DST;
 		do_div(tmp, (u32) SRC);
 		DST = (u32) tmp;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0c52694..5fb69a8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5400,15 +5400,37 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 	int i, cnt, delta = 0;
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
-		if (insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
+		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
 		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
-			/* due to JIT bugs clear upper 32-bits of src register
-			 * before div/mod operation
-			 */
-			insn_buf[0] = BPF_MOV32_REG(insn->src_reg, insn->src_reg);
-			insn_buf[1] = *insn;
-			cnt = 2;
-			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
+			struct bpf_insn mask_and_div[] = {
+				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
+				/* Rx div 0 -> 0 */
+				BPF_JMP_IMM(BPF_JNE, insn->src_reg, 0, 2),
+				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				*insn,
+			};
+			struct bpf_insn mask_and_mod[] = {
+				BPF_MOV32_REG(insn->src_reg, insn->src_reg),
+				/* Rx mod 0 -> Rx */
+				BPF_JMP_IMM(BPF_JEQ, insn->src_reg, 0, 1),
+				*insn,
+			};
+			struct bpf_insn *patchlet;
+
+			if (insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
+			    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+				patchlet = mask_and_div + (is64 ? 1 : 0);
+				cnt = ARRAY_SIZE(mask_and_div) - (is64 ? 1 : 0);
+			} else {
+				patchlet = mask_and_mod + (is64 ? 1 : 0);
+				cnt = ARRAY_SIZE(mask_and_mod) - (is64 ? 1 : 0);
+			}
+
+			new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
 			if (!new_prog)
 				return -ENOMEM;
 
diff --git a/net/core/filter.c b/net/core/filter.c
index cba2f73..acf855a 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -459,8 +459,15 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 				break;
 
 			if (fp->code == (BPF_ALU | BPF_DIV | BPF_X) ||
-			    fp->code == (BPF_ALU | BPF_MOD | BPF_X))
+			    fp->code == (BPF_ALU | BPF_MOD | BPF_X)) {
 				*insn++ = BPF_MOV32_REG(BPF_REG_X, BPF_REG_X);
+				/* Error with exception code on div/mod by 0.
+				 * For cBPF programs, this was always return 0.
+				 */
+				*insn++ = BPF_JMP_IMM(BPF_JNE, BPF_REG_X, 0, 2);
+				*insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
+				*insn++ = BPF_EXIT_INSN();
+			}
 
 			*insn = BPF_RAW_INSN(fp->code, BPF_REG_A, BPF_REG_X, 0, fp->k);
 			break;
-- 
2.9.5

* [PATCH bpf-next 05/13] bpf, x86_64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (3 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 04/13] bpf: fix subprog verifier bypass by div/mod by 0 exception Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 06/13] bpf, arm64: " Daniel Borkmann
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the x86_64 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 5acee51..4923d92 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -568,26 +568,6 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			 */
 			EMIT2(0x31, 0xd2);
 
-			if (BPF_SRC(insn->code) == BPF_X) {
-				/* if (src_reg == 0) return 0 */
-
-				/* cmp r11, 0 */
-				EMIT4(0x49, 0x83, 0xFB, 0x00);
-
-				/* jne .+9 (skip over pop, pop, xor and jmp) */
-				EMIT2(X86_JNE, 1 + 1 + 2 + 5);
-				EMIT1(0x5A); /* pop rdx */
-				EMIT1(0x58); /* pop rax */
-				EMIT2(0x31, 0xc0); /* xor eax, eax */
-
-				/* jmp cleanup_addr
-				 * addrs[i] - 11, because there are 11 bytes
-				 * after this insn: div, mov, pop, pop, mov
-				 */
-				jmp_offset = ctx->cleanup_addr - (addrs[i] - 11);
-				EMIT1_off32(0xE9, jmp_offset);
-			}
-
 			if (BPF_CLASS(insn->code) == BPF_ALU64)
 				/* div r11 */
 				EMIT3(0x49, 0xF7, 0xF3);
-- 
2.9.5

* [PATCH bpf-next 06/13] bpf, arm64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (4 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 05/13] bpf, x86_64: remove obsolete exception handling from div/mod Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 07/13] bpf, s390x: " Daniel Borkmann
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the arm64 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 arch/arm64/net/bpf_jit_comp.c | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 0775d5a..1d4f1da 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -390,18 +390,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 	case BPF_ALU | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
-	{
-		const u8 r0 = bpf2a64[BPF_REG_0];
-
-		/* if (src == 0) return 0 */
-		jmp_offset = 3; /* skip ahead to else path */
-		check_imm19(jmp_offset);
-		emit(A64_CBNZ(is64, src, jmp_offset), ctx);
-		emit(A64_MOVZ(1, r0, 0, 0), ctx);
-		jmp_offset = epilogue_offset(ctx);
-		check_imm26(jmp_offset);
-		emit(A64_B(jmp_offset), ctx);
-		/* else */
 		switch (BPF_OP(code)) {
 		case BPF_DIV:
 			emit(A64_UDIV(is64, dst, dst, src), ctx);
@@ -413,7 +401,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 			break;
 		}
 		break;
-	}
 	case BPF_ALU | BPF_LSH | BPF_X:
 	case BPF_ALU64 | BPF_LSH | BPF_X:
 		emit(A64_LSLV(is64, dst, dst, src), ctx);
-- 
2.9.5

* [PATCH bpf-next 07/13] bpf, s390x: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (5 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 06/13] bpf, arm64: " Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-29 14:33   ` Michael Holzheu
  2018-01-26 22:33 ` [PATCH bpf-next 08/13] bpf, ppc64: " Daniel Borkmann
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann, Michael Holzheu

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the s390x JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
---
 arch/s390/net/bpf_jit_comp.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index e501887..78a19c9 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -610,11 +610,6 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
 	{
 		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
 
-		jit->seen |= SEEN_RET0;
-		/* ltr %src,%src (if src == 0 goto fail) */
-		EMIT2(0x1200, src_reg, src_reg);
-		/* jz <ret0> */
-		EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
 		/* lhi %w0,0 */
 		EMIT4_IMM(0xa7080000, REG_W0, 0);
 		/* lr %w1,%dst */
@@ -630,11 +625,6 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
 	{
 		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
 
-		jit->seen |= SEEN_RET0;
-		/* ltgr %src,%src (if src == 0 goto fail) */
-		EMIT4(0xb9020000, src_reg, src_reg);
-		/* jz <ret0> */
-		EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
 		/* lghi %w0,0 */
 		EMIT4_IMM(0xa7090000, REG_W0, 0);
 		/* lgr %w1,%dst */
-- 
2.9.5

* [PATCH bpf-next 08/13] bpf, ppc64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (6 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 07/13] bpf, s390x: " Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-28 18:52   ` Naveen N. Rao
  2018-01-26 22:33 ` [PATCH bpf-next 09/13] bpf, sparc64: " Daniel Borkmann
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann, Naveen N . Rao

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the ppc64 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
---
 arch/powerpc/net/bpf_jit_comp64.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 217a78e..0a34b0c 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -381,10 +381,6 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			goto bpf_alu32_trunc;
 		case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
 		case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
-			PPC_CMPWI(src_reg, 0);
-			PPC_BCC_SHORT(COND_NE, (ctx->idx * 4) + 12);
-			PPC_LI(b2p[BPF_REG_0], 0);
-			PPC_JMP(exit_addr);
 			if (BPF_OP(code) == BPF_MOD) {
 				PPC_DIVWU(b2p[TMP_REG_1], dst_reg, src_reg);
 				PPC_MULW(b2p[TMP_REG_1], src_reg,
@@ -395,10 +391,6 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 			goto bpf_alu32_trunc;
 		case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
 		case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
-			PPC_CMPDI(src_reg, 0);
-			PPC_BCC_SHORT(COND_NE, (ctx->idx * 4) + 12);
-			PPC_LI(b2p[BPF_REG_0], 0);
-			PPC_JMP(exit_addr);
 			if (BPF_OP(code) == BPF_MOD) {
 				PPC_DIVD(b2p[TMP_REG_1], dst_reg, src_reg);
 				PPC_MULD(b2p[TMP_REG_1], src_reg,
-- 
2.9.5

* [PATCH bpf-next 09/13] bpf, sparc64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (7 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 08/13] bpf, ppc64: " Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 10/13] bpf, mips64: " Daniel Borkmann
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann, David S . Miller

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the sparc64 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David S. Miller <davem@davemloft.net>
---
 arch/sparc/net/bpf_jit_comp_64.c | 18 ------------------
 1 file changed, 18 deletions(-)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 50a24d7..48a2586 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -967,31 +967,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_alu(MULX, src, dst, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_X:
-		emit_cmp(src, G0, ctx);
-		emit_branch(BE|ANNUL, ctx->idx, ctx->epilogue_offset, ctx);
-		emit_loadimm(0, bpf2sparc[BPF_REG_0], ctx);
-
 		emit_write_y(G0, ctx);
 		emit_alu(DIV, src, dst, ctx);
 		break;
-
 	case BPF_ALU64 | BPF_DIV | BPF_X:
-		emit_cmp(src, G0, ctx);
-		emit_branch(BE|ANNUL, ctx->idx, ctx->epilogue_offset, ctx);
-		emit_loadimm(0, bpf2sparc[BPF_REG_0], ctx);
-
 		emit_alu(UDIVX, src, dst, ctx);
 		break;
-
 	case BPF_ALU | BPF_MOD | BPF_X: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 
 		ctx->tmp_1_used = true;
 
-		emit_cmp(src, G0, ctx);
-		emit_branch(BE|ANNUL, ctx->idx, ctx->epilogue_offset, ctx);
-		emit_loadimm(0, bpf2sparc[BPF_REG_0], ctx);
-
 		emit_write_y(G0, ctx);
 		emit_alu3(DIV, dst, src, tmp, ctx);
 		emit_alu3(MULX, tmp, src, tmp, ctx);
@@ -1003,10 +989,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 
 		ctx->tmp_1_used = true;
 
-		emit_cmp(src, G0, ctx);
-		emit_branch(BE|ANNUL, ctx->idx, ctx->epilogue_offset, ctx);
-		emit_loadimm(0, bpf2sparc[BPF_REG_0], ctx);
-
 		emit_alu3(UDIVX, dst, src, tmp, ctx);
 		emit_alu3(MULX, tmp, src, tmp, ctx);
 		emit_alu3(SUB, dst, tmp, dst, ctx);
-- 
2.9.5

* [PATCH bpf-next 10/13] bpf, mips64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (8 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 09/13] bpf, sparc64: " Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:39   ` David Daney
  2018-01-26 22:33 ` [PATCH bpf-next 11/13] bpf, mips64: remove unneeded zero check from div/mod with k Daniel Borkmann
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann, David Daney

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the mips64 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Daney <david.daney@cavium.com>
---
 arch/mips/net/ebpf_jit.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 4e34703..296f1410 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -860,11 +860,6 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			break;
 		case BPF_DIV:
 		case BPF_MOD:
-			b_off = b_imm(exit_idx, ctx);
-			if (is_bad_offset(b_off))
-				return -E2BIG;
-			emit_instr(ctx, beq, src, MIPS_R_ZERO, b_off);
-			emit_instr(ctx, movz, MIPS_R_V0, MIPS_R_ZERO, src);
 			emit_instr(ctx, ddivu, dst, src);
 			if (bpf_op == BPF_DIV)
 				emit_instr(ctx, mflo, dst);
@@ -943,11 +938,6 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			break;
 		case BPF_DIV:
 		case BPF_MOD:
-			b_off = b_imm(exit_idx, ctx);
-			if (is_bad_offset(b_off))
-				return -E2BIG;
-			emit_instr(ctx, beq, src, MIPS_R_ZERO, b_off);
-			emit_instr(ctx, movz, MIPS_R_V0, MIPS_R_ZERO, src);
 			emit_instr(ctx, divu, dst, src);
 			if (bpf_op == BPF_DIV)
 				emit_instr(ctx, mflo, dst);
-- 
2.9.5

* [PATCH bpf-next 11/13] bpf, mips64: remove unneeded zero check from div/mod with k
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (9 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 10/13] bpf, mips64: " Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:36   ` David Daney
  2018-01-26 22:33 ` [PATCH bpf-next 12/13] bpf, arm: remove obsolete exception handling from div/mod Daniel Borkmann
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann, David Daney

The verifier in both cBPF and eBPF rejects a div/mod by 0 imm,
so such a program can never be loaded in the first place. Remove
the emission of this runtime test and instead reject the insn
from being JITed (the latter is actually also not needed, but
it's current practice in the sparc64 and ppc64 JITs today, so it
doesn't hurt to add it here either).
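
For eBPF, the corresponding load-time rejection sits in the
verifier's check_alu_op(), roughly:

  if ((opcode == BPF_MOD || opcode == BPF_DIV) &&
      BPF_SRC(insn->code) == BPF_K && insn->imm == 0) {
          verbose(env, "div by zero\n");
          return -EINVAL;
  }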

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Daney <david.daney@cavium.com>
---
 arch/mips/net/ebpf_jit.c | 19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
index 296f1410..3e2798b 100644
--- a/arch/mips/net/ebpf_jit.c
+++ b/arch/mips/net/ebpf_jit.c
@@ -741,16 +741,11 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		break;
 	case BPF_ALU | BPF_DIV | BPF_K: /* ALU_IMM */
 	case BPF_ALU | BPF_MOD | BPF_K: /* ALU_IMM */
+		if (insn->imm == 0)
+			return -EINVAL;
 		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
 		if (dst < 0)
 			return dst;
-		if (insn->imm == 0) { /* Div by zero */
-			b_off = b_imm(exit_idx, ctx);
-			if (is_bad_offset(b_off))
-				return -E2BIG;
-			emit_instr(ctx, beq, MIPS_R_ZERO, MIPS_R_ZERO, b_off);
-			emit_instr(ctx, addu, MIPS_R_V0, MIPS_R_ZERO, MIPS_R_ZERO);
-		}
 		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
 		if (td == REG_64BIT || td == REG_32BIT_ZERO_EX)
 			/* sign extend */
@@ -770,19 +765,13 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		break;
 	case BPF_ALU64 | BPF_DIV | BPF_K: /* ALU_IMM */
 	case BPF_ALU64 | BPF_MOD | BPF_K: /* ALU_IMM */
+		if (insn->imm == 0)
+			return -EINVAL;
 		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
 		if (dst < 0)
 			return dst;
-		if (insn->imm == 0) { /* Div by zero */
-			b_off = b_imm(exit_idx, ctx);
-			if (is_bad_offset(b_off))
-				return -E2BIG;
-			emit_instr(ctx, beq, MIPS_R_ZERO, MIPS_R_ZERO, b_off);
-			emit_instr(ctx, addu, MIPS_R_V0, MIPS_R_ZERO, MIPS_R_ZERO);
-		}
 		if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT)
 			emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32);
-
 		if (insn->imm == 1) {
 			/* div by 1 is a nop, mod by 1 is zero */
 			if (bpf_op == BPF_MOD)
-- 
2.9.5

* [PATCH bpf-next 12/13] bpf, arm: remove obsolete exception handling from div/mod
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (10 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 11/13] bpf, mips64: remove unneeded zero check from div/mod with k Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-26 22:33 ` [PATCH bpf-next 13/13] bpf: add further test cases around div/mod and others Daniel Borkmann
  2018-01-27  0:48 ` [PATCH bpf-next 00/13] BPF improvements and fixes Alexei Starovoitov
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann, Shubham Bansal

Since we've changed div/mod exception handling for src_reg in
the eBPF verifier itself, remove the leftovers from the arm32 JIT.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Shubham Bansal <illusionist.neo@gmail.com>
---
 arch/arm/net/bpf_jit_32.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 41e2feb..b5030e1 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -363,15 +363,7 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
 static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
 {
 	const u8 *tmp = bpf2a32[TMP_REG_1];
-	s32 jmp_offset;
 
-	/* checks if divisor is zero or not. If it is, then
-	 * exit directly.
-	 */
-	emit(ARM_CMP_I(rn, 0), ctx);
-	_emit(ARM_COND_EQ, ARM_MOV_I(ARM_R0, 0), ctx);
-	jmp_offset = epilogue_offset(ctx);
-	_emit(ARM_COND_EQ, ARM_B(jmp_offset), ctx);
 #if __LINUX_ARM_ARCH__ == 7
 	if (elf_hwcap & HWCAP_IDIVA) {
 		if (op == BPF_DIV)
-- 
2.9.5

* [PATCH bpf-next 13/13] bpf: add further test cases around div/mod and others
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (11 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 12/13] bpf, arm: remove obsolete exception handling from div/mod Daniel Borkmann
@ 2018-01-26 22:33 ` Daniel Borkmann
  2018-01-27  0:48 ` [PATCH bpf-next 00/13] BPF improvements and fixes Alexei Starovoitov
  13 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-26 22:33 UTC (permalink / raw)
  To: ast; +Cc: netdev, Daniel Borkmann

Update selftests to reflect recent changes and add various new
test cases.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 lib/test_bpf.c                              |   8 +-
 tools/testing/selftests/bpf/test_verifier.c | 343 ++++++++++++++++++++++++++--
 2 files changed, 336 insertions(+), 15 deletions(-)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index e3938e3..4cd9ea9 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -2003,10 +2003,14 @@ static struct bpf_test tests[] = {
 		{ { 4, 0 }, { 5, 10 } }
 	},
 	{
-		"INT: DIV by zero",
+		/* This one doesn't go through verifier, but is just raw insn
+		 * as opposed to cBPF tests from here. Thus div by 0 tests are
+		 * done in test_verifier in BPF kselftests.
+		 */
+		"INT: DIV by -1",
 		.u.insns_int = {
 			BPF_ALU64_REG(BPF_MOV, R6, R1),
-			BPF_ALU64_IMM(BPF_MOV, R7, 0),
+			BPF_ALU64_IMM(BPF_MOV, R7, -1),
 			BPF_LD_ABS(BPF_B, 3),
 			BPF_ALU32_REG(BPF_DIV, R0, R7),
 			BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 9e7075b..697bd83 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -21,6 +21,7 @@
 #include <stddef.h>
 #include <stdbool.h>
 #include <sched.h>
+#include <limits.h>
 
 #include <sys/capability.h>
 #include <sys/resource.h>
@@ -111,7 +112,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
-		.retval = 0,
+		.retval = 42,
 	},
 	{
 		"DIV32 by 0, zero check 2",
@@ -123,7 +124,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
-		.retval = 0,
+		.retval = 42,
 	},
 	{
 		"DIV64 by 0, zero check",
@@ -135,7 +136,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
-		.retval = 0,
+		.retval = 42,
 	},
 	{
 		"MOD32 by 0, zero check 1",
@@ -147,7 +148,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
-		.retval = 0,
+		.retval = 42,
 	},
 	{
 		"MOD32 by 0, zero check 2",
@@ -159,7 +160,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
-		.retval = 0,
+		.retval = 42,
 	},
 	{
 		"MOD64 by 0, zero check",
@@ -171,13 +172,245 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = ACCEPT,
+		.retval = 42,
+	},
+	{
+		"DIV32 by 0, zero check ok, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_0, 42),
+			BPF_MOV32_IMM(BPF_REG_1, 2),
+			BPF_MOV32_IMM(BPF_REG_2, 16),
+			BPF_ALU32_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 8,
+	},
+	{
+		"DIV32 by 0, zero check 1, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, 0),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 0,
+	},
+	{
+		"DIV32 by 0, zero check 2, cls",
+		.insns = {
+			BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 0,
+	},
+	{
+		"DIV64 by 0, zero check, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, 0),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 0,
+	},
+	{
+		"MOD32 by 0, zero check ok, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_0, 42),
+			BPF_MOV32_IMM(BPF_REG_1, 3),
+			BPF_MOV32_IMM(BPF_REG_2, 5),
+			BPF_ALU32_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 2,
+	},
+	{
+		"MOD32 by 0, zero check 1, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, 0),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
+		"MOD32 by 0, zero check 2, cls",
+		.insns = {
+			BPF_LD_IMM64(BPF_REG_1, 0xffffffff00000000LL),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
+		"MOD64 by 0, zero check 1, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, 0),
+			BPF_MOV32_IMM(BPF_REG_0, 2),
+			BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 2,
+	},
+	{
+		"MOD64 by 0, zero check 2, cls",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, 0),
+			BPF_MOV32_IMM(BPF_REG_0, -1),
+			BPF_ALU64_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = -1,
+	},
+	/* Just make sure that JITs used udiv/umod as otherwise we get
+	 * an exception from INT_MIN/-1 overflow similarly as with div
+	 * by zero.
+	 */
+	{
+		"DIV32 overflow, check 1",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, -1),
+			BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
+			BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 0,
+	},
+	{
+		"DIV32 overflow, check 2",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
+			BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 0,
+	},
+	{
+		"DIV64 overflow, check 1",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_1, -1),
+			BPF_LD_IMM64(BPF_REG_0, LLONG_MIN),
+			BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 0,
+	},
+	{
+		"DIV64 overflow, check 2",
+		.insns = {
+			BPF_LD_IMM64(BPF_REG_0, LLONG_MIN),
+			BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
 		.retval = 0,
 	},
 	{
+		"MOD32 overflow, check 1",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_1, -1),
+			BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
+			BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = INT_MIN,
+	},
+	{
+		"MOD32 overflow, check 2",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_0, INT_MIN),
+			BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, -1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = INT_MIN,
+	},
+	{
+		"MOD64 overflow, check 1",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_1, -1),
+			BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
+			BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
+			BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
+			BPF_MOV32_IMM(BPF_REG_0, 0),
+			BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
+		"MOD64 overflow, check 2",
+		.insns = {
+			BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
+			BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
+			BPF_ALU64_IMM(BPF_MOD, BPF_REG_2, -1),
+			BPF_MOV32_IMM(BPF_REG_0, 0),
+			BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
+		"xor32 zero extend check",
+		.insns = {
+			BPF_MOV32_IMM(BPF_REG_2, -1),
+			BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
+			BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 0xffff),
+			BPF_ALU32_REG(BPF_XOR, BPF_REG_2, BPF_REG_2),
+			BPF_MOV32_IMM(BPF_REG_0, 2),
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 1),
+			BPF_MOV32_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
 		"empty prog",
 		.insns = {
 		},
-		.errstr = "last insn is not an exit or jmp",
+		.errstr = "unknown opcode 00",
 		.result = REJECT,
 	},
 	{
@@ -374,7 +607,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = REJECT,
-		.errstr = "BPF_ARSH not supported for 32 bit ALU",
+		.errstr = "unknown opcode c4",
 	},
 	{
 		"arsh32 on reg",
@@ -385,7 +618,7 @@ static struct bpf_test tests[] = {
 			BPF_EXIT_INSN(),
 		},
 		.result = REJECT,
-		.errstr = "BPF_ARSH not supported for 32 bit ALU",
+		.errstr = "unknown opcode cc",
 	},
 	{
 		"arsh64 on imm",
@@ -501,7 +734,7 @@ static struct bpf_test tests[] = {
 			BPF_RAW_INSN(BPF_JMP | BPF_CALL | BPF_X, 0, 0, 0, 0),
 			BPF_EXIT_INSN(),
 		},
-		.errstr = "BPF_CALL uses reserved",
+		.errstr = "unknown opcode 8d",
 		.result = REJECT,
 	},
 	{
@@ -691,7 +924,7 @@ static struct bpf_test tests[] = {
 			BPF_RAW_INSN(0, 0, 0, 0, 0),
 			BPF_EXIT_INSN(),
 		},
-		.errstr = "invalid BPF_LD_IMM",
+		.errstr = "unknown opcode 00",
 		.result = REJECT,
 	},
 	{
@@ -709,7 +942,7 @@ static struct bpf_test tests[] = {
 			BPF_RAW_INSN(-1, 0, 0, 0, 0),
 			BPF_EXIT_INSN(),
 		},
-		.errstr = "invalid BPF_ALU opcode f0",
+		.errstr = "unknown opcode ff",
 		.result = REJECT,
 	},
 	{
@@ -718,7 +951,7 @@ static struct bpf_test tests[] = {
 			BPF_RAW_INSN(-1, -1, -1, -1, -1),
 			BPF_EXIT_INSN(),
 		},
-		.errstr = "invalid BPF_ALU opcode f0",
+		.errstr = "unknown opcode ff",
 		.result = REJECT,
 	},
 	{
@@ -7543,7 +7776,7 @@ static struct bpf_test tests[] = {
 			},
 			BPF_EXIT_INSN(),
 		},
-		.errstr = "BPF_END uses reserved fields",
+		.errstr = "unknown opcode d7",
 		.result = REJECT,
 	},
 	{
@@ -8964,6 +9197,90 @@ static struct bpf_test tests[] = {
 		.retval = 1,
 	},
 	{
+		"calls: div by 0 in subprog",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+			BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+			BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
+				    offsetof(struct __sk_buff, data_end)),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+			BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			BPF_MOV32_IMM(BPF_REG_2, 0),
+			BPF_MOV32_IMM(BPF_REG_3, 1),
+			BPF_ALU32_REG(BPF_DIV, BPF_REG_3, BPF_REG_2),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, data)),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
+		"calls: multiple ret types in subprog 1",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+			BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+			BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
+				    offsetof(struct __sk_buff, data_end)),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+			BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, data)),
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+			BPF_MOV32_IMM(BPF_REG_0, 42),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.result = REJECT,
+		.errstr = "R0 invalid mem access 'inv'",
+	},
+	{
+		"calls: multiple ret types in subprog 2",
+		.insns = {
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+			BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+			BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
+				    offsetof(struct __sk_buff, data_end)),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+			BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, data)),
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 9),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_LD_MAP_FD(BPF_REG_1, 0),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6,
+				    offsetof(struct __sk_buff, data)),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 64),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.fixup_map1 = { 16 },
+		.result = REJECT,
+		.errstr = "R0 min value is outside of the array range",
+	},
+	{
 		"calls: overlapping caller/callee",
 		.insns = {
 			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 0),
-- 
2.9.5

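As background on the overflow tests in this patch: eBPF mandates unsigned
division, since the signed forms INT_MIN / -1 and INT_MIN % -1 overflow
and trap on x86 (SIGFPE), just like division by zero. A small standalone
C sketch of the expected results, illustrative only and not part of the
patch itself:

#include <limits.h>
#include <stdio.h>

/* eBPF div/mod is unsigned: 0x80000000 / 0xffffffff == 0 and
 * 0x80000000 % 0xffffffff == 0x80000000, matching the retvals
 * that the DIV32/MOD32 overflow tests above expect.
 */
int main(void)
{
	unsigned int dst = (unsigned int)INT_MIN;	/* 0x80000000 */
	unsigned int src = (unsigned int)-1;		/* 0xffffffff */

	printf("udiv: %u\n", dst / src);	/* prints 0 */
	printf("umod: %d\n", (int)(dst % src));	/* prints -2147483648 */
	return 0;
}
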
^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 11/13] bpf, mips64: remove unneeded zero check from div/mod with k
  2018-01-26 22:33 ` [PATCH bpf-next 11/13] bpf, mips64: remove unneeded zero check from div/mod with k Daniel Borkmann
@ 2018-01-26 22:36   ` David Daney
  0 siblings, 0 replies; 22+ messages in thread
From: David Daney @ 2018-01-26 22:36 UTC (permalink / raw)
  To: Daniel Borkmann, ast; +Cc: netdev, David Daney

On 01/26/2018 02:33 PM, Daniel Borkmann wrote:
> The verifiers in both cBPF and eBPF reject div/mod by 0 imm,
> so such a program can never be loaded. Stop emitting the zero
> test and instead reject the insn from being JITed (the latter
> is not strictly needed either, but given it is practice in
> sparc64 and ppc64 today, it doesn't hurt to add it here too).
> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Cc: David Daney <david.daney@cavium.com>

This looks plausible,

Reviewed-by: David Daney <david.daney@cavium.com>


> ---
>   arch/mips/net/ebpf_jit.c | 19 ++++---------------
>   1 file changed, 4 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
> index 296f1410..3e2798b 100644
> --- a/arch/mips/net/ebpf_jit.c
> +++ b/arch/mips/net/ebpf_jit.c
> @@ -741,16 +741,11 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
>   		break;
>   	case BPF_ALU | BPF_DIV | BPF_K: /* ALU_IMM */
>   	case BPF_ALU | BPF_MOD | BPF_K: /* ALU_IMM */
> +		if (insn->imm == 0)
> +			return -EINVAL;
>   		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
>   		if (dst < 0)
>   			return dst;
> -		if (insn->imm == 0) { /* Div by zero */
> -			b_off = b_imm(exit_idx, ctx);
> -			if (is_bad_offset(b_off))
> -				return -E2BIG;
> -			emit_instr(ctx, beq, MIPS_R_ZERO, MIPS_R_ZERO, b_off);
> -			emit_instr(ctx, addu, MIPS_R_V0, MIPS_R_ZERO, MIPS_R_ZERO);
> -		}
>   		td = get_reg_val_type(ctx, this_idx, insn->dst_reg);
>   		if (td == REG_64BIT || td == REG_32BIT_ZERO_EX)
>   			/* sign extend */
> @@ -770,19 +765,13 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
>   		break;
>   	case BPF_ALU64 | BPF_DIV | BPF_K: /* ALU_IMM */
>   	case BPF_ALU64 | BPF_MOD | BPF_K: /* ALU_IMM */
> +		if (insn->imm == 0)
> +			return -EINVAL;
>   		dst = ebpf_to_mips_reg(ctx, insn, dst_reg);
>   		if (dst < 0)
>   			return dst;
> -		if (insn->imm == 0) { /* Div by zero */
> -			b_off = b_imm(exit_idx, ctx);
> -			if (is_bad_offset(b_off))
> -				return -E2BIG;
> -			emit_instr(ctx, beq, MIPS_R_ZERO, MIPS_R_ZERO, b_off);
> -			emit_instr(ctx, addu, MIPS_R_V0, MIPS_R_ZERO, MIPS_R_ZERO);
> -		}
>   		if (get_reg_val_type(ctx, this_idx, insn->dst_reg) == REG_32BIT)
>   			emit_instr(ctx, dinsu, dst, MIPS_R_ZERO, 32, 32);
> -
>   		if (insn->imm == 1) {
>   			/* div by 1 is a nop, mod by 1 is zero */
>   			if (bpf_op == BPF_MOD)
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 10/13] bpf, mips64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 ` [PATCH bpf-next 10/13] bpf, mips64: " Daniel Borkmann
@ 2018-01-26 22:39   ` David Daney
  0 siblings, 0 replies; 22+ messages in thread
From: David Daney @ 2018-01-26 22:39 UTC (permalink / raw)
  To: Daniel Borkmann, ast; +Cc: netdev, David Daney

On 01/26/2018 02:33 PM, Daniel Borkmann wrote:
> Since we've changed div/mod exception handling for src_reg in
> eBPF verifier itself, remove the leftovers from mips64 JIT.
> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Cc: David Daney <david.daney@cavium.com>

I didn't test it, but this looks correct, so ...

Reviewed-by: David Daney <david.daney@cavium.com>

> ---
>   arch/mips/net/ebpf_jit.c | 10 ----------
>   1 file changed, 10 deletions(-)
> 
> diff --git a/arch/mips/net/ebpf_jit.c b/arch/mips/net/ebpf_jit.c
> index 4e34703..296f1410 100644
> --- a/arch/mips/net/ebpf_jit.c
> +++ b/arch/mips/net/ebpf_jit.c
> @@ -860,11 +860,6 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
>   			break;
>   		case BPF_DIV:
>   		case BPF_MOD:
> -			b_off = b_imm(exit_idx, ctx);
> -			if (is_bad_offset(b_off))
> -				return -E2BIG;
> -			emit_instr(ctx, beq, src, MIPS_R_ZERO, b_off);
> -			emit_instr(ctx, movz, MIPS_R_V0, MIPS_R_ZERO, src);
>   			emit_instr(ctx, ddivu, dst, src);
>   			if (bpf_op == BPF_DIV)
>   				emit_instr(ctx, mflo, dst);
> @@ -943,11 +938,6 @@ static int build_one_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
>   			break;
>   		case BPF_DIV:
>   		case BPF_MOD:
> -			b_off = b_imm(exit_idx, ctx);
> -			if (is_bad_offset(b_off))
> -				return -E2BIG;
> -			emit_instr(ctx, beq, src, MIPS_R_ZERO, b_off);
> -			emit_instr(ctx, movz, MIPS_R_V0, MIPS_R_ZERO, src);
>   			emit_instr(ctx, divu, dst, src);
>   			if (bpf_op == BPF_DIV)
>   				emit_instr(ctx, mflo, dst);
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 00/13] BPF improvements and fixes
  2018-01-26 22:33 [PATCH bpf-next 00/13] BPF improvements and fixes Daniel Borkmann
                   ` (12 preceding siblings ...)
  2018-01-26 22:33 ` [PATCH bpf-next 13/13] bpf: add further test cases around div/mod and others Daniel Borkmann
@ 2018-01-27  0:48 ` Alexei Starovoitov
  13 siblings, 0 replies; 22+ messages in thread
From: Alexei Starovoitov @ 2018-01-27  0:48 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: ast, netdev

On Fri, Jan 26, 2018 at 11:33:35PM +0100, Daniel Borkmann wrote:
> This set contains a small cleanup in cBPF prologue generation and
> otherwise fixes an outstanding issue related to BPF to BPF calls
> and exception handling. For details please see related patches.
> Last but not least, BPF selftests is extended with several new
> test cases.

Applied to bpf-next. Thank you, Daniel!

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 08/13] bpf, ppc64: remove obsolete exception handling from div/mod
  2018-01-26 22:33 ` [PATCH bpf-next 08/13] bpf, ppc64: " Daniel Borkmann
@ 2018-01-28 18:52   ` Naveen N. Rao
  0 siblings, 0 replies; 22+ messages in thread
From: Naveen N. Rao @ 2018-01-28 18:52 UTC (permalink / raw)
  To: ast, Daniel Borkmann; +Cc: netdev

Daniel Borkmann wrote:
> Since we've changed div/mod exception handling for src_reg in
> eBPF verifier itself, remove the leftovers from ppc64 JIT.
> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> ---
>  arch/powerpc/net/bpf_jit_comp64.c | 8 --------
>  1 file changed, 8 deletions(-)

Probably too late, but nonetheless:
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

Thanks,
Naveen

> 
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 217a78e..0a34b0c 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -381,10 +381,6 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
>  			goto bpf_alu32_trunc;
>  		case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
>  		case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
> -			PPC_CMPWI(src_reg, 0);
> -			PPC_BCC_SHORT(COND_NE, (ctx->idx * 4) + 12);
> -			PPC_LI(b2p[BPF_REG_0], 0);
> -			PPC_JMP(exit_addr);
>  			if (BPF_OP(code) == BPF_MOD) {
>  				PPC_DIVWU(b2p[TMP_REG_1], dst_reg, src_reg);
>  				PPC_MULW(b2p[TMP_REG_1], src_reg,
> @@ -395,10 +391,6 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
>  			goto bpf_alu32_trunc;
>  		case BPF_ALU64 | BPF_DIV | BPF_X: /* dst /= src */
>  		case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
> -			PPC_CMPDI(src_reg, 0);
> -			PPC_BCC_SHORT(COND_NE, (ctx->idx * 4) + 12);
> -			PPC_LI(b2p[BPF_REG_0], 0);
> -			PPC_JMP(exit_addr);
>  			if (BPF_OP(code) == BPF_MOD) {
>  				PPC_DIVD(b2p[TMP_REG_1], dst_reg, src_reg);
>  				PPC_MULD(b2p[TMP_REG_1], src_reg,
> -- 
> 2.9.5
> 
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu
  2018-01-26 22:33 ` [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu Daniel Borkmann
@ 2018-01-28 18:58   ` Naveen N. Rao
  2018-01-28 20:51     ` Daniel Borkmann
  0 siblings, 1 reply; 22+ messages in thread
From: Naveen N. Rao @ 2018-01-28 18:58 UTC (permalink / raw)
  To: ast, Daniel Borkmann; +Cc: netdev, Sandipan Das

Daniel Borkmann wrote:
> Very minor optimization; saves 1 byte per program in x86_64
> JIT in cBPF prologue.

... but it increases program size by 4 bytes on ppc64 :(
In general, this is an area I've been wanting to spend some time on.
Powerpc doesn't have 32-bit sub-registers, so we need to emit an
additional instruction to clear the upper 32 bits after every 32-bit
operation. I need to look at the performance impact.
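
For reference, eBPF requires every 32-bit ALU result to be zero-extended
to 64 bit. x86_64 gets this for free, since any 32-bit instruction
implicitly clears the upper half (dropping the REX.W prefix is where the
byte saving above comes from), whereas ppc64 must clear the upper word
explicitly. A plain-C model of these semantics, as a sketch rather than
the JIT code itself:

#include <stdint.h>
#include <stdio.h>

/* Models eBPF 32-bit ALU semantics: operate on the low word, then
 * zero-extend into the full 64-bit register. On ppc64 this zero
 * extension costs one extra emitted instruction per 32-bit op.
 */
static uint64_t alu32_xor(uint64_t dst, uint64_t src)
{
	return (uint64_t)((uint32_t)dst ^ (uint32_t)src);
}

int main(void)
{
	/* The cBPF prologue case: xor of a register with itself is
	 * always 0, so a 32-bit xor suffices to clear all 64 bits.
	 */
	printf("%llu\n", (unsigned long long)alu32_xor(~0ULL, ~0ULL));
	return 0;
}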

- Naveen

> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Acked-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  net/core/filter.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 18da42a..cba2f73 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -401,8 +401,8 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
>  		/* Classic BPF expects A and X to be reset first. These need
>  		 * to be guaranteed to be the first two instructions.
>  		 */
> -		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
> -		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
> +		*new_insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
> +		*new_insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
>  
>  		/* All programs must keep CTX in callee saved BPF_REG_CTX.
>  		 * In eBPF case it's done by the compiler, here we need to
> -- 
> 2.9.5
> 
> 
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu
  2018-01-28 18:58   ` [PATCH bpf-next 01/13] bpf: xor of a/x in cbpf can be done in 32 bit alu Naveen N. Rao
@ 2018-01-28 20:51     ` Daniel Borkmann
  0 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-28 20:51 UTC (permalink / raw)
  To: Naveen N. Rao, ast; +Cc: netdev, Sandipan Das

On 01/28/2018 07:58 PM, Naveen N. Rao wrote:
> Daniel Borkmann wrote:
>> Very minor optimization; saves 1 byte per program in x86_64
>> JIT in cBPF prologue.
> 
> ... but it increases program size by 4 bytes on ppc64 :(
> In general, this is an area I've been wanting to spend some time on.  Powerpc doesn't have 32-bit sub-registers, so we need to emit an additional instruction to clear the upper 32 bits after every 32-bit operation. I need to look at the performance impact.

Right, I think one way to optimize this could be at the JIT level for
the case where the CPU doesn't have subregs. There is the
bpf_prog_was_classic() helper that can be used there to skip some of
the bpf_alu32_trunc goto cases, e.g. for some of the bit ops, since we
know that the upper part must be zero in cBPF anyway. This should
definitely be low-hanging fruit given we use alu32 in the cBPF to eBPF
conversion in a lot of places.
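
Roughly what such gating could look like, sketched as a plain-C decision
helper rather than actual JIT emission code; bpf_prog_was_classic() is
the existing helper in include/linux/filter.h, while the per-opcode
predicate below is hypothetical and every opcode would need auditing
before skipping:

#include <stdbool.h>

/* Hypothetical sketch: emit the explicit upper-32-bit clear only
 * when it cannot be proven redundant. For cBPF-derived programs,
 * bit ops such as XOR/OR/AND preserve a zero upper half when both
 * operands already have one.
 */
bool need_alu32_trunc(bool prog_was_classic, bool op_keeps_zero_upper)
{
	if (prog_was_classic && op_keeps_zero_upper)
		return false;	/* upper half already known zero */
	return true;
}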

Thanks,
Daniel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 07/13] bpf, s390x: remove obsolete exception handling from div/mod
  2018-01-26 22:33 ` [PATCH bpf-next 07/13] bpf, s390x: " Daniel Borkmann
@ 2018-01-29 14:33   ` Michael Holzheu
  2018-01-29 15:52     ` Daniel Borkmann
  0 siblings, 1 reply; 22+ messages in thread
From: Michael Holzheu @ 2018-01-29 14:33 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: ast, netdev

On Fri, 26 Jan 2018 23:33:42 +0100,
Daniel Borkmann <daniel@iogearbox.net> wrote:

> Since we've changed div/mod exception handling for src_reg in
> eBPF verifier itself, 

Maybe add the commit that introduced that to the patch description?

> remove the leftovers from s390x JIT.
> 
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> ---
>  arch/s390/net/bpf_jit_comp.c | 10 ----------
>  1 file changed, 10 deletions(-)
> 
> diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
> index e501887..78a19c9 100644
> --- a/arch/s390/net/bpf_jit_comp.c
> +++ b/arch/s390/net/bpf_jit_comp.c
> @@ -610,11 +610,6 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
>  	{
>  		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
> 
> -		jit->seen |= SEEN_RET0;
> -		/* ltr %src,%src (if src == 0 goto fail) */
> -		EMIT2(0x1200, src_reg, src_reg);
> -		/* jz <ret0> */
> -		EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
>  		/* lhi %w0,0 */
>  		EMIT4_IMM(0xa7080000, REG_W0, 0);
>  		/* lr %w1,%dst */
> @@ -630,11 +625,6 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
>  	{
>  		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
> 
> -		jit->seen |= SEEN_RET0;
> -		/* ltgr %src,%src (if src == 0 goto fail) */
> -		EMIT4(0xb9020000, src_reg, src_reg);
> -		/* jz <ret0> */
> -		EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
>  		/* lghi %w0,0 */
>  		EMIT4_IMM(0xa7090000, REG_W0, 0);
>  		/* lgr %w1,%dst */

If the check is done in the verifier now, this looks good to me.

Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
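
For context, the runtime semantics that the verifier-side rewrite
guarantees, and which the selftests in patch 13/13 exercise, can be
modeled like this; the helper names are illustrative, not kernel API:

#include <stdint.h>

/* div by zero yields 0, mod by zero leaves dst unchanged -- the
 * behavior checked by the "DIV64 by 0" and "MOD64 by 0" tests.
 */
uint64_t ebpf_div64(uint64_t dst, uint64_t src)
{
	return src ? dst / src : 0;
}

uint64_t ebpf_mod64(uint64_t dst, uint64_t src)
{
	return src ? dst % src : dst;
}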

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH bpf-next 07/13] bpf, s390x: remove obsolete exception handling from div/mod
  2018-01-29 14:33   ` Michael Holzheu
@ 2018-01-29 15:52     ` Daniel Borkmann
  0 siblings, 0 replies; 22+ messages in thread
From: Daniel Borkmann @ 2018-01-29 15:52 UTC (permalink / raw)
  To: Michael Holzheu; +Cc: ast, netdev

On 01/29/2018 03:33 PM, Michael Holzheu wrote:
> On Fri, 26 Jan 2018 23:33:42 +0100,
> Daniel Borkmann <daniel@iogearbox.net> wrote:
> 
>> Since we've changed div/mod exception handling for src_reg in
>> eBPF verifier itself, 
> 
> Maybe add the commit that introduced that to the patch description?

I couldn't add it here since it was all part of the same series and
thus there was no stable commit id for future reference yet. Maybe
I should have just put the commit subject in this case; will do next
time.

>> remove the leftovers from s390x JIT.
>>
>> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
>> Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
>> ---
>>  arch/s390/net/bpf_jit_comp.c | 10 ----------
>>  1 file changed, 10 deletions(-)
>>
>> diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
>> index e501887..78a19c9 100644
>> --- a/arch/s390/net/bpf_jit_comp.c
>> +++ b/arch/s390/net/bpf_jit_comp.c
>> @@ -610,11 +610,6 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
>>  	{
>>  		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
>>
>> -		jit->seen |= SEEN_RET0;
>> -		/* ltr %src,%src (if src == 0 goto fail) */
>> -		EMIT2(0x1200, src_reg, src_reg);
>> -		/* jz <ret0> */
>> -		EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
>>  		/* lhi %w0,0 */
>>  		EMIT4_IMM(0xa7080000, REG_W0, 0);
>>  		/* lr %w1,%dst */
>> @@ -630,11 +625,6 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
>>  	{
>>  		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
>>
>> -		jit->seen |= SEEN_RET0;
>> -		/* ltgr %src,%src (if src == 0 goto fail) */
>> -		EMIT4(0xb9020000, src_reg, src_reg);
>> -		/* jz <ret0> */
>> -		EMIT4_PCREL(0xa7840000, jit->ret0_ip - jit->prg);
>>  		/* lghi %w0,0 */
>>  		EMIT4_IMM(0xa7090000, REG_W0, 0);
>>  		/* lgr %w1,%dst */
> 
> If the check is done in the verifier now, this looks good to me.
> 
> Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>

Thanks for the review, Michael!

^ permalink raw reply	[flat|nested] 22+ messages in thread
