netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/5] Misc BPF updates
@ 2015-12-17 22:51 Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 1/5] bpf: add bpf_skb_load_bytes helper Daniel Borkmann
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Daniel Borkmann @ 2015-12-17 22:51 UTC (permalink / raw)
  To: davem; +Cc: ast, holzheu, zlim.lnx, netdev, Daniel Borkmann

This series contains a couple of misc updates to the BPF code, among
others a new helper bpf_skb_load_bytes(), moving the clearing of A/X into
the classic BPF converter, etc. Please see the individual patches for
details.

Thanks!

Daniel Borkmann (5):
  bpf: add bpf_skb_load_bytes helper
  bpf: move clearing of A/X into classic to eBPF migration prologue
  bpf: fix misleading comment in bpf_convert_filter
  bpf, x86: detect/optimize loading 0 immediates
  bpf, test: add couple of test cases

 arch/arm64/net/bpf_jit_comp.c |   6 ---
 arch/s390/net/bpf_jit_comp.c  |  13 +----
 arch/x86/net/bpf_jit_comp.c   |  40 ++++++++++++--
 include/uapi/linux/bpf.h      |   1 +
 kernel/bpf/core.c             |   4 --
 lib/test_bpf.c                | 120 ++++++++++++++++++++++++++++++++++++++++++
 net/core/filter.c             |  60 +++++++++++++++++----
 7 files changed, 208 insertions(+), 36 deletions(-)

-- 
1.9.3

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH net-next 1/5] bpf: add bpf_skb_load_bytes helper
  2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
@ 2015-12-17 22:51 ` Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue Daniel Borkmann
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Daniel Borkmann @ 2015-12-17 22:51 UTC (permalink / raw)
  To: davem; +Cc: ast, holzheu, zlim.lnx, netdev, Daniel Borkmann

When hacking tc programs with eBPF, an issue that comes up from time to
time is the need to load addresses out of packet headers. In eBPF, as in
classic BPF, we have BPF_LD | BPF_ABS | BPF_{B,H,W} instructions that
extract a byte, half-word or word out of the skb data through helpers
such as bpf_load_pointer() (interpreter case).

F.e. extracting a whole IPv6 address could possibly look like ...

  union v6addr {
    struct {
      __u32 p1;
      __u32 p2;
      __u32 p3;
      __u32 p4;
    };
    __u8 addr[16];
  };

  [...]

  a.p1 = htonl(load_word(skb, off));
  a.p2 = htonl(load_word(skb, off +  4));
  a.p3 = htonl(load_word(skb, off +  8));
  a.p4 = htonl(load_word(skb, off + 12));

  [...]

  /* access to a.addr[...] */

This work adds a complementary helper, bpf_skb_load_bytes() (we already
have bpf_skb_store_bytes()), as an alternative; from an eBPF program,
the same extraction would then look like:

  ret = bpf_skb_load_bytes(skb, off, addr, sizeof(addr));

The same verifier restrictions apply as in the ffeedafbf023 ("bpf:
introduce current->pid, tgid, uid, gid, comm accessors") case: stack
memory access needs to be statically verified and is thus guaranteed to
be initialized on first use (otherwise the verifier cannot tell whether
a subsequent access to it is valid, as that would be runtime dependent).
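For illustration only, the helper's bounds-check contract can be sketched
as a plain userspace C model (this is not the kernel implementation: the
real helper uses skb_header_pointer() and therefore also works on
non-linear skb data, while this model assumes a flat buffer):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

#define BPF_LDST_LEN	16U	/* max stack buffer size the helper accepts */

/* Userspace sketch of bpf_skb_load_bytes() semantics on a linear buffer:
 * reject offsets above 16 bits and lengths above BPF_LDST_LEN, then copy
 * the requested header bytes into the caller's (stack) buffer.
 */
static int model_skb_load_bytes(const unsigned char *data, size_t data_len,
				int offset, void *to, unsigned int len)
{
	if ((unsigned int) offset > 0xffff || len > BPF_LDST_LEN)
		return -EFAULT;
	if ((size_t) offset + len > data_len)	/* header not present */
		return -EFAULT;
	memcpy(to, data + offset, len);
	return 0;
}
```

The success path copies into the destination; both failure paths return
-EFAULT, mirroring the checks done before and after skb_header_pointer()
in the actual helper.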

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 include/uapi/linux/bpf.h |  1 +
 net/core/filter.c        | 35 ++++++++++++++++++++++++++++++++++-
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 9ea2d22..8bed7f1 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -269,6 +269,7 @@ enum bpf_func_id {
 	 * Return: 0 on success
 	 */
 	BPF_FUNC_perf_event_output,
+	BPF_FUNC_skb_load_bytes,
 	__BPF_FUNC_MAX_ID,
 };
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 672eefb..34bf6fc 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1245,6 +1245,7 @@ int sk_attach_bpf(u32 ufd, struct sock *sk)
 }
 
 #define BPF_RECOMPUTE_CSUM(flags)	((flags) & 1)
+#define BPF_LDST_LEN			16U
 
 static u64 bpf_skb_store_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 flags)
 {
@@ -1252,7 +1253,7 @@ static u64 bpf_skb_store_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 flags)
 	int offset = (int) r2;
 	void *from = (void *) (long) r3;
 	unsigned int len = (unsigned int) r4;
-	char buf[16];
+	char buf[BPF_LDST_LEN];
 	void *ptr;
 
 	/* bpf verifier guarantees that:
@@ -1299,6 +1300,36 @@ const struct bpf_func_proto bpf_skb_store_bytes_proto = {
 	.arg5_type	= ARG_ANYTHING,
 };
 
+static u64 bpf_skb_load_bytes(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5)
+{
+	const struct sk_buff *skb = (const struct sk_buff *)(unsigned long) r1;
+	int offset = (int) r2;
+	void *to = (void *)(unsigned long) r3;
+	unsigned int len = (unsigned int) r4;
+	void *ptr;
+
+	if (unlikely((u32) offset > 0xffff || len > BPF_LDST_LEN))
+		return -EFAULT;
+
+	ptr = skb_header_pointer(skb, offset, len, to);
+	if (unlikely(!ptr))
+		return -EFAULT;
+	if (ptr != to)
+		memcpy(to, ptr, len);
+
+	return 0;
+}
+
+const struct bpf_func_proto bpf_skb_load_bytes_proto = {
+	.func		= bpf_skb_load_bytes,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_PTR_TO_STACK,
+	.arg4_type	= ARG_CONST_STACK_SIZE,
+};
+
 #define BPF_HEADER_FIELD_SIZE(flags)	((flags) & 0x0f)
 #define BPF_IS_PSEUDO_HEADER(flags)	((flags) & 0x10)
 
@@ -1654,6 +1685,8 @@ tc_cls_act_func_proto(enum bpf_func_id func_id)
 	switch (func_id) {
 	case BPF_FUNC_skb_store_bytes:
 		return &bpf_skb_store_bytes_proto;
+	case BPF_FUNC_skb_load_bytes:
+		return &bpf_skb_load_bytes_proto;
 	case BPF_FUNC_l3_csum_replace:
 		return &bpf_l3_csum_replace_proto;
 	case BPF_FUNC_l4_csum_replace:
-- 
1.9.3


* [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue
  2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 1/5] bpf: add bpf_skb_load_bytes helper Daniel Borkmann
@ 2015-12-17 22:51 ` Daniel Borkmann
  2015-12-17 23:52   ` Shi, Yang
  2015-12-18  0:00   ` Z Lim
  2015-12-17 22:51 ` [PATCH net-next 3/5] bpf: fix misleading comment in bpf_convert_filter Daniel Borkmann
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 11+ messages in thread
From: Daniel Borkmann @ 2015-12-17 22:51 UTC (permalink / raw)
  To: davem; +Cc: ast, holzheu, zlim.lnx, netdev, Daniel Borkmann, Yang Shi

Back in the days where eBPF (or back then "internal BPF" ;->) was not
exposed to user space, and only the classic BPF programs internally
translated into eBPF programs, we missed the fact that for classic BPF
A and X needed to be cleared. It was fixed back then via 83d5b7ef99c9
("net: filter: initialize A and X registers"), and thus classic BPF
specifics were added to the eBPF interpreter core to work around it.

This added some confusion for JIT developers later on that take the
eBPF interpreter code as an example for deriving their JIT. F.e. in
f75298f5c3fe ("s390/bpf: clear correct BPF accumulator register"), at
least X could leak stack memory. Furthermore, since this is only needed
for classic BPF translations and not for eBPF (verifier takes care
that read access to regs cannot be done uninitialized), more complexity
is added to JITs as they need to determine whether they deal with
migrations or native eBPF where they can just omit clearing A/X in
their prologue and thus reduce image size a bit, see f.e. cde66c2d88da
("s390/bpf: Only clear A and X for converted BPF programs"). In other
cases (x86, arm64), A and X are being cleared in the prologue also for
the eBPF case, which is unnecessary.

Let's move this into the BPF migration in bpf_convert_filter() where it
actually belongs, while the number of eBPF JITs is still small. It
can thus be done generically; allowing us to remove the quirk from
__bpf_prog_run() and to slightly reduce JIT image size in case of eBPF,
while reducing code duplication on this matter in current(/future) eBPF
JITs.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Zi Shen Lim <zlim.lnx@gmail.com>
Cc: Yang Shi <yang.shi@linaro.org>
---
 arch/arm64/net/bpf_jit_comp.c |  6 ------
 arch/s390/net/bpf_jit_comp.c  | 13 ++-----------
 arch/x86/net/bpf_jit_comp.c   | 14 +++++++++-----
 kernel/bpf/core.c             |  4 ----
 net/core/filter.c             | 19 ++++++++++++++++---
 5 files changed, 27 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index b162ad7..7658612 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -152,8 +152,6 @@ static void build_prologue(struct jit_ctx *ctx)
 	const u8 r8 = bpf2a64[BPF_REG_8];
 	const u8 r9 = bpf2a64[BPF_REG_9];
 	const u8 fp = bpf2a64[BPF_REG_FP];
-	const u8 ra = bpf2a64[BPF_REG_A];
-	const u8 rx = bpf2a64[BPF_REG_X];
 	const u8 tmp1 = bpf2a64[TMP_REG_1];
 	const u8 tmp2 = bpf2a64[TMP_REG_2];
 
@@ -200,10 +198,6 @@ static void build_prologue(struct jit_ctx *ctx)
 
 	/* Set up function call stack */
 	emit(A64_SUB_I(1, A64_SP, A64_SP, STACK_SIZE), ctx);
-
-	/* Clear registers A and X */
-	emit_a64_mov_i64(ra, 0, ctx);
-	emit_a64_mov_i64(rx, 0, ctx);
 }
 
 static void build_epilogue(struct jit_ctx *ctx)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 9a0c4c2..3c0bfc1 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -408,7 +408,7 @@ static void emit_load_skb_data_hlen(struct bpf_jit *jit)
  * Save registers and create stack frame if necessary.
  * See stack frame layout desription in "bpf_jit.h"!
  */
-static void bpf_jit_prologue(struct bpf_jit *jit, bool is_classic)
+static void bpf_jit_prologue(struct bpf_jit *jit)
 {
 	if (jit->seen & SEEN_TAIL_CALL) {
 		/* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */
@@ -448,15 +448,6 @@ static void bpf_jit_prologue(struct bpf_jit *jit, bool is_classic)
 		/* stg %b1,ST_OFF_SKBP(%r0,%r15) */
 		EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15,
 			      STK_OFF_SKBP);
-	/* Clear A (%b0) and X (%b7) registers for converted BPF programs */
-	if (is_classic) {
-		if (REG_SEEN(BPF_REG_A))
-			/* lghi %ba,0 */
-			EMIT4_IMM(0xa7090000, BPF_REG_A, 0);
-		if (REG_SEEN(BPF_REG_X))
-			/* lghi %bx,0 */
-			EMIT4_IMM(0xa7090000, BPF_REG_X, 0);
-	}
 }
 
 /*
@@ -1245,7 +1236,7 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
 	jit->lit = jit->lit_start;
 	jit->prg = 0;
 
-	bpf_jit_prologue(jit, bpf_prog_was_classic(fp));
+	bpf_jit_prologue(jit);
 	for (i = 0; i < fp->len; i += insn_count) {
 		insn_count = bpf_jit_insn(jit, fp, i);
 		if (insn_count < 0)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 7599197..c080e81 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -193,7 +193,7 @@ struct jit_context {
 	 32 /* space for rbx, r13, r14, r15 */ + \
 	 8 /* space for skb_copy_bits() buffer */)
 
-#define PROLOGUE_SIZE 51
+#define PROLOGUE_SIZE 48
 
 /* emit x64 prologue code for BPF program and check it's size.
  * bpf_tail_call helper will skip it while jumping into another program
@@ -229,11 +229,15 @@ static void emit_prologue(u8 **pprog)
 	/* mov qword ptr [rbp-X],r15 */
 	EMIT3_off32(0x4C, 0x89, 0xBD, -STACKSIZE + 24);
 
-	/* clear A and X registers */
-	EMIT2(0x31, 0xc0); /* xor eax, eax */
-	EMIT3(0x4D, 0x31, 0xED); /* xor r13, r13 */
+	/* Clear the tail call counter (tail_call_cnt): for eBPF tail calls
+	 * we need to reset the counter to 0. It's done in two instructions,
+	 * resetting rax register to 0 (xor on eax gets 0 extended), and
+	 * moving it to the counter location.
+	 */
 
-	/* clear tail_cnt: mov qword ptr [rbp-X], rax */
+	/* xor eax, eax */
+	EMIT2(0x31, 0xc0);
+	/* mov qword ptr [rbp-X], rax */
 	EMIT3_off32(0x48, 0x89, 0x85, -STACKSIZE + 32);
 
 	BUILD_BUG_ON(cnt != PROLOGUE_SIZE);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 334b1bd..972d9a8 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -306,10 +306,6 @@ static unsigned int __bpf_prog_run(void *ctx, const struct bpf_insn *insn)
 	FP = (u64) (unsigned long) &stack[ARRAY_SIZE(stack)];
 	ARG1 = (u64) (unsigned long) ctx;
 
-	/* Registers used in classic BPF programs need to be reset first. */
-	regs[BPF_REG_A] = 0;
-	regs[BPF_REG_X] = 0;
-
 select_insn:
 	goto *jumptable[insn->code];
 
diff --git a/net/core/filter.c b/net/core/filter.c
index 34bf6fc..b513eb8 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -381,9 +381,22 @@ do_pass:
 	new_insn = new_prog;
 	fp = prog;
 
-	if (new_insn)
-		*new_insn = BPF_MOV64_REG(BPF_REG_CTX, BPF_REG_ARG1);
-	new_insn++;
+	/* Classic BPF related prologue emission. */
+	if (new_insn) {
+		/* Classic BPF expects A and X to be reset first. These need
+		 * to be guaranteed to be the first two instructions.
+		 */
+		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
+		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
+
+		/* All programs must keep CTX in callee saved BPF_REG_CTX.
+		 * In eBPF case it's done by the compiler, here we need to
+		 * do this ourself. Initial CTX is present in BPF_REG_ARG1.
+		 */
+		*new_insn++ = BPF_MOV64_REG(BPF_REG_CTX, BPF_REG_ARG1);
+	} else {
+		new_insn += 3;
+	}
 
 	for (i = 0; i < len; fp++, i++) {
 		struct bpf_insn tmp_insns[6] = { };
-- 
1.9.3


* [PATCH net-next 3/5] bpf: fix misleading comment in bpf_convert_filter
  2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 1/5] bpf: add bpf_skb_load_bytes helper Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue Daniel Borkmann
@ 2015-12-17 22:51 ` Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 4/5] bpf, x86: detect/optimize loading 0 immediates Daniel Borkmann
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Daniel Borkmann @ 2015-12-17 22:51 UTC (permalink / raw)
  To: davem; +Cc: ast, holzheu, zlim.lnx, netdev, Daniel Borkmann

Comment says "User BPF's register A is mapped to our BPF register 6",
which is actually wrong as the mapping is on register 0. This can
already be inferred from the code itself. So just remove it before
someone makes assumptions based on that. Only code tells truth. ;)

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 net/core/filter.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index b513eb8..c770196 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -348,12 +348,6 @@ static bool convert_bpf_extensions(struct sock_filter *fp,
  *    jump offsets, 2nd pass remapping:
  *   new_prog = kmalloc(sizeof(struct bpf_insn) * new_len);
  *   bpf_convert_filter(old_prog, old_len, new_prog, &new_len);
- *
- * User BPF's register A is mapped to our BPF register 6, user BPF
- * register X is mapped to BPF register 7; frame pointer is always
- * register 10; Context 'void *ctx' is stored in register 1, that is,
- * for socket filters: ctx == 'struct sk_buff *', for seccomp:
- * ctx == 'struct seccomp_data *'.
  */
 static int bpf_convert_filter(struct sock_filter *prog, int len,
 			      struct bpf_insn *new_prog, int *new_len)
-- 
1.9.3


* [PATCH net-next 4/5] bpf, x86: detect/optimize loading 0 immediates
  2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
                   ` (2 preceding siblings ...)
  2015-12-17 22:51 ` [PATCH net-next 3/5] bpf: fix misleading comment in bpf_convert_filter Daniel Borkmann
@ 2015-12-17 22:51 ` Daniel Borkmann
  2015-12-17 22:51 ` [PATCH net-next 5/5] bpf, test: add couple of test cases Daniel Borkmann
  2015-12-18 21:05 ` [PATCH net-next 0/5] Misc BPF updates David Miller
  5 siblings, 0 replies; 11+ messages in thread
From: Daniel Borkmann @ 2015-12-17 22:51 UTC (permalink / raw)
  To: davem; +Cc: ast, holzheu, zlim.lnx, netdev, Daniel Borkmann

Sometimes when structs or variables need to be initialized/'memset' to 0 in
an eBPF C program, the x86 BPF JIT converts this to use immediates. We can,
however, save a couple of bytes (f.e. even up to 7 bytes on a single emission
of BPF_LD | BPF_IMM | BPF_DW) in the image by detecting such cases and using
xor on the dst register instead.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index c080e81..4286f36 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -459,6 +459,18 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 
 		case BPF_ALU | BPF_MOV | BPF_K:
+			/* optimization: if imm32 is zero, use 'xor <dst>,<dst>'
+			 * to save 3 bytes.
+			 */
+			if (imm32 == 0) {
+				if (is_ereg(dst_reg))
+					EMIT1(add_2mod(0x40, dst_reg, dst_reg));
+				b2 = 0x31; /* xor */
+				b3 = 0xC0;
+				EMIT2(b2, add_2reg(b3, dst_reg, dst_reg));
+				break;
+			}
+
 			/* mov %eax, imm32 */
 			if (is_ereg(dst_reg))
 				EMIT1(add_1mod(0x40, dst_reg));
@@ -473,6 +485,20 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 				return -EINVAL;
 			}
 
+			/* optimization: if imm64 is zero, use 'xor <dst>,<dst>'
+			 * to save 7 bytes.
+			 */
+			if (insn[0].imm == 0 && insn[1].imm == 0) {
+				b1 = add_2mod(0x48, dst_reg, dst_reg);
+				b2 = 0x31; /* xor */
+				b3 = 0xC0;
+				EMIT3(b1, b2, add_2reg(b3, dst_reg, dst_reg));
+
+				insn++;
+				i++;
+				break;
+			}
+
 			/* movabsq %rax, imm64 */
 			EMIT2(add_1mod(0x48, dst_reg), add_1reg(0xB8, dst_reg));
 			EMIT(insn[0].imm, 4);
-- 
1.9.3


* [PATCH net-next 5/5] bpf, test: add couple of test cases
  2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
                   ` (3 preceding siblings ...)
  2015-12-17 22:51 ` [PATCH net-next 4/5] bpf, x86: detect/optimize loading 0 immediates Daniel Borkmann
@ 2015-12-17 22:51 ` Daniel Borkmann
  2015-12-18 21:05 ` [PATCH net-next 0/5] Misc BPF updates David Miller
  5 siblings, 0 replies; 11+ messages in thread
From: Daniel Borkmann @ 2015-12-17 22:51 UTC (permalink / raw)
  To: davem; +Cc: ast, holzheu, zlim.lnx, netdev, Daniel Borkmann

Add a couple of test cases for the interpreter but also the JITs, f.e. to
test that when imm32 moves are being done, the upper 32 bits of the regs
are being zero extended.

Without JIT:

  [...]
  [ 1114.129301] test_bpf: #43 MOV REG64 jited:0 128 PASS
  [ 1114.130626] test_bpf: #44 MOV REG32 jited:0 139 PASS
  [ 1114.132055] test_bpf: #45 LD IMM64 jited:0 124 PASS
  [...]

With JIT (the generated code can, as usual, be verified nicely with the
help of the bpf_jit_disasm tool):

  [...]
  [ 1062.726782] test_bpf: #43 MOV REG64 jited:1 6 PASS
  [ 1062.726890] test_bpf: #44 MOV REG32 jited:1 6 PASS
  [ 1062.726993] test_bpf: #45 LD IMM64 jited:1 6 PASS
  [...]

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 lib/test_bpf.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 10cd186..27a7a26 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -1685,6 +1685,126 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x35d97ef2 } }
 	},
+	{	/* Mainly checking JIT here. */
+		"MOV REG64",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xffffffffffffffffLL),
+			BPF_MOV64_REG(R1, R0),
+			BPF_MOV64_REG(R2, R1),
+			BPF_MOV64_REG(R3, R2),
+			BPF_MOV64_REG(R4, R3),
+			BPF_MOV64_REG(R5, R4),
+			BPF_MOV64_REG(R6, R5),
+			BPF_MOV64_REG(R7, R6),
+			BPF_MOV64_REG(R8, R7),
+			BPF_MOV64_REG(R9, R8),
+			BPF_ALU64_IMM(BPF_MOV, R0, 0),
+			BPF_ALU64_IMM(BPF_MOV, R1, 0),
+			BPF_ALU64_IMM(BPF_MOV, R2, 0),
+			BPF_ALU64_IMM(BPF_MOV, R3, 0),
+			BPF_ALU64_IMM(BPF_MOV, R4, 0),
+			BPF_ALU64_IMM(BPF_MOV, R5, 0),
+			BPF_ALU64_IMM(BPF_MOV, R6, 0),
+			BPF_ALU64_IMM(BPF_MOV, R7, 0),
+			BPF_ALU64_IMM(BPF_MOV, R8, 0),
+			BPF_ALU64_IMM(BPF_MOV, R9, 0),
+			BPF_ALU64_REG(BPF_ADD, R0, R0),
+			BPF_ALU64_REG(BPF_ADD, R0, R1),
+			BPF_ALU64_REG(BPF_ADD, R0, R2),
+			BPF_ALU64_REG(BPF_ADD, R0, R3),
+			BPF_ALU64_REG(BPF_ADD, R0, R4),
+			BPF_ALU64_REG(BPF_ADD, R0, R5),
+			BPF_ALU64_REG(BPF_ADD, R0, R6),
+			BPF_ALU64_REG(BPF_ADD, R0, R7),
+			BPF_ALU64_REG(BPF_ADD, R0, R8),
+			BPF_ALU64_REG(BPF_ADD, R0, R9),
+			BPF_ALU64_IMM(BPF_ADD, R0, 0xfefe),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfefe } }
+	},
+	{	/* Mainly checking JIT here. */
+		"MOV REG32",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xffffffffffffffffLL),
+			BPF_MOV64_REG(R1, R0),
+			BPF_MOV64_REG(R2, R1),
+			BPF_MOV64_REG(R3, R2),
+			BPF_MOV64_REG(R4, R3),
+			BPF_MOV64_REG(R5, R4),
+			BPF_MOV64_REG(R6, R5),
+			BPF_MOV64_REG(R7, R6),
+			BPF_MOV64_REG(R8, R7),
+			BPF_MOV64_REG(R9, R8),
+			BPF_ALU32_IMM(BPF_MOV, R0, 0),
+			BPF_ALU32_IMM(BPF_MOV, R1, 0),
+			BPF_ALU32_IMM(BPF_MOV, R2, 0),
+			BPF_ALU32_IMM(BPF_MOV, R3, 0),
+			BPF_ALU32_IMM(BPF_MOV, R4, 0),
+			BPF_ALU32_IMM(BPF_MOV, R5, 0),
+			BPF_ALU32_IMM(BPF_MOV, R6, 0),
+			BPF_ALU32_IMM(BPF_MOV, R7, 0),
+			BPF_ALU32_IMM(BPF_MOV, R8, 0),
+			BPF_ALU32_IMM(BPF_MOV, R9, 0),
+			BPF_ALU64_REG(BPF_ADD, R0, R0),
+			BPF_ALU64_REG(BPF_ADD, R0, R1),
+			BPF_ALU64_REG(BPF_ADD, R0, R2),
+			BPF_ALU64_REG(BPF_ADD, R0, R3),
+			BPF_ALU64_REG(BPF_ADD, R0, R4),
+			BPF_ALU64_REG(BPF_ADD, R0, R5),
+			BPF_ALU64_REG(BPF_ADD, R0, R6),
+			BPF_ALU64_REG(BPF_ADD, R0, R7),
+			BPF_ALU64_REG(BPF_ADD, R0, R8),
+			BPF_ALU64_REG(BPF_ADD, R0, R9),
+			BPF_ALU64_IMM(BPF_ADD, R0, 0xfefe),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfefe } }
+	},
+	{	/* Mainly checking JIT here. */
+		"LD IMM64",
+		.u.insns_int = {
+			BPF_LD_IMM64(R0, 0xffffffffffffffffLL),
+			BPF_MOV64_REG(R1, R0),
+			BPF_MOV64_REG(R2, R1),
+			BPF_MOV64_REG(R3, R2),
+			BPF_MOV64_REG(R4, R3),
+			BPF_MOV64_REG(R5, R4),
+			BPF_MOV64_REG(R6, R5),
+			BPF_MOV64_REG(R7, R6),
+			BPF_MOV64_REG(R8, R7),
+			BPF_MOV64_REG(R9, R8),
+			BPF_LD_IMM64(R0, 0x0LL),
+			BPF_LD_IMM64(R1, 0x0LL),
+			BPF_LD_IMM64(R2, 0x0LL),
+			BPF_LD_IMM64(R3, 0x0LL),
+			BPF_LD_IMM64(R4, 0x0LL),
+			BPF_LD_IMM64(R5, 0x0LL),
+			BPF_LD_IMM64(R6, 0x0LL),
+			BPF_LD_IMM64(R7, 0x0LL),
+			BPF_LD_IMM64(R8, 0x0LL),
+			BPF_LD_IMM64(R9, 0x0LL),
+			BPF_ALU64_REG(BPF_ADD, R0, R0),
+			BPF_ALU64_REG(BPF_ADD, R0, R1),
+			BPF_ALU64_REG(BPF_ADD, R0, R2),
+			BPF_ALU64_REG(BPF_ADD, R0, R3),
+			BPF_ALU64_REG(BPF_ADD, R0, R4),
+			BPF_ALU64_REG(BPF_ADD, R0, R5),
+			BPF_ALU64_REG(BPF_ADD, R0, R6),
+			BPF_ALU64_REG(BPF_ADD, R0, R7),
+			BPF_ALU64_REG(BPF_ADD, R0, R8),
+			BPF_ALU64_REG(BPF_ADD, R0, R9),
+			BPF_ALU64_IMM(BPF_ADD, R0, 0xfefe),
+			BPF_EXIT_INSN(),
+		},
+		INTERNAL,
+		{ },
+		{ { 0, 0xfefe } }
+	},
 	{
 		"INT: ALU MIX",
 		.u.insns_int = {
-- 
1.9.3


* Re: [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue
  2015-12-17 22:51 ` [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue Daniel Borkmann
@ 2015-12-17 23:52   ` Shi, Yang
  2015-12-18  0:00   ` Z Lim
  1 sibling, 0 replies; 11+ messages in thread
From: Shi, Yang @ 2015-12-17 23:52 UTC (permalink / raw)
  To: Daniel Borkmann, davem; +Cc: ast, holzheu, zlim.lnx, netdev

On 12/17/2015 2:51 PM, Daniel Borkmann wrote:
> Back in the days where eBPF (or back then "internal BPF" ;->) was not
> exposed to user space, and only the classic BPF programs internally
> translated into eBPF programs, we missed the fact that for classic BPF
> A and X needed to be cleared. It was fixed back then via 83d5b7ef99c9
> ("net: filter: initialize A and X registers"), and thus classic BPF
> specifics were added to the eBPF interpreter core to work around it.
>
> This added some confusion for JIT developers later on that take the
> eBPF interpreter code as an example for deriving their JIT. F.e. in
> f75298f5c3fe ("s390/bpf: clear correct BPF accumulator register"), at
> least X could leak stack memory. Furthermore, since this is only needed
> for classic BPF translations and not for eBPF (verifier takes care
> that read access to regs cannot be done uninitialized), more complexity
> is added to JITs as they need to determine whether they deal with
> migrations or native eBPF where they can just omit clearing A/X in
> their prologue and thus reduce image size a bit, see f.e. cde66c2d88da
> ("s390/bpf: Only clear A and X for converted BPF programs"). In other
> cases (x86, arm64), A and X are being cleared in the prologue also for
> the eBPF case, which is unnecessary.
>
> Let's move this into the BPF migration in bpf_convert_filter() where it
> actually belongs, while the number of eBPF JITs is still small. It
> can thus be done generically; allowing us to remove the quirk from
> __bpf_prog_run() and to slightly reduce JIT image size in case of eBPF,
> while reducing code duplication on this matter in current(/future) eBPF
> JITs.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Acked-by: Alexei Starovoitov <ast@kernel.org>
> Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> Cc: Zi Shen Lim <zlim.lnx@gmail.com>
> Cc: Yang Shi <yang.shi@linaro.org>

Acked by me on the arm64 part.

Acked-by: Yang Shi <yang.shi@linaro.org>

Thanks,
Yang

> ---
>   arch/arm64/net/bpf_jit_comp.c |  6 ------
>   arch/s390/net/bpf_jit_comp.c  | 13 ++-----------
>   arch/x86/net/bpf_jit_comp.c   | 14 +++++++++-----
>   kernel/bpf/core.c             |  4 ----
>   net/core/filter.c             | 19 ++++++++++++++++---
>   5 files changed, 27 insertions(+), 29 deletions(-)
>
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index b162ad7..7658612 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -152,8 +152,6 @@ static void build_prologue(struct jit_ctx *ctx)
>   	const u8 r8 = bpf2a64[BPF_REG_8];
>   	const u8 r9 = bpf2a64[BPF_REG_9];
>   	const u8 fp = bpf2a64[BPF_REG_FP];
> -	const u8 ra = bpf2a64[BPF_REG_A];
> -	const u8 rx = bpf2a64[BPF_REG_X];
>   	const u8 tmp1 = bpf2a64[TMP_REG_1];
>   	const u8 tmp2 = bpf2a64[TMP_REG_2];
>
> @@ -200,10 +198,6 @@ static void build_prologue(struct jit_ctx *ctx)
>
>   	/* Set up function call stack */
>   	emit(A64_SUB_I(1, A64_SP, A64_SP, STACK_SIZE), ctx);
> -
> -	/* Clear registers A and X */
> -	emit_a64_mov_i64(ra, 0, ctx);
> -	emit_a64_mov_i64(rx, 0, ctx);
>   }
>
>   static void build_epilogue(struct jit_ctx *ctx)
> diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
> index 9a0c4c2..3c0bfc1 100644
> --- a/arch/s390/net/bpf_jit_comp.c
> +++ b/arch/s390/net/bpf_jit_comp.c
> @@ -408,7 +408,7 @@ static void emit_load_skb_data_hlen(struct bpf_jit *jit)
>    * Save registers and create stack frame if necessary.
>    * See stack frame layout desription in "bpf_jit.h"!
>    */
> -static void bpf_jit_prologue(struct bpf_jit *jit, bool is_classic)
> +static void bpf_jit_prologue(struct bpf_jit *jit)
>   {
>   	if (jit->seen & SEEN_TAIL_CALL) {
>   		/* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */
> @@ -448,15 +448,6 @@ static void bpf_jit_prologue(struct bpf_jit *jit, bool is_classic)
>   		/* stg %b1,ST_OFF_SKBP(%r0,%r15) */
>   		EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15,
>   			      STK_OFF_SKBP);
> -	/* Clear A (%b0) and X (%b7) registers for converted BPF programs */
> -	if (is_classic) {
> -		if (REG_SEEN(BPF_REG_A))
> -			/* lghi %ba,0 */
> -			EMIT4_IMM(0xa7090000, BPF_REG_A, 0);
> -		if (REG_SEEN(BPF_REG_X))
> -			/* lghi %bx,0 */
> -			EMIT4_IMM(0xa7090000, BPF_REG_X, 0);
> -	}
>   }
>
>   /*
> @@ -1245,7 +1236,7 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
>   	jit->lit = jit->lit_start;
>   	jit->prg = 0;
>
> -	bpf_jit_prologue(jit, bpf_prog_was_classic(fp));
> +	bpf_jit_prologue(jit);
>   	for (i = 0; i < fp->len; i += insn_count) {
>   		insn_count = bpf_jit_insn(jit, fp, i);
>   		if (insn_count < 0)
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 7599197..c080e81 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -193,7 +193,7 @@ struct jit_context {
>   	 32 /* space for rbx, r13, r14, r15 */ + \
>   	 8 /* space for skb_copy_bits() buffer */)
>
> -#define PROLOGUE_SIZE 51
> +#define PROLOGUE_SIZE 48
>
>   /* emit x64 prologue code for BPF program and check it's size.
>    * bpf_tail_call helper will skip it while jumping into another program
> @@ -229,11 +229,15 @@ static void emit_prologue(u8 **pprog)
>   	/* mov qword ptr [rbp-X],r15 */
>   	EMIT3_off32(0x4C, 0x89, 0xBD, -STACKSIZE + 24);
>
> -	/* clear A and X registers */
> -	EMIT2(0x31, 0xc0); /* xor eax, eax */
> -	EMIT3(0x4D, 0x31, 0xED); /* xor r13, r13 */
> +	/* Clear the tail call counter (tail_call_cnt): for eBPF tail calls
> +	 * we need to reset the counter to 0. It's done in two instructions,
> +	 * resetting rax register to 0 (xor on eax gets 0 extended), and
> +	 * moving it to the counter location.
> +	 */
>
> -	/* clear tail_cnt: mov qword ptr [rbp-X], rax */
> +	/* xor eax, eax */
> +	EMIT2(0x31, 0xc0);
> +	/* mov qword ptr [rbp-X], rax */
>   	EMIT3_off32(0x48, 0x89, 0x85, -STACKSIZE + 32);
>
>   	BUILD_BUG_ON(cnt != PROLOGUE_SIZE);
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 334b1bd..972d9a8 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -306,10 +306,6 @@ static unsigned int __bpf_prog_run(void *ctx, const struct bpf_insn *insn)
>   	FP = (u64) (unsigned long) &stack[ARRAY_SIZE(stack)];
>   	ARG1 = (u64) (unsigned long) ctx;
>
> -	/* Registers used in classic BPF programs need to be reset first. */
> -	regs[BPF_REG_A] = 0;
> -	regs[BPF_REG_X] = 0;
> -
>   select_insn:
>   	goto *jumptable[insn->code];
>
> diff --git a/net/core/filter.c b/net/core/filter.c
> index 34bf6fc..b513eb8 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -381,9 +381,22 @@ do_pass:
>   	new_insn = new_prog;
>   	fp = prog;
>
> -	if (new_insn)
> -		*new_insn = BPF_MOV64_REG(BPF_REG_CTX, BPF_REG_ARG1);
> -	new_insn++;
> +	/* Classic BPF related prologue emission. */
> +	if (new_insn) {
> +		/* Classic BPF expects A and X to be reset first. These need
> +		 * to be guaranteed to be the first two instructions.
> +		 */
> +		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
> +		*new_insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_X, BPF_REG_X);
> +
> +		/* All programs must keep CTX in callee saved BPF_REG_CTX.
> +		 * In eBPF case it's done by the compiler, here we need to
> +		 * do this ourself. Initial CTX is present in BPF_REG_ARG1.
> +		 */
> +		*new_insn++ = BPF_MOV64_REG(BPF_REG_CTX, BPF_REG_ARG1);
> +	} else {
> +		new_insn += 3;
> +	}
>
>   	for (i = 0; i < len; fp++, i++) {
>   		struct bpf_insn tmp_insns[6] = { };
>


* Re: [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue
  2015-12-17 22:51 ` [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue Daniel Borkmann
  2015-12-17 23:52   ` Shi, Yang
@ 2015-12-18  0:00   ` Z Lim
  1 sibling, 0 replies; 11+ messages in thread
From: Z Lim @ 2015-12-18  0:00 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: David S. Miller, Alexei Starovoitov, Michael Holzheu,
	Network Development, Yang Shi

On Thu, Dec 17, 2015 at 2:51 PM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> Back in the days when eBPF (or back then "internal BPF" ;->) was not
> exposed to user space, and only the classic BPF programs internally
> translated into eBPF programs, we missed the fact that for classic BPF
> A and X needed to be cleared. It was fixed back then via 83d5b7ef99c9
> ("net: filter: initialize A and X registers"), and thus classic BPF
> specifics were added to the eBPF interpreter core to work around it.
>
> This later added some confusion for JIT developers who take the
> eBPF interpreter code as an example for deriving their JIT. F.e. in
> f75298f5c3fe ("s390/bpf: clear correct BPF accumulator register"), at
> least X could leak stack memory. Furthermore, since this is only needed
> for classic BPF translations and not for eBPF (the verifier ensures
> that registers cannot be read uninitialized), more complexity
> is added to JITs as they need to determine whether they deal with
> migrations or native eBPF where they can just omit clearing A/X in
> their prologue and thus reduce image size a bit, see f.e. cde66c2d88da
> ("s390/bpf: Only clear A and X for converted BPF programs"). In other
> cases (x86, arm64), A and X are cleared in the prologue also for
> the eBPF case, which is unnecessary.
>
> Let's move this into the BPF migration in bpf_convert_filter() where it
> actually belongs, as long as the number of eBPF JITs is still small. It
> can thus be done generically; allowing us to remove the quirk from
> __bpf_prog_run() and to slightly reduce JIT image size in case of eBPF,
> while reducing code duplication on this matter in current(/future) eBPF
> JITs.
>
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Acked-by: Alexei Starovoitov <ast@kernel.org>
> Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> Tested-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
> Cc: Zi Shen Lim <zlim.lnx@gmail.com>
> Cc: Yang Shi <yang.shi@linaro.org>
> ---
>  arch/arm64/net/bpf_jit_comp.c |  6 ------

For the arm64 bits:
Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>

^ permalink raw reply	[flat|nested] 11+ messages in thread
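The prologue emission discussed in the patch above can be modeled outside the kernel. The sketch below is illustrative Python, not kernel code: the register names (A, X, CTX, ARG1) follow the patch, but the instruction tuples and the tiny interpreter are invented for demonstration only.

```python
# Illustrative model (NOT kernel code) of the classic-to-eBPF
# prologue emitted in bpf_convert_filter() by this patch.

def convert_classic(prog):
    """Prepend the three-instruction prologue from the patch:
    clear A and X (classic BPF semantics guarantee both start at
    zero), then move the context from ARG1 into callee-saved CTX."""
    prologue = [
        ("xor", "A", "A"),       # A = 0, first instruction
        ("xor", "X", "X"),       # X = 0, second instruction
        ("mov", "CTX", "ARG1"),  # all programs keep ctx in CTX
    ]
    return prologue + prog

def run(prog, ctx):
    # The register file deliberately starts with garbage values to
    # model uninitialized registers/stack slots.
    regs = {"A": 0xdead, "X": 0xbeef, "CTX": None, "ARG1": ctx}
    for op, dst, src in prog:
        if op == "xor":
            regs[dst] = regs[dst] ^ regs[src]  # xor r,r zeroes r
        elif op == "mov":
            regs[dst] = regs[src]
    return regs

# A converted classic program that reads A without writing it first:
classic_body = [("mov", "A", "A")]  # no-op read of A

regs = run(convert_classic(classic_body), ctx="pkt")
print(regs["A"], regs["X"])  # 0 0 -- no stale values leak
```

The point of the sketch: a converted classic program observes A == X == 0 no matter what the register file held beforehand, which is what the two XOR instructions emitted by bpf_convert_filter() guarantee, so neither the interpreter nor the JITs have to special-case it.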

* Re: [PATCH net-next 0/5] Misc BPF updates
  2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
                   ` (4 preceding siblings ...)
  2015-12-17 22:51 ` [PATCH net-next 5/5] bpf, test: add couple of test cases Daniel Borkmann
@ 2015-12-18 21:05 ` David Miller
  5 siblings, 0 replies; 11+ messages in thread
From: David Miller @ 2015-12-18 21:05 UTC (permalink / raw)
  To: daniel; +Cc: ast, holzheu, zlim.lnx, netdev

From: Daniel Borkmann <daniel@iogearbox.net>
Date: Thu, 17 Dec 2015 23:51:52 +0100

> This series contains a couple of misc updates to the BPF code, besides
> others a new helper bpf_skb_load_bytes(), moving clearing of A/X to the
> classic converter, etc. Please see individual patches for details.

Series applied, thanks Daniel.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH net-next 0/5] Misc BPF updates
  2017-04-26 23:39 Daniel Borkmann
@ 2017-04-28 19:49 ` David Miller
  0 siblings, 0 replies; 11+ messages in thread
From: David Miller @ 2017-04-28 19:49 UTC (permalink / raw)
  To: daniel; +Cc: ast, netdev

From: Daniel Borkmann <daniel@iogearbox.net>
Date: Thu, 27 Apr 2017 01:39:30 +0200

> This set cleans up ldimm64 leftovers from early eBPF days and
> adds a couple of test cases related to this to the verifier test
> suite. It also cleans up the kallsyms spinlock (I had the same
> patch queued) by relaxing it through switching to the _bh variant.
> It fixes up test_progs in relation to htons/ntohs and adds
> accessor macros for the percpu tests in test_maps.

Series applied, thanks for fixing all of this stuff.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH net-next 0/5] Misc BPF updates
@ 2017-04-26 23:39 Daniel Borkmann
  2017-04-28 19:49 ` David Miller
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Borkmann @ 2017-04-26 23:39 UTC (permalink / raw)
  To: davem; +Cc: ast, netdev, Daniel Borkmann

This set cleans up ldimm64 leftovers from early eBPF days and
adds a couple of test cases related to this to the verifier test
suite. It also cleans up the kallsyms spinlock (I had the same
patch queued) by relaxing it through switching to the _bh variant.
It fixes up test_progs in relation to htons/ntohs and adds
accessor macros for the percpu tests in test_maps.

Thanks!

Daniel Borkmann (4):
  bpf, x86_64/arm64: remove old ldimm64 artifacts from jits
  bpf: add various test cases to verifier selftests
  bpf: fix _htons occurences in test_progs
  bpf: provide a generic macro for percpu values for selftests

Hannes Frederic Sowa (1):
  bpf: bpf_lock on kallsysms doesn't need to be irqsave

 arch/arm64/net/bpf_jit_comp.c                 |   9 --
 arch/x86/net/bpf_jit_comp.c                   |   7 --
 kernel/bpf/core.c                             |  12 +--
 tools/testing/selftests/bpf/bpf_util.h        |  26 +++++
 tools/testing/selftests/bpf/test_l4lb.c       |  11 +--
 tools/testing/selftests/bpf/test_maps.c       |  37 +++----
 tools/testing/selftests/bpf/test_pkt_access.c |   6 +-
 tools/testing/selftests/bpf/test_progs.c      |  10 +-
 tools/testing/selftests/bpf/test_verifier.c   | 137 ++++++++++++++++++++++++++
 9 files changed, 199 insertions(+), 56 deletions(-)

-- 
1.9.3

^ permalink raw reply	[flat|nested] 11+ messages in thread

Thread overview: 11+ messages
2015-12-17 22:51 [PATCH net-next 0/5] Misc BPF updates Daniel Borkmann
2015-12-17 22:51 ` [PATCH net-next 1/5] bpf: add bpf_skb_load_bytes helper Daniel Borkmann
2015-12-17 22:51 ` [PATCH net-next 2/5] bpf: move clearing of A/X into classic to eBPF migration prologue Daniel Borkmann
2015-12-17 23:52   ` Shi, Yang
2015-12-18  0:00   ` Z Lim
2015-12-17 22:51 ` [PATCH net-next 3/5] bpf: fix misleading comment in bpf_convert_filter Daniel Borkmann
2015-12-17 22:51 ` [PATCH net-next 4/5] bpf, x86: detect/optimize loading 0 immediates Daniel Borkmann
2015-12-17 22:51 ` [PATCH net-next 5/5] bpf, test: add couple of test cases Daniel Borkmann
2015-12-18 21:05 ` [PATCH net-next 0/5] Misc BPF updates David Miller
2017-04-26 23:39 Daniel Borkmann
2017-04-28 19:49 ` David Miller
