* [PATCH net-next 0/5] Misc BPF improvements
@ 2017-01-24  0:06 Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 1/5] bpf: simplify __is_valid_access test on cb Daniel Borkmann
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-24  0:06 UTC
  To: davem; +Cc: netdev, ast, Daniel Borkmann

This series adds various misc improvements to BPF, f.e. allowing the
skb_load_bytes() helper to be used with filter/reuseport programs to
ease programming, test cases for the program tag, etc. For details,
please see the individual patches.

Thanks!

Daniel Borkmann (5):
  bpf: simplify __is_valid_access test on cb
  bpf: enable load bytes helper for filter/reuseport progs
  bpf: allow option for setting bpf_l4_csum_replace from scratch
  bpf: add prog tag test case to bpf selftests
  bpf: enable verifier to better track const alu ops

 include/uapi/linux/bpf.h                    |   1 +
 kernel/bpf/verifier.c                       |  64 ++++++---
 net/core/filter.c                           |  63 ++++-----
 tools/testing/selftests/bpf/Makefile        |   4 +-
 tools/testing/selftests/bpf/test_tag.c      | 202 ++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/test_verifier.c |  82 +++++++++++
 6 files changed, 364 insertions(+), 52 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/test_tag.c

-- 
1.9.3


* [PATCH net-next 1/5] bpf: simplify __is_valid_access test on cb
  2017-01-24  0:06 [PATCH net-next 0/5] Misc BPF improvements Daniel Borkmann
@ 2017-01-24  0:06 ` Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 2/5] bpf: enable load bytes helper for filter/reuseport progs Daniel Borkmann
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-24  0:06 UTC
  To: davem; +Cc: netdev, ast, Daniel Borkmann

The __is_valid_access() test for cb[] from 62c7989b24db ("bpf: allow
b/h/w/dw access for bpf's cb in ctx") was unnecessarily complex; we
can simplify it the same way the recent fix in 2d071c643f1c ("bpf,
trace: make ctx access checks more robust") did. Overflow can never
happen, as size is 1/2/4/8 depending on the access.
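
As a standalone illustration (not part of the patch), the following
user space sketch models the simplified check; the struct is only a
stand-in for the cb[] window layout, and the access size is assumed
to be 1/2/4/8 as the verifier guarantees:

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Stand-in: only the cb[] window matters for this check. */
  struct fake_sk_buff {
  	uint32_t cb[5];
  };

  /* Access is valid iff it lies entirely within cb[0]..cb[4]. */
  static int cb_access_ok(size_t off, size_t size)
  {
  	return off >= offsetof(struct fake_sk_buff, cb[0]) &&
  	       off + size <= offsetof(struct fake_sk_buff, cb[4]) +
  			     sizeof(uint32_t);
  }

  int main(void)
  {
  	assert(cb_access_ok(0, 1));	/* byte at cb[0] */
  	assert(cb_access_ok(12, 8));	/* dword over cb[3] and cb[4] */
  	assert(!cb_access_ok(16, 8));	/* dword spills past cb[4] */
  	assert(!cb_access_ok(19, 2));	/* half crosses the window end */
  	return 0;
  }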

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 net/core/filter.c | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 9038386..883975f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2784,19 +2784,8 @@ static bool __is_valid_access(int off, int size)
 	switch (off) {
 	case offsetof(struct __sk_buff, cb[0]) ...
 	     offsetof(struct __sk_buff, cb[4]) + sizeof(__u32) - 1:
-		if (size == sizeof(__u16) &&
-		    off > offsetof(struct __sk_buff, cb[4]) + sizeof(__u16))
-			return false;
-		if (size == sizeof(__u32) &&
-		    off > offsetof(struct __sk_buff, cb[4]))
-			return false;
-		if (size == sizeof(__u64) &&
-		    off > offsetof(struct __sk_buff, cb[2]))
-			return false;
-		if (size != sizeof(__u8)  &&
-		    size != sizeof(__u16) &&
-		    size != sizeof(__u32) &&
-		    size != sizeof(__u64))
+		if (off + size >
+		    offsetof(struct __sk_buff, cb[4]) + sizeof(__u32))
 			return false;
 		break;
 	default:
-- 
1.9.3


* [PATCH net-next 2/5] bpf: enable load bytes helper for filter/reuseport progs
  2017-01-24  0:06 [PATCH net-next 0/5] Misc BPF improvements Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 1/5] bpf: simplify __is_valid_access test on cb Daniel Borkmann
@ 2017-01-24  0:06 ` Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 3/5] bpf: allow option for setting bpf_l4_csum_replace from scratch Daniel Borkmann
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-24  0:06 UTC
  To: davem; +Cc: netdev, ast, Daniel Borkmann

BPF_PROG_TYPE_SOCKET_FILTER programs are used in various facilities
such as SO_REUSEPORT and packet fanout demuxing, packet filtering,
kcm, etc., and yet the only means they have for reading packet data
is BPF_LD with {BPF_ABS, BPF_IND} for single byte/half/word accesses.

Direct packet access is restricted to tc programs right now, but we
can still ease usage by allowing the skb_load_bytes() helper, added
in 05c74e5e53f6 ("bpf: add bpf_skb_load_bytes helper"), which calls
skb_header_pointer() similarly to bpf_load_pointer(), but loads into
stack buffers with larger access sizes.

Rename the previous sk_filter_func_proto() to bpf_base_func_proto(),
since it is used everywhere else as well; do the same for the ctx
converter, which becomes bpf_convert_ctx_access().
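
As a hedged usage sketch (not part of the patch; the section name,
helper declarations and accepted-length convention are assumed to
follow the usual libbpf-style samples), a socket filter can now pull
a whole header into a stack buffer in one call:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("socket")
  int sock_flt(struct __sk_buff *skb)
  {
  	__u8 buf[14];	/* Ethernet header sized stack buffer */

  	/* One helper call instead of a series of BPF_LD [BPF_ABS]
  	 * single byte/half/word loads.
  	 */
  	if (bpf_skb_load_bytes(skb, 0, buf, sizeof(buf)) < 0)
  		return 0;	/* too short: accept nothing */

  	/* ... parse buf[]; returning skb->len accepts the packet */
  	return skb->len;
  }

  char _license[] SEC("license") = "GPL";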

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 net/core/filter.c | 41 ++++++++++++++++++++++++++---------------
 1 file changed, 26 insertions(+), 15 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 883975f..e2263da 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -2598,7 +2598,7 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 };
 
 static const struct bpf_func_proto *
-sk_filter_func_proto(enum bpf_func_id func_id)
+bpf_base_func_proto(enum bpf_func_id func_id)
 {
 	switch (func_id) {
 	case BPF_FUNC_map_lookup_elem:
@@ -2626,6 +2626,17 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 }
 
 static const struct bpf_func_proto *
+sk_filter_func_proto(enum bpf_func_id func_id)
+{
+	switch (func_id) {
+	case BPF_FUNC_skb_load_bytes:
+		return &bpf_skb_load_bytes_proto;
+	default:
+		return bpf_base_func_proto(func_id);
+	}
+}
+
+static const struct bpf_func_proto *
 tc_cls_act_func_proto(enum bpf_func_id func_id)
 {
 	switch (func_id) {
@@ -2680,7 +2691,7 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 	case BPF_FUNC_skb_under_cgroup:
 		return &bpf_skb_under_cgroup_proto;
 	default:
-		return sk_filter_func_proto(func_id);
+		return bpf_base_func_proto(func_id);
 	}
 }
 
@@ -2695,7 +2706,7 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 	case BPF_FUNC_xdp_adjust_head:
 		return &bpf_xdp_adjust_head_proto;
 	default:
-		return sk_filter_func_proto(func_id);
+		return bpf_base_func_proto(func_id);
 	}
 }
 
@@ -2706,7 +2717,7 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 	case BPF_FUNC_skb_load_bytes:
 		return &bpf_skb_load_bytes_proto;
 	default:
-		return sk_filter_func_proto(func_id);
+		return bpf_base_func_proto(func_id);
 	}
 }
 
@@ -2733,7 +2744,7 @@ static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 	case BPF_FUNC_skb_under_cgroup:
 		return &bpf_skb_under_cgroup_proto;
 	default:
-		return sk_filter_func_proto(func_id);
+		return bpf_base_func_proto(func_id);
 	}
 }
 
@@ -2983,10 +2994,10 @@ void bpf_warn_invalid_xdp_action(u32 act)
 }
 EXPORT_SYMBOL_GPL(bpf_warn_invalid_xdp_action);
 
-static u32 sk_filter_convert_ctx_access(enum bpf_access_type type,
-					const struct bpf_insn *si,
-					struct bpf_insn *insn_buf,
-					struct bpf_prog *prog)
+static u32 bpf_convert_ctx_access(enum bpf_access_type type,
+				  const struct bpf_insn *si,
+				  struct bpf_insn *insn_buf,
+				  struct bpf_prog *prog)
 {
 	struct bpf_insn *insn = insn_buf;
 	int off;
@@ -3210,7 +3221,7 @@ static u32 tc_cls_act_convert_ctx_access(enum bpf_access_type type,
 				      offsetof(struct net_device, ifindex));
 		break;
 	default:
-		return sk_filter_convert_ctx_access(type, si, insn_buf, prog);
+		return bpf_convert_ctx_access(type, si, insn_buf, prog);
 	}
 
 	return insn - insn_buf;
@@ -3242,7 +3253,7 @@ static u32 xdp_convert_ctx_access(enum bpf_access_type type,
 static const struct bpf_verifier_ops sk_filter_ops = {
 	.get_func_proto		= sk_filter_func_proto,
 	.is_valid_access	= sk_filter_is_valid_access,
-	.convert_ctx_access	= sk_filter_convert_ctx_access,
+	.convert_ctx_access	= bpf_convert_ctx_access,
 };
 
 static const struct bpf_verifier_ops tc_cls_act_ops = {
@@ -3261,24 +3272,24 @@ static u32 xdp_convert_ctx_access(enum bpf_access_type type,
 static const struct bpf_verifier_ops cg_skb_ops = {
 	.get_func_proto		= cg_skb_func_proto,
 	.is_valid_access	= sk_filter_is_valid_access,
-	.convert_ctx_access	= sk_filter_convert_ctx_access,
+	.convert_ctx_access	= bpf_convert_ctx_access,
 };
 
 static const struct bpf_verifier_ops lwt_inout_ops = {
 	.get_func_proto		= lwt_inout_func_proto,
 	.is_valid_access	= lwt_is_valid_access,
-	.convert_ctx_access	= sk_filter_convert_ctx_access,
+	.convert_ctx_access	= bpf_convert_ctx_access,
 };
 
 static const struct bpf_verifier_ops lwt_xmit_ops = {
 	.get_func_proto		= lwt_xmit_func_proto,
 	.is_valid_access	= lwt_is_valid_access,
-	.convert_ctx_access	= sk_filter_convert_ctx_access,
+	.convert_ctx_access	= bpf_convert_ctx_access,
 	.gen_prologue		= tc_cls_act_prologue,
 };
 
 static const struct bpf_verifier_ops cg_sock_ops = {
-	.get_func_proto		= sk_filter_func_proto,
+	.get_func_proto		= bpf_base_func_proto,
 	.is_valid_access	= sock_filter_is_valid_access,
 	.convert_ctx_access	= sock_filter_convert_ctx_access,
 };
-- 
1.9.3


* [PATCH net-next 3/5] bpf: allow option for setting bpf_l4_csum_replace from scratch
  2017-01-24  0:06 [PATCH net-next 0/5] Misc BPF improvements Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 1/5] bpf: simplify __is_valid_access test on cb Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 2/5] bpf: enable load bytes helper for filter/reuseport progs Daniel Borkmann
@ 2017-01-24  0:06 ` Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 4/5] bpf: add prog tag test case to bpf selftests Daniel Borkmann
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-24  0:06 UTC
  To: davem; +Cc: netdev, ast, Daniel Borkmann

When programs need to calculate the csum from scratch for small UDP
packets and use bpf_l4_csum_replace() to feed the result from helpers
like bpf_csum_diff(), we need a flag besides BPF_F_MARK_MANGLED_0 that
ignores the case of the current csum being 0, while still allowing the
helper to set the csum and, when needed, transform it to
CSUM_MANGLED_0.
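
A hedged sketch of the intended usage (not part of the patch; the
offsets, section name and payload stand-in are illustrative, assuming
IPv4/UDP without IP options and a libbpf-style build):

  #include <stddef.h>
  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/udp.h>
  #include <bpf/bpf_helpers.h>

  #define UDP_CSUM_OFF	(ETH_HLEN + sizeof(struct iphdr) + \
  			 offsetof(struct udphdr, check))

  SEC("tc")
  int udp_csum_from_scratch(struct __sk_buff *skb)
  {
  	__be32 word = 0x01020304;	/* stand-in for new payload */
  	__s64 diff;

  	/* Fold the new bytes into a csum diff with a zero seed. */
  	diff = bpf_csum_diff(NULL, 0, &word, sizeof(word), 0);
  	if (diff < 0)
  		return 0;

  	/* Field size 0 in the low flag bits applies `diff` as a csum
  	 * diff. BPF_F_MARK_ENFORCE rewrites the csum even when the
  	 * current value is 0, and BPF_F_MARK_MANGLED_0 still turns a
  	 * resulting 0 into CSUM_MANGLED_0.
  	 */
  	bpf_l4_csum_replace(skb, UDP_CSUM_OFF, 0, diff,
  			    BPF_F_PSEUDO_HDR | BPF_F_MARK_MANGLED_0 |
  			    BPF_F_MARK_ENFORCE);
  	return 0;
  }

  char _license[] SEC("license") = "GPL";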

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 include/uapi/linux/bpf.h | 1 +
 net/core/filter.c        | 7 ++++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index bd30684..e07fd5a 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -522,6 +522,7 @@ enum bpf_func_id {
 /* BPF_FUNC_l4_csum_replace flags. */
 #define BPF_F_PSEUDO_HDR		(1ULL << 4)
 #define BPF_F_MARK_MANGLED_0		(1ULL << 5)
+#define BPF_F_MARK_ENFORCE		(1ULL << 6)
 
 /* BPF_FUNC_clone_redirect and BPF_FUNC_redirect flags. */
 #define BPF_F_INGRESS			(1ULL << 0)
diff --git a/net/core/filter.c b/net/core/filter.c
index e2263da..1e00737 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1522,10 +1522,11 @@ static inline void bpf_pull_mac_rcsum(struct sk_buff *skb)
 {
 	bool is_pseudo = flags & BPF_F_PSEUDO_HDR;
 	bool is_mmzero = flags & BPF_F_MARK_MANGLED_0;
+	bool do_mforce = flags & BPF_F_MARK_ENFORCE;
 	__sum16 *ptr;
 
-	if (unlikely(flags & ~(BPF_F_MARK_MANGLED_0 | BPF_F_PSEUDO_HDR |
-			       BPF_F_HDR_FIELD_MASK)))
+	if (unlikely(flags & ~(BPF_F_MARK_MANGLED_0 | BPF_F_MARK_ENFORCE |
+			       BPF_F_PSEUDO_HDR | BPF_F_HDR_FIELD_MASK)))
 		return -EINVAL;
 	if (unlikely(offset > 0xffff || offset & 1))
 		return -EFAULT;
@@ -1533,7 +1534,7 @@ static inline void bpf_pull_mac_rcsum(struct sk_buff *skb)
 		return -EFAULT;
 
 	ptr = (__sum16 *)(skb->data + offset);
-	if (is_mmzero && !*ptr)
+	if (is_mmzero && !do_mforce && !*ptr)
 		return 0;
 
 	switch (flags & BPF_F_HDR_FIELD_MASK) {
-- 
1.9.3


* [PATCH net-next 4/5] bpf: add prog tag test case to bpf selftests
  2017-01-24  0:06 [PATCH net-next 0/5] Misc BPF improvements Daniel Borkmann
                   ` (2 preceding siblings ...)
  2017-01-24  0:06 ` [PATCH net-next 3/5] bpf: allow option for setting bpf_l4_csum_replace from scratch Daniel Borkmann
@ 2017-01-24  0:06 ` Daniel Borkmann
  2017-01-24  0:06 ` [PATCH net-next 5/5] bpf: enable verifier to better track const alu ops Daniel Borkmann
  2017-01-24 19:49 ` [PATCH net-next 0/5] Misc BPF improvements David Miller
  5 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-24  0:06 UTC
  To: davem; +Cc: netdev, ast, Daniel Borkmann

Add the test case used to compare the tag reported via fdinfo with
the one computed through af_alg. Tests range from min to max sized
programs, with and without maps included.

  # ./test_tag
  test_tag: OK (40945 tests)

Tested on x86_64 and s390x.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/testing/selftests/bpf/Makefile   |   4 +-
 tools/testing/selftests/bpf/test_tag.c | 202 +++++++++++++++++++++++++++++++++
 2 files changed, 204 insertions(+), 2 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/test_tag.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 064a3e5..769a6cb 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -1,8 +1,8 @@
 CFLAGS += -Wall -O2 -I../../../../usr/include
 
-test_objs = test_verifier test_maps test_lru_map test_lpm_map
+test_objs = test_verifier test_tag test_maps test_lru_map test_lpm_map
 
-TEST_PROGS := test_verifier test_maps test_lru_map test_lpm_map test_kmod.sh
+TEST_PROGS := $(test_objs) test_kmod.sh
 TEST_FILES := $(test_objs)
 
 all: $(test_objs)
diff --git a/tools/testing/selftests/bpf/test_tag.c b/tools/testing/selftests/bpf/test_tag.c
new file mode 100644
index 0000000..6ab4793
--- /dev/null
+++ b/tools/testing/selftests/bpf/test_tag.c
@@ -0,0 +1,202 @@
+#include <stdio.h>
+#include <stdlib.h>
+#include <ctype.h>
+#include <time.h>
+#include <errno.h>
+#include <unistd.h>
+#include <string.h>
+#include <sched.h>
+#include <limits.h>
+#include <assert.h>
+
+#include <sys/socket.h>
+#include <sys/resource.h>
+
+#include <linux/filter.h>
+#include <linux/bpf.h>
+#include <linux/if_alg.h>
+
+#include "../../../include/linux/filter.h"
+
+#include "bpf_sys.h"
+
+static struct bpf_insn prog[BPF_MAXINSNS];
+
+static void bpf_gen_imm_prog(unsigned int insns, int fd_map)
+{
+	int i;
+
+	srand(time(NULL));
+	for (i = 0; i < insns; i++)
+		prog[i] = BPF_ALU64_IMM(BPF_MOV, i % BPF_REG_10, rand());
+	prog[i - 1] = BPF_EXIT_INSN();
+}
+
+static void bpf_gen_map_prog(unsigned int insns, int fd_map)
+{
+	int i, j = 0;
+
+	for (i = 0; i + 1 < insns; i += 2) {
+		struct bpf_insn tmp[] = {
+			BPF_LD_MAP_FD(j++ % BPF_REG_10, fd_map)
+		};
+
+		memcpy(&prog[i], tmp, sizeof(tmp));
+	}
+	if (insns % 2 == 0)
+		prog[insns - 2] = BPF_ALU64_IMM(BPF_MOV, i % BPF_REG_10, 42);
+	prog[insns - 1] = BPF_EXIT_INSN();
+}
+
+static int bpf_try_load_prog(int insns, int fd_map,
+			     void (*bpf_filler)(unsigned int insns,
+						int fd_map))
+{
+	int fd_prog;
+
+	bpf_filler(insns, fd_map);
+	fd_prog = bpf_prog_load(BPF_PROG_TYPE_SCHED_CLS, prog, insns *
+				sizeof(struct bpf_insn), "", NULL, 0);
+	assert(fd_prog > 0);
+	if (fd_map > 0)
+		bpf_filler(insns, 0);
+	return fd_prog;
+}
+
+static int __hex2bin(char ch)
+{
+	if ((ch >= '0') && (ch <= '9'))
+		return ch - '0';
+	ch = tolower(ch);
+	if ((ch >= 'a') && (ch <= 'f'))
+		return ch - 'a' + 10;
+	return -1;
+}
+
+static int hex2bin(uint8_t *dst, const char *src, size_t count)
+{
+	while (count--) {
+		int hi = __hex2bin(*src++);
+		int lo = __hex2bin(*src++);
+
+		if ((hi < 0) || (lo < 0))
+			return -1;
+		*dst++ = (hi << 4) | lo;
+	}
+	return 0;
+}
+
+static void tag_from_fdinfo(int fd_prog, uint8_t *tag, uint32_t len)
+{
+	const int prefix_len = sizeof("prog_tag:\t") - 1;
+	char buff[256];
+	int ret = -1;
+	FILE *fp;
+
+	snprintf(buff, sizeof(buff), "/proc/%d/fdinfo/%d", getpid(),
+		 fd_prog);
+	fp = fopen(buff, "r");
+	assert(fp);
+
+	while (fgets(buff, sizeof(buff), fp)) {
+		if (strncmp(buff, "prog_tag:\t", len))
+			continue;
+		ret = hex2bin(tag, buff + prefix_len, len);
+		break;
+	}
+
+	fclose(fp);
+	assert(!ret);
+}
+
+static void tag_from_alg(int insns, uint8_t *tag, uint32_t len)
+{
+	static const struct sockaddr_alg alg = {
+		.salg_family	= AF_ALG,
+		.salg_type	= "hash",
+		.salg_name	= "sha1",
+	};
+	int fd_base, fd_alg, ret;
+	ssize_t size;
+
+	fd_base = socket(AF_ALG, SOCK_SEQPACKET, 0);
+	assert(fd_base > 0);
+
+	ret = bind(fd_base, (struct sockaddr *)&alg, sizeof(alg));
+	assert(!ret);
+
+	fd_alg = accept(fd_base, NULL, 0);
+	assert(fd_alg > 0);
+
+	insns *= sizeof(struct bpf_insn);
+	size = write(fd_alg, prog, insns);
+	assert(size == insns);
+
+	size = read(fd_alg, tag, len);
+	assert(size == len);
+
+	close(fd_alg);
+	close(fd_base);
+}
+
+static void tag_dump(const char *prefix, uint8_t *tag, uint32_t len)
+{
+	int i;
+
+	printf("%s", prefix);
+	for (i = 0; i < len; i++)
+		printf("%02x", tag[i]);
+	printf("\n");
+}
+
+static void tag_exit_report(int insns, int fd_map, uint8_t *ftag,
+			    uint8_t *atag, uint32_t len)
+{
+	printf("Program tag mismatch for %d insns%s!\n", insns,
+	       fd_map < 0 ? "" : " with map");
+
+	tag_dump("  fdinfo result: ", ftag, len);
+	tag_dump("  af_alg result: ", atag, len);
+	exit(1);
+}
+
+static void do_test(uint32_t *tests, int start_insns, int fd_map,
+		    void (*bpf_filler)(unsigned int insns, int fd))
+{
+	int i, fd_prog;
+
+	for (i = start_insns; i <= BPF_MAXINSNS; i++) {
+		uint8_t ftag[8], atag[sizeof(ftag)];
+
+		fd_prog = bpf_try_load_prog(i, fd_map, bpf_filler);
+		tag_from_fdinfo(fd_prog, ftag, sizeof(ftag));
+		tag_from_alg(i, atag, sizeof(atag));
+		if (memcmp(ftag, atag, sizeof(ftag)))
+			tag_exit_report(i, fd_map, ftag, atag, sizeof(ftag));
+
+		close(fd_prog);
+		sched_yield();
+		(*tests)++;
+	}
+}
+
+int main(void)
+{
+	struct rlimit rinf = { RLIM_INFINITY, RLIM_INFINITY };
+	uint32_t tests = 0;
+	int i, fd_map;
+
+	setrlimit(RLIMIT_MEMLOCK, &rinf);
+	fd_map = bpf_map_create(BPF_MAP_TYPE_HASH, sizeof(int),
+				sizeof(int), 1, BPF_F_NO_PREALLOC);
+	assert(fd_map > 0);
+
+	for (i = 0; i < 5; i++) {
+		do_test(&tests, 2, -1,     bpf_gen_imm_prog);
+		do_test(&tests, 3, fd_map, bpf_gen_map_prog);
+	}
+
+	printf("test_tag: OK (%u tests)\n", tests);
+	close(fd_map);
+	return 0;
+}
-- 
1.9.3


* [PATCH net-next 5/5] bpf: enable verifier to better track const alu ops
  2017-01-24  0:06 [PATCH net-next 0/5] Misc BPF improvements Daniel Borkmann
                   ` (3 preceding siblings ...)
  2017-01-24  0:06 ` [PATCH net-next 4/5] bpf: add prog tag test case to bpf selftests Daniel Borkmann
@ 2017-01-24  0:06 ` Daniel Borkmann
  2017-01-25 15:56   ` William Tu
  2017-01-24 19:49 ` [PATCH net-next 0/5] Misc BPF improvements David Miller
  5 siblings, 1 reply; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-24  0:06 UTC
  To: davem; +Cc: netdev, ast, Daniel Borkmann, Gianluca Borello, William Tu

William reported a couple of issues in relation to direct packet
access. The typical scheme is to check data + [off] <= data_end,
where [off] can either be an immediate or come from a tracked
register that contains an immediate; depending on the branch, we
can then access the data. However, when [off] is calculated in a
more "complex" way, either for the mentioned test itself or for an
access after the test, the verifier stops tracking the CONST_IMM
marked register and marks it as an UNKNOWN_VALUE one.

When such an UNKNOWN_VALUE typed register is added to a pkt() marked
register, the verifier bails out in check_packet_ptr_add() as it
finds the register's imm value below 48. In the first example below,
that is due to evaluate_reg_imm_alu() not handling right shifts and
thus marking the register as UNKNOWN_VALUE via the helper
__mark_reg_unknown_value(), which resets imm to 0.

In the second case the same happens at the time when r4 is set to
r4 &= r5, where it transitions to UNKNOWN_VALUE from
evaluate_reg_imm_alu(). Later on, r4 is shifted right by 3 inside
evaluate_reg_alu(), where the register's imm turns into 3. That
is, for registers of type UNKNOWN_VALUE, an imm of 0 means that
we don't know what value the register has, and imm > 0 means that
the value has [imm] upper zero bits. F.e. when shifting an
UNKNOWN_VALUE register right by 3, no matter what value it had, we
know that the 3 uppermost bits must be zero now. This is to make
sure that ALU operations with unknown registers don't overflow.
Meaning, once we know that we have more than 48 upper zero bits,
or, in other words, that the value cannot go beyond a 0xffff offset
with ALU ops, such an addition will track the target register as a
new pkt() register with a new id, but 0 offset and 0 range, so a
new data/data_end test will be required for it. If the source
register is a CONST_IMM one that is to be added to the pkt()
register, or the source instruction is an add instruction with an
immediate value, then it will get added if it stays within the max
0xffff bounds. From there, the pkt() type can be accessed, should
reg->off + imm be within the access range of pkt().
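
As a toy model (not verifier code), the following user space sketch
mimics the "known upper zero bits" bookkeeping described above:

  #include <assert.h>
  #include <stdint.h>

  /* For UNKNOWN_VALUE registers, imm counts the known upper zero
   * bits of the 64-bit value; 0 means nothing is known. A right
   * shift by k adds k known upper zero bits, whatever the value.
   */
  static uint32_t rsh_known_bits(uint32_t known, uint32_t shift)
  {
  	known += shift;
  	return known > 64 ? 64 : known;
  }

  int main(void)
  {
  	uint32_t known = 0;		/* fresh UNKNOWN_VALUE */

  	known = rsh_known_bits(known, 3);
  	assert(known == 3);		/* imm == 3 after r4 >>= 3 */

  	/* With >= 48 upper zero bits the value fits in 16 bits,
  	 * i.e. it cannot exceed a 0xffff offset, so adding it to
  	 * a packet pointer can be allowed (new id, 0 off/range).
  	 */
  	known = rsh_known_bits(known, 45);
  	assert(known >= 48);
  	return 0;
  }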

  [...]
  from 28 to 30: R0=imm1,min_value=1,max_value=1
    R1=pkt(id=0,off=0,r=22) R2=pkt_end
    R3=imm144,min_value=144,max_value=144
    R4=imm0,min_value=0,max_value=0
    R5=inv48,min_value=2054,max_value=2054 R10=fp
  30: (bf) r5 = r3
  31: (07) r5 += 23
  32: (77) r5 >>= 3
  33: (bf) r6 = r1
  34: (0f) r6 += r5
  cannot add integer value with 0 upper zero bits to ptr_to_packet

  [...]
  from 52 to 80: R0=imm1,min_value=1,max_value=1
    R1=pkt(id=0,off=0,r=34) R2=pkt_end R3=inv
    R4=imm272 R5=inv56,min_value=17,max_value=17
    R6=pkt(id=0,off=26,r=34) R10=fp
  80: (07) r4 += 71
  81: (18) r5 = 0xfffffff8
  83: (5f) r4 &= r5
  84: (77) r4 >>= 3
  85: (0f) r1 += r4
  cannot add integer value with 3 upper zero bits to ptr_to_packet

Thus, to get the above use cases working, evaluate_reg_imm_alu() has
been extended to cover further ALU ops. This is fine, because we only
operate strictly within the realm of CONST_IMM types, so we don't
care about overflows here, as they will happen in the simulated as
well as the real execution; the interaction with pkt() in
check_packet_ptr_add() will check the actual imm value once it is
added to pkt(), but it is irrelevant before that.

With regards to 06c1c049721a ("bpf: allow helpers access to variable
memory"), which works on UNKNOWN_VALUE registers, the verifier now
becomes a bit smarter as it can better resolve ALU ops, so we need to
adapt two test cases there, as min/max bound tracking only becomes
necessary when registers have been spilled to the stack. So while a
mask was set before to track the upper bound for the UNKNOWN_VALUE
case, it is now resolved directly as CONST_IMM, and such constructs
are only necessary when f.e. registers are spilled.

For commit 6b17387307ba ("bpf: recognize 64bit immediate loads as
consts"), which initially enabled dw load tracking only for the nfp
jit/analyzer, I did a couple of tests on large, complex programs and
we don't increase complexity badly (my tests were in the ~3% range on
avg). I've added a couple of tests similar to the affected code
above, and they work fine with the verifier now.

Reported-by: William Tu <u9012063@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Gianluca Borello <g.borello@gmail.com>
Cc: William Tu <u9012063@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/verifier.c                       | 64 +++++++++++++++-------
 tools/testing/selftests/bpf/test_verifier.c | 82 +++++++++++++++++++++++++++++
 2 files changed, 127 insertions(+), 19 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8f69df7..fb3513b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1566,22 +1566,54 @@ static int evaluate_reg_imm_alu(struct bpf_verifier_env *env,
 	struct bpf_reg_state *dst_reg = &regs[insn->dst_reg];
 	struct bpf_reg_state *src_reg = &regs[insn->src_reg];
 	u8 opcode = BPF_OP(insn->code);
+	u64 dst_imm = dst_reg->imm;
 
-	/* dst_reg->type == CONST_IMM here, simulate execution of 'add'/'or'
-	 * insn. Don't care about overflow or negative values, just add them
+	/* dst_reg->type == CONST_IMM here. Simulate execution of insns
+	 * containing ALU ops. Don't care about overflow or negative
+	 * values, just add/sub/... them; registers are in u64.
 	 */
-	if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_K)
-		dst_reg->imm += insn->imm;
-	else if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_X &&
-		 src_reg->type == CONST_IMM)
-		dst_reg->imm += src_reg->imm;
-	else if (opcode == BPF_OR && BPF_SRC(insn->code) == BPF_K)
-		dst_reg->imm |= insn->imm;
-	else if (opcode == BPF_OR && BPF_SRC(insn->code) == BPF_X &&
-		 src_reg->type == CONST_IMM)
-		dst_reg->imm |= src_reg->imm;
-	else
+	if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm += insn->imm;
+	} else if (opcode == BPF_ADD && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm += src_reg->imm;
+	} else if (opcode == BPF_SUB && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm -= insn->imm;
+	} else if (opcode == BPF_SUB && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm -= src_reg->imm;
+	} else if (opcode == BPF_MUL && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm *= insn->imm;
+	} else if (opcode == BPF_MUL && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm *= src_reg->imm;
+	} else if (opcode == BPF_OR && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm |= insn->imm;
+	} else if (opcode == BPF_OR && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm |= src_reg->imm;
+	} else if (opcode == BPF_AND && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm &= insn->imm;
+	} else if (opcode == BPF_AND && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm &= src_reg->imm;
+	} else if (opcode == BPF_RSH && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm >>= insn->imm;
+	} else if (opcode == BPF_RSH && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm >>= src_reg->imm;
+	} else if (opcode == BPF_LSH && BPF_SRC(insn->code) == BPF_K) {
+		dst_imm <<= insn->imm;
+	} else if (opcode == BPF_LSH && BPF_SRC(insn->code) == BPF_X &&
+		   src_reg->type == CONST_IMM) {
+		dst_imm <<= src_reg->imm;
+	} else {
 		mark_reg_unknown_value(regs, insn->dst_reg);
+		goto out;
+	}
+
+	dst_reg->imm = dst_imm;
+out:
 	return 0;
 }
 
@@ -2225,14 +2257,8 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		return err;
 
 	if (insn->src_reg == 0) {
-		/* generic move 64-bit immediate into a register,
-		 * only analyzer needs to collect the ld_imm value.
-		 */
 		u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;
 
-		if (!env->analyzer_ops)
-			return 0;
-
 		regs[insn->dst_reg].type = CONST_IMM;
 		regs[insn->dst_reg].imm = imm;
 		return 0;
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 1aa7324..0d0912c 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -2326,6 +2326,84 @@ struct test_val {
 		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	},
 	{
+		"direct packet access: test11 (shift, good access)",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+				    offsetof(struct __sk_buff, data)),
+			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+				    offsetof(struct __sk_buff, data_end)),
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
+			BPF_MOV64_IMM(BPF_REG_3, 144),
+			BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
+			BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 3),
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	},
+	{
+		"direct packet access: test12 (and, good access)",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+				    offsetof(struct __sk_buff, data)),
+			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+				    offsetof(struct __sk_buff, data_end)),
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
+			BPF_MOV64_IMM(BPF_REG_3, 144),
+			BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
+			BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15),
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	},
+	{
+		"direct packet access: test13 (branches, good access)",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
+				    offsetof(struct __sk_buff, data)),
+			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+				    offsetof(struct __sk_buff, data_end)),
+			BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 13),
+			BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
+				    offsetof(struct __sk_buff, mark)),
+			BPF_MOV64_IMM(BPF_REG_4, 1),
+			BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 2),
+			BPF_MOV64_IMM(BPF_REG_3, 14),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+			BPF_MOV64_IMM(BPF_REG_3, 24),
+			BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
+			BPF_ALU64_IMM(BPF_AND, BPF_REG_5, 15),
+			BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_5),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.result = ACCEPT,
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	},
+	{
 		"helper access to packet: test1, valid packet_ptr range",
 		.insns = {
 			BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
@@ -4208,6 +4286,8 @@ struct test_val {
 		.insns = {
 			BPF_MOV64_IMM(BPF_REG_1, 0),
 			BPF_MOV64_IMM(BPF_REG_2, 0),
+			BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
 			BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 64),
 			BPF_MOV64_IMM(BPF_REG_3, 0),
 			BPF_MOV64_IMM(BPF_REG_4, 0),
@@ -4251,6 +4331,8 @@ struct test_val {
 			BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -16),
 			BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
 			BPF_MOV64_IMM(BPF_REG_2, 0),
+			BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
+			BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
 			BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 63),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
 			BPF_MOV64_IMM(BPF_REG_3, 0),
-- 
1.9.3


* Re: [PATCH net-next 0/5] Misc BPF improvements
  2017-01-24  0:06 [PATCH net-next 0/5] Misc BPF improvements Daniel Borkmann
                   ` (4 preceding siblings ...)
  2017-01-24  0:06 ` [PATCH net-next 5/5] bpf: enable verifier to better track const alu ops Daniel Borkmann
@ 2017-01-24 19:49 ` David Miller
  5 siblings, 0 replies; 9+ messages in thread
From: David Miller @ 2017-01-24 19:49 UTC
  To: daniel; +Cc: netdev, ast

From: Daniel Borkmann <daniel@iogearbox.net>
Date: Tue, 24 Jan 2017 01:06:25 +0100

> This series adds various misc improvements to BPF, f.e. allowing the
> skb_load_bytes() helper to be used with filter/reuseport programs to
> ease programming, test cases for the program tag, etc. For details,
> please see the individual patches.

Series applied, thanks.


* Re: [PATCH net-next 5/5] bpf: enable verifier to better track const alu ops
  2017-01-24  0:06 ` [PATCH net-next 5/5] bpf: enable verifier to better track const alu ops Daniel Borkmann
@ 2017-01-25 15:56   ` William Tu
  2017-01-25 20:19     ` Daniel Borkmann
  0 siblings, 1 reply; 9+ messages in thread
From: William Tu @ 2017-01-25 15:56 UTC
  To: Daniel Borkmann
  Cc: David Miller, Linux Kernel Network Developers, ast, Gianluca Borello

Looks good to me. I tested it with several complex programs without
any problem. Thanks for the patch.
--William


* Re: [PATCH net-next 5/5] bpf: enable verifier to better track const alu ops
  2017-01-25 15:56   ` William Tu
@ 2017-01-25 20:19     ` Daniel Borkmann
  0 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2017-01-25 20:19 UTC
  To: William Tu
  Cc: David Miller, Linux Kernel Network Developers, ast, Gianluca Borello

On 01/25/2017 04:56 PM, William Tu wrote:
> Looks good to me. I tested it with several complex programs without
> any problem. Thanks for the patch.

Thanks for re-testing, William!

