netdev.vger.kernel.org archive mirror
* [PATCH bpf-next v3 00/11] Misc BPF improvements
@ 2018-06-02 21:06 Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 01/11] bpf: test case for map pointer poison with calls/branches Daniel Borkmann
                   ` (11 more replies)
  0 siblings, 12 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

This set adds various patches I still had in my queue, first two
are test cases to provide coverage for the recent two fixes that
went to bpf tree, then a small improvement on the error message
for gpl helpers. Next, we expose prog and map id into fdinfo in
order to allow for inspection of these objects currently used
in applications. Patch after that removes a retpoline call for
map lookup/update/delete helpers. A new helper is added in the
subsequent patch to lookup the skb's socket's cgroup v2 id which
can be used in an efficient way for e.g. lookups on egress side.
Next one is a fix to fully clear state info in tunnel/xfrm helpers.
Given this is full cap_sys_admin from init ns and has the same priv
requirements as tracing, bpf-next should be okay. A small bug
fix for bpf_asm follows, and next a fix for context access in
tracing which was recently reported. Lastly, a small update in
the maintainer's file to add patchwork url and missing files.

Thanks!

v2 -> v3:
  - Noticed a merge artefact inside uapi header comment, sigh,
    fixed now.
v1 -> v2:
  - minor fix in getting context access work on 32 bit for tracing
  - add paragraph to uapi helper doc to better describe kernel
    build deps for cgroup helper

Daniel Borkmann (11):
  bpf: test case for map pointer poison with calls/branches
  bpf: add also cbpf long jump test cases with heavy expansion
  bpf: fixup error message from gpl helpers on license mismatch
  bpf: show prog and map id in fdinfo
  bpf: avoid retpoline for lookup/update/delete calls on maps
  bpf: add bpf_skb_cgroup_id helper
  bpf: make sure to clear unused fields in tunnel/xfrm state fetch
  bpf: fix cbpf parser bug for octal numbers
  bpf: fix context access in tracing progs on 32 bit archs
  bpf: sync bpf uapi header with tools
  bpf, doc: add missing patchwork url and libbpf to maintainers

 MAINTAINERS                                 |   2 +
 include/linux/filter.h                      |  43 ++++++-
 include/uapi/linux/bpf.h                    |  22 +++-
 kernel/bpf/hashtab.c                        |  12 +-
 kernel/bpf/syscall.c                        |  12 +-
 kernel/bpf/verifier.c                       |  72 ++++++++---
 kernel/trace/bpf_trace.c                    |  10 +-
 lib/test_bpf.c                              |  63 ++++++++++
 net/core/filter.c                           |  35 +++++-
 tools/bpf/bpf_exp.l                         |   2 +-
 tools/include/linux/filter.h                |  10 ++
 tools/include/uapi/linux/bpf.h              |  22 +++-
 tools/testing/selftests/bpf/test_verifier.c | 185 ++++++++++++++++++++++++----
 13 files changed, 422 insertions(+), 68 deletions(-)

-- 
2.9.5

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH bpf-next v3 01/11] bpf: test case for map pointer poison with calls/branches
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 02/11] bpf: add also cbpf long jump test cases with heavy expansion Daniel Borkmann
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

Add several test cases where the same or different map pointers
originate from different paths in the program and execute a map
lookup or tail call at a common location.
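For orientation, the BPF_CALL_REL() macro introduced by this patch
encodes a pc-relative BPF-to-BPF call whose callee the verifier
resolves as pc + imm + 1. A minimal C model of that resolution, using
the offsets from the first new test case (where the two subprogs start
at instruction 13 and 16, matching the fixup_map2/fixup_map4 slots):

```c
#include <stdint.h>

/* Resolve the callee of a BPF_PSEUDO_CALL: imm counts instructions
 * between the call and the target, so the subprog entry point is
 * pc + imm + 1 (mirroring how the verifier derives subprog starts).
 */
static int32_t rel_call_target(int32_t pc, int32_t imm)
{
	return pc + imm + 1;
}
```

In the first test case, BPF_CALL_REL(11) at instruction 1 thus targets
instruction 13, and BPF_CALL_REL(12) at instruction 3 targets
instruction 16.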

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 include/linux/filter.h                      |  10 ++
 tools/include/linux/filter.h                |  10 ++
 tools/testing/selftests/bpf/test_verifier.c | 185 ++++++++++++++++++++++++----
 3 files changed, 178 insertions(+), 27 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index d90abda..6fd375fe 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -289,6 +289,16 @@ struct xdp_buff;
 		.off   = OFF,					\
 		.imm   = 0 })
 
+/* Relative call */
+
+#define BPF_CALL_REL(TGT)					\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP | BPF_CALL,			\
+		.dst_reg = 0,					\
+		.src_reg = BPF_PSEUDO_CALL,			\
+		.off   = 0,					\
+		.imm   = TGT })
+
 /* Function call */
 
 #define BPF_EMIT_CALL(FUNC)					\
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index c5e512d..af55acf 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -263,6 +263,16 @@
 #define BPF_LD_MAP_FD(DST, MAP_FD)				\
 	BPF_LD_IMM64_RAW(DST, BPF_PSEUDO_MAP_FD, MAP_FD)
 
+/* Relative call */
+
+#define BPF_CALL_REL(TGT)					\
+	((struct bpf_insn) {					\
+		.code  = BPF_JMP | BPF_CALL,			\
+		.dst_reg = 0,					\
+		.src_reg = BPF_PSEUDO_CALL,			\
+		.off   = 0,					\
+		.imm   = TGT })
+
 /* Program exit */
 
 #define BPF_EXIT_INSN()						\
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 4b4f015..7cb1d74 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -50,7 +50,7 @@
 
 #define MAX_INSNS	BPF_MAXINSNS
 #define MAX_FIXUPS	8
-#define MAX_NR_MAPS	4
+#define MAX_NR_MAPS	7
 #define POINTER_VALUE	0xcafe4all
 #define TEST_DATA_LEN	64
 
@@ -66,7 +66,9 @@ struct bpf_test {
 	int fixup_map1[MAX_FIXUPS];
 	int fixup_map2[MAX_FIXUPS];
 	int fixup_map3[MAX_FIXUPS];
-	int fixup_prog[MAX_FIXUPS];
+	int fixup_map4[MAX_FIXUPS];
+	int fixup_prog1[MAX_FIXUPS];
+	int fixup_prog2[MAX_FIXUPS];
 	int fixup_map_in_map[MAX_FIXUPS];
 	const char *errstr;
 	const char *errstr_unpriv;
@@ -2769,7 +2771,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 0),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 1 },
+		.fixup_prog1 = { 1 },
 		.errstr_unpriv = "R3 leaks addr into helper",
 		.result_unpriv = REJECT,
 		.result = ACCEPT,
@@ -2856,7 +2858,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 1),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 1 },
+		.fixup_prog1 = { 1 },
 		.result = ACCEPT,
 		.retval = 42,
 	},
@@ -2870,7 +2872,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 1),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 1 },
+		.fixup_prog1 = { 1 },
 		.result = ACCEPT,
 		.retval = 41,
 	},
@@ -2884,7 +2886,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 1),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 1 },
+		.fixup_prog1 = { 1 },
 		.result = ACCEPT,
 		.retval = 1,
 	},
@@ -2898,7 +2900,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 2),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 1 },
+		.fixup_prog1 = { 1 },
 		.result = ACCEPT,
 		.retval = 2,
 	},
@@ -2912,7 +2914,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 2),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 1 },
+		.fixup_prog1 = { 1 },
 		.result = ACCEPT,
 		.retval = 2,
 	},
@@ -2926,7 +2928,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(BPF_REG_0, 2),
 			BPF_EXIT_INSN(),
 		},
-		.fixup_prog = { 2 },
+		.fixup_prog1 = { 2 },
 		.result = ACCEPT,
 		.retval = 42,
 	},
@@ -11682,6 +11684,112 @@ static struct bpf_test tests[] = {
 		.prog_type = BPF_PROG_TYPE_XDP,
 	},
 	{
+		"calls: two calls returning different map pointers for lookup (hash, array)",
+		.insns = {
+			/* main prog */
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+			BPF_CALL_REL(11),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+			BPF_CALL_REL(12),
+			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+			BPF_ST_MEM(BPF_DW, BPF_REG_0, 0,
+				   offsetof(struct test_val, foo)),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			/* subprog 1 */
+			BPF_LD_MAP_FD(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			/* subprog 2 */
+			BPF_LD_MAP_FD(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.fixup_map2 = { 13 },
+		.fixup_map4 = { 16 },
+		.result = ACCEPT,
+		.retval = 1,
+	},
+	{
+		"calls: two calls returning different map pointers for lookup (hash, map in map)",
+		.insns = {
+			/* main prog */
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+			BPF_CALL_REL(11),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+			BPF_CALL_REL(12),
+			BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+			BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
+			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+			BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_map_lookup_elem),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+			BPF_ST_MEM(BPF_DW, BPF_REG_0, 0,
+				   offsetof(struct test_val, foo)),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+			/* subprog 1 */
+			BPF_LD_MAP_FD(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+			/* subprog 2 */
+			BPF_LD_MAP_FD(BPF_REG_0, 0),
+			BPF_EXIT_INSN(),
+		},
+		.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+		.fixup_map_in_map = { 16 },
+		.fixup_map4 = { 13 },
+		.result = REJECT,
+		.errstr = "R0 invalid mem access 'map_ptr'",
+	},
+	{
+		"cond: two branches returning different map pointers for lookup (tail, tail)",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
+				    offsetof(struct __sk_buff, mark)),
+			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 3),
+			BPF_LD_MAP_FD(BPF_REG_2, 0),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+			BPF_LD_MAP_FD(BPF_REG_2, 0),
+			BPF_MOV64_IMM(BPF_REG_3, 7),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_tail_call),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_prog1 = { 5 },
+		.fixup_prog2 = { 2 },
+		.result_unpriv = REJECT,
+		.errstr_unpriv = "tail_call abusing map_ptr",
+		.result = ACCEPT,
+		.retval = 42,
+	},
+	{
+		"cond: two branches returning same map pointers for lookup (tail, tail)",
+		.insns = {
+			BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
+				    offsetof(struct __sk_buff, mark)),
+			BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 3),
+			BPF_LD_MAP_FD(BPF_REG_2, 0),
+			BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+			BPF_LD_MAP_FD(BPF_REG_2, 0),
+			BPF_MOV64_IMM(BPF_REG_3, 7),
+			BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+				     BPF_FUNC_tail_call),
+			BPF_MOV64_IMM(BPF_REG_0, 1),
+			BPF_EXIT_INSN(),
+		},
+		.fixup_prog2 = { 2, 5 },
+		.result_unpriv = ACCEPT,
+		.result = ACCEPT,
+		.retval = 42,
+	},
+	{
 		"search pruning: all branches should be verified (nop operation)",
 		.insns = {
 			BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
@@ -12162,12 +12270,13 @@ static int probe_filter_length(const struct bpf_insn *fp)
 	return len + 1;
 }
 
-static int create_map(uint32_t size_value, uint32_t max_elem)
+static int create_map(uint32_t type, uint32_t size_key,
+		      uint32_t size_value, uint32_t max_elem)
 {
 	int fd;
 
-	fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
-			    size_value, max_elem, BPF_F_NO_PREALLOC);
+	fd = bpf_create_map(type, size_key, size_value, max_elem,
+			    type == BPF_MAP_TYPE_HASH ? BPF_F_NO_PREALLOC : 0);
 	if (fd < 0)
 		printf("Failed to create hash map '%s'!\n", strerror(errno));
 
@@ -12200,13 +12309,13 @@ static int create_prog_dummy2(int mfd, int idx)
 				ARRAY_SIZE(prog), "GPL", 0, NULL, 0);
 }
 
-static int create_prog_array(void)
+static int create_prog_array(uint32_t max_elem, int p1key)
 {
-	int p1key = 0, p2key = 1;
+	int p2key = 1;
 	int mfd, p1fd, p2fd;
 
 	mfd = bpf_create_map(BPF_MAP_TYPE_PROG_ARRAY, sizeof(int),
-			     sizeof(int), 4, 0);
+			     sizeof(int), max_elem, 0);
 	if (mfd < 0) {
 		printf("Failed to create prog array '%s'!\n", strerror(errno));
 		return -1;
@@ -12261,7 +12370,9 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog,
 	int *fixup_map1 = test->fixup_map1;
 	int *fixup_map2 = test->fixup_map2;
 	int *fixup_map3 = test->fixup_map3;
-	int *fixup_prog = test->fixup_prog;
+	int *fixup_map4 = test->fixup_map4;
+	int *fixup_prog1 = test->fixup_prog1;
+	int *fixup_prog2 = test->fixup_prog2;
 	int *fixup_map_in_map = test->fixup_map_in_map;
 
 	if (test->fill_helper)
@@ -12272,7 +12383,8 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog,
 	 * that really matters is value size in this case.
 	 */
 	if (*fixup_map1) {
-		map_fds[0] = create_map(sizeof(long long), 1);
+		map_fds[0] = create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
+					sizeof(long long), 1);
 		do {
 			prog[*fixup_map1].imm = map_fds[0];
 			fixup_map1++;
@@ -12280,7 +12392,8 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog,
 	}
 
 	if (*fixup_map2) {
-		map_fds[1] = create_map(sizeof(struct test_val), 1);
+		map_fds[1] = create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
+					sizeof(struct test_val), 1);
 		do {
 			prog[*fixup_map2].imm = map_fds[1];
 			fixup_map2++;
@@ -12288,25 +12401,43 @@ static void do_test_fixup(struct bpf_test *test, struct bpf_insn *prog,
 	}
 
 	if (*fixup_map3) {
-		map_fds[1] = create_map(sizeof(struct other_val), 1);
+		map_fds[2] = create_map(BPF_MAP_TYPE_HASH, sizeof(long long),
+					sizeof(struct other_val), 1);
 		do {
-			prog[*fixup_map3].imm = map_fds[1];
+			prog[*fixup_map3].imm = map_fds[2];
 			fixup_map3++;
 		} while (*fixup_map3);
 	}
 
-	if (*fixup_prog) {
-		map_fds[2] = create_prog_array();
+	if (*fixup_map4) {
+		map_fds[3] = create_map(BPF_MAP_TYPE_ARRAY, sizeof(int),
+					sizeof(struct test_val), 1);
+		do {
+			prog[*fixup_map4].imm = map_fds[3];
+			fixup_map4++;
+		} while (*fixup_map4);
+	}
+
+	if (*fixup_prog1) {
+		map_fds[4] = create_prog_array(4, 0);
+		do {
+			prog[*fixup_prog1].imm = map_fds[4];
+			fixup_prog1++;
+		} while (*fixup_prog1);
+	}
+
+	if (*fixup_prog2) {
+		map_fds[5] = create_prog_array(8, 7);
 		do {
-			prog[*fixup_prog].imm = map_fds[2];
-			fixup_prog++;
-		} while (*fixup_prog);
+			prog[*fixup_prog2].imm = map_fds[5];
+			fixup_prog2++;
+		} while (*fixup_prog2);
 	}
 
 	if (*fixup_map_in_map) {
-		map_fds[3] = create_map_in_map();
+		map_fds[6] = create_map_in_map();
 		do {
-			prog[*fixup_map_in_map].imm = map_fds[3];
+			prog[*fixup_map_in_map].imm = map_fds[6];
 			fixup_map_in_map++;
 		} while (*fixup_map_in_map);
 	}
-- 
2.9.5


* [PATCH bpf-next v3 02/11] bpf: add also cbpf long jump test cases with heavy expansion
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 01/11] bpf: test case for map pointer poison with calls/branches Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 03/11] bpf: fixup error message from gpl helpers on license mismatch Daniel Borkmann
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

We have one triggering on eBPF but let's also add a cBPF example to
make sure we keep tracking them. Also add another cBPF test running
the max number of MSH ops.
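As background on what the new "exec all MSH" test computes:
BPF_LDX | BPF_B | BPF_MSH loads X with four times the low nibble of
packet byte k (historically the IP header length extraction). A toy C
model of that semantics reproduces the test's expected return value
for its packet data; the real filter repeats the MSH op roughly
BPF_MAXINSNS times, which is equivalent here since only the final X
matters:

```c
#include <stdint.h>
#include <stddef.h>

/* Semantics of the classic BPF op BPF_LDX | BPF_B | BPF_MSH:
 * X = 4 * (pkt[k] & 0xf). */
static uint32_t msh(const uint8_t *pkt, size_t k)
{
	return 4 * (pkt[k] & 0xf);
}

/* Model of the tail of bpf_fill_maxinsns13(): after the MSH ops,
 * A = 0xabababab (BPF_LD | BPF_IMM), then A ^= X, then ret A. */
static uint32_t exec_all_msh_result(const uint8_t *pkt)
{
	uint32_t x = msh(pkt, 0);  /* 4 * (0xfa & 0xf) = 40 = 0x28 */

	return 0xabababab ^ x;     /* 0xababab83 */
}
```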

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 lib/test_bpf.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 317f231..60aedc8 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -356,6 +356,52 @@ static int bpf_fill_maxinsns11(struct bpf_test *self)
 	return __bpf_fill_ja(self, BPF_MAXINSNS, 68);
 }
 
+static int bpf_fill_maxinsns12(struct bpf_test *self)
+{
+	unsigned int len = BPF_MAXINSNS;
+	struct sock_filter *insn;
+	int i = 0;
+
+	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
+	if (!insn)
+		return -ENOMEM;
+
+	insn[0] = __BPF_JUMP(BPF_JMP | BPF_JA, len - 2, 0, 0);
+
+	for (i = 1; i < len - 1; i++)
+		insn[i] = __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0);
+
+	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_K, 0xabababab);
+
+	self->u.ptr.insns = insn;
+	self->u.ptr.len = len;
+
+	return 0;
+}
+
+static int bpf_fill_maxinsns13(struct bpf_test *self)
+{
+	unsigned int len = BPF_MAXINSNS;
+	struct sock_filter *insn;
+	int i = 0;
+
+	insn = kmalloc_array(len, sizeof(*insn), GFP_KERNEL);
+	if (!insn)
+		return -ENOMEM;
+
+	for (i = 0; i < len - 3; i++)
+		insn[i] = __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0);
+
+	insn[len - 3] = __BPF_STMT(BPF_LD | BPF_IMM, 0xabababab);
+	insn[len - 2] = __BPF_STMT(BPF_ALU | BPF_XOR | BPF_X, 0);
+	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_A, 0);
+
+	self->u.ptr.insns = insn;
+	self->u.ptr.len = len;
+
+	return 0;
+}
+
 static int bpf_fill_ja(struct bpf_test *self)
 {
 	/* Hits exactly 11 passes on x86_64 JIT. */
@@ -5290,6 +5336,23 @@ static struct bpf_test tests[] = {
 		.expected_errcode = -ENOTSUPP,
 	},
 	{
+		"BPF_MAXINSNS: jump over MSH",
+		{ },
+		CLASSIC | FLAG_EXPECTED_FAIL,
+		{ 0xfa, 0xfb, 0xfc, 0xfd, },
+		{ { 4, 0xabababab } },
+		.fill_helper = bpf_fill_maxinsns12,
+		.expected_errcode = -EINVAL,
+	},
+	{
+		"BPF_MAXINSNS: exec all MSH",
+		{ },
+		CLASSIC,
+		{ 0xfa, 0xfb, 0xfc, 0xfd, },
+		{ { 4, 0xababab83 } },
+		.fill_helper = bpf_fill_maxinsns13,
+	},
+	{
 		"BPF_MAXINSNS: ld_abs+get_processor_id",
 		{ },
 		CLASSIC,
-- 
2.9.5


* [PATCH bpf-next v3 03/11] bpf: fixup error message from gpl helpers on license mismatch
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 01/11] bpf: test case for map pointer poison with calls/branches Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 02/11] bpf: add also cbpf long jump test cases with heavy expansion Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 04/11] bpf: show prog and map id in fdinfo Daniel Borkmann
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

Stating 'proprietary program' in the error is misleading since the
program can also be under a different open source license that is
simply not GPL compatible.

Reference: https://twitter.com/majek04/status/998531268039102465
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 kernel/bpf/verifier.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1fd9667b..4f4786e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2462,7 +2462,7 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 
 	/* eBPF programs must be GPL compatible to use GPL-ed functions */
 	if (!env->prog->gpl_compatible && fn->gpl_only) {
-		verbose(env, "cannot call GPL only function from proprietary program\n");
+		verbose(env, "cannot call GPL-restricted function from non-GPL compatible program\n");
 		return -EINVAL;
 	}
 
-- 
2.9.5


* [PATCH bpf-next v3 04/11] bpf: show prog and map id in fdinfo
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (2 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 03/11] bpf: fixup error message from gpl helpers on license mismatch Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps Daniel Borkmann
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

It's trivial and straightforward to expose the id for scripts that can
then use it along with bpftool in order to inspect an individual
application's used maps and progs. Right now we dump some basic
information in the fdinfo file, but with the help of the map/prog
id, full introspection becomes possible.
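A sketch of how a script-side consumer could pull the new fields back
out of /proc/<pid>/fdinfo/<fd>; the parsing helper below is
hypothetical, not part of the patch, but follows the "key:\t<value>"
line format emitted by bpf_map_show_fdinfo()/bpf_prog_show_fdinfo():

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: scan an fdinfo-style buffer for a "key:<val>"
 * line (key matched at line start only) and return the numeric value,
 * or -1 if the key is absent. strtoll() with base 0 also accepts the
 * hex map_flags field. */
static long long fdinfo_get_val(const char *buf, const char *key)
{
	size_t klen = strlen(key);
	const char *p = buf;

	while (p && *p) {
		if (!strncmp(p, key, klen) && p[klen] == ':')
			return strtoll(p + klen + 1, NULL, 0);
		p = strchr(p, '\n');
		if (p)
			p++;
	}
	return -1;
}
```

Usage would be e.g. fdinfo_get_val(buf, "map_id") after reading the
fdinfo file of a map fd into buf.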

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 kernel/bpf/syscall.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 7365d79..0fa2062 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -327,13 +327,15 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
 		   "value_size:\t%u\n"
 		   "max_entries:\t%u\n"
 		   "map_flags:\t%#x\n"
-		   "memlock:\t%llu\n",
+		   "memlock:\t%llu\n"
+		   "map_id:\t%u\n",
 		   map->map_type,
 		   map->key_size,
 		   map->value_size,
 		   map->max_entries,
 		   map->map_flags,
-		   map->pages * 1ULL << PAGE_SHIFT);
+		   map->pages * 1ULL << PAGE_SHIFT,
+		   map->id);
 
 	if (owner_prog_type) {
 		seq_printf(m, "owner_prog_type:\t%u\n",
@@ -1070,11 +1072,13 @@ static void bpf_prog_show_fdinfo(struct seq_file *m, struct file *filp)
 		   "prog_type:\t%u\n"
 		   "prog_jited:\t%u\n"
 		   "prog_tag:\t%s\n"
-		   "memlock:\t%llu\n",
+		   "memlock:\t%llu\n"
+		   "prog_id:\t%u\n",
 		   prog->type,
 		   prog->jited,
 		   prog_tag,
-		   prog->pages * 1ULL << PAGE_SHIFT);
+		   prog->pages * 1ULL << PAGE_SHIFT,
+		   prog->aux->id);
 }
 #endif
 
-- 
2.9.5


* [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (3 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 04/11] bpf: show prog and map id in fdinfo Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-03  6:56   ` Jesper Dangaard Brouer
  2018-06-02 21:06 ` [PATCH bpf-next v3 06/11] bpf: add bpf_skb_cgroup_id helper Daniel Borkmann
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

While some of the BPF map lookup helpers provide a ->map_gen_lookup()
callback for inlining the map lookup altogether, it is not available
for every map, so the remaining ones have to call bpf_map_lookup_elem()
helper which does a dispatch to map->ops->map_lookup_elem(). In
times of retpolines, this will control and trap speculative execution
rather than letting it do its work for the indirect call and will
therefore cause a slowdown. Likewise, bpf_map_update_elem() and
bpf_map_delete_elem() do not have an inlined version and need to call
into their map->ops->map_update_elem() and map->ops->map_delete_elem()
handlers, respectively.

Before:

  # bpftool p d x i 1
    0: (bf) r2 = r10
    1: (07) r2 += -8
    2: (7a) *(u64 *)(r2 +0) = 0
    3: (18) r1 = map[id:1]
    5: (85) call __htab_map_lookup_elem#232656
    6: (15) if r0 == 0x0 goto pc+4
    7: (71) r1 = *(u8 *)(r0 +35)
    8: (55) if r1 != 0x0 goto pc+1
    9: (72) *(u8 *)(r0 +35) = 1
   10: (07) r0 += 56
   11: (15) if r0 == 0x0 goto pc+4
   12: (bf) r2 = r0
   13: (18) r1 = map[id:1]
   15: (85) call bpf_map_delete_elem#215008  <-- indirect call via
   16: (95) exit                                 helper

After:

  # bpftool p d x i 1
    0: (bf) r2 = r10
    1: (07) r2 += -8
    2: (7a) *(u64 *)(r2 +0) = 0
    3: (18) r1 = map[id:1]
    5: (85) call __htab_map_lookup_elem#233328
    6: (15) if r0 == 0x0 goto pc+4
    7: (71) r1 = *(u8 *)(r0 +35)
    8: (55) if r1 != 0x0 goto pc+1
    9: (72) *(u8 *)(r0 +35) = 1
   10: (07) r0 += 56
   11: (15) if r0 == 0x0 goto pc+4
   12: (bf) r2 = r0
   13: (18) r1 = map[id:1]
   15: (85) call htab_lru_map_delete_elem#238240  <-- direct call
   16: (95) exit

In all three lookup/update/delete cases however we can use the actual
address of the map callback directly if we find that there's only a
single path with a map pointer leading to the helper call, meaning
when the map pointer has not been poisoned from verifier side.
Example code can be seen above for the delete case.
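The idea can be sketched in plain C (toy types only, not kernel APIs):
before the patch, helpers like bpf_map_delete_elem() dispatch through
map->ops, an indirect call that a retpoline will trap; after the
patch, the verifier substitutes the concrete callee it resolved for
the single, non-poisoned map pointer, i.e. a direct call:

```c
/* Toy stand-ins for the kernel structures, for illustration only. */
struct toy_map_ops {
	int (*map_delete_elem)(void *map, void *key);
};

static int toy_htab_delete(void *map, void *key)
{
	(void)map; (void)key;
	return 0;
}

static const struct toy_map_ops toy_htab_ops = {
	.map_delete_elem = toy_htab_delete,
};

/* Before: indirect call through the ops table -- under retpolines,
 * speculation is trapped on every such dispatch. */
static int delete_indirect(const struct toy_map_ops *ops, void *map,
			   void *key)
{
	return ops->map_delete_elem(map, key);
}

/* After: the verifier knows the concrete map (and thus its ops) at
 * patch time, so the call site can name the callee directly. */
static int delete_direct(void *map, void *key)
{
	return toy_htab_delete(map, key);
}
```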

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 include/linux/filter.h |  3 +++
 kernel/bpf/hashtab.c   | 12 ++++++---
 kernel/bpf/verifier.c  | 67 +++++++++++++++++++++++++++++++++++++-------------
 3 files changed, 62 insertions(+), 20 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 6fd375fe..8e60f1e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -301,6 +301,9 @@ struct xdp_buff;
 
 /* Function call */
 
+#define BPF_CAST_CALL(x)					\
+		((u64 (*)(u64, u64, u64, u64, u64))(x))
+
 #define BPF_EMIT_CALL(FUNC)					\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP | BPF_CALL,			\
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index b76828f..3ca2198 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -503,7 +503,9 @@ static u32 htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 	struct bpf_insn *insn = insn_buf;
 	const int ret = BPF_REG_0;
 
-	*insn++ = BPF_EMIT_CALL((u64 (*)(u64, u64, u64, u64, u64))__htab_map_lookup_elem);
+	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
+		     (void *(*)(struct bpf_map *map, void *key))NULL));
+	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
 				offsetof(struct htab_elem, key) +
@@ -530,7 +532,9 @@ static u32 htab_lru_map_gen_lookup(struct bpf_map *map,
 	const int ret = BPF_REG_0;
 	const int ref_reg = BPF_REG_1;
 
-	*insn++ = BPF_EMIT_CALL((u64 (*)(u64, u64, u64, u64, u64))__htab_map_lookup_elem);
+	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
+		     (void *(*)(struct bpf_map *map, void *key))NULL));
+	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 4);
 	*insn++ = BPF_LDX_MEM(BPF_B, ref_reg, ret,
 			      offsetof(struct htab_elem, lru_node) +
@@ -1369,7 +1373,9 @@ static u32 htab_of_map_gen_lookup(struct bpf_map *map,
 	struct bpf_insn *insn = insn_buf;
 	const int ret = BPF_REG_0;
 
-	*insn++ = BPF_EMIT_CALL((u64 (*)(u64, u64, u64, u64, u64))__htab_map_lookup_elem);
+	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
+		     (void *(*)(struct bpf_map *map, void *key))NULL));
+	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 2);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
 				offsetof(struct htab_elem, key) +
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4f4786e..5684b15 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2421,8 +2421,11 @@ record_func_map(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
 	struct bpf_insn_aux_data *aux = &env->insn_aux_data[insn_idx];
 
 	if (func_id != BPF_FUNC_tail_call &&
-	    func_id != BPF_FUNC_map_lookup_elem)
+	    func_id != BPF_FUNC_map_lookup_elem &&
+	    func_id != BPF_FUNC_map_update_elem &&
+	    func_id != BPF_FUNC_map_delete_elem)
 		return 0;
+
 	if (meta->map_ptr == NULL) {
 		verbose(env, "kernel subsystem misconfigured verifier\n");
 		return -EINVAL;
@@ -5586,6 +5589,7 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 	struct bpf_insn *insn = prog->insnsi;
 	const struct bpf_func_proto *fn;
 	const int insn_cnt = prog->len;
+	const struct bpf_map_ops *ops;
 	struct bpf_insn_aux_data *aux;
 	struct bpf_insn insn_buf[16];
 	struct bpf_prog *new_prog;
@@ -5715,10 +5719,13 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 		}
 
 		/* BPF_EMIT_CALL() assumptions in some of the map_gen_lookup
-		 * handlers are currently limited to 64 bit only.
+		 * and other inlining handlers are currently limited to 64 bit
+		 * only.
 		 */
 		if (prog->jit_requested && BITS_PER_LONG == 64 &&
-		    insn->imm == BPF_FUNC_map_lookup_elem) {
+		    (insn->imm == BPF_FUNC_map_lookup_elem ||
+		     insn->imm == BPF_FUNC_map_update_elem ||
+		     insn->imm == BPF_FUNC_map_delete_elem)) {
 			aux = &env->insn_aux_data[i + delta];
 			if (bpf_map_ptr_poisoned(aux))
 				goto patch_call_imm;
@@ -5727,23 +5734,49 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 			if (!map_ptr->ops->map_gen_lookup)
 				goto patch_call_imm;
 
-			cnt = map_ptr->ops->map_gen_lookup(map_ptr, insn_buf);
-			if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf)) {
-				verbose(env, "bpf verifier is misconfigured\n");
-				return -EINVAL;
-			}
+			ops = map_ptr->ops;
+			if (insn->imm == BPF_FUNC_map_lookup_elem &&
+			    ops->map_gen_lookup) {
+				cnt = ops->map_gen_lookup(map_ptr, insn_buf);
+				if (cnt == 0 || cnt >= ARRAY_SIZE(insn_buf)) {
+					verbose(env, "bpf verifier is misconfigured\n");
+					return -EINVAL;
+				}
 
-			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf,
-						       cnt);
-			if (!new_prog)
-				return -ENOMEM;
+				new_prog = bpf_patch_insn_data(env, i + delta,
+							       insn_buf, cnt);
+				if (!new_prog)
+					return -ENOMEM;
 
-			delta += cnt - 1;
+				delta    += cnt - 1;
+				env->prog = prog = new_prog;
+				insn      = new_prog->insnsi + i + delta;
+				continue;
+			}
 
-			/* keep walking new program and skip insns we just inserted */
-			env->prog = prog = new_prog;
-			insn      = new_prog->insnsi + i + delta;
-			continue;
+			BUILD_BUG_ON(!__same_type(ops->map_lookup_elem,
+				     (void *(*)(struct bpf_map *map, void *key))NULL));
+			BUILD_BUG_ON(!__same_type(ops->map_delete_elem,
+				     (int (*)(struct bpf_map *map, void *key))NULL));
+			BUILD_BUG_ON(!__same_type(ops->map_update_elem,
+				     (int (*)(struct bpf_map *map, void *key, void *value,
+					      u64 flags))NULL));
+			switch (insn->imm) {
+			case BPF_FUNC_map_lookup_elem:
+				insn->imm = BPF_CAST_CALL(ops->map_lookup_elem) -
+					    __bpf_call_base;
+				continue;
+			case BPF_FUNC_map_update_elem:
+				insn->imm = BPF_CAST_CALL(ops->map_update_elem) -
+					    __bpf_call_base;
+				continue;
+			case BPF_FUNC_map_delete_elem:
+				insn->imm = BPF_CAST_CALL(ops->map_delete_elem) -
+					    __bpf_call_base;
+				continue;
+			}
+
+			goto patch_call_imm;
 		}
 
 		if (insn->imm == BPF_FUNC_redirect_map) {
-- 
2.9.5


* [PATCH bpf-next v3 06/11] bpf: add bpf_skb_cgroup_id helper
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (4 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 07/11] bpf: make sure to clear unused fields in tunnel/xfrm state fetch Daniel Borkmann
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

Add a new bpf_skb_cgroup_id() helper that allows retrieving the
cgroup id from the skb's socket. This is useful in particular to
enable bpf_get_cgroup_classid()-like behavior for cgroup v1 in
cgroup v2 by allowing ID based matching on egress. This can in
particular be used in combination with applying policy e.g. from
map lookups, and also complements the older bpf_skb_under_cgroup()
interface. In user space the cgroup id for a given path can be
retrieved through the f_handle as demonstrated in [0] recently.

  [0] https://lkml.org/lkml/2018/5/22/1190
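As a hedged sketch of the user-space side referenced in [0]: the cgroup v2 id for a path can be read via name_to_handle_at(2), since cgroupfs encodes the 64-bit id in the file handle payload. The struct below mirrors the layout of struct file_handle with an 8-byte payload to avoid embedding a flexible-array-member struct; this is an illustration, not the exact code from [0]:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>

/* Layout-compatible with struct file_handle plus an 8-byte payload
 * where cgroupfs stores the 64-bit id. */
struct cgid_handle {
	unsigned int handle_bytes;
	int handle_type;
	uint64_t cgid;
};

/* Returns the cgroup v2 id for @path, or 0 on failure - mirroring the
 * helper's "0 in case the id could not be retrieved" convention. */
static uint64_t get_cgroup_id(const char *path)
{
	struct cgid_handle h = { .handle_bytes = sizeof(uint64_t) };
	int mnt_id;

	if (name_to_handle_at(AT_FDCWD, path,
			      (struct file_handle *)&h, &mnt_id, 0) < 0)
		return 0;

	return h.cgid;
}
```

For a real cgroup path such as one under /sys/fs/cgroup this yields the same 64-bit value the helper returns in the BPF program.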

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
---
 include/uapi/linux/bpf.h | 19 ++++++++++++++++++-
 net/core/filter.c        | 29 +++++++++++++++++++++++++++--
 2 files changed, 45 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 64ac0f7..6613181 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -2054,6 +2054,22 @@ union bpf_attr {
  *
  *	Return
  *		0
+ *
+ * uint64_t bpf_skb_cgroup_id(struct sk_buff *skb)
+ * 	Description
+ * 		Return the cgroup v2 id of the socket associated with the *skb*.
+ * 		This is roughly similar to the **bpf_get_cgroup_classid**\ ()
+ * 		helper for cgroup v1 by providing a tag resp. identifier that
+ * 		can be matched on or used for map lookups e.g. to implement
+ * 		policy. The cgroup v2 id of a given path in the hierarchy is
+ * 		exposed in user space through the f_handle API in order to get
+ * 		to the same 64-bit id.
+ *
+ * 		This helper can be used on TC egress path, but not on ingress,
+ * 		and is available only if the kernel was compiled with the
+ * 		**CONFIG_SOCK_CGROUP_DATA** configuration option.
+ * 	Return
+ * 		The id is returned or 0 in case the id could not be retrieved.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -2134,7 +2150,8 @@ union bpf_attr {
 	FN(lwt_seg6_adjust_srh),	\
 	FN(lwt_seg6_action),		\
 	FN(rc_repeat),			\
-	FN(rc_keydown),
+	FN(rc_keydown),			\
+	FN(skb_cgroup_id),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
diff --git a/net/core/filter.c b/net/core/filter.c
index 885fb0e..edbfaa6 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3661,6 +3661,27 @@ static const struct bpf_func_proto bpf_skb_under_cgroup_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+#ifdef CONFIG_SOCK_CGROUP_DATA
+BPF_CALL_1(bpf_skb_cgroup_id, const struct sk_buff *, skb)
+{
+	struct sock *sk = skb_to_full_sk(skb);
+	struct cgroup *cgrp;
+
+	if (!sk || !sk_fullsock(sk))
+		return 0;
+
+	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+	return cgrp->kn->id.id;
+}
+
+static const struct bpf_func_proto bpf_skb_cgroup_id_proto = {
+	.func           = bpf_skb_cgroup_id,
+	.gpl_only       = false,
+	.ret_type       = RET_INTEGER,
+	.arg1_type      = ARG_PTR_TO_CTX,
+};
+#endif
+
 static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
 				  unsigned long off, unsigned long len)
 {
@@ -4747,12 +4768,16 @@ tc_cls_act_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_get_socket_cookie_proto;
 	case BPF_FUNC_get_socket_uid:
 		return &bpf_get_socket_uid_proto;
+	case BPF_FUNC_fib_lookup:
+		return &bpf_skb_fib_lookup_proto;
 #ifdef CONFIG_XFRM
 	case BPF_FUNC_skb_get_xfrm_state:
 		return &bpf_skb_get_xfrm_state_proto;
 #endif
-	case BPF_FUNC_fib_lookup:
-		return &bpf_skb_fib_lookup_proto;
+#ifdef CONFIG_SOCK_CGROUP_DATA
+	case BPF_FUNC_skb_cgroup_id:
+		return &bpf_skb_cgroup_id_proto;
+#endif
 	default:
 		return bpf_base_func_proto(func_id);
 	}
-- 
2.9.5

* [PATCH bpf-next v3 07/11] bpf: make sure to clear unused fields in tunnel/xfrm state fetch
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (5 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 06/11] bpf: add bpf_skb_cgroup_id helper Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 08/11] bpf: fix cbpf parser bug for octal numbers Daniel Borkmann
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

The remaining bits in struct bpf_tunnel_key and struct bpf_xfrm_state
are not filled in and originate from uninitialized stack space, so
make sure to clear them before handing control back to the program.

Also add a padding element to struct bpf_xfrm_state for future use,
similar to the one in struct bpf_tunnel_key, and clear it as well.

  struct bpf_xfrm_state {
      __u32                      reqid;            /*     0     4 */
      __u32                      spi;              /*     4     4 */
      __u16                      family;           /*     8     2 */

      /* XXX 2 bytes hole, try to pack */

      union {
          __u32              remote_ipv4;          /*           4 */
          __u32              remote_ipv6[4];       /*          16 */
      };                                           /*    12    16 */

      /* size: 28, cachelines: 1, members: 4 */
      /* sum members: 26, holes: 1, sum holes: 2 */
      /* last cacheline: 28 bytes */
  };
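The hole shown in the struct dump above is why this fix matters: when such a struct is filled field by field on (uninitialized) stack space and handed back, the padding bytes retain stale stack contents unless explicitly cleared. A small user-space illustration of the pattern, using a hypothetical struct mirroring the bpf_xfrm_state layout:

```c
#include <stdint.h>
#include <string.h>

/* Mirrors the layout above, with the former 2-byte hole made an
 * explicit padding field as the patch does. */
struct xfrm_state_ex {
	uint32_t reqid;
	uint32_t spi;
	uint16_t family;
	uint16_t ext;		/* padding, future use */
	uint32_t remote_ipv4;
};

static void fill(struct xfrm_state_ex *to)
{
	to->reqid = 1;
	to->spi = 2;
	to->family = 2;		/* AF_INET */
	to->ext = 0;		/* without this, stale bytes leak through */
	to->remote_ipv4 = 0x7f000001;
}

/* Simulate a dirty stack and report what ends up in the pad field. */
static uint16_t ext_after_fill(void)
{
	struct xfrm_state_ex s;

	memset(&s, 0xAA, sizeof(s));	/* stand-in for stale stack data */
	fill(&s);
	return s.ext;
}
```

Without the `to->ext = 0;` line, ext_after_fill() would return 0xAAAA, i.e. leak the simulated stack contents.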

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 include/uapi/linux/bpf.h | 3 ++-
 net/core/filter.c        | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6613181..f0b6608 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -2268,7 +2268,7 @@ struct bpf_tunnel_key {
 	};
 	__u8 tunnel_tos;
 	__u8 tunnel_ttl;
-	__u16 tunnel_ext;
+	__u16 tunnel_ext;	/* Padding, future use. */
 	__u32 tunnel_label;
 };
 
@@ -2279,6 +2279,7 @@ struct bpf_xfrm_state {
 	__u32 reqid;
 	__u32 spi;	/* Stored in network byte order */
 	__u16 family;
+	__u16 ext;	/* Padding, future use. */
 	union {
 		__u32 remote_ipv4;	/* Stored in network byte order */
 		__u32 remote_ipv6[4];	/* Stored in network byte order */
diff --git a/net/core/filter.c b/net/core/filter.c
index edbfaa6..28e8647 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3445,6 +3445,7 @@ BPF_CALL_4(bpf_skb_get_tunnel_key, struct sk_buff *, skb, struct bpf_tunnel_key
 	to->tunnel_id = be64_to_cpu(info->key.tun_id);
 	to->tunnel_tos = info->key.tos;
 	to->tunnel_ttl = info->key.ttl;
+	to->tunnel_ext = 0;
 
 	if (flags & BPF_F_TUNINFO_IPV6) {
 		memcpy(to->remote_ipv6, &info->key.u.ipv6.src,
@@ -3452,6 +3453,8 @@ BPF_CALL_4(bpf_skb_get_tunnel_key, struct sk_buff *, skb, struct bpf_tunnel_key
 		to->tunnel_label = be32_to_cpu(info->key.label);
 	} else {
 		to->remote_ipv4 = be32_to_cpu(info->key.u.ipv4.src);
+		memset(&to->remote_ipv6[1], 0, sizeof(__u32) * 3);
+		to->tunnel_label = 0;
 	}
 
 	if (unlikely(size != sizeof(struct bpf_tunnel_key)))
@@ -4047,11 +4050,14 @@ BPF_CALL_5(bpf_skb_get_xfrm_state, struct sk_buff *, skb, u32, index,
 	to->reqid = x->props.reqid;
 	to->spi = x->id.spi;
 	to->family = x->props.family;
+	to->ext = 0;
+
 	if (to->family == AF_INET6) {
 		memcpy(to->remote_ipv6, x->props.saddr.a6,
 		       sizeof(to->remote_ipv6));
 	} else {
 		to->remote_ipv4 = x->props.saddr.a4;
+		memset(&to->remote_ipv6[1], 0, sizeof(__u32) * 3);
 	}
 
 	return 0;
-- 
2.9.5

* [PATCH bpf-next v3 08/11] bpf: fix cbpf parser bug for octal numbers
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (6 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 07/11] bpf: make sure to clear unused fields in tunnel/xfrm state fetch Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 09/11] bpf: fix context access in tracing progs on 32 bit archs Daniel Borkmann
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

The octal digit range is 0-7, not 0-9; otherwise the parser silently
truncates the number at the first non-octal digit inside strtol()
rather than throwing an error.
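To illustrate the failure mode: with the old pattern ([0][0-9]+), the lexer accepted a token like 019, but strtol() with base 8 stops at the first non-octal digit, so part of the number was silently dropped instead of being rejected. A small sketch of the lexer rule's strtol() action (parse_octal_token is a made-up name for illustration):

```c
#include <stdlib.h>

/* Mimics the lexer rule's action for a token matched as 0<digits>:
 * strtol(yytext + 1, NULL, 8), i.e. skip the leading '0' and parse
 * the rest as octal. strtol() stops at the first non-octal digit. */
static long parse_octal_token(const char *yytext)
{
	return strtol(yytext + 1, NULL, 8);
}
```

With the fixed pattern ([0][0-7]+), a token like 019 never matches this rule in the first place, so the lexer reports an error instead of mis-parsing.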

Reported-by: Marc Boschma <marc@boschma.cx>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 tools/bpf/bpf_exp.l | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/bpf/bpf_exp.l b/tools/bpf/bpf_exp.l
index bd83149..4da8d05 100644
--- a/tools/bpf/bpf_exp.l
+++ b/tools/bpf/bpf_exp.l
@@ -175,7 +175,7 @@ extern void yyerror(const char *str);
 			yylval.number = strtol(yytext, NULL, 10);
 			return number;
 		}
-([0][0-9]+)	{
+([0][0-7]+)	{
 			yylval.number = strtol(yytext + 1, NULL, 8);
 			return number;
 		}
-- 
2.9.5

* [PATCH bpf-next v3 09/11] bpf: fix context access in tracing progs on 32 bit archs
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (7 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 08/11] bpf: fix cbpf parser bug for octal numbers Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 10/11] bpf: sync bpf uapi header with tools Daniel Borkmann
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

Wang reported that all the test cases for the BPF_PROG_TYPE_PERF_EVENT
program type in test_verifier report the following errors on x86_32:

  172/p unpriv: spill/fill of different pointers ldx FAIL
  Unexpected error message!
  0: (bf) r6 = r10
  1: (07) r6 += -8
  2: (15) if r1 == 0x0 goto pc+3
  R1=ctx(id=0,off=0,imm=0) R6=fp-8,call_-1 R10=fp0,call_-1
  3: (bf) r2 = r10
  4: (07) r2 += -76
  5: (7b) *(u64 *)(r6 +0) = r2
  6: (55) if r1 != 0x0 goto pc+1
  R1=ctx(id=0,off=0,imm=0) R2=fp-76,call_-1 R6=fp-8,call_-1 R10=fp0,call_-1 fp-8=fp
  7: (7b) *(u64 *)(r6 +0) = r1
  8: (79) r1 = *(u64 *)(r6 +0)
  9: (79) r1 = *(u64 *)(r1 +68)
  invalid bpf_context access off=68 size=8

  378/p check bpf_perf_event_data->sample_period byte load permitted FAIL
  Failed to load prog 'Permission denied'!
  0: (b7) r0 = 0
  1: (71) r0 = *(u8 *)(r1 +68)
  invalid bpf_context access off=68 size=1

  379/p check bpf_perf_event_data->sample_period half load permitted FAIL
  Failed to load prog 'Permission denied'!
  0: (b7) r0 = 0
  1: (69) r0 = *(u16 *)(r1 +68)
  invalid bpf_context access off=68 size=2

  380/p check bpf_perf_event_data->sample_period word load permitted FAIL
  Failed to load prog 'Permission denied'!
  0: (b7) r0 = 0
  1: (61) r0 = *(u32 *)(r1 +68)
  invalid bpf_context access off=68 size=4

  381/p check bpf_perf_event_data->sample_period dword load permitted FAIL
  Failed to load prog 'Permission denied'!
  0: (b7) r0 = 0
  1: (79) r0 = *(u64 *)(r1 +68)
  invalid bpf_context access off=68 size=8

The reason is that struct pt_regs on x86_32 does not align to an
8-byte boundary due to its size of 68 bytes, so bpf_ctx_narrow_access_ok()
bails out because off & (size_default - 1), that is, 68 & 7, does not
cleanly align for the sample_period access from struct
bpf_perf_event_data. The verifier thus wrongly assumes a potentially
unaligned access even though the underlying arch handles it just fine.
Therefore, adjust the default size down to the machine word size and
check and rewrite the offset for narrow access on that basis. We also
need to fix the corresponding pe_prog_is_valid_access(), since the
first and last test hit the check for off % size != 0 (e.g. 68 % 8 ->
4). With that in place, tracing progs work on x86_32.
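The offset arithmetic described above can be sketched in plain C (little-endian branch only). adjust_machine and narrow_align_ok are simplified stand-ins for bpf_ctx_off_adjust_machine() and bpf_ctx_narrow_align_ok(), with the machine word size passed in as a parameter so the x86_32 case is visible on any host:

```c
#include <stdint.h>

/* Simplified bpf_ctx_off_adjust_machine(): cap a multiple-of-word
 * default size down to the machine word size. */
static uint32_t adjust_machine(uint32_t size, uint32_t size_machine)
{
	if (size > size_machine && size % size_machine == 0)
		size = size_machine;
	return size;
}

/* Little-endian variant of the alignment check, with the machine
 * word size as an explicit parameter. */
static int narrow_align_ok(uint32_t off, uint32_t size_default,
			   uint32_t size_machine)
{
	size_default = adjust_machine(size_default, size_machine);
	return (off & (size_default - 1)) == 0;
}
```

sample_period sits at offset 68: with the old 8-byte default, 68 & 7 == 4 rejects the access; adjusted down to the 4-byte word size of x86_32, 68 & 3 == 0 accepts it.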

Reported-by: Wang YanQing <udknight@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Wang YanQing <udknight@gmail.com>
---
 include/linux/filter.h   | 30 ++++++++++++++++++++++++------
 kernel/bpf/verifier.c    |  3 ++-
 kernel/trace/bpf_trace.c | 10 ++++++++--
 3 files changed, 34 insertions(+), 9 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 8e60f1e..45fc0f5 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -639,16 +639,34 @@ static inline bool bpf_prog_was_classic(const struct bpf_prog *prog)
 	return prog->type == BPF_PROG_TYPE_UNSPEC;
 }
 
-static inline bool
-bpf_ctx_narrow_access_ok(u32 off, u32 size, const u32 size_default)
+static inline u32 bpf_ctx_off_adjust_machine(u32 size)
+{
+	const u32 size_machine = sizeof(unsigned long);
+
+	if (size > size_machine && size % size_machine == 0)
+		size = size_machine;
+
+	return size;
+}
+
+static inline bool bpf_ctx_narrow_align_ok(u32 off, u32 size_access,
+					   u32 size_default)
 {
-	bool off_ok;
+	size_default = bpf_ctx_off_adjust_machine(size_default);
+	size_access  = bpf_ctx_off_adjust_machine(size_access);
+
 #ifdef __LITTLE_ENDIAN
-	off_ok = (off & (size_default - 1)) == 0;
+	return (off & (size_default - 1)) == 0;
 #else
-	off_ok = (off & (size_default - 1)) + size == size_default;
+	return (off & (size_default - 1)) + size_access == size_default;
 #endif
-	return off_ok && size <= size_default && (size & (size - 1)) == 0;
+}
+
+static inline bool
+bpf_ctx_narrow_access_ok(u32 off, u32 size, u32 size_default)
+{
+	return bpf_ctx_narrow_align_ok(off, size, size_default) &&
+	       size <= size_default && (size & (size - 1)) == 0;
 }
 
 #define bpf_classic_proglen(fprog) (fprog->len * sizeof(fprog->filter[0]))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5684b15..011def5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5349,6 +5349,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 		 */
 		is_narrower_load = size < ctx_field_size;
 		if (is_narrower_load) {
+			u32 size_default = bpf_ctx_off_adjust_machine(ctx_field_size);
 			u32 off = insn->off;
 			u8 size_code;
 
@@ -5363,7 +5364,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 			else if (ctx_field_size == 8)
 				size_code = BPF_DW;
 
-			insn->off = off & ~(ctx_field_size - 1);
+			insn->off = off & ~(size_default - 1);
 			insn->code = BPF_LDX | BPF_MEM | size_code;
 		}
 
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index af1486d..752992c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -880,8 +880,14 @@ static bool pe_prog_is_valid_access(int off, int size, enum bpf_access_type type
 		return false;
 	if (type != BPF_READ)
 		return false;
-	if (off % size != 0)
-		return false;
+	if (off % size != 0) {
+		if (sizeof(unsigned long) != 4)
+			return false;
+		if (size != 8)
+			return false;
+		if (off % size != 4)
+			return false;
+	}
 
 	switch (off) {
 	case bpf_ctx_range(struct bpf_perf_event_data, sample_period):
-- 
2.9.5

* [PATCH bpf-next v3 10/11] bpf: sync bpf uapi header with tools
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (8 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 09/11] bpf: fix context access in tracing progs on 32 bit archs Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-02 21:06 ` [PATCH bpf-next v3 11/11] bpf, doc: add missing patchwork url and libbpf to maintainers Daniel Borkmann
  2018-06-03 15:08 ` [PATCH bpf-next v3 00/11] Misc BPF improvements Alexei Starovoitov
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

Pull in recent changes from include/uapi/linux/bpf.h.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 tools/include/uapi/linux/bpf.h | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 64ac0f7..f0b6608 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -2054,6 +2054,22 @@ union bpf_attr {
  *
  *	Return
  *		0
+ *
+ * uint64_t bpf_skb_cgroup_id(struct sk_buff *skb)
+ * 	Description
+ * 		Return the cgroup v2 id of the socket associated with the *skb*.
+ * 		This is roughly similar to the **bpf_get_cgroup_classid**\ ()
+ * 		helper for cgroup v1 by providing a tag resp. identifier that
+ * 		can be matched on or used for map lookups e.g. to implement
+ * 		policy. The cgroup v2 id of a given path in the hierarchy is
+ * 		exposed in user space through the f_handle API in order to get
+ * 		to the same 64-bit id.
+ *
+ * 		This helper can be used on TC egress path, but not on ingress,
+ * 		and is available only if the kernel was compiled with the
+ * 		**CONFIG_SOCK_CGROUP_DATA** configuration option.
+ * 	Return
+ * 		The id is returned or 0 in case the id could not be retrieved.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -2134,7 +2150,8 @@ union bpf_attr {
 	FN(lwt_seg6_adjust_srh),	\
 	FN(lwt_seg6_action),		\
 	FN(rc_repeat),			\
-	FN(rc_keydown),
+	FN(rc_keydown),			\
+	FN(skb_cgroup_id),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
@@ -2251,7 +2268,7 @@ struct bpf_tunnel_key {
 	};
 	__u8 tunnel_tos;
 	__u8 tunnel_ttl;
-	__u16 tunnel_ext;
+	__u16 tunnel_ext;	/* Padding, future use. */
 	__u32 tunnel_label;
 };
 
@@ -2262,6 +2279,7 @@ struct bpf_xfrm_state {
 	__u32 reqid;
 	__u32 spi;	/* Stored in network byte order */
 	__u16 family;
+	__u16 ext;	/* Padding, future use. */
 	union {
 		__u32 remote_ipv4;	/* Stored in network byte order */
 		__u32 remote_ipv6[4];	/* Stored in network byte order */
-- 
2.9.5

* [PATCH bpf-next v3 11/11] bpf, doc: add missing patchwork url and libbpf to maintainers
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (9 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 10/11] bpf: sync bpf uapi header with tools Daniel Borkmann
@ 2018-06-02 21:06 ` Daniel Borkmann
  2018-06-03 15:08 ` [PATCH bpf-next v3 00/11] Misc BPF improvements Alexei Starovoitov
  11 siblings, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-02 21:06 UTC (permalink / raw)
  To: alexei.starovoitov; +Cc: netdev, Daniel Borkmann

Add the missing bits under tools/lib/bpf/ and also a Q: entry in order
to make it easier for people to retrieve the current patch queue.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index f492431..2fd51db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2722,6 +2722,7 @@ L:	netdev@vger.kernel.org
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git
+Q:	https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147
 S:	Supported
 F:	arch/x86/net/bpf_jit*
 F:	Documentation/networking/filter.txt
@@ -2740,6 +2741,7 @@ F:	net/sched/act_bpf.c
 F:	net/sched/cls_bpf.c
 F:	samples/bpf/
 F:	tools/bpf/
+F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
 
 BROADCOM B44 10/100 ETHERNET DRIVER
-- 
2.9.5

* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-02 21:06 ` [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps Daniel Borkmann
@ 2018-06-03  6:56   ` Jesper Dangaard Brouer
  2018-06-03 16:11     ` Daniel Borkmann
  0 siblings, 1 reply; 20+ messages in thread
From: Jesper Dangaard Brouer @ 2018-06-03  6:56 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: brouer, alexei.starovoitov, netdev

On Sat,  2 Jun 2018 23:06:35 +0200
Daniel Borkmann <daniel@iogearbox.net> wrote:

> Before:
> 
>   # bpftool p d x i 1

Could this please be changed to:

 # bpftool prog dump xlated id 1

I requested this before, but you seem to have missed my feedback...
This makes the command "self-documenting" and searchable by Google.


>     0: (bf) r2 = r10
>     1: (07) r2 += -8
>     2: (7a) *(u64 *)(r2 +0) = 0
>     3: (18) r1 = map[id:1]
>     5: (85) call __htab_map_lookup_elem#232656
>     6: (15) if r0 == 0x0 goto pc+4
>     7: (71) r1 = *(u8 *)(r0 +35)
>     8: (55) if r1 != 0x0 goto pc+1
>     9: (72) *(u8 *)(r0 +35) = 1
>    10: (07) r0 += 56
>    11: (15) if r0 == 0x0 goto pc+4
>    12: (bf) r2 = r0
>    13: (18) r1 = map[id:1]
>    15: (85) call bpf_map_delete_elem#215008  <-- indirect call via
>    16: (95) exit                                 helper
> 



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

* Re: [PATCH bpf-next v3 00/11] Misc BPF improvements
  2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
                   ` (10 preceding siblings ...)
  2018-06-02 21:06 ` [PATCH bpf-next v3 11/11] bpf, doc: add missing patchwork url and libbpf to maintainers Daniel Borkmann
@ 2018-06-03 15:08 ` Alexei Starovoitov
  11 siblings, 0 replies; 20+ messages in thread
From: Alexei Starovoitov @ 2018-06-03 15:08 UTC (permalink / raw)
  To: Daniel Borkmann; +Cc: netdev

On Sat, Jun 02, 2018 at 11:06:30PM +0200, Daniel Borkmann wrote:
> This set adds various patches I still had in my queue, first two
> are test cases to provide coverage for the recent two fixes that
> went to bpf tree, then a small improvement on the error message
> for gpl helpers. Next, we expose prog and map id into fdinfo in
> order to allow for inspection of these objects currently used
> in applications. Patch after that removes a retpoline call for
> map lookup/update/delete helpers. A new helper is added in the
> subsequent patch to lookup the skb's socket's cgroup v2 id which
> can be used in an efficient way for e.g. lookups on egress side.
> Next one is a fix to fully clear state info in tunnel/xfrm helpers.
> Given this is full cap_sys_admin from init ns and has same priv
> requirements like tracing, bpf-next should be okay. A small bug
> fix for bpf_asm follows, and next a fix for context access in
> tracing which was recently reported. Lastly, a small update in
> the maintainer's file to add patchwork url and missing files.
> 
> Thanks!
> 
> v2 -> v3:
>   - Noticed a merge artefact inside uapi header comment, sigh,
>     fixed now.
> v1 -> v2:
>   - minor fix in getting context access work on 32 bit for tracing
>   - add paragraph to uapi helper doc to better describe kernel
>     build deps for cgroup helper

Applied, Thanks Daniel.
fixed up commit log s/bpftool p d x i/bpftool prog dump xlated id/
while applying, since it was indeed a bit cryptic.

* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-03  6:56   ` Jesper Dangaard Brouer
@ 2018-06-03 16:11     ` Daniel Borkmann
  2018-06-03 17:08       ` Jesper Dangaard Brouer
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-03 16:11 UTC (permalink / raw)
  To: Jesper Dangaard Brouer; +Cc: alexei.starovoitov, netdev

On 06/03/2018 08:56 AM, Jesper Dangaard Brouer wrote:
> On Sat,  2 Jun 2018 23:06:35 +0200
> Daniel Borkmann <daniel@iogearbox.net> wrote:
> 
>> Before:
>>
>>   # bpftool p d x i 1
> 
> Could this please be changed to:
> 
>  # bpftool prog dump xlated id 1
> 
> I requested this before, but you seem to have missed my feedback...
> This makes the command "self-documenting" and searchable by Google.

I recently wrote a howto here, but there is also excellent documentation
in the form of man pages for bpftool.

http://cilium.readthedocs.io/en/latest/bpf/#bpftool

My original thinking was that it might be okay to also show usage of
short option matching; like in iproute2, probably few people write out
'ip address' in full and the majority uses 'ip a' instead. But I'm fine
either way if there are strong opinions ... thanks Alexei for fixing up!

* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-03 16:11     ` Daniel Borkmann
@ 2018-06-03 17:08       ` Jesper Dangaard Brouer
  2018-06-04 11:02         ` Phil Sutter
  0 siblings, 1 reply; 20+ messages in thread
From: Jesper Dangaard Brouer @ 2018-06-03 17:08 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: brouer, alexei.starovoitov, netdev, Phil Sutter, Jakub Kicinski,
	Jakub Kicinski, Quentin Monnet

On Sun, 3 Jun 2018 18:11:45 +0200
Daniel Borkmann <daniel@iogearbox.net> wrote:

> On 06/03/2018 08:56 AM, Jesper Dangaard Brouer wrote:
> > On Sat,  2 Jun 2018 23:06:35 +0200
> > Daniel Borkmann <daniel@iogearbox.net> wrote:
> >   
> >> Before:
> >>
> >>   # bpftool p d x i 1  
> > 
> > Could this please be changed to:
> > 
> >  # bpftool prog dump xlated id 1
> > 
> > I requested this before, but you seem to have missed my feedback...
> > This makes the command "self-documenting" and searchable by Google.  
> 
> I recently wrote a howto here, but there's also excellent documentation
> in terms of man pages for bpftool.
> 
> http://cilium.readthedocs.io/en/latest/bpf/#bpftool
> 
> My original thinking was that it might be okay to also show usage of
> short option matching, like in iproute2 probably few people only write
> 'ip address' but majority uses 'ip a' instead. But I'm fine either way
> if there are strong opinions ... thanks Alexei for fixing up!

First of all I love your documentation effort.

Secondly I personally *hate* how 'ip' does its short-option
parsing, and especially the order/precedence ambiguity.  Phil Sutter
(Fedora/RHEL iproute2 maintainer) has a funny quiz illustrating the
ambiguity issues.

Quiz: https://youtu.be/cymH9pcFGa0?t=7m10s
Code problem: https://youtu.be/cymH9pcFGa0?t=9m8s

I hope the maintainers and developers of bpftool make sure we don't end
up in an ambiguity mess like we have with 'ip', pretty please.
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-03 17:08       ` Jesper Dangaard Brouer
@ 2018-06-04 11:02         ` Phil Sutter
  2018-06-04 18:25           ` Jakub Kicinski
  0 siblings, 1 reply; 20+ messages in thread
From: Phil Sutter @ 2018-06-04 11:02 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Daniel Borkmann, alexei.starovoitov, netdev, Jakub Kicinski,
	Jakub Kicinski, Quentin Monnet

Hi!

On Sun, Jun 03, 2018 at 07:08:55PM +0200, Jesper Dangaard Brouer wrote:
[...]
> Secondly I personally *hate* how the 'ip' does it's short options
> parsing and especially order/precedence ambiguity.  Phil Sutter
> (Fedora/RHEL iproute2 maintainer) have a funny quiz illustrating the
> ambiguity issues.

Hehe, yes. It's a classic case of something smart evolving into a
pain: at first there is only 'ip link', so you allow 'ip l' as a
shortcut. Then someone implements 'ip l2tp' - so what do you do?
Establish a policy of abbreviations having to be unique and break
existing behaviour, or accept the mess and carry on.

My suggestion would be to not get into the abbreviated subcommands
business at all but instead ship and maintain a bash-completion script.

Cheers, Phil


* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-04 11:02         ` Phil Sutter
@ 2018-06-04 18:25           ` Jakub Kicinski
  2018-06-04 19:45             ` Daniel Borkmann
  2018-06-04 22:36             ` David Ahern
  0 siblings, 2 replies; 20+ messages in thread
From: Jakub Kicinski @ 2018-06-04 18:25 UTC (permalink / raw)
  To: Phil Sutter
  Cc: Jesper Dangaard Brouer, Daniel Borkmann, alexei.starovoitov,
	netdev, Jakub Kicinski, Quentin Monnet

On Mon, 4 Jun 2018 13:02:25 +0200, Phil Sutter wrote:
> On Sun, Jun 03, 2018 at 07:08:55PM +0200, Jesper Dangaard Brouer wrote:
> > Secondly I personally *hate* how the 'ip' does it's short options
> > parsing and especially order/precedence ambiguity.  Phil Sutter
> > (Fedora/RHEL iproute2 maintainer) have a funny quiz illustrating the
> > ambiguity issues.  
> 
> Hehe, yes. It's a classical case of something smart evolving into a
> pain: At first there's only 'ip link', so you allow 'ip l' as a
> shortcut. Then someone implements 'ip l2tp' - so what do you do?

Good example, I like that "ip l" shows me the links because that's what
99.99% of people want when they type that command ;)

> Establish a policy of abbreviation having to be unique and break
> existing behaviour or accept the mess and head on.

Commands are tested in order of addition so older ones take precedence.

The iproute2 behaviour was replicated in bpftool on purpose, because
it should be very familiar to people.  It is to me at least.  And IMHO
it's better to be consistent with a well known tool than have our own
quirks and rules...

> My suggestion would be to not get into the abbreviated subcommands
> business at all but instead ship and maintain a bash-completion script.

We prefer to have both :)  Those of us who like to abbreviate can do
that, and others can use completions.  I personally think Quentin did
an awesome job on the completions, they cover the entire syntax unlike
the iproute2 ones and we intend to keep them that way!

* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-04 18:25           ` Jakub Kicinski
@ 2018-06-04 19:45             ` Daniel Borkmann
  2018-06-04 22:36             ` David Ahern
  1 sibling, 0 replies; 20+ messages in thread
From: Daniel Borkmann @ 2018-06-04 19:45 UTC (permalink / raw)
  To: Jakub Kicinski, Phil Sutter
  Cc: Jesper Dangaard Brouer, alexei.starovoitov, netdev,
	Jakub Kicinski, Quentin Monnet

On 06/04/2018 08:25 PM, Jakub Kicinski wrote:
[...]
> We prefer to have both :)  Those of us who like to abbreviate can do
> that, and others can use completions.  I personally think Quentin did
> an awesome job on the completions, they cover the entire syntax unlike
> the iproute2 ones and we intend to keep them that way!

Fully agree, both make sense. Personally, I only use abbreviations on
bpftool so far. :)

* Re: [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps
  2018-06-04 18:25           ` Jakub Kicinski
  2018-06-04 19:45             ` Daniel Borkmann
@ 2018-06-04 22:36             ` David Ahern
  1 sibling, 0 replies; 20+ messages in thread
From: David Ahern @ 2018-06-04 22:36 UTC (permalink / raw)
  To: Jakub Kicinski, Phil Sutter
  Cc: Jesper Dangaard Brouer, Daniel Borkmann, alexei.starovoitov,
	netdev, Jakub Kicinski, Quentin Monnet

On 6/4/18 11:25 AM, Jakub Kicinski wrote:
> that, and others can use completions.  I personally think Quentin did
> an awesome job on the completions, they cover the entire syntax unlike
> the iproute2 ones and we intend to keep them that way!

iproute2 patches for completions would be welcomed if anyone has the time.
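For anyone picking this up, a minimal bash completion looks roughly like
the sketch below. This is only an illustration of the mechanism
(`complete`/`compgen`/`COMPREPLY`); the `ip-demo` command name and its
subcommand list are hypothetical, not the real iproute2 or bpftool
completion code.

```shell
#!/usr/bin/env bash
# Sketch of a programmable completion function; "ip-demo" and the
# subcommand list are made-up placeholders for illustration only.
_ip_demo_complete() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local subcmds="link addr route neigh rule"
    # compgen narrows the word list to entries matching the current prefix.
    COMPREPLY=( $(compgen -W "$subcmds" -- "$cur") )
}
# Register the function for the (hypothetical) ip-demo command.
complete -F _ip_demo_complete ip-demo

# Simulate what readline does for "ip-demo ro<TAB>":
COMP_WORDS=(ip-demo ro)
COMP_CWORD=1
_ip_demo_complete
echo "${COMPREPLY[@]}"
```

Real completions for iproute2 would of course have to track its grammar
(nested objects, options, dynamically queried names), which is where most
of the actual work lies.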


end of thread, other threads:[~2018-06-04 22:36 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-02 21:06 [PATCH bpf-next v3 00/11] Misc BPF improvements Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 01/11] bpf: test case for map pointer poison with calls/branches Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 02/11] bpf: add also cbpf long jump test cases with heavy expansion Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 03/11] bpf: fixup error message from gpl helpers on license mismatch Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 04/11] bpf: show prog and map id in fdinfo Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 05/11] bpf: avoid retpoline for lookup/update/delete calls on maps Daniel Borkmann
2018-06-03  6:56   ` Jesper Dangaard Brouer
2018-06-03 16:11     ` Daniel Borkmann
2018-06-03 17:08       ` Jesper Dangaard Brouer
2018-06-04 11:02         ` Phil Sutter
2018-06-04 18:25           ` Jakub Kicinski
2018-06-04 19:45             ` Daniel Borkmann
2018-06-04 22:36             ` David Ahern
2018-06-02 21:06 ` [PATCH bpf-next v3 06/11] bpf: add bpf_skb_cgroup_id helper Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 07/11] bpf: make sure to clear unused fields in tunnel/xfrm state fetch Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 08/11] bpf: fix cbpf parser bug for octal numbers Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 09/11] bpf: fix context access in tracing progs on 32 bit archs Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 10/11] bpf: sync bpf uapi header with tools Daniel Borkmann
2018-06-02 21:06 ` [PATCH bpf-next v3 11/11] bpf, doc: add missing patchwork url and libbpf to maintainers Daniel Borkmann
2018-06-03 15:08 ` [PATCH bpf-next v3 00/11] Misc BPF improvements Alexei Starovoitov
