* [PATCHv2 0/6] bpf: Add trampoline helpers
@ 2020-01-18 13:49 Jiri Olsa
  2020-01-18 13:49 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

hi,
adding helpers for trampolines, plus two other fixes, to provide the kernel
support needed for loading trampoline programs from bcc/bpftrace.

Original rfc post [1].

perf bench results showing the overhead of running klockstat.py
on kprobes vs trampolines:

    Without:
            $ perf bench sched messaging -l 50000
            ...
                 Total time: 18.571 [sec]

    With current kprobe tracing:
            $ perf bench sched messaging -l 50000
            ...
                 Total time: 183.395 [sec]

    With kfunc tracing:
            $ perf bench sched messaging -l 50000
            ...
                 Total time: 39.773 [sec]

v2 changes:
  - make the unwind work for dispatcher as well
  - added test for allowed trampolines count
  - used raw tp pt_regs nest-arrays for trampoline helpers

thanks,
jirka


[1] https://lore.kernel.org/netdev/20191229143740.29143-1-jolsa@kernel.org/
---
Jiri Olsa (6):
      bpf: Allow ctx access for pointers to scalar
      bpf: Add bpf_perf_event_output_kfunc
      bpf: Add bpf_get_stackid_kfunc
      bpf: Add bpf_get_stack_kfunc
      bpf: Allow to resolve bpf trampoline and dispatcher in unwind
      selftest/bpf: Add test for allowed trampolines count

 include/linux/bpf.h                                       |  12 +++++++++-
 kernel/bpf/btf.c                                          |  13 ++++++++++-
 kernel/bpf/core.c                                         |   2 ++
 kernel/bpf/dispatcher.c                                   |   4 ++--
 kernel/bpf/trampoline.c                                   |  76 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++------
 kernel/trace/bpf_trace.c                                  |  96 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/prog_tests/trampoline_count.c | 112 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/test_trampoline_count.c |  21 ++++++++++++++++++
 8 files changed, 325 insertions(+), 11 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/trampoline_count.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_trampoline_count.c



* [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  2020-01-21  0:24   ` John Fastabend
  2020-01-18 13:49 ` [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc Jiri Olsa
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

When accessing the context we allow access to arguments with
scalar type and to pointers to struct. But we omit pointers to
scalar types, which many functions take and which are the same
case as scalars from the verifier's point of view.

Adding a check for pointers to scalar types and allowing such access.
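
As an illustration (not part of the patch), a minimal fentry program that
needs this change; __set_task_comm()'s 'buf' argument is a pointer to
scalar (const char *). The sketch assumes the selftests'
bpf_trace_helpers.h provides SEC/BPF_PROG and the helper declarations;
the program and variable names are illustrative only:

#include <stdbool.h>
#include <linux/bpf.h>
#include "bpf_trace_helpers.h"

struct task_struct;

SEC("fentry/__set_task_comm")
int BPF_PROG(comm_sketch, struct task_struct *tsk, const char *buf, bool exec)
{
	char comm[16] = {};

	/* 'buf' is now accepted by the verifier as a scalar value;
	 * the pointed-to data is still read via a probe read helper.
	 */
	bpf_probe_read_str(comm, sizeof(comm), buf);
	return 0;
}

char _license[] SEC("license") = "GPL";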

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/btf.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 832b5d7fd892..207ae554e0ce 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info)
 {
-	const struct btf_type *t = prog->aux->attach_func_proto;
+	const struct btf_type *tp, *t = prog->aux->attach_func_proto;
 	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
 	struct btf *btf = bpf_prog_get_target_btf(prog);
 	const char *tname = prog->aux->attach_func_name;
@@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		 */
 		return true;
 
+	tp = btf_type_by_id(btf, t->type);
+	/* skip modifiers */
+	while (btf_type_is_modifier(tp))
+		tp = btf_type_by_id(btf, tp->type);
+
+	if (btf_type_is_int(tp) || btf_type_is_enum(tp))
+		/* This is a pointer scalar.
+		 * It is the same as scalar from the verifier safety pov.
+		 */
+		return true;
+
 	/* this is a pointer to another type */
 	info->reg_type = PTR_TO_BTF_ID;
 
-- 
2.24.1



* [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
  2020-01-18 13:49 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  2020-01-18 13:49 ` [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc Jiri Olsa
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

Adding support to use the bpf_perf_event_output helper in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.

Using the nesting pt_regs array from the raw tracepoint helpers.
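
A usage sketch (not part of the patch), assuming a BTF-defined perf event
array map and the selftests' bpf_trace_helpers.h; the map, program and
variable names are illustrative. The bpf_perf_event_output() call below
is routed to bpf_perf_event_output_kfunc for FENTRY/FEXIT programs:

#include <stdbool.h>
#include <linux/bpf.h>
#include "bpf_trace_helpers.h"

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} events SEC(".maps");

struct task_struct;

SEC("fentry/__set_task_comm")
int BPF_PROG(emit_event, struct task_struct *tsk, const char *buf, bool exec)
{
	int data = 1;

	/* ctx is provided by the BPF_PROG macro */
	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
			      &data, sizeof(data));
	return 0;
}

char _license[] SEC("license") = "GPL";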

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 41 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 19e793aa441a..6a18e2ae6e30 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1172,6 +1172,43 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	}
 }
 
+BPF_CALL_5(bpf_perf_event_output_kfunc, void *, ctx, struct bpf_map *, map,
+	   u64, flags, void *, data, u64, size)
+{
+	struct pt_regs *regs = get_bpf_raw_tp_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = ____bpf_perf_event_output(regs, map, flags, data, size);
+	put_bpf_raw_tp_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_perf_event_output_proto_kfunc = {
+	.func		= bpf_perf_event_output_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+	.arg4_type	= ARG_PTR_TO_MEM,
+	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
+static const struct bpf_func_proto *
+kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+{
+	switch (func_id) {
+	case BPF_FUNC_perf_event_output:
+		return &bpf_perf_event_output_proto_kfunc;
+	default:
+		return tracing_func_proto(func_id, prog);
+	}
+}
+
 static const struct bpf_func_proto *
 tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1181,6 +1218,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skb_output_proto;
 #endif
 	default:
+		if (prog->expected_attach_type == BPF_TRACE_FENTRY ||
+		    prog->expected_attach_type == BPF_TRACE_FEXIT)
+			return kfunc_prog_func_proto(func_id, prog);
+
 		return raw_tp_prog_func_proto(func_id, prog);
 	}
 }
-- 
2.24.1



* [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
  2020-01-18 13:49 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
  2020-01-18 13:49 ` [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  2020-01-18 13:49 ` [PATCH 4/6] bpf: Add bpf_get_stack_kfunc Jiri Olsa
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

Adding support to use the bpf_get_stackid helper in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
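
A usage sketch (not part of the patch), assuming a BTF-defined stack
trace map; names are illustrative. The bpf_get_stackid() call is routed
to bpf_get_stackid_kfunc for FENTRY/FEXIT programs:

#include <stdbool.h>
#include <linux/bpf.h>
#include "bpf_trace_helpers.h"

struct {
	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
	__uint(max_entries, 1024);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, 127 * sizeof(__u64)); /* PERF_MAX_STACK_DEPTH slots */
} stacks SEC(".maps");

struct task_struct;

SEC("fentry/__set_task_comm")
int BPF_PROG(record_stack, struct task_struct *tsk, const char *buf, bool exec)
{
	bpf_get_stackid(ctx, &stacks, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";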

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 6a18e2ae6e30..18d6e96751c4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1198,12 +1198,39 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_kfunc = {
 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
+BPF_CALL_3(bpf_get_stackid_kfunc, void*, args,
+	   struct bpf_map *, map, u64, flags)
+{
+	struct pt_regs *regs = get_bpf_raw_tp_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = bpf_get_stackid((unsigned long) regs, (unsigned long) map,
+			      flags, 0, 0);
+	put_bpf_raw_tp_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_get_stackid_proto_kfunc = {
+	.func		= bpf_get_stackid_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	switch (func_id) {
 	case BPF_FUNC_perf_event_output:
 		return &bpf_perf_event_output_proto_kfunc;
+	case BPF_FUNC_get_stackid:
+		return &bpf_get_stackid_proto_kfunc;
 	default:
 		return tracing_func_proto(func_id, prog);
 	}
-- 
2.24.1



* [PATCH 4/6] bpf: Add bpf_get_stack_kfunc
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
                   ` (2 preceding siblings ...)
  2020-01-18 13:49 ` [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  2020-01-18 13:49 ` [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind Jiri Olsa
  2020-01-18 13:49 ` [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count Jiri Olsa
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

Adding support to use the bpf_get_stack helper in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
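
A usage sketch (not part of the patch); names are illustrative.
bpf_get_stack() fills a program-supplied buffer and is routed to
bpf_get_stack_kfunc for FENTRY/FEXIT programs:

#include <stdbool.h>
#include <linux/bpf.h>
#include "bpf_trace_helpers.h"

struct task_struct;

SEC("fentry/__set_task_comm")
int BPF_PROG(dump_stack, struct task_struct *tsk, const char *buf, bool exec)
{
	__u64 stack[32];

	/* returns the number of bytes copied on success */
	bpf_get_stack(ctx, stack, sizeof(stack), 0);
	return 0;
}

char _license[] SEC("license") = "GPL";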

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 18d6e96751c4..5c8edede3ac4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1223,6 +1223,32 @@ static const struct bpf_func_proto bpf_get_stackid_proto_kfunc = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_4(bpf_get_stack_kfunc, void*, args,
+	   void *, buf, u32, size, u64, flags)
+{
+	struct pt_regs *regs = get_bpf_raw_tp_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = bpf_get_stack((unsigned long) regs, (unsigned long) buf,
+			    (unsigned long) size, flags, 0);
+	put_bpf_raw_tp_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_get_stack_proto_kfunc = {
+	.func		= bpf_get_stack_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_MEM,
+	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg4_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1231,6 +1257,8 @@ kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_perf_event_output_proto_kfunc;
 	case BPF_FUNC_get_stackid:
 		return &bpf_get_stackid_proto_kfunc;
+	case BPF_FUNC_get_stack:
+		return &bpf_get_stack_proto_kfunc;
 	default:
 		return tracing_func_proto(func_id, prog);
 	}
-- 
2.24.1



* [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
                   ` (3 preceding siblings ...)
  2020-01-18 13:49 ` [PATCH 4/6] bpf: Add bpf_get_stack_kfunc Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  2020-01-20 23:55   ` Daniel Borkmann
  2020-01-18 13:49 ` [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count Jiri Olsa
  5 siblings, 1 reply; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

When unwinding the stack we need to identify each address
to successfully continue. Adding a latch tree to keep trampoline
and dispatcher images for quick lookup during the unwind.

The patch uses the first 48 bytes of the image page for the latch
tree node, leaving the remaining 4048 bytes of the page for the
trampoline or dispatcher generated code.

That is still enough not to affect the trampoline and dispatcher
maximum program counts.
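
For reference, a rough sketch of the resulting page layout (assuming
4 KiB pages and a 64-bit build, where struct latch_tree_node is two
struct rb_node of three pointers each, i.e. 48 bytes):

/*
 *  +--------------------------+-------------------------------------+
 *  | struct bpf_image         | image->data                         |
 *  | .tnode: latch_tree_node  | trampoline or dispatcher code,      |
 *  | (48 bytes)               | BPF_IMAGE_SIZE = 4096 - 48 = 4048   |
 *  +--------------------------+-------------------------------------+
 *
 * The trampoline and dispatcher flip between the two halves of
 * image->data (BPF_IMAGE_SIZE / 2 = 2024 bytes each) on update.
 */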

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h     | 12 ++++++-
 kernel/bpf/core.c       |  2 ++
 kernel/bpf/dispatcher.c |  4 +--
 kernel/bpf/trampoline.c | 76 +++++++++++++++++++++++++++++++++++++----
 4 files changed, 84 insertions(+), 10 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8e3b8f4ad183..41eb0cf663e8 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -519,7 +519,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
 int bpf_trampoline_link_prog(struct bpf_prog *prog);
 int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
-void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_INIT(name) {			\
 	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
 	.func = &name##func,				\
@@ -551,6 +550,13 @@ void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_PTR(name) (&name)
 void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
 				struct bpf_prog *to);
+struct bpf_image {
+	struct latch_tree_node tnode;
+	unsigned char data[];
+};
+#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
+bool is_bpf_image(void *addr);
+void *bpf_image_alloc(void);
 #else
 static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
@@ -572,6 +578,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
 static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
 					      struct bpf_prog *from,
 					      struct bpf_prog *to) {}
+static inline bool is_bpf_image(void *addr)
+{
+	return false;
+}
 #endif
 
 struct bpf_func_info_aux {
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 29d47aae0dd1..b3299dc9adda 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -704,6 +704,8 @@ bool is_bpf_text_address(unsigned long addr)
 
 	rcu_read_lock();
 	ret = bpf_prog_kallsyms_find(addr) != NULL;
+	if (!ret)
+		ret = is_bpf_image((void *) addr);
 	rcu_read_unlock();
 
 	return ret;
diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
index 204ee61a3904..b3e5b214fed8 100644
--- a/kernel/bpf/dispatcher.c
+++ b/kernel/bpf/dispatcher.c
@@ -113,7 +113,7 @@ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs)
 		noff = 0;
 	} else {
 		old = d->image + d->image_off;
-		noff = d->image_off ^ (PAGE_SIZE / 2);
+		noff = d->image_off ^ (BPF_IMAGE_SIZE / 2);
 	}
 
 	new = d->num_progs ? d->image + noff : NULL;
@@ -140,7 +140,7 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
 
 	mutex_lock(&d->mutex);
 	if (!d->image) {
-		d->image = bpf_jit_alloc_exec_page();
+		d->image = bpf_image_alloc();
 		if (!d->image)
 			goto out;
 	}
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 79a04417050d..3ea56f89c68a 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -4,6 +4,7 @@
 #include <linux/bpf.h>
 #include <linux/filter.h>
 #include <linux/ftrace.h>
+#include <linux/rbtree_latch.h>
 
 /* btf_vmlinux has ~22k attachable functions. 1k htab is enough. */
 #define TRAMPOLINE_HASH_BITS 10
@@ -14,7 +15,12 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
 /* serializes access to trampoline_table */
 static DEFINE_MUTEX(trampoline_mutex);
 
-void *bpf_jit_alloc_exec_page(void)
+static struct latch_tree_root image_tree __cacheline_aligned;
+
+/* serializes access to image_tree */
+static DEFINE_MUTEX(image_mutex);
+
+static void *bpf_jit_alloc_exec_page(void)
 {
 	void *image;
 
@@ -30,6 +36,62 @@ void *bpf_jit_alloc_exec_page(void)
 	return image;
 }
 
+static __always_inline bool image_tree_less(struct latch_tree_node *a,
+				      struct latch_tree_node *b)
+{
+	struct bpf_image *ia = container_of(a, struct bpf_image, tnode);
+	struct bpf_image *ib = container_of(b, struct bpf_image, tnode);
+
+	return ia < ib;
+}
+
+static __always_inline int image_tree_comp(void *addr, struct latch_tree_node *n)
+{
+	void *image = container_of(n, struct bpf_image, tnode);
+
+	if (addr < image)
+		return -1;
+	if (addr >= image + PAGE_SIZE)
+		return 1;
+
+	return 0;
+}
+
+static const struct latch_tree_ops image_tree_ops = {
+	.less	= image_tree_less,
+	.comp	= image_tree_comp,
+};
+
+void *bpf_image_alloc(void)
+{
+	struct bpf_image *image;
+
+	image = bpf_jit_alloc_exec_page();
+	if (!image)
+		return NULL;
+
+	mutex_lock(&image_mutex);
+	latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops);
+	mutex_unlock(&image_mutex);
+	return image->data;
+}
+
+void bpf_image_delete(void *ptr)
+{
+	struct bpf_image *image = container_of(ptr, struct bpf_image, data);
+
+	mutex_lock(&image_mutex);
+	latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops);
+	synchronize_rcu();
+	bpf_jit_free_exec(image);
+	mutex_unlock(&image_mutex);
+}
+
+bool is_bpf_image(void *addr)
+{
+	return latch_tree_find(addr, &image_tree, &image_tree_ops) != NULL;
+}
+
 struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
@@ -50,7 +112,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 		goto out;
 
 	/* is_root was checked earlier. No need for bpf_jit_charge_modmem() */
-	image = bpf_jit_alloc_exec_page();
+	image = bpf_image_alloc();
 	if (!image) {
 		kfree(tr);
 		tr = NULL;
@@ -125,14 +187,14 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 }
 
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
- * bytes on x86.  Pick a number to fit into PAGE_SIZE / 2
+ * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
  */
 #define BPF_MAX_TRAMP_PROGS 40
 
 static int bpf_trampoline_update(struct bpf_trampoline *tr)
 {
-	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
-	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
+	void *old_image = tr->image + ((tr->selector + 1) & 1) * BPF_IMAGE_SIZE/2;
+	void *new_image = tr->image + (tr->selector & 1) * BPF_IMAGE_SIZE/2;
 	struct bpf_prog *progs_to_run[BPF_MAX_TRAMP_PROGS];
 	int fentry_cnt = tr->progs_cnt[BPF_TRAMP_FENTRY];
 	int fexit_cnt = tr->progs_cnt[BPF_TRAMP_FEXIT];
@@ -160,7 +222,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	if (fexit_cnt)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
-	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
+	err = arch_prepare_bpf_trampoline(new_image, new_image + BPF_IMAGE_SIZE / 2,
 					  &tr->func.model, flags,
 					  fentry, fentry_cnt,
 					  fexit, fexit_cnt,
@@ -251,7 +313,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 		goto out;
 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
 		goto out;
-	bpf_jit_free_exec(tr->image);
+	bpf_image_delete(tr->image);
 	hlist_del(&tr->hlist);
 	kfree(tr);
 out:
-- 
2.24.1



* [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
                   ` (4 preceding siblings ...)
  2020-01-18 13:49 ` [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  5 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

There's a limit of 40 programs that can be attached to the
trampoline of a single function. Adding a test that attaches
that many programs plus one extra, which must fail.
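
A sketch of how the new test can be run from the selftests directory
(the test name follows the prog_tests/ file name; the exact invocation
may differ depending on the setup):

    $ cd tools/testing/selftests/bpf
    $ make
    $ sudo ./test_progs -t trampoline_count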

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/trampoline_count.c         | 112 ++++++++++++++++++
 .../bpf/progs/test_trampoline_count.c         |  21 ++++
 2 files changed, 133 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/trampoline_count.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_trampoline_count.c

diff --git a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
new file mode 100644
index 000000000000..1235f3d1cc50
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE
+#include <sched.h>
+#include <sys/prctl.h>
+#include <test_progs.h>
+
+#define MAX_TRAMP_PROGS 40
+
+struct inst {
+	struct bpf_object *obj;
+	struct bpf_link   *link_fentry;
+	struct bpf_link   *link_fexit;
+};
+
+static int test_task_rename(void)
+{
+	int fd, duration = 0, err;
+	char buf[] = "test_overhead";
+
+	fd = open("/proc/self/comm", O_WRONLY|O_TRUNC);
+	if (CHECK(fd < 0, "open /proc", "err %d", errno))
+		return -1;
+	err = write(fd, buf, sizeof(buf));
+	if (err < 0) {
+		CHECK(err < 0, "task rename", "err %d", errno);
+		close(fd);
+		return -1;
+	}
+	close(fd);
+	return 0;
+}
+
+static struct bpf_link *load(struct bpf_object *obj, const char *name)
+{
+	struct bpf_program *prog;
+	int duration = 0;
+
+	prog = bpf_object__find_program_by_title(obj, name);
+	if (CHECK(!prog, "find_probe", "prog '%s' not found\n", name))
+		return ERR_PTR(-EINVAL);
+	return bpf_program__attach_trace(prog);
+}
+
+void test_trampoline_count(void)
+{
+	const char *fentry_name = "fentry/__set_task_comm";
+	const char *fexit_name = "fexit/__set_task_comm";
+	const char *object = "test_trampoline_count.o";
+	struct inst inst[MAX_TRAMP_PROGS] = { 0 };
+	int err, i = 0, duration = 0;
+	struct bpf_object *obj;
+	struct bpf_link *link;
+	char comm[16] = {};
+
+	/* attach 'allowed' 40 trampoline programs */
+	for (i = 0; i < MAX_TRAMP_PROGS; i++) {
+		obj = bpf_object__open_file(object, NULL);
+		if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj)))
+			goto cleanup;
+
+		err = bpf_object__load(obj);
+		if (CHECK(err, "obj_load", "err %d\n", err))
+			goto cleanup;
+		inst[i].obj = obj;
+
+		if (rand() % 2) {
+			link = load(obj, fentry_name);
+			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link)))
+				goto cleanup;
+			inst[i].link_fentry = link;
+		} else {
+			link = load(obj, fexit_name);
+			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link)))
+				goto cleanup;
+			inst[i].link_fexit = link;
+		}
+	}
+
+	/* and try 1 extra.. */
+	obj = bpf_object__open_file(object, NULL);
+	if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj)))
+		goto cleanup;
+
+	err = bpf_object__load(obj);
+	if (CHECK(err, "obj_load", "err %d\n", err))
+		goto cleanup_extra;
+
+	/* ..that needs to fail */
+	link = load(obj, fentry_name);
+	if (CHECK(!IS_ERR(link), "cannot attach over the limit", "err %ld\n", PTR_ERR(link))) {
+		bpf_link__destroy(link);
+		goto cleanup_extra;
+	}
+
+	/* with E2BIG error */
+	CHECK(PTR_ERR(link) != -E2BIG, "proper error check", "err %ld\n", PTR_ERR(link));
+
+	/* and finally execute the probe */
+	if (CHECK_FAIL(prctl(PR_GET_NAME, comm, 0L, 0L, 0L)))
+		goto cleanup_extra;
+	CHECK_FAIL(test_task_rename());
+	CHECK_FAIL(prctl(PR_SET_NAME, comm, 0L, 0L, 0L));
+
+cleanup_extra:
+	bpf_object__close(obj);
+cleanup:
+	while (--i >= 0) {
+		bpf_link__destroy(inst[i].link_fentry);
+		bpf_link__destroy(inst[i].link_fexit);
+		bpf_object__close(inst[i].obj);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/test_trampoline_count.c b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
new file mode 100644
index 000000000000..e51e6e3a81c2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <stddef.h>
+#include <linux/bpf.h>
+#include "bpf_trace_helpers.h"
+
+struct task_struct;
+
+SEC("fentry/__set_task_comm")
+int BPF_PROG(prog1, struct task_struct *tsk, const char *buf, bool exec)
+{
+	return 0;
+}
+
+SEC("fexit/__set_task_comm")
+int BPF_PROG(prog2, struct task_struct *tsk, const char *buf, bool exec)
+{
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.24.1



* Re: [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind
  2020-01-18 13:49 ` [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind Jiri Olsa
@ 2020-01-20 23:55   ` Daniel Borkmann
  2020-01-21  9:56     ` Jiri Olsa
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Borkmann @ 2020-01-20 23:55 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

On 1/18/20 2:49 PM, Jiri Olsa wrote:
> When unwinding the stack we need to identify each address
> to successfully continue. Adding latch tree to keep trampolines
> for quick lookup during the unwind.
> 
> The patch uses first 48 bytes for latch tree node, leaving 4048
> bytes from the rest of the page for trampoline or dispatcher
> generated code.
> 
> It's still enough not to affect trampoline and dispatcher progs
> maximum counts.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>   include/linux/bpf.h     | 12 ++++++-
>   kernel/bpf/core.c       |  2 ++
>   kernel/bpf/dispatcher.c |  4 +--
>   kernel/bpf/trampoline.c | 76 +++++++++++++++++++++++++++++++++++++----
>   4 files changed, 84 insertions(+), 10 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 8e3b8f4ad183..41eb0cf663e8 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -519,7 +519,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
>   int bpf_trampoline_link_prog(struct bpf_prog *prog);
>   int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
>   void bpf_trampoline_put(struct bpf_trampoline *tr);
> -void *bpf_jit_alloc_exec_page(void);
>   #define BPF_DISPATCHER_INIT(name) {			\
>   	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
>   	.func = &name##func,				\
> @@ -551,6 +550,13 @@ void *bpf_jit_alloc_exec_page(void);
>   #define BPF_DISPATCHER_PTR(name) (&name)
>   void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
>   				struct bpf_prog *to);
> +struct bpf_image {
> +	struct latch_tree_node tnode;
> +	unsigned char data[];
> +};
> +#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
> +bool is_bpf_image(void *addr);
> +void *bpf_image_alloc(void);
>   #else
>   static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
>   {
> @@ -572,6 +578,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
>   static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
>   					      struct bpf_prog *from,
>   					      struct bpf_prog *to) {}
> +static inline bool is_bpf_image(void *addr)
> +{
> +	return false;
> +}
>   #endif
>   
>   struct bpf_func_info_aux {
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 29d47aae0dd1..b3299dc9adda 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -704,6 +704,8 @@ bool is_bpf_text_address(unsigned long addr)
>   
>   	rcu_read_lock();
>   	ret = bpf_prog_kallsyms_find(addr) != NULL;
> +	if (!ret)
> +		ret = is_bpf_image((void *) addr);
>   	rcu_read_unlock();

Btw, shouldn't this be a separate entity entirely to avoid unnecessary inclusion
in bpf_arch_text_poke() for the is_bpf_text_address() check there?

Did you drop the bpf_{trampoline,dispatcher}_<...> entry addition in kallsyms?

Thanks,
Daniel


* RE: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-18 13:49 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
@ 2020-01-21  0:24   ` John Fastabend
  0 siblings, 0 replies; 10+ messages in thread
From: John Fastabend @ 2020-01-21  0:24 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

Jiri Olsa wrote:
> When accessing the context we allow access to arguments with
> scalar type and pointer to struct. But we omit pointer to scalar
> type, which is the case for many functions and same case as
> when accessing scalar.
> 
> Adding the check if the pointer is to scalar type and allow it.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/bpf/btf.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 832b5d7fd892..207ae554e0ce 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		    const struct bpf_prog *prog,
>  		    struct bpf_insn_access_aux *info)
>  {
> -	const struct btf_type *t = prog->aux->attach_func_proto;
> +	const struct btf_type *tp, *t = prog->aux->attach_func_proto;
>  	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
>  	struct btf *btf = bpf_prog_get_target_btf(prog);
>  	const char *tname = prog->aux->attach_func_name;
> @@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		 */
>  		return true;
>  
> +	tp = btf_type_by_id(btf, t->type);
> +	/* skip modifiers */
> +	while (btf_type_is_modifier(tp))
> +		tp = btf_type_by_id(btf, tp->type);
> +
> +	if (btf_type_is_int(tp) || btf_type_is_enum(tp))
> +		/* This is a pointer scalar.
> +		 * It is the same as scalar from the verifier safety pov.
> +		 */
> +		return true;
> +
>  	/* this is a pointer to another type */
>  	info->reg_type = PTR_TO_BTF_ID;
>  

Acked-by: John Fastabend <john.fastabend@gmail.com>


* Re: [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind
  2020-01-20 23:55   ` Daniel Borkmann
@ 2020-01-21  9:56     ` Jiri Olsa
  0 siblings, 0 replies; 10+ messages in thread
From: Jiri Olsa @ 2020-01-21  9:56 UTC (permalink / raw)
  To: Daniel Borkmann
  Cc: Jiri Olsa, Alexei Starovoitov, netdev, bpf, Andrii Nakryiko,
	Yonghong Song, Martin KaFai Lau, Jakub Kicinski, David Miller,
	Björn Töpel

On Tue, Jan 21, 2020 at 12:55:10AM +0100, Daniel Borkmann wrote:
> On 1/18/20 2:49 PM, Jiri Olsa wrote:
> > When unwinding the stack we need to identify each address
> > to successfully continue. Adding latch tree to keep trampolines
> > for quick lookup during the unwind.
> > 
> > The patch uses first 48 bytes for latch tree node, leaving 4048
> > bytes from the rest of the page for trampoline or dispatcher
> > generated code.
> > 
> > It's still enough not to affect trampoline and dispatcher progs
> > maximum counts.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >   include/linux/bpf.h     | 12 ++++++-
> >   kernel/bpf/core.c       |  2 ++
> >   kernel/bpf/dispatcher.c |  4 +--
> >   kernel/bpf/trampoline.c | 76 +++++++++++++++++++++++++++++++++++++----
> >   4 files changed, 84 insertions(+), 10 deletions(-)
> > 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 8e3b8f4ad183..41eb0cf663e8 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -519,7 +519,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
> >   int bpf_trampoline_link_prog(struct bpf_prog *prog);
> >   int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
> >   void bpf_trampoline_put(struct bpf_trampoline *tr);
> > -void *bpf_jit_alloc_exec_page(void);
> >   #define BPF_DISPATCHER_INIT(name) {			\
> >   	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
> >   	.func = &name##func,				\
> > @@ -551,6 +550,13 @@ void *bpf_jit_alloc_exec_page(void);
> >   #define BPF_DISPATCHER_PTR(name) (&name)
> >   void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
> >   				struct bpf_prog *to);
> > +struct bpf_image {
> > +	struct latch_tree_node tnode;
> > +	unsigned char data[];
> > +};
> > +#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
> > +bool is_bpf_image(void *addr);
> > +void *bpf_image_alloc(void);
> >   #else
> >   static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
> >   {
> > @@ -572,6 +578,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
> >   static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
> >   					      struct bpf_prog *from,
> >   					      struct bpf_prog *to) {}
> > +static inline bool is_bpf_image(void *addr)
> > +{
> > +	return false;
> > +}
> >   #endif
> >   struct bpf_func_info_aux {
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 29d47aae0dd1..b3299dc9adda 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -704,6 +704,8 @@ bool is_bpf_text_address(unsigned long addr)
> >   	rcu_read_lock();
> >   	ret = bpf_prog_kallsyms_find(addr) != NULL;
> > +	if (!ret)
> > +		ret = is_bpf_image((void *) addr);
> >   	rcu_read_unlock();
> 
> Btw, shouldn't this be a separate entity entirely to avoid unnecessary inclusion
> in bpf_arch_text_poke() for the is_bpf_text_address() check there?

right, we don't want poking in trampolines/dispatchers.. I'll change that

> 
> Did you drop the bpf_{trampoline,dispatcher}_<...> entry addition in kallsyms?

working on that, will send it separately

jirka



