bpf.vger.kernel.org archive mirror
* [PATCHv3 0/6] bpf: Add trampoline helpers
@ 2020-01-21 12:05 Jiri Olsa
  2020-01-21 12:05 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel,
	John Fastabend

hi,
this patchset adds helpers for trampolines, plus 2 other fixes,
to provide kernel support for loading trampoline programs in
bcc/bpftrace.

Original rfc post [1].

Output of perf bench showing the speedup of trampolines over
kprobes while running klockstat.py:

    Without:
            $ perf bench sched messaging -l 50000
            ...
                 Total time: 18.571 [sec]

    With current kprobe tracing:
            $ perf bench sched messaging -l 50000
            ...
                 Total time: 183.395 [sec]

    With kfunc tracing:
            $ perf bench sched messaging -l 50000
            ...
                 Total time: 39.773 [sec]

v3 changes:
  - added ack from John Fastabend for patch 1
  - move out is_bpf_image_address from is_bpf_text_address call [David]

v2 changes:
  - make the unwind work for dispatcher as well
  - added test for allowed trampolines count
  - used raw tp pt_regs nest-arrays for trampoline helpers

thanks,
jirka


[1] https://lore.kernel.org/netdev/20191229143740.29143-1-jolsa@kernel.org/
---
Jiri Olsa (6):
      bpf: Allow ctx access for pointers to scalar
      bpf: Add bpf_perf_event_output_kfunc
      bpf: Add bpf_get_stackid_kfunc
      bpf: Add bpf_get_stack_kfunc
      bpf: Allow to resolve bpf trampoline and dispatcher in unwind
      selftest/bpf: Add test for allowed trampolines count

 include/linux/bpf.h                                       |  12 +++++++++++-
 kernel/bpf/btf.c                                          |  13 ++++++++++++-
 kernel/bpf/dispatcher.c                                   |   4 ++--
 kernel/bpf/trampoline.c                                   |  82 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
 kernel/extable.c                                          |   7 +++++--
 kernel/trace/bpf_trace.c                                  |  96 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/prog_tests/trampoline_count.c | 112 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/test_trampoline_count.c |  21 +++++++++++++++++++++
 8 files changed, 334 insertions(+), 13 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/trampoline_count.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_trampoline_count.c


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
@ 2020-01-21 12:05 ` Jiri Olsa
  2020-01-22  1:51   ` Alexei Starovoitov
  2020-01-21 12:05 ` [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc Jiri Olsa
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: John Fastabend, netdev, bpf, Andrii Nakryiko, Yonghong Song,
	Martin KaFai Lau, Jakub Kicinski, David Miller,
	Björn Töpel

When accessing the context we allow access to arguments with
scalar type and pointer to struct, but we omit pointers to scalar
types, which show up in many functions and are the same case as
plain scalars from the verifier's point of view.

Add a check whether the pointer target is a scalar type and allow
the access.
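The check can be illustrated with a standalone toy model (not kernel code; the type table, kind names and helper names below are made up): for a pointer argument, skip BTF modifiers on the pointed-to type and allow the ctx access when it resolves to an int or enum.

```c
#include <assert.h>
#include <stdbool.h>

/* Standalone toy model of the new check -- not kernel code; the
 * type table, kind names and helpers below are made up.  The idea:
 * for a pointer argument, skip BTF modifiers on the pointed-to
 * type and allow the ctx access when it resolves to an int or enum.
 */
enum toy_kind { KIND_INT, KIND_ENUM, KIND_STRUCT, KIND_CONST, KIND_TYPEDEF };

struct toy_type {
	enum toy_kind kind;
	int type;			/* referenced type id, for modifiers */
};

static bool toy_is_modifier(const struct toy_type *t)
{
	return t->kind == KIND_CONST || t->kind == KIND_TYPEDEF;
}

/* mirrors the modifier-skipping loop added to btf_ctx_access() */
static bool toy_ptr_target_is_scalar(const struct toy_type *types, int id)
{
	const struct toy_type *tp = &types[id];

	while (toy_is_modifier(tp))	/* skip modifiers */
		tp = &types[tp->type];

	return tp->kind == KIND_INT || tp->kind == KIND_ENUM;
}
```

A pointer to `const int` behind a typedef is allowed, while a pointer to a struct falls through to the existing PTR_TO_BTF_ID path.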

Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/btf.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 832b5d7fd892..207ae554e0ce 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info)
 {
-	const struct btf_type *t = prog->aux->attach_func_proto;
+	const struct btf_type *tp, *t = prog->aux->attach_func_proto;
 	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
 	struct btf *btf = bpf_prog_get_target_btf(prog);
 	const char *tname = prog->aux->attach_func_name;
@@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		 */
 		return true;
 
+	tp = btf_type_by_id(btf, t->type);
+	/* skip modifiers */
+	while (btf_type_is_modifier(tp))
+		tp = btf_type_by_id(btf, tp->type);
+
+	if (btf_type_is_int(tp) || btf_type_is_enum(tp))
+		/* This is a pointer to a scalar.
+		 * It is the same as a scalar from the verifier safety pov.
+		 */
+		return true;
+
 	/* this is a pointer to another type */
 	info->reg_type = PTR_TO_BTF_ID;
 
-- 
2.24.1



* [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc
  2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
  2020-01-21 12:05 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
@ 2020-01-21 12:05 ` Jiri Olsa
  2020-01-22  0:03   ` Alexei Starovoitov
  2020-01-21 12:05 ` [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc Jiri Olsa
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel,
	John Fastabend

Add support for using perf_event_output in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.

The helper reuses the nested pt_regs array from the raw
tracepoint helpers.
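The nesting scheme being reused can be sketched as a toy, single-CPU model (not kernel code; all names below are made up). The kernel keeps a per-cpu array of three pt_regs plus a nest level, so a helper interrupted by e.g. an NMI that also runs a BPF program gets its own regs slot, and a fourth nesting level gets -EBUSY:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Toy, single-CPU model of the nested regs array -- not kernel
 * code.  Each nesting level of helper invocation gets its own
 * scratch regs slot; exhausting the three slots returns -EBUSY.
 */
#define TOY_MAX_NEST	3

struct toy_regs { unsigned long ip, sp; };

static struct toy_regs toy_slots[TOY_MAX_NEST];
static int toy_nest_level;

static struct toy_regs *toy_get_raw_tp_regs(int *err)
{
	if (toy_nest_level >= TOY_MAX_NEST) {
		*err = -EBUSY;
		return NULL;
	}
	*err = 0;
	return &toy_slots[toy_nest_level++];
}

static void toy_put_raw_tp_regs(void)
{
	toy_nest_level--;
}
```

This matches the get/put pairing in the helper bodies: acquire a slot, fill it with perf_fetch_caller_regs(), run the output helper, release the slot.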

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 41 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 19e793aa441a..6a18e2ae6e30 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1172,6 +1172,43 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	}
 }
 
+BPF_CALL_5(bpf_perf_event_output_kfunc, void *, ctx, struct bpf_map *, map,
+	   u64, flags, void *, data, u64, size)
+{
+	struct pt_regs *regs = get_bpf_raw_tp_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = ____bpf_perf_event_output(regs, map, flags, data, size);
+	put_bpf_raw_tp_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_perf_event_output_proto_kfunc = {
+	.func		= bpf_perf_event_output_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+	.arg4_type	= ARG_PTR_TO_MEM,
+	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
+};
+
+static const struct bpf_func_proto *
+kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+{
+	switch (func_id) {
+	case BPF_FUNC_perf_event_output:
+		return &bpf_perf_event_output_proto_kfunc;
+	default:
+		return tracing_func_proto(func_id, prog);
+	}
+}
+
 static const struct bpf_func_proto *
 tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1181,6 +1218,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skb_output_proto;
 #endif
 	default:
+		if (prog->expected_attach_type == BPF_TRACE_FENTRY ||
+		    prog->expected_attach_type == BPF_TRACE_FEXIT)
+			return kfunc_prog_func_proto(func_id, prog);
+
 		return raw_tp_prog_func_proto(func_id, prog);
 	}
 }
-- 
2.24.1



* [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc
  2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
  2020-01-21 12:05 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
  2020-01-21 12:05 ` [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc Jiri Olsa
@ 2020-01-21 12:05 ` Jiri Olsa
  2020-01-21 12:05 ` [PATCH 4/6] bpf: Add bpf_get_stack_kfunc Jiri Olsa
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel,
	John Fastabend

Add support for using bpf_get_stackid in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 6a18e2ae6e30..18d6e96751c4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1198,12 +1198,39 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_kfunc = {
 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
+BPF_CALL_3(bpf_get_stackid_kfunc, void*, args,
+	   struct bpf_map *, map, u64, flags)
+{
+	struct pt_regs *regs = get_bpf_raw_tp_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = bpf_get_stackid((unsigned long) regs, (unsigned long) map,
+			      flags, 0, 0);
+	put_bpf_raw_tp_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_get_stackid_proto_kfunc = {
+	.func		= bpf_get_stackid_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
 	switch (func_id) {
 	case BPF_FUNC_perf_event_output:
 		return &bpf_perf_event_output_proto_kfunc;
+	case BPF_FUNC_get_stackid:
+		return &bpf_get_stackid_proto_kfunc;
 	default:
 		return tracing_func_proto(func_id, prog);
 	}
-- 
2.24.1



* [PATCH 4/6] bpf: Add bpf_get_stack_kfunc
  2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
                   ` (2 preceding siblings ...)
  2020-01-21 12:05 ` [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc Jiri Olsa
@ 2020-01-21 12:05 ` Jiri Olsa
  2020-01-21 12:05 ` [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind Jiri Olsa
  2020-01-21 12:05 ` [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count Jiri Olsa
  5 siblings, 0 replies; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel,
	John Fastabend

Add support for using bpf_get_stack in
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 18d6e96751c4..5c8edede3ac4 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1223,6 +1223,32 @@ static const struct bpf_func_proto bpf_get_stackid_proto_kfunc = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_4(bpf_get_stack_kfunc, void*, args,
+	   void *, buf, u32, size, u64, flags)
+{
+	struct pt_regs *regs = get_bpf_raw_tp_regs();
+	int ret;
+
+	if (IS_ERR(regs))
+		return PTR_ERR(regs);
+
+	perf_fetch_caller_regs(regs);
+	ret = bpf_get_stack((unsigned long) regs, (unsigned long) buf,
+			    (unsigned long) size, flags, 0);
+	put_bpf_raw_tp_regs();
+	return ret;
+}
+
+static const struct bpf_func_proto bpf_get_stack_proto_kfunc = {
+	.func		= bpf_get_stack_kfunc,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_MEM,
+	.arg3_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg4_type	= ARG_ANYTHING,
+};
+
 static const struct bpf_func_proto *
 kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1231,6 +1257,8 @@ kfunc_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_perf_event_output_proto_kfunc;
 	case BPF_FUNC_get_stackid:
 		return &bpf_get_stackid_proto_kfunc;
+	case BPF_FUNC_get_stack:
+		return &bpf_get_stack_proto_kfunc;
 	default:
 		return tracing_func_proto(func_id, prog);
 	}
-- 
2.24.1



* [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind
  2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
                   ` (3 preceding siblings ...)
  2020-01-21 12:05 ` [PATCH 4/6] bpf: Add bpf_get_stack_kfunc Jiri Olsa
@ 2020-01-21 12:05 ` Jiri Olsa
  2020-01-21 12:05 ` [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count Jiri Olsa
  5 siblings, 0 replies; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel,
	John Fastabend

When unwinding the stack we need to identify each address
in order to continue successfully. Add a latch tree that keeps
trampoline and dispatcher pages for quick lookup during unwind.

The patch uses the first 48 bytes of each page for the latch
tree node, leaving 4048 bytes of the page for the trampoline or
dispatcher generated code.

That is still enough not to affect the maximum counts of
trampoline and dispatcher programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h     | 12 +++++-
 kernel/bpf/dispatcher.c |  4 +-
 kernel/bpf/trampoline.c | 82 +++++++++++++++++++++++++++++++++++++----
 kernel/extable.c        |  7 +++-
 4 files changed, 93 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8e3b8f4ad183..46b60ed4387b 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -519,7 +519,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
 int bpf_trampoline_link_prog(struct bpf_prog *prog);
 int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
-void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_INIT(name) {			\
 	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
 	.func = &name##func,				\
@@ -551,6 +550,13 @@ void *bpf_jit_alloc_exec_page(void);
 #define BPF_DISPATCHER_PTR(name) (&name)
 void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
 				struct bpf_prog *to);
+struct bpf_image {
+	struct latch_tree_node tnode;
+	unsigned char data[];
+};
+#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
+bool is_bpf_image_address(unsigned long address);
+void *bpf_image_alloc(void);
 #else
 static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
@@ -572,6 +578,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
 static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
 					      struct bpf_prog *from,
 					      struct bpf_prog *to) {}
+static inline bool is_bpf_image_address(unsigned long address)
+{
+	return false;
+}
 #endif
 
 struct bpf_func_info_aux {
diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
index 204ee61a3904..b3e5b214fed8 100644
--- a/kernel/bpf/dispatcher.c
+++ b/kernel/bpf/dispatcher.c
@@ -113,7 +113,7 @@ static void bpf_dispatcher_update(struct bpf_dispatcher *d, int prev_num_progs)
 		noff = 0;
 	} else {
 		old = d->image + d->image_off;
-		noff = d->image_off ^ (PAGE_SIZE / 2);
+		noff = d->image_off ^ (BPF_IMAGE_SIZE / 2);
 	}
 
 	new = d->num_progs ? d->image + noff : NULL;
@@ -140,7 +140,7 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
 
 	mutex_lock(&d->mutex);
 	if (!d->image) {
-		d->image = bpf_jit_alloc_exec_page();
+		d->image = bpf_image_alloc();
 		if (!d->image)
 			goto out;
 	}
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 79a04417050d..38e2a8a536d3 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -4,6 +4,7 @@
 #include <linux/bpf.h>
 #include <linux/filter.h>
 #include <linux/ftrace.h>
+#include <linux/rbtree_latch.h>
 
 /* btf_vmlinux has ~22k attachable functions. 1k htab is enough. */
 #define TRAMPOLINE_HASH_BITS 10
@@ -14,7 +15,12 @@ static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
 /* serializes access to trampoline_table */
 static DEFINE_MUTEX(trampoline_mutex);
 
-void *bpf_jit_alloc_exec_page(void)
+static struct latch_tree_root image_tree __cacheline_aligned;
+
+/* serializes access to image_tree */
+static DEFINE_MUTEX(image_mutex);
+
+static void *bpf_jit_alloc_exec_page(void)
 {
 	void *image;
 
@@ -30,6 +36,68 @@ void *bpf_jit_alloc_exec_page(void)
 	return image;
 }
 
+static __always_inline bool image_tree_less(struct latch_tree_node *a,
+				      struct latch_tree_node *b)
+{
+	struct bpf_image *ia = container_of(a, struct bpf_image, tnode);
+	struct bpf_image *ib = container_of(b, struct bpf_image, tnode);
+
+	return ia < ib;
+}
+
+static __always_inline int image_tree_comp(void *addr, struct latch_tree_node *n)
+{
+	void *image = container_of(n, struct bpf_image, tnode);
+
+	if (addr < image)
+		return -1;
+	if (addr >= image + PAGE_SIZE)
+		return 1;
+
+	return 0;
+}
+
+static const struct latch_tree_ops image_tree_ops = {
+	.less	= image_tree_less,
+	.comp	= image_tree_comp,
+};
+
+void *bpf_image_alloc(void)
+{
+	struct bpf_image *image;
+
+	image = bpf_jit_alloc_exec_page();
+	if (!image)
+		return NULL;
+
+	mutex_lock(&image_mutex);
+	latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops);
+	mutex_unlock(&image_mutex);
+	return image->data;
+}
+
+void bpf_image_delete(void *ptr)
+{
+	struct bpf_image *image = container_of(ptr, struct bpf_image, data);
+
+	mutex_lock(&image_mutex);
+	latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops);
+	synchronize_rcu();
+	bpf_jit_free_exec(image);
+	mutex_unlock(&image_mutex);
+}
+
+bool is_bpf_image_address(unsigned long addr)
+{
+	bool ret;
+
+	rcu_read_lock();
+	ret = latch_tree_find((void *) addr, &image_tree, &image_tree_ops) != NULL;
+	rcu_read_unlock();
+
+	return ret;
+}
+
 struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
@@ -50,7 +118,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 		goto out;
 
 	/* is_root was checked earlier. No need for bpf_jit_charge_modmem() */
-	image = bpf_jit_alloc_exec_page();
+	image = bpf_image_alloc();
 	if (!image) {
 		kfree(tr);
 		tr = NULL;
@@ -125,14 +193,14 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 }
 
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
- * bytes on x86.  Pick a number to fit into PAGE_SIZE / 2
+ * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
  */
 #define BPF_MAX_TRAMP_PROGS 40
 
 static int bpf_trampoline_update(struct bpf_trampoline *tr)
 {
-	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
-	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
+	void *old_image = tr->image + ((tr->selector + 1) & 1) * BPF_IMAGE_SIZE/2;
+	void *new_image = tr->image + (tr->selector & 1) * BPF_IMAGE_SIZE/2;
 	struct bpf_prog *progs_to_run[BPF_MAX_TRAMP_PROGS];
 	int fentry_cnt = tr->progs_cnt[BPF_TRAMP_FENTRY];
 	int fexit_cnt = tr->progs_cnt[BPF_TRAMP_FEXIT];
@@ -160,7 +228,7 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	if (fexit_cnt)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
-	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
+	err = arch_prepare_bpf_trampoline(new_image, new_image + BPF_IMAGE_SIZE / 2,
 					  &tr->func.model, flags,
 					  fentry, fentry_cnt,
 					  fexit, fexit_cnt,
@@ -251,7 +319,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 		goto out;
 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
 		goto out;
-	bpf_jit_free_exec(tr->image);
+	bpf_image_delete(tr->image);
 	hlist_del(&tr->hlist);
 	kfree(tr);
 out:
diff --git a/kernel/extable.c b/kernel/extable.c
index f6920a11e28a..a0024f27d3a1 100644
--- a/kernel/extable.c
+++ b/kernel/extable.c
@@ -131,8 +131,9 @@ int kernel_text_address(unsigned long addr)
 	 * triggers a stack trace, or a WARN() that happens during
 	 * coming back from idle, or cpu on or offlining.
 	 *
-	 * is_module_text_address() as well as the kprobe slots
-	 * and is_bpf_text_address() require RCU to be watching.
+	 * is_module_text_address() as well as the kprobe slots,
+	 * is_bpf_text_address() and is_bpf_image_address() require
+	 * RCU to be watching.
 	 */
 	no_rcu = !rcu_is_watching();
 
@@ -148,6 +149,8 @@ int kernel_text_address(unsigned long addr)
 		goto out;
 	if (is_bpf_text_address(addr))
 		goto out;
+	if (is_bpf_image_address(addr))
+		goto out;
 	ret = 0;
 out:
 	if (no_rcu)
-- 
2.24.1



* [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count
  2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
                   ` (4 preceding siblings ...)
  2020-01-21 12:05 ` [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind Jiri Olsa
@ 2020-01-21 12:05 ` Jiri Olsa
  2020-01-22  0:10   ` Alexei Starovoitov
  5 siblings, 1 reply; 19+ messages in thread
From: Jiri Olsa @ 2020-01-21 12:05 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel,
	John Fastabend

There's a limit of 40 programs that can be attached to the
trampoline of a single function. Add a test that attaches that
many programs, plus one extra that must fail.
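The limit being exercised can be sketched as a toy model (not kernel code; the counter and function name below are made up). It mirrors what the trampoline link path enforces via BPF_MAX_TRAMP_PROGS:

```c
#include <assert.h>
#include <errno.h>

/* Toy model of the per-function attach limit -- not kernel code;
 * the counter and function below are made up.
 */
#define TOY_MAX_TRAMP_PROGS	40

static int toy_progs_cnt;

static int toy_trampoline_link_prog(void)
{
	if (toy_progs_cnt >= TOY_MAX_TRAMP_PROGS)
		return -E2BIG;		/* over the limit */
	toy_progs_cnt++;
	return 0;
}
```

The 41st attach fails with -E2BIG, which is exactly what the test's final CHECK verifies.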

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/trampoline_count.c         | 112 ++++++++++++++++++
 .../bpf/progs/test_trampoline_count.c         |  21 ++++
 2 files changed, 133 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/trampoline_count.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_trampoline_count.c

diff --git a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
new file mode 100644
index 000000000000..1235f3d1cc50
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#define _GNU_SOURCE
+#include <sched.h>
+#include <sys/prctl.h>
+#include <test_progs.h>
+
+#define MAX_TRAMP_PROGS 40
+
+struct inst {
+	struct bpf_object *obj;
+	struct bpf_link   *link_fentry;
+	struct bpf_link   *link_fexit;
+};
+
+static int test_task_rename(void)
+{
+	int fd, duration = 0, err;
+	char buf[] = "test_overhead";
+
+	fd = open("/proc/self/comm", O_WRONLY|O_TRUNC);
+	if (CHECK(fd < 0, "open /proc", "err %d", errno))
+		return -1;
+	err = write(fd, buf, sizeof(buf));
+	if (err < 0) {
+		CHECK(err < 0, "task rename", "err %d", errno);
+		close(fd);
+		return -1;
+	}
+	close(fd);
+	return 0;
+}
+
+static struct bpf_link *load(struct bpf_object *obj, const char *name)
+{
+	struct bpf_program *prog;
+	int duration = 0;
+
+	prog = bpf_object__find_program_by_title(obj, name);
+	if (CHECK(!prog, "find_probe", "prog '%s' not found\n", name))
+		return ERR_PTR(-EINVAL);
+	return bpf_program__attach_trace(prog);
+}
+
+void test_trampoline_count(void)
+{
+	const char *fentry_name = "fentry/__set_task_comm";
+	const char *fexit_name = "fexit/__set_task_comm";
+	const char *object = "test_trampoline_count.o";
+	struct inst inst[MAX_TRAMP_PROGS] = { 0 };
+	int err, i = 0, duration = 0;
+	struct bpf_object *obj;
+	struct bpf_link *link;
+	char comm[16] = {};
+
+	/* attach 'allowed' 40 trampoline programs */
+	for (i = 0; i < MAX_TRAMP_PROGS; i++) {
+		obj = bpf_object__open_file(object, NULL);
+		if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj)))
+			goto cleanup;
+
+		err = bpf_object__load(obj);
+		if (CHECK(err, "obj_load", "err %d\n", err))
+			goto cleanup;
+		inst[i].obj = obj;
+
+		if (rand() % 2) {
+			link = load(obj, fentry_name);
+			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link)))
+				goto cleanup;
+			inst[i].link_fentry = link;
+		} else {
+			link = load(obj, fexit_name);
+			if (CHECK(IS_ERR(link), "attach prog", "err %ld\n", PTR_ERR(link)))
+				goto cleanup;
+			inst[i].link_fexit = link;
+		}
+	}
+
+	/* and try 1 extra.. */
+	obj = bpf_object__open_file(object, NULL);
+	if (CHECK(IS_ERR(obj), "obj_open_file", "err %ld\n", PTR_ERR(obj)))
+		goto cleanup;
+
+	err = bpf_object__load(obj);
+	if (CHECK(err, "obj_load", "err %d\n", err))
+		goto cleanup_extra;
+
+	/* ..that needs to fail */
+	link = load(obj, fentry_name);
+	if (CHECK(!IS_ERR(link), "cannot attach over the limit", "err %ld\n", PTR_ERR(link))) {
+		bpf_link__destroy(link);
+		goto cleanup_extra;
+	}
+
+	/* with E2BIG error */
+	CHECK(PTR_ERR(link) != -E2BIG, "proper error check", "err %ld\n", PTR_ERR(link));
+
+	/* and finally execute the probe */
+	if (CHECK_FAIL(prctl(PR_GET_NAME, comm, 0L, 0L, 0L)))
+		goto cleanup_extra;
+	CHECK_FAIL(test_task_rename());
+	CHECK_FAIL(prctl(PR_SET_NAME, comm, 0L, 0L, 0L));
+
+cleanup_extra:
+	bpf_object__close(obj);
+cleanup:
+	while (--i >= 0) {
+		bpf_link__destroy(inst[i].link_fentry);
+		bpf_link__destroy(inst[i].link_fexit);
+		bpf_object__close(inst[i].obj);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/test_trampoline_count.c b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
new file mode 100644
index 000000000000..e51e6e3a81c2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_trampoline_count.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdbool.h>
+#include <stddef.h>
+#include <linux/bpf.h>
+#include "bpf_trace_helpers.h"
+
+struct task_struct;
+
+SEC("fentry/__set_task_comm")
+int BPF_PROG(prog1, struct task_struct *tsk, const char *buf, bool exec)
+{
+	return 0;
+}
+
+SEC("fexit/__set_task_comm")
+int BPF_PROG(prog2, struct task_struct *tsk, const char *buf, bool exec)
+{
+	return 0;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.24.1



* Re: [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc
  2020-01-21 12:05 ` [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc Jiri Olsa
@ 2020-01-22  0:03   ` Alexei Starovoitov
  2020-01-22  7:51     ` Jiri Olsa
  0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2020-01-22  0:03 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, netdev, bpf,
	Andrii Nakryiko, Yonghong Song, Martin KaFai Lau, Jakub Kicinski,
	David Miller, Björn Töpel, John Fastabend

On Tue, Jan 21, 2020 at 01:05:08PM +0100, Jiri Olsa wrote:
> Adding support to use perf_event_output in
> BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
> 
> Using nesting regs array from raw tracepoint helpers.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/trace/bpf_trace.c | 41 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 19e793aa441a..6a18e2ae6e30 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -1172,6 +1172,43 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  	}
>  }
>  
> +BPF_CALL_5(bpf_perf_event_output_kfunc, void *, ctx, struct bpf_map *, map,
> +	   u64, flags, void *, data, u64, size)
> +{
> +	struct pt_regs *regs = get_bpf_raw_tp_regs();
> +	int ret;
> +
> +	if (IS_ERR(regs))
> +		return PTR_ERR(regs);
> +
> +	perf_fetch_caller_regs(regs);
> +	ret = ____bpf_perf_event_output(regs, map, flags, data, size);
> +	put_bpf_raw_tp_regs();
> +	return ret;
> +}

I'm not sure why copy paste bpf_perf_event_output_raw_tp() into new function.

> @@ -1181,6 +1218,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  		return &bpf_skb_output_proto;
>  #endif
>  	default:
> +		if (prog->expected_attach_type == BPF_TRACE_FENTRY ||
> +		    prog->expected_attach_type == BPF_TRACE_FEXIT)
> +			return kfunc_prog_func_proto(func_id, prog);
> +
>  		return raw_tp_prog_func_proto(func_id, prog);

Are you saying bpf_perf_event_output_raw_tp() for some reason
didn't work for fentry/fexit?
But above is exact copy-paste and it somehow worked?

Ditto for patches 3,4.


* Re: [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count
  2020-01-21 12:05 ` [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count Jiri Olsa
@ 2020-01-22  0:10   ` Alexei Starovoitov
  2020-01-22  7:47     ` Jiri Olsa
  0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2020-01-22  0:10 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Network Development, bpf,
	Andrii Nakryiko, Yonghong Song, Martin KaFai Lau, Jakub Kicinski,
	David Miller, Björn Töpel, John Fastabend

On Tue, Jan 21, 2020 at 4:05 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> There's limit of 40 programs tht can be attached
> to trampoline for one function. Adding test that
> tries to attach that many plus one extra that needs
> to fail.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>

I don't mind another test. Just pointing out that there is one
for this purpose already :)
prog_tests/fexit_stress.c
Yours is better. Mine wasn't that sophisticated. :)


* Re: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-21 12:05 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
@ 2020-01-22  1:51   ` Alexei Starovoitov
  2020-01-22  2:33     ` Yonghong Song
  0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2020-01-22  1:51 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Network Development, bpf, Andrii Nakryiko, Yonghong Song,
	Martin KaFai Lau, Jakub Kicinski, David Miller,
	Björn Töpel

On Tue, Jan 21, 2020 at 4:05 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> When accessing the context we allow access to arguments with
> scalar type and pointer to struct. But we omit pointer to scalar
> type, which is the case for many functions and same case as
> when accessing scalar.
>
> Adding the check if the pointer is to scalar type and allow it.
>
> Acked-by: John Fastabend <john.fastabend@gmail.com>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/bpf/btf.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 832b5d7fd892..207ae554e0ce 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>                     const struct bpf_prog *prog,
>                     struct bpf_insn_access_aux *info)
>  {
> -       const struct btf_type *t = prog->aux->attach_func_proto;
> +       const struct btf_type *tp, *t = prog->aux->attach_func_proto;
>         struct bpf_prog *tgt_prog = prog->aux->linked_prog;
>         struct btf *btf = bpf_prog_get_target_btf(prog);
>         const char *tname = prog->aux->attach_func_name;
> @@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>                  */
>                 return true;
>
> +       tp = btf_type_by_id(btf, t->type);
> +       /* skip modifiers */
> +       while (btf_type_is_modifier(tp))
> +               tp = btf_type_by_id(btf, tp->type);
> +
> +       if (btf_type_is_int(tp) || btf_type_is_enum(tp))
> +               /* This is a pointer scalar.
> +                * It is the same as scalar from the verifier safety pov.
> +                */
> +               return true;

The reason I didn't do it earlier is I was thinking to represent it
as PTR_TO_BTF_ID as well, so that corresponding u8..u64
access into this memory would still be possible.
I'm trying to analyze the situation that returning a scalar now
and converting to PTR_TO_BTF_ID in the future will keep progs
passing the verifier. Is it really the case?
Could you give a specific example that needs this support?
It will help me understand this backward compatibility concern.
What prog is doing with that 'u32 *' that is seen as scalar ?
It cannot dereference it. Use it as what?

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-22  1:51   ` Alexei Starovoitov
@ 2020-01-22  2:33     ` Yonghong Song
  2020-01-22  9:13       ` Jiri Olsa
  0 siblings, 1 reply; 19+ messages in thread
From: Yonghong Song @ 2020-01-22  2:33 UTC (permalink / raw)
  To: Alexei Starovoitov, Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, John Fastabend,
	Network Development, bpf, Andrii Nakryiko, Martin Lau,
	Jakub Kicinski, David Miller, Björn Töpel



On 1/21/20 5:51 PM, Alexei Starovoitov wrote:
> On Tue, Jan 21, 2020 at 4:05 AM Jiri Olsa <jolsa@kernel.org> wrote:
>>
>> When accessing the context we allow access to arguments with
>> scalar type and pointer to struct. But we omit pointers to
>> scalar types, which many functions have and which are the
>> same case as accessing a scalar.
>>
>> Add a check for pointers to scalar types and allow access to them.
>>
>> Acked-by: John Fastabend <john.fastabend@gmail.com>
>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>> ---
>>   kernel/bpf/btf.c | 13 ++++++++++++-
>>   1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index 832b5d7fd892..207ae554e0ce 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>>                      const struct bpf_prog *prog,
>>                      struct bpf_insn_access_aux *info)
>>   {
>> -       const struct btf_type *t = prog->aux->attach_func_proto;
>> +       const struct btf_type *tp, *t = prog->aux->attach_func_proto;
>>          struct bpf_prog *tgt_prog = prog->aux->linked_prog;
>>          struct btf *btf = bpf_prog_get_target_btf(prog);
>>          const char *tname = prog->aux->attach_func_name;
>> @@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>>                   */
>>                  return true;
>>
>> +       tp = btf_type_by_id(btf, t->type);
>> +       /* skip modifiers */
>> +       while (btf_type_is_modifier(tp))
>> +               tp = btf_type_by_id(btf, tp->type);
>> +
>> +       if (btf_type_is_int(tp) || btf_type_is_enum(tp))
>> +               /* This is a pointer scalar.
>> +                * It is the same as scalar from the verifier safety pov.
>> +                */
>> +               return true;
> 
> The reason I didn't do it earlier is I was thinking to represent it
> as PTR_TO_BTF_ID as well, so that corresponding u8..u64
> access into this memory would still be possible.
> I'm trying to analyze the situation that returning a scalar now
> and converting to PTR_TO_BTF_ID in the future will keep progs
> passing the verifier. Is it really the case?
> Could you give a specific example that needs this support?
> It will help me understand this backward compatibility concern.
> What prog is doing with that 'u32 *' that is seen as scalar ?
> It cannot dereference it. Use it as what?

If this is from original bcc code, it will use bpf_probe_read for 
dereference. This is what I understand when I first reviewed this patch.
But it will be good to get Jiri's confirmation.
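That pattern can be illustrated with a small userspace sketch (hypothetical names, not the real BPF helper API): the program only ever holds the scalar-typed pointer as a plain value and hands it to a copying helper, never dereferencing it directly.

```c
#include <stdint.h>
#include <string.h>

/* Toy stand-in for bpf_probe_read(): copy 'size' bytes from an address
 * the program received as a plain scalar value. The real kernel helper
 * does a fault-safe copy; memcpy is enough to show the data flow. */
static long toy_probe_read(void *dst, uint32_t size, const void *unsafe_ptr)
{
	if (!dst || !unsafe_ptr)
		return -1;
	memcpy(dst, unsafe_ptr, size);
	return 0;
}

/* The verifier sees the argument as a scalar, so the program never
 * dereferences it directly; it only passes the value to the helper. */
static uint32_t read_u32_via_scalar(uint64_t arg_as_scalar)
{
	uint32_t out = 0;

	toy_probe_read(&out, sizeof(out),
		       (const void *)(uintptr_t)arg_as_scalar);
	return out;
}
```

This mirrors why treating the pointer as a scalar is safe from the verifier's point of view: the unsafe memory is only touched inside the helper.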


* Re: [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count
  2020-01-22  0:10   ` Alexei Starovoitov
@ 2020-01-22  7:47     ` Jiri Olsa
  0 siblings, 0 replies; 19+ messages in thread
From: Jiri Olsa @ 2020-01-22  7:47 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann,
	Network Development, bpf, Andrii Nakryiko, Yonghong Song,
	Martin KaFai Lau, Jakub Kicinski, David Miller,
	Björn Töpel, John Fastabend

On Tue, Jan 21, 2020 at 04:10:26PM -0800, Alexei Starovoitov wrote:
> On Tue, Jan 21, 2020 at 4:05 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > There's a limit of 40 programs that can be attached
> > to a trampoline for one function. Add a test that
> > tries to attach that many, plus one extra that must
> > fail.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> 
> I don't mind another test. Just pointing out that there is one
> for this purpose already :)
> prog_tests/fexit_stress.c
> Yours is better. Mine wasn't that sophisticated. :)

ok ;-) did not notice that one.. just wanted to be sure
the unwind change won't screw that

jirka



* Re: [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc
  2020-01-22  0:03   ` Alexei Starovoitov
@ 2020-01-22  7:51     ` Jiri Olsa
  0 siblings, 0 replies; 19+ messages in thread
From: Jiri Olsa @ 2020-01-22  7:51 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, netdev, bpf,
	Andrii Nakryiko, Yonghong Song, Martin KaFai Lau, Jakub Kicinski,
	David Miller, Björn Töpel, John Fastabend

On Tue, Jan 21, 2020 at 04:03:23PM -0800, Alexei Starovoitov wrote:
> On Tue, Jan 21, 2020 at 01:05:08PM +0100, Jiri Olsa wrote:
> > Adding support to use perf_event_output in
> > BPF_TRACE_FENTRY/BPF_TRACE_FEXIT programs.
> > 
> > Using nesting regs array from raw tracepoint helpers.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  kernel/trace/bpf_trace.c | 41 ++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 41 insertions(+)
> > 
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 19e793aa441a..6a18e2ae6e30 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -1172,6 +1172,43 @@ raw_tp_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  	}
> >  }
> >  
> > +BPF_CALL_5(bpf_perf_event_output_kfunc, void *, ctx, struct bpf_map *, map,
> > +	   u64, flags, void *, data, u64, size)
> > +{
> > +	struct pt_regs *regs = get_bpf_raw_tp_regs();
> > +	int ret;
> > +
> > +	if (IS_ERR(regs))
> > +		return PTR_ERR(regs);
> > +
> > +	perf_fetch_caller_regs(regs);
> > +	ret = ____bpf_perf_event_output(regs, map, flags, data, size);
> > +	put_bpf_raw_tp_regs();
> > +	return ret;
> > +}
> 
> I'm not sure why copy paste bpf_perf_event_output_raw_tp() into new function.
> 
> > @@ -1181,6 +1218,10 @@ tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  		return &bpf_skb_output_proto;
> >  #endif
> >  	default:
> > +		if (prog->expected_attach_type == BPF_TRACE_FENTRY ||
> > +		    prog->expected_attach_type == BPF_TRACE_FEXIT)
> > +			return kfunc_prog_func_proto(func_id, prog);
> > +
> >  		return raw_tp_prog_func_proto(func_id, prog);
> 
> Are you saying bpf_perf_event_output_raw_tp() for some reason
> didn't work for fentry/fexit?
> But above is exact copy-paste and it somehow worked?
> 
> Ditto for patches 3,4.

ugh right.. did not realize that after switching to the rawtp
regs nest arrays it's identical and we don't need that

jirka



* Re: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-22  2:33     ` Yonghong Song
@ 2020-01-22  9:13       ` Jiri Olsa
  2020-01-22 16:09         ` Alexei Starovoitov
  0 siblings, 1 reply; 19+ messages in thread
From: Jiri Olsa @ 2020-01-22  9:13 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Jiri Olsa, Alexei Starovoitov,
	Daniel Borkmann, John Fastabend, Network Development, bpf,
	Andrii Nakryiko, Martin Lau, Jakub Kicinski, David Miller,
	Björn Töpel

On Wed, Jan 22, 2020 at 02:33:32AM +0000, Yonghong Song wrote:
> 
> 
> On 1/21/20 5:51 PM, Alexei Starovoitov wrote:
> > On Tue, Jan 21, 2020 at 4:05 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >>
> >> When accessing the context we allow access to arguments with
> >> scalar type and pointer to struct. But we omit pointers to
> >> scalar types, which many functions have and which are the
> >> same case as accessing a scalar.
> >>
> >> Add a check for pointers to scalar types and allow access to them.
> >>
> >> Acked-by: John Fastabend <john.fastabend@gmail.com>
> >> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> >> ---
> >>   kernel/bpf/btf.c | 13 ++++++++++++-
> >>   1 file changed, 12 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> >> index 832b5d7fd892..207ae554e0ce 100644
> >> --- a/kernel/bpf/btf.c
> >> +++ b/kernel/bpf/btf.c
> >> @@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> >>                      const struct bpf_prog *prog,
> >>                      struct bpf_insn_access_aux *info)
> >>   {
> >> -       const struct btf_type *t = prog->aux->attach_func_proto;
> >> +       const struct btf_type *tp, *t = prog->aux->attach_func_proto;
> >>          struct bpf_prog *tgt_prog = prog->aux->linked_prog;
> >>          struct btf *btf = bpf_prog_get_target_btf(prog);
> >>          const char *tname = prog->aux->attach_func_name;
> >> @@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> >>                   */
> >>                  return true;
> >>
> >> +       tp = btf_type_by_id(btf, t->type);
> >> +       /* skip modifiers */
> >> +       while (btf_type_is_modifier(tp))
> >> +               tp = btf_type_by_id(btf, tp->type);
> >> +
> >> +       if (btf_type_is_int(tp) || btf_type_is_enum(tp))
> >> +               /* This is a pointer scalar.
> >> +                * It is the same as scalar from the verifier safety pov.
> >> +                */
> >> +               return true;
> > 
> > The reason I didn't do it earlier is I was thinking to represent it
> > as PTR_TO_BTF_ID as well, so that corresponding u8..u64
> > access into this memory would still be possible.
> > I'm trying to analyze the situation that returning a scalar now
> > and converting to PTR_TO_BTF_ID in the future will keep progs
> > passing the verifier. Is it really the case?
> > Could you give a specific example that needs this support?
> > It will help me understand this backward compatibility concern.
> > What prog is doing with that 'u32 *' that is seen as scalar ?
> > It cannot dereference it. Use it as what?
> 
> If this is from original bcc code, it will use bpf_probe_read for 
> dereference. This is what I understand when I first reviewed this patch.
> But it will be good to get Jiri's confirmation.

it blocked me from accessing 'filename' argument when I probed
do_sys_open via trampoline in bcc, like:

	KRETFUNC_PROBE(do_sys_open)
	{
	    const char *filename = (const char *) args[1];

AFAICS the current code does not allow trampoline arguments to
be pointers to anything other than void or struct; the patch
detects that the argument is a pointer to a scalar type and
lets it pass.

jirka



* Re: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-22  9:13       ` Jiri Olsa
@ 2020-01-22 16:09         ` Alexei Starovoitov
  2020-01-22 21:18           ` Jiri Olsa
  0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2020-01-22 16:09 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Yonghong Song, Jiri Olsa, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Network Development, bpf, Andrii Nakryiko,
	Martin Lau, Jakub Kicinski, David Miller, Björn Töpel

On Wed, Jan 22, 2020 at 10:13:36AM +0100, Jiri Olsa wrote:
> On Wed, Jan 22, 2020 at 02:33:32AM +0000, Yonghong Song wrote:
> > 
> > 
> > On 1/21/20 5:51 PM, Alexei Starovoitov wrote:
> > > On Tue, Jan 21, 2020 at 4:05 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >>
> > >> When accessing the context we allow access to arguments with
> > >> scalar type and pointer to struct. But we omit pointers to
> > >> scalar types, which many functions have and which are the
> > >> same case as accessing a scalar.
> > >>
> > >> Add a check for pointers to scalar types and allow access to them.
> > >>
> > >> Acked-by: John Fastabend <john.fastabend@gmail.com>
> > >> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > >> ---
> > >>   kernel/bpf/btf.c | 13 ++++++++++++-
> > >>   1 file changed, 12 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> > >> index 832b5d7fd892..207ae554e0ce 100644
> > >> --- a/kernel/bpf/btf.c
> > >> +++ b/kernel/bpf/btf.c
> > >> @@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > >>                      const struct bpf_prog *prog,
> > >>                      struct bpf_insn_access_aux *info)
> > >>   {
> > >> -       const struct btf_type *t = prog->aux->attach_func_proto;
> > >> +       const struct btf_type *tp, *t = prog->aux->attach_func_proto;
> > >>          struct bpf_prog *tgt_prog = prog->aux->linked_prog;
> > >>          struct btf *btf = bpf_prog_get_target_btf(prog);
> > >>          const char *tname = prog->aux->attach_func_name;
> > >> @@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > >>                   */
> > >>                  return true;
> > >>
> > >> +       tp = btf_type_by_id(btf, t->type);
> > >> +       /* skip modifiers */
> > >> +       while (btf_type_is_modifier(tp))
> > >> +               tp = btf_type_by_id(btf, tp->type);
> > >> +
> > >> +       if (btf_type_is_int(tp) || btf_type_is_enum(tp))
> > >> +               /* This is a pointer scalar.
> > >> +                * It is the same as scalar from the verifier safety pov.
> > >> +                */
> > >> +               return true;
> > > 
> > > The reason I didn't do it earlier is I was thinking to represent it
> > > as PTR_TO_BTF_ID as well, so that corresponding u8..u64
> > > access into this memory would still be possible.
> > > I'm trying to analyze the situation that returning a scalar now
> > > and converting to PTR_TO_BTF_ID in the future will keep progs
> > > passing the verifier. Is it really the case?
> > > Could you give a specific example that needs this support?
> > > It will help me understand this backward compatibility concern.
> > > What prog is doing with that 'u32 *' that is seen as scalar ?
> > > It cannot dereference it. Use it as what?
> > 
> > If this is from original bcc code, it will use bpf_probe_read for 
> > dereference. This is what I understand when I first reviewed this patch.
> > But it will be good to get Jiri's confirmation.
> 
> it blocked me from accessing 'filename' argument when I probed
> do_sys_open via trampoline in bcc, like:
> 
> 	KRETFUNC_PROBE(do_sys_open)
> 	{
> 	    const char *filename = (const char *) args[1];
> 
> AFAICS the current code does not allow trampoline arguments to
> be pointers to anything other than void or struct; the patch
> detects that the argument is a pointer to a scalar type and
> lets it pass.

Got it. I've looked up your bcc patches and I agree that there is no way to
workaround. BTF type argument of that kernel function is 'const char *' and the
verifier will enforce that if bpf program tries to cast it the verifier will
still see 'const char *'. (It's done this way by design). How about we special
case 'char *' in the verifier? Then my concern regarding future extensibility
of 'int *' and 'long *' will go away.
Compilers have a long history special casing 'char *'. In particular signed
char because it's a pointer to null terminated string. I think it's still a
special pointer from pointer aliasing point of view. I think the verifier can
treat it as scalar here too. In the future the verifier will get smarter and
will recognize it as PTR_TO_NULL_STRING while 'u8 *', 'u32 *' will be
PTR_TO_BTF_ID. I think it will solve this particular issue. I like conservative
approach to the verifier improvements: start with strict checking and relax it
on case-by-case. Instead of accepting wide range of cases and cause potential
compatibility issues.


* Re: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-22 16:09         ` Alexei Starovoitov
@ 2020-01-22 21:18           ` Jiri Olsa
  2020-01-23  1:16             ` Alexei Starovoitov
  0 siblings, 1 reply; 19+ messages in thread
From: Jiri Olsa @ 2020-01-22 21:18 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Yonghong Song, Jiri Olsa, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Network Development, bpf, Andrii Nakryiko,
	Martin Lau, Jakub Kicinski, David Miller, Björn Töpel

On Wed, Jan 22, 2020 at 08:09:59AM -0800, Alexei Starovoitov wrote:

SNIP

> > > > It cannot dereference it. Use it as what?
> > > 
> > > If this is from original bcc code, it will use bpf_probe_read for 
> > > dereference. This is what I understand when I first reviewed this patch.
> > > But it will be good to get Jiri's confirmation.
> > 
> > it blocked me from accessing 'filename' argument when I probed
> > do_sys_open via trampoline in bcc, like:
> > 
> > 	KRETFUNC_PROBE(do_sys_open)
> > 	{
> > 	    const char *filename = (const char *) args[1];
> > 
> > AFAICS the current code does not allow trampoline arguments to
> > be pointers to anything other than void or struct; the patch
> > detects that the argument is a pointer to a scalar type and
> > lets it pass.
> 
> Got it. I've looked up your bcc patches and I agree that there is no way to
> workaround. BTF type argument of that kernel function is 'const char *' and the
> verifier will enforce that if bpf program tries to cast it the verifier will
> still see 'const char *'. (It's done this way by design). How about we special
> case 'char *' in the verifier? Then my concern regarding future extensibility
> of 'int *' and 'long *' will go away.
> Compilers have a long history special casing 'char *'. In particular signed
> char because it's a pointer to null terminated string. I think it's still a
> special pointer from pointer aliasing point of view. I think the verifier can
> treat it as scalar here too. In the future the verifier will get smarter and
> will recognize it as PTR_TO_NULL_STRING while 'u8 *', 'u32 *' will be
> PTR_TO_BTF_ID. I think it will solve this particular issue. I like conservative
> approach to the verifier improvements: start with strict checking and relax it
> on case-by-case. Instead of accepting wide range of cases and cause potential
> compatibility issues.

ok, so something like below?

jirka


---
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 832b5d7fd892..dd678b8e00b7 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3664,6 +3664,19 @@ struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog)
 	}
 }
 
+static bool is_string_ptr(struct btf *btf, const struct btf_type *t)
+{
+	/* t comes in already as a pointer */
+	t = btf_type_by_id(btf, t->type);
+
+	/* allow const */
+	if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST)
+		t = btf_type_by_id(btf, t->type);
+
+	/* char, signed char, unsigned char */
+	return btf_type_is_int(t) && t->size == 1;
+}
+
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info)
@@ -3730,6 +3743,9 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		 */
 		return true;
 
+	if (is_string_ptr(btf, t))
+		return true;
+
 	/* this is a pointer to another type */
 	info->reg_type = PTR_TO_BTF_ID;
 

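The proposed check can be exercised outside the kernel with a toy model of the BTF type graph (the names and structures below are illustrative, not the kernel's BTF API): resolve the pointer's target, skip one CONST modifier, then accept any 1-byte integer as "char".

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy BTF: each type has a kind, a size (ints only) and a target id. */
enum { TOY_INT, TOY_PTR, TOY_CONST, TOY_STRUCT };

struct toy_type {
	int kind;
	uint32_t size; /* meaningful for TOY_INT only */
	int type;      /* referenced type id for TOY_PTR / TOY_CONST */
};

/* Mirrors the logic of is_string_ptr() above: 'ptr_id' names a pointer;
 * follow it, allow a const, then accept char / signed char / unsigned
 * char (any 1-byte int). */
static bool toy_is_string_ptr(const struct toy_type *types, int ptr_id)
{
	const struct toy_type *t = &types[types[ptr_id].type];

	if (t->kind == TOY_CONST)
		t = &types[t->type];

	return t->kind == TOY_INT && t->size == 1;
}
```

With a tiny type table, a `const char *` argument qualifies while a `u32 *` argument does not, matching the special case discussed above.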


* Re: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-22 21:18           ` Jiri Olsa
@ 2020-01-23  1:16             ` Alexei Starovoitov
  0 siblings, 0 replies; 19+ messages in thread
From: Alexei Starovoitov @ 2020-01-23  1:16 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Yonghong Song, Jiri Olsa, Alexei Starovoitov, Daniel Borkmann,
	John Fastabend, Network Development, bpf, Andrii Nakryiko,
	Martin Lau, Jakub Kicinski, David Miller, Björn Töpel

On Wed, Jan 22, 2020 at 1:18 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Jan 22, 2020 at 08:09:59AM -0800, Alexei Starovoitov wrote:
>
> SNIP
>
> > > > > It cannot dereference it. Use it as what?
> > > >
> > > > If this is from original bcc code, it will use bpf_probe_read for
> > > > dereference. This is what I understand when I first reviewed this patch.
> > > > But it will be good to get Jiri's confirmation.
> > >
> > > it blocked me from accessing 'filename' argument when I probed
> > > do_sys_open via trampoline in bcc, like:
> > >
> > >     KRETFUNC_PROBE(do_sys_open)
> > >     {
> > >         const char *filename = (const char *) args[1];
> > >
> > > AFAICS the current code does not allow trampoline arguments to
> > > be pointers to anything other than void or struct; the patch
> > > detects that the argument is a pointer to a scalar type and
> > > lets it pass.
> >
> > Got it. I've looked up your bcc patches and I agree that there is no way to
> > workaround. BTF type argument of that kernel function is 'const char *' and the
> > verifier will enforce that if bpf program tries to cast it the verifier will
> > still see 'const char *'. (It's done this way by design). How about we special
> > case 'char *' in the verifier? Then my concern regarding future extensibility
> > of 'int *' and 'long *' will go away.
> > Compilers have a long history special casing 'char *'. In particular signed
> > char because it's a pointer to null terminated string. I think it's still a
> > special pointer from pointer aliasing point of view. I think the verifier can
> > treat it as scalar here too. In the future the verifier will get smarter and
> > will recognize it as PTR_TO_NULL_STRING while 'u8 *', 'u32 *' will be
> > PTR_TO_BTF_ID. I think it will solve this particular issue. I like conservative
> > approach to the verifier improvements: start with strict checking and relax it
> > on case-by-case. Instead of accepting wide range of cases and cause potential
> > compatibility issues.
>
> ok, so something like below?
>
> jirka
>
>
> ---
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 832b5d7fd892..dd678b8e00b7 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3664,6 +3664,19 @@ struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog)
>         }
>  }
>
> +static bool is_string_ptr(struct btf *btf, const struct btf_type *t)
> +{
> +       /* t comes in already as a pointer */
> +       t = btf_type_by_id(btf, t->type);
> +
> +       /* allow const */
> +       if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST)
> +               t = btf_type_by_id(btf, t->type);
> +
> +       /* char, signed char, unsigned char */
> +       return btf_type_is_int(t) && t->size == 1;
> +}

yep. looks like btf doesn't distinguish signedness for chars.
So above is good.


* RE: [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-18 13:49 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
@ 2020-01-21  0:24   ` John Fastabend
  0 siblings, 0 replies; 19+ messages in thread
From: John Fastabend @ 2020-01-21  0:24 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

Jiri Olsa wrote:
> When accessing the context we allow access to arguments with
> scalar type and pointer to struct. But we omit pointers to
> scalar types, which many functions have and which are the
> same case as accessing a scalar.
>
> Add a check for pointers to scalar types and allow access to them.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/bpf/btf.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 832b5d7fd892..207ae554e0ce 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		    const struct bpf_prog *prog,
>  		    struct bpf_insn_access_aux *info)
>  {
> -	const struct btf_type *t = prog->aux->attach_func_proto;
> +	const struct btf_type *tp, *t = prog->aux->attach_func_proto;
>  	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
>  	struct btf *btf = bpf_prog_get_target_btf(prog);
>  	const char *tname = prog->aux->attach_func_name;
> @@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		 */
>  		return true;
>  
> +	tp = btf_type_by_id(btf, t->type);
> +	/* skip modifiers */
> +	while (btf_type_is_modifier(tp))
> +		tp = btf_type_by_id(btf, tp->type);
> +
> +	if (btf_type_is_int(tp) || btf_type_is_enum(tp))
> +		/* This is a pointer scalar.
> +		 * It is the same as scalar from the verifier safety pov.
> +		 */
> +		return true;
> +
>  	/* this is a pointer to another type */
>  	info->reg_type = PTR_TO_BTF_ID;
>  

Acked-by: John Fastabend <john.fastabend@gmail.com>


* [PATCH 1/6] bpf: Allow ctx access for pointers to scalar
  2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
@ 2020-01-18 13:49 ` Jiri Olsa
  2020-01-21  0:24   ` John Fastabend
  0 siblings, 1 reply; 19+ messages in thread
From: Jiri Olsa @ 2020-01-18 13:49 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Andrii Nakryiko, Yonghong Song, Martin KaFai Lau,
	Jakub Kicinski, David Miller, Björn Töpel

When accessing the context we allow access to arguments with
scalar type and pointer to struct. But we omit pointers to
scalar types, which many functions have and which are the
same case as accessing a scalar.

Add a check for pointers to scalar types and allow access to them.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/btf.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 832b5d7fd892..207ae554e0ce 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3668,7 +3668,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info)
 {
-	const struct btf_type *t = prog->aux->attach_func_proto;
+	const struct btf_type *tp, *t = prog->aux->attach_func_proto;
 	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
 	struct btf *btf = bpf_prog_get_target_btf(prog);
 	const char *tname = prog->aux->attach_func_name;
@@ -3730,6 +3730,17 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		 */
 		return true;
 
+	tp = btf_type_by_id(btf, t->type);
+	/* skip modifiers */
+	while (btf_type_is_modifier(tp))
+		tp = btf_type_by_id(btf, tp->type);
+
+	if (btf_type_is_int(tp) || btf_type_is_enum(tp))
+		/* This is a pointer scalar.
+		 * It is the same as scalar from the verifier safety pov.
+		 */
+		return true;
+
 	/* this is a pointer to another type */
 	info->reg_type = PTR_TO_BTF_ID;
 
-- 
2.24.1
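The modifier-skipping walk in the hunk above can be modeled in plain C (a toy type table, not the kernel's BTF API): follow the pointer's target, strip any chain of modifiers, then treat pointer-to-int/enum like a plain scalar.

```c
#include <stdbool.h>

/* Toy type kinds; const/volatile/typedef stand in for BTF modifiers. */
enum { TOY_INT, TOY_ENUM, TOY_PTR, TOY_CONST, TOY_VOLATILE,
       TOY_TYPEDEF, TOY_STRUCT };

struct toy_type {
	int kind;
	int type; /* referenced type id for TOY_PTR and modifiers */
};

static bool toy_is_modifier(int kind)
{
	return kind == TOY_CONST || kind == TOY_VOLATILE ||
	       kind == TOY_TYPEDEF;
}

/* Mirrors the patch: resolve the pointer's target, skip any number of
 * modifiers, then accept int or enum as a scalar-like target. */
static bool toy_ptr_to_scalar(const struct toy_type *types, int ptr_id)
{
	const struct toy_type *tp = &types[types[ptr_id].type];

	while (toy_is_modifier(tp->kind))
		tp = &types[tp->type];

	return tp->kind == TOY_INT || tp->kind == TOY_ENUM;
}
```

A pointer routed through several modifiers to an int is accepted, while a pointer to struct falls through to the PTR_TO_BTF_ID path, as in the hunk.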



end of thread, other threads:[~2020-01-23  1:16 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-21 12:05 [PATCHv3 0/6] bpf: Add trampoline helpers Jiri Olsa
2020-01-21 12:05 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
2020-01-22  1:51   ` Alexei Starovoitov
2020-01-22  2:33     ` Yonghong Song
2020-01-22  9:13       ` Jiri Olsa
2020-01-22 16:09         ` Alexei Starovoitov
2020-01-22 21:18           ` Jiri Olsa
2020-01-23  1:16             ` Alexei Starovoitov
2020-01-21 12:05 ` [PATCH 2/6] bpf: Add bpf_perf_event_output_kfunc Jiri Olsa
2020-01-22  0:03   ` Alexei Starovoitov
2020-01-22  7:51     ` Jiri Olsa
2020-01-21 12:05 ` [PATCH 3/6] bpf: Add bpf_get_stackid_kfunc Jiri Olsa
2020-01-21 12:05 ` [PATCH 4/6] bpf: Add bpf_get_stack_kfunc Jiri Olsa
2020-01-21 12:05 ` [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind Jiri Olsa
2020-01-21 12:05 ` [PATCH 6/6] selftest/bpf: Add test for allowed trampolines count Jiri Olsa
2020-01-22  0:10   ` Alexei Starovoitov
2020-01-22  7:47     ` Jiri Olsa
  -- strict thread matches above, loose matches on Subject: below --
2020-01-18 13:49 [PATCHv2 0/6] bpf: Add trampoline helpers Jiri Olsa
2020-01-18 13:49 ` [PATCH 1/6] bpf: Allow ctx access for pointers to scalar Jiri Olsa
2020-01-21  0:24   ` John Fastabend

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).