* [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach
@ 2021-08-26 19:38 Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 01/27] x86/ftrace: Remove extra orig rax move Jiri Olsa
                   ` (27 more replies)
  0 siblings, 28 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

hi,
sending a new version of the batch attach support; the previous
post is here [1].

The previous version could not attach a multi trampoline on top
of regular trampolines. This patchset addresses that, plus it
carries other fixes from the last review.

This patchset contains:
  1) patches (1-4) that fix ftrace graph tracing over functions
     with direct trampolines attached
  2) patches (5-8) that add a batch interface for ftrace direct function
     register/unregister/modify
  3) patches (9-27) that add support to attach a BPF program to multiple
     functions

The current functionality in a nutshell:
  - allows creating a 'multi trampoline' and using it to attach a single
    program over multiple functions
  - it's possible to attach a 'multi trampoline' on top of functions
    that already have a trampoline attached
  - once a 'multi trampoline' is created, its functions are locked and we:
       - do not allow attaching another 'multi trampoline' that partially
         intersects with an already attached multi trampoline
       - do not allow attaching another standard trampoline on any function
         covered by the 'multi trampoline'
       - do allow reusing the 'multi trampoline' and attaching another
         multi program to it

    These limitations are enforced to keep the implementation simple,
    because allowing multi trampolines to intersect would bring more
    complexity plus more ftrace direct API changes.

    It would probably be possible to allow attaching another standard
    trampoline to a 'multi trampoline' if needed.
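
    To make the locking rules concrete (a made-up example):

      multi trampoline M attached over {f1, f2, f3}:
        - attaching another multi trampoline over {f2, f4}  -> fails
          (partial intersection with M)
        - attaching a standard trampoline on f2             -> fails
        - attaching another multi program over {f1, f2, f3} -> ok
          (reuses M)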

Other v4 changes from the previous review:
  - more detailed changelogs in several patches
  - removed 'ip' argument assumption in verifier code,
    because we now have bpf_get_func_ip helper
  - moved 'multi_func' under other bools in bpf.h [Yonghong]
  - used static linker in selftests [Andrii]
  - added more tests
  - added btf__find_by_glob_kind for simplified glob matching
    instead of the previous glibc glob matching [Andrii]
  - used '__ksym' instead of resolving test functions [Andrii]
  - kept the single BPF_F_MULTI_FUNC flag instead of adding a
    new multi prog type, because that would be more complex
  - removed superfluous BPF_PROG_TYPE_TRACING/multi_func check
    from check_multi_prog_type [Yonghong]
  - kept link_create.iter_info_len as BPF_LINK_CREATE_LAST_FIELD
    [Yonghong]
  - define FTRACE_OPS_GRAPH_STUB 0 to make code look sane [Andrii]
  - removed BPF_LINK_UPDATE interface

Also available at:
  https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
  bpf/batch

thanks,
jirka


[1] https://lore.kernel.org/bpf/20210605111034.1810858-1-jolsa@kernel.org/

---
Jiri Olsa (25):
      x86/ftrace: Remove extra orig rax move
      tracing: Add trampoline/graph selftest
      ftrace: Add ftrace_add_rec_direct function
      ftrace: Add multi direct register/unregister interface
      ftrace: Add multi direct modify interface
      ftrace/samples: Add multi direct interface test module
      bpf: Add support to load multi func tracing program
      bpf: Add struct bpf_tramp_node layer
      bpf: Factor out bpf_trampoline_init function
      bpf: Factor out __bpf_trampoline_lookup function
      bpf: Factor out __bpf_trampoline_put function
      bpf: Change bpf_trampoline_get to return error pointer
      bpf, x64: Allow to use caller address from stack
      bpf: Add bpf_trampoline_multi_get/put functions
      bpf: Add multi trampoline attach support
      bpf, x64: Store properly return value for trampoline with multi func programs
      bpf: Attach multi trampoline with ftrace_ops
      libbpf: Add btf__find_by_glob_kind function
      libbpf: Add support to link multi func tracing program
      selftests/bpf: Add fentry multi func test
      selftests/bpf: Add fexit multi func test
      selftests/bpf: Add fentry/fexit multi func test
      selftests/bpf: Add mixed multi func test
      selftests/bpf: Add attach multi func test
      selftests/bpf: Add ret_mod multi func test

Steven Rostedt (VMware) (2):
      x86/ftrace: Remove fault protection code in prepare_ftrace_return
      x86/ftrace: Make function graph use ftrace directly

 arch/x86/Makefile                                                |   7 +++
 arch/x86/boot/compressed/Makefile                                |   4 ++
 arch/x86/include/asm/ftrace.h                                    |   9 +++-
 arch/x86/kernel/ftrace.c                                         |  71 +++++++++++++------------
 arch/x86/kernel/ftrace_64.S                                      |  30 +----------
 arch/x86/net/bpf_jit_comp.c                                      |  53 +++++++++++++++----
 drivers/firmware/efi/libstub/Makefile                            |   3 ++
 include/linux/bpf.h                                              |  44 ++++++++++++++--
 include/linux/ftrace.h                                           |  22 ++++++++
 include/uapi/linux/bpf.h                                         |  12 +++++
 kernel/bpf/core.c                                                |   2 +
 kernel/bpf/syscall.c                                             | 163 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
 kernel/bpf/trampoline.c                                          | 400 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--------------
 kernel/bpf/verifier.c                                            |   7 +--
 kernel/trace/fgraph.c                                            |   6 ++-
 kernel/trace/ftrace.c                                            | 214 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------
 kernel/trace/trace_selftest.c                                    |  49 ++++++++++++++++-
 samples/ftrace/Makefile                                          |   1 +
 samples/ftrace/ftrace-direct-multi.c                             |  52 ++++++++++++++++++
 tools/include/uapi/linux/bpf.h                                   |  12 +++++
 tools/lib/bpf/bpf.c                                              |   8 +++
 tools/lib/bpf/bpf.h                                              |   6 ++-
 tools/lib/bpf/btf.c                                              |  80 ++++++++++++++++++++++++++++
 tools/lib/bpf/btf.h                                              |   3 ++
 tools/lib/bpf/libbpf.c                                           |  72 +++++++++++++++++++++++++
 tools/testing/selftests/bpf/Makefile                             |   8 ++-
 tools/testing/selftests/bpf/prog_tests/modify_return.c           | 114 ++++++++++++++++++++++++++++++++++++++--
 tools/testing/selftests/bpf/prog_tests/multi_attach_check_test.c | 115 ++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/prog_tests/multi_fentry_fexit_test.c |  32 ++++++++++++
 tools/testing/selftests/bpf/prog_tests/multi_fentry_test.c       |  30 +++++++++++
 tools/testing/selftests/bpf/prog_tests/multi_fexit_test.c        |  31 +++++++++++
 tools/testing/selftests/bpf/prog_tests/multi_mixed_test.c        |  34 ++++++++++++
 tools/testing/selftests/bpf/progs/multi_attach_check.c           |  36 +++++++++++++
 tools/testing/selftests/bpf/progs/multi_attach_check_extra1.c    |  12 +++++
 tools/testing/selftests/bpf/progs/multi_attach_check_extra2.c    |  12 +++++
 tools/testing/selftests/bpf/progs/multi_check.c                  |  85 ++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/multi_fentry.c                 |  17 ++++++
 tools/testing/selftests/bpf/progs/multi_fentry_fexit.c           |  28 ++++++++++
 tools/testing/selftests/bpf/progs/multi_fexit.c                  |  20 +++++++
 tools/testing/selftests/bpf/progs/multi_mixed.c                  |  43 +++++++++++++++
 tools/testing/selftests/bpf/progs/multi_modify_return.c          |  17 ++++++
 41 files changed, 1799 insertions(+), 165 deletions(-)
 create mode 100644 samples/ftrace/ftrace-direct-multi.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_attach_check_test.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_fentry_fexit_test.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_fentry_test.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_fexit_test.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_mixed_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_attach_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_attach_check_extra1.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_attach_check_extra2.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_fentry.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_fentry_fexit.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_fexit.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_mixed.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_modify_return.c



* [PATCH bpf-next v4 01/27] x86/ftrace: Remove extra orig rax move
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
@ 2021-08-26 19:38 ` Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 02/27] x86/ftrace: Remove fault protection code in prepare_ftrace_return Jiri Olsa
                   ` (26 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

There's an identical move 2 lines earlier, making this one redundant.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/kernel/ftrace_64.S | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 7c273846c687..a8eb084a7a9a 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -251,7 +251,6 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
 	 * If ORIG_RAX is anything but zero, make this a call to that.
 	 * See arch_ftrace_set_direct_caller().
 	 */
-	movq ORIG_RAX(%rsp), %rax
 	testq	%rax, %rax
 SYM_INNER_LABEL(ftrace_regs_caller_jmp, SYM_L_GLOBAL)
 	jnz	1f
-- 
2.31.1



* [PATCH bpf-next v4 02/27] x86/ftrace: Remove fault protection code in prepare_ftrace_return
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 01/27] x86/ftrace: Remove extra orig rax move Jiri Olsa
@ 2021-08-26 19:38 ` Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 03/27] x86/ftrace: Make function graph use ftrace directly Jiri Olsa
                   ` (25 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

Remove the fault protection code used when writing the return_hooker
to the stack. As Steven noted:

> That protection was there from the beginning due to being "paranoid",
> considering ftrace was bricking network cards. But that protection
> would not have even protected against that.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/kernel/ftrace.c | 38 +++-----------------------------------
 1 file changed, 3 insertions(+), 35 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 1b3ce3b4a2a2..c555624da989 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -625,12 +625,10 @@ int ftrace_disable_ftrace_graph_caller(void)
  * Hook the return address and push it in the stack of return addrs
  * in current thread info.
  */
-void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
+void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
 			   unsigned long frame_pointer)
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
-	unsigned long old;
-	int faulted;
 
 	/*
 	 * When resuming from suspend-to-ram, this function can be indirectly
@@ -650,37 +648,7 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
-	/*
-	 * Protect against fault, even if it shouldn't
-	 * happen. This tool is too much intrusive to
-	 * ignore such a protection.
-	 */
-	asm volatile(
-		"1: " _ASM_MOV " (%[parent]), %[old]\n"
-		"2: " _ASM_MOV " %[return_hooker], (%[parent])\n"
-		"   movl $0, %[faulted]\n"
-		"3:\n"
-
-		".section .fixup, \"ax\"\n"
-		"4: movl $1, %[faulted]\n"
-		"   jmp 3b\n"
-		".previous\n"
-
-		_ASM_EXTABLE(1b, 4b)
-		_ASM_EXTABLE(2b, 4b)
-
-		: [old] "=&r" (old), [faulted] "=r" (faulted)
-		: [parent] "r" (parent), [return_hooker] "r" (return_hooker)
-		: "memory"
-	);
-
-	if (unlikely(faulted)) {
-		ftrace_graph_stop();
-		WARN_ON(1);
-		return;
-	}
-
-	if (function_graph_enter(old, self_addr, frame_pointer, parent))
-		*parent = old;
+	if (!function_graph_enter(*parent, ip, frame_pointer, parent))
+		*parent = return_hooker;
 }
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-- 
2.31.1



* [PATCH bpf-next v4 03/27] x86/ftrace: Make function graph use ftrace directly
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 01/27] x86/ftrace: Remove extra orig rax move Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 02/27] x86/ftrace: Remove fault protection code in prepare_ftrace_return Jiri Olsa
@ 2021-08-26 19:38 ` Jiri Olsa
  2021-08-26 19:38 ` [PATCH bpf-next v4 04/27] tracing: Add trampoline/graph selftest Jiri Olsa
                   ` (24 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>

We don't need a special hook for the graph tracer entry point;
instead we can use the graph_ops::func function to install
the return_hooker.

This moves the graph tracing setup _before_ the direct
trampoline prepares the stack, so the return_hooker will
be called when the direct trampoline is finished.

This simplifies the code, because we don't need to take the
direct trampoline setup into account when preparing the graph
tracer hook, and we can allow the function graph tracer on
entries registered with a direct trampoline.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/include/asm/ftrace.h |  9 +++++++--
 arch/x86/kernel/ftrace.c      | 37 ++++++++++++++++++++++++++++++++---
 arch/x86/kernel/ftrace_64.S   | 29 +--------------------------
 include/linux/ftrace.h        |  6 ++++++
 kernel/trace/fgraph.c         |  6 ++++--
 5 files changed, 52 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index 9f3130f40807..024d9797646e 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -57,6 +57,13 @@ arch_ftrace_get_regs(struct ftrace_regs *fregs)
 
 #define ftrace_instruction_pointer_set(fregs, _ip)	\
 	do { (fregs)->regs.ip = (_ip); } while (0)
+
+struct ftrace_ops;
+#define ftrace_graph_func ftrace_graph_func
+void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
+		       struct ftrace_ops *op, struct ftrace_regs *fregs);
+#else
+#define FTRACE_GRAPH_TRAMP_ADDR FTRACE_GRAPH_ADDR
 #endif
 
 #ifdef CONFIG_DYNAMIC_FTRACE
@@ -65,8 +72,6 @@ struct dyn_arch_ftrace {
 	/* No extra data needed for x86 */
 };
 
-#define FTRACE_GRAPH_TRAMP_ADDR FTRACE_GRAPH_ADDR
-
 #endif /*  CONFIG_DYNAMIC_FTRACE */
 #endif /* __ASSEMBLY__ */
 #endif /* CONFIG_FUNCTION_TRACER */
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index c555624da989..804fcc6ef2c7 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -527,7 +527,7 @@ static void *addr_from_call(void *ptr)
 	return ptr + CALL_INSN_SIZE + call.disp;
 }
 
-void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
+void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
 			   unsigned long frame_pointer);
 
 /*
@@ -541,7 +541,8 @@ static void *static_tramp_func(struct ftrace_ops *ops, struct dyn_ftrace *rec)
 	void *ptr;
 
 	if (ops && ops->trampoline) {
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) && \
+	defined(CONFIG_FUNCTION_GRAPH_TRACER)
 		/*
 		 * We only know about function graph tracer setting as static
 		 * trampoline.
@@ -589,8 +590,9 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops)
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-extern void ftrace_graph_call(void);
 
+#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+extern void ftrace_graph_call(void);
 static const char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
 {
 	return text_gen_insn(JMP32_INSN_OPCODE, (void *)ip, (void *)addr);
@@ -618,7 +620,17 @@ int ftrace_disable_ftrace_graph_caller(void)
 
 	return ftrace_mod_jmp(ip, &ftrace_stub);
 }
+#else /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
+int ftrace_enable_ftrace_graph_caller(void)
+{
+	return 0;
+}
 
+int ftrace_disable_ftrace_graph_caller(void)
+{
+	return 0;
+}
+#endif /* CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
 #endif /* !CONFIG_DYNAMIC_FTRACE */
 
 /*
@@ -629,6 +641,7 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
 			   unsigned long frame_pointer)
 {
 	unsigned long return_hooker = (unsigned long)&return_to_handler;
+	int bit;
 
 	/*
 	 * When resuming from suspend-to-ram, this function can be indirectly
@@ -648,7 +661,25 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
 	if (unlikely(atomic_read(&current->tracing_graph_pause)))
 		return;
 
+	bit = ftrace_test_recursion_trylock(ip, *parent);
+	if (bit < 0)
+		return;
+
 	if (!function_graph_enter(*parent, ip, frame_pointer, parent))
 		*parent = return_hooker;
+
+	ftrace_test_recursion_unlock(bit);
+}
+
+#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
+		       struct ftrace_ops *op, struct ftrace_regs *fregs)
+{
+	struct pt_regs *regs = &fregs->regs;
+	unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs);
+
+	prepare_ftrace_return(ip, (unsigned long *)stack, 0);
 }
+#endif
+
 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index a8eb084a7a9a..7a879901f103 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -174,11 +174,6 @@ SYM_INNER_LABEL(ftrace_caller_end, SYM_L_GLOBAL)
 SYM_FUNC_END(ftrace_caller);
 
 SYM_FUNC_START(ftrace_epilogue)
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
-	jmp ftrace_stub
-#endif
-
 /*
  * This is weak to keep gas from relaxing the jumps.
  * It is also used to copy the retq for trampolines.
@@ -288,15 +283,6 @@ SYM_FUNC_START(__fentry__)
 	cmpq $ftrace_stub, ftrace_trace_function
 	jnz trace
 
-fgraph_trace:
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	cmpq $ftrace_stub, ftrace_graph_return
-	jnz ftrace_graph_caller
-
-	cmpq $ftrace_graph_entry_stub, ftrace_graph_entry
-	jnz ftrace_graph_caller
-#endif
-
 SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
 	retq
 
@@ -314,25 +300,12 @@ trace:
 	CALL_NOSPEC r8
 	restore_mcount_regs
 
-	jmp fgraph_trace
+	jmp ftrace_stub
 SYM_FUNC_END(__fentry__)
 EXPORT_SYMBOL(__fentry__)
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
-SYM_FUNC_START(ftrace_graph_caller)
-	/* Saves rbp into %rdx and fills first parameter  */
-	save_mcount_regs
-
-	leaq MCOUNT_REG_SIZE+8(%rsp), %rsi
-	movq $0, %rdx	/* No framepointers needed */
-	call	prepare_ftrace_return
-
-	restore_mcount_regs
-
-	retq
-SYM_FUNC_END(ftrace_graph_caller)
-
 SYM_FUNC_START(return_to_handler)
 	subq  $24, %rsp
 
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index a69f363b61bf..9b218e59a608 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -614,6 +614,12 @@ void ftrace_modify_all_code(int command);
 extern void ftrace_graph_caller(void);
 extern int ftrace_enable_ftrace_graph_caller(void);
 extern int ftrace_disable_ftrace_graph_caller(void);
+#ifndef ftrace_graph_func
+#define ftrace_graph_func ftrace_stub
+#define FTRACE_OPS_GRAPH_STUB FTRACE_OPS_FL_STUB
+#else
+#define FTRACE_OPS_GRAPH_STUB 0
+#endif
 #else
 static inline int ftrace_enable_ftrace_graph_caller(void) { return 0; }
 static inline int ftrace_disable_ftrace_graph_caller(void) { return 0; }
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index b8a0d1d564fb..22061d38fc00 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -115,6 +115,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 {
 	struct ftrace_graph_ent trace;
 
+#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
 	/*
 	 * Skip graph tracing if the return location is served by direct trampoline,
 	 * since call sequence and return addresses are unpredictable anyway.
@@ -124,6 +125,7 @@ int function_graph_enter(unsigned long ret, unsigned long func,
 	if (ftrace_direct_func_count &&
 	    ftrace_find_rec_direct(ret - MCOUNT_INSN_SIZE))
 		return -EBUSY;
+#endif
 	trace.func = func;
 	trace.depth = ++current->curr_ret_depth;
 
@@ -333,10 +335,10 @@ unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx,
 #endif /* HAVE_FUNCTION_GRAPH_RET_ADDR_PTR */
 
 static struct ftrace_ops graph_ops = {
-	.func			= ftrace_stub,
+	.func			= ftrace_graph_func,
 	.flags			= FTRACE_OPS_FL_INITIALIZED |
 				   FTRACE_OPS_FL_PID |
-				   FTRACE_OPS_FL_STUB,
+				   FTRACE_OPS_GRAPH_STUB,
 #ifdef FTRACE_GRAPH_TRAMP_ADDR
 	.trampoline		= FTRACE_GRAPH_TRAMP_ADDR,
 	/* trampoline_size is only needed for dynamically allocated tramps */
-- 
2.31.1



* [PATCH bpf-next v4 04/27] tracing: Add trampoline/graph selftest
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (2 preceding siblings ...)
  2021-08-26 19:38 ` [PATCH bpf-next v4 03/27] x86/ftrace: Make function graph use ftrace directly Jiri Olsa
@ 2021-08-26 19:38 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 05/27] ftrace: Add ftrace_add_rec_direct function Jiri Olsa
                   ` (23 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:38 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Add a selftest checking that a direct trampoline can
co-exist with the graph tracer on the same function.

This is supported only with the CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
config option, which is currently defined only for x86_64.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/trace_selftest.c | 49 ++++++++++++++++++++++++++++++++++-
 1 file changed, 48 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index adf7ef194005..f8e55b949cdd 100644
--- a/kernel/trace/trace_selftest.c
+++ b/kernel/trace/trace_selftest.c
@@ -750,6 +750,8 @@ static struct fgraph_ops fgraph_ops __initdata  = {
 	.retfunc		= &trace_graph_return,
 };
 
+noinline __noclone static void trace_direct_tramp(void) { }
+
 /*
  * Pretty much the same than for the function tracer from which the selftest
  * has been borrowed.
@@ -760,6 +762,7 @@ trace_selftest_startup_function_graph(struct tracer *trace,
 {
 	int ret;
 	unsigned long count;
+	char *func_name __maybe_unused;
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 	if (ftrace_filter_param) {
@@ -808,8 +811,52 @@ trace_selftest_startup_function_graph(struct tracer *trace,
 		goto out;
 	}
 
-	/* Don't test dynamic tracing, the function tracer already did */
+#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+	tracing_reset_online_cpus(&tr->array_buffer);
+	set_graph_array(tr);
 
+	/*
+	 * Some archs *cough*PowerPC*cough* add characters to the
+	 * start of the function names. We simply put a '*' to
+	 * accommodate them.
+	 */
+	func_name = "*" __stringify(DYN_FTRACE_TEST_NAME);
+	ftrace_set_global_filter(func_name, strlen(func_name), 1);
+
+	/*
+	 * Register direct function together with graph tracer
+	 * and make sure we get graph trace.
+	 */
+	ret = register_ftrace_direct((unsigned long) DYN_FTRACE_TEST_NAME,
+				     (unsigned long) trace_direct_tramp);
+	if (ret)
+		goto out;
+
+	ret = register_ftrace_graph(&fgraph_ops);
+	if (ret) {
+		warn_failed_init_tracer(trace, ret);
+		goto out;
+	}
+
+	DYN_FTRACE_TEST_NAME();
+
+	count = 0;
+
+	tracing_stop();
+	/* check the trace buffer */
+	ret = trace_test_buffer(&tr->array_buffer, &count);
+
+	unregister_ftrace_graph(&fgraph_ops);
+
+	tracing_start();
+
+	if (!ret && !count) {
+		ret = -1;
+		goto out;
+	}
+#endif
+
+	/* Don't test dynamic tracing, the function tracer already did */
 out:
 	/* Stop it if we failed */
 	if (ret)
-- 
2.31.1



* [PATCH bpf-next v4 05/27] ftrace: Add ftrace_add_rec_direct function
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (3 preceding siblings ...)
  2021-08-26 19:38 ` [PATCH bpf-next v4 04/27] tracing: Add trampoline/graph selftest Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 06/27] ftrace: Add multi direct register/unregister interface Jiri Olsa
                   ` (22 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Factor out the code that adds an (ip, addr) tuple to the
direct_functions hash into a new ftrace_add_rec_direct function.
It will be used in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/ftrace.c | 60 ++++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 24 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7b180f61e6d3..c60217d81040 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2394,6 +2394,39 @@ unsigned long ftrace_find_rec_direct(unsigned long ip)
 	return entry->direct;
 }
 
+static struct ftrace_func_entry*
+ftrace_add_rec_direct(unsigned long ip, unsigned long addr,
+		      struct ftrace_hash **free_hash)
+{
+	struct ftrace_func_entry *entry;
+
+	if (ftrace_hash_empty(direct_functions) ||
+	    direct_functions->count > 2 * (1 << direct_functions->size_bits)) {
+		struct ftrace_hash *new_hash;
+		int size = ftrace_hash_empty(direct_functions) ? 0 :
+			direct_functions->count + 1;
+
+		if (size < 32)
+			size = 32;
+
+		new_hash = dup_hash(direct_functions, size);
+		if (!new_hash)
+			return NULL;
+
+		*free_hash = direct_functions;
+		direct_functions = new_hash;
+	}
+
+	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+	if (!entry)
+		return NULL;
+
+	entry->ip = ip;
+	entry->direct = addr;
+	__add_hash_entry(direct_functions, entry);
+	return entry;
+}
+
 static void call_direct_funcs(unsigned long ip, unsigned long pip,
 			      struct ftrace_ops *ops, struct ftrace_regs *fregs)
 {
@@ -5110,27 +5143,6 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 	}
 
 	ret = -ENOMEM;
-	if (ftrace_hash_empty(direct_functions) ||
-	    direct_functions->count > 2 * (1 << direct_functions->size_bits)) {
-		struct ftrace_hash *new_hash;
-		int size = ftrace_hash_empty(direct_functions) ? 0 :
-			direct_functions->count + 1;
-
-		if (size < 32)
-			size = 32;
-
-		new_hash = dup_hash(direct_functions, size);
-		if (!new_hash)
-			goto out_unlock;
-
-		free_hash = direct_functions;
-		direct_functions = new_hash;
-	}
-
-	entry = kmalloc(sizeof(*entry), GFP_KERNEL);
-	if (!entry)
-		goto out_unlock;
-
 	direct = ftrace_find_direct_func(addr);
 	if (!direct) {
 		direct = ftrace_alloc_direct_func(addr);
@@ -5140,9 +5152,9 @@ int register_ftrace_direct(unsigned long ip, unsigned long addr)
 		}
 	}
 
-	entry->ip = ip;
-	entry->direct = addr;
-	__add_hash_entry(direct_functions, entry);
+	entry = ftrace_add_rec_direct(ip, addr, &free_hash);
+	if (!entry)
+		goto out_unlock;
 
 	ret = ftrace_set_filter_ip(&direct_ops, ip, 0, 0);
 	if (ret)
-- 
2.31.1



* [PATCH bpf-next v4 06/27] ftrace: Add multi direct register/unregister interface
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (4 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 05/27] ftrace: Add ftrace_add_rec_direct function Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 07/27] ftrace: Add multi direct modify interface Jiri Olsa
                   ` (21 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Add an interface to register a direct function for multiple
functions within a single call, adding the following functions:

  register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
  unregister_ftrace_direct_multi(struct ftrace_ops *ops)

register_ftrace_direct_multi registers the direct function (addr)
for all functions in the ops filter. The ops filter can be populated
beforehand with ftrace_set_filter_ip calls.

None of the requested functions may currently have a direct function
registered, otherwise register_ftrace_direct_multi will fail.

unregister_ftrace_direct_multi unregisters all the direct functions
related to the ops.
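
Usage sketch (modeled on the sample module added later in this
series; my_tramp stands for a direct trampoline supplied by the
caller):

	static struct ftrace_ops ops;

	ftrace_set_filter_ip(&ops, (unsigned long) wake_up_process, 0, 0);
	ftrace_set_filter_ip(&ops, (unsigned long) schedule, 0, 0);

	/* attach my_tramp to both filtered functions at once */
	err = register_ftrace_direct_multi(&ops, (unsigned long) my_tramp);
	...
	/* detach it from all of them again */
	err = unregister_ftrace_direct_multi(&ops);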

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/ftrace.h |  10 ++++
 kernel/trace/ftrace.c  | 111 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 121 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 9b218e59a608..93d8f12e70b3 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -316,6 +316,8 @@ int ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
 				unsigned long old_addr,
 				unsigned long new_addr);
 unsigned long ftrace_find_rec_direct(unsigned long ip);
+int register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
+int unregister_ftrace_direct_multi(struct ftrace_ops *ops);
 #else
 # define ftrace_direct_func_count 0
 static inline int register_ftrace_direct(unsigned long ip, unsigned long addr)
@@ -346,6 +348,14 @@ static inline unsigned long ftrace_find_rec_direct(unsigned long ip)
 {
 	return 0;
 }
+static inline int register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
+{
+	return -ENODEV;
+}
+static inline int unregister_ftrace_direct_multi(struct ftrace_ops *ops)
+{
+	return -ENODEV;
+}
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
 
 #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index c60217d81040..7243769493c9 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5407,6 +5407,117 @@ int modify_ftrace_direct(unsigned long ip,
 	return ret;
 }
 EXPORT_SYMBOL_GPL(modify_ftrace_direct);
+
+#define MULTI_FLAGS (FTRACE_OPS_FL_IPMODIFY | FTRACE_OPS_FL_DIRECT | \
+		     FTRACE_OPS_FL_SAVE_REGS)
+
+static int check_direct_multi(struct ftrace_ops *ops)
+{
+	if (!(ops->flags & FTRACE_OPS_FL_INITIALIZED))
+		return -EINVAL;
+	if ((ops->flags & MULTI_FLAGS) != MULTI_FLAGS)
+		return -EINVAL;
+	return 0;
+}
+
+int register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
+{
+	struct ftrace_hash *hash, *free_hash = NULL;
+	struct ftrace_func_entry *entry, *new;
+	int err = -EBUSY, size, i;
+
+	if (ops->func || ops->trampoline)
+		return -EINVAL;
+	if (!(ops->flags & FTRACE_OPS_FL_INITIALIZED))
+		return -EINVAL;
+	if (ops->flags & FTRACE_OPS_FL_ENABLED)
+		return -EINVAL;
+
+	hash = ops->func_hash->filter_hash;
+	if (ftrace_hash_empty(hash))
+		return -EINVAL;
+
+	mutex_lock(&direct_mutex);
+
+	/* Make sure the requested entries are not already registered. */
+	size = 1 << hash->size_bits;
+	for (i = 0; i < size; i++) {
+		hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+			if (ftrace_find_rec_direct(entry->ip))
+				goto out_unlock;
+		}
+	}
+
+	/* ... and insert them into the direct_functions hash. */
+	err = -ENOMEM;
+	for (i = 0; i < size; i++) {
+		hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+			new = ftrace_add_rec_direct(entry->ip, addr, &free_hash);
+			if (!new)
+				goto out_remove;
+			entry->direct = addr;
+		}
+	}
+
+	ops->func = call_direct_funcs;
+	ops->flags = MULTI_FLAGS;
+	ops->trampoline = FTRACE_REGS_ADDR;
+
+	err = register_ftrace_function(ops);
+
+ out_remove:
+	if (err) {
+		for (i = 0; i < size; i++) {
+			hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+				new = __ftrace_lookup_ip(direct_functions, entry->ip);
+				if (new) {
+					remove_hash_entry(direct_functions, new);
+					kfree(new);
+				}
+			}
+		}
+	}
+
+ out_unlock:
+	mutex_unlock(&direct_mutex);
+
+	if (free_hash) {
+		synchronize_rcu_tasks();
+		free_ftrace_hash(free_hash);
+	}
+	return err;
+}
+EXPORT_SYMBOL_GPL(register_ftrace_direct_multi);
+
+int unregister_ftrace_direct_multi(struct ftrace_ops *ops)
+{
+	struct ftrace_hash *hash = ops->func_hash->filter_hash;
+	struct ftrace_func_entry *entry, *new;
+	int err, size, i;
+
+	if (check_direct_multi(ops))
+		return -EINVAL;
+	if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
+		return -EINVAL;
+
+	mutex_lock(&direct_mutex);
+	err = unregister_ftrace_function(ops);
+
+	size = 1 << hash->size_bits;
+	for (i = 0; i < size; i++) {
+		hlist_for_each_entry(entry, &hash->buckets[i], hlist) {
+			new = __ftrace_lookup_ip(direct_functions, entry->ip);
+			if (new) {
+				remove_hash_entry(direct_functions, new);
+				kfree(new);
+			}
+		}
+	}
+
+	mutex_unlock(&direct_mutex);
+	return err;
+}
+EXPORT_SYMBOL_GPL(unregister_ftrace_direct_multi);
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
 
 /**
-- 
2.31.1



* [PATCH bpf-next v4 07/27] ftrace: Add multi direct modify interface
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (5 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 06/27] ftrace: Add multi direct register/unregister interface Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 08/27] ftrace/samples: Add multi direct interface test module Jiri Olsa
                   ` (20 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Add an interface to modify the registered direct function
for an ftrace_ops, adding the following function:

   modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)

The function changes the currently registered direct
function to addr for all attached functions.
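
Usage sketch (a hypothetical continuation of the
register_ftrace_direct_multi example; my_tramp2 stands for another
direct trampoline supplied by the caller):

	/* ops is already attached to my_tramp via
	 * register_ftrace_direct_multi()
	 */
	err = modify_ftrace_direct_multi(&ops, (unsigned long) my_tramp2);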

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/ftrace.h |  6 ++++++
 kernel/trace/ftrace.c  | 43 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 93d8f12e70b3..63ca0a424947 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -318,6 +318,8 @@ int ftrace_modify_direct_caller(struct ftrace_func_entry *entry,
 unsigned long ftrace_find_rec_direct(unsigned long ip);
 int register_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
 int unregister_ftrace_direct_multi(struct ftrace_ops *ops);
+int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr);
+
 #else
 # define ftrace_direct_func_count 0
 static inline int register_ftrace_direct(unsigned long ip, unsigned long addr)
@@ -356,6 +358,10 @@ int unregister_ftrace_direct_multi(struct ftrace_ops *ops)
 {
 	return -ENODEV;
 }
+static inline int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
+{
+	return -ENODEV;
+}
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
 
 #ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7243769493c9..59940a6a907c 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5518,6 +5518,49 @@ int unregister_ftrace_direct_multi(struct ftrace_ops *ops)
 	return err;
 }
 EXPORT_SYMBOL_GPL(unregister_ftrace_direct_multi);
+
+int modify_ftrace_direct_multi(struct ftrace_ops *ops, unsigned long addr)
+{
+	struct ftrace_hash *hash = ops->func_hash->filter_hash;
+	struct ftrace_func_entry *entry, *iter;
+	int i, size;
+	int err;
+
+	if (check_direct_multi(ops))
+		return -EINVAL;
+	if (!(ops->flags & FTRACE_OPS_FL_ENABLED))
+		return -EINVAL;
+
+	mutex_lock(&direct_mutex);
+	mutex_lock(&ftrace_lock);
+
+	/*
+	 * Shut down the ops, change the 'direct' pointer for each
+	 * of its entries in the direct_functions hash, and start
+	 * the ops back up again.
+	 */
+	err = ftrace_shutdown(ops, 0);
+	if (err)
+		goto out_unlock;
+
+	size = 1 << hash->size_bits;
+	for (i = 0; i < size; i++) {
+		hlist_for_each_entry(iter, &hash->buckets[i], hlist) {
+			entry = __ftrace_lookup_ip(direct_functions, iter->ip);
+			if (!entry)
+				continue;
+			entry->direct = addr;
+		}
+	}
+
+	err = ftrace_startup(ops, 0);
+
+ out_unlock:
+	mutex_unlock(&ftrace_lock);
+	mutex_unlock(&direct_mutex);
+	return err;
+}
+EXPORT_SYMBOL_GPL(modify_ftrace_direct_multi);
 #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
 
 /**
-- 
2.31.1



* [PATCH bpf-next v4 08/27] ftrace/samples: Add multi direct interface test module
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (6 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 07/27] ftrace: Add multi direct modify interface Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program Jiri Olsa
                   ` (19 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Add a simple module that uses the multi direct interface:

  register_ftrace_direct_multi
  unregister_ftrace_direct_multi

The init function registers the trampoline for 2 functions
(wake_up_process and schedule), and the exit function
unregisters it.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 samples/ftrace/Makefile              |  1 +
 samples/ftrace/ftrace-direct-multi.c | 52 ++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
 create mode 100644 samples/ftrace/ftrace-direct-multi.c

diff --git a/samples/ftrace/Makefile b/samples/ftrace/Makefile
index 4ce896e10b2e..ab1d1c05c288 100644
--- a/samples/ftrace/Makefile
+++ b/samples/ftrace/Makefile
@@ -3,6 +3,7 @@
 obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct.o
 obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-too.o
 obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-modify.o
+obj-$(CONFIG_SAMPLE_FTRACE_DIRECT) += ftrace-direct-multi.o
 
 CFLAGS_sample-trace-array.o := -I$(src)
 obj-$(CONFIG_SAMPLE_TRACE_ARRAY) += sample-trace-array.o
diff --git a/samples/ftrace/ftrace-direct-multi.c b/samples/ftrace/ftrace-direct-multi.c
new file mode 100644
index 000000000000..76b34d46d11c
--- /dev/null
+++ b/samples/ftrace/ftrace-direct-multi.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/module.h>
+
+#include <linux/mm.h> /* for handle_mm_fault() */
+#include <linux/ftrace.h>
+#include <linux/sched/stat.h>
+
+void my_direct_func(unsigned long ip)
+{
+	trace_printk("ip %lx\n", ip);
+}
+
+extern void my_tramp(void *);
+
+asm (
+"	.pushsection    .text, \"ax\", @progbits\n"
+"	.type		my_tramp, @function\n"
+"	.globl		my_tramp\n"
+"   my_tramp:"
+"	pushq %rbp\n"
+"	movq %rsp, %rbp\n"
+"	pushq %rdi\n"
+"	movq 8(%rbp), %rdi\n"
+"	call my_direct_func\n"
+"	popq %rdi\n"
+"	leave\n"
+"	ret\n"
+"	.size		my_tramp, .-my_tramp\n"
+"	.popsection\n"
+);
+
+static struct ftrace_ops direct;
+
+static int __init ftrace_direct_multi_init(void)
+{
+	ftrace_set_filter_ip(&direct, (unsigned long) wake_up_process, 0, 0);
+	ftrace_set_filter_ip(&direct, (unsigned long) schedule, 0, 0);
+
+	return register_ftrace_direct_multi(&direct, (unsigned long) my_tramp);
+}
+
+static void __exit ftrace_direct_multi_exit(void)
+{
+	unregister_ftrace_direct_multi(&direct);
+}
+
+module_init(ftrace_direct_multi_init);
+module_exit(ftrace_direct_multi_exit);
+
+MODULE_AUTHOR("Jiri Olsa");
+MODULE_DESCRIPTION("Example use case of using register_ftrace_direct_multi()");
+MODULE_LICENSE("GPL");
-- 
2.31.1



* [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (7 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 08/27] ftrace/samples: Add multi direct interface test module Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-31 23:17   ` Andrii Nakryiko
  2021-08-26 19:39 ` [PATCH bpf-next v4 10/27] bpf: Add struct bpf_tramp_node layer Jiri Olsa
                   ` (18 subsequent siblings)
  27 siblings, 1 reply; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Add support to load a tracing program with the new BPF_F_MULTI_FUNC
flag, which allows the program to be loaded without a specific
function to attach to.

Such a program will be allowed to attach to multiple functions
in the following patches.
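
A rough user-space sketch of the load step (only the fields relevant
here are shown; insns, license etc. are omitted, and the attach type
is just an example):

	union bpf_attr attr = {};

	attr.prog_type = BPF_PROG_TYPE_TRACING;
	attr.expected_attach_type = BPF_TRACE_FENTRY;
	/* no attach_btf_id needed, the program is not tied
	 * to a specific function at load time
	 */
	attr.prog_flags = BPF_F_MULTI_FUNC;

	fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));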

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h            |  1 +
 include/uapi/linux/bpf.h       |  7 +++++++
 kernel/bpf/syscall.c           | 35 +++++++++++++++++++++++++++++-----
 kernel/bpf/verifier.c          |  3 ++-
 tools/include/uapi/linux/bpf.h |  7 +++++++
 5 files changed, 47 insertions(+), 6 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f4c16f19f83e..dc9838d741ac 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -865,6 +865,7 @@ struct bpf_prog_aux {
 	bool func_proto_unreliable;
 	bool sleepable;
 	bool tail_call_reachable;
+	bool multi_func;
 	struct hlist_node tramp_hlist;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 791f31dd0abe..1f9d336861f0 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1110,6 +1110,13 @@ enum bpf_link_type {
  */
 #define BPF_F_SLEEPABLE		(1U << 4)
 
+/* If BPF_F_MULTI_FUNC is used in the BPF_PROG_LOAD command, the verifier
+ * does not expect a BTF ID for the program; instead it assumes it's a
+ * function with 6 u64 arguments. No trampoline is created for the program.
+ * Such a program can be attached to multiple functions.
+ */
+#define BPF_F_MULTI_FUNC	(1U << 5)
+
 /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
  * the following extensions:
  *
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4e50c0bfdb7d..fa3f93c423d8 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -31,6 +31,7 @@
 #include <linux/bpf-netns.h>
 #include <linux/rcupdate_trace.h>
 #include <linux/memcontrol.h>
+#include <linux/btf_ids.h>
 
 #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
 			  (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \
@@ -2012,7 +2013,8 @@ static int
 bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 			   enum bpf_attach_type expected_attach_type,
 			   struct btf *attach_btf, u32 btf_id,
-			   struct bpf_prog *dst_prog)
+			   struct bpf_prog *dst_prog,
+			   bool multi_func)
 {
 	if (btf_id) {
 		if (btf_id > BTF_MAX_TYPE)
@@ -2032,6 +2034,14 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 		}
 	}
 
+	if (multi_func) {
+		if (prog_type != BPF_PROG_TYPE_TRACING)
+			return -EINVAL;
+		if (!attach_btf || btf_id)
+			return -EINVAL;
+		return 0;
+	}
+
 	if (attach_btf && (!btf_id || dst_prog))
 		return -EINVAL;
 
@@ -2155,6 +2165,16 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
 	}
 }
 
+#define DEFINE_BPF_MULTI_FUNC(args...)			\
+	extern int bpf_multi_func(args);		\
+	int __init bpf_multi_func(args) { return 0; }
+
+DEFINE_BPF_MULTI_FUNC(unsigned long a1, unsigned long a2,
+		      unsigned long a3, unsigned long a4,
+		      unsigned long a5, unsigned long a6)
+
+BTF_ID_LIST_SINGLE(bpf_multi_func_btf_id, func, bpf_multi_func)
+
 /* last field in 'union bpf_attr' used by this command */
 #define	BPF_PROG_LOAD_LAST_FIELD fd_array
 
@@ -2165,6 +2185,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 	struct btf *attach_btf = NULL;
 	int err;
 	char license[128];
+	bool multi_func;
 	bool is_gpl;
 
 	if (CHECK_ATTR(BPF_PROG_LOAD))
@@ -2174,7 +2195,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 				 BPF_F_ANY_ALIGNMENT |
 				 BPF_F_TEST_STATE_FREQ |
 				 BPF_F_SLEEPABLE |
-				 BPF_F_TEST_RND_HI32))
+				 BPF_F_TEST_RND_HI32 |
+				 BPF_F_MULTI_FUNC))
 		return -EINVAL;
 
 	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) &&
@@ -2205,6 +2227,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 	if (is_perfmon_prog_type(type) && !perfmon_capable())
 		return -EPERM;
 
+	multi_func = attr->prog_flags & BPF_F_MULTI_FUNC;
+
 	/* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog
 	 * or btf, we need to check which one it is
 	 */
@@ -2223,7 +2247,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 				return -ENOTSUPP;
 			}
 		}
-	} else if (attr->attach_btf_id) {
+	} else if (attr->attach_btf_id || multi_func) {
 		/* fall back to vmlinux BTF, if BTF type ID is specified */
 		attach_btf = bpf_get_btf_vmlinux();
 		if (IS_ERR(attach_btf))
@@ -2236,7 +2260,7 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 	bpf_prog_load_fixup_attach_type(attr);
 	if (bpf_prog_load_check_attach(type, attr->expected_attach_type,
 				       attach_btf, attr->attach_btf_id,
-				       dst_prog)) {
+				       dst_prog, multi_func)) {
 		if (dst_prog)
 			bpf_prog_put(dst_prog);
 		if (attach_btf)
@@ -2256,10 +2280,11 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->aux->attach_btf = attach_btf;
-	prog->aux->attach_btf_id = attr->attach_btf_id;
+	prog->aux->attach_btf_id = multi_func ? bpf_multi_func_btf_id[0] : attr->attach_btf_id;
 	prog->aux->dst_prog = dst_prog;
 	prog->aux->offload_requested = !!attr->prog_ifindex;
 	prog->aux->sleepable = attr->prog_flags & BPF_F_SLEEPABLE;
+	prog->aux->multi_func = multi_func;
 
 	err = security_bpf_prog_alloc(prog->aux);
 	if (err)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 206c221453cf..e9e84adfb974 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13628,7 +13628,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		if (!bpf_iter_prog_supported(prog))
 			return -EINVAL;
 		return 0;
-	}
+	} else if (prog->aux->multi_func)
+		return prog->type == BPF_PROG_TYPE_TRACING ? 0 : -EINVAL;
 
 	if (prog->type == BPF_PROG_TYPE_LSM) {
 		ret = bpf_lsm_verify_prog(&env->log, prog);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 791f31dd0abe..1f9d336861f0 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1110,6 +1110,13 @@ enum bpf_link_type {
  */
 #define BPF_F_SLEEPABLE		(1U << 4)
 
+/* If BPF_F_MULTI_FUNC is used in the BPF_PROG_LOAD command, the verifier
+ * does not expect a BTF ID for the program; instead it assumes it's a
+ * function with 6 u64 arguments. No trampoline is created for the program.
+ * Such a program can be attached to multiple functions.
+ */
+#define BPF_F_MULTI_FUNC	(1U << 5)
+
 /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
  * the following extensions:
  *
-- 
2.31.1



* [PATCH bpf-next v4 10/27] bpf: Add struct bpf_tramp_node layer
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (8 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 11/27] bpf: Factor out bpf_trampoline_init function Jiri Olsa
                   ` (17 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Currently each trampoline holds a list of programs that
are attached to it. With multi func attach support we need
a way for a single program to be connected to multiple
trampolines.

Add a struct bpf_tramp_node object that holds a bpf_prog
pointer, so the program can be resolved directly. We can now
have multiple struct bpf_tramp_node objects attached to
different trampolines, all pointing to a single bpf_prog.
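
Conceptually (a sketch only; the allocation of the extra nodes for
multi attach comes in later patches of this series):

	/* two nodes attached to two trampolines, both
	 * resolving to the same program
	 */
	node1->prog = prog;
	node2->prog = prog;

	bpf_trampoline_link_prog(node1, tr1);
	bpf_trampoline_link_prog(node2, tr2);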

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h     | 15 ++++++++++-----
 kernel/bpf/core.c       |  1 +
 kernel/bpf/syscall.c    |  4 ++--
 kernel/bpf/trampoline.c | 22 ++++++++++++----------
 4 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index dc9838d741ac..f0f548f8f391 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -656,6 +656,11 @@ struct bpf_tramp_image {
 	};
 };
 
+struct bpf_tramp_node {
+	struct hlist_node hlist;
+	struct bpf_prog *prog;
+};
+
 struct bpf_trampoline {
 	/* hlist for trampoline_table */
 	struct hlist_node hlist;
@@ -717,8 +722,8 @@ static __always_inline __nocfi unsigned int bpf_dispatcher_nop_func(
 	return bpf_func(ctx, insnsi);
 }
 #ifdef CONFIG_BPF_JIT
-int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr);
-int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr);
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node, struct bpf_trampoline *tr);
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node, struct bpf_trampoline *tr);
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
 					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
@@ -767,12 +772,12 @@ void bpf_ksym_del(struct bpf_ksym *ksym);
 int bpf_jit_charge_modmem(u32 pages);
 void bpf_jit_uncharge_modmem(u32 pages);
 #else
-static inline int bpf_trampoline_link_prog(struct bpf_prog *prog,
+static inline int bpf_trampoline_link_prog(struct bpf_tramp_node *node,
 					   struct bpf_trampoline *tr)
 {
 	return -ENOTSUPP;
 }
-static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog,
+static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node,
 					     struct bpf_trampoline *tr)
 {
 	return -ENOTSUPP;
@@ -866,7 +871,7 @@ struct bpf_prog_aux {
 	bool sleepable;
 	bool tail_call_reachable;
 	bool multi_func;
-	struct hlist_node tramp_hlist;
+	struct bpf_tramp_node tramp_node;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 9f4636d021b1..bad03dde97a2 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -104,6 +104,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
 	fp->aux = aux;
 	fp->aux->prog = fp;
 	fp->jit_requested = ebpf_jit_enabled();
+	fp->aux->tramp_node.prog = fp;
 
 	INIT_LIST_HEAD_RCU(&fp->aux->ksym.lnode);
 	mutex_init(&fp->aux->used_maps_mutex);
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index fa3f93c423d8..e667d392cc33 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2625,7 +2625,7 @@ static void bpf_tracing_link_release(struct bpf_link *link)
 	struct bpf_tracing_link *tr_link =
 		container_of(link, struct bpf_tracing_link, link);
 
-	WARN_ON_ONCE(bpf_trampoline_unlink_prog(link->prog,
+	WARN_ON_ONCE(bpf_trampoline_unlink_prog(&link->prog->aux->tramp_node,
 						tr_link->trampoline));
 
 	bpf_trampoline_put(tr_link->trampoline);
@@ -2813,7 +2813,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	if (err)
 		goto out_unlock;
 
-	err = bpf_trampoline_link_prog(prog, tr);
+	err = bpf_trampoline_link_prog(&prog->aux->tramp_node, tr);
 	if (err) {
 		bpf_link_cleanup(&link_primer);
 		link = NULL;
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index fe1e857324e6..525fa74c2f62 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -174,8 +174,8 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 static struct bpf_tramp_progs *
 bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
-	const struct bpf_prog_aux *aux;
 	struct bpf_tramp_progs *tprogs;
+	struct bpf_tramp_node *node;
 	struct bpf_prog **progs;
 	int kind;
 
@@ -189,9 +189,9 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_a
 		*total += tr->progs_cnt[kind];
 		progs = tprogs[kind].progs;
 
-		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist) {
-			*ip_arg |= aux->prog->call_get_func_ip;
-			*progs++ = aux->prog;
+		hlist_for_each_entry(node, &tr->progs_hlist[kind], hlist) {
+			*ip_arg |= node->prog->call_get_func_ip;
+			*progs++ = node->prog;
 		}
 	}
 	return tprogs;
@@ -410,8 +410,9 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
 	}
 }
 
-int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+int bpf_trampoline_link_prog(struct bpf_tramp_node *node, struct bpf_trampoline *tr)
 {
+	struct bpf_prog *prog = node->prog;
 	enum bpf_tramp_prog_type kind;
 	int err = 0;
 	int cnt;
@@ -441,16 +442,16 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
 		err = -E2BIG;
 		goto out;
 	}
-	if (!hlist_unhashed(&prog->aux->tramp_hlist)) {
+	if (!hlist_unhashed(&node->hlist)) {
 		/* prog already linked */
 		err = -EBUSY;
 		goto out;
 	}
-	hlist_add_head(&prog->aux->tramp_hlist, &tr->progs_hlist[kind]);
+	hlist_add_head(&node->hlist, &tr->progs_hlist[kind]);
 	tr->progs_cnt[kind]++;
 	err = bpf_trampoline_update(tr);
 	if (err) {
-		hlist_del_init(&prog->aux->tramp_hlist);
+		hlist_del_init(&node->hlist);
 		tr->progs_cnt[kind]--;
 	}
 out:
@@ -459,8 +460,9 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
 }
 
 /* bpf_trampoline_unlink_prog() should never fail. */
-int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
+int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node, struct bpf_trampoline *tr)
 {
+	struct bpf_prog *prog = node->prog;
 	enum bpf_tramp_prog_type kind;
 	int err;
 
@@ -473,7 +475,7 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
 		tr->extension_prog = NULL;
 		goto out;
 	}
-	hlist_del_init(&prog->aux->tramp_hlist);
+	hlist_del_init(&node->hlist);
 	tr->progs_cnt[kind]--;
 	err = bpf_trampoline_update(tr);
 out:
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 11/27] bpf: Factor out bpf_trampoline_init function
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (9 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 10/27] bpf: Add struct bpf_tramp_node layer Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 12/27] bpf: Factor out __bpf_trampoline_lookup function Jiri Olsa
                   ` (16 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Separating out the bpf_trampoline_init function, so it can
be used from other places in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/trampoline.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 525fa74c2f62..f44899c9698a 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -58,11 +58,22 @@ void bpf_image_ksym_del(struct bpf_ksym *ksym)
 			   PAGE_SIZE, true, ksym->name);
 }
 
+static void bpf_trampoline_init(struct bpf_trampoline *tr, u64 key)
+{
+	int i;
+
+	tr->key = key;
+	INIT_HLIST_NODE(&tr->hlist);
+	refcount_set(&tr->refcnt, 1);
+	mutex_init(&tr->mutex);
+	for (i = 0; i < BPF_TRAMP_MAX; i++)
+		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
+}
+
 static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
 	struct hlist_head *head;
-	int i;
 
 	mutex_lock(&trampoline_mutex);
 	head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
@@ -75,14 +86,8 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	tr = kzalloc(sizeof(*tr), GFP_KERNEL);
 	if (!tr)
 		goto out;
-
-	tr->key = key;
-	INIT_HLIST_NODE(&tr->hlist);
+	bpf_trampoline_init(tr, key);
 	hlist_add_head(&tr->hlist, head);
-	refcount_set(&tr->refcnt, 1);
-	mutex_init(&tr->mutex);
-	for (i = 0; i < BPF_TRAMP_MAX; i++)
-		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 out:
 	mutex_unlock(&trampoline_mutex);
 	return tr;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 12/27] bpf: Factor out __bpf_trampoline_lookup function
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (10 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 11/27] bpf: Factor out bpf_trampoline_init function Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 13/27] bpf: Factor out __bpf_trampoline_put function Jiri Olsa
                   ` (15 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Separating out the __bpf_trampoline_lookup function, so it can
be used from other places in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/trampoline.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index f44899c9698a..6dba43266e0b 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -70,23 +70,37 @@ static void bpf_trampoline_init(struct bpf_trampoline *tr, u64 key)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 }
 
-static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+static struct bpf_trampoline *__bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
 	struct hlist_head *head;
 
-	mutex_lock(&trampoline_mutex);
+	lockdep_assert_held(&trampoline_mutex);
+
 	head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
 	hlist_for_each_entry(tr, head, hlist) {
-		if (tr->key == key) {
-			refcount_inc(&tr->refcnt);
-			goto out;
-		}
+		if (tr->key == key)
+			return tr;
+	}
+	return NULL;
+}
+
+static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+{
+	struct bpf_trampoline *tr;
+	struct hlist_head *head;
+
+	mutex_lock(&trampoline_mutex);
+	tr = __bpf_trampoline_lookup(key);
+	if (tr) {
+		refcount_inc(&tr->refcnt);
+		goto out;
 	}
 	tr = kzalloc(sizeof(*tr), GFP_KERNEL);
 	if (!tr)
 		goto out;
 	bpf_trampoline_init(tr, key);
+	head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
 	hlist_add_head(&tr->hlist, head);
 out:
 	mutex_unlock(&trampoline_mutex);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 13/27] bpf: Factor out __bpf_trampoline_put function
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (11 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 12/27] bpf: Factor out __bpf_trampoline_lookup function Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 14/27] bpf: Change bpf_trampoline_get to return error pointer Jiri Olsa
                   ` (14 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Separating out the __bpf_trampoline_put function, so it can
be used from other places in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/trampoline.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 6dba43266e0b..8aa0aca38b3a 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -522,18 +522,16 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
 	return tr;
 }
 
-void bpf_trampoline_put(struct bpf_trampoline *tr)
+static void __bpf_trampoline_put(struct bpf_trampoline *tr)
 {
-	if (!tr)
-		return;
-	mutex_lock(&trampoline_mutex);
+	lockdep_assert_held(&trampoline_mutex);
 	if (!refcount_dec_and_test(&tr->refcnt))
-		goto out;
+		return;
 	WARN_ON_ONCE(mutex_is_locked(&tr->mutex));
 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FENTRY])))
-		goto out;
+		return;
 	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
-		goto out;
+		return;
 	/* This code will be executed even when the last bpf_tramp_image
 	 * is alive. All progs are detached from the trampoline and the
 	 * trampoline image is patched with jmp into epilogue to skip
@@ -542,7 +540,14 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 	 */
 	hlist_del(&tr->hlist);
 	kfree(tr);
-out:
+}
+
+void bpf_trampoline_put(struct bpf_trampoline *tr)
+{
+	if (!tr)
+		return;
+	mutex_lock(&trampoline_mutex);
+	__bpf_trampoline_put(tr);
 	mutex_unlock(&trampoline_mutex);
 }
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 14/27] bpf: Change bpf_trampoline_get to return error pointer
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (12 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 13/27] bpf: Factor out __bpf_trampoline_put function Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 15/27] bpf, x64: Allow to use caller address from stack Jiri Olsa
                   ` (13 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Changing bpf_trampoline_get to return an error pointer,
so we can return errors other than ENOMEM in the
following changes.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/bpf/syscall.c    | 4 ++--
 kernel/bpf/trampoline.c | 8 +++++---
 kernel/bpf/verifier.c   | 4 ++--
 3 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e667d392cc33..537687664bdf 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2793,8 +2793,8 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 			goto out_unlock;
 
 		tr = bpf_trampoline_get(key, &tgt_info);
-		if (!tr) {
-			err = -ENOMEM;
+		if (IS_ERR(tr)) {
+			err = PTR_ERR(tr);
 			goto out_unlock;
 		}
 	} else {
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 8aa0aca38b3a..c9794e9f24ee 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -97,8 +97,10 @@ static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 		goto out;
 	}
 	tr = kzalloc(sizeof(*tr), GFP_KERNEL);
-	if (!tr)
+	if (!tr) {
+		tr = ERR_PTR(-ENOMEM);
 		goto out;
+	}
 	bpf_trampoline_init(tr, key);
 	head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
 	hlist_add_head(&tr->hlist, head);
@@ -508,8 +510,8 @@ struct bpf_trampoline *bpf_trampoline_get(u64 key,
 	struct bpf_trampoline *tr;
 
 	tr = bpf_trampoline_lookup(key);
-	if (!tr)
-		return NULL;
+	if (IS_ERR(tr))
+		return tr;
 
 	mutex_lock(&tr->mutex);
 	if (tr->func.addr)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e9e84adfb974..ad410e1222e4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13642,8 +13642,8 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 
 	key = bpf_trampoline_compute_key(tgt_prog, prog->aux->attach_btf, btf_id);
 	tr = bpf_trampoline_get(key, &tgt_info);
-	if (!tr)
-		return -ENOMEM;
+	if (IS_ERR(tr))
+		return PTR_ERR(tr);
 
 	prog->aux->dst_trampoline = tr;
 	return 0;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 15/27] bpf, x64: Allow to use caller address from stack
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (13 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 14/27] bpf: Change bpf_trampoline_get to return error pointer Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 16/27] bpf: Add bpf_trampoline_multi_get/put functions Jiri Olsa
                   ` (12 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Currently we call the original function by using the absolute address
given at JIT generation time. That's not usable when the trampoline is
attached to multiple functions. In this case we need to take the
return address from the stack.

Adding support to retrieve the original function address from the
stack, by adding a new BPF_TRAMP_F_ORIG_STACK flag for the
arch_prepare_bpf_trampoline function.

Basically we take the return address of the 'fentry' call:

   function + 0: call fentry    # stores 'function + 5' address on stack
   function + 5: ...

The 'function + 5' address will be used as the address for the
original function to call.
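
On x86 this is just two emitted instructions; a rough sketch of what
the JIT generates under BPF_TRAMP_F_ORIG_STACK (mirroring the
emit_ldx/EMIT2 calls below, with BPF_REG_0 mapping to rax):

   mov rax, qword ptr [rbp + 8]   /* the stored 'function + 5' address */
   call rax                       /* call the original function */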

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 13 +++++++++----
 include/linux/bpf.h         |  5 +++++
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 0fe6aacef3db..9f31197780ae 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2024,10 +2024,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		restore_regs(m, &prog, nr_args, stack_size);
 
-		/* call original function */
-		if (emit_call(&prog, orig_call, prog)) {
-			ret = -EINVAL;
-			goto cleanup;
+		if (flags & BPF_TRAMP_F_ORIG_STACK) {
+			emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
+			EMIT2(0xff, 0xd0); /* call *rax */
+		} else {
+			/* call original function */
+			if (emit_call(&prog, orig_call, prog)) {
+				ret = -EINVAL;
+				goto cleanup;
+			}
 		}
 		/* remember return value in a stack for bpf prog to access */
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f0f548f8f391..a5c3307d49c6 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -584,6 +584,11 @@ struct btf_func_model {
  */
 #define BPF_TRAMP_F_IP_ARG		BIT(3)
 
+/* Get original function from stack instead of from provided direct address.
+ * Makes sense for fexit programs only.
+ */
+#define BPF_TRAMP_F_ORIG_STACK		BIT(4)
+
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
  * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
  */
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 16/27] bpf: Add bpf_trampoline_multi_get/put functions
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (14 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 15/27] bpf, x64: Allow to use caller address from stack Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support Jiri Olsa
                   ` (11 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a struct bpf_trampoline_multi object and an API to
allocate and free it.

The multi bpf trampoline is defined by BTF ids that represent
the functions that the trampoline will be attached to.

By calling bpf_trampoline_multi_get you allocate a new (or get
an existing) bpf_trampoline_multi object, with the following rules:

  - multi trampolines' BTF ids can't intersect
  - a multi trampoline can attach to functions that already have
    a standard program attached
  - standard programs can't attach to functions that have a multi
    trampoline attached

The multi trampoline contains pointers to all 'nested' standard
trampolines and a 'main' standard trampoline object (with key == 0)
that represents the rest of the functions.
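
A minimal sketch of the intended get/put pairing (hypothetical caller;
per the code below, ownership of the kmalloc'ed 'ids' array passes to
the multi trampoline on success, while on error the caller keeps it):

  struct bpf_trampoline_multi *multi;

  multi = bpf_trampoline_multi_get(prog, ids, ids_cnt);
  if (IS_ERR(multi))
          return PTR_ERR(multi);   /* caller still owns 'ids' */
  /* ... link multi func programs to it ... */
  bpf_trampoline_multi_put(multi); /* frees 'ids' on last reference */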

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h     |  15 ++++
 kernel/bpf/trampoline.c | 179 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 194 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a5c3307d49c6..678b9cd2fa21 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -691,6 +691,18 @@ struct bpf_trampoline {
 	struct bpf_tramp_image *cur_image;
 	u64 selector;
 	struct module *mod;
+	struct {
+		struct bpf_trampoline *tr;
+	} multi;
+};
+
+struct bpf_trampoline_multi {
+	struct bpf_trampoline main;
+	struct list_head list;
+	u32 *ids;
+	u32 ids_cnt;
+	int tr_cnt;
+	struct bpf_trampoline *tr[];
 };
 
 struct bpf_attach_target_info {
@@ -732,6 +744,9 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node, struct bpf_trampolin
 struct bpf_trampoline *bpf_trampoline_get(u64 key,
 					  struct bpf_attach_target_info *tgt_info);
 void bpf_trampoline_put(struct bpf_trampoline *tr);
+struct bpf_trampoline_multi *bpf_trampoline_multi_get(struct bpf_prog *prog, u32 *ids,
+						      u32 ids_cnt);
+void bpf_trampoline_multi_put(struct bpf_trampoline_multi *multi);
 #define BPF_DISPATCHER_INIT(_name) {				\
 	.mutex = __MUTEX_INITIALIZER(_name.mutex),		\
 	.func = &_name##_func,					\
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index c9794e9f24ee..d66b76c23d74 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -10,6 +10,9 @@
 #include <linux/rcupdate_trace.h>
 #include <linux/rcupdate_wait.h>
 #include <linux/module.h>
+#include <linux/bsearch.h>
+#include <linux/bpf_verifier.h>
+#include <linux/sort.h>
 
 /* dummy _ops. The verifier will operate on target program's ops. */
 const struct bpf_verifier_ops bpf_extension_verifier_ops = {
@@ -22,6 +25,7 @@ const struct bpf_prog_ops bpf_extension_prog_ops = {
 #define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
 
 static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
+static LIST_HEAD(trampoline_multi);
 
 /* serializes access to trampoline_table */
 static DEFINE_MUTEX(trampoline_mutex);
@@ -85,12 +89,41 @@ static struct bpf_trampoline *__bpf_trampoline_lookup(u64 key)
 	return NULL;
 }
 
+static int btf_ids_cmp(const void *a, const void *b)
+{
+	const u32 *x = a;
+	const u32 *y = b;
+
+	if (*x == *y)
+		return 0;
+	return *x < *y ? -1 : 1;
+}
+
+static bool is_in_multi(u64 key)
+{
+	struct bpf_trampoline_multi *multi;
+	u32 id;
+
+	bpf_trampoline_unpack_key(key, NULL, &id);
+
+	list_for_each_entry(multi, &trampoline_multi, list) {
+		if (bsearch(&id, multi->ids, multi->ids_cnt,
+			    sizeof(u32), btf_ids_cmp))
+			return true;
+	}
+	return false;
+}
+
 static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
 	struct hlist_head *head;
 
 	mutex_lock(&trampoline_mutex);
+	if (is_in_multi(key)) {
+		tr = ERR_PTR(-EBUSY);
+		goto out;
+	}
 	tr = __bpf_trampoline_lookup(key);
 	if (tr) {
 		refcount_inc(&tr->refcnt);
@@ -553,6 +586,152 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
 	mutex_unlock(&trampoline_mutex);
 }
 
+static void bpf_func_model_nargs(struct btf_func_model *m, int nr_args)
+{
+	int i;
+
+	for (i = 0; i < nr_args; i++)
+		m->arg_size[i] = 8;
+	m->ret_size = 8;
+	m->nr_args = nr_args;
+}
+
+static struct bpf_trampoline *lookup_trampoline(struct bpf_prog *prog, u32 id)
+{
+	u64 key = bpf_trampoline_compute_key(NULL, prog->aux->attach_btf, id);
+
+	return __bpf_trampoline_lookup(key);
+}
+
+struct bpf_trampoline_multi *bpf_trampoline_multi_get(struct bpf_prog *prog,
+						      u32 *ids, u32 ids_cnt)
+{
+	int i, j, tr_cnt = 0, err = 0;
+	struct bpf_trampoline_multi *multi;
+	struct bpf_trampoline *tr;
+	u8 nr_args = 0;
+	size_t size;
+
+	/* Sort user provided BTF ids, so we can use memcmp
+	 * and bsearch below.
+	 */
+	sort(ids, ids_cnt, sizeof(u32), btf_ids_cmp, NULL);
+
+	mutex_lock(&trampoline_mutex);
+	/* Check if the requested multi trampoline already exists. */
+	list_for_each_entry(multi, &trampoline_multi, list) {
+		if (ids_cnt == multi->ids_cnt && !memcmp(ids, multi->ids, ids_cnt * sizeof(u32))) {
+			refcount_inc(&multi->main.refcnt);
+			kfree(ids);
+			goto out;
+		}
+		for (i = 0; i < ids_cnt; i++) {
+			if (bsearch(&ids[i], multi->ids, multi->ids_cnt,
+				    sizeof(u32), btf_ids_cmp)) {
+				multi = ERR_PTR(-EINVAL);
+				goto out;
+			}
+		}
+	}
+
+	/* Check if any of the requested functions have already standard
+	 * trampoline attached.
+	 */
+	for (i = 0; i < ids_cnt; i++) {
+		tr = lookup_trampoline(prog, ids[i]);
+		if (!tr)
+			continue;
+		if (tr->multi.tr) {
+			multi = ERR_PTR(-EBUSY);
+			goto out;
+		}
+		tr_cnt++;
+	}
+
+	/* Create new multi trampoline ... */
+	size = sizeof(*multi) + tr_cnt * sizeof(multi->tr[0]);
+	multi = kzalloc(size, GFP_KERNEL);
+	if (!multi) {
+		multi = ERR_PTR(-ENOMEM);
+		goto out;
+	}
+
+	bpf_trampoline_init(&multi->main, 0);
+	multi->tr_cnt = tr_cnt;
+	multi->ids = ids;
+	multi->ids_cnt = ids_cnt;
+	list_add_tail(&multi->list, &trampoline_multi);
+
+	for (i = 0; i < ids_cnt; i++) {
+		struct bpf_attach_target_info tgt_info = {};
+
+		tr = lookup_trampoline(prog, ids[i]);
+		if (tr)
+			continue;
+
+		err = bpf_check_attach_target(NULL, prog, NULL, ids[i], &tgt_info);
+		if (err)
+			goto out_free;
+
+		err = -EINVAL;
+		if (!is_ftrace_location((void *) tgt_info.tgt_addr))
+			goto out_free;
+
+		if (nr_args < tgt_info.fmodel.nr_args)
+			nr_args = tgt_info.fmodel.nr_args;
+	}
+
+	bpf_func_model_nargs(&multi->main.func.model, nr_args);
+
+	/* ... and attach already existing standard trampolines. */
+	for (i = 0, j = 0; i < ids_cnt && j < tr_cnt; i++) {
+		tr = lookup_trampoline(prog, ids[i]);
+		if (tr) {
+			refcount_inc(&tr->refcnt);
+			tr->multi.tr = &multi->main;
+			multi->tr[j++] = tr;
+		}
+	}
+
+out_free:
+	if (err) {
+		list_del(&multi->list);
+		kfree(multi);
+		multi = ERR_PTR(err);
+	}
+out:
+	mutex_unlock(&trampoline_mutex);
+	return multi;
+}
+
+void bpf_trampoline_multi_put(struct bpf_trampoline_multi *multi)
+{
+	int i;
+
+	if (!multi)
+		return;
+
+	mutex_lock(&trampoline_mutex);
+	if (!refcount_dec_and_test(&multi->main.refcnt))
+		goto out;
+
+	if (WARN_ON_ONCE(!hlist_empty(&multi->main.progs_hlist[BPF_TRAMP_FENTRY])))
+		goto out;
+	if (WARN_ON_ONCE(!hlist_empty(&multi->main.progs_hlist[BPF_TRAMP_FEXIT])))
+		goto out;
+
+	list_del(&multi->list);
+
+	for (i = 0; i < multi->tr_cnt; i++) {
+		multi->tr[i]->multi.tr = NULL;
+		__bpf_trampoline_put(multi->tr[i]);
+	}
+	kfree(multi->ids);
+	kfree(multi);
+out:
+	mutex_unlock(&trampoline_mutex);
+}
+
 #define NO_START_TIME 1
 static u64 notrace bpf_prog_start_time(void)
 {
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (15 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 16/27] bpf: Add bpf_trampoline_multi_get/put functions Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-31 23:36   ` Andrii Nakryiko
  2021-08-26 19:39 ` [PATCH bpf-next v4 18/27] bpf, x64: Properly store return value for trampoline with multi func programs Jiri Olsa
                   ` (10 subsequent siblings)
  27 siblings, 1 reply; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a new multi trampoline link (BPF_LINK_TYPE_TRACING_MULTI)
as an interface to attach a program to multiple functions.

The link_create bpf_attr interface already has the 'bpf_prog' file
descriptor that defines the program to be attached. The program must
be loaded with the BPF_F_MULTI_FUNC flag.

Adding new multi_btf_ids/multi_btf_ids_cnt link_create bpf_attr
fields that provide the BTF ids.

The new link gets the multi trampoline (via bpf_trampoline_multi_get)
and links the provided program with the embedded trampolines and the
'main' trampoline, using the new multi link/unlink functions:

  int bpf_trampoline_multi_link_prog(struct bpf_prog *prog,
                                     struct bpf_trampoline_multi *tr);
  int bpf_trampoline_multi_unlink_prog(struct bpf_prog *prog,
                                       struct bpf_trampoline_multi *tr);

If an embedded trampoline contains fexit programs, we need to switch
its model to the multi trampoline model (because of the final 'ret'
argument). We keep a count of attached multi func programs for each
trampoline, so we can tell when to switch the model.
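
For illustration, a minimal (hypothetical) userspace sequence creating
such a link directly through the bpf syscall, assuming prog_fd refers
to a program loaded with BPF_F_MULTI_FUNC:

  union bpf_attr attr = {};
  __u32 ids[] = { 1234 };  /* hypothetical BTF ids of target functions */
  int link_fd;

  attr.link_create.prog_fd = prog_fd;
  attr.link_create.attach_type = BPF_TRACE_FENTRY;
  attr.link_create.multi_btf_ids = (__u64)(unsigned long)ids;
  attr.link_create.multi_btf_ids_cnt = sizeof(ids) / sizeof(ids[0]);

  link_fd = syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));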

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/bpf.h            |   5 ++
 include/uapi/linux/bpf.h       |   5 ++
 kernel/bpf/core.c              |   1 +
 kernel/bpf/syscall.c           | 120 +++++++++++++++++++++++++++++++++
 kernel/bpf/trampoline.c        |  87 ++++++++++++++++++++++--
 tools/include/uapi/linux/bpf.h |   5 ++
 6 files changed, 219 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 678b9cd2fa21..3ce4656e2057 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -693,6 +693,7 @@ struct bpf_trampoline {
 	struct module *mod;
 	struct {
 		struct bpf_trampoline *tr;
+		int count;
 	} multi;
 };
 
@@ -747,6 +748,8 @@ void bpf_trampoline_put(struct bpf_trampoline *tr);
 struct bpf_trampoline_multi *bpf_trampoline_multi_get(struct bpf_prog *prog, u32 *ids,
 						      u32 ids_cnt);
 void bpf_trampoline_multi_put(struct bpf_trampoline_multi *multi);
+int bpf_trampoline_multi_link_prog(struct bpf_prog *prog, struct bpf_trampoline_multi *tr);
+int bpf_trampoline_multi_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline_multi *tr);
 #define BPF_DISPATCHER_INIT(_name) {				\
 	.mutex = __MUTEX_INITIALIZER(_name.mutex),		\
 	.func = &_name##_func,					\
@@ -892,6 +895,8 @@ struct bpf_prog_aux {
 	bool tail_call_reachable;
 	bool multi_func;
 	struct bpf_tramp_node tramp_node;
+	struct bpf_tramp_node *multi_node;
+	struct mutex multi_node_mutex;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 1f9d336861f0..9533200ffadf 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1008,6 +1008,7 @@ enum bpf_link_type {
 	BPF_LINK_TYPE_NETNS = 5,
 	BPF_LINK_TYPE_XDP = 6,
 	BPF_LINK_TYPE_PERF_EVENT = 7,
+	BPF_LINK_TYPE_TRACING_MULTI = 8,
 
 	MAX_BPF_LINK_TYPE,
 };
@@ -1462,6 +1463,10 @@ union bpf_attr {
 				 */
 				__u64		bpf_cookie;
 			} perf_event;
+			struct {
+				__aligned_u64	multi_btf_ids;		/* BTF ids to attach */
+				__u32		multi_btf_ids_cnt;	/* BTF ids count */
+			};
 		};
 	} link_create;
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index bad03dde97a2..6c16ac43dd91 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -109,6 +109,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
 	INIT_LIST_HEAD_RCU(&fp->aux->ksym.lnode);
 	mutex_init(&fp->aux->used_maps_mutex);
 	mutex_init(&fp->aux->dst_mutex);
+	mutex_init(&fp->aux->multi_node_mutex);
 
 	return fp;
 }
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 537687664bdf..8f1f934a8f26 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -32,6 +32,7 @@
 #include <linux/rcupdate_trace.h>
 #include <linux/memcontrol.h>
 #include <linux/btf_ids.h>
+#include <linux/ftrace.h>
 
 #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
 			  (map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \
@@ -2851,6 +2852,121 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
 	return err;
 }
 
+struct bpf_tracing_multi_link {
+	struct bpf_link link;
+	enum bpf_attach_type attach_type;
+	struct bpf_trampoline_multi *multi;
+};
+
+static void bpf_tracing_multi_link_release(struct bpf_link *link)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	bpf_trampoline_multi_unlink_prog(link->prog, tr_link->multi);
+}
+
+static void bpf_tracing_multi_link_dealloc(struct bpf_link *link)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	bpf_trampoline_multi_put(tr_link->multi);
+	kfree(tr_link);
+}
+
+static void bpf_tracing_multi_link_show_fdinfo(const struct bpf_link *link,
+					       struct seq_file *seq)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	seq_printf(seq, "attach_type:\t%d\n", tr_link->attach_type);
+}
+
+static int bpf_tracing_multi_link_fill_link_info(const struct bpf_link *link,
+						 struct bpf_link_info *info)
+{
+	struct bpf_tracing_multi_link *tr_link =
+		container_of(link, struct bpf_tracing_multi_link, link);
+
+	info->tracing.attach_type = tr_link->attach_type;
+	return 0;
+}
+
+static int check_multi_prog_type(struct bpf_prog *prog)
+{
+	if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
+	    prog->expected_attach_type != BPF_TRACE_FEXIT)
+		return -EINVAL;
+	return 0;
+}
+
+static const struct bpf_link_ops bpf_tracing_multi_link_lops = {
+	.release = bpf_tracing_multi_link_release,
+	.dealloc = bpf_tracing_multi_link_dealloc,
+	.show_fdinfo = bpf_tracing_multi_link_show_fdinfo,
+	.fill_link_info = bpf_tracing_multi_link_fill_link_info,
+};
+
+static int bpf_tracing_multi_attach(struct bpf_prog *prog,
+				    const union bpf_attr *attr)
+{
+	void __user *ubtf_ids = u64_to_user_ptr(attr->link_create.multi_btf_ids);
+	u32 size, cnt = attr->link_create.multi_btf_ids_cnt;
+	struct bpf_tracing_multi_link *link = NULL;
+	struct bpf_link_primer link_primer;
+	struct bpf_trampoline_multi *multi = NULL;
+	int err = -EINVAL;
+	u32 *btf_ids;
+
+	if (check_multi_prog_type(prog))
+		return -EINVAL;
+	if (!cnt)
+		return -EINVAL;
+
+	size = cnt * sizeof(*btf_ids);
+	btf_ids = kmalloc(size, GFP_USER | __GFP_NOWARN);
+	if (!btf_ids)
+		return -ENOMEM;
+
+	err = -EFAULT;
+	if (!ubtf_ids || copy_from_user(btf_ids, ubtf_ids, size))
+		goto out_free_ids;
+
+	multi = bpf_trampoline_multi_get(prog, btf_ids, cnt);
+	if (IS_ERR(multi)) {
+		err = PTR_ERR(multi);
+		goto out_free_ids;
+	}
+
+	link = kzalloc(sizeof(*link), GFP_USER);
+	if (!link) {
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
+		      &bpf_tracing_multi_link_lops, prog);
+	link->attach_type = prog->expected_attach_type;
+	link->multi = multi;
+
+	err = bpf_link_prime(&link->link, &link_primer);
+	if (err)
+		goto out_free;
+	err = bpf_trampoline_multi_link_prog(prog, multi);
+	if (err)
+		goto out_free;
+	return bpf_link_settle(&link_primer);
+
+out_free:
+	bpf_trampoline_multi_put(multi);
+	kfree(link);
+out_free_ids:
+	kfree(btf_ids);
+	return err;
+}
+
 struct bpf_raw_tp_link {
 	struct bpf_link link;
 	struct bpf_raw_event_map *btp;
@@ -3157,6 +3273,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
 	case BPF_CGROUP_SETSOCKOPT:
 		return BPF_PROG_TYPE_CGROUP_SOCKOPT;
 	case BPF_TRACE_ITER:
+	case BPF_TRACE_FENTRY:
+	case BPF_TRACE_FEXIT:
 		return BPF_PROG_TYPE_TRACING;
 	case BPF_SK_LOOKUP:
 		return BPF_PROG_TYPE_SK_LOOKUP;
@@ -4213,6 +4331,8 @@ static int tracing_bpf_link_attach(const union bpf_attr *attr, bpfptr_t uattr,
 
 	if (prog->expected_attach_type == BPF_TRACE_ITER)
 		return bpf_iter_link_attach(attr, uattr, prog);
+	else if (prog->aux->multi_func)
+		return bpf_tracing_multi_attach(prog, attr);
 	else if (prog->type == BPF_PROG_TYPE_EXT)
 		return bpf_tracing_prog_attach(prog,
 					       attr->link_create.target_fd,
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index d66b76c23d74..6ff5c2512f91 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -30,6 +30,11 @@ static LIST_HEAD(trampoline_multi);
 /* serializes access to trampoline_table */
 static DEFINE_MUTEX(trampoline_mutex);
 
+static bool is_multi_trampoline(struct bpf_trampoline *tr)
+{
+	return tr->key == 0;
+}
+
 void *bpf_jit_alloc_exec_page(void)
 {
 	void *image;
@@ -384,8 +389,21 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, u32 idx)
 	return ERR_PTR(err);
 }
 
+static bool needs_multi_model(struct bpf_trampoline *tr, struct btf_func_model *new)
+{
+	struct bpf_trampoline *multi = tr->multi.tr;
+
+	if (!tr->multi.count || !multi)
+		return false;
+	if (tr->func.model.nr_args >= multi->func.model.nr_args)
+		return false;
+	memcpy(new, &multi->func.model, sizeof(*new));
+	return true;
+}
+
 static int bpf_trampoline_update(struct bpf_trampoline *tr)
 {
+	struct btf_func_model model_multi, *model = &tr->func.model;
 	struct bpf_tramp_image *im;
 	struct bpf_tramp_progs *tprogs;
 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
@@ -411,15 +429,19 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	}
 
 	if (tprogs[BPF_TRAMP_FEXIT].nr_progs ||
-	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
+	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs) {
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
+		if (is_multi_trampoline(tr))
+			flags |= BPF_TRAMP_F_ORIG_STACK;
+		if (needs_multi_model(tr, &model_multi))
+			model = &model_multi;
+	}
 
 	if (ip_arg)
 		flags |= BPF_TRAMP_F_IP_ARG;
 
 	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
-					  &tr->func.model, flags, tprogs,
-					  tr->func.addr);
+					  model, flags, tprogs, tr->func.addr);
 	if (err < 0)
 		goto out;
 
@@ -507,7 +529,8 @@ int bpf_trampoline_link_prog(struct bpf_tramp_node *node, struct bpf_trampoline
 	if (err) {
 		hlist_del_init(&node->hlist);
 		tr->progs_cnt[kind]--;
-	}
+	} else if (prog->aux->multi_func)
+		tr->multi.count++;
 out:
 	mutex_unlock(&tr->mutex);
 	return err;
@@ -531,6 +554,8 @@ int bpf_trampoline_unlink_prog(struct bpf_tramp_node *node, struct bpf_trampolin
 	}
 	hlist_del_init(&node->hlist);
 	tr->progs_cnt[kind]--;
+	if (prog->aux->multi_func)
+		tr->multi.count--;
 	err = bpf_trampoline_update(tr);
 out:
 	mutex_unlock(&tr->mutex);
@@ -732,6 +757,60 @@ void bpf_trampoline_multi_put(struct bpf_trampoline_multi *multi)
 	mutex_unlock(&trampoline_mutex);
 }
 
+int bpf_trampoline_multi_link_prog(struct bpf_prog *prog, struct bpf_trampoline_multi *multi)
+{
+	struct bpf_tramp_node *multi_node = NULL;
+	int i, j, err = 0;
+
+	multi_node = kzalloc(sizeof(*multi_node) * multi->tr_cnt, GFP_KERNEL);
+	if (!multi_node)
+		return -ENOMEM;
+
+	mutex_lock(&prog->aux->multi_node_mutex);
+	if (prog->aux->multi_node)
+		err = -EBUSY;
+	else
+		prog->aux->multi_node = multi_node;
+	mutex_unlock(&prog->aux->multi_node_mutex);
+	if (err)
+		goto out_free;
+
+	for (i = 0; i < multi->tr_cnt; i++) {
+		multi_node[i].prog = prog;
+		err = bpf_trampoline_link_prog(&multi_node[i], multi->tr[i]);
+		if (err)
+			goto out_unlink;
+	}
+
+	err = bpf_trampoline_link_prog(&prog->aux->tramp_node, &multi->main);
+	if (!err)
+		return 0;
+
+out_unlink:
+	for (j = 0; j < i; j++)
+		WARN_ON_ONCE(bpf_trampoline_unlink_prog(&multi_node[j], multi->tr[j]));
+
+out_free:
+	kfree(multi_node);
+	return err;
+}
+
+int bpf_trampoline_multi_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline_multi *multi)
+{
+	struct bpf_tramp_node *multi_node = prog->aux->multi_node;
+	int i;
+
+	for (i = 0; i < multi->tr_cnt; i++)
+		WARN_ON_ONCE(bpf_trampoline_unlink_prog(&multi_node[i], multi->tr[i]));
+
+	mutex_lock(&prog->aux->multi_node_mutex);
+	prog->aux->multi_node = NULL;
+	mutex_unlock(&prog->aux->multi_node_mutex);
+
+	kfree(multi_node);
+	return bpf_trampoline_unlink_prog(&prog->aux->tramp_node, &multi->main);
+}
+
 #define NO_START_TIME 1
 static u64 notrace bpf_prog_start_time(void)
 {
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 1f9d336861f0..9533200ffadf 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1008,6 +1008,7 @@ enum bpf_link_type {
 	BPF_LINK_TYPE_NETNS = 5,
 	BPF_LINK_TYPE_XDP = 6,
 	BPF_LINK_TYPE_PERF_EVENT = 7,
+	BPF_LINK_TYPE_TRACING_MULTI = 8,
 
 	MAX_BPF_LINK_TYPE,
 };
@@ -1462,6 +1463,10 @@ union bpf_attr {
 				 */
 				__u64		bpf_cookie;
 			} perf_event;
+			struct {
+				__aligned_u64	multi_btf_ids;		/* BTF ids to attach */
+				__u32		multi_btf_ids_cnt;	/* BTF ids count */
+			};
 		};
 	} link_create;
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 18/27] bpf, x64: Properly store return value for trampoline with multi func programs
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (16 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-31 23:51   ` Andrii Nakryiko
  2021-08-26 19:39 ` [PATCH bpf-next v4 19/27] bpf: Attach multi trampoline with ftrace_ops Jiri Olsa
                   ` (9 subsequent siblings)
  27 siblings, 1 reply; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

When we have a multi func program attached, the trampoline
switches to the function model of the multi func program.

This breaks already attached standard programs, for example
when we attach the following program:

  SEC("fexit/bpf_fentry_test2")
  int BPF_PROG(test1, int a, __u64 b, int ret)

the trampoline pushes args 'a' and 'b' and the return value 'ret'
on the stack.

When the following multi func program is attached to bpf_fentry_test2:

  SEC("fexit.multi/bpf_fentry_test*")
  int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d,
                       __u64 e, __u64 f, int ret)

the trampoline takes this program's model and pushes all 6 args
and the return value on the stack.

But we still have the original 'test1' program attached, which
expects the 'ret' value where the 'c' argument is now:

  test1(a, b, c)

To fix that we simply overwrite the 'c' argument with the 'ret' value,
so test1 is called as expected and test2 gets called as:

  test2(a, b, ret, d, e, f, ret)

which is ok, because 'c' is not defined for bpf_fentry_test2
anyway.
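
To make the offsets concrete, a sketch of the trampoline stack for this
case (6-arg multi model over the 2-arg bpf_fentry_test2; derived from
the args_off computation below, so treat it as illustrative):

   [rbp -  8]  return value 'ret'
   [rbp - 16]  arg6 'f'
   [rbp - 24]  arg5 'e'
   [rbp - 32]  arg4 'd'
   [rbp - 40]  arg3 'c'  <- the args_off slot, overwritten with 'ret'
   [rbp - 48]  arg2 'b'
   [rbp - 56]  arg1 'a'

   args_off = (nr_args - nr_args_orig) * 8 + 8 = (6 - 2) * 8 + 8 = 40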

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 40 ++++++++++++++++++++++++++++++-------
 include/linux/bpf.h         |  1 +
 kernel/bpf/trampoline.c     |  1 +
 3 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 9f31197780ae..3f7911a92d62 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1744,7 +1744,7 @@ static void restore_regs(const struct btf_func_model *m, u8 **prog, int nr_args,
 }
 
 static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
-			   struct bpf_prog *p, int stack_size, bool mod_ret)
+			   struct bpf_prog *p, int stack_size, bool mod_ret, int args_off)
 {
 	u8 *prog = *pprog;
 	u8 *jmp_insn;
@@ -1780,9 +1780,14 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
 	/* BPF_TRAMP_MODIFY_RETURN trampolines can modify the return
 	 * of the previous call which is then passed on the stack to
 	 * the next BPF program.
+	 * Store the return value also to original args' end in case
+	 * we have multi func programs in trampoline.
 	 */
-	if (mod_ret)
+	if (mod_ret) {
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+		if (args_off)
+			emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -args_off);
+	}
 
 	/* replace 2 nops with JE insn, since jmp target is known */
 	jmp_insn[0] = X86_JE;
@@ -1834,7 +1839,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 	u8 *prog = *pprog;
 
 	for (i = 0; i < tp->nr_progs; i++) {
-		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false))
+		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, false, 0))
 			return -EINVAL;
 	}
 	*pprog = prog;
@@ -1843,7 +1848,7 @@ static int invoke_bpf(const struct btf_func_model *m, u8 **pprog,
 
 static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 			      struct bpf_tramp_progs *tp, int stack_size,
-			      u8 **branches)
+			      u8 **branches, int args_off)
 {
 	u8 *prog = *pprog;
 	int i;
@@ -1853,8 +1858,15 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 	 */
 	emit_mov_imm32(&prog, false, BPF_REG_0, 0);
 	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+
+	/* Store the return value also to original args' end in case
+	 * we have multi func programs in trampoline.
+	 */
+	if (args_off)
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -args_off);
+
 	for (i = 0; i < tp->nr_progs; i++) {
-		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, true))
+		if (invoke_bpf_prog(m, &prog, tp->progs[i], stack_size, true, args_off))
 			return -EINVAL;
 
 		/* mod_ret prog stored return value into [rbp - 8]. Emit:
@@ -1942,7 +1954,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 				struct bpf_tramp_progs *tprogs,
 				void *orig_call)
 {
-	int ret, i, nr_args = m->nr_args;
+	int ret, i, args_off = 0, nr_args = m->nr_args;
 	int stack_size = nr_args * 8;
 	struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
 	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
@@ -1958,6 +1970,13 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	    (flags & BPF_TRAMP_F_SKIP_FRAME))
 		return -EINVAL;
 
+	/* if m->nr_args_orig != 0, then we have multi prog model and
+	 * we need to also store return value at the end of standard
+	 * trampoline's arguments
+	 */
+	if (m->nr_args_orig && m->nr_args > m->nr_args_orig)
+		args_off = (m->nr_args - m->nr_args_orig) * 8 + 8;
+
 	if (flags & BPF_TRAMP_F_CALL_ORIG)
 		stack_size += 8; /* room for return value of orig_call */
 
@@ -2015,7 +2034,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 			return -ENOMEM;
 
 		if (invoke_bpf_mod_ret(m, &prog, fmod_ret, stack_size,
-				       branches)) {
+				       branches, args_off)) {
 			ret = -EINVAL;
 			goto cleanup;
 		}
@@ -2036,6 +2055,13 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		}
 		/* remember return value in a stack for bpf prog to access */
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+
+		/* store return value also to original args' end in case we have
+		 * multi func programs in trampoline
+		 */
+		if (args_off)
+			emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -args_off);
+
 		im->ip_after_call = prog;
 		memcpy(prog, x86_nops[5], X86_PATCH_SIZE);
 		prog += X86_PATCH_SIZE;
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3ce4656e2057..373f45ae7dce 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -563,6 +563,7 @@ struct btf_func_model {
 	u8 ret_size;
 	u8 nr_args;
 	u8 arg_size[MAX_BPF_FUNC_ARGS];
+	u8 nr_args_orig;
 };
 
 /* Restore arguments before returning from trampoline to let original function
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 6ff5c2512f91..308d58e698be 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -398,6 +398,7 @@ static bool needs_multi_model(struct bpf_trampoline *tr, struct btf_func_model *
 	if (tr->func.model.nr_args >= multi->func.model.nr_args)
 		return false;
 	memcpy(new, &multi->func.model, sizeof(*new));
+	new->nr_args_orig = tr->func.model.nr_args;
 	return true;
 }
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 19/27] bpf: Attach multi trampoline with ftrace_ops
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (17 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 18/27] bpf, x64: Properly store return value for trampoline with multi func programs Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
                   ` (8 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a struct ftrace_ops object to the bpf_trampoline_multi
struct and setting it up with all the requested function
addresses.

Adding is_multi_trampoline(tr) checks to the fentry install
functions, so the multi bpf trampoline is actually installed
via ftrace_ops.
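
A rough sketch of the resulting flow, using only the calls visible in
this patch (the direct multi API comes from the ftrace patches earlier
in this series):

  /* at multi trampoline creation, collect all target ips */
  ftrace_set_filter_ip(&multi->ops, tgt_addr, 0, 0);

  /* then install/update/remove the whole batch at once */
  register_ftrace_direct_multi(&multi->ops, (long)image);
  modify_ftrace_direct_multi(&multi->ops, (long)new_image);
  unregister_ftrace_direct_multi(&multi->ops);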

I had to add -DCC_USING_FENTRY in several places, because
arch/x86/include/asm/ftrace.h fails to compile when it's
not defined. Perhaps there's a better way.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/Makefile                     |  7 +++++
 arch/x86/boot/compressed/Makefile     |  4 +++
 drivers/firmware/efi/libstub/Makefile |  3 +++
 include/linux/bpf.h                   |  2 ++
 kernel/bpf/trampoline.c               | 37 +++++++++++++++++++++++++++
 5 files changed, 53 insertions(+)

diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 307fd0000a83..72986eb38d0b 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -34,6 +34,9 @@ REALMODE_CFLAGS += -fno-stack-protector
 REALMODE_CFLAGS += $(call __cc-option, $(CC), $(REALMODE_CFLAGS), -Wno-address-of-packed-member)
 REALMODE_CFLAGS += $(call __cc-option, $(CC), $(REALMODE_CFLAGS), $(cc_stack_align4))
 REALMODE_CFLAGS += $(CLANG_FLAGS)
+ifdef CONFIG_DYNAMIC_FTRACE
+     REALMODE_CFLAGS += -DCC_USING_FENTRY
+endif
 export REALMODE_CFLAGS
 
 # BITS is used as extension for files which are available in a 32 bit
@@ -54,6 +57,10 @@ KBUILD_CFLAGS += $(call cc-option,-mno-avx,)
 # Intel CET isn't enabled in the kernel
 KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
 
+ifdef CONFIG_DYNAMIC_FTRACE
+     KBUILD_CFLAGS += -DCC_USING_FENTRY
+endif
+
 ifeq ($(CONFIG_X86_32),y)
         BITS := 32
         UTS_MACHINE := i386
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 431bf7f846c3..a93dcbda4de2 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -49,6 +49,10 @@ KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
 KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
 KBUILD_CFLAGS += $(CLANG_FLAGS)
 
+ifdef CONFIG_DYNAMIC_FTRACE
+     KBUILD_CFLAGS += -DCC_USING_FENTRY
+endif
+
 # sev.c indirectly inludes inat-table.h which is generated during
 # compilation and stored in $(objtree). Add the directory to the includes so
 # that the compiler finds it even with out-of-tree builds (make O=/some/path).
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index d0537573501e..2fd5d6f55e24 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -15,6 +15,9 @@ cflags-$(CONFIG_X86)		+= -m$(BITS) -D__KERNEL__ \
 				   $(call cc-disable-warning, gnu) \
 				   -fno-asynchronous-unwind-tables \
 				   $(CLANG_FLAGS)
+ifdef CONFIG_DYNAMIC_FTRACE
+     cflags-$(CONFIG_X86)	+= -DCC_USING_FENTRY
+endif
 
 # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
 # disable the stackleak plugin
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 373f45ae7dce..52cbec23665c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -23,6 +23,7 @@
 #include <linux/slab.h>
 #include <linux/percpu-refcount.h>
 #include <linux/bpfptr.h>
+#include <linux/ftrace.h>
 
 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -703,6 +704,7 @@ struct bpf_trampoline_multi {
 	struct list_head list;
 	u32 *ids;
 	u32 ids_cnt;
+	struct ftrace_ops ops;
 	int tr_cnt;
 	struct bpf_trampoline *tr[];
 };
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 308d58e698be..82e7545a7426 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -179,11 +179,38 @@ static int is_ftrace_location(void *ip)
 	return 1;
 }
 
+static int register_ftrace_multi(struct bpf_trampoline *tr, void *new_addr)
+{
+	struct bpf_trampoline_multi *multi;
+
+	multi = container_of(tr, struct bpf_trampoline_multi, main);
+	return register_ftrace_direct_multi(&multi->ops, (long)new_addr);
+}
+
+static int unregister_ftrace_multi(struct bpf_trampoline *tr, void *old_addr)
+{
+	struct bpf_trampoline_multi *multi;
+
+	multi = container_of(tr, struct bpf_trampoline_multi, main);
+	return unregister_ftrace_direct_multi(&multi->ops);
+}
+
+static int modify_ftrace_multi(struct bpf_trampoline *tr, void *new_addr)
+{
+	struct bpf_trampoline_multi *multi;
+
+	multi = container_of(tr, struct bpf_trampoline_multi, main);
+	return modify_ftrace_direct_multi(&multi->ops, (long)new_addr);
+}
+
 static int unregister_fentry(struct bpf_trampoline *tr, void *old_addr)
 {
 	void *ip = tr->func.addr;
 	int ret;
 
+	if (is_multi_trampoline(tr))
+		return unregister_ftrace_multi(tr, old_addr);
+
 	if (tr->func.ftrace_managed)
 		ret = unregister_ftrace_direct((long)ip, (long)old_addr);
 	else
@@ -199,6 +226,9 @@ static int modify_fentry(struct bpf_trampoline *tr, void *old_addr, void *new_ad
 	void *ip = tr->func.addr;
 	int ret;
 
+	if (is_multi_trampoline(tr))
+		return modify_ftrace_multi(tr, new_addr);
+
 	if (tr->func.ftrace_managed)
 		ret = modify_ftrace_direct((long)ip, (long)old_addr, (long)new_addr);
 	else
@@ -212,6 +242,9 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 	void *ip = tr->func.addr;
 	int ret;
 
+	if (is_multi_trampoline(tr))
+		return register_ftrace_multi(tr, new_addr);
+
 	ret = is_ftrace_location(ip);
 	if (ret < 0)
 		return ret;
@@ -703,6 +736,10 @@ struct bpf_trampoline_multi *bpf_trampoline_multi_get(struct bpf_prog *prog,
 		if (!is_ftrace_location((void *) tgt_info.tgt_addr))
 			goto out_free;
 
+		err = ftrace_set_filter_ip(&multi->ops, tgt_info.tgt_addr, 0, 0);
+		if (err)
+			goto out_free;
+
 		if (nr_args < tgt_info.fmodel.nr_args)
 			nr_args = tgt_info.fmodel.nr_args;
 	}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (18 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 19/27] bpf: Attach multi trampoline with ftrace_ops Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-09-01  0:10   ` Andrii Nakryiko
  2021-08-26 19:39 ` [PATCH bpf-next v4 21/27] libbpf: Add support to link multi func tracing program Jiri Olsa
                   ` (7 subsequent siblings)
  27 siblings, 1 reply; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: Andrii Nakryiko, netdev, bpf, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a btf__find_by_glob_kind function that returns an array of
BTF ids that match the given kind and allow/deny patterns.

int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
                           const char *allow_pattern,
                           const char *deny_pattern,
                           __u32 **__ids);

The __ids array is allocated by the function and needs to be freed
manually by the caller.

At the moment the supported wildcard is '*' at the beginning or
the end of the pattern (or both, for a substring match).
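
A short usage sketch (hypothetical; assumes 'btf' holds already loaded
vmlinux BTF), collecting the ids of all BTF FUNCs matching a prefix
glob:

  __u32 *ids = NULL;
  int cnt;

  cnt = btf__find_by_glob_kind(btf, BTF_KIND_FUNC, "bpf_fentry_test*",
                               NULL, &ids);
  if (cnt < 0)
          return cnt;
  /* ... use ids[0 .. cnt - 1] ... */
  free(ids);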

Kindly borrowed from retsnoop.

Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/btf.c | 80 +++++++++++++++++++++++++++++++++++++++++++++
 tools/lib/bpf/btf.h |  3 ++
 2 files changed, 83 insertions(+)

diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 77dc24d58302..5baaca6c3134 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -711,6 +711,86 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
 	return libbpf_err(-ENOENT);
 }
 
+/* 'borrowed' from retsnoop */
+static bool glob_matches(const char *glob, const char *s)
+{
+	int n = strlen(glob);
+
+	if (n == 1 && glob[0] == '*')
+		return true;
+
+	if (glob[0] == '*' && glob[n - 1] == '*') {
+		const char *subs;
+		/* substring match */
+
+		/* this is hacky, but we don't want to allocate for no good reason */
+		((char *)glob)[n - 1] = '\0';
+		subs = strstr(s, glob + 1);
+		((char *)glob)[n - 1] = '*';
+
+		return subs != NULL;
+	} else if (glob[0] == '*') {
+		size_t nn = strlen(s);
+		/* suffix match */
+
+		/* too short for a given suffix */
+		if (nn < n - 1)
+			return false;
+
+		return strcmp(s + nn - (n - 1), glob + 1) == 0;
+	} else if (glob[n - 1] == '*') {
+		/* prefix match */
+		return strncmp(s, glob, n - 1) == 0;
+	} else {
+		/* exact match */
+		return strcmp(glob, s) == 0;
+	}
+}
+
+int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
+			   const char *allow_pattern, const char *deny_pattern,
+			   __u32 **__ids)
+{
+	__u32 i, nr_types = btf__get_nr_types(btf);
+	int cnt = 0, alloc = 0;
+	__u32 *ids = NULL;
+
+	for (i = 1; i <= nr_types; i++) {
+		const struct btf_type *t = btf__type_by_id(btf, i);
+		bool match = false;
+		const char *name;
+		__u32 *p;
+
+		if (btf_kind(t) != kind)
+			continue;
+		name = btf__name_by_offset(btf, t->name_off);
+		if (!name)
+			continue;
+
+		if (allow_pattern && glob_matches(allow_pattern, name))
+			match = true;
+		if (deny_pattern && !glob_matches(deny_pattern, name))
+			match = true;
+		if (!match)
+			continue;
+
+		if (cnt == alloc) {
+			alloc = max(100, alloc * 3 / 2);
+			p = realloc(ids, alloc * sizeof(__u32));
+			if (!p) {
+				free(ids);
+				return -ENOMEM;
+			}
+			ids = p;
+		}
+		ids[cnt] = i;
+		cnt++;
+	}
+
+	*__ids = ids;
+	return cnt ?: -ENOENT;
+}
+
 static bool btf_is_modifiable(const struct btf *btf)
 {
 	return (void *)btf->hdr != btf->raw_data;
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index 4a711f990904..b288211770c3 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -396,6 +396,9 @@ btf_var_secinfos(const struct btf_type *t)
 	return (struct btf_var_secinfo *)(t + 1);
 }
 
+int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
+			   const char *allow_pattern, const char *deny_pattern,
+			   __u32 **__ids);
 #ifdef __cplusplus
 } /* extern "C" */
 #endif
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 21/27] libbpf: Add support to link multi func tracing program
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (19 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 22/27] selftests/bpf: Add fentry multi func test Jiri Olsa
                   ` (6 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding support to link a multi func tracing program
through the link_create interface.

Adding special section types for multi func programs:

  fentry.multi
  fexit.multi

so you can define multi func programs like:

  SEC("fentry.multi/bpf_fentry_test*")
  int BPF_PROG(test1, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)

that defines test1 to be attached to the bpf_fentry_test* functions.
The test1 program is loaded with the BPF_F_MULTI_FUNC flag.

If the functions are not specified, the program needs to be attached
manually (see the sketch below).

Adding new btf_ids/btf_ids_cnt fields to bpf_link_create_opts
that define the functions to attach the program to.
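
A minimal (hypothetical) manual attach sketch using the new opts,
assuming 'ids'/'cnt' come from btf__find_by_glob_kind:

  DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts,
          .multi.btf_ids = ids,
          .multi.btf_ids_cnt = cnt,
  );
  int link_fd;

  link_fd = bpf_link_create(bpf_program__fd(prog), 0,
                            BPF_TRACE_FENTRY, &opts);
  if (link_fd < 0)
          return link_fd; /* negative errno on failure */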

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/bpf.c    |  8 +++++
 tools/lib/bpf/bpf.h    |  6 +++-
 tools/lib/bpf/libbpf.c | 72 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 2401fad090c5..20422d58b945 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -713,12 +713,20 @@ int bpf_link_create(int prog_fd, int target_fd,
 		if (!OPTS_ZEROED(opts, perf_event))
 			return libbpf_err(-EINVAL);
 		break;
+	case BPF_TRACE_FENTRY:
+	case BPF_TRACE_FEXIT:
+		attr.link_create.multi_btf_ids = (__u64) OPTS_GET(opts, multi.btf_ids, 0);
+		attr.link_create.multi_btf_ids_cnt = OPTS_GET(opts, multi.btf_ids_cnt, 0);
+		if (!OPTS_ZEROED(opts, multi))
+			return libbpf_err(-EINVAL);
+		break;
 	default:
 		if (!OPTS_ZEROED(opts, flags))
 			return libbpf_err(-EINVAL);
 		break;
 	}
 proceed:
+
 	fd = sys_bpf(BPF_LINK_CREATE, &attr, sizeof(attr));
 	return libbpf_err_errno(fd);
 }
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 6fffb3cdf39b..2eff99be7069 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -181,10 +181,14 @@ struct bpf_link_create_opts {
 		struct {
 			__u64 bpf_cookie;
 		} perf_event;
+		struct {
+			__u32 *btf_ids;
+			__u32  btf_ids_cnt;
+		} multi;
 	};
 	size_t :0;
 };
-#define bpf_link_create_opts__last_field perf_event
+#define bpf_link_create_opts__last_field multi
 
 LIBBPF_API int bpf_link_create(int prog_fd, int target_fd,
 			       enum bpf_attach_type attach_type,
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 88d8825fc6f6..7f717b7755be 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -230,6 +230,7 @@ struct bpf_sec_def {
 	bool is_attachable;
 	bool is_attach_btf;
 	bool is_sleepable;
+	bool is_multi_func;
 	attach_fn_t attach_fn;
 };
 
@@ -6446,6 +6447,8 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 
 		if (prog->sec_def->is_sleepable)
 			prog->prog_flags |= BPF_F_SLEEPABLE;
+		if (prog->sec_def->is_multi_func)
+			prog->prog_flags |= BPF_F_MULTI_FUNC;
 		bpf_program__set_type(prog, prog->sec_def->prog_type);
 		bpf_program__set_expected_attach_type(prog,
 				prog->sec_def->expected_attach_type);
@@ -7915,6 +7918,8 @@ static struct bpf_link *attach_raw_tp(const struct bpf_sec_def *sec,
 				      struct bpf_program *prog);
 static struct bpf_link *attach_trace(const struct bpf_sec_def *sec,
 				     struct bpf_program *prog);
+static struct bpf_link *attach_trace_multi(const struct bpf_sec_def *sec,
+					   struct bpf_program *prog);
 static struct bpf_link *attach_lsm(const struct bpf_sec_def *sec,
 				   struct bpf_program *prog);
 static struct bpf_link *attach_iter(const struct bpf_sec_def *sec,
@@ -7991,6 +7996,14 @@ static const struct bpf_sec_def section_defs[] = {
 		.attach_fn = attach_iter),
 	SEC_DEF("syscall", SYSCALL,
 		.is_sleepable = true),
+	SEC_DEF("fentry.multi/", TRACING,
+		.expected_attach_type = BPF_TRACE_FENTRY,
+		.is_multi_func = true,
+		.attach_fn = attach_trace_multi),
+	SEC_DEF("fexit.multi/", TRACING,
+		.expected_attach_type = BPF_TRACE_FEXIT,
+		.is_multi_func = true,
+		.attach_fn = attach_trace_multi),
 	BPF_EAPROG_SEC("xdp_devmap/",		BPF_PROG_TYPE_XDP,
 						BPF_XDP_DEVMAP),
 	BPF_EAPROG_SEC("xdp_cpumap/",		BPF_PROG_TYPE_XDP,
@@ -8435,6 +8448,9 @@ static int libbpf_find_attach_btf_id(struct bpf_program *prog, int *btf_obj_fd,
 	if (!name)
 		return -EINVAL;
 
+	if (prog->prog_flags & BPF_F_MULTI_FUNC)
+		return 0;
+
 	for (i = 0; i < ARRAY_SIZE(section_defs); i++) {
 		if (!section_defs[i].is_attach_btf)
 			continue;
@@ -9523,6 +9539,62 @@ static struct bpf_link *bpf_program__attach_btf_id(struct bpf_program *prog)
 	return (struct bpf_link *)link;
 }
 
+static struct bpf_link *bpf_program__attach_multi(struct bpf_program *prog)
+{
+	char *pattern = prog->sec_name + prog->sec_def->len;
+	DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts);
+	enum bpf_attach_type attach_type;
+	int prog_fd, link_fd, cnt, err;
+	struct bpf_link *link = NULL;
+	__u32 *ids = NULL;
+
+	prog_fd = bpf_program__fd(prog);
+	if (prog_fd < 0) {
+		pr_warn("prog '%s': can't attach before loaded\n", prog->name);
+		return ERR_PTR(-EINVAL);
+	}
+
+	err = bpf_object__load_vmlinux_btf(prog->obj, true);
+	if (err)
+		return ERR_PTR(err);
+
+	cnt = btf__find_by_glob_kind(prog->obj->btf_vmlinux, BTF_KIND_FUNC,
+				     pattern, NULL, &ids);
+	if (cnt <= 0)
+		return ERR_PTR(-EINVAL);
+
+	link = calloc(1, sizeof(*link));
+	if (!link) {
+		err = -ENOMEM;
+		goto out_err;
+	}
+	link->detach = &bpf_link__detach_fd;
+
+	opts.multi.btf_ids = ids;
+	opts.multi.btf_ids_cnt = cnt;
+
+	attach_type = bpf_program__get_expected_attach_type(prog);
+	link_fd = bpf_link_create(prog_fd, 0, attach_type, &opts);
+	if (link_fd < 0) {
+		err = -errno;
+		goto out_err;
+	}
+	link->fd = link_fd;
+	free(ids);
+	return link;
+
+out_err:
+	free(link);
+	free(ids);
+	return ERR_PTR(err);
+}
+
+static struct bpf_link *attach_trace_multi(const struct bpf_sec_def *sec,
+					   struct bpf_program *prog)
+{
+	return bpf_program__attach_multi(prog);
+}
+
 struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
 {
 	return bpf_program__attach_btf_id(prog);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 22/27] selftests/bpf: Add fentry multi func test
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (20 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 21/27] libbpf: Add support to link multi func tracing program Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 23/27] selftests/bpf: Add fexit " Jiri Olsa
                   ` (5 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a fentry multi func selftest that attaches to the
bpf_fentry_test* functions and checks argument values based
on which function is being processed.

We need to cast to the real argument types in multi_arg_check,
because the checked value can be shorter than u64.
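
For example, bpf_fentry_test1 takes a single int, so the first ctx
value is narrowed before the comparison (taken from the
multi_arg_check code added below):

  if (ip == &bpf_fentry_test1) {
          int a = (int) ctx[0];

          *test_result += a == 1;
  }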

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/testing/selftests/bpf/Makefile          |  4 +-
 .../bpf/prog_tests/multi_fentry_test.c        | 30 +++++++++
 .../testing/selftests/bpf/progs/multi_check.c | 63 +++++++++++++++++++
 .../selftests/bpf/progs/multi_fentry.c        | 17 +++++
 4 files changed, 113 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_fentry_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_fentry.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 866531c08e4f..013d41c8edae 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -312,7 +312,8 @@ endef
 SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
-		linked_vars.skel.h linked_maps.skel.h
+		linked_vars.skel.h linked_maps.skel.h			\
+		multi_fentry_test.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
@@ -322,6 +323,7 @@ test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
 linked_funcs.skel.h-deps := linked_funcs1.o linked_funcs2.o
 linked_vars.skel.h-deps := linked_vars1.o linked_vars2.o
 linked_maps.skel.h-deps := linked_maps1.o linked_maps2.o
+multi_fentry_test.skel.h-deps := multi_fentry.o multi_check.o
 
 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))
 
diff --git a/tools/testing/selftests/bpf/prog_tests/multi_fentry_test.c b/tools/testing/selftests/bpf/prog_tests/multi_fentry_test.c
new file mode 100644
index 000000000000..8dc08c3e715f
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/multi_fentry_test.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "multi_fentry_test.skel.h"
+#include "trace_helpers.h"
+
+void test_multi_fentry_test(void)
+{
+	struct multi_fentry_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd;
+
+	skel = multi_fentry_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	err = multi_fentry_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	ASSERT_EQ(skel->bss->test_result, 8, "test_result");
+
+cleanup:
+	multi_fentry_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_check.c b/tools/testing/selftests/bpf/progs/multi_check.c
new file mode 100644
index 000000000000..415c96684a30
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_check.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_fentry_test5 __ksym;
+extern const void bpf_fentry_test6 __ksym;
+extern const void bpf_fentry_test7 __ksym;
+extern const void bpf_fentry_test8 __ksym;
+
+void multi_arg_check(__u64 *ctx, __u64 *test_result)
+{
+	void *ip = (void *) bpf_get_func_ip(ctx);
+
+	if (ip == &bpf_fentry_test1) {
+		int a = (int) ctx[0];
+
+		*test_result += a == 1;
+	} else if (ip == &bpf_fentry_test2) {
+		int a = (int) ctx[0];
+		__u64 b = ctx[1];
+
+		*test_result += a == 2 && b == 3;
+	} else if (ip == &bpf_fentry_test3) {
+		int a = (int) ctx[0];
+		int b = (int) ctx[1];
+		__u64 c = ctx[2];
+
+		*test_result += a == 4 && b == 5 && c == 6;
+	} else if (ip == &bpf_fentry_test4) {
+		void *a = (void *) ctx[0];
+		int b = (int) ctx[1];
+		int c = (int) ctx[2];
+		__u64 d = ctx[3];
+
+		*test_result += a == (void *) 7 && b == 8 && c == 9 && d == 10;
+	} else if (ip == &bpf_fentry_test5) {
+		__u64 a = ctx[0];
+		void *b = (void *) ctx[1];
+		short c = (short) ctx[2];
+		int d = (int) ctx[3];
+		__u64 e = ctx[4];
+
+		*test_result += a == 11 && b == (void *) 12 && c == 13 && d == 14 && e == 15;
+	} else if (ip == &bpf_fentry_test6) {
+		__u64 a = ctx[0];
+		void *b = (void *) ctx[1];
+		short c = (short) ctx[2];
+		int d = (int) ctx[3];
+		void *e = (void *) ctx[4];
+		__u64 f = ctx[5];
+
+		*test_result += a == 16 && b == (void *) 17 && c == 18 && d == 19 && e == (void *) 20 && f == 21;
+	} else if (ip == &bpf_fentry_test7) {
+		*test_result += 1;
+	} else if (ip == &bpf_fentry_test8) {
+		*test_result += 1;
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_fentry.c b/tools/testing/selftests/bpf/progs/multi_fentry.c
new file mode 100644
index 000000000000..b78d36772aa6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_fentry.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test_result = 0;
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test_result);
+	return 0;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 23/27] selftests/bpf: Add fexit multi func test
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (21 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 22/27] selftests/bpf: Add fentry multi func test Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 24/27] selftests/bpf: Add fentry/fexit " Jiri Olsa
                   ` (4 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a fexit multi func selftest that attaches to the
bpf_fentry_test* functions and checks argument and return
values based on which function is being processed.
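
The fexit.multi program gets the function return value as the last
context argument, so the test program below has the following shape:

  SEC("fexit.multi/bpf_fentry_test*")
  int BPF_PROG(test, __u64 a, __u64 b, __u64 c, __u64 d,
               __u64 e, __u64 f, int ret)
  {
          multi_arg_check(ctx, &test_arg_result);
          multi_ret_check(ctx, ret, &test_ret_result);
          return 0;
  }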

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/testing/selftests/bpf/Makefile          |  3 +-
 .../bpf/prog_tests/multi_fexit_test.c         | 31 +++++++++++++++++++
 .../testing/selftests/bpf/progs/multi_check.c | 22 +++++++++++++
 .../testing/selftests/bpf/progs/multi_fexit.c | 20 ++++++++++++
 4 files changed, 75 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_fexit_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_fexit.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 013d41c8edae..8da3be0972de 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -313,7 +313,7 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		linked_vars.skel.h linked_maps.skel.h			\
-		multi_fentry_test.skel.h
+		multi_fentry_test.skel.h multi_fexit_test.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
@@ -324,6 +324,7 @@ linked_funcs.skel.h-deps := linked_funcs1.o linked_funcs2.o
 linked_vars.skel.h-deps := linked_vars1.o linked_vars2.o
 linked_maps.skel.h-deps := linked_maps1.o linked_maps2.o
 multi_fentry_test.skel.h-deps := multi_fentry.o multi_check.o
+multi_fexit_test.skel.h-deps := multi_fexit.o multi_check.o
 
 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))
 
diff --git a/tools/testing/selftests/bpf/prog_tests/multi_fexit_test.c b/tools/testing/selftests/bpf/prog_tests/multi_fexit_test.c
new file mode 100644
index 000000000000..d9b0eedd9f45
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/multi_fexit_test.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "multi_fexit_test.skel.h"
+#include "trace_helpers.h"
+
+void test_multi_fexit_test(void)
+{
+	struct multi_fexit_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd;
+
+	skel = multi_fexit_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fexit_multi_skel_load"))
+		goto cleanup;
+
+	err = multi_fexit_test__attach(skel);
+	if (!ASSERT_OK(err, "fexit_attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	ASSERT_EQ(skel->bss->test_arg_result, 8, "fexit_multi_arg_result");
+	ASSERT_EQ(skel->bss->test_ret_result, 8, "fexit_multi_ret_result");
+
+cleanup:
+	multi_fexit_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_check.c b/tools/testing/selftests/bpf/progs/multi_check.c
index 415c96684a30..80ebbefaa8cd 100644
--- a/tools/testing/selftests/bpf/progs/multi_check.c
+++ b/tools/testing/selftests/bpf/progs/multi_check.c
@@ -61,3 +61,25 @@ void multi_arg_check(__u64 *ctx, __u64 *test_result)
 		*test_result += 1;
 	}
 }
+
+void multi_ret_check(void *ctx, int ret, __u64 *test_result)
+{
+	void *ip = (void *) bpf_get_func_ip(ctx);
+
+	if (ip == &bpf_fentry_test1)
+		*test_result += ret == 2;
+	else if (ip == &bpf_fentry_test2)
+		*test_result += ret == 5;
+	else if (ip == &bpf_fentry_test3)
+		*test_result += ret == 15;
+	else if (ip == &bpf_fentry_test4)
+		*test_result += ret == 34;
+	else if (ip == &bpf_fentry_test5)
+		*test_result += ret == 65;
+	else if (ip == &bpf_fentry_test6)
+		*test_result += ret == 111;
+	else if (ip == &bpf_fentry_test7)
+		*test_result += ret == 0;
+	else if (ip == &bpf_fentry_test8)
+		*test_result += ret == 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_fexit.c b/tools/testing/selftests/bpf/progs/multi_fexit.c
new file mode 100644
index 000000000000..29c49aa3bfc2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_fexit.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+__hidden extern void multi_ret_check(void *ctx, int ret, __u64 *test_result);
+
+__u64 test_arg_result = 0;
+__u64 test_ret_result = 0;
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	multi_arg_check(ctx, &test_arg_result);
+	multi_ret_check(ctx, ret, &test_ret_result);
+	return 0;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 24/27] selftests/bpf: Add fentry/fexit multi func test
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (22 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 23/27] selftests/bpf: Add fexit " Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 25/27] selftests/bpf: Add mixed " Jiri Olsa
                   ` (3 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a selftest with both fentry and fexit multi func programs
that attach to the bpf_fentry_test* functions and check argument
and return values based on which function is being processed.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/testing/selftests/bpf/Makefile          |  4 ++-
 .../bpf/prog_tests/multi_fentry_fexit_test.c  | 32 +++++++++++++++++++
 .../selftests/bpf/progs/multi_fentry_fexit.c  | 28 ++++++++++++++++
 3 files changed, 63 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_fentry_fexit_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_fentry_fexit.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 8da3be0972de..6272d9c166f9 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -313,7 +313,8 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		linked_vars.skel.h linked_maps.skel.h			\
-		multi_fentry_test.skel.h multi_fexit_test.skel.h
+		multi_fentry_test.skel.h multi_fexit_test.skel.h	\
+		multi_fentry_fexit_test.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
@@ -325,6 +326,7 @@ linked_vars.skel.h-deps := linked_vars1.o linked_vars2.o
 linked_maps.skel.h-deps := linked_maps1.o linked_maps2.o
 multi_fentry_test.skel.h-deps := multi_fentry.o multi_check.o
 multi_fexit_test.skel.h-deps := multi_fexit.o multi_check.o
+multi_fentry_fexit_test.skel.h-deps := multi_fentry_fexit.o multi_check.o
 
 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))
 
diff --git a/tools/testing/selftests/bpf/prog_tests/multi_fentry_fexit_test.c b/tools/testing/selftests/bpf/prog_tests/multi_fentry_fexit_test.c
new file mode 100644
index 000000000000..d54abf36ab2f
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/multi_fentry_fexit_test.c
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "multi_fentry_fexit_test.skel.h"
+
+void test_multi_fentry_fexit_test(void)
+{
+	DECLARE_LIBBPF_OPTS(bpf_link_update_opts, link_upd_opts);
+	struct multi_fentry_fexit_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd;
+
+	skel = multi_fentry_fexit_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	err = multi_fentry_fexit_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test2);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	ASSERT_EQ(skel->bss->test1_arg_result, 8, "test1_arg_result");
+	ASSERT_EQ(skel->bss->test2_arg_result, 8, "test2_arg_result");
+	ASSERT_EQ(skel->bss->test2_ret_result, 8, "test2_ret_result");
+
+cleanup:
+	multi_fentry_fexit_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_fentry_fexit.c b/tools/testing/selftests/bpf/progs/multi_fentry_fexit.c
new file mode 100644
index 000000000000..aac7df8c9211
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_fentry_fexit.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_arg_result = 0;
+__u64 test2_arg_result = 0;
+__u64 test2_ret_result = 0;
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+__hidden extern void multi_ret_check(void *ctx, int ret, __u64 *test_result);
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test1, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test1_arg_result);
+	return 0;
+}
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	multi_arg_check(ctx, &test2_arg_result);
+	multi_ret_check(ctx, ret, &test2_ret_result);
+	return 0;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 25/27] selftests/bpf: Add mixed multi func test
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (23 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 24/27] selftests/bpf: Add fentry/fexit " Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 26/27] selftests/bpf: Add attach " Jiri Olsa
                   ` (2 subsequent siblings)
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a selftest with fentry/fexit multi func programs that attach
to the bpf_fentry_test* functions, where some of those functions
already have a standard trampoline attached.
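
The mix exercised by the test (taken from the test program below) is
standard trampoline programs on specific functions plus multi func
programs over the whole set:

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(test1, int a)

  SEC("fexit/bpf_fentry_test2")
  int BPF_PROG(test2, int a, __u64 b, int ret)

  SEC("fentry.multi/bpf_fentry_test*")
  int BPF_PROG(test3, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)

  SEC("fexit.multi/bpf_fentry_test*")
  int BPF_PROG(test4, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)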

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/testing/selftests/bpf/Makefile          |  3 +-
 .../bpf/prog_tests/multi_mixed_test.c         | 34 +++++++++++++++
 .../testing/selftests/bpf/progs/multi_mixed.c | 43 +++++++++++++++++++
 3 files changed, 79 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_mixed_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_mixed.c

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 6272d9c166f9..1c80d76ebc70 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -314,7 +314,7 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c
 LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		linked_vars.skel.h linked_maps.skel.h			\
 		multi_fentry_test.skel.h multi_fexit_test.skel.h	\
-		multi_fentry_fexit_test.skel.h
+		multi_fentry_fexit_test.skel.h multi_mixed_test.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
@@ -327,6 +327,7 @@ linked_maps.skel.h-deps := linked_maps1.o linked_maps2.o
 multi_fentry_test.skel.h-deps := multi_fentry.o multi_check.o
 multi_fexit_test.skel.h-deps := multi_fexit.o multi_check.o
 multi_fentry_fexit_test.skel.h-deps := multi_fentry_fexit.o multi_check.o
+multi_mixed_test.skel.h-deps := multi_mixed.o multi_check.o
 
 LINKED_BPF_SRCS := $(patsubst %.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps)))
 
diff --git a/tools/testing/selftests/bpf/prog_tests/multi_mixed_test.c b/tools/testing/selftests/bpf/prog_tests/multi_mixed_test.c
new file mode 100644
index 000000000000..c9395b4eb5ac
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/multi_mixed_test.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "multi_mixed_test.skel.h"
+
+void test_multi_mixed_test(void)
+{
+	DECLARE_LIBBPF_OPTS(bpf_link_update_opts, link_upd_opts);
+	struct multi_mixed_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd;
+
+	skel = multi_mixed_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_multi_skel_load"))
+		goto cleanup;
+
+	err = multi_mixed_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+	ASSERT_EQ(skel->bss->test3_arg_result, 8, "test3_arg_result");
+	ASSERT_EQ(skel->bss->test4_arg_result, 8, "test4_arg_result");
+	ASSERT_EQ(skel->bss->test4_ret_result, 8, "test4_ret_result");
+
+cleanup:
+	multi_mixed_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_mixed.c b/tools/testing/selftests/bpf/progs/multi_mixed.c
new file mode 100644
index 000000000000..2ccd507747c7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_mixed.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__hidden extern void multi_arg_check(__u64 *ctx, __u64 *test_result);
+__hidden extern void multi_ret_check(void *ctx, int ret, __u64 *test_result);
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	test1_result += a == 1;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2, int a, __u64 b, int ret)
+{
+	test2_result += a == 2 && b == 3;
+	return 0;
+}
+
+__u64 test3_arg_result = 0;
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test3, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	multi_arg_check(ctx, &test3_arg_result);
+	return 0;
+}
+
+__u64 test4_arg_result = 0;
+__u64 test4_ret_result = 0;
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test4, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	multi_arg_check(ctx, &test4_arg_result);
+	multi_ret_check(ctx, ret, &test4_ret_result);
+	return 0;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 26/27] selftests/bpf: Add attach multi func test
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (24 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 25/27] selftests/bpf: Add mixed " Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-26 19:39 ` [PATCH bpf-next v4 27/27] selftests/bpf: Add ret_mod " Jiri Olsa
  2021-08-29 17:04 ` [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Alexei Starovoitov
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding a selftest to check the attach rules for multi func programs.

  - attach 2 programs:

    fentry/bpf_fentry_test1
    fexit/bpf_fentry_test2

  - check that we can attach a multi func program on top of them:

    fentry.multi/bpf_fentry_test*

  - check that we cannot attach another multi func program
    that does not cover the same set of BTF ids (one less):

    fentry.multi/bpf_fentry_test[1-7]
    fexit.multi/bpf_fentry_test[1-7]

  - check that we can no longer attach standard trampoline
    programs (below) on top of the attached multi func program:

    fentry/bpf_fentry_test1
    fexit/bpf_fentry_test3

Because the supported wildcards do not allow us to match
just a limited set of the bpf_fentry_test* functions, adding
extra code to look the BTF ids up directly in the kernel's BTF.
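
The lookup walks the vmlinux BTF and collects FUNC type ids whose
name starts with bpf_fentry_test; the core of the loop added below
looks like:

  if (!btf_is_func(t))
          continue;
  name = btf__name_by_offset(btf, t->name_off);
  if (!name)
          continue;
  if (strncmp(name, "bpf_fentry_test", sizeof("bpf_fentry_test") - 1))
          continue;
  btf_ids[cnt] = i;
  cnt++;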

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/multi_attach_check_test.c  | 115 ++++++++++++++++++
 .../selftests/bpf/progs/multi_attach_check.c  |  36 ++++++
 .../bpf/progs/multi_attach_check_extra1.c     |  12 ++
 .../bpf/progs/multi_attach_check_extra2.c     |  12 ++
 4 files changed, 175 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/multi_attach_check_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_attach_check.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_attach_check_extra1.c
 create mode 100644 tools/testing/selftests/bpf/progs/multi_attach_check_extra2.c

diff --git a/tools/testing/selftests/bpf/prog_tests/multi_attach_check_test.c b/tools/testing/selftests/bpf/prog_tests/multi_attach_check_test.c
new file mode 100644
index 000000000000..32b23718437d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/multi_attach_check_test.c
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <linux/btf_ids.h>
+#include "multi_attach_check.skel.h"
+#include "multi_attach_check_extra1.skel.h"
+#include "multi_attach_check_extra2.skel.h"
+#include <bpf/btf.h>
+
+static __u32 btf_ids[7];
+
+static int load_btf_ids(void)
+{
+	__u32 i, nr_types, cnt;
+	struct btf *btf;
+
+	btf = btf__load_vmlinux_btf();
+	if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+		return -1;
+
+	nr_types = btf__get_nr_types(btf);
+
+	for (i = 1, cnt = 0; i <= nr_types && cnt < 7; i++) {
+		const struct btf_type *t = btf__type_by_id(btf, i);
+		const char *name;
+
+		if (!btf_is_func(t))
+			continue;
+
+		name = btf__name_by_offset(btf, t->name_off);
+		if (!name)
+			continue;
+		if (strncmp(name, "bpf_fentry_test", sizeof("bpf_fentry_test") - 1))
+			continue;
+
+		btf_ids[cnt] = i;
+		cnt++;
+	}
+
+	btf__free(btf);
+	return ASSERT_EQ(cnt, 7, "bpf_fentry_test_cnt") ? 0 : -1;
+}
+
+void test_multi_attach_check_test(void)
+{
+	struct bpf_link *link1 = NULL, *link2 = NULL, *link3 = NULL;
+	DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts);
+	struct multi_attach_check_extra1 *skel_extra1 = NULL;
+	struct multi_attach_check_extra2 *skel_extra2 = NULL;
+	struct multi_attach_check *skel;
+	int link_fd, prog_fd;
+
+	/* Load/attach standard trampolines and on top of it multi
+	 * func program. It should succeed.
+	 */
+	skel = multi_attach_check__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "multi_attach_check__load"))
+		return;
+
+	link1 = bpf_program__attach(skel->progs.test1);
+	if (!ASSERT_OK_PTR(link1, "multi_attach_check__test1_attach"))
+		goto cleanup;
+
+	link2 = bpf_program__attach(skel->progs.test2);
+	if (!ASSERT_OK_PTR(link2, "multi_attach_check__test2_attach"))
+		goto cleanup;
+
+	link3 = bpf_program__attach(skel->progs.test3);
+	if (!ASSERT_OK_PTR(link3, "multi_attach_check__test3_attach"))
+		goto cleanup;
+
+	if (!ASSERT_OK(load_btf_ids(), "load_btf_ids"))
+		goto cleanup;
+
+	/* There's 8 bpf_fentry_test* functions, get BTF ids for 7 of them
+	 * and try to load/link multi func program with them. It should fail
+	 * both for fentry.multi ...
+	 */
+	opts.multi.btf_ids = btf_ids;
+	opts.multi.btf_ids_cnt = 7;
+
+	prog_fd = bpf_program__fd(skel->progs.test4);
+
+	link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_FENTRY, &opts);
+	if (!ASSERT_LT(link_fd, 0, "bpf_link_create"))
+		goto cleanup;
+
+	close(link_fd);
+
+	/* ... and fexit.multi */
+	prog_fd = bpf_program__fd(skel->progs.test5);
+
+	link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_FEXIT, &opts);
+	if (!ASSERT_LT(link_fd, 0, "bpf_link_create"))
+		goto cleanup;
+
+	close(link_fd);
+
+	/* Try to load/attach extra programs on top of multi func programs,
+	 * it should fail for both fentry ...
+	 */
+	skel_extra1 = multi_attach_check_extra1__open_and_load();
+	if (!ASSERT_ERR_PTR(skel_extra1, "multi_attach_check_extra1__load"))
+		multi_attach_check_extra1__destroy(skel_extra1);
+
+	/* ... and fexit */
+	skel_extra2 = multi_attach_check_extra2__open_and_load();
+	if (!ASSERT_ERR_PTR(skel_extra2, "multi_attach_check_extra2__load"))
+		multi_attach_check_extra2__destroy(skel_extra2);
+
+cleanup:
+	bpf_link__destroy(link1);
+	bpf_link__destroy(link2);
+	bpf_link__destroy(link3);
+	multi_attach_check__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_attach_check.c b/tools/testing/selftests/bpf/progs/multi_attach_check.c
new file mode 100644
index 000000000000..cf0f25c69556
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_attach_check.c
@@ -0,0 +1,36 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2, int a, __u64 b, int ret)
+{
+	return 0;
+}
+
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test3, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	return 0;
+}
+
+SEC("fentry.multi/bpf_fentry_test1-7")
+int BPF_PROG(test4, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	return 0;
+}
+
+SEC("fexit.multi/bpf_fentry_test1-7")
+int BPF_PROG(test5, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_attach_check_extra1.c b/tools/testing/selftests/bpf/progs/multi_attach_check_extra1.c
new file mode 100644
index 000000000000..c1d816a0206a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_attach_check_extra1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/multi_attach_check_extra2.c b/tools/testing/selftests/bpf/progs/multi_attach_check_extra2.c
new file mode 100644
index 000000000000..cd66abb4b848
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_attach_check_extra2.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("fexit/bpf_fentry_test3")
+int BPF_PROG(test3, int a, __u64 b, int ret)
+{
+	return 0;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH bpf-next v4 27/27] selftests/bpf: Add ret_mod multi func test
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (25 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 26/27] selftests/bpf: Add attach " Jiri Olsa
@ 2021-08-26 19:39 ` Jiri Olsa
  2021-08-29 17:04 ` [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Alexei Starovoitov
  27 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-26 19:39 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware)
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

Adding an extra pass to the existing modify_return test that
runs it with multi func programs attached on top of the
modify_return program.

Because the supported wildcards do not allow us to match
both bpf_fentry_test* and bpf_modify_return_test, adding
extra code to look the BTF ids up in the kernel's BTF.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../selftests/bpf/prog_tests/modify_return.c  | 114 +++++++++++++++++-
 .../selftests/bpf/progs/multi_modify_return.c |  17 +++
 2 files changed, 128 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/multi_modify_return.c

diff --git a/tools/testing/selftests/bpf/prog_tests/modify_return.c b/tools/testing/selftests/bpf/prog_tests/modify_return.c
index 97fec70c600b..9876104ad5b2 100644
--- a/tools/testing/selftests/bpf/prog_tests/modify_return.c
+++ b/tools/testing/selftests/bpf/prog_tests/modify_return.c
@@ -5,13 +5,100 @@
  */
 
 #include <test_progs.h>
+#include <bpf/btf.h>
 #include "modify_return.skel.h"
+#include "multi_modify_return.skel.h"
 
 #define LOWER(x) ((x) & 0xffff)
 #define UPPER(x) ((x) >> 16)
 
 
-static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret)
+struct multi_data {
+	struct multi_modify_return *skel;
+	int link_fentry;
+	int link_fexit;
+	__u32 btf_ids[9];
+};
+
+static int multi_btf_ids(struct multi_data *md)
+{
+	__u32 i, nr_types, ids_cnt = 0;
+	struct btf *btf;
+
+	btf = btf__load_vmlinux_btf();
+	if (!ASSERT_OK_PTR(btf, "btf__load_vmlinux_btf"))
+		return -1;
+
+	nr_types = btf__get_nr_types(btf);
+
+	for (i = 1; i <= nr_types; i++) {
+		const struct btf_type *t = btf__type_by_id(btf, i);
+		const char *name;
+		bool match;
+
+		if (!btf_is_func(t))
+			continue;
+
+		name = btf__name_by_offset(btf, t->name_off);
+		if (!name)
+			continue;
+		match = strncmp(name, "bpf_modify_return_test",
+				sizeof("bpf_modify_return_test") - 1) == 0;
+		match |= strncmp(name, "bpf_fentry_test",
+				 sizeof("bpf_fentry_test") - 1) == 0;
+		if (!match)
+			continue;
+
+		md->btf_ids[ids_cnt] = i;
+		ids_cnt++;
+	}
+
+	btf__free(btf);
+	return ASSERT_EQ(ids_cnt, 9, "multi_btf_ids") ? 0 : -1;
+}
+
+static int multi_attach(struct multi_data *md)
+{
+	DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts);
+	int prog_fd;
+
+	md->skel = multi_modify_return__open_and_load();
+	if (!ASSERT_OK_PTR(md->skel, "multi_attach_check__load"))
+		return -1;
+
+	opts.multi.btf_ids = md->btf_ids;
+	opts.multi.btf_ids_cnt = 9;
+
+	prog_fd = bpf_program__fd(md->skel->progs.test1);
+
+	md->link_fentry = bpf_link_create(prog_fd, 0, BPF_TRACE_FENTRY, &opts);
+	if (!ASSERT_GE(md->link_fentry, 0, "bpf_link_create"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(md->skel->progs.test2);
+
+	md->link_fexit = bpf_link_create(prog_fd, 0, BPF_TRACE_FEXIT, &opts);
+	if (!ASSERT_GE(md->link_fexit, 0, "bpf_link_create"))
+		goto cleanup_close;
+
+	return 0;
+
+cleanup_close:
+	close(md->link_fentry);
+cleanup:
+	multi_modify_return__destroy(md->skel);
+	return -1;
+}
+
+static void multi_detach(struct multi_data *md)
+{
+	close(md->link_fentry);
+	close(md->link_fexit);
+	multi_modify_return__destroy(md->skel);
+}
+
+static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret,
+		     struct multi_data *md)
 {
 	struct modify_return *skel = NULL;
 	int err, prog_fd;
@@ -27,6 +114,9 @@ static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret)
 	if (CHECK(err, "modify_return", "attach failed: %d\n", err))
 		goto cleanup;
 
+	if (md && !ASSERT_OK(multi_attach(md), "multi_attach"))
+		goto cleanup;
+
 	skel->bss->input_retval = input_retval;
 	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
 	err = bpf_prog_test_run(prog_fd, 1, NULL, 0, NULL, 0,
@@ -49,17 +139,35 @@ static void run_test(__u32 input_retval, __u16 want_side_effect, __s16 want_ret)
 	CHECK(skel->bss->fmod_ret_result != 1, "modify_return",
 	      "fmod_ret failed\n");
 
+	if (md)
+		multi_detach(md);
 cleanup:
 	modify_return__destroy(skel);
 }
 
 void test_modify_return(void)
 {
+	struct multi_data data = {};
+
+	run_test(0 /* input_retval */,
+		 1 /* want_side_effect */,
+		 4 /* want_ret */,
+		 NULL /* no multi func test */);
+	run_test(-EINVAL /* input_retval */,
+		 0 /* want_side_effect */,
+		 -EINVAL /* want_ret */,
+		 NULL /* no multi func test */);
+
+	if (!ASSERT_OK(multi_btf_ids(&data), "multi_attach"))
+		return;
+
 	run_test(0 /* input_retval */,
 		 1 /* want_side_effect */,
-		 4 /* want_ret */);
+		 4 /* want_ret */,
+		 &data);
 	run_test(-EINVAL /* input_retval */,
 		 0 /* want_side_effect */,
-		 -EINVAL /* want_ret */);
+		 -EINVAL /* want_ret */,
+		 &data);
 }
 
diff --git a/tools/testing/selftests/bpf/progs/multi_modify_return.c b/tools/testing/selftests/bpf/progs/multi_modify_return.c
new file mode 100644
index 000000000000..34754e438c96
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/multi_modify_return.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+SEC("fentry.multi/bpf_fentry_test*")
+int BPF_PROG(test1, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f)
+{
+	return 0;
+}
+
+SEC("fexit.multi/bpf_fentry_test*")
+int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d, __u64 e, __u64 f, int ret)
+{
+	return 0;
+}
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach
  2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
                   ` (26 preceding siblings ...)
  2021-08-26 19:39 ` [PATCH bpf-next v4 27/27] selftests/bpf: Add ret_mod " Jiri Olsa
@ 2021-08-29 17:04 ` Alexei Starovoitov
  2021-08-30  8:02   ` Jiri Olsa
  27 siblings, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2021-08-29 17:04 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Aug 26, 2021 at 09:38:55PM +0200, Jiri Olsa wrote:
> hi,
> sending new version of batch attach support, previous post
> is in here [1].
> 
> The previous post could not assign multi trampoline on top
> of regular trampolines. This patchset is trying to address
> that, plus it has other fixes from last post.
> 
> This patchset contains:
>   1) patches (1-4) that fix the ftrace graph tracing over the function
>      with direct trampolines attached
>   2) patches (5-8) that add batch interface for ftrace direct function
>      register/unregister/modify
>   3) patches (9-27) that add support to attach BPF program to multiple
>      functions

I did a quick look and it looks ok, but probably will require another respin.
In the meantime it would be great to land the first 8 patches for the
upcoming merge window.
Jiri,
can you respin them quickly addressing build bot issues and maybe
Steven can apply them into his tracing tree for the merge window?
Then during the next release cycle we will only iterate on bpf bits in the
later patches.
Thoughts?

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach
  2021-08-29 17:04 ` [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Alexei Starovoitov
@ 2021-08-30  8:02   ` Jiri Olsa
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-08-30  8:02 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Sun, Aug 29, 2021 at 10:04:25AM -0700, Alexei Starovoitov wrote:
> On Thu, Aug 26, 2021 at 09:38:55PM +0200, Jiri Olsa wrote:
> > hi,
> > sending new version of batch attach support, previous post
> > is in here [1].
> > 
> > The previous post could not assign multi trampoline on top
> > of regular trampolines. This patchset is trying to address
> > that, plus it has other fixes from last post.
> > 
> > This patchset contains:
> >   1) patches (1-4) that fix the ftrace graph tracing over the function
> >      with direct trampolines attached
> >   2) patches (5-8) that add batch interface for ftrace direct function
> >      register/unregister/modify
> >   3) patches (9-27) that add support to attach BPF program to multiple
> >      functions
> 
> I did a quick look and it looks ok, but probably will require another respin.
> In the meantime it would be great to land the first 8 patches for the
> upcoming merge window.
> Jiri,
> can you respin them quickly addressing build bot issues and maybe
> Steven can apply them into his tracing tree for the merge window?
> Then during the next release cycle we will only iterate on bpf bits in the
> later patches.
> Thoughts?

sounds good, will do

jirka


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program
  2021-08-26 19:39 ` [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program Jiri Olsa
@ 2021-08-31 23:17   ` Andrii Nakryiko
  2021-09-01 11:32     ` Jiri Olsa
  0 siblings, 1 reply; 44+ messages in thread
From: Andrii Nakryiko @ 2021-08-31 23:17 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Aug 26, 2021 at 12:40 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding support to load tracing program with new BPF_F_MULTI_FUNC flag,
> that allows the program to be loaded without specific function to be
> attached to.

Are there any benefits to using a new load flag vs having separate
expected attach types like FENTRY_MULTI/FEXIT_MULTI? I find load flags
a bigger pain to work with compared to expected attach type (and
expected attach type should be more apparent in BPF link info, bpftool
output, etc).

>
> Such program will be allowed to be attached to multiple functions
> in following patches.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/bpf.h            |  1 +
>  include/uapi/linux/bpf.h       |  7 +++++++
>  kernel/bpf/syscall.c           | 35 +++++++++++++++++++++++++++++-----
>  kernel/bpf/verifier.c          |  3 ++-
>  tools/include/uapi/linux/bpf.h |  7 +++++++
>  5 files changed, 47 insertions(+), 6 deletions(-)
>

[...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support
  2021-08-26 19:39 ` [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support Jiri Olsa
@ 2021-08-31 23:36   ` Andrii Nakryiko
  2021-09-01  0:02     ` Andrii Nakryiko
  2021-09-01 11:39     ` Jiri Olsa
  0 siblings, 2 replies; 44+ messages in thread
From: Andrii Nakryiko @ 2021-08-31 23:36 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding new multi trampoline link (BPF_LINK_TYPE_TRACING_MULTI)
> as an interface to attach program to multiple functions.
>
> The link_create bpf_attr interface already has 'bpf_prog' file
> descriptor, that defines the program to be attached. It must be
> loaded with BPF_F_MULTI_FUNC flag.
>
> Adding new multi_btf_ids/multi_btf_ids_cnt link_create bpf_attr
> fields that provides BTF ids.
>
> The new link gets multi trampoline (via bpf_trampoline_multi_get)
> and links the provided program with embedded trampolines and the
> 'main' trampoline with new multi link/unlink functions:
>
>   int bpf_trampoline_multi_link_prog(struct bpf_prog *prog,
>                                      struct bpf_trampoline_multi *tr);
>   int bpf_trampoline_multi_unlink_prog(struct bpf_prog *prog,
>                                        struct bpf_trampoline_multi *tr);
>
> If embedded trampoline contains fexit programs, we need to switch
> its model to the multi trampoline model (because of the final 'ret'
> argument). We keep the count of attached multi func programs for each
> trampoline, so we can tell when to switch the model.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/bpf.h            |   5 ++
>  include/uapi/linux/bpf.h       |   5 ++
>  kernel/bpf/core.c              |   1 +
>  kernel/bpf/syscall.c           | 120 +++++++++++++++++++++++++++++++++
>  kernel/bpf/trampoline.c        |  87 ++++++++++++++++++++++--
>  tools/include/uapi/linux/bpf.h |   5 ++
>  6 files changed, 219 insertions(+), 4 deletions(-)
>

[...]

> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 1f9d336861f0..9533200ffadf 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1008,6 +1008,7 @@ enum bpf_link_type {
>         BPF_LINK_TYPE_NETNS = 5,
>         BPF_LINK_TYPE_XDP = 6,
>         BPF_LINK_TYPE_PERF_EVENT = 7,
> +       BPF_LINK_TYPE_TRACING_MULTI = 8,
>
>         MAX_BPF_LINK_TYPE,
>  };
> @@ -1462,6 +1463,10 @@ union bpf_attr {
>                                  */
>                                 __u64           bpf_cookie;
>                         } perf_event;
> +                       struct {
> +                               __aligned_u64   multi_btf_ids;          /* addresses to attach */
> +                               __u32           multi_btf_ids_cnt;      /* addresses count */
> +                       };

Please follow the pattern of perf_event, name this struct "multi".

>                 };
>         } link_create;
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index bad03dde97a2..6c16ac43dd91 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c

[...]

> +
> +       bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
> +                     &bpf_tracing_multi_link_lops, prog);
> +       link->attach_type = prog->expected_attach_type;
> +       link->multi = multi;
> +
> +       err = bpf_link_prime(&link->link, &link_primer);
> +       if (err)
> +               goto out_free;
> +       err = bpf_trampoline_multi_link_prog(prog, multi);
> +       if (err)
> +               goto out_free;

bpf_link_cleanup(), can't free link after priming. Look at other
places using bpf_link.


> +       return bpf_link_settle(&link_primer);
> +
> +out_free:
> +       bpf_trampoline_multi_put(multi);
> +       kfree(link);
> +out_free_ids:
> +       kfree(btf_ids);
> +       return err;
> +}
> +

[...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-08-26 19:39 ` [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs Jiri Olsa
@ 2021-08-31 23:51   ` Andrii Nakryiko
  2021-09-01 15:15     ` Jiri Olsa
  0 siblings, 1 reply; 44+ messages in thread
From: Andrii Nakryiko @ 2021-08-31 23:51 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> When we have multi func program attached, the trampoline
> switched to the function model of the multi func program.
>
> This breaks already attached standard programs, for example
> when we attach following program:
>
>   SEC("fexit/bpf_fentry_test2")
>   int BPF_PROG(test1, int a, __u64 b, int ret)
>
> the trampoline pushes on stack args 'a' and 'b' and return
> value 'ret'.
>
> When following multi func program is attached to bpf_fentry_test2:
>
>   SEC("fexit.multi/bpf_fentry_test*")
>   int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d,
>                        __u64 e, __u64 f, int ret)
>
> the trampoline takes this program model and pushes all 6 args
> and return value on stack.
>
> But we still have the original 'test1' program attached, that
> expects 'ret' value where there's 'c' argument now:
>
>   test1(a, b, c)
>
> To fix that we simply overwrite 'c' argument with 'ret' value,
> so test1 is called as expected and test2 gets called as:
>
>   test2(a, b, ret, d, e, f, ret)
>
> which is ok, because 'c' is not defined for bpf_fentry_test2
> anyway.
>

What if we change the order on the stack to be the return value first,
followed by input arguments. That would get us a bit closer to
unifying multi-trampoline and the normal one, right? BPF verifier
should be able to rewrite access to the last argument (i.e., return
value) for fexit programs to actually be at offset 0, and shift all
other arguments by 8 bytes. For fentry, if that helps to keep things
more aligned, we'd just skip the first 8 bytes on the stack and store
all the input arguments in the same offsets. So BPF verifier rewriting
logic stays consistent (except offset 0 will be disallowed).

Basically, I'm thinking how we can make normal and multi trampolines
more interoperable to remove the limitation that two
multi-trampolines can't be attached to the same function, which seems
pretty annoying and will be easy to hit in
practice. Alexei previously proposed (as an optimization) to group all
to-be-attached functions into groups by number of arguments, so that
we can have up to 6 different trampolines tailored to actual functions
being attached. So that we don't save unnecessary extra input
arguments saving, which will be even more important once we allow more
than 6 arguments in the future.

With such logic, we should be able to split all the functions into
multiple underlying trampolines, so it seems like it should be
possible to also allow multiple multi-fentry programs to be attached
to the same function by having a separate bpf_trampoline just for
those functions. It will be just an extension of the above "just 6
trampolines" strategy to "as much as we need trampolines".

It's just a vague idea, sorry, I don't understand all the code yet.
But the limitation outlined in one of the previous patches seems very
limiting and unpleasant. I can totally see that some 24/7 running BPF
tracing app uses multi-fentry for tracing a small subset of kernel
functions non-stop, and then someone is trying to use bpftrace or
retsnoop to trace an overlapping set of functions. And it immediately
fails. Very frustrating.

> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  arch/x86/net/bpf_jit_comp.c | 40 ++++++++++++++++++++++++++++++-------
>  include/linux/bpf.h         |  1 +
>  kernel/bpf/trampoline.c     |  1 +
>  3 files changed, 35 insertions(+), 7 deletions(-)
>

[...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support
  2021-08-31 23:36   ` Andrii Nakryiko
@ 2021-09-01  0:02     ` Andrii Nakryiko
  2021-09-01 11:39     ` Jiri Olsa
  1 sibling, 0 replies; 44+ messages in thread
From: Andrii Nakryiko @ 2021-09-01  0:02 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Tue, Aug 31, 2021 at 4:36 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding new multi trampoline link (BPF_LINK_TYPE_TRACING_MULTI)
> > as an interface to attach program to multiple functions.
> >
> > The link_create bpf_attr interface already has 'bpf_prog' file
> > descriptor, that defines the program to be attached. It must be
> > loaded with BPF_F_MULTI_FUNC flag.
> >
> > Adding new multi_btf_ids/multi_btf_ids_cnt link_create bpf_attr
> > fields that provides BTF ids.
> >
> > The new link gets multi trampoline (via bpf_trampoline_multi_get)
> > and links the provided program with embedded trampolines and the
> > 'main' trampoline with new multi link/unlink functions:
> >
> >   int bpf_trampoline_multi_link_prog(struct bpf_prog *prog,
> >                                      struct bpf_trampoline_multi *tr);
> >   int bpf_trampoline_multi_unlink_prog(struct bpf_prog *prog,
> >                                        struct bpf_trampoline_multi *tr);
> >
> > If embedded trampoline contains fexit programs, we need to switch
> > its model to the multi trampoline model (because of the final 'ret'
> > argument). We keep the count of attached multi func programs for each
> > trampoline, so we can tell when to switch the model.

Related to my comments on the next patch: if we switch the order so
the return value comes first, and always reserve 6 slots for input
args regardless of the actual number of function input args, that
should make this upgrade logic unnecessary, right?
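
Something like this for the layout (illustrative sketch of the idea
only, not the current trampoline layout):

  /*
   * ctx[0]     return value (meaningful for fexit; fentry skips it)
   * ctx[1]     arg1
   * ...
   * ctx[6]     arg6   <- 6 slots always reserved
   *
   * the verifier rewrites the fexit prog's trailing "ret" argument
   * to offset 0 and shifts input args by 8 bytes
   */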

> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/linux/bpf.h            |   5 ++
> >  include/uapi/linux/bpf.h       |   5 ++
> >  kernel/bpf/core.c              |   1 +
> >  kernel/bpf/syscall.c           | 120 +++++++++++++++++++++++++++++++++
> >  kernel/bpf/trampoline.c        |  87 ++++++++++++++++++++++--
> >  tools/include/uapi/linux/bpf.h |   5 ++
> >  6 files changed, 219 insertions(+), 4 deletions(-)
> >
>
> [...]
>
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 1f9d336861f0..9533200ffadf 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -1008,6 +1008,7 @@ enum bpf_link_type {
> >         BPF_LINK_TYPE_NETNS = 5,
> >         BPF_LINK_TYPE_XDP = 6,
> >         BPF_LINK_TYPE_PERF_EVENT = 7,
> > +       BPF_LINK_TYPE_TRACING_MULTI = 8,
> >
> >         MAX_BPF_LINK_TYPE,
> >  };
> > @@ -1462,6 +1463,10 @@ union bpf_attr {
> >                                  */
> >                                 __u64           bpf_cookie;
> >                         } perf_event;
> > +                       struct {
> > +                               __aligned_u64   multi_btf_ids;          /* addresses to attach */
> > +                               __u32           multi_btf_ids_cnt;      /* addresses count */
> > +                       };
>
> Please follow the pattern of perf_event, name this struct "multi".
>
> >                 };
> >         } link_create;
> >
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index bad03dde97a2..6c16ac43dd91 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
>
> [...]
>
> > +
> > +       bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
> > +                     &bpf_tracing_multi_link_lops, prog);
> > +       link->attach_type = prog->expected_attach_type;
> > +       link->multi = multi;
> > +
> > +       err = bpf_link_prime(&link->link, &link_primer);
> > +       if (err)
> > +               goto out_free;
> > +       err = bpf_trampoline_multi_link_prog(prog, multi);
> > +       if (err)
> > +               goto out_free;
>
> bpf_link_cleanup(), can't free link after priming. Look at other
> places using bpf_link.
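
To spell out what I mean, roughly (a sketch of the usual pattern;
exact unwinding of 'multi' and 'btf_ids' depends on the link's
dealloc callback):

  err = bpf_link_prime(&link->link, &link_primer);
  if (err)
          goto out_free;
  err = bpf_trampoline_multi_link_prog(prog, multi);
  if (err) {
          /* drops the primed fd/id and frees the link through its
           * release/dealloc callbacks -- no kfree(link) past this point
           */
          bpf_link_cleanup(&link_primer);
          return err;
  }
  return bpf_link_settle(&link_primer);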
>
>
> > +       return bpf_link_settle(&link_primer);
> > +
> > +out_free:
> > +       bpf_trampoline_multi_put(multi);
> > +       kfree(link);
> > +out_free_ids:
> > +       kfree(btf_ids);
> > +       return err;
> > +}
> > +
>
> [...]

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function
  2021-08-26 19:39 ` [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
@ 2021-09-01  0:10   ` Andrii Nakryiko
  2021-09-01 11:33     ` Jiri Olsa
  0 siblings, 1 reply; 44+ messages in thread
From: Andrii Nakryiko @ 2021-09-01  0:10 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding btf__find_by_glob_kind function that returns array of
> BTF ids that match given kind and allow/deny patterns.
>
> int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
>                            const char *allow_pattern,
>                            const char *deny_pattern,
>                            __u32 **__ids);
>
> The __ids array is allocated and needs to be manually freed.
>
> At the moment the supported pattern is '*' at the beginning or
> the end of the pattern.
>
> Kindly borrowed from retsnoop.
>
> Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/lib/bpf/btf.c | 80 +++++++++++++++++++++++++++++++++++++++++++++
>  tools/lib/bpf/btf.h |  3 ++
>  2 files changed, 83 insertions(+)
>
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 77dc24d58302..5baaca6c3134 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -711,6 +711,86 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
>         return libbpf_err(-ENOENT);
>  }
>
> +/* 'borrowed' from retsnoop */
> +static bool glob_matches(const char *glob, const char *s)
> +{
> +       int n = strlen(glob);
> +
> +       if (n == 1 && glob[0] == '*')
> +               return true;
> +
> +       if (glob[0] == '*' && glob[n - 1] == '*') {
> +               const char *subs;
> +               /* substring match */
> +
> +               /* this is hacky, but we don't want to allocate for no good reason */
> +               ((char *)glob)[n - 1] = '\0';
> +               subs = strstr(s, glob + 1);
> +               ((char *)glob)[n - 1] = '*';
> +
> +               return subs != NULL;
> +       } else if (glob[0] == '*') {
> +               size_t nn = strlen(s);
> +               /* suffix match */
> +
> +               /* too short for a given suffix */
> +               if (nn < n - 1)
> +                       return false;
> +
> +               return strcmp(s + nn - (n - 1), glob + 1) == 0;
> +       } else if (glob[n - 1] == '*') {
> +               /* prefix match */
> +               return strncmp(s, glob, n - 1) == 0;
> +       } else {
> +               /* exact match */
> +               return strcmp(glob, s) == 0;
> +       }
> +}
> +
> +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> +                          const char *allow_pattern, const char *deny_pattern,
> +                          __u32 **__ids)
> +{
> +       __u32 i, nr_types = btf__get_nr_types(btf);
> +       int cnt = 0, alloc = 0;
> +       __u32 *ids = NULL;
> +
> +       for (i = 1; i <= nr_types; i++) {
> +               const struct btf_type *t = btf__type_by_id(btf, i);
> +               bool match = false;
> +               const char *name;
> +               __u32 *p;
> +
> +               if (btf_kind(t) != kind)
> +                       continue;
> +               name = btf__name_by_offset(btf, t->name_off);
> +               if (!name)
> +                       continue;
> +
> +               if (allow_pattern && glob_matches(allow_pattern, name))
> +                       match = true;
> +               if (deny_pattern && !glob_matches(deny_pattern, name))
> +                       match = true;

this is wrong: if a name matches both the deny and allow patterns,
you'll still pass it through. Drop the match flag; just check deny
first and `continue` if it matches.
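
I.e., something like:

  if (deny_pattern && glob_matches(deny_pattern, name))
          continue;
  if (allow_pattern && !glob_matches(allow_pattern, name))
          continue;
  /* name passed both filters, collect it */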

> +               if (!match)
> +                       continue;
> +
> +               if (cnt == alloc) {
> +                       alloc = max(100, alloc * 3 / 2);

nit: maybe start with something like 16?

> +                       p = realloc(ids, alloc * sizeof(__u32));

we have libbpf_reallocarray, please use it
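
i.e. (sketch):

  p = libbpf_reallocarray(ids, alloc, sizeof(__u32));
  if (!p) {
          free(ids);
          return -ENOMEM;
  }
  ids = p;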

> +                       if (!p) {
> +                               free(ids);
> +                               return -ENOMEM;
> +                       }
> +                       ids = p;
> +               }
> +               ids[cnt] = i;
> +               cnt++;
> +       }
> +
> +       *__ids = ids;
> +       return cnt ?: -ENOENT;

cnt == 0 means -ENOENT, basically, no? It's up to the application to
decide whether that's an error; let's just return the number of
matches, i.e., zero.

> +}
> +
>  static bool btf_is_modifiable(const struct btf *btf)
>  {
>         return (void *)btf->hdr != btf->raw_data;
> diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> index 4a711f990904..b288211770c3 100644
> --- a/tools/lib/bpf/btf.h
> +++ b/tools/lib/bpf/btf.h
> @@ -396,6 +396,9 @@ btf_var_secinfos(const struct btf_type *t)
>         return (struct btf_var_secinfo *)(t + 1);
>  }
>
> +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> +                          const char *allow_pattern, const char *deny_pattern,
> +                          __u32 **__ids);
>  #ifdef __cplusplus
>  } /* extern "C" */
>  #endif
> --
> 2.31.1
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program
  2021-08-31 23:17   ` Andrii Nakryiko
@ 2021-09-01 11:32     ` Jiri Olsa
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-09-01 11:32 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Tue, Aug 31, 2021 at 04:17:33PM -0700, Andrii Nakryiko wrote:
> On Thu, Aug 26, 2021 at 12:40 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding support to load tracing program with new BPF_F_MULTI_FUNC flag,
> > that allows the program to be loaded without specific function to be
> > attached to.
> 
> Are there any benefits to using a new load flag vs having separate
> expected attach types like FENTRY_MULTI/FEXIT_MULTI? I find load flags
> a bigger pain to work with compared to expected attach type (and
> expected attach type should be more apparent in BPF link info, bpftool
> output, etc).

it means more additional code; with the flag we just reuse the
BPF_TRACE_FENTRY/BPF_TRACE_FEXIT related code, because we use the
current trampoline paths

I recall trying that approach a while back, but it ended up with
bigger changes that seemed unnecessary; I can dig it up to get
more details

jirka

> 
> >
> > Such program will be allowed to be attached to multiple functions
> > in following patches.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/linux/bpf.h            |  1 +
> >  include/uapi/linux/bpf.h       |  7 +++++++
> >  kernel/bpf/syscall.c           | 35 +++++++++++++++++++++++++++++-----
> >  kernel/bpf/verifier.c          |  3 ++-
> >  tools/include/uapi/linux/bpf.h |  7 +++++++
> >  5 files changed, 47 insertions(+), 6 deletions(-)
> >
> 
> [...]
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function
  2021-09-01  0:10   ` Andrii Nakryiko
@ 2021-09-01 11:33     ` Jiri Olsa
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-09-01 11:33 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Tue, Aug 31, 2021 at 05:10:52PM -0700, Andrii Nakryiko wrote:
> On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding btf__find_by_glob_kind function that returns array of
> > BTF ids that match given kind and allow/deny patterns.
> >
> > int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> >                            const char *allow_pattern,
> >                            const char *deny_pattern,
> >                            __u32 **__ids);
> >
> > The __ids array is allocated and needs to be manually freed.
> >
> > At the moment the supported pattern is '*' at the beginning or
> > the end of the pattern.
> >
> > Kindly borrowed from retsnoop.
> >
> > Suggested-by: Andrii Nakryiko <andrii.nakryiko@gmail.com>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/lib/bpf/btf.c | 80 +++++++++++++++++++++++++++++++++++++++++++++
> >  tools/lib/bpf/btf.h |  3 ++
> >  2 files changed, 83 insertions(+)
> >
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index 77dc24d58302..5baaca6c3134 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -711,6 +711,86 @@ __s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
> >         return libbpf_err(-ENOENT);
> >  }
> >
> > +/* 'borrowed' from retsnoop */
> > +static bool glob_matches(const char *glob, const char *s)
> > +{
> > +       int n = strlen(glob);
> > +
> > +       if (n == 1 && glob[0] == '*')
> > +               return true;
> > +
> > +       if (glob[0] == '*' && glob[n - 1] == '*') {
> > +               const char *subs;
> > +               /* substring match */
> > +
> > +               /* this is hacky, but we don't want to allocate for no good reason */
> > +               ((char *)glob)[n - 1] = '\0';
> > +               subs = strstr(s, glob + 1);
> > +               ((char *)glob)[n - 1] = '*';
> > +
> > +               return subs != NULL;
> > +       } else if (glob[0] == '*') {
> > +               size_t nn = strlen(s);
> > +               /* suffix match */
> > +
> > +               /* too short for a given suffix */
> > +               if (nn < n - 1)
> > +                       return false;
> > +
> > +               return strcmp(s + nn - (n - 1), glob + 1) == 0;
> > +       } else if (glob[n - 1] == '*') {
> > +               /* prefix match */
> > +               return strncmp(s, glob, n - 1) == 0;
> > +       } else {
> > +               /* exact match */
> > +               return strcmp(glob, s) == 0;
> > +       }
> > +}
> > +
> > +int btf__find_by_glob_kind(const struct btf *btf, __u32 kind,
> > +                          const char *allow_pattern, const char *deny_pattern,
> > +                          __u32 **__ids)
> > +{
> > +       __u32 i, nr_types = btf__get_nr_types(btf);
> > +       int cnt = 0, alloc = 0;
> > +       __u32 *ids = NULL;
> > +
> > +       for (i = 1; i <= nr_types; i++) {
> > +               const struct btf_type *t = btf__type_by_id(btf, i);
> > +               bool match = false;
> > +               const char *name;
> > +               __u32 *p;
> > +
> > +               if (btf_kind(t) != kind)
> > +                       continue;
> > +               name = btf__name_by_offset(btf, t->name_off);
> > +               if (!name)
> > +                       continue;
> > +
> > +               if (allow_pattern && glob_matches(allow_pattern, name))
> > +                       match = true;
> > +               if (deny_pattern && !glob_matches(deny_pattern, name))
> > +                       match = true;
> 
> this is wrong, if it matches both deny and allow patterns, you'll
> still pass it through. Drop the match flag, just check deny first and
> `continue` if matches.

true, ok

> 
> > +               if (!match)
> > +                       continue;
> > +
> > +               if (cnt == alloc) {
> > +                       alloc = max(100, alloc * 3 / 2);
> 
> nit: maybe start with something like 16?

ok

> 
> > +                       p = realloc(ids, alloc * sizeof(__u32));
> 
> we have libbpf_reallocarray, please use it

ok

> 
> > +                       if (!p) {
> > +                               free(ids);
> > +                               return -ENOMEM;
> > +                       }
> > +                       ids = p;
> > +               }
> > +               ids[cnt] = i;
> > +               cnt++;
> > +       }
> > +
> > +       *__ids = ids;
> > +       return cnt ?: -ENOENT;
> 
> cnt == 0 means -ENOENT, basically, no? It's up to the application to
> decide if that's an error, let's just return the number of matches,
> i.e., zero.

ok

thanks,
jirka


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support
  2021-08-31 23:36   ` Andrii Nakryiko
  2021-09-01  0:02     ` Andrii Nakryiko
@ 2021-09-01 11:39     ` Jiri Olsa
  1 sibling, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-09-01 11:39 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Tue, Aug 31, 2021 at 04:36:22PM -0700, Andrii Nakryiko wrote:
> On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding new multi trampoline link (BPF_LINK_TYPE_TRACING_MULTI)
> > as an interface to attach program to multiple functions.
> >
> > The link_create bpf_attr interface already has 'bpf_prog' file
> > descriptor, that defines the program to be attached. It must be
> > loaded with BPF_F_MULTI_FUNC flag.
> >
> > Adding new multi_btf_ids/multi_btf_ids_cnt link_create bpf_attr
> > fields that provides BTF ids.
> >
> > The new link gets multi trampoline (via bpf_trampoline_multi_get)
> > and links the provided program with embedded trampolines and the
> > 'main' trampoline with new multi link/unlink functions:
> >
> >   int bpf_trampoline_multi_link_prog(struct bpf_prog *prog,
> >                                      struct bpf_trampoline_multi *tr);
> >   int bpf_trampoline_multi_unlink_prog(struct bpf_prog *prog,
> >                                        struct bpf_trampoline_multi *tr);
> >
> > If embedded trampoline contains fexit programs, we need to switch
> > its model to the multi trampoline model (because of the final 'ret'
> > argument). We keep the count of attached multi func programs for each
> > trampoline, so we can tell when to switch the model.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/linux/bpf.h            |   5 ++
> >  include/uapi/linux/bpf.h       |   5 ++
> >  kernel/bpf/core.c              |   1 +
> >  kernel/bpf/syscall.c           | 120 +++++++++++++++++++++++++++++++++
> >  kernel/bpf/trampoline.c        |  87 ++++++++++++++++++++++--
> >  tools/include/uapi/linux/bpf.h |   5 ++
> >  6 files changed, 219 insertions(+), 4 deletions(-)
> >
> 
> [...]
> 
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 1f9d336861f0..9533200ffadf 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -1008,6 +1008,7 @@ enum bpf_link_type {
> >         BPF_LINK_TYPE_NETNS = 5,
> >         BPF_LINK_TYPE_XDP = 6,
> >         BPF_LINK_TYPE_PERF_EVENT = 7,
> > +       BPF_LINK_TYPE_TRACING_MULTI = 8,
> >
> >         MAX_BPF_LINK_TYPE,
> >  };
> > @@ -1462,6 +1463,10 @@ union bpf_attr {
> >                                  */
> >                                 __u64           bpf_cookie;
> >                         } perf_event;
> > +                       struct {
> > +                               __aligned_u64   multi_btf_ids;          /* addresses to attach */
> > +                               __u32           multi_btf_ids_cnt;      /* addresses count */
> > +                       };
> 
> Please follow the pattern of perf_event, name this struct "multi".

I did that in struct bpf_link_create_opts and forgot this place,
will change

> 
> >                 };
> >         } link_create;
> >
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index bad03dde97a2..6c16ac43dd91 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> 
> [...]
> 
> > +
> > +       bpf_link_init(&link->link, BPF_LINK_TYPE_TRACING_MULTI,
> > +                     &bpf_tracing_multi_link_lops, prog);
> > +       link->attach_type = prog->expected_attach_type;
> > +       link->multi = multi;
> > +
> > +       err = bpf_link_prime(&link->link, &link_primer);
> > +       if (err)
> > +               goto out_free;
> > +       err = bpf_trampoline_multi_link_prog(prog, multi);
> > +       if (err)
> > +               goto out_free;
> 
> bpf_link_cleanup(), can't free link after priming. Look at other
> places using bpf_link.

will check, thanks

jirka

> 
> 
> > +       return bpf_link_settle(&link_primer);
> > +
> > +out_free:
> > +       bpf_trampoline_multi_put(multi);
> > +       kfree(link);
> > +out_free_ids:
> > +       kfree(btf_ids);
> > +       return err;
> > +}
> > +
> 
> [...]
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-08-31 23:51   ` Andrii Nakryiko
@ 2021-09-01 15:15     ` Jiri Olsa
  2021-09-02  3:56       ` Andrii Nakryiko
  0 siblings, 1 reply; 44+ messages in thread
From: Jiri Olsa @ 2021-09-01 15:15 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Tue, Aug 31, 2021 at 04:51:18PM -0700, Andrii Nakryiko wrote:
> On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > When we have multi func program attached, the trampoline
> > switched to the function model of the multi func program.
> >
> > This breaks already attached standard programs, for example
> > when we attach following program:
> >
> >   SEC("fexit/bpf_fentry_test2")
> >   int BPF_PROG(test1, int a, __u64 b, int ret)
> >
> > the trampoline pushes on stack args 'a' and 'b' and return
> > value 'ret'.
> >
> > When following multi func program is attached to bpf_fentry_test2:
> >
> >   SEC("fexit.multi/bpf_fentry_test*")
> >   int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d,
> >                        __u64 e, __u64 f, int ret)
> >
> > the trampoline takes this program model and pushes all 6 args
> > and return value on stack.
> >
> > But we still have the original 'test1' program attached, that
> > expects 'ret' value where there's 'c' argument now:
> >
> >   test1(a, b, c)
> >
> > To fix that we simply overwrite 'c' argument with 'ret' value,
> > so test1 is called as expected and test2 gets called as:
> >
> >   test2(a, b, ret, d, e, f, ret)
> >
> > which is ok, because 'c' is not defined for bpf_fentry_test2
> > anyway.
> >
> 
> What if we change the order on the stack to be the return value first,
> followed by input arguments. That would get us a bit closer to
> unifying multi-trampoline and the normal one, right? BPF verifier
> should be able to rewrite access to the last argument (i.e., return
> value) for fexit programs to actually be at offset 0, and shift all
> other arguments by 8 bytes. For fentry, if that helps to keep things
> more aligned, we'd just skip the first 8 bytes on the stack and store
> all the input arguments in the same offsets. So BPF verifier rewriting
> logic stays consistent (except offset 0 will be disallowed).

nice idea, with this in place we could cut that args re-arranging code

> 
> Basically, I'm thinking how we can make normal and multi trampolines
> more interoperable to remove those limitations that two
> multi-trampolines can't be attached to the same function, which seems
> like a pretty annoying limitation which will be easy to hit in
> practice. Alexei previously proposed (as an optimization) to group all
> to-be-attached functions into groups by number of arguments, so that
> we can have up to 6 different trampolines tailored to actual functions
> being attached. So that we don't save unnecessary extra input
> arguments saving, which will be even more important once we allow more
> than 6 arguments in the future.
> 
> With such logic, we should be able to split all the functions into
> multiple underlying trampolines, so it seems like it should be
> possible to also allow multiple multi-fentry programs to be attached
> to the same function by having a separate bpf_trampoline just for
> those functions. It will be just an extension of the above "just 6
> trampolines" strategy to "as much as we need trampolines".

I'm probably missing something here... say we have 2 functions with a
single argument:

  foo1(int a)
  foo2(int b)

then having 2 programs:

  A - attaching to foo1
  B - attaching to foo2

then you need to have 2 different trampolines instead of a single
'generic-1-argument-trampoline'

> 
> It's just a vague idea, sorry, I don't understand all the code yet.
> But the limitation outlined in one of the previous patches seems very
> limiting and unpleasant. I can totally see that some 24/7 running BPF
> tracing app uses multi-fentry for tracing a small subset of kernel
> functions non-stop, and then someone is trying to use bpftrace or
> retsnoop to trace overlapping set of functions. And it immediately
> fails. Very frustrating.

so the current approach is to some extent driven by the direct ftrace
batch API:

  you have a ftrace_ops object and set it up with the functions you
  want to change by calling:

  ftrace_set_filter_ip(ops, ip1);
  ftrace_set_filter_ip(ops, ip2);
  ...

and then register the trampoline for those functions:

  register_ftrace_direct_multi(ops, tramp_addr);

and with this call being the expensive one (it does the actual work
and sync waiting), my objective was to call it just once per update

now with 2 intersecting multi trampolines we end up with 3 function
sets:

  A - functions for the first multi trampoline
  B - functions for the second multi trampoline
  C - the intersection of the two

each set needs a different trampoline:

  tramp A - calls the program for the first multi trampoline
  tramp B - calls the program for the second multi trampoline
  tramp C - calls both programs

so we need to call register_ftrace_direct_multi 3 times

if we also allow standard trampolines to be attached, it gets even
more complicated and ultimately breaks down to 1-function/1-trampoline
pairs, ending up with the attach speed that we have now
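
so the full update for the intersecting case is roughly (sketch; one
ops per function set, using the shorthand above -- the real
ftrace_set_filter_ip() also takes remove/reset arguments):

  /* for each ip in the respective set */
  ftrace_set_filter_ip(ops_A, ip);
  ftrace_set_filter_ip(ops_B, ip);
  ftrace_set_filter_ip(ops_C, ip);

  /* three expensive calls instead of one */
  register_ftrace_direct_multi(ops_A, tramp_A_addr);
  register_ftrace_direct_multi(ops_B, tramp_B_addr);
  register_ftrace_direct_multi(ops_C, tramp_C_addr);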

...

I have test code for the ftrace direct interface that would
allow registering/changing separate function/addr pairs,
so in one call you can change multiple ips, each to a
different tramp address

but even with that, I ended up with a lot of new complexity
on the bpf side keeping track of multi trampoline intersections,
so I thought I'd start with something limited and simpler

perhaps I should move back to that approach and see how badly
it ends ;-)

or this could be a next step on top of the current work, which
should get simpler with the args re-arranging you proposed

jirka


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-09-01 15:15     ` Jiri Olsa
@ 2021-09-02  3:56       ` Andrii Nakryiko
  2021-09-02 12:57         ` Jiri Olsa
  0 siblings, 1 reply; 44+ messages in thread
From: Andrii Nakryiko @ 2021-09-02  3:56 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Wed, Sep 1, 2021 at 8:15 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Tue, Aug 31, 2021 at 04:51:18PM -0700, Andrii Nakryiko wrote:
> > On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > When we have multi func program attached, the trampoline
> > > switched to the function model of the multi func program.
> > >
> > > This breaks already attached standard programs, for example
> > > when we attach following program:
> > >
> > >   SEC("fexit/bpf_fentry_test2")
> > >   int BPF_PROG(test1, int a, __u64 b, int ret)
> > >
> > > the trampoline pushes on stack args 'a' and 'b' and return
> > > value 'ret'.
> > >
> > > When following multi func program is attached to bpf_fentry_test2:
> > >
> > >   SEC("fexit.multi/bpf_fentry_test*")
> > >   int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d,
> > >                        __u64 e, __u64 f, int ret)
> > >
> > > the trampoline takes this program model and pushes all 6 args
> > > and return value on stack.
> > >
> > > But we still have the original 'test1' program attached, that
> > > expects 'ret' value where there's 'c' argument now:
> > >
> > >   test1(a, b, c)
> > >
> > > To fix that we simply overwrite 'c' argument with 'ret' value,
> > > so test1 is called as expected and test2 gets called as:
> > >
> > >   test2(a, b, ret, d, e, f, ret)
> > >
> > > which is ok, because 'c' is not defined for bpf_fentry_test2
> > > anyway.
> > >
> >
> > What if we change the order on the stack to be the return value first,
> > followed by input arguments. That would get us a bit closer to
> > unifying multi-trampoline and the normal one, right? BPF verifier
> > should be able to rewrite access to the last argument (i.e., return
> > value) for fexit programs to actually be at offset 0, and shift all
> > other arguments by 8 bytes. For fentry, if that helps to keep things
> > more aligned, we'd just skip the first 8 bytes on the stack and store
> > all the input arguments in the same offsets. So BPF verifier rewriting
> > logic stays consistent (except offset 0 will be disallowed).
>
> nice idea, with this in place we could cut that args re-arranging code
>
> >
> > Basically, I'm thinking how we can make normal and multi trampolines
> > more interoperable to remove those limitations that two
> > multi-trampolines can't be attached to the same function, which seems
> > like a pretty annoying limitation which will be easy to hit in
> > practice. Alexei previously proposed (as an optimization) to group all
> > to-be-attached functions into groups by number of arguments, so that
> > we can have up to 6 different trampolines tailored to actual functions
> > being attached. So that we don't save unnecessary extra input
> > arguments saving, which will be even more important once we allow more
> > than 6 arguments in the future.
> >
> > With such logic, we should be able to split all the functions into
> > multiple underlying trampolines, so it seems like it should be
> > possible to also allow multiple multi-fentry programs to be attached
> > to the same function by having a separate bpf_trampoline just for
> > those functions. It will be just an extension of the above "just 6
> > trampolines" strategy to "as much as we need trampolines".
>
> I'm probably missing something here.. say we have 2 functions with single
> argument:
>
>   foo1(int a)
>   foo2(int b)
>
> then having 2 programs:
>
>   A - attaching to foo1
>   B - attaching to foo2
>
> then you need to have 2 different trampolines instead of single 'generic-1-argument-trampoline'

right, you have two different BPF progs attached to two different
functions. You have to have 2 trampolines, not sure what's
confusing?..

>
> >
> > It's just a vague idea, sorry, I don't understand all the code yet.
> > But the limitation outlined in one of the previous patches seems very
> > limiting and unpleasant. I can totally see that some 24/7 running BPF
> > tracing app uses multi-fentry for tracing a small subset of kernel
> > functions non-stop, and then someone is trying to use bpftrace or
> > retsnoop to trace overlapping set of functions. And it immediately
> > fails. Very frustrating.
>
> so the current approach is to some extent driven by the direct ftrace
> batch API:
>
>   you have ftrace_ops object and set it up with functions you want
>   to change with calling:
>
>   ftrace_set_filter_ip(ops, ip1);
>   ftrace_set_filter_ip(ops, ip2);
>   ...
>
> and then register trampoline with those functions:
>
>   register_ftrace_direct_multi(ops, tramp_addr);
>
> and with this call being the expensive one (it does the actual work
> and sync waiting), my objective was to call it just once for update
>
> now with 2 intersecting multi trampolines we end up with 3 functions
> sets:
>
>   A - functions for first multi trampoline
>   B - functions for second multi trampoline
>   C - intersection of them
>
> each set needs different trampoline:
>
>   tramp A - calls program for first multi trampoline
>   tramp B - calls program for second multi trampoline
>   tramp C - calls both programs
>
> so we need to call register_ftrace_direct_multi 3 times

Yes, that's the minimal number of trampolines you need. Calling
register_ftrace_direct_multi() three times is not that bad at all,
compared to calling it 1000s of times. If you are worried about 1 vs 3
calls, I think you are over-optimizing here. I'd rather have no
restrictions on what can be attached to what and in which sequence,
at the cost of taking 3ms, than obscure (for uninitiated users)
restrictions that in some cases let attachment happen in 1ms.

The goal with multi-attach is to make it decently fast when attaching
to a lot of functions, but if attachment speed is fast enough, then
such small performance differences don't matter anymore.

>
> if we allow also standard trampolines being attached, it makes
> it even more complicated and ultimatelly gets broken to
> 1-function/1-trampoline pairs, ending up with attach speed
> that we have now
>

So let's make sure that we are on the same page. Let me write out an example.

Let's say we have 5 kernel functions: a, b, c, d, e. Say a, b, c all
have 1 input arg, and d and e have 2.

Now let's say we attach just normal fentry program A to function a.
Also we attach normal fexit program E to func e.

We'll have A attached to a with trampoline T1. We'll also have E
attached to e with trampoline T2. Right?

And now we try to attach generic fentry (fentry.multi in your
terminology) prog X to all 5 of them. If A and E weren't attached,
we'd need two generic trampolines, one for a, b, c (because 1 input
argument) and another for d,e (because 2 input arguments). But because
we already have A and E attached, we'll end up needing 4:

T1 (1 arg)  for func a calling progs A and X
T2 (2 args) for func e calling progs E and X
T3 (1 arg)  for func b and c calling X
T4 (2 args) for func d calling X

We can't have fewer than that and still satisfy all the constraints.
But 4 is not that bad. If the example had 1000s of functions, you'd
still need between 4 and 8 trampolines (if we had 3, 4, 5, and 6 input
args for kernel functions). That's way less than the 1000s of
trampolines needed today. And it's still fast enough on attachment.

The good thing about what we discussed, making the current trampoline
co-exist with the generic (multi) fentry/fexit, is that we'll still
have just one trampoline, saving exactly as many input arguments as
the attached function(s) have. So at least we don't have to maintain
two separate pieces of logic for that. Then the only added complexity
would be breaking up all to-be-attached kernel functions into groups,
as described in the example.

It sounds a bit more complicated in writing than it will be in
practice, probably. I think the critical part is unifying the
trampoline to work with fentry/fexit and fentry.multi/fexit.multi
simultaneously, which you seem to agree above is achievable.

> ...
>
> I have test code for ftrace direct interface that would
> allow to register/change separate function/addr pairs,
> so in one call you can change multiple ips each to
> different tramp addresss
>
> but even with that, I ended up with lot of new complexity
> on bpf side keeping track of multi trampolines intersections,
> so I thought I'd start with something limited and simpler
>
> perhaps I should move back to that approach and see how bad
> it ends ;-)
>
> or this could be next step on top of current work, that should
> get simpler with the args re-arranging you proposed
>
> jirka
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-09-02  3:56       ` Andrii Nakryiko
@ 2021-09-02 12:57         ` Jiri Olsa
  2021-09-02 16:54           ` Andrii Nakryiko
  2021-09-02 21:55           ` Alexei Starovoitov
  0 siblings, 2 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-09-02 12:57 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Wed, Sep 01, 2021 at 08:56:19PM -0700, Andrii Nakryiko wrote:
> On Wed, Sep 1, 2021 at 8:15 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > On Tue, Aug 31, 2021 at 04:51:18PM -0700, Andrii Nakryiko wrote:
> > > On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > > >
> > > > When we have multi func program attached, the trampoline
> > > > switched to the function model of the multi func program.
> > > >
> > > > This breaks already attached standard programs, for example
> > > > when we attach following program:
> > > >
> > > >   SEC("fexit/bpf_fentry_test2")
> > > >   int BPF_PROG(test1, int a, __u64 b, int ret)
> > > >
> > > > the trampoline pushes on stack args 'a' and 'b' and return
> > > > value 'ret'.
> > > >
> > > > When following multi func program is attached to bpf_fentry_test2:
> > > >
> > > >   SEC("fexit.multi/bpf_fentry_test*")
> > > >   int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d,
> > > >                        __u64 e, __u64 f, int ret)
> > > >
> > > > the trampoline takes this program model and pushes all 6 args
> > > > and return value on stack.
> > > >
> > > > But we still have the original 'test1' program attached, that
> > > > expects 'ret' value where there's 'c' argument now:
> > > >
> > > >   test1(a, b, c)
> > > >
> > > > To fix that we simply overwrite 'c' argument with 'ret' value,
> > > > so test1 is called as expected and test2 gets called as:
> > > >
> > > >   test2(a, b, ret, d, e, f, ret)
> > > >
> > > > which is ok, because 'c' is not defined for bpf_fentry_test2
> > > > anyway.
> > > >
> > >
> > > What if we change the order on the stack to be the return value first,
> > > followed by input arguments. That would get us a bit closer to
> > > unifying multi-trampoline and the normal one, right? BPF verifier
> > > should be able to rewrite access to the last argument (i.e., return
> > > value) for fexit programs to actually be at offset 0, and shift all
> > > other arguments by 8 bytes. For fentry, if that helps to keep things
> > > more aligned, we'd just skip the first 8 bytes on the stack and store
> > > all the input arguments in the same offsets. So BPF verifier rewriting
> > > logic stays consistent (except offset 0 will be disallowed).
> >
> > nice idea, with this in place we could cut that args re-arranging code
> >
> > >
> > > Basically, I'm thinking how we can make normal and multi trampolines
> > > more interoperable to remove those limitations that two
> > > multi-trampolines can't be attached to the same function, which seems
> > > like a pretty annoying limitation which will be easy to hit in
> > > practice. Alexei previously proposed (as an optimization) to group all
> > > to-be-attached functions into groups by number of arguments, so that
> > > we can have up to 6 different trampolines tailored to actual functions
> > > being attached. So that we don't save unnecessary extra input
> > > arguments saving, which will be even more important once we allow more
> > > than 6 arguments in the future.
> > >
> > > With such logic, we should be able to split all the functions into
> > > multiple underlying trampolines, so it seems like it should be
> > > possible to also allow multiple multi-fentry programs to be attached
> > > to the same function by having a separate bpf_trampoline just for
> > > those functions. It will be just an extension of the above "just 6
> > > trampolines" strategy to "as much as we need trampolines".
> >
> > I'm probably missing something here.. say we have 2 functions with single
> > argument:
> >
> >   foo1(int a)
> >   foo2(int b)
> >
> > then having 2 programs:
> >
> >   A - attaching to foo1
> >   B - attaching to foo2
> >
> > then you need to have 2 different trampolines instead of single 'generic-1-argument-trampoline'
> 
> right, you have two different BPF progs attached to two different
> functions. You have to have 2 trampolines, not sure what's
> confusing?..

I misunderstood the statement above:

> > > practice. Alexei previously proposed (as an optimization) to group all
> > > to-be-attached functions into groups by number of arguments, so that
> > > we can have up to 6 different trampolines tailored to actual functions
> > > being attached. So that we don't save unnecessary extra input

you meant just the functions being attached at that moment, not all of them, ok

> 
> >
> > >
> > > It's just a vague idea, sorry, I don't understand all the code yet.
> > > But the limitation outlined in one of the previous patches seems very
> > > limiting and unpleasant. I can totally see that some 24/7 running BPF
> > > tracing app uses multi-fentry for tracing a small subset of kernel
> > > functions non-stop, and then someone is trying to use bpftrace or
> > > retsnoop to trace overlapping set of functions. And it immediately
> > > fails. Very frustrating.
> >
> > so the current approach is to some extent driven by the direct ftrace
> > batch API:
> >
> >   you have ftrace_ops object and set it up with functions you want
> >   to change with calling:
> >
> >   ftrace_set_filter_ip(ops, ip1);
> >   ftrace_set_filter_ip(ops, ip2);
> >   ...
> >
> > and then register trampoline with those functions:
> >
> >   register_ftrace_direct_multi(ops, tramp_addr);
> >
> > and with this call being the expensive one (it does the actual work
> > and sync waiting), my objective was to call it just once for update
> >
> > now with 2 intersecting multi trampolines we end up with 3 functions
> > sets:
> >
> >   A - functions for first multi trampoline
> >   B - functions for second multi trampoline
> >   C - intersection of them
> >
> > each set needs different trampoline:
> >
> >   tramp A - calls program for first multi trampoline
> >   tramp B - calls program for second multi trampoline
> >   tramp C - calls both programs
> >
> > so we need to call register_ftrace_direct_multi 3 times
> 
> Yes, that's the minimal amount of trampolines you need. Calling
> register_ftrace_direct_multi() three times is not that bad at all,
> compared to calling it 1000s of times. If you are worried about 1 vs 3
> calls, I think you are over-optimizing here. I'd rather take no
> restrictions on what can be attached to what and in which sequences
> but taking 3ms vs having obscure (for uninitiated users) restrictions,
> but in some cases allowing attachment to happen in 1ms.
> 
> The goal with multi-attach is to make it decently fast when attaching
> to a lot functions, but if attachment speed is fast enough, then such
> small performance differences don't matter anymore.

true, I might have been focused on the worst possible case here ;-)

> 
> >
> > if we allow also standard trampolines being attached, it makes
> > it even more complicated and ultimatelly gets broken to
> > 1-function/1-trampoline pairs, ending up with attach speed
> > that we have now
> >
> 
> So let's make sure that we are on the same page. Let me write out an example.
> 
> Let's say we have 5 kernel functions: a, b, c, d, e. Say a, b, c all
> have 1 input args, and d and e have 2.
> 
> Now let's say we attach just normal fentry program A to function a.
> Also we attach normal fexit program E to func e.
> 
> We'll have A  attached to a with trampoline T1. We'll also have E
> attached to e with trampoline T2. Right?
> 
> And now we try to attach generic fentry (fentry.multi in your
> terminology) prog X to all 5 of them. If A and E weren't attached,
> we'd need two generic trampolines, one for a, b, c (because 1 input
> argument) and another for d,e (because 2 input arguments). But because
> we already have A and B attached, we'll end up needing 4:
> 
> T1 (1 arg)  for func a calling progs A and X
> T2 (2 args) for func e calling progs E and X
> T3 (1 arg)  for func b and c calling X
> T4 (2 args) for func d calling X

so the current code would group T3/T4 together, but if we keep
them separated, then we won't need to use the new model and can
cut off some of the code, ok

together with that args shifting we could end up with almost
untouched trampoline generation code ;-)

> 
> We can't have less than that and satisfy all the constraints. But 4 is
> not that bad. If the example has 1000s of functions, you'd still need
> between 4 and 8 trampolines (if we had 3, 4, 5, and 6 input args for
> kernel functions). That's way less than 1000s of trampolines needed
> today. And it's still fast enough on the attachment.
> 
> The good thing with what we discussed with making current trampoline
> co-exist with generic (multi) fentry/fexit, is that we'll still have
> just one trampoline, saving exactly as many input arguments as
> attached function(s) have. So at least we don't have to maintain two
> separate pieces of logic for that. Then the only added complexity
> would be breaking up all to-be-attached kernel functions into groups,
> as described in the example.
> 
> It sounds a bit more complicated in writing than it will be in
> practice, probably. I think the critical part is unification of
> trampoline to work with fentry/fexit and fentry.multi/fexit.multi
> simultaneously, which seems like you agreed above is achievable.

ok, I hadn't considered it this way, but I think it's doable

thanks,
jirka


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-09-02 12:57         ` Jiri Olsa
@ 2021-09-02 16:54           ` Andrii Nakryiko
  2021-09-02 21:55           ` Alexei Starovoitov
  1 sibling, 0 replies; 44+ messages in thread
From: Andrii Nakryiko @ 2021-09-02 16:54 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Sep 2, 2021 at 5:57 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Sep 01, 2021 at 08:56:19PM -0700, Andrii Nakryiko wrote:
> > On Wed, Sep 1, 2021 at 8:15 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > On Tue, Aug 31, 2021 at 04:51:18PM -0700, Andrii Nakryiko wrote:
> > > > On Thu, Aug 26, 2021 at 12:41 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > > > >
> > > > > When we have multi func program attached, the trampoline
> > > > > switched to the function model of the multi func program.
> > > > >
> > > > > This breaks already attached standard programs, for example
> > > > > when we attach following program:
> > > > >
> > > > >   SEC("fexit/bpf_fentry_test2")
> > > > >   int BPF_PROG(test1, int a, __u64 b, int ret)
> > > > >
> > > > > the trampoline pushes on stack args 'a' and 'b' and return
> > > > > value 'ret'.
> > > > >
> > > > > When following multi func program is attached to bpf_fentry_test2:
> > > > >
> > > > >   SEC("fexit.multi/bpf_fentry_test*")
> > > > >   int BPF_PROG(test2, __u64 a, __u64 b, __u64 c, __u64 d,
> > > > >                        __u64 e, __u64 f, int ret)
> > > > >
> > > > > the trampoline takes this program model and pushes all 6 args
> > > > > and return value on stack.
> > > > >
> > > > > But we still have the original 'test1' program attached, that
> > > > > expects 'ret' value where there's 'c' argument now:
> > > > >
> > > > >   test1(a, b, c)
> > > > >
> > > > > To fix that we simply overwrite 'c' argument with 'ret' value,
> > > > > so test1 is called as expected and test2 gets called as:
> > > > >
> > > > >   test2(a, b, ret, d, e, f, ret)
> > > > >
> > > > > which is ok, because 'c' is not defined for bpf_fentry_test2
> > > > > anyway.
> > > > >
> > > >
> > > > What if we change the order on the stack to be the return value first,
> > > > followed by input arguments. That would get us a bit closer to
> > > > unifying multi-trampoline and the normal one, right? BPF verifier
> > > > should be able to rewrite access to the last argument (i.e., return
> > > > value) for fexit programs to actually be at offset 0, and shift all
> > > > other arguments by 8 bytes. For fentry, if that helps to keep things
> > > > more aligned, we'd just skip the first 8 bytes on the stack and store
> > > > all the input arguments in the same offsets. So BPF verifier rewriting
> > > > logic stays consistent (except offset 0 will be disallowed).
> > >
> > > nice idea, with this in place we could cut that args re-arranging code
> > >
> > > >
> > > > Basically, I'm thinking how we can make normal and multi trampolines
> > > > more interoperable to remove those limitations that two
> > > > multi-trampolines can't be attached to the same function, which seems
> > > > like a pretty annoying limitation which will be easy to hit in
> > > > practice. Alexei previously proposed (as an optimization) to group all
> > > > to-be-attached functions into groups by number of arguments, so that
> > > > we can have up to 6 different trampolines tailored to actual functions
> > > > being attached. So that we don't save unnecessary extra input
> > > > arguments saving, which will be even more important once we allow more
> > > > than 6 arguments in the future.
> > > >
> > > > With such logic, we should be able to split all the functions into
> > > > multiple underlying trampolines, so it seems like it should be
> > > > possible to also allow multiple multi-fentry programs to be attached
> > > > to the same function by having a separate bpf_trampoline just for
> > > > those functions. It will be just an extension of the above "just 6
> > > > trampolines" strategy to "as much as we need trampolines".
> > >
> > > I'm probably missing something here.. say we have 2 functions with single
> > > argument:
> > >
> > >   foo1(int a)
> > >   foo2(int b)
> > >
> > > then having 2 programs:
> > >
> > >   A - attaching to foo1
> > >   B - attaching to foo2
> > >
> > > then you need to have 2 different trampolines instead of single 'generic-1-argument-trampoline'
> >
> > right, you have two different BPF progs attached to two different
> > functions. You have to have 2 trampolines, not sure what's
> > confusing?..
>
> I misunderstood the statement above:
>
> > > > practice. Alexei previously proposed (as an optimization) to group all
> > > > to-be-attached functions into groups by number of arguments, so that
> > > > we can have up to 6 different trampolines tailored to actual functions
> > > > being attached. So that we don't save unnecessary extra input
>
> you meant just functions to be attached at that moment, not all, ok
>
> >
> > >
> > > >
> > > > It's just a vague idea, sorry, I don't understand all the code yet.
> > > > But the limitation outlined in one of the previous patches seems very
> > > > limiting and unpleasant. I can totally see that some 24/7 running BPF
> > > > tracing app uses multi-fentry for tracing a small subset of kernel
> > > > functions non-stop, and then someone is trying to use bpftrace or
> > > > retsnoop to trace overlapping set of functions. And it immediately
> > > > fails. Very frustrating.
> > >
> > > so the current approach is to some extent driven by the direct ftrace
> > > batch API:
> > >
> > >   you have ftrace_ops object and set it up with functions you want
> > >   to change with calling:
> > >
> > >   ftrace_set_filter_ip(ops, ip1);
> > >   ftrace_set_filter_ip(ops, ip2);
> > >   ...
> > >
> > > and then register trampoline with those functions:
> > >
> > >   register_ftrace_direct_multi(ops, tramp_addr);
> > >
> > > and with this call being the expensive one (it does the actual work
> > > and sync waiting), my objective was to call it just once for update
> > >
> > > now with 2 intersecting multi trampolines we end up with 3 functions
> > > sets:
> > >
> > >   A - functions for first multi trampoline
> > >   B - functions for second multi trampoline
> > >   C - intersection of them
> > >
> > > each set needs different trampoline:
> > >
> > >   tramp A - calls program for first multi trampoline
> > >   tramp B - calls program for second multi trampoline
> > >   tramp C - calls both programs
> > >
> > > so we need to call register_ftrace_direct_multi 3 times
> >
> > Yes, that's the minimal amount of trampolines you need. Calling
> > register_ftrace_direct_multi() three times is not that bad at all,
> > compared to calling it 1000s of times. If you are worried about 1 vs 3
> > calls, I think you are over-optimizing here. I'd rather take no
> > restrictions on what can be attached to what and in which sequences
> > but taking 3ms vs having obscure (for uninitiated users) restrictions,
> > but in some cases allowing attachment to happen in 1ms.
> >
> > The goal with multi-attach is to make it decently fast when attaching
> > to a lot functions, but if attachment speed is fast enough, then such
> > small performance differences don't matter anymore.
>
> true, I might have been focused on the worst possible case here ;-)
>
> >
> > >
> > > if we allow also standard trampolines being attached, it makes
> > > it even more complicated and ultimatelly gets broken to
> > > 1-function/1-trampoline pairs, ending up with attach speed
> > > that we have now
> > >
> >
> > So let's make sure that we are on the same page. Let me write out an example.
> >
> > Let's say we have 5 kernel functions: a, b, c, d, e. Say a, b, c all
> > have 1 input args, and d and e have 2.
> >
> > Now let's say we attach just normal fentry program A to function a.
> > Also we attach normal fexit program E to func e.
> >
> > We'll have A  attached to a with trampoline T1. We'll also have E
> > attached to e with trampoline T2. Right?
> >
> > And now we try to attach generic fentry (fentry.multi in your
> > terminology) prog X to all 5 of them. If A and E weren't attached,
> > we'd need two generic trampolines, one for a, b, c (because 1 input
> > argument) and another for d,e (because 2 input arguments). But because
> > we already have A and B attached, we'll end up needing 4:
> >
> > T1 (1 arg)  for func a calling progs A and X
> > T2 (2 args) for func e calling progs E and X
> > T3 (1 arg)  for func b and c calling X
> > T4 (2 args) for func d calling X
>
> so current code would group T3/T4 together, but if we keep
> them separated, then we won't need to use new model and
> cut off some of the code, ok
>
> together with that args shifting we could endup with almost
> untouched trampoline generation code ;-)

exactly, and thus remove those limitations you've described

>
> >
> > We can't have less than that and satisfy all the constraints. But 4 is
> > not that bad. If the example has 1000s of functions, you'd still need
> > between 4 and 8 trampolines (if we had 3, 4, 5, and 6 input args for
> > kernel functions). That's way less than 1000s of trampolines needed
> > today. And it's still fast enough on the attachment.
> >
> > The good thing with what we discussed with making current trampoline
> > co-exist with generic (multi) fentry/fexit, is that we'll still have
> > just one trampoline, saving exactly as many input arguments as
> > attached function(s) have. So at least we don't have to maintain two
> > separate pieces of logic for that. Then the only added complexity
> > would be breaking up all to-be-attached kernel functions into groups,
> > as described in the example.
> >
> > It sounds a bit more complicated in writing than it will be in
> > practice, probably. I think the critical part is unification of
> > trampoline to work with fentry/fexit and fentry.multi/fexit.multi
> > simultaneously, which seems like you agreed above is achievable.
>
> ok, I haven't considered this way, but I think it's doable
>

awesome, give it a try!


> thanks,
> jirka
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-09-02 12:57         ` Jiri Olsa
  2021-09-02 16:54           ` Andrii Nakryiko
@ 2021-09-02 21:55           ` Alexei Starovoitov
  2021-09-03  9:50             ` Jiri Olsa
  1 sibling, 1 reply; 44+ messages in thread
From: Alexei Starovoitov @ 2021-09-02 21:55 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Sep 02, 2021 at 02:57:11PM +0200, Jiri Olsa wrote:
> > 
> > Let's say we have 5 kernel functions: a, b, c, d, e. Say a, b, c all
> > have 1 input args, and d and e have 2.
> > 
> > Now let's say we attach just normal fentry program A to function a.
> > Also we attach normal fexit program E to func e.
> > 
> > We'll have A  attached to a with trampoline T1. We'll also have E
> > attached to e with trampoline T2. Right?
> > 
> > And now we try to attach generic fentry (fentry.multi in your
> > terminology) prog X to all 5 of them. If A and E weren't attached,
> > we'd need two generic trampolines, one for a, b, c (because 1 input
> > argument) and another for d,e (because 2 input arguments). But because
> > we already have A and B attached, we'll end up needing 4:
> > 
> > T1 (1 arg)  for func a calling progs A and X
> > T2 (2 args) for func e calling progs E and X
> > T3 (1 arg)  for func b and c calling X
> > T4 (2 args) for func d calling X
> 
> so the current code would group T3/T4 together, but if we keep
> them separated, then we won't need the new model and can
> cut off some of the code, ok

We've brainstormed this idea further with Andrii.
(thankfully we could do it in person now ;) which saved a ton of time)

It seems the following should work:
5 kernel functions: a(int), b(long), c(void*), d(int, int), e(long, long).
fentry prog A is attached to 'a'.
fexit prog E is attached to 'e'.
multi-prog X wants to attach to all of them.
It can be achieved with 4 trampolines.

The trampolines called from funcs 'a' and 'e' can be patched to
call the A+X and E+X programs respectively.
The multi program X needs to be able to access return values
and arguments of all functions it was attached to.
We can achieve that by always generating a trampoline (both multi and normal)
with an extra constant stored on the stack. This constant is the number of
arguments served by this trampoline.
The trampoline for 'a' will store nr_args=1.
The trampoline for 'e' will store nr_args=2.
We need two multi trampolines.
The multi trampoline X1 will serve 'b' and 'c' and store nr_args=1,
and multi-trampoline X2 will serve 'd' and store nr_args=2, both
into a hidden stack location (like ctx[-2]).
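
As a sketch, the trampoline stack then looks like this through ctx
(u64 slots; the exact slot order is discussed further below):

/*
 *   ctx[-2]           nr_args - number of args this trampoline saved
 *   ctx[-1]           func_ip - address of the traced function
 *   ctx[0]            first arg
 *   ...
 *   ctx[nr_args - 1]  last arg
 *   ctx[nr_args]      return value (valid in fexit progs)
 */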

The multi prog X can look like:
int BPF_PROG(x, __u64 arg1, __u64 arg2, __u64 ret)
in which case it will read correct args and ret when called from 'd' and 'e',
but only a correct arg1 when called from 'a', 'b', 'c'.

To always correctly access arguments and the return value
the program can use two new helpers: bpf_arg(ctx, N) and bpf_ret_value(ctx).
Both will be fully inlined helpers similar to bpf_get_func_ip().
u64 bpf_arg(u64 *ctx, int n)
{
  u64 nr_args = ctx[-2]; /* that's the place where _all_ trampolines will store nr_args */

  if ((u64) n >= nr_args) /* args are 0-indexed: ctx[0] .. ctx[nr_args - 1] */
    return 0;
  return ctx[n];
}
u64 bpf_ret_value(u64 *ctx)
{
  u64 nr_args = ctx[-2];

  return ctx[nr_args]; /* the return value is stored right after the args */
}

These helpers will be the only recommended way to access args and ret value
in multi progs.
The nice advantage is that normal fentry/fexit progs can use them too.
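
A minimal sketch of a multi prog using them (the "fexit.multi" section
name and the helper availability are assumptions from this discussion,
not a final API):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fexit.multi/func_*")
int BPF_PROG(x)
{
	__u64 arg1 = bpf_arg(ctx, 0);		/* args are 0-indexed */
	__u64 ret  = bpf_ret_value(ctx);	/* reads ctx[nr_args] */

	bpf_printk("arg1=%llu ret=%llu", arg1, ret);
	return 0;
}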

We can rearrange ctx[-1] /* func_ip */ and ctx[-2] /* nr_args */
if it makes things easier.

If a multi prog knows that it is attaching to 100 kernel functions
and all of them have 2 arguments, it can still do
int BPF_PROG(x, __u64 arg1, __u64 arg2, __u64 ret)
{ // access arg1, arg2, ret directly
and it will work correctly.

We can make it really strict in the verifier and disallow such
direct access to args from the multi prog and only allow
access via the bpf_arg/bpf_ret_value helpers, but I think it's overkill.
Reading garbage values from the stack isn't great, but it's not a safety issue.
It means that the verifier will allow something like 16 u64 args
in a multi program. It cannot allow a large number, since ctx[1024]
might become a safety issue, while ctx[4] could be garbage
or a valid value depending on the call site.
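
(a sketch of that bound check; MAX_MULTI_ARGS and the function name are
hypothetical, not verifier code)

#define MAX_MULTI_ARGS 16

/* allow direct ctx[i] loads only within a small fixed window:
 * args at ctx[0..MAX_MULTI_ARGS - 1] plus one ret value slot */
static bool multi_ctx_off_ok(int off /* byte offset into ctx */)
{
	return off >= 0 && off <= MAX_MULTI_ARGS * 8;
}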

Thoughts?

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs
  2021-09-02 21:55           ` Alexei Starovoitov
@ 2021-09-03  9:50             ` Jiri Olsa
  0 siblings, 0 replies; 44+ messages in thread
From: Jiri Olsa @ 2021-09-03  9:50 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Steven Rostedt (VMware),
	Networking, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Daniel Xu, Viktor Malik

On Thu, Sep 02, 2021 at 02:55:38PM -0700, Alexei Starovoitov wrote:
> On Thu, Sep 02, 2021 at 02:57:11PM +0200, Jiri Olsa wrote:
> > > 
> > > Let's say we have 5 kernel functions: a, b, c, d, e. Say a, b, c all
> > > have 1 input args, and d and e have 2.
> > > 
> > > Now let's say we attach just normal fentry program A to function a.
> > > Also we attach normal fexit program E to func e.
> > > 
> > > We'll have A  attached to a with trampoline T1. We'll also have E
> > > attached to e with trampoline T2. Right?
> > > 
> > > And now we try to attach generic fentry (fentry.multi in your
> > > terminology) prog X to all 5 of them. If A and E weren't attached,
> > > we'd need two generic trampolines, one for a, b, c (because 1 input
> > > argument) and another for d,e (because 2 input arguments). But because
> > > we already have A and E attached, we'll end up needing 4:
> > > 
> > > T1 (1 arg)  for func a calling progs A and X
> > > T2 (2 args) for func e calling progs E and X
> > > T3 (1 arg)  for func b and c calling X
> > > T4 (2 args) for func d calling X
> > 
> > so the current code would group T3/T4 together, but if we keep
> > them separated, then we won't need the new model and can
> > cut off some of the code, ok
> 
> We've brainstormed this idea further with Andrii.
> (thankfully we could do it in-person now ;) which saved a ton of time)
> 
> It seems the following should work:
> 5 kernel functions: a(int), b(long), c(void*), d(int, int), e(long, long).
> fentry prog A is attached to 'a'.
> fexit prog E is attached to 'e'.
> multi-prog X wants to attach to all of them.
> It can be achieved with 4 trampolines.
> 
> The trampolines called from funcs 'a' and 'e' can be patched to
> call the A+X and E+X programs respectively.
> The multi program X needs to be able to access return values
> and arguments of all functions it was attached to.
> We can achieve that by always generating a trampoline (both multi and normal)
> with an extra constant stored on the stack. This constant is the number of
> arguments served by this trampoline.
> The trampoline for 'a' will store nr_args=1.
> The trampoline for 'e' will store nr_args=2.
> We need two multi trampolines.
> The multi trampoline X1 will serve 'b' and 'c' and store nr_args=1,
> and multi-trampoline X2 will serve 'd' and store nr_args=2, both
> into a hidden stack location (like ctx[-2]).
> 
> The multi prog X can look like:
> int BPF_PROG(x, __u64 arg1, __u64 arg2, __u64 ret)
> in which case it will read correct args and ret when called from 'd' and 'e',
> but only a correct arg1 when called from 'a', 'b', 'c'.
> 
> To always correctly access arguments and the return value
> the program can use two new helpers: bpf_arg(ctx, N) and bpf_ret_value(ctx).
> Both will be fully inlined helpers similar to bpf_get_func_ip().
> u64 bpf_arg(u64 *ctx, int n)
> {
>   u64 nr_args = ctx[-2]; /* that's the place where _all_ trampolines will store nr_args */
>
>   if ((u64) n >= nr_args) /* args are 0-indexed: ctx[0] .. ctx[nr_args - 1] */
>     return 0;
>   return ctx[n];
> }
> u64 bpf_ret_value(u64 *ctx)
> {
>   u64 nr_args = ctx[-2];
>
>   return ctx[nr_args]; /* the return value is stored right after the args */
> }

ok, this is much better than rewiring args access in the verifier

> 
> These helpers will be the only recommended way to access args and ret value
> in multi progs.
> The nice advantage is that normal fentry/fexit progs can use them too.
> 
> We can rearrange ctx[-1] /* func_ip */ and ctx[-2] /* nr_args */
> if it makes things easier.

so nr_args will be there all the time, while func_ip is optional
at the moment (based on the presence of the get_func_ip helper in the
program), so we can either switch that:

   func_ip in ctx[-2]
   nr_args in ctx[-1]

or make func_ip not optional to avoid confusion

I think pushing func_ip to ctx[-2] is ok
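
(sketch of the layout under that option; nothing final, just the slot
order this message suggests)

/*
 *   ctx[-2]  func_ip - stored unconditionally, no longer optional
 *   ctx[-1]  nr_args - the inlined bpf_arg/bpf_ret_value would then
 *                      read ctx[-1] instead of ctx[-2]
 */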

> 
> If a multi prog knows that it is attaching to 100 kernel functions
> and all of them have 2 arguments, it can still do
> int BPF_PROG(x, __u64 arg1, __u64 arg2, __u64 ret)
> { // access arg1, arg2, ret directly
> and it will work correctly.

ok, it's the user's decision, because at load time we don't know the
functions it will be attached to, so the verifier can't do anything

> 
> We can make it really strict in the verifier and disallow such
> direct access to args from the multi prog and only allow
> access via the bpf_arg/bpf_ret_value helpers, but I think it's overkill.
> Reading garbage values from the stack isn't great, but it's not a safety issue.

we could also check it at attach time and refuse the attachment if the
attach functions have different nr_args and the program does not use
the arg helpers
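
(a sketch of that attach-time check; apart from struct btf_func_model,
the names and the uses_arg_helpers flag are assumptions, not existing
kernel API)

static int check_multi_attach_args(const struct btf_func_model *models,
				   int cnt, bool uses_arg_helpers)
{
	int i;

	for (i = 1; i < cnt; i++) {
		/* mixed argument counts are only safe when the program
		 * accesses args via bpf_arg/bpf_ret_value */
		if (models[i].nr_args != models[0].nr_args &&
		    !uses_arg_helpers)
			return -EINVAL;
	}
	return 0;
}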


> It means that the verifier will allow something like 16 u64 args
> in a multi program. It cannot allow a large number, since ctx[1024]
> might become a safety issue, while ctx[4] could be garbage
> or a valid value depending on the call site.
> 
> Thoughts?
> 

looks good, thanks for solving this ;-)

jirka


^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2021-09-03  9:50 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-26 19:38 [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Jiri Olsa
2021-08-26 19:38 ` [PATCH bpf-next v4 01/27] x86/ftrace: Remove extra orig rax move Jiri Olsa
2021-08-26 19:38 ` [PATCH bpf-next v4 02/27] x86/ftrace: Remove fault protection code in prepare_ftrace_return Jiri Olsa
2021-08-26 19:38 ` [PATCH bpf-next v4 03/27] x86/ftrace: Make function graph use ftrace directly Jiri Olsa
2021-08-26 19:38 ` [PATCH bpf-next v4 04/27] tracing: Add trampoline/graph selftest Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 05/27] ftrace: Add ftrace_add_rec_direct function Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 06/27] ftrace: Add multi direct register/unregister interface Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 07/27] ftrace: Add multi direct modify interface Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 08/27] ftrace/samples: Add multi direct interface test module Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 09/27] bpf: Add support to load multi func tracing program Jiri Olsa
2021-08-31 23:17   ` Andrii Nakryiko
2021-09-01 11:32     ` Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 10/27] bpf: Add struct bpf_tramp_node layer Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 11/27] bpf: Factor out bpf_trampoline_init function Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 12/27] bpf: Factor out __bpf_trampoline_lookup function Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 13/27] bpf: Factor out __bpf_trampoline_put function Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 14/27] bpf: Change bpf_trampoline_get to return error pointer Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 15/27] bpf, x64: Allow to use caller address from stack Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 16/27] bpf: Add bpf_trampoline_multi_get/put functions Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 17/27] bpf: Add multi trampoline attach support Jiri Olsa
2021-08-31 23:36   ` Andrii Nakryiko
2021-09-01  0:02     ` Andrii Nakryiko
2021-09-01 11:39     ` Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 18/27] bpf, x64: Store properly return value for trampoline with multi func programs Jiri Olsa
2021-08-31 23:51   ` Andrii Nakryiko
2021-09-01 15:15     ` Jiri Olsa
2021-09-02  3:56       ` Andrii Nakryiko
2021-09-02 12:57         ` Jiri Olsa
2021-09-02 16:54           ` Andrii Nakryiko
2021-09-02 21:55           ` Alexei Starovoitov
2021-09-03  9:50             ` Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 19/27] bpf: Attach multi trampoline with ftrace_ops Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 20/27] libbpf: Add btf__find_by_glob_kind function Jiri Olsa
2021-09-01  0:10   ` Andrii Nakryiko
2021-09-01 11:33     ` Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 21/27] libbpf: Add support to link multi func tracing program Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 22/27] selftests/bpf: Add fentry multi func test Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 23/27] selftests/bpf: Add fexit " Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 24/27] selftests/bpf: Add fentry/fexit " Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 25/27] selftests/bpf: Add mixed " Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 26/27] selftests/bpf: Add attach " Jiri Olsa
2021-08-26 19:39 ` [PATCH bpf-next v4 27/27] selftests/bpf: Add ret_mod " Jiri Olsa
2021-08-29 17:04 ` [PATCH bpf-next v4 00/27] x86/ftrace/bpf: Add batch support for direct/tracing attach Alexei Starovoitov
2021-08-30  8:02   ` Jiri Olsa
