bpf.vger.kernel.org archive mirror
* [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT
@ 2022-08-11  9:15 Jiri Olsa
  2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
                   ` (6 more replies)
  0 siblings, 7 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

hi,
Martynas reported bpf_get_func_ip returning an address shifted by +4
when the CONFIG_X86_KERNEL_IBT option is enabled, and I found some
bpf tests failing when this option is enabled.

The CONFIG_X86_KERNEL_IBT option adds an endbr instruction at the
function entry, so the idea is to 'fix' the entry ip for kprobe_multi
and trampoline probes, because they are placed on the function
entry.
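
The adjustment boils down to something like the sketch below (this is
the shape used in patch 4 for the kprobe_multi handler; the trampoline
case in patch 3 instead embeds the known function address directly):

  #ifdef CONFIG_X86_KERNEL_IBT
  	/* the symbol address sits ENDBR_INSN_SIZE bytes before the
  	 * reported probe ip when the function starts with endbr
  	 */
  	if (is_endbr(*((u32 *) entry_ip - 1)))
  		entry_ip -= ENDBR_INSN_SIZE;
  #endif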

v2 changes:
  - change kprobes get_func_ip to return zero for kprobes
    attached within the function body [Andrii]
  - detect IBT config and properly test kprobe with offset 
    [Andrii]

v1 changes:
  - read previous instruction in kprobe_multi link handler
    and adjust entry_ip for CONFIG_X86_KERNEL_IBT option
  - split first patch into 2 separate changes
  - update changelogs

thanks,
jirka


---
Jiri Olsa (6):
      kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
      ftrace: Keep the resolved addr in kallsyms_callback
      bpf: Use given function address for trampoline ip arg
      bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT
      bpf: Return value in kprobe get_func_ip only for entry address
      selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT

 arch/x86/net/bpf_jit_comp.c                               |  9 ++++-----
 include/linux/kprobes.h                                   |  1 +
 kernel/kprobes.c                                          |  6 +++++-
 kernel/trace/bpf_trace.c                                  | 15 ++++++++++++++-
 kernel/trace/ftrace.c                                     |  3 +--
 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++-----------
 tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 22 ++++++++++------------
 tools/testing/selftests/bpf/progs/kprobe_multi.c          |  4 +---
 8 files changed, 87 insertions(+), 35 deletions(-)

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
@ 2022-08-11  9:15 ` Jiri Olsa
  2022-08-15  9:57   ` Peter Zijlstra
                     ` (2 more replies)
  2022-08-11  9:15 ` [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback Jiri Olsa
                   ` (5 subsequent siblings)
  6 siblings, 3 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
attach address is on function entry. This is used in following
changes in get_func_ip helper to return correct function address.
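
For context, the consumer added later in the series (patch 5) checks
the flag roughly like this:

  	if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
  		return 0;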

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/kprobes.h | 1 +
 kernel/kprobes.c        | 6 +++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 55041d2f884d..a0b92be98984 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -103,6 +103,7 @@ struct kprobe {
 				   * this flag is only for optimized_kprobe.
 				   */
 #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
+#define KPROBE_FLAG_ON_FUNC_ENTRY	16 /* probe is on the function entry */
 
 /* Has this kprobe gone ? */
 static inline bool kprobe_gone(struct kprobe *p)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index f214f8c088ed..a6b1b5c49d92 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1605,9 +1605,10 @@ int register_kprobe(struct kprobe *p)
 	struct kprobe *old_p;
 	struct module *probed_mod;
 	kprobe_opcode_t *addr;
+	bool on_func_entry;
 
 	/* Adjust probe address from symbol */
-	addr = kprobe_addr(p);
+	addr = _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
 	p->addr = addr;
@@ -1627,6 +1628,9 @@ int register_kprobe(struct kprobe *p)
 
 	mutex_lock(&kprobe_mutex);
 
+	if (on_func_entry)
+		p->flags |= KPROBE_FLAG_ON_FUNC_ENTRY;
+
 	old_p = get_kprobe(p->addr);
 	if (old_p) {
 		/* Since this may unoptimize 'old_p', locking 'text_mutex'. */
-- 
2.37.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
  2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
@ 2022-08-11  9:15 ` Jiri Olsa
  2022-08-15 10:13   ` Peter Zijlstra
  2022-09-08 12:35   ` Masami Hiramatsu
  2022-08-11  9:15 ` [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg Jiri Olsa
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

Keeping the resolved 'addr' in kallsyms_callback, instead of taking
ftrace_location value, because we depend on symbol address in the
cookie related code.

With CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
from symbol address, which screws up the symbol address cookie matching.

There are 2 users of this function:
- bpf_kprobe_multi_link_attach
    which this fix is for

- get_ftrace_locations
    which is used by register_fprobe_syms

    this function needs to get symbols resolved to addresses,
    but does not need 'ftrace location addresses' at this point;
    there's another ftrace location translation done later in the
    path by the ftrace_set_filter_ips call:

     register_fprobe_syms
       addrs = get_ftrace_locations

       register_fprobe_ips(addrs)
         ...
         ftrace_set_filter_ips
           ...
             __ftrace_match_addr
               ip = ftrace_location(ip);
               ...

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/ftrace.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index bc921a3f7ea8..8a8c90d1a387 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -8268,8 +8268,7 @@ static int kallsyms_callback(void *data, const char *name,
 	if (args->addrs[idx])
 		return 0;
 
-	addr = ftrace_location(addr);
-	if (!addr)
+	if (!ftrace_location(addr))
 		return 0;
 
 	args->addrs[idx] = addr;
-- 
2.37.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
  2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
  2022-08-11  9:15 ` [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback Jiri Olsa
@ 2022-08-11  9:15 ` Jiri Olsa
  2022-08-15 10:18   ` Peter Zijlstra
  2022-08-11  9:15 ` [PATCHv2 bpf-next 4/6] bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT Jiri Olsa
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

Using function address given at the generation time as the trampoline
ip argument. This way we get directly the function address that we
need, so we don't need to:
  - read the ip from the stack
  - subtract X86_PATCH_SIZE
  - subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled
    which is not even implemented yet ;-)

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index c1f6c1c51d99..0194880895aa 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -2013,13 +2013,14 @@ static int invoke_bpf_mod_ret(const struct btf_func_model *m, u8 **pprog,
 int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end,
 				const struct btf_func_model *m, u32 flags,
 				struct bpf_tramp_links *tlinks,
-				void *orig_call)
+				void *func_addr)
 {
 	int ret, i, nr_args = m->nr_args;
 	int regs_off, ip_off, args_off, stack_size = nr_args * 8, run_ctx_off;
 	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
 	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
 	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+	void *orig_call = func_addr;
 	u8 **branches = NULL;
 	u8 *prog;
 	bool save_ret;
@@ -2092,12 +2093,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 
 	if (flags & BPF_TRAMP_F_IP_ARG) {
 		/* Store IP address of the traced function:
-		 * mov rax, QWORD PTR [rbp + 8]
-		 * sub rax, X86_PATCH_SIZE
+		 * mov rax, func_addr
 		 * mov QWORD PTR [rbp - ip_off], rax
 		 */
-		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
-		EMIT4(0x48, 0x83, 0xe8, X86_PATCH_SIZE);
+		emit_mov_imm64(&prog, BPF_REG_0, (long) func_addr >> 32, (u32) (long) func_addr);
 		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off);
 	}
 
-- 
2.37.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCHv2 bpf-next 4/6] bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
                   ` (2 preceding siblings ...)
  2022-08-11  9:15 ` [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg Jiri Olsa
@ 2022-08-11  9:15 ` Jiri Olsa
  2022-08-11  9:15 ` [PATCHv2 bpf-next 5/6] bpf: Return value in kprobe get_func_ip only for entry address Jiri Olsa
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Peter Zijlstra, Martynas Pumputis, bpf, Martin KaFai Lau,
	Song Liu, Yonghong Song, John Fastabend, KP Singh,
	Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

Martynas reported bpf_get_func_ip returning an address shifted by +4
when the CONFIG_X86_KERNEL_IBT option is enabled.

When CONFIG_X86_KERNEL_IBT is enabled we'll have an endbr instruction
at the function entry, which screws up the return value of the
bpf_get_func_ip() helper that should return the function address.

There's a short-term workaround for the kprobe_multi bpf program made
by Alexei [1], but we also need this fixup for bpf_get_attach_cookie,
which returns the cookie based on the entry_ip value.

Moving the fixup into the fprobe handler, so both bpf_get_func_ip
and bpf_get_attach_cookie get the expected function address when the
CONFIG_X86_KERNEL_IBT option is enabled.

[1] commit 7f0059b58f02 ("selftests/bpf: Fix kprobe_multi test.")

Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Martynas Pumputis <m@lambda.lt>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c                         | 4 ++++
 tools/testing/selftests/bpf/progs/kprobe_multi.c | 4 +---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 68e5cdd24cef..bcada91b0b3b 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2419,6 +2419,10 @@ kprobe_multi_link_handler(struct fprobe *fp, unsigned long entry_ip,
 {
 	struct bpf_kprobe_multi_link *link;
 
+#ifdef CONFIG_X86_KERNEL_IBT
+	if (is_endbr(*((u32 *) entry_ip - 1)))
+		entry_ip -= ENDBR_INSN_SIZE;
+#endif
 	link = container_of(fp, struct bpf_kprobe_multi_link, fp);
 	kprobe_multi_link_prog_run(link, entry_ip, regs);
 }
diff --git a/tools/testing/selftests/bpf/progs/kprobe_multi.c b/tools/testing/selftests/bpf/progs/kprobe_multi.c
index 08f95a8155d1..98c3399e15c0 100644
--- a/tools/testing/selftests/bpf/progs/kprobe_multi.c
+++ b/tools/testing/selftests/bpf/progs/kprobe_multi.c
@@ -36,15 +36,13 @@ __u64 kretprobe_test6_result = 0;
 __u64 kretprobe_test7_result = 0;
 __u64 kretprobe_test8_result = 0;
 
-extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
-
 static void kprobe_multi_check(void *ctx, bool is_return)
 {
 	if (bpf_get_current_pid_tgid() >> 32 != pid)
 		return;
 
 	__u64 cookie = test_cookie ? bpf_get_attach_cookie(ctx) : 0;
-	__u64 addr = bpf_get_func_ip(ctx) - (CONFIG_X86_KERNEL_IBT ? 4 : 0);
+	__u64 addr = bpf_get_func_ip(ctx);
 
 #define SET(__var, __addr, __cookie) ({			\
 	if (((const void *) addr == __addr) &&		\
-- 
2.37.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCHv2 bpf-next 5/6] bpf: Return value in kprobe get_func_ip only for entry address
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
                   ` (3 preceding siblings ...)
  2022-08-11  9:15 ` [PATCHv2 bpf-next 4/6] bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT Jiri Olsa
@ 2022-08-11  9:15 ` Jiri Olsa
  2022-08-11  9:15 ` [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT Jiri Olsa
  2022-08-16  3:27 ` [PATCHv2 bpf-next 0/6] bpf: Fixes " Andrii Nakryiko
  6 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

Changing the kprobe version of bpf_get_func_ip to return zero
if the attach address is not on the function's entry point.

For kprobes attached in the middle of the function we can't easily
get to the function address, especially now with the
CONFIG_X86_KERNEL_IBT support.

If the user cares about the current IP for kprobes attached within
the function body, they can get it with PT_REGS_IP(ctx).
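
A minimal sketch of that alternative for a body-attached kprobe
(hypothetical program, not part of this series):

  SEC("kprobe/some_kernel_func+0x10")
  int body_probe(struct pt_regs *ctx)
  {
  	/* ip at the probed offset, not the function entry */
  	__u64 ip = PT_REGS_IP(ctx);

  	bpf_printk("probed ip: %llx", ip);
  	return 0;
  }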

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 kernel/trace/bpf_trace.c                             | 11 ++++++++++-
 tools/testing/selftests/bpf/progs/get_func_ip_test.c |  4 ++--
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index bcada91b0b3b..027abc38faab 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1029,8 +1029,17 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
 BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
 {
 	struct kprobe *kp = kprobe_running();
+	uintptr_t addr;
 
-	return kp ? (uintptr_t)kp->addr : 0;
+	if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
+		return 0;
+
+	addr = (uintptr_t)kp->addr;
+#ifdef CONFIG_X86_KERNEL_IBT
+	if (is_endbr(*((u32 *) addr - 1)))
+		addr -= ENDBR_INSN_SIZE;
+#endif
+	return addr;
 }
 
 static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
index a587aeca5ae0..6db70757bc8b 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -69,7 +69,7 @@ int test6(struct pt_regs *ctx)
 {
 	__u64 addr = bpf_get_func_ip(ctx);
 
-	test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
+	test6_result = (const void *) addr == 0;
 	return 0;
 }
 
@@ -79,6 +79,6 @@ int test7(struct pt_regs *ctx)
 {
 	__u64 addr = bpf_get_func_ip(ctx);
 
-	test7_result = (const void *) addr == &bpf_fentry_test7 + 5;
+	test7_result = (const void *) addr == 0;
 	return 0;
 }
-- 
2.37.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
                   ` (4 preceding siblings ...)
  2022-08-11  9:15 ` [PATCHv2 bpf-next 5/6] bpf: Return value in kprobe get_func_ip only for entry address Jiri Olsa
@ 2022-08-11  9:15 ` Jiri Olsa
  2022-08-16  3:25   ` Andrii Nakryiko
  2022-08-16  3:27 ` [PATCHv2 bpf-next 0/6] bpf: Fixes " Andrii Nakryiko
  6 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-11  9:15 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

With CONFIG_X86_KERNEL_IBT enabled the test for kprobe with offset
won't work because of the extra endbr instruction.

As suggested by Andrii adding CONFIG_X86_KERNEL_IBT detection
and using appropriate offset value based on that.

Also removing test7 program, because it does the same as test6.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c         | 62 +++++++++++++++----
 .../selftests/bpf/progs/get_func_ip_test.c    | 20 +++---
 2 files changed, 60 insertions(+), 22 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index 938dbd4d7c2f..a4dab2fa2258 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -2,7 +2,9 @@
 #include <test_progs.h>
 #include "get_func_ip_test.skel.h"
 
-void test_get_func_ip_test(void)
+static int config_ibt;
+
+static void test_function_entry(void)
 {
 	struct get_func_ip_test *skel = NULL;
 	int err, prog_fd;
@@ -12,14 +14,6 @@ void test_get_func_ip_test(void)
 	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
 		return;
 
-	/* test6 is x86_64 specifc because of the instruction
-	 * offset, disabling it for all other archs
-	 */
-#ifndef __x86_64__
-	bpf_program__set_autoload(skel->progs.test6, false);
-	bpf_program__set_autoload(skel->progs.test7, false);
-#endif
-
 	err = get_func_ip_test__load(skel);
 	if (!ASSERT_OK(err, "get_func_ip_test__load"))
 		goto cleanup;
@@ -38,16 +32,62 @@ void test_get_func_ip_test(void)
 
 	ASSERT_OK(err, "test_run");
 
+	config_ibt = skel->bss->config_ibt;
+	ASSERT_TRUE(config_ibt == 0 || config_ibt == 1, "config_ibt");
+	printf("%s:config_ibt %d\n", __func__, config_ibt);
+
 	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
 	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
 	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
 	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
 	ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
+
+cleanup:
+	get_func_ip_test__destroy(skel);
+}
+
 #ifdef __x86_64__
+static void test_function_body(void)
+{
+	struct get_func_ip_test *skel = NULL;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	LIBBPF_OPTS(bpf_kprobe_opts, kopts);
+	struct bpf_link *link6 = NULL;
+	int err, prog_fd;
+
+	skel = get_func_ip_test__open();
+	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
+		return;
+
+	bpf_program__set_autoload(skel->progs.test6, true);
+
+	err = get_func_ip_test__load(skel);
+	if (!ASSERT_OK(err, "get_func_ip_test__load"))
+		goto cleanup;
+
+	kopts.offset = config_ibt ? 9 : 5;
+
+	link6 = bpf_program__attach_kprobe_opts(skel->progs.test6, "bpf_fentry_test6", &kopts);
+	if (!ASSERT_OK_PTR(link6, "link6"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run");
+
 	ASSERT_EQ(skel->bss->test6_result, 1, "test6_result");
-	ASSERT_EQ(skel->bss->test7_result, 1, "test7_result");
-#endif
 
 cleanup:
+	bpf_link__destroy(link6);
 	get_func_ip_test__destroy(skel);
 }
+#else
+#define test_function_body(arg)
+#endif
+
+void test_get_func_ip_test(void)
+{
+	test_function_entry();
+	test_function_body();
+}
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
index 6db70757bc8b..cb8e58183d46 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -2,6 +2,7 @@
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
+#include <stdbool.h>
 
 char _license[] SEC("license") = "GPL";
 
@@ -13,12 +14,19 @@ extern const void bpf_modify_return_test __ksym;
 extern const void bpf_fentry_test6 __ksym;
 extern const void bpf_fentry_test7 __ksym;
 
+extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
+
+bool config_ibt;
+
 __u64 test1_result = 0;
 SEC("fentry/bpf_fentry_test1")
 int BPF_PROG(test1, int a)
 {
 	__u64 addr = bpf_get_func_ip(ctx);
 
+	/* just to propagate config option value to user space */
+	config_ibt = CONFIG_X86_KERNEL_IBT;
+
 	test1_result = (const void *) addr == &bpf_fentry_test1;
 	return 0;
 }
@@ -64,7 +72,7 @@ int BPF_PROG(test5, int a, int *b, int ret)
 }
 
 __u64 test6_result = 0;
-SEC("kprobe/bpf_fentry_test6+0x5")
+SEC("?kprobe/")
 int test6(struct pt_regs *ctx)
 {
 	__u64 addr = bpf_get_func_ip(ctx);
@@ -72,13 +80,3 @@ int test6(struct pt_regs *ctx)
 	test6_result = (const void *) addr == 0;
 	return 0;
 }
-
-__u64 test7_result = 0;
-SEC("kprobe/bpf_fentry_test7+5")
-int test7(struct pt_regs *ctx)
-{
-	__u64 addr = bpf_get_func_ip(ctx);
-
-	test7_result = (const void *) addr == 0;
-	return 0;
-}
-- 
2.37.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
@ 2022-08-15  9:57   ` Peter Zijlstra
  2022-08-15 10:10     ` Jiri Olsa
  2022-08-30 15:08   ` Jiri Olsa
  2022-09-08 12:01   ` Masami Hiramatsu
  2 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2022-08-15  9:57 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Thu, Aug 11, 2022 at 11:15:21AM +0200, Jiri Olsa wrote:
> Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> attach address is on function entry. This is used in following
> changes in get_func_ip helper to return correct function address.

IIRC (and I've not digested patch) the intent was to have func+0 mean
this. x86-IBT is not the only case where this applies, there are
multiple architectures where function entry is not +0.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-15  9:57   ` Peter Zijlstra
@ 2022-08-15 10:10     ` Jiri Olsa
  2022-08-15 12:40       ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-15 10:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 11:57:40AM +0200, Peter Zijlstra wrote:
> On Thu, Aug 11, 2022 at 11:15:21AM +0200, Jiri Olsa wrote:
> > Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> > attach address is on function entry. This is used in following
> > changes in get_func_ip helper to return correct function address.
> 
> IIRC (and I've not digested patch) the intent was to have func+0 mean
> this. x86-IBT is not the only case where this applies, there are
> multiple architectures where function entry is not +0.

we can have kprobe created by user passing just the address

in this case _kprobe_addr still computes the address's offset
from the symbol but does not store it back to 'struct kprobe'

jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-08-11  9:15 ` [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback Jiri Olsa
@ 2022-08-15 10:13   ` Peter Zijlstra
  2022-08-15 10:30     ` Jiri Olsa
  2022-09-08 12:35   ` Masami Hiramatsu
  1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2022-08-15 10:13 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Thu, Aug 11, 2022 at 11:15:22AM +0200, Jiri Olsa wrote:
> Keeping the resolved 'addr' in kallsyms_callback, instead of taking
> ftrace_location value, because we depend on symbol address in the
> cookie related code.
> 
> With CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
> from symbol address, which screws up the symbol address cookie matching.
> 
> There are 2 users of this function:
> - bpf_kprobe_multi_link_attach
>     which this fix is for

Except you fail to explain what the problem is and how this helps
anything.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg
  2022-08-11  9:15 ` [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg Jiri Olsa
@ 2022-08-15 10:18   ` Peter Zijlstra
  2022-08-15 10:57     ` Jiri Olsa
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2022-08-15 10:18 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Thu, Aug 11, 2022 at 11:15:23AM +0200, Jiri Olsa wrote:
> Using function address given at the generation time as the trampoline
> ip argument. This way we get directly the function address that we
> need, so we don't need to:
>   - read the ip from the stack
>   - subtract X86_PATCH_SIZE
>   - subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled
>     which is not even implemented yet ;-)

Can you please tell me what all this does and why?


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-08-15 10:13   ` Peter Zijlstra
@ 2022-08-15 10:30     ` Jiri Olsa
  2022-08-15 12:45       ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-15 10:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 12:13:39PM +0200, Peter Zijlstra wrote:
> On Thu, Aug 11, 2022 at 11:15:22AM +0200, Jiri Olsa wrote:
> > Keeping the resolved 'addr' in kallsyms_callback, instead of taking
> > ftrace_location value, because we depend on symbol address in the
> > cookie related code.
> > 
> > With CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
> > from symbol address, which screws up the symbol address cookie matching.
> > 
> > There are 2 users of this function:
> > - bpf_kprobe_multi_link_attach
> >     which this fix is for
> 
> Except you fail to explain what the problem is and how this helps
> anything.

we search this array of resolved addresses later in cookie code
(bpf_kprobe_multi_cookie) for address returned by fprobe, which
is not 'ftrace_location' address

so we want ftrace_lookup_symbols to return 'only' resolved address
at this point, not 'ftrace_location' address
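
roughly, the cookie code then does this lookup against the array that
ftrace_lookup_symbols filled in (simplified sketch, names approximate):

  	/* entry_ip comes from the fprobe handler, link->addrs holds
  	 * the addresses resolved by ftrace_lookup_symbols
  	 */
  	addr = bsearch(&entry_ip, link->addrs, link->cnt,
  		       sizeof(entry_ip), addrs_cmp);
  	if (!addr)
  		return 0;
  	return link->cookies[addr - link->addrs];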

jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg
  2022-08-15 10:18   ` Peter Zijlstra
@ 2022-08-15 10:57     ` Jiri Olsa
  2022-08-15 12:51       ` Peter Zijlstra
  0 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-15 10:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 12:18:38PM +0200, Peter Zijlstra wrote:
> On Thu, Aug 11, 2022 at 11:15:23AM +0200, Jiri Olsa wrote:
> > Using function address given at the generation time as the trampoline
> > ip argument. This way we get directly the function address that we
> > need, so we don't need to:
> >   - read the ip from the stack
> >   - subtract X86_PATCH_SIZE
> >   - subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled
> >     which is not even implemented yet ;-)
> 
> Can you please tell me what all this does and why?
> 

arch_prepare_bpf_trampoline prepares bpf trampoline for given function
specified by 'func_addr' argument

the changed code is storing/preparing caller's 'ip' address on the
trampoline's stack so the get_func_ip helper can use it

currently the trampoline code gets the caller's ip address by reading
caller's return address from stack and subtracting X86_PATCH_SIZE from
it

the change uses 'func_addr' as caller's 'ip' address when trampoline is
generated .. this way we don't need to retrieve the return address from
stack and care about endbr instruction if IBT is enabled
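
concretely, the generated trampoline code changes roughly like this
(taken from the comments in the patch 3 diff):

  	/* before: recover the function ip from the return address */
  	mov rax, QWORD PTR [rbp + 8]
  	sub rax, X86_PATCH_SIZE
  	mov QWORD PTR [rbp - ip_off], rax

  	/* after: embed the known func_addr as an immediate */
  	mov rax, func_addr
  	mov QWORD PTR [rbp - ip_off], rax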

jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-15 10:10     ` Jiri Olsa
@ 2022-08-15 12:40       ` Peter Zijlstra
  2022-08-16  7:01         ` Jiri Olsa
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2022-08-15 12:40 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 12:10:20PM +0200, Jiri Olsa wrote:
> On Mon, Aug 15, 2022 at 11:57:40AM +0200, Peter Zijlstra wrote:
> > On Thu, Aug 11, 2022 at 11:15:21AM +0200, Jiri Olsa wrote:
> > > Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> > > attach address is on function entry. This is used in following
> > > changes in get_func_ip helper to return correct function address.
> > 
> > IIRC (and I've not digested patch) the intent was to have func+0 mean
> > this. x86-IBT is not the only case where this applies, there are
> > multiple architectures where function entry is not +0.
> 
> we can have kprobe created by user passing just the address
> 
> in this case _kprobe_addr still computes the address's offset
> from the symbol but does not store it back to 'struct kprobe'

Ah, this is an internal thing to record, in the struct kprobe if the
thing is on the function entry or not?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-08-15 10:30     ` Jiri Olsa
@ 2022-08-15 12:45       ` Peter Zijlstra
  2022-08-16  7:06         ` Jiri Olsa
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2022-08-15 12:45 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 12:30:46PM +0200, Jiri Olsa wrote:
> On Mon, Aug 15, 2022 at 12:13:39PM +0200, Peter Zijlstra wrote:
> > On Thu, Aug 11, 2022 at 11:15:22AM +0200, Jiri Olsa wrote:
> > > Keeping the resolved 'addr' in kallsyms_callback, instead of taking
> > > ftrace_location value, because we depend on symbol address in the
> > > cookie related code.
> > > 
> > > With CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
> > > > from symbol address, which screws up the symbol address cookie matching.
> > > 
> > > There are 2 users of this function:
> > > - bpf_kprobe_multi_link_attach
> > >     which this fix is for
> > 
> > Except you fail to explain what the problem is and how this helps
> > anything.
> 
> we search this array of resolved addresses later in cookie code
> (bpf_kprobe_multi_cookie) for address returned by fprobe, which
> is not 'ftrace_location' address

What is fprobe?

> so we want ftrace_lookup_symbols to return 'only' resolved address
> at this point, not 'ftrace_location' address

In general; I'm completely confused what any of this code is doing.
Mostly I don't speak BPF *at*all*. And have very little clue as to what
things are supposed to do, please help me along.

But the thing is, we're likely going to change all this (function call
abi) again in the very near future; it would be very nice if all this
code could grow some what/why comments, because I've gotten lost
multiple times in all this.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg
  2022-08-15 10:57     ` Jiri Olsa
@ 2022-08-15 12:51       ` Peter Zijlstra
  2022-08-17 13:40         ` Jiri Olsa
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2022-08-15 12:51 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 12:57:39PM +0200, Jiri Olsa wrote:
> On Mon, Aug 15, 2022 at 12:18:38PM +0200, Peter Zijlstra wrote:
> > On Thu, Aug 11, 2022 at 11:15:23AM +0200, Jiri Olsa wrote:
> > > Using function address given at the generation time as the trampoline
> > > ip argument. This way we get directly the function address that we
> > > need, so we don't need to:
> > >   - read the ip from the stack
> > >   - subtract X86_PATCH_SIZE
> > >   - subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled
> > >     which is not even implemented yet ;-)
> > 
> > Can you please tell me what all this does and why?
> > 
> 
> arch_prepare_bpf_trampoline prepares bpf trampoline for given function
> specified by 'func_addr' argument

The bpf trampoline is what's used for ftrace direct call, no?

> the changed code is storing/preparing caller's 'ip' address on the
> trampoline's stack so the get_func_ip helper can use it

I've no idea what get_func_ip() helper is...

> currently the trampoline code gets the caller's ip address by reading
> caller's return address from stack and subtracting X86_PATCH_SIZE from
> it
> 
> the change uses 'func_addr' as caller's 'ip' address when trampoline is
> generated .. this way we don't need to retrieve the return address from
> stack and care about endbr instruction if IBT is enabled

Ok, I *think* I sorta understand that.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
  2022-08-11  9:15 ` [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT Jiri Olsa
@ 2022-08-16  3:25   ` Andrii Nakryiko
  2022-08-16  7:00     ` Jiri Olsa
  0 siblings, 1 reply; 30+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  3:25 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

On Thu, Aug 11, 2022 at 2:16 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> With CONFIG_X86_KERNEL_IBT enabled the test for kprobe with offset
> won't work because of the extra endbr instruction.
>
> As suggested by Andrii adding CONFIG_X86_KERNEL_IBT detection
> and using appropriate offset value based on that.
>
> Also removing test7 program, because it does the same as test6.
>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  .../bpf/prog_tests/get_func_ip_test.c         | 62 +++++++++++++++----
>  .../selftests/bpf/progs/get_func_ip_test.c    | 20 +++---
>  2 files changed, 60 insertions(+), 22 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> index 938dbd4d7c2f..a4dab2fa2258 100644
> --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> @@ -2,7 +2,9 @@
>  #include <test_progs.h>
>  #include "get_func_ip_test.skel.h"
>
> -void test_get_func_ip_test(void)
> +static int config_ibt;
> +
> +static void test_function_entry(void)
>  {
>         struct get_func_ip_test *skel = NULL;
>         int err, prog_fd;
> @@ -12,14 +14,6 @@ void test_get_func_ip_test(void)
>         if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
>                 return;
>
> -       /* test6 is x86_64 specifc because of the instruction
> -        * offset, disabling it for all other archs
> -        */
> -#ifndef __x86_64__
> -       bpf_program__set_autoload(skel->progs.test6, false);
> -       bpf_program__set_autoload(skel->progs.test7, false);
> -#endif
> -
>         err = get_func_ip_test__load(skel);
>         if (!ASSERT_OK(err, "get_func_ip_test__load"))
>                 goto cleanup;
> @@ -38,16 +32,62 @@ void test_get_func_ip_test(void)
>
>         ASSERT_OK(err, "test_run");
>
> +       config_ibt = skel->bss->config_ibt;

skel->bss->config_ibt isn't actually necessary, you can just check
skel->kconfig->CONFIG_X86_KERNEL_IBT directly. And you won't need to
trigger BPF program unnecessarily, libbpf will fill out kconfig section
during object/skeleton load phase.
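
Something like this in the prog_tests part should be enough (sketch,
assuming the generated skeleton exposes the kconfig externs via
skel->kconfig):

  	/* filled by libbpf at load time, no program run needed */
  	bool ibt = skel->kconfig->CONFIG_X86_KERNEL_IBT;

  	kopts.offset = ibt ? 9 : 5;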

> +       ASSERT_TRUE(config_ibt == 0 || config_ibt == 1, "config_ibt");

you won't need this anymore

> +       printf("%s:config_ibt %d\n", __func__, config_ibt);

and this is just debug leftover

> +
>         ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
>         ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
>         ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
>         ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
>         ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
> +

[...]

> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> index 6db70757bc8b..cb8e58183d46 100644
> --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> @@ -2,6 +2,7 @@
>  #include <linux/bpf.h>
>  #include <bpf/bpf_helpers.h>
>  #include <bpf/bpf_tracing.h>
> +#include <stdbool.h>
>
>  char _license[] SEC("license") = "GPL";
>
> @@ -13,12 +14,19 @@ extern const void bpf_modify_return_test __ksym;
>  extern const void bpf_fentry_test6 __ksym;
>  extern const void bpf_fentry_test7 __ksym;
>
> +extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
> +
> +bool config_ibt;
> +
>  __u64 test1_result = 0;
>  SEC("fentry/bpf_fentry_test1")
>  int BPF_PROG(test1, int a)
>  {
>         __u64 addr = bpf_get_func_ip(ctx);
>
> +       /* just to propagate config option value to user space */
> +       config_ibt = CONFIG_X86_KERNEL_IBT;
> +

as mentioned above, you shouldn't need it, just read
CONFIG_X86_KERNEL_IBT directly through skeleton

>         test1_result = (const void *) addr == &bpf_fentry_test1;
>         return 0;
>  }
> @@ -64,7 +72,7 @@ int BPF_PROG(test5, int a, int *b, int ret)
>  }
>
>  __u64 test6_result = 0;
> -SEC("kprobe/bpf_fentry_test6+0x5")
> +SEC("?kprobe/")

don't leave / at the end (and I thought that libbpf rejects this,
isn't that a case?...), just SEC("?kprobe")


>  int test6(struct pt_regs *ctx)
>  {
>         __u64 addr = bpf_get_func_ip(ctx);
> @@ -72,13 +80,3 @@ int test6(struct pt_regs *ctx)
>         test6_result = (const void *) addr == 0;
>         return 0;
>  }
> -
> -__u64 test7_result = 0;
> -SEC("kprobe/bpf_fentry_test7+5")
> -int test7(struct pt_regs *ctx)
> -{
> -       __u64 addr = bpf_get_func_ip(ctx);
> -
> -       test7_result = (const void *) addr == 0;
> -       return 0;
> -}
> --
> 2.37.1
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT
  2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
                   ` (5 preceding siblings ...)
  2022-08-11  9:15 ` [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT Jiri Olsa
@ 2022-08-16  3:27 ` Andrii Nakryiko
  6 siblings, 0 replies; 30+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  3:27 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

On Thu, Aug 11, 2022 at 2:15 AM Jiri Olsa <jolsa@kernel.org> wrote:
>
> hi,
> Martynas reported bpf_get_func_ip returning an address shifted by +4
> when the CONFIG_X86_KERNEL_IBT option is enabled, and I found some
> bpf tests failing when this option is enabled.
>
> The CONFIG_X86_KERNEL_IBT option adds an endbr instruction at the
> function entry, so the idea is to 'fix' the entry ip for kprobe_multi
> and trampoline probes, because they are placed on the function
> entry.
>
> v2 changes:
>   - change kprobes get_func_ip to return zero for kprobes
>     attached within the function body [Andrii]
>   - detect IBT config and properly test kprobe with offset
>     [Andrii]
>
> v1 changes:
>   - read previous instruction in kprobe_multi link handler
>     and adjust entry_ip for CONFIG_X86_KERNEL_IBT option
>   - split first patch into 2 separate changes
>   - update changelogs
>
> thanks,
> jirka
>
>
> ---
> Jiri Olsa (6):
>       kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
>       ftrace: Keep the resolved addr in kallsyms_callback
>       bpf: Use given function address for trampoline ip arg
>       bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT
>       bpf: Return value in kprobe get_func_ip only for entry address
>       selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
>
>  arch/x86/net/bpf_jit_comp.c                               |  9 ++++-----
>  include/linux/kprobes.h                                   |  1 +
>  kernel/kprobes.c                                          |  6 +++++-
>  kernel/trace/bpf_trace.c                                  | 15 ++++++++++++++-
>  kernel/trace/ftrace.c                                     |  3 +--
>  tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++-----------
>  tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 22 ++++++++++------------
>  tools/testing/selftests/bpf/progs/kprobe_multi.c          |  4 +---
>  8 files changed, 87 insertions(+), 35 deletions(-)

Overall LGTM, please address Peter's questions and request for some
more comments and context. Few nits to simplify selftests further, but
looks great. Thanks!

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
  2022-08-16  3:25   ` Andrii Nakryiko
@ 2022-08-16  7:00     ` Jiri Olsa
  2022-08-16 10:18       ` Jiri Olsa
  2022-08-16 17:19       ` Andrii Nakryiko
  0 siblings, 2 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-16  7:00 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 08:25:51PM -0700, Andrii Nakryiko wrote:
> On Thu, Aug 11, 2022 at 2:16 AM Jiri Olsa <jolsa@kernel.org> wrote:
> >
> > With CONFIG_X86_KERNEL_IBT enabled the test for kprobe with offset
> > won't work because of the extra endbr instruction.
> >
> > As suggested by Andrii adding CONFIG_X86_KERNEL_IBT detection
> > and using appropriate offset value based on that.
> >
> > Also removing test7 program, because it does the same as test6.
> >
> > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  .../bpf/prog_tests/get_func_ip_test.c         | 62 +++++++++++++++----
> >  .../selftests/bpf/progs/get_func_ip_test.c    | 20 +++---
> >  2 files changed, 60 insertions(+), 22 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > index 938dbd4d7c2f..a4dab2fa2258 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > @@ -2,7 +2,9 @@
> >  #include <test_progs.h>
> >  #include "get_func_ip_test.skel.h"
> >
> > -void test_get_func_ip_test(void)
> > +static int config_ibt;
> > +
> > +static void test_function_entry(void)
> >  {
> >         struct get_func_ip_test *skel = NULL;
> >         int err, prog_fd;
> > @@ -12,14 +14,6 @@ void test_get_func_ip_test(void)
> >         if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
> >                 return;
> >
> > -       /* test6 is x86_64 specifc because of the instruction
> > -        * offset, disabling it for all other archs
> > -        */
> > -#ifndef __x86_64__
> > -       bpf_program__set_autoload(skel->progs.test6, false);
> > -       bpf_program__set_autoload(skel->progs.test7, false);
> > -#endif
> > -
> >         err = get_func_ip_test__load(skel);
> >         if (!ASSERT_OK(err, "get_func_ip_test__load"))
> >                 goto cleanup;
> > @@ -38,16 +32,62 @@ void test_get_func_ip_test(void)
> >
> >         ASSERT_OK(err, "test_run");
> >
> > +       config_ibt = skel->bss->config_ibt;
> 
> skel->bss->config_ibt isn't actually necessary, you can just check
> skel->kconfig->CONFIG_X86_KERNEL_IBT directly. And you won't need to
> > trigger BPF program unnecessarily, libbpf will fill out kconfig section
> during object/skeleton load phase.

nice, did not know that ;-) will remove it

> 
> > +       ASSERT_TRUE(config_ibt == 0 || config_ibt == 1, "config_ibt");
> 
> you won't need this anymore
> 
> > +       printf("%s:config_ibt %d\n", __func__, config_ibt);
> 
> and this is just debug leftover

it's intentional to find out quickly what config we are failing on

> 
> > +
> >         ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
> >         ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
> >         ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
> >         ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
> >         ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
> > +
> 
> [...]
> 
> > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > index 6db70757bc8b..cb8e58183d46 100644
> > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > @@ -2,6 +2,7 @@
> >  #include <linux/bpf.h>
> >  #include <bpf/bpf_helpers.h>
> >  #include <bpf/bpf_tracing.h>
> > +#include <stdbool.h>
> >
> >  char _license[] SEC("license") = "GPL";
> >
> > @@ -13,12 +14,19 @@ extern const void bpf_modify_return_test __ksym;
> >  extern const void bpf_fentry_test6 __ksym;
> >  extern const void bpf_fentry_test7 __ksym;
> >
> > +extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
> > +
> > +bool config_ibt;
> > +
> >  __u64 test1_result = 0;
> >  SEC("fentry/bpf_fentry_test1")
> >  int BPF_PROG(test1, int a)
> >  {
> >         __u64 addr = bpf_get_func_ip(ctx);
> >
> > +       /* just to propagate config option value to user space */
> > +       config_ibt = CONFIG_X86_KERNEL_IBT;
> > +
> 
> as mentioned above, you shouldn't need it, just read
> CONFIG_X86_KERNEL_IBT directly through skeleton
> 
> >         test1_result = (const void *) addr == &bpf_fentry_test1;
> >         return 0;
> >  }
> > @@ -64,7 +72,7 @@ int BPF_PROG(test5, int a, int *b, int ret)
> >  }
> >
> >  __u64 test6_result = 0;
> > -SEC("kprobe/bpf_fentry_test6+0x5")
> > +SEC("?kprobe/")
> 
> don't leave / at the end (and I thought that libbpf rejects this,
> isn't that a case?...), just SEC("?kprobe")

yes, will remove

thanks,
jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-15 12:40       ` Peter Zijlstra
@ 2022-08-16  7:01         ` Jiri Olsa
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-16  7:01 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 02:40:29PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 15, 2022 at 12:10:20PM +0200, Jiri Olsa wrote:
> > On Mon, Aug 15, 2022 at 11:57:40AM +0200, Peter Zijlstra wrote:
> > > On Thu, Aug 11, 2022 at 11:15:21AM +0200, Jiri Olsa wrote:
> > > > Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> > > > attach address is on function entry. This is used in following
> > > > changes in get_func_ip helper to return correct function address.
> > > 
> > > IIRC (and I've not digested patch) the intent was to have func+0 mean
> > > this. x86-IBT is not the only case where this applies, there are
> > > multiple architectures where function entry is not +0.
> > 
> > we can have kprobe created by user passing just the address
> > 
> > in this case _kprobe_addr still computes the address's offset
> > from the symbol but does not store it back to 'struct kprobe'
> 
> Ah, this is an internal thing to record, in the struct kprobe if the
> thing is on the function entry or not?

yes, exactly.. it's flag saying the kprobe is on function entry

jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-08-15 12:45       ` Peter Zijlstra
@ 2022-08-16  7:06         ` Jiri Olsa
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-16  7:06 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 02:45:54PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 15, 2022 at 12:30:46PM +0200, Jiri Olsa wrote:
> > On Mon, Aug 15, 2022 at 12:13:39PM +0200, Peter Zijlstra wrote:
> > > On Thu, Aug 11, 2022 at 11:15:22AM +0200, Jiri Olsa wrote:
> > > > Keeping the resolved 'addr' in kallsyms_callback, instead of taking
> > > > ftrace_location value, because we depend on symbol address in the
> > > > cookie related code.
> > > > 
> > > > With CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
> > > > from symbol address, which screws up the symbol address cookie matching.
> > > > 
> > > > There are 2 users of this function:
> > > > - bpf_kprobe_multi_link_attach
> > > >     which this fix is for
> > > 
> > > Except you fail to explain what the problem is and how this helps
> > > anything.
> > 
> > we search this array of resolved addresses later in cookie code
> > (bpf_kprobe_multi_cookie) for address returned by fprobe, which
> > is not 'ftrace_location' address
> 
> What is fprobe?

https://lore.kernel.org/bpf/164800288611.1716332.7053663723617614668.stgit@devnote2/

> 
> > so we want ftrace_lookup_symbols to return 'only' resolved address
> > at this point, not 'ftrace_location' address
> 
> In general; I'm completely confused what any of this code is doing.
> Mostly I don't speak BPF *at*all*. And have very little clue as to what
> things are supposed to do, please help me along.
> 
> But the thing is, we're likely going to change all this (function call
> abi) again in the very near future; it would be very nice if all this
> code could grow some what/why comments, because I've gotten lost
> multiple times in all this.

is there any outline of the change? is this the change in your x86/fineibt
branch that you brought up in the other thread? 

jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
  2022-08-16  7:00     ` Jiri Olsa
@ 2022-08-16 10:18       ` Jiri Olsa
  2022-08-16 17:28         ` Andrii Nakryiko
  2022-08-16 17:19       ` Andrii Nakryiko
  1 sibling, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-16 10:18 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
	Peter Zijlstra, Masami Hiramatsu (Google)

On Tue, Aug 16, 2022 at 09:00:20AM +0200, Jiri Olsa wrote:
> On Mon, Aug 15, 2022 at 08:25:51PM -0700, Andrii Nakryiko wrote:
> > On Thu, Aug 11, 2022 at 2:16 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > With CONFIG_X86_KERNEL_IBT enabled the test for kprobe with offset
> > > won't work because of the extra endbr instruction.
> > >
> > > As suggested by Andrii adding CONFIG_X86_KERNEL_IBT detection
> > > and using appropriate offset value based on that.
> > >
> > > Also removing test7 program, because it does the same as test6.
> > >
> > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  .../bpf/prog_tests/get_func_ip_test.c         | 62 +++++++++++++++----
> > >  .../selftests/bpf/progs/get_func_ip_test.c    | 20 +++---
> > >  2 files changed, 60 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > index 938dbd4d7c2f..a4dab2fa2258 100644
> > > --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > @@ -2,7 +2,9 @@
> > >  #include <test_progs.h>
> > >  #include "get_func_ip_test.skel.h"
> > >
> > > -void test_get_func_ip_test(void)
> > > +static int config_ibt;
> > > +
> > > +static void test_function_entry(void)
> > >  {
> > >         struct get_func_ip_test *skel = NULL;
> > >         int err, prog_fd;
> > > @@ -12,14 +14,6 @@ void test_get_func_ip_test(void)
> > >         if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
> > >                 return;
> > >
> > > -       /* test6 is x86_64 specifc because of the instruction
> > > -        * offset, disabling it for all other archs
> > > -        */
> > > -#ifndef __x86_64__
> > > -       bpf_program__set_autoload(skel->progs.test6, false);
> > > -       bpf_program__set_autoload(skel->progs.test7, false);
> > > -#endif
> > > -
> > >         err = get_func_ip_test__load(skel);
> > >         if (!ASSERT_OK(err, "get_func_ip_test__load"))
> > >                 goto cleanup;
> > > @@ -38,16 +32,62 @@ void test_get_func_ip_test(void)
> > >
> > >         ASSERT_OK(err, "test_run");
> > >
> > > +       config_ibt = skel->bss->config_ibt;
> > 
> > skel->bss->config_ibt isn't actually necessary, you can just check
> > skel->kconfig->CONFIG_X86_KERNEL_IBT directly. And you won't need to
> > trigger BPF program unnecessarily, libbpf will fill out kconfig section
> > during object/skeleton load phase.
> 
> nice, did not know that ;-) will remove it

actually.. to get kconfig datasec generated in BTF the CONFIG_X86_KERNEL_IBT
symbol needs to be used.. so I came up with adding unused function:

  int unused(void)
  {
        return CONFIG_X86_KERNEL_IBT ? 0 : 1;
  }

but I wonder if having the config_ibt variable is better and clearer

thoughts?

thanks,
jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread
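
For reference, a minimal sketch of the kconfig-based detection discussed
above, assuming libbpf's __kconfig extern convention. The unused()
reference trick is the one quoted in the mail; the kernel_has_ibt()
helper, the bool extern type and the split across the two test files
are illustrative, not the final patch.

  /* BPF object side (progs/get_func_ip_test.c): declaring the Kconfig
   * extern and referencing it once is what gets the .kconfig datasec
   * emitted into BTF, so the skeleton can expose it.  Needs
   * <bpf/bpf_helpers.h> for __kconfig/__weak.
   */
  extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;

  /* never called; only keeps the extern referenced */
  int unused(void)
  {
          return CONFIG_X86_KERNEL_IBT ? 0 : 1;
  }

  /* Userspace side (prog_tests/get_func_ip_test.c): libbpf resolves
   * kconfig externs during skeleton load, so the value can be read
   * without running any BPF program first.
   */
  static bool kernel_has_ibt(struct get_func_ip_test *skel)
  {
          return skel->kconfig->CONFIG_X86_KERNEL_IBT;
  }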

* Re: [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
  2022-08-16  7:00     ` Jiri Olsa
  2022-08-16 10:18       ` Jiri Olsa
@ 2022-08-16 17:19       ` Andrii Nakryiko
  1 sibling, 0 replies; 30+ messages in thread
From: Andrii Nakryiko @ 2022-08-16 17:19 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

On Tue, Aug 16, 2022 at 12:00 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Mon, Aug 15, 2022 at 08:25:51PM -0700, Andrii Nakryiko wrote:
> > On Thu, Aug 11, 2022 at 2:16 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > >
> > > With CONFIG_X86_KERNEL_IBT enabled the test for kprobe with offset
> > > won't work because of the extra endbr instruction.
> > >
> > > As suggested by Andrii adding CONFIG_X86_KERNEL_IBT detection
> > > and using appropriate offset value based on that.
> > >
> > > Also removing test7 program, because it does the same as test6.
> > >
> > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  .../bpf/prog_tests/get_func_ip_test.c         | 62 +++++++++++++++----
> > >  .../selftests/bpf/progs/get_func_ip_test.c    | 20 +++---
> > >  2 files changed, 60 insertions(+), 22 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > index 938dbd4d7c2f..a4dab2fa2258 100644
> > > --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > @@ -2,7 +2,9 @@
> > >  #include <test_progs.h>
> > >  #include "get_func_ip_test.skel.h"
> > >
> > > -void test_get_func_ip_test(void)
> > > +static int config_ibt;
> > > +
> > > +static void test_function_entry(void)
> > >  {
> > >         struct get_func_ip_test *skel = NULL;
> > >         int err, prog_fd;
> > > @@ -12,14 +14,6 @@ void test_get_func_ip_test(void)
> > >         if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
> > >                 return;
> > >
> > > -       /* test6 is x86_64 specifc because of the instruction
> > > -        * offset, disabling it for all other archs
> > > -        */
> > > -#ifndef __x86_64__
> > > -       bpf_program__set_autoload(skel->progs.test6, false);
> > > -       bpf_program__set_autoload(skel->progs.test7, false);
> > > -#endif
> > > -
> > >         err = get_func_ip_test__load(skel);
> > >         if (!ASSERT_OK(err, "get_func_ip_test__load"))
> > >                 goto cleanup;
> > > @@ -38,16 +32,62 @@ void test_get_func_ip_test(void)
> > >
> > >         ASSERT_OK(err, "test_run");
> > >
> > > +       config_ibt = skel->bss->config_ibt;
> >
> > skel->bss->config_ibt isn't actually necessary, you can just check
> > skel->kconfig->CONFIG_X86_KERNEL_IBT directly. And you won't need to
> > trigger the BPF program unnecessarily, libbpf will fill out the kconfig
> > section during object/skeleton load phase.
>
> nice, did not know that ;-) will remove it
>
> >
> > > +       ASSERT_TRUE(config_ibt == 0 || config_ibt == 1, "config_ibt");
> >
> > you won't need this anymore
> >
> > > +       printf("%s:config_ibt %d\n", __func__, config_ibt);
> >
> > and this is just debug leftover
>
> it's intentional to find out quickly what config we are failing on

ASSERT_GE(config_ibt, 0, "config_ibt_lower_bound");
ASSERT_LE(config_ibt, 1, "config_ibt_upper_bound");

These will print out the actual value and a more meaningful error message
if the checks fail, without polluting the output with stray printfs.

>
> >
> > > +
> > >         ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
> > >         ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
> > >         ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
> > >         ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
> > >         ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
> > > +
> >
> > [...]
> >

[...]

^ permalink raw reply	[flat|nested] 30+ messages in thread
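
The same kconfig value is also what lets the offset-based kprobe test
work on both kinds of kernels: endbr64 is 4 bytes, so the instruction
that test6 used to probe at +0x5 sits at +0x9 when IBT is on. A rough
sketch of the userspace side, assuming test6 is switched to a plain
SEC("kprobe") so it can be attached manually; the 0x5/0x9 offsets and
the bpf_fentry_test6 target mirror the pre-IBT test and are meant as
illustration, not as the exact patch.

  static void test_function_body(struct get_func_ip_test *skel)
  {
          bool ibt = skel->kconfig->CONFIG_X86_KERNEL_IBT;
          LIBBPF_OPTS(bpf_kprobe_opts, kopts, .offset = ibt ? 0x9 : 0x5);
          struct bpf_link *link;

          link = bpf_program__attach_kprobe_opts(skel->progs.test6,
                                                 "bpf_fentry_test6", &kopts);
          if (!ASSERT_OK_PTR(link, "test6_attach"))
                  return;

          /* trigger the bpf_fentry_test* functions via
           * bpf_prog_test_run_opts() here, then check
           * skel->bss->test6_result as before
           */

          bpf_link__destroy(link);
  }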

* Re: [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT
  2022-08-16 10:18       ` Jiri Olsa
@ 2022-08-16 17:28         ` Andrii Nakryiko
  0 siblings, 0 replies; 30+ messages in thread
From: Andrii Nakryiko @ 2022-08-16 17:28 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

On Tue, Aug 16, 2022 at 3:18 AM Jiri Olsa <olsajiri@gmail.com> wrote:
>
> On Tue, Aug 16, 2022 at 09:00:20AM +0200, Jiri Olsa wrote:
> > On Mon, Aug 15, 2022 at 08:25:51PM -0700, Andrii Nakryiko wrote:
> > > On Thu, Aug 11, 2022 at 2:16 AM Jiri Olsa <jolsa@kernel.org> wrote:
> > > >
> > > > With CONFIG_X86_KERNEL_IBT enabled the test for kprobe with offset
> > > > won't work because of the extra endbr instruction.
> > > >
> > > > As suggested by Andrii adding CONFIG_X86_KERNEL_IBT detection
> > > > and using appropriate offset value based on that.
> > > >
> > > > Also removing test7 program, because it does the same as test6.
> > > >
> > > > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  .../bpf/prog_tests/get_func_ip_test.c         | 62 +++++++++++++++----
> > > >  .../selftests/bpf/progs/get_func_ip_test.c    | 20 +++---
> > > >  2 files changed, 60 insertions(+), 22 deletions(-)
> > > >
> > > > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > > index 938dbd4d7c2f..a4dab2fa2258 100644
> > > > --- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> > > > @@ -2,7 +2,9 @@
> > > >  #include <test_progs.h>
> > > >  #include "get_func_ip_test.skel.h"
> > > >
> > > > -void test_get_func_ip_test(void)
> > > > +static int config_ibt;
> > > > +
> > > > +static void test_function_entry(void)
> > > >  {
> > > >         struct get_func_ip_test *skel = NULL;
> > > >         int err, prog_fd;
> > > > @@ -12,14 +14,6 @@ void test_get_func_ip_test(void)
> > > >         if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
> > > >                 return;
> > > >
> > > > -       /* test6 is x86_64 specifc because of the instruction
> > > > -        * offset, disabling it for all other archs
> > > > -        */
> > > > -#ifndef __x86_64__
> > > > -       bpf_program__set_autoload(skel->progs.test6, false);
> > > > -       bpf_program__set_autoload(skel->progs.test7, false);
> > > > -#endif
> > > > -
> > > >         err = get_func_ip_test__load(skel);
> > > >         if (!ASSERT_OK(err, "get_func_ip_test__load"))
> > > >                 goto cleanup;
> > > > @@ -38,16 +32,62 @@ void test_get_func_ip_test(void)
> > > >
> > > >         ASSERT_OK(err, "test_run");
> > > >
> > > > +       config_ibt = skel->bss->config_ibt;
> > >
> > > skel->bss->config_ibt isn't actually necessary, you can just check
> > > skel->kconfig->CONFIG_X86_KERNEL_IBT directly. And you won't need to
> > > trigger the BPF program unnecessarily, libbpf will fill out the kconfig
> > > section during object/skeleton load phase.
> >
> > nice, did not know that ;-) will remove it
>
> actually.. to get the kconfig datasec generated in BTF, the CONFIG_X86_KERNEL_IBT
> symbol needs to be referenced.. so I came up with adding an unused function:
>
>   int unused(void)
>   {
>         return CONFIG_X86_KERNEL_IBT ? 0 : 1;
>   }
>
> but I wonder if having the config_ibt variable is better and clearer
>
> thoughts?
>

seems like it's not allowed to add __attribute__((used)) onto an extern
:( I don't know, this unused function is fine with me, even if a bit
ugly (maybe just leave a comment explaining it?). I like the fact that
you won't have to run the BPF program to get this config, but it's minor
either way. I don't feel strongly either way.

> thanks,
> jirka

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg
  2022-08-15 12:51       ` Peter Zijlstra
@ 2022-08-17 13:40         ` Jiri Olsa
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-08-17 13:40 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Masami Hiramatsu (Google)

On Mon, Aug 15, 2022 at 02:51:32PM +0200, Peter Zijlstra wrote:
> On Mon, Aug 15, 2022 at 12:57:39PM +0200, Jiri Olsa wrote:
> > On Mon, Aug 15, 2022 at 12:18:38PM +0200, Peter Zijlstra wrote:
> > > On Thu, Aug 11, 2022 at 11:15:23AM +0200, Jiri Olsa wrote:
> > > > Using function address given at the generation time as the trampoline
> > > > ip argument. This way we get directly the function address that we
> > > > need, so we don't need to:
> > > >   - read the ip from the stack
> > > >   - subtract X86_PATCH_SIZE
> > > >   - subtract ENDBR_INSN_SIZE if CONFIG_X86_KERNEL_IBT is enabled
> > > >     which is not even implemented yet ;-)
> > > 
> > > Can you please tell me what all this does and why?
> > > 
> > 
> > arch_prepare_bpf_trampoline prepares bpf trampoline for given function
> > specified by 'func_addr' argument
> 
> The bpf trampoline is what's used for ftrace direct call, no?

sorry I forgot to answer this one.. yes ;-)

> 
> > the changed code is storing/preparing caller's 'ip' address on the
> > trampoline's stack so the get_func_ip helper can use it
> 
> I've no idea what get_func_ip() helper is...

it's a kernel helper that bpf programs can call; get_func_ip returns
the ip address of the probed function that triggered the bpf program

jirka

> 
> > currently the trampoline code gets the caller's ip address by reading
> > caller's return address from stack and subtracting X86_PATCH_SIZE from
> > it
> > 
> > the change uses 'func_addr' as caller's 'ip' address when trampoline is
> > generated .. this way we don't need to retrieve the return address from
> > stack and care about endbr instruction if IBT is enabled
> 
> Ok, I *think* I sorta understand that.

^ permalink raw reply	[flat|nested] 30+ messages in thread
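
For readers who, like Peter, have not met the helper: bpf_get_func_ip()
can be called from tracing BPF programs and is expected to return the
address of the function the program is attached to. A minimal sketch of
a trampoline-attached (fentry) user, with bpf_fentry_test1 as an
arbitrary example target:

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  __u64 entry_ip;

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(test_entry_ip)
  {
          /* with the change above this is the function address itself,
           * IBT or not, rather than an address that still needs the
           * X86_PATCH_SIZE/endbr adjustments described in the changelog
           */
          entry_ip = bpf_get_func_ip(ctx);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";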

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
  2022-08-15  9:57   ` Peter Zijlstra
@ 2022-08-30 15:08   ` Jiri Olsa
  2022-09-08  8:26     ` Jiri Olsa
  2022-09-08 12:01   ` Masami Hiramatsu
  2 siblings, 1 reply; 30+ messages in thread
From: Jiri Olsa @ 2022-08-30 15:08 UTC (permalink / raw)
  To: Alexei Starovoitov, Masami Hiramatsu (Google),
	Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra

Masami,
could you please check on this one?

thanks,
jirka

On Thu, Aug 11, 2022 at 11:15:21AM +0200, Jiri Olsa wrote:
> Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> attach address is on function entry. This is used in following
> changes in get_func_ip helper to return correct function address.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/kprobes.h | 1 +
>  kernel/kprobes.c        | 6 +++++-
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index 55041d2f884d..a0b92be98984 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -103,6 +103,7 @@ struct kprobe {
>  				   * this flag is only for optimized_kprobe.
>  				   */
>  #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
> +#define KPROBE_FLAG_ON_FUNC_ENTRY	16 /* probe is on the function entry */
>  
>  /* Has this kprobe gone ? */
>  static inline bool kprobe_gone(struct kprobe *p)
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index f214f8c088ed..a6b1b5c49d92 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1605,9 +1605,10 @@ int register_kprobe(struct kprobe *p)
>  	struct kprobe *old_p;
>  	struct module *probed_mod;
>  	kprobe_opcode_t *addr;
> +	bool on_func_entry;
>  
>  	/* Adjust probe address from symbol */
> -	addr = kprobe_addr(p);
> +	addr = _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);
>  	if (IS_ERR(addr))
>  		return PTR_ERR(addr);
>  	p->addr = addr;
> @@ -1627,6 +1628,9 @@ int register_kprobe(struct kprobe *p)
>  
>  	mutex_lock(&kprobe_mutex);
>  
> +	if (on_func_entry)
> +		p->flags |= KPROBE_FLAG_ON_FUNC_ENTRY;
> +
>  	old_p = get_kprobe(p->addr);
>  	if (old_p) {
>  		/* Since this may unoptimize 'old_p', locking 'text_mutex'. */
> -- 
> 2.37.1
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-30 15:08   ` Jiri Olsa
@ 2022-09-08  8:26     ` Jiri Olsa
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-09-08  8:26 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra

ping, thanks

On Tue, Aug 30, 2022 at 05:08:44PM +0200, Jiri Olsa wrote:
> Masami,
> could you please check on this one?
> 
> thanks,
> jirka
> 
> On Thu, Aug 11, 2022 at 11:15:21AM +0200, Jiri Olsa wrote:
> > Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> > attach address is on function entry. This is used in following
> > changes in get_func_ip helper to return correct function address.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/linux/kprobes.h | 1 +
> >  kernel/kprobes.c        | 6 +++++-
> >  2 files changed, 6 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> > index 55041d2f884d..a0b92be98984 100644
> > --- a/include/linux/kprobes.h
> > +++ b/include/linux/kprobes.h
> > @@ -103,6 +103,7 @@ struct kprobe {
> >  				   * this flag is only for optimized_kprobe.
> >  				   */
> >  #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
> > +#define KPROBE_FLAG_ON_FUNC_ENTRY	16 /* probe is on the function entry */
> >  
> >  /* Has this kprobe gone ? */
> >  static inline bool kprobe_gone(struct kprobe *p)
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index f214f8c088ed..a6b1b5c49d92 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -1605,9 +1605,10 @@ int register_kprobe(struct kprobe *p)
> >  	struct kprobe *old_p;
> >  	struct module *probed_mod;
> >  	kprobe_opcode_t *addr;
> > +	bool on_func_entry;
> >  
> >  	/* Adjust probe address from symbol */
> > -	addr = kprobe_addr(p);
> > +	addr = _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);
> >  	if (IS_ERR(addr))
> >  		return PTR_ERR(addr);
> >  	p->addr = addr;
> > @@ -1627,6 +1628,9 @@ int register_kprobe(struct kprobe *p)
> >  
> >  	mutex_lock(&kprobe_mutex);
> >  
> > +	if (on_func_entry)
> > +		p->flags |= KPROBE_FLAG_ON_FUNC_ENTRY;
> > +
> >  	old_p = get_kprobe(p->addr);
> >  	if (old_p) {
> >  		/* Since this may unoptimize 'old_p', locking 'text_mutex'. */
> > -- 
> > 2.37.1
> > 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag
  2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
  2022-08-15  9:57   ` Peter Zijlstra
  2022-08-30 15:08   ` Jiri Olsa
@ 2022-09-08 12:01   ` Masami Hiramatsu
  2 siblings, 0 replies; 30+ messages in thread
From: Masami Hiramatsu @ 2022-09-08 12:01 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

Hi Jiri,

On Thu, 11 Aug 2022 11:15:21 +0200
Jiri Olsa <jolsa@kernel.org> wrote:

> Adding KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag to indicate that
> attach address is on function entry. This is used in following
> changes in get_func_ip helper to return correct function address.

Sorry about the late reply.

> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/linux/kprobes.h | 1 +
>  kernel/kprobes.c        | 6 +++++-
>  2 files changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index 55041d2f884d..a0b92be98984 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -103,6 +103,7 @@ struct kprobe {
>  				   * this flag is only for optimized_kprobe.
>  				   */
>  #define KPROBE_FLAG_FTRACE	8 /* probe is using ftrace */
> +#define KPROBE_FLAG_ON_FUNC_ENTRY	16 /* probe is on the function entry */
>  
>  /* Has this kprobe gone ? */
>  static inline bool kprobe_gone(struct kprobe *p)
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index f214f8c088ed..a6b1b5c49d92 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1605,9 +1605,10 @@ int register_kprobe(struct kprobe *p)
>  	struct kprobe *old_p;
>  	struct module *probed_mod;
>  	kprobe_opcode_t *addr;
> +	bool on_func_entry;
>  
>  	/* Adjust probe address from symbol */
> -	addr = kprobe_addr(p);
> +	addr = _kprobe_addr(p->addr, p->symbol_name, p->offset, &on_func_entry);

Hmm, OK. And I think now we can remove kprobe_addr() itself. It is still
used in register_kretprobe() but that is redundant.

Anyway, this patch itself looks good to me.

Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Thank you!

>  	if (IS_ERR(addr))
>  		return PTR_ERR(addr);
>  	p->addr = addr;
> @@ -1627,6 +1628,9 @@ int register_kprobe(struct kprobe *p)
>  
>  	mutex_lock(&kprobe_mutex);
>  
> +	if (on_func_entry)
> +		p->flags |= KPROBE_FLAG_ON_FUNC_ENTRY;
> +
>  	old_p = get_kprobe(p->addr);
>  	if (old_p) {
>  		/* Since this may unoptimize 'old_p', locking 'text_mutex'. */
> -- 
> 2.37.1
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread
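
To connect the flag to the rest of the series: patch 5/6 makes the
kprobe flavour of get_func_ip report an address only when the probe
really sits on the function entry, which is exactly what this flag
records. A simplified sketch of that consumer (the authoritative change
is in kernel/trace/bpf_trace.c in the series, not this snippet):

  BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
  {
          struct kprobe *kp = kprobe_running();

          /* a probe placed inside the function body has no meaningful
           * "function ip" to report, so return 0 for it
           */
          if (!kp || !(kp->flags & KPROBE_FLAG_ON_FUNC_ENTRY))
                  return 0;

          return (uintptr_t)kp->addr;
  }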

* Re: [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-08-11  9:15 ` [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback Jiri Olsa
  2022-08-15 10:13   ` Peter Zijlstra
@ 2022-09-08 12:35   ` Masami Hiramatsu
  2022-09-08 19:41     ` Jiri Olsa
  1 sibling, 1 reply; 30+ messages in thread
From: Masami Hiramatsu @ 2022-09-08 12:35 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra,
	Masami Hiramatsu (Google)

On Thu, 11 Aug 2022 11:15:22 +0200
Jiri Olsa <jolsa@kernel.org> wrote:

> Keeping the resolved 'addr' in kallsyms_callback, instead of taking
> ftrace_location value, because we depend on symbol address in the
> cookie related code.
> 
> With the CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
> from the symbol address, which screws up the symbol address cookie matching.
> 
> There are 2 users of this function:
> - bpf_kprobe_multi_link_attach
>     which this fix is for
> 
> - get_ftrace_locations
>     which is used by register_fprobe_syms
> 
>     this function needs to get symbols resolved to addresses,
>     but does not need 'ftrace location addresses' at this point;
>     there's another ftrace location translation later in the path, done
>     by the ftrace_set_filter_ips call:
> 
>      register_fprobe_syms
>        addrs = get_ftrace_locations
> 
>        register_fprobe_ips(addrs)
>          ...
>          ftrace_set_filter_ips
>            ...
>              __ftrace_match_addr
>                ip = ftrace_location(ip);
>                ...
> 

Yes, this looks OK for fprobe. I confirmed above.

Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

One concern was that the caller might expect the returned address to
be a ftrace_location(), but as far as I can read the function's
documentation, there is no such requirement.

 * ftrace_lookup_symbols - Lookup addresses for array of symbols
...
 * This function looks up addresses for array of symbols provided in
 * @syms array (must be alphabetically sorted) and stores them in
 * @addrs array, which needs to be big enough to store at least @cnt
 * addresses.

So this change is OK.

Thank you,

> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  kernel/trace/ftrace.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index bc921a3f7ea8..8a8c90d1a387 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -8268,8 +8268,7 @@ static int kallsyms_callback(void *data, const char *name,
>  	if (args->addrs[idx])
>  		return 0;
>  
> -	addr = ftrace_location(addr);
> -	if (!addr)
> +	if (!ftrace_location(addr))
>  		return 0;
>  
>  	args->addrs[idx] = addr;
> -- 
> 2.37.1
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread
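
To make the mismatch concrete: with CONFIG_X86_KERNEL_IBT the compiler
emits a 4-byte endbr64 at the function entry, in front of the fentry
patch site, so the kallsyms address and ftrace_location() no longer
agree. Roughly (layout illustrative):

  /*
   *   <some_func>+0x0   endbr64           <- symbol address (kallsyms)
   *   <some_func>+0x4   call __fentry__   <- ftrace_location()
   *   <some_func>+0x9   ...function body...
   *
   * kprobe_multi cookies are matched against the symbol address, which
   * is why kallsyms_callback keeps 'addr' and only uses
   * ftrace_location() as an "is this patchable?" check, as in the hunk
   * above.
   */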

* Re: [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback
  2022-09-08 12:35   ` Masami Hiramatsu
@ 2022-09-08 19:41     ` Jiri Olsa
  0 siblings, 0 replies; 30+ messages in thread
From: Jiri Olsa @ 2022-09-08 19:41 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Stanislav Fomichev, Hao Luo, Peter Zijlstra

On Thu, Sep 08, 2022 at 09:35:51PM +0900, Masami Hiramatsu wrote:
> On Thu, 11 Aug 2022 11:15:22 +0200
> Jiri Olsa <jolsa@kernel.org> wrote:
> 
> > Keeping the resolved 'addr' in kallsyms_callback, instead of taking
> > ftrace_location value, because we depend on symbol address in the
> > cookie related code.
> > 
> > With the CONFIG_X86_KERNEL_IBT option the ftrace_location value differs
> > from the symbol address, which screws up the symbol address cookie matching.
> > 
> > There are 2 users of this function:
> > - bpf_kprobe_multi_link_attach
> >     which this fix is for
> > 
> > - get_ftrace_locations
> >     which is used by register_fprobe_syms
> > 
> >     this function needs to get symbols resolved to addresses,
> >     but does not need 'ftrace location addresses' at this point;
> >     there's another ftrace location translation later in the path, done
> >     by the ftrace_set_filter_ips call:
> > 
> >      register_fprobe_syms
> >        addrs = get_ftrace_locations
> > 
> >        register_fprobe_ips(addrs)
> >          ...
> >          ftrace_set_filter_ips
> >            ...
> >              __ftrace_match_addr
> >                ip = ftrace_location(ip);
> >                ...
> > 
> 
> Yes, this looks OK for fprobe. I confirmed above.
> 
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> One concern was that the caller might expect the returned address to
> be a ftrace_location(), but as far as I can read the function's
> documentation, there is no such requirement.
> 
>  * ftrace_lookup_symbols - Lookup addresses for array of symbols
> ...
>  * This function looks up addresses for array of symbols provided in
>  * @syms array (must be alphabetically sorted) and stores them in
>  * @addrs array, which needs to be big enough to store at least @cnt
>  * addresses.
> 
> So this change is OK.
> 
> Thank you,

thanks for the review, I'll rebase and send a new version

jirka

> 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  kernel/trace/ftrace.c | 3 +--
> >  1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index bc921a3f7ea8..8a8c90d1a387 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -8268,8 +8268,7 @@ static int kallsyms_callback(void *data, const char *name,
> >  	if (args->addrs[idx])
> >  		return 0;
> >  
> > -	addr = ftrace_location(addr);
> > -	if (!addr)
> > +	if (!ftrace_location(addr))
> >  		return 0;
> >  
> >  	args->addrs[idx] = addr;
> > -- 
> > 2.37.1
> > 
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2022-09-08 19:41 UTC | newest]

Thread overview: 30+ messages
2022-08-11  9:15 [PATCHv2 bpf-next 0/6] bpf: Fixes for CONFIG_X86_KERNEL_IBT Jiri Olsa
2022-08-11  9:15 ` [PATCHv2 bpf-next 1/6] kprobes: Add new KPROBE_FLAG_ON_FUNC_ENTRY kprobe flag Jiri Olsa
2022-08-15  9:57   ` Peter Zijlstra
2022-08-15 10:10     ` Jiri Olsa
2022-08-15 12:40       ` Peter Zijlstra
2022-08-16  7:01         ` Jiri Olsa
2022-08-30 15:08   ` Jiri Olsa
2022-09-08  8:26     ` Jiri Olsa
2022-09-08 12:01   ` Masami Hiramatsu
2022-08-11  9:15 ` [PATCHv2 bpf-next 2/6] ftrace: Keep the resolved addr in kallsyms_callback Jiri Olsa
2022-08-15 10:13   ` Peter Zijlstra
2022-08-15 10:30     ` Jiri Olsa
2022-08-15 12:45       ` Peter Zijlstra
2022-08-16  7:06         ` Jiri Olsa
2022-09-08 12:35   ` Masami Hiramatsu
2022-09-08 19:41     ` Jiri Olsa
2022-08-11  9:15 ` [PATCHv2 bpf-next 3/6] bpf: Use given function address for trampoline ip arg Jiri Olsa
2022-08-15 10:18   ` Peter Zijlstra
2022-08-15 10:57     ` Jiri Olsa
2022-08-15 12:51       ` Peter Zijlstra
2022-08-17 13:40         ` Jiri Olsa
2022-08-11  9:15 ` [PATCHv2 bpf-next 4/6] bpf: Adjust kprobe_multi entry_ip for CONFIG_X86_KERNEL_IBT Jiri Olsa
2022-08-11  9:15 ` [PATCHv2 bpf-next 5/6] bpf: Return value in kprobe get_func_ip only for entry address Jiri Olsa
2022-08-11  9:15 ` [PATCHv2 bpf-next 6/6] selftests/bpf: Fix get_func_ip offset test for CONFIG_X86_KERNEL_IBT Jiri Olsa
2022-08-16  3:25   ` Andrii Nakryiko
2022-08-16  7:00     ` Jiri Olsa
2022-08-16 10:18       ` Jiri Olsa
2022-08-16 17:28         ` Andrii Nakryiko
2022-08-16 17:19       ` Andrii Nakryiko
2022-08-16  3:27 ` [PATCHv2 bpf-next 0/6] bpf: Fixes " Andrii Nakryiko
