* [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs
@ 2023-02-22 22:37 Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 01/12] selftests/bpf: Finish folding after BPF_FUNC_csum_diff Ilya Leoshkevich
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

v2: https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
          Drop the merged check_subprogs() cleanup.
          Adjust arm, sparc and i386 JITs.
          Fix a few portability issues in test_verifier.
          Fix a few sparc64 issues.
          Trim s390x denylist.

v1: https://lore.kernel.org/bpf/20230214212809.242632-1-iii@linux.ibm.com/T/#t
v1 -> v2: Add BPF_HELPER_CALL (Stanislav).
          Add check_subprogs() cleanup - noticed while reviewing the
          code for BPF_HELPER_CALL.
          Drop WARN_ON_ONCE (Stanislav, Alexei).
          Add bpf_jit_get_func_addr() to x86_64 JIT.

Hi,

This series solves the problems outlined in [1, 2, 3]. The main problem
is that addresses of kfuncs in modules do not fit into bpf_insn.imm on
s390x; the secondary problem is that there is a conflict between
"abstract" XDP metadata function BTF ids and their "concrete" addresses.

The solution is to keep kfunc BTF ids in bpf_insn.imm and to put the
addresses into bpf_kfunc_desc, which has no such size restriction.
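
A rough sketch of the old limitation (illustrative pseudo-kernel code,
not an actual excerpt; kfunc_addr stands for the resolved address):
the verifier encoded a kfunc address as a 32-bit delta from
__bpf_call_base, which cannot reach module kfuncs on s390x:

	/* BPF_CALL_IMM(): delta from __bpf_call_base, must fit in s32 */
	long delta = (void *)kfunc_addr - (void *)__bpf_call_base;

	if ((long)(s32)delta != delta)
		return -EINVAL;	/* module kfunc ~2^42 bytes away on s390x */
	insn->imm = (s32)delta;

Patch 11 removes this restriction by keeping the BTF id in imm instead.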

Regtested with test_verifier and test_progs on x86_64 and s390x.
Regtested with test_verifier only on arm, sparc64 and i386.

[1] https://lore.kernel.org/bpf/Y9%2FyrKZkBK6yzXp+@krava/
[2] https://lore.kernel.org/bpf/20230128000650.1516334-1-iii@linux.ibm.com/
[3] https://lore.kernel.org/bpf/20230128000650.1516334-32-iii@linux.ibm.com/

Best regards,
Ilya

Ilya Leoshkevich (12):
  selftests/bpf: Finish folding after BPF_FUNC_csum_diff
  selftests/bpf: Fix test_verifier on 32-bit systems
  sparc: Update maximum errno
  bpf: sparc64: Emit fixed-length instructions for BPF_PSEUDO_FUNC
  bpf: sparc64: Fix jumping to the first insn
  bpf: sparc64: Use 32-bit tail call index
  bpf, arm: Use bpf_jit_get_func_addr()
  bpf: sparc64: Use bpf_jit_get_func_addr()
  bpf: x86: Use bpf_jit_get_func_addr()
  bpf, x86_32: Use bpf_jit_get_func_addr()
  bpf: Support 64-bit pointers to kfuncs
  selftests/bpf: Trim DENYLIST.s390x

 arch/arm/net/bpf_jit_32.c                     | 27 +++++--
 arch/sparc/include/asm/errno.h                | 10 +++
 arch/sparc/kernel/entry.S                     |  2 +-
 arch/sparc/kernel/process.c                   |  6 +-
 arch/sparc/kernel/syscalls.S                  |  2 +-
 arch/sparc/net/bpf_jit_comp_64.c              | 50 +++++++-----
 arch/x86/net/bpf_jit_comp.c                   | 15 +++-
 arch/x86/net/bpf_jit_comp32.c                 | 27 +++++--
 include/linux/bpf.h                           |  2 +
 kernel/bpf/core.c                             | 21 ++++-
 kernel/bpf/verifier.c                         | 79 ++++++-------------
 tools/testing/selftests/bpf/DENYLIST.s390x    | 20 -----
 tools/testing/selftests/bpf/test_verifier.c   |  5 ++
 .../selftests/bpf/verifier/array_access.c     | 10 ++-
 .../selftests/bpf/verifier/atomic_fetch_add.c |  2 +-
 .../selftests/bpf/verifier/bpf_get_stack.c    |  2 +-
 tools/testing/selftests/bpf/verifier/calls.c  |  4 +-
 .../testing/selftests/bpf/verifier/map_kptr.c |  2 +-
 .../testing/selftests/bpf/verifier/map_ptr.c  |  2 +-
 19 files changed, 164 insertions(+), 124 deletions(-)
 create mode 100644 arch/sparc/include/asm/errno.h

-- 
2.39.1


* [PATCH bpf-next v3 01/12] selftests/bpf: Finish folding after BPF_FUNC_csum_diff
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 02/12] selftests/bpf: Fix test_verifier on 32-bit systems Ilya Leoshkevich
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

bpf_csum_diff() may return a non-folded checksum, and the arm
implementation actually does this. Finish folding in the test prog.
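
For reference, the folding added below, written as plain C (a sketch;
per the comment in the test, csum_partial() may yield either 0xffffffe3
or 0x1ffe2, and two folding rounds reduce both to 0xffe3):

	static __u32 fold(__u32 sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);	/* may still carry */
		sum = (sum & 0xffff) + (sum >> 16);	/* fits in 16 bits now */
		return sum;
	}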

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 tools/testing/selftests/bpf/verifier/array_access.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
index 1b138cd2b187..e570d6a95702 100644
--- a/tools/testing/selftests/bpf/verifier/array_access.c
+++ b/tools/testing/selftests/bpf/verifier/array_access.c
@@ -241,7 +241,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 14),
 
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
@@ -250,7 +250,15 @@
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_csum_diff),
+	/* csum_partial() is allowed to return both 0xffffffe3 and 0x1ffe2 */
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 16),
+	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 16),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
-- 
2.39.1


* [PATCH bpf-next v3 02/12] selftests/bpf: Fix test_verifier on 32-bit systems
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 01/12] selftests/bpf: Finish folding after BPF_FUNC_csum_diff Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 03/12] sparc: Update maximum errno Ilya Leoshkevich
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

bpf_fentry_test_t.a, bpf_iter_meta.seq, bpf_map.ops and
prog_test_ref_kfunc.next are pointers, so access them using BPF_W on
32-bit systems.
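
The fix routes pointer-sized accesses through a new BPF_PTR macro
(quoted here from the test_verifier.c hunk below); on a 32-bit kernel a
BPF_DW load of a 4-byte pointer field would read past the field into
adjacent memory:

	#if BITS_PER_LONG == 32
	#define BPF_PTR BPF_W	/* pointers are 4 bytes */
	#else
	#define BPF_PTR BPF_DW	/* pointers are 8 bytes */
	#endif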

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 tools/testing/selftests/bpf/test_verifier.c             | 5 +++++
 tools/testing/selftests/bpf/verifier/atomic_fetch_add.c | 2 +-
 tools/testing/selftests/bpf/verifier/bpf_get_stack.c    | 2 +-
 tools/testing/selftests/bpf/verifier/calls.c            | 4 ++--
 tools/testing/selftests/bpf/verifier/map_kptr.c         | 2 +-
 tools/testing/selftests/bpf/verifier/map_ptr.c          | 2 +-
 6 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 8b9949bb833d..81a8aef7362c 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -62,6 +62,11 @@
 #define MAX_FUNC_INFOS	8
 #define MAX_BTF_STRINGS	256
 #define MAX_BTF_TYPES	256
+#if BITS_PER_LONG == 32
+#define BPF_PTR BPF_W
+#else
+#define BPF_PTR BPF_DW
+#endif
 
 #define INSN_OFF_MASK	((__s16)0xFFFF)
 #define INSN_IMM_MASK	((__s32)0xFFFFFFFF)
diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
index a91de8cd9def..6ed98eb217b3 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
@@ -93,7 +93,7 @@
 		 * because it's kernel memory.
 		 */
 		BPF_MOV64_IMM(BPF_REG_3, 1),
-		BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_2, BPF_REG_3, 0),
+		BPF_ATOMIC_OP(BPF_PTR, BPF_ADD | BPF_FETCH, BPF_REG_2, BPF_REG_3, 0),
 		/* Done */
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
index 3e024c891178..e48b409d1759 100644
--- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
@@ -46,7 +46,7 @@
 	"bpf_get_task_stack return R0 range is refined",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_6, 0), // ctx->meta->seq
+	BPF_LDX_MEM(BPF_PTR, BPF_REG_6, BPF_REG_6, 0), // ctx->meta->seq
 	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_1, 8), // ctx->task
 	BPF_LD_MAP_FD(BPF_REG_1, 0), // fixup_map_array_48b
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index 9d993926bf0e..a59759db8c78 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -168,7 +168,7 @@
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 16),
+	BPF_LDX_MEM(BPF_PTR, BPF_REG_1, BPF_REG_1, 16),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -4),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -230,7 +230,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 16),
+	BPF_LDX_MEM(BPF_PTR, BPF_REG_1, BPF_REG_6, 16),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/map_kptr.c b/tools/testing/selftests/bpf/verifier/map_kptr.c
index 6914904344c0..b8ea6b0b536d 100644
--- a/tools/testing/selftests/bpf/verifier/map_kptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_kptr.c
@@ -235,7 +235,7 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 16),
+	BPF_LDX_MEM(BPF_PTR, BPF_REG_1, BPF_REG_0, 16),
 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_this_cpu_ptr),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr.c b/tools/testing/selftests/bpf/verifier/map_ptr.c
index 17ee84dc7766..b8e308d678b7 100644
--- a/tools/testing/selftests/bpf/verifier/map_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_ptr.c
@@ -51,7 +51,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_6, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
+	BPF_LDX_MEM(BPF_PTR, BPF_REG_6, BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
-- 
2.39.1


* [PATCH bpf-next v3 03/12] sparc: Update maximum errno
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 01/12] selftests/bpf: Finish folding after BPF_FUNC_csum_diff Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 02/12] selftests/bpf: Fix test_verifier on 32-bit systems Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 04/12] bpf: sparc64: Emit fixed-length instructions for BPF_PSEUDO_FUNC Ilya Leoshkevich
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich, David S . Miller

When the bpf syscall returns -ENOTSUPP, the kernel does not set the
carry bit in psr/tstate. glibc's syscall() then thinks everything is
fine and skips updating errno, causing all kinds of confusion in bpf
selftests.

sparc decides whether to set the carry bit by comparing the syscall
return value with ERESTART_RESTARTBLOCK, which is smaller than
ENOTSUPP. Fix by introducing EMAXERRNO (like mips) and comparing with
that instead.
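
In C terms, the check mirrors the sparc_fork() hunk below: a syscall
failed iff its return value, compared as unsigned, falls into the top
EMAXERRNO values:

	/* -ENOTSUPP (-524) is below -ERESTART_RESTARTBLOCK (-516) in an
	 * unsigned comparison, so it used to be treated as a success.
	 */
	if ((unsigned long)ret >= -EMAXERRNO)
		/* error: report it to userspace */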

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/sparc/include/asm/errno.h | 10 ++++++++++
 arch/sparc/kernel/entry.S      |  2 +-
 arch/sparc/kernel/process.c    |  6 +++---
 arch/sparc/kernel/syscalls.S   |  2 +-
 4 files changed, 15 insertions(+), 5 deletions(-)
 create mode 100644 arch/sparc/include/asm/errno.h

diff --git a/arch/sparc/include/asm/errno.h b/arch/sparc/include/asm/errno.h
new file mode 100644
index 000000000000..0e0a790b8ea8
--- /dev/null
+++ b/arch/sparc/include/asm/errno.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_SPARC_ERRNO_H
+#define _ASM_SPARC_ERRNO_H
+
+#include <uapi/asm/errno.h>
+
+/* The biggest error number defined here or in <linux/errno.h>. */
+#define EMAXERRNO	1133
+
+#endif /* _ASM_SPARC_ERRNO_H */
diff --git a/arch/sparc/kernel/entry.S b/arch/sparc/kernel/entry.S
index a269ad2fe6df..a24b46ad7b20 100644
--- a/arch/sparc/kernel/entry.S
+++ b/arch/sparc/kernel/entry.S
@@ -1004,7 +1004,7 @@ do_syscall:
 
 ret_sys_call:
 	ld	[%curptr + TI_FLAGS], %l5
-	cmp	%o0, -ERESTART_RESTARTBLOCK
+	cmp	%o0, -EMAXERRNO
 	ld	[%sp + STACKFRAME_SZ + PT_PSR], %g3
 	set	PSR_C, %g2
 	bgeu	1f
diff --git a/arch/sparc/kernel/process.c b/arch/sparc/kernel/process.c
index 0442ab00518d..582933d16f9f 100644
--- a/arch/sparc/kernel/process.c
+++ b/arch/sparc/kernel/process.c
@@ -32,7 +32,7 @@ asmlinkage long sparc_fork(struct pt_regs *regs)
 	 * the parent's %o1.  So detect that case and restore it
 	 * here.
 	 */
-	if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK)
+	if ((unsigned long)ret >= -EMAXERRNO)
 		regs->u_regs[UREG_I1] = orig_i1;
 
 	return ret;
@@ -57,7 +57,7 @@ asmlinkage long sparc_vfork(struct pt_regs *regs)
 	 * the parent's %o1.  So detect that case and restore it
 	 * here.
 	 */
-	if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK)
+	if ((unsigned long)ret >= -EMAXERRNO)
 		regs->u_regs[UREG_I1] = orig_i1;
 
 	return ret;
@@ -103,7 +103,7 @@ asmlinkage long sparc_clone(struct pt_regs *regs)
 	 * the parent's %o1.  So detect that case and restore it
 	 * here.
 	 */
-	if ((unsigned long)ret >= -ERESTART_RESTARTBLOCK)
+	if ((unsigned long)ret >= -EMAXERRNO)
 		regs->u_regs[UREG_I1] = orig_i1;
 
 	return ret;
diff --git a/arch/sparc/kernel/syscalls.S b/arch/sparc/kernel/syscalls.S
index 0e8ab0602c36..9064304f6a3a 100644
--- a/arch/sparc/kernel/syscalls.S
+++ b/arch/sparc/kernel/syscalls.S
@@ -262,7 +262,7 @@ ret_sys_call:
 	mov	%ulo(TSTATE_XCARRY | TSTATE_ICARRY), %g2
 	sllx	%g2, 32, %g2
 
-	cmp	%o0, -ERESTART_RESTARTBLOCK
+	cmp	%o0, -EMAXERRNO
 	bgeu,pn	%xcc, 1f
 	 andcc	%l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT|_TIF_SYSCALL_TRACEPOINT|_TIF_NOHZ), %g0
 	ldx	[%sp + PTREGS_OFF + PT_V9_TNPC], %l1 ! pc = npc
-- 
2.39.1


* [PATCH bpf-next v3 04/12] bpf: sparc64: Emit fixed-length instructions for BPF_PSEUDO_FUNC
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (2 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 03/12] sparc: Update maximum errno Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 05/12] bpf: sparc64: Fix jumping to the first insn Ilya Leoshkevich
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich, David S . Miller

This is the same as commit b54b6003612a ("riscv, bpf: Emit fixed-length
instructions for BPF_PSEUDO_FUNC"), but for sparc64. The code sequence
is borrowed from the output of sparc64-linux-gnu-gcc -Os.
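
Why fixed length matters (an explanatory note, not part of the original
commit message): emit_loadimm64() emits a varying number of
instructions depending on the immediate's value, so replacing the
placeholder address with the real one in the extra JIT pass could
change code offsets and invalidate already-computed branch targets. The
sequence below always has the same length (six instructions, assuming
emit_set_const() emits a sethi + or pair):

	emit_set_const(dst, insn1.imm, ctx);	/* upper 32 bits */
	emit_set_const(tmp, imm, ctx);		/* lower 32 bits */
	emit_alu_K(SLLX, dst, 32, ctx);		/* dst <<= 32 */
	emit_alu3(ADD, dst, tmp, dst, ctx);	/* dst += tmp */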

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/sparc/net/bpf_jit_comp_64.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index fa0759bfe498..59d2d9953aa7 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1243,10 +1243,19 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_LD | BPF_IMM | BPF_DW:
 	{
 		const struct bpf_insn insn1 = insn[1];
+		const u8 tmp = bpf2sparc[TMP_REG_1];
 		u64 imm64;
 
-		imm64 = (u64)insn1.imm << 32 | (u32)imm;
-		emit_loadimm64(imm64, dst, ctx);
+		if (bpf_pseudo_func(insn)) {
+			/* fixed-length insns for extra jit pass */
+			emit_set_const(dst, insn1.imm, ctx);
+			emit_set_const(tmp, imm, ctx);
+			emit_alu_K(SLLX, dst, 32, ctx);
+			emit_alu3(ADD, dst, tmp, dst, ctx);
+		} else {
+			imm64 = (u64)insn1.imm << 32 | (u32)imm;
+			emit_loadimm64(imm64, dst, ctx);
+		}
 
 		return 1;
 	}
-- 
2.39.1


* [PATCH bpf-next v3 05/12] bpf: sparc64: Fix jumping to the first insn
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (3 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 04/12] bpf: sparc64: Emit fixed-length instructions for BPF_PSEUDO_FUNC Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 06/12] bpf: sparc64: Use 32-bit tail call index Ilya Leoshkevich
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich, David S . Miller

Jumping to the first insn causes emit_compare_and_branch() to access
ctx->offset[-1]. Currently ctx->offset[] stores instruction end
addresses; change it to store start addresses instead. We never need
the end address of the last instruction.
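
To spell out the index math (assuming standard BPF jump semantics,
where a jump with offset off at index i targets insn i + off + 1):

	/* old: offset[] held end addresses, so the target's start was the
	 * previous insn's end -- offset[-1] when the target is insn 0
	 */
	target = ctx->offset[i + off];
	/* new: offset[] holds start addresses directly */
	target = ctx->offset[i + off + 1];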

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/sparc/net/bpf_jit_comp_64.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 59d2d9953aa7..0a3c18e39199 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1168,7 +1168,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 
 	/* JUMP off */
 	case BPF_JMP | BPF_JA:
-		emit_branch(BA, ctx->idx, ctx->offset[i + off], ctx);
+		emit_branch(BA, ctx->idx, ctx->offset[i + off + 1], ctx);
 		emit_nop(ctx);
 		break;
 	/* IF (dst COND src) JUMP off */
@@ -1185,7 +1185,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_JMP | BPF_JSET | BPF_X: {
 		int err;
 
-		err = emit_compare_and_branch(code, dst, src, 0, false, i + off, ctx);
+		err = emit_compare_and_branch(code, dst, src, 0, false, i + off + 1, ctx);
 		if (err)
 			return err;
 		break;
@@ -1204,7 +1204,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_JMP | BPF_JSET | BPF_K: {
 		int err;
 
-		err = emit_compare_and_branch(code, dst, 0, imm, true, i + off, ctx);
+		err = emit_compare_and_branch(code, dst, 0, imm, true, i + off + 1, ctx);
 		if (err)
 			return err;
 		break;
@@ -1453,16 +1453,12 @@ static int build_body(struct jit_ctx *ctx)
 		const struct bpf_insn *insn = &prog->insnsi[i];
 		int ret;
 
-		ret = build_insn(insn, ctx);
-
-		if (ret > 0) {
-			i++;
-			ctx->offset[i] = ctx->idx;
-			continue;
-		}
 		ctx->offset[i] = ctx->idx;
-		if (ret)
+		ret = build_insn(insn, ctx);
+		if (ret < 0)
 			return ret;
+
+		i += ret;
 	}
 	return 0;
 }
-- 
2.39.1


* [PATCH bpf-next v3 06/12] bpf: sparc64: Use 32-bit tail call index
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (4 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 05/12] bpf: sparc64: Fix jumping to the first insn Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 07/12] bpf, arm: Use bpf_jit_get_func_addr() Ilya Leoshkevich
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich, David S . Miller

The interpreter does the following:

  u32 index = BPF_R3;

and JITs should do the same.
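
The added instruction zero-extends the index register: on sparc64, the
32-bit shift instructions zero the upper 32 bits of the result, so a
logical right shift by 0 is a cheap zero-extension:

	emit_alu_K(SRL, bpf_index, 0, ctx);	/* index &= 0xffffffff */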

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/sparc/net/bpf_jit_comp_64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 0a3c18e39199..6c482685dc6c 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -856,6 +856,7 @@ static void emit_tail_call(struct jit_ctx *ctx)
 
 	ctx->saw_tail_call = true;
 
+	emit_alu_K(SRL, bpf_index, 0, ctx);
 	off = offsetof(struct bpf_array, map.max_entries);
 	emit(LD32 | IMMED | RS1(bpf_array) | S13(off) | RD(tmp), ctx);
 	emit_cmp(bpf_index, tmp, ctx);
-- 
2.39.1


* [PATCH bpf-next v3 07/12] bpf, arm: Use bpf_jit_get_func_addr()
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (5 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 06/12] bpf: sparc64: Use 32-bit tail call index Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 08/12] bpf: sparc64: " Ilya Leoshkevich
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

Preparation for moving kfunc address from bpf_insn.imm.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/arm/net/bpf_jit_32.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6a1c9fca5260..b9a7f7bba811 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -1345,7 +1345,8 @@ static void build_epilogue(struct jit_ctx *ctx)
  *	>0 - Successfully JITed a 16-byte eBPF instruction
  *	<0 - Failed to JIT.
  */
-static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+		      bool extra_pass)
 {
 	const u8 code = insn->code;
 	const s8 *dst = bpf2a32[insn->dst_reg];
@@ -1783,7 +1784,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		const s8 *r3 = bpf2a32[BPF_REG_3];
 		const s8 *r4 = bpf2a32[BPF_REG_4];
 		const s8 *r5 = bpf2a32[BPF_REG_5];
-		const u32 func = (u32)__bpf_call_base + (u32)imm;
+		bool func_addr_fixed;
+		u64 func;
+		int err;
+
+		err = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+					    &func, &func_addr_fixed);
+		if (err)
+			return err;
 
 		emit_a32_mov_r64(true, r0, r1, ctx);
 		emit_a32_mov_r64(true, r1, r2, ctx);
@@ -1791,7 +1799,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_push_r64(r4, ctx);
 		emit_push_r64(r3, ctx);
 
-		emit_a32_mov_i(tmp[1], func, ctx);
+		emit_a32_mov_i(tmp[1], (u32)func, ctx);
 		emit_blx_r(tmp[1], ctx);
 
 		emit(ARM_ADD_I(ARM_SP, ARM_SP, imm8m(24)), ctx); // callee clean
@@ -1826,7 +1834,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	return 0;
 }
 
-static int build_body(struct jit_ctx *ctx)
+static int build_body(struct jit_ctx *ctx, bool extra_pass)
 {
 	const struct bpf_prog *prog = ctx->prog;
 	unsigned int i;
@@ -1835,7 +1843,7 @@ static int build_body(struct jit_ctx *ctx)
 		const struct bpf_insn *insn = &(prog->insnsi[i]);
 		int ret;
 
-		ret = build_insn(insn, ctx);
+		ret = build_insn(insn, ctx, extra_pass);
 
 		/* It's used with loading the 64 bit immediate value. */
 		if (ret > 0) {
@@ -1880,6 +1888,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	struct jit_ctx ctx;
 	unsigned int tmp_idx;
 	unsigned int image_size;
+	bool extra_pass;
 	u8 *image_ptr;
 
 	/* If BPF JIT was not enabled then we must fall back to
@@ -1901,6 +1910,10 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		prog = tmp;
 	}
 
+	extra_pass = prog->aux->jit_data;
+	if (!extra_pass)
+		prog->aux->jit_data = bpf_int_jit_compile;
+
 	memset(&ctx, 0, sizeof(ctx));
 	ctx.prog = prog;
 	ctx.cpu_architecture = cpu_architecture();
@@ -1924,7 +1937,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * being successful in the second pass, so just fall back
 	 * to the interpreter.
 	 */
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		prog = orig_prog;
 		goto out_off;
 	}
@@ -1982,7 +1995,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	/* If building the body of the JITed code fails somehow,
 	 * we fall back to the interpretation.
 	 */
-	if (build_body(&ctx) < 0) {
+	if (build_body(&ctx, extra_pass) < 0) {
 		image_ptr = NULL;
 		bpf_jit_binary_free(header);
 		prog = orig_prog;
-- 
2.39.1


* [PATCH bpf-next v3 08/12] bpf: sparc64: Use bpf_jit_get_func_addr()
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (6 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 07/12] bpf, arm: Use bpf_jit_get_func_addr() Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-23  9:31   ` Jiri Olsa
  2023-02-22 22:37 ` [PATCH bpf-next v3 09/12] bpf: x86: " Ilya Leoshkevich
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich, David S . Miller

Preparation for moving kfunc address from bpf_insn.imm.

Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/sparc/net/bpf_jit_comp_64.c | 20 ++++++++++++++------
 arch/x86/net/bpf_jit_comp32.c    |  7 +++++++
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 6c482685dc6c..b23083776718 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -893,7 +893,8 @@ static void emit_tail_call(struct jit_ctx *ctx)
 	emit_nop(ctx);
 }
 
-static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
+		      bool extra_pass)
 {
 	const u8 code = insn->code;
 	const u8 dst = bpf2sparc[insn->dst_reg];
@@ -1214,7 +1215,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* function call */
 	case BPF_JMP | BPF_CALL:
 	{
-		u8 *func = ((u8 *)__bpf_call_base) + imm;
+		bool func_addr_fixed;
+		u64 func;
+		int err;
+
+		err = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
+					    &func, &func_addr_fixed);
+		if (err)
+			return err;
 
 		ctx->saw_call = true;
 
@@ -1445,7 +1453,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	return 0;
 }
 
-static int build_body(struct jit_ctx *ctx)
+static int build_body(struct jit_ctx *ctx, bool extra_pass)
 {
 	const struct bpf_prog *prog = ctx->prog;
 	int i;
@@ -1455,7 +1463,7 @@ static int build_body(struct jit_ctx *ctx)
 		int ret;
 
 		ctx->offset[i] = ctx->idx;
-		ret = build_insn(insn, ctx);
+		ret = build_insn(insn, ctx, extra_pass);
 		if (ret < 0)
 			return ret;
 
@@ -1549,7 +1557,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		ctx.idx = 0;
 
 		build_prologue(&ctx);
-		if (build_body(&ctx)) {
+		if (build_body(&ctx, extra_pass)) {
 			prog = orig_prog;
 			goto out_off;
 		}
@@ -1586,7 +1594,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 
 	build_prologue(&ctx);
 
-	if (build_body(&ctx)) {
+	if (build_body(&ctx, extra_pass)) {
 		bpf_jit_binary_free(header);
 		prog = orig_prog;
 		goto out_off;
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 429a89c5468b..0abb4d6c9dec 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -2091,6 +2091,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			if (insn->src_reg == BPF_PSEUDO_CALL)
 				goto notyet;
 
+			err = bpf_jit_get_func_addr(bpf_prog, insn, extra_pass,
+						    &func_addr,
+						    &func_addr_fixed);
+			if (err)
+				return err;
+			func = (u8 *)(unsigned long)func_addr;
+
 			if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
 				int err;
 
-- 
2.39.1


* [PATCH bpf-next v3 09/12] bpf: x86: Use bpf_jit_get_func_addr()
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (7 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 08/12] bpf: sparc64: " Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 10/12] bpf, x86_32: " Ilya Leoshkevich
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

Preparation for moving kfunc address from bpf_insn.imm.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/x86/net/bpf_jit_comp.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 1056bbf55b17..f722f431ba6f 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -964,7 +964,8 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
 #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
 
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
-		  int oldproglen, struct jit_context *ctx, bool jmp_padding)
+		  int oldproglen, struct jit_context *ctx, bool jmp_padding,
+		  bool extra_pass)
 {
 	bool tail_call_reachable = bpf_prog->aux->tail_call_reachable;
 	struct bpf_insn *insn = bpf_prog->insnsi;
@@ -1000,9 +1001,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
+		bool func_addr_fixed;
 		u8 b2 = 0, b3 = 0;
 		u8 *start_of_ldx;
 		s64 jmp_offset;
+		u64 func_addr;
 		s16 insn_off;
 		u8 jmp_cond;
 		u8 *func;
@@ -1536,7 +1539,12 @@ st:			if (is_imm8(insn->off))
 		case BPF_JMP | BPF_CALL: {
 			int offs;
 
-			func = (u8 *) __bpf_call_base + imm32;
+			err = bpf_jit_get_func_addr(bpf_prog, insn, extra_pass,
+						    &func_addr,
+						    &func_addr_fixed);
+			if (err < 0)
+				return err;
+			func = (u8 *)(unsigned long)func_addr;
 			if (tail_call_reachable) {
 				/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
 				EMIT3_off32(0x48, 0x8B, 0x85,
@@ -2518,7 +2526,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	for (pass = 0; pass < MAX_PASSES || image; pass++) {
 		if (!padding && pass >= PADDING_PASSES)
 			padding = true;
-		proglen = do_jit(prog, addrs, image, rw_image, oldproglen, &ctx, padding);
+		proglen = do_jit(prog, addrs, image, rw_image, oldproglen, &ctx,
+				 padding, extra_pass);
 		if (proglen <= 0) {
 out_image:
 			image = NULL;
-- 
2.39.1


* [PATCH bpf-next v3 10/12] bpf, x86_32: Use bpf_jit_get_func_addr()
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (8 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 09/12] bpf: x86: " Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 22:37 ` [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

Preparation for moving kfunc address from bpf_insn.imm.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/x86/net/bpf_jit_comp32.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 0abb4d6c9dec..998d9bebe4ae 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -1567,7 +1567,7 @@ static u8 get_cond_jmp_opcode(const u8 op, bool is_cmp_lo)
  *	- 0-2 jit-insns (3 bytes each) to handle the return value.
  */
 static int emit_kfunc_call(const struct bpf_prog *bpf_prog, u8 *end_addr,
-			   const struct bpf_insn *insn, u8 **pprog)
+			   const struct bpf_insn *insn, u8 *func, u8 **pprog)
 {
 	const u8 arg_regs[] = { IA32_EAX, IA32_EDX, IA32_ECX };
 	int i, cnt = 0, first_stack_regno, last_stack_regno;
@@ -1628,7 +1628,7 @@ static int emit_kfunc_call(const struct bpf_prog *bpf_prog, u8 *end_addr,
 	if (fm->ret_size)
 		end_addr -= 3;
 
-	jmp_offset = (u8 *)__bpf_call_base + insn->imm - end_addr;
+	jmp_offset = func - end_addr;
 	if (!is_simm32(jmp_offset)) {
 		pr_err("unsupported BPF kernel function jmp_offset:%lld\n",
 		       jmp_offset);
@@ -1657,7 +1657,7 @@ static int emit_kfunc_call(const struct bpf_prog *bpf_prog, u8 *end_addr,
 }
 
 static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
-		  int oldproglen, struct jit_context *ctx)
+		  int oldproglen, struct jit_context *ctx, bool extra_pass)
 {
 	struct bpf_insn *insn = bpf_prog->insnsi;
 	int insn_cnt = bpf_prog->len;
@@ -2087,6 +2087,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			const u8 *r3 = bpf2ia32[BPF_REG_3];
 			const u8 *r4 = bpf2ia32[BPF_REG_4];
 			const u8 *r5 = bpf2ia32[BPF_REG_5];
+			bool func_addr_fixed;
+			u64 func_addr;
+			int err;
 
 			if (insn->src_reg == BPF_PSEUDO_CALL)
 				goto notyet;
@@ -2103,14 +2106,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 
 				err = emit_kfunc_call(bpf_prog,
 						      image + addrs[i],
-						      insn, &prog);
+						      insn, func, &prog);
 
 				if (err)
 					return err;
 				break;
 			}
 
-			func = (u8 *) __bpf_call_base + imm32;
 			jmp_offset = func - (image + addrs[i]);
 
 			if (!imm32 || !is_simm32(jmp_offset)) {
@@ -2533,6 +2535,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	struct jit_context ctx = {};
 	bool tmp_blinded = false;
 	u8 *image = NULL;
+	bool extra_pass;
 	int *addrs;
 	int pass;
 	int i;
@@ -2552,6 +2555,10 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		prog = tmp;
 	}
 
+	extra_pass = prog->aux->jit_data;
+	if (!extra_pass)
+		prog->aux->jit_data = bpf_int_jit_compile;
+
 	addrs = kmalloc_array(prog->len, sizeof(*addrs), GFP_KERNEL);
 	if (!addrs) {
 		prog = orig_prog;
@@ -2575,7 +2582,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * pass to emit the final image.
 	 */
 	for (pass = 0; pass < 20 || image; pass++) {
-		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+		proglen = do_jit(prog, addrs, image, oldproglen, &ctx,
+				 extra_pass);
 		if (proglen <= 0) {
 out_image:
 			image = NULL;
-- 
2.39.1


* [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (9 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 10/12] bpf, x86_32: " Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-22 23:43   ` Stanislav Fomichev
  2023-02-22 22:37 ` [PATCH bpf-next v3 12/12] selftests/bpf: Trim DENYLIST.s390x Ilya Leoshkevich
  2023-02-23 17:17 ` [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Alexei Starovoitov
  12 siblings, 1 reply; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

test_ksyms_module fails to emit a kfunc call targeting a module on
s390x, because the verifier stores the difference between the kfunc
address and __bpf_call_base in bpf_insn.imm, which is s32, and modules
are roughly (1 << 42) bytes away from the kernel on s390x.

Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
and storing the absolute address in bpf_kfunc_desc, which JITs retrieve
as usual by calling bpf_jit_get_func_addr().

Introduce bpf_get_kfunc_addr() instead of exposing both
find_kfunc_desc() and struct bpf_kfunc_desc.

This also fixes the problem with XDP metadata functions outlined in
the description of commit 63d7b53ab59f ("s390/bpf: Implement
bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
id lookups. This eliminates the inconsistency between "abstract" XDP
metadata functions' BTF ids and their concrete addresses.
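
In short, the new encoding keeps insn->imm as the BTF id and insn->off
as the offset identifying the (vmlinux or module) BTF, and the pair is
resolved to an address only at JIT time (sketch of the new branch in
bpf_jit_get_func_addr(), per the hunk below):

	case BPF_PSEUDO_KFUNC_CALL:
		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
		if (err)
			return err;
		fixed = true;
		break;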

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 include/linux/bpf.h   |  2 ++
 kernel/bpf/core.c     | 21 ++++++++++--
 kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
 3 files changed, 44 insertions(+), 58 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 520b238abd5a..e521eae334ea 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2234,6 +2234,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr);
 struct bpf_core_ctx {
 	struct bpf_verifier_log *log;
 	const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 933869983e2a..4d51782f17ab 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 {
 	s16 off = insn->off;
 	s32 imm = insn->imm;
+	bool fixed;
 	u8 *addr;
+	int err;
 
-	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
-	if (!*func_addr_fixed) {
+	switch (insn->src_reg) {
+	case BPF_PSEUDO_CALL:
 		/* Place-holder address till the last pass has collected
 		 * all addresses for JITed subprograms in which case we
 		 * can pick them up from prog->aux.
@@ -1200,15 +1202,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 			addr = (u8 *)prog->aux->func[off]->bpf_func;
 		else
 			return -EINVAL;
-	} else {
+		fixed = false;
+		break;
+	case 0:
 		/* Address of a BPF helper call. Since part of the core
 		 * kernel, it's always at a fixed location. __bpf_call_base
 		 * and the helper with imm relative to it are both in core
 		 * kernel.
 		 */
 		addr = (u8 *)__bpf_call_base + imm;
+		fixed = true;
+		break;
+	case BPF_PSEUDO_KFUNC_CALL:
+		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+		if (err)
+			return err;
+		fixed = true;
+		break;
+	default:
+		return -EINVAL;
 	}
 
+	*func_addr_fixed = fixed;
 	*func_addr = (unsigned long)addr;
 	return 0;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 574d2dfc6ada..6d4632476c9c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
 struct bpf_kfunc_desc {
 	struct btf_func_model func_model;
 	u32 func_id;
-	s32 imm;
 	u16 offset;
+	unsigned long addr;
 };
 
 struct bpf_kfunc_btf {
@@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
 		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr)
+{
+	const struct bpf_kfunc_desc *desc;
+
+	desc = find_kfunc_desc(prog, func_id, offset);
+	if (!desc)
+		return -EFAULT;
+
+	*func_addr = (u8 *)desc->addr;
+	return 0;
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
 					 s16 offset)
 {
@@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
 	struct btf *desc_btf;
-	unsigned long call_imm;
 	unsigned long addr;
+	void *xdp_kfunc;
 	int err;
 
 	prog_aux = env->prog->aux;
@@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
-	call_imm = BPF_CALL_IMM(addr);
-	/* Check whether or not the relative offset overflows desc->imm */
-	if ((unsigned long)(s32)call_imm != call_imm) {
-		verbose(env, "address of kernel function %s is out of range\n",
-			func_name);
-		return -EINVAL;
-	}
-
 	if (bpf_dev_bound_kfunc_id(func_id)) {
 		err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
 		if (err)
 			return err;
+
+		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+		if (xdp_kfunc)
+			addr = (unsigned long)xdp_kfunc;
+		/* fallback to default kfunc when not supported by netdev */
 	}
 
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = call_imm;
 	desc->offset = offset;
+	desc->addr = addr;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
@@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	return err;
 }
 
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
-{
-	const struct bpf_kfunc_desc *d0 = a;
-	const struct bpf_kfunc_desc *d1 = b;
-
-	if (d0->imm > d1->imm)
-		return 1;
-	else if (d0->imm < d1->imm)
-		return -1;
-	return 0;
-}
-
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
-{
-	struct bpf_kfunc_desc_tab *tab;
-
-	tab = prog->aux->kfunc_tab;
-	if (!tab)
-		return;
-
-	sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-	     kfunc_desc_cmp_by_imm, NULL);
-}
-
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
 {
 	return !!prog->aux->kfunc_tab;
@@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn)
 {
 	const struct bpf_kfunc_desc desc = {
-		.imm = insn->imm,
+		.func_id = insn->imm,
+		.offset = insn->off,
 	};
 	const struct bpf_kfunc_desc *res;
 	struct bpf_kfunc_desc_tab *tab;
 
 	tab = prog->aux->kfunc_tab;
 	res = bsearch(&desc, tab->descs, tab->nr_descs,
-		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 
 	return res ? &res->func_model : NULL;
 }
@@ -16267,7 +16254,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			    struct bpf_insn *insn_buf, int insn_idx, int *cnt)
 {
 	const struct bpf_kfunc_desc *desc;
-	void *xdp_kfunc;
 
 	if (!insn->imm) {
 		verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
@@ -16275,20 +16261,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	*cnt = 0;
-
-	if (bpf_dev_bound_kfunc_id(insn->imm)) {
-		xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
-		if (xdp_kfunc) {
-			insn->imm = BPF_CALL_IMM(xdp_kfunc);
-			return 0;
-		}
-
-		/* fallback to default kfunc when not supported by netdev */
-	}
-
-	/* insn->imm has the btf func_id. Replace it with
-	 * an address (relative to __bpf_call_base).
-	 */
 	desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
 	if (!desc) {
 		verbose(env, "verifier internal error: kernel function descriptor not found for func_id %u\n",
@@ -16296,7 +16268,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return -EFAULT;
 	}
 
-	insn->imm = desc->imm;
 	if (insn->off)
 		return 0;
 	if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
@@ -16850,8 +16821,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		}
 	}
 
-	sort_kfunc_descs_by_imm(env->prog);
-
 	return 0;
 }
 
-- 
2.39.1


* [PATCH bpf-next v3 12/12] selftests/bpf: Trim DENYLIST.s390x
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (10 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-22 22:37 ` Ilya Leoshkevich
  2023-02-23 17:17 ` [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Alexei Starovoitov
  12 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-22 22:37 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev, Ilya Leoshkevich

With commit 1a280f48c0e4 ("s390/kprobes: replace kretprobe with
rethook") and commit 2213d44e140f ("s390/syscalls: get rid of system
call alias functions") merged, and the kfunc range issues fixed by this
series, only two known test_progs failures remain on s390x.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 tools/testing/selftests/bpf/DENYLIST.s390x | 20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index b89eb87034e4..a17baf8c6fd7 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -1,24 +1,4 @@
 # TEMPORARY
 # Alphabetical order
-bloom_filter_map                         # failed to find kernel BTF type ID of '__x64_sys_getpgid': -3                (?)
-bpf_cookie                               # failed to open_and_load program: -524 (trampoline)
-bpf_loop                                 # attaches to __x64_sys_nanosleep
-cgrp_local_storage                       # prog_attach unexpected error: -524                                          (trampoline)
-fexit_sleep                              # fexit_skel_load fexit skeleton failed                                       (trampoline)
 get_stack_raw_tp                         # user_stack corrupted user stack                                             (no backchain userspace)
-kprobe_multi_bench_attach                # bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test                        # relies on fentry
-ksyms_module                             # test_ksyms_module__open_and_load unexpected error: -9                       (?)
-ksyms_module_libbpf                      # JIT does not support calling kernel function                                (kfunc)
-ksyms_module_lskel                       # test_ksyms_module_lskel__open_and_load unexpected error: -9                 (?)
-module_attach                            # skel_attach skeleton attach failed: -524                                    (trampoline)
-ringbuf                                  # skel_load skeleton load failed                                              (?)
 stacktrace_build_id                      # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2                   (?)
-test_lsm                                 # attach unexpected error: -524                                               (trampoline)
-trace_printk                             # trace_printk__load unexpected error: -2 (errno 2)                           (?)
-trace_vprintk                            # trace_vprintk__open_and_load unexpected error: -9                           (?)
-unpriv_bpf_disabled                      # fentry
-user_ringbuf                             # failed to find kernel BTF type ID of '__s390x_sys_prctl': -3                (?)
-verif_stats                              # trace_vprintk__open_and_load unexpected error: -9                           (?)
-xdp_bonding                              # failed to auto-attach program 'trace_on_entry': -524                        (trampoline)
-xdp_metadata                             # JIT does not support calling kernel function                                (kfunc)
-- 
2.39.1


* Re: [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-22 22:37 ` [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
@ 2023-02-22 23:43   ` Stanislav Fomichev
  2023-02-23  8:39     ` Ilya Leoshkevich
  0 siblings, 1 reply; 21+ messages in thread
From: Stanislav Fomichev @ 2023-02-22 23:43 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> test_ksyms_module fails to emit a kfunc call targeting a module on
> s390x, because the verifier stores the difference between kfunc
> address and __bpf_call_base in bpf_insn.imm, which is s32, and modules
> are roughly (1 << 42) bytes away from the kernel on s390x.
>
> Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> and storing the absolute address in bpf_kfunc_desc, which JITs retrieve
> as usual by calling bpf_jit_get_func_addr().
>
> Introduce bpf_get_kfunc_addr() instead of exposing both
> find_kfunc_desc() and struct bpf_kfunc_desc.
>
> This also fixes the problem with XDP metadata functions outlined in
> the description of commit 63d7b53ab59f ("s390/bpf: Implement
> bpf_jit_supports_kfunc_call()") by replacing address lookups with BTF
> id lookups. This eliminates the inconsistency between "abstract" XDP
> metadata functions' BTF ids and their concrete addresses.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>

Acked-by: Stanislav Fomichev <sdf@google.com>

With a nit below (and an unrelated question).

I'll wait a bit for the buildbots to finish before acking the rest.
But the jit (except the sparc quirks) and selftest changes also make sense to me.

> ---
>  include/linux/bpf.h   |  2 ++
>  kernel/bpf/core.c     | 21 ++++++++++--
>  kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
>  3 files changed, 44 insertions(+), 58 deletions(-)
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 520b238abd5a..e521eae334ea 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -2234,6 +2234,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
>  const struct btf_func_model *
>  bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
>                          const struct bpf_insn *insn);
> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> +                      u8 **func_addr);
>  struct bpf_core_ctx {
>         struct bpf_verifier_log *log;
>         const struct btf *btf;
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 933869983e2a..4d51782f17ab 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
>  {
>         s16 off = insn->off;
>         s32 imm = insn->imm;

[..]

> +       bool fixed;

nit: do we really need that extra fixed bool? Why not directly
*func_addr_fixed = true/false in all the places?

>         u8 *addr;
> +       int err;
>
> -       *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> -       if (!*func_addr_fixed) {
> +       switch (insn->src_reg) {
> +       case BPF_PSEUDO_CALL:
>                 /* Place-holder address till the last pass has collected
>                  * all addresses for JITed subprograms in which case we
>                  * can pick them up from prog->aux.
> @@ -1200,15 +1202,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
>                         addr = (u8 *)prog->aux->func[off]->bpf_func;
>                 else
>                         return -EINVAL;
> -       } else {
> +               fixed = false;
> +               break;
> +       case 0:
>                 /* Address of a BPF helper call. Since part of the core
>                  * kernel, it's always at a fixed location. __bpf_call_base
>                  * and the helper with imm relative to it are both in core
>                  * kernel.
>                  */
>                 addr = (u8 *)__bpf_call_base + imm;
> +               fixed = true;
> +               break;
> +       case BPF_PSEUDO_KFUNC_CALL:
> +               err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> +               if (err)
> +                       return err;
> +               fixed = true;
> +               break;
> +       default:
> +               return -EINVAL;
>         }
>
> +       *func_addr_fixed = fixed;
>         *func_addr = (unsigned long)addr;
>         return 0;
>  }
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 574d2dfc6ada..6d4632476c9c 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2115,8 +2115,8 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
>  struct bpf_kfunc_desc {
>         struct btf_func_model func_model;
>         u32 func_id;
> -       s32 imm;
>         u16 offset;

[..]

> +       unsigned long addr;

Do we have some canonical type to store the address? I was using void *
in bpf_dev_bound_resolve_kfunc, but we are doing ulong here. We seem
to be using u64 / void * / unsigned long inconsistently.

Also, maybe move it up a bit? To turn u32+u16+gap+u64 into u64+u32+u16+padding?

>  };
>
>  struct bpf_kfunc_btf {
> @@ -2166,6 +2166,19 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
>                        sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
>  }
>
> +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> +                      u8 **func_addr)
> +{
> +       const struct bpf_kfunc_desc *desc;
> +
> +       desc = find_kfunc_desc(prog, func_id, offset);
> +       if (!desc)
> +               return -EFAULT;
> +
> +       *func_addr = (u8 *)desc->addr;
> +       return 0;
> +}
> +
>  static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
>                                          s16 offset)
>  {
> @@ -2261,8 +2274,8 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>         struct bpf_kfunc_desc *desc;
>         const char *func_name;
>         struct btf *desc_btf;
> -       unsigned long call_imm;
>         unsigned long addr;
> +       void *xdp_kfunc;
>         int err;
>
>         prog_aux = env->prog->aux;
> @@ -2346,24 +2359,21 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>                 return -EINVAL;
>         }
>
> -       call_imm = BPF_CALL_IMM(addr);
> -       /* Check whether or not the relative offset overflows desc->imm */
> -       if ((unsigned long)(s32)call_imm != call_imm) {
> -               verbose(env, "address of kernel function %s is out of range\n",
> -                       func_name);
> -               return -EINVAL;
> -       }
> -
>         if (bpf_dev_bound_kfunc_id(func_id)) {
>                 err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
>                 if (err)
>                         return err;
> +
> +               xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
> +               if (xdp_kfunc)
> +                       addr = (unsigned long)xdp_kfunc;
> +               /* fallback to default kfunc when not supported by netdev */
>         }
>
>         desc = &tab->descs[tab->nr_descs++];
>         desc->func_id = func_id;
> -       desc->imm = call_imm;
>         desc->offset = offset;
> +       desc->addr = addr;
>         err = btf_distill_func_proto(&env->log, desc_btf,
>                                      func_proto, func_name,
>                                      &desc->func_model);
> @@ -2373,30 +2383,6 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
>         return err;
>  }
>
> -static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
> -{
> -       const struct bpf_kfunc_desc *d0 = a;
> -       const struct bpf_kfunc_desc *d1 = b;
> -
> -       if (d0->imm > d1->imm)
> -               return 1;
> -       else if (d0->imm < d1->imm)
> -               return -1;
> -       return 0;
> -}
> -
> -static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
> -{
> -       struct bpf_kfunc_desc_tab *tab;
> -
> -       tab = prog->aux->kfunc_tab;
> -       if (!tab)
> -               return;
> -
> -       sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
> -            kfunc_desc_cmp_by_imm, NULL);
> -}
> -
>  bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
>  {
>         return !!prog->aux->kfunc_tab;
> @@ -2407,14 +2393,15 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
>                          const struct bpf_insn *insn)
>  {
>         const struct bpf_kfunc_desc desc = {
> -               .imm = insn->imm,
> +               .func_id = insn->imm,
> +               .offset = insn->off,
>         };
>         const struct bpf_kfunc_desc *res;
>         struct bpf_kfunc_desc_tab *tab;
>
>         tab = prog->aux->kfunc_tab;
>         res = bsearch(&desc, tab->descs, tab->nr_descs,
> -                     sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
> +                     sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
>
>         return res ? &res->func_model : NULL;
>  }
> @@ -16267,7 +16254,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>                             struct bpf_insn *insn_buf, int insn_idx, int *cnt)
>  {
>         const struct bpf_kfunc_desc *desc;
> -       void *xdp_kfunc;
>
>         if (!insn->imm) {
>                 verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
> @@ -16275,20 +16261,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>         }
>
>         *cnt = 0;
> -
> -       if (bpf_dev_bound_kfunc_id(insn->imm)) {
> -               xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, insn->imm);
> -               if (xdp_kfunc) {
> -                       insn->imm = BPF_CALL_IMM(xdp_kfunc);
> -                       return 0;
> -               }
> -
> -               /* fallback to default kfunc when not supported by netdev */
> -       }
> -
> -       /* insn->imm has the btf func_id. Replace it with
> -        * an address (relative to __bpf_call_base).
> -        */
>         desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
>         if (!desc) {
>                 verbose(env, "verifier internal error: kernel function descriptor not found for func_id %u\n",
> @@ -16296,7 +16268,6 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>                 return -EFAULT;
>         }
>
> -       insn->imm = desc->imm;
>         if (insn->off)
>                 return 0;
>         if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
> @@ -16850,8 +16821,6 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>                 }
>         }
>
> -       sort_kfunc_descs_by_imm(env->prog);
> -
>         return 0;
>  }
>
> --
> 2.39.1
>
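
For illustration, the range check deleted above (the "address of kernel
function ... is out of range" case in add_kfunc_call) trips exactly when the
kernel-to-kfunc delta does not survive the s32 round-trip. A minimal
userspace sketch, where the 1UL << 42 module offset is made up to mirror the
s390x layout described in the commit message:

	#include <stdio.h>

	int main(void)
	{
		/* what BPF_CALL_IMM(addr) computes for a far-away module kfunc */
		unsigned long call_imm = 1UL << 42;

		/* the old check: does the delta fit into bpf_insn.imm (s32)? */
		if ((unsigned long)(int)call_imm != call_imm)
			printf("address of kernel function is out of range\n");
		return 0;
	}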

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-22 23:43   ` Stanislav Fomichev
@ 2023-02-23  8:39     ` Ilya Leoshkevich
  0 siblings, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-23  8:39 UTC (permalink / raw)
  To: Stanislav Fomichev
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa

On Wed, 2023-02-22 at 15:43 -0800, Stanislav Fomichev wrote:
> On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> wrote:
> > 
> > test_ksyms_module fails to emit a kfunc call targeting a module on
> > s390x, because the verifier stores the difference between kfunc
> > address and __bpf_call_base in bpf_insn.imm, which is s32, and
> > modules are roughly (1 << 42) bytes away from the kernel on s390x.
> > 
> > Fix by keeping BTF id in bpf_insn.imm for BPF_PSEUDO_KFUNC_CALLs,
> > and storing the absolute address in bpf_kfunc_desc, which JITs
> > retrieve as usual by calling bpf_jit_get_func_addr().
> > 
> > Introduce bpf_get_kfunc_addr() instead of exposing both
> > find_kfunc_desc() and struct bpf_kfunc_desc.
> > 
> > This also fixes the problem with XDP metadata functions outlined in
> > the description of commit 63d7b53ab59f ("s390/bpf: Implement
> > bpf_jit_supports_kfunc_call()") by replacing address lookups with
> > BTF id lookups. This eliminates the inconsistency between "abstract"
> > XDP metadata functions' BTF ids and their concrete addresses.
> > 
> > Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> 
> Acked-by: Stanislav Fomichev <sdf@google.com>
> 
> With a nit below (and an unrelated question).
> 
> I'll wait a bit for the buildbots to finish before acking the rest.
> But the jit (except sparc quirks) and selftest changes also make
> sense to me.
> 
> > ---
> >  include/linux/bpf.h   |  2 ++
> >  kernel/bpf/core.c     | 21 ++++++++++--
> >  kernel/bpf/verifier.c | 79 +++++++++++++------------------------------
> >  3 files changed, 44 insertions(+), 58 deletions(-)
> > 
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index 520b238abd5a..e521eae334ea 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -2234,6 +2234,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
> >  const struct btf_func_model *
> >  bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
> >                          const struct bpf_insn *insn);
> > +int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
> > +                      u8 **func_addr);
> >  struct bpf_core_ctx {
> >         struct bpf_verifier_log *log;
> >         const struct btf *btf;
> > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > index 933869983e2a..4d51782f17ab 100644
> > --- a/kernel/bpf/core.c
> > +++ b/kernel/bpf/core.c
> > @@ -1185,10 +1185,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
> >  {
> >         s16 off = insn->off;
> >         s32 imm = insn->imm;
> 
> [..]
> 
> > +       bool fixed;
> 
> nit: do we really need that extra fixed bool? Why not directly
> *func_addr_fixed = true/false in all the places?

I introduced it in order to avoid touching func_addr_fixed if there
is an error, but actually that's not necessary - it's assigned after
all checks. I will drop it in v4.
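
For the record, the result would then look roughly like this (untested
sketch of the v4 shape, with the subprog lookup elided):

	switch (insn->src_reg) {
	case BPF_PSEUDO_CALL:
		/* ... subprog address lookup as above ... */
		*func_addr_fixed = false;
		break;
	case 0:
		addr = (u8 *)__bpf_call_base + imm;
		*func_addr_fixed = true;
		break;
	case BPF_PSEUDO_KFUNC_CALL:
		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
		if (err)
			return err;
		*func_addr_fixed = true;
		break;
	default:
		return -EINVAL;
	}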

> >         u8 *addr;
> > +       int err;
> > 
> > -       *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
> > -       if (!*func_addr_fixed) {
> > +       switch (insn->src_reg) {
> > +       case BPF_PSEUDO_CALL:
> >                 /* Place-holder address till the last pass has collected
> >                  * all addresses for JITed subprograms in which case we
> >                  * can pick them up from prog->aux.
> > @@ -1200,15 +1202,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
> >                         addr = (u8 *)prog->aux->func[off]->bpf_func;
> >                 else
> >                         return -EINVAL;
> > -       } else {
> > +               fixed = false;
> > +               break;
> > +       case 0:
> >                 /* Address of a BPF helper call. Since part of the core
> >                  * kernel, it's always at a fixed location. __bpf_call_base
> >                  * and the helper with imm relative to it are both in core
> >                  * kernel.
> >                  */
> >                 addr = (u8 *)__bpf_call_base + imm;
> > +               fixed = true;
> > +               break;
> > +       case BPF_PSEUDO_KFUNC_CALL:
> > +               err = bpf_get_kfunc_addr(prog, imm, off, &addr);
> > +               if (err)
> > +                       return err;
> > +               fixed = true;
> > +               break;
> > +       default:
> > +               return -EINVAL;
> >         }
> > 
> > +       *func_addr_fixed = fixed;
> >         *func_addr = (unsigned long)addr;
> >         return 0;
> >  }
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 574d2dfc6ada..6d4632476c9c 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -2115,8 +2115,8 @@ static int add_subprog(struct
> > bpf_verifier_env *env, int off)
> >  struct bpf_kfunc_desc {
> >         struct btf_func_model func_model;
> >         u32 func_id;
> > -       s32 imm;
> >         u16 offset;
> 
> [..]
> 
> > +       unsigned long addr;
> 
> Do we have some canonical type to store the address? I was using void
> * in bpf_dev_bound_resolve_kfunc, but we are doing ulong here. We seem
> to be using u64 / void * / unsigned long inconsistently.

IIUC u64 is for BPF progs [1]. I've seen unsigned long in a number of
places, e.g. kallsyms. My personal heuristic is that if we don't
dereference it on the C side, it can be unsigned long. But I don't have
a strong opinion on this.

> Also, maybe move it up a bit? To turn u32+u16+gap+u64 into u64+u32+u16+padding?

You are right, we can do better here w.r.t. space efficiency:

struct bpf_kfunc_desc {
        struct btf_func_model      func_model;           /*     0    27 */

        /* XXX 1 byte hole, try to pack */

        u32                        func_id;              /*    28     4 */
        u16                        offset;               /*    32     2 */

        /* XXX 6 bytes hole, try to pack */

        long unsigned int          addr;                 /*    40     8 */
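
For concreteness, the two candidate orderings side by side; the numbers
follow from the sizes above (27-byte func_model, 8-byte aligned addr), so
this is arithmetic rather than a measurement:

	/* current order: 27 + 1 (hole) + 4 + 2 + 6 (hole) + 8 = 48 bytes */
	struct bpf_kfunc_desc {
		struct btf_func_model func_model;
		u32 func_id;
		u16 offset;
		unsigned long addr;
	};

	/* addr moved up: 27 + 5 (hole) + 8 + 4 + 2 + 2 (pad) = 48 bytes */
	struct bpf_kfunc_desc_reordered {	/* name only for comparison */
		struct btf_func_model func_model;
		unsigned long addr;
		u32 func_id;
		u16 offset;
	};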


[1]
https://lore.kernel.org/bpf/CAEf4BzaQJfB0Qh2Wn5wd9H0ZSURbzWBfKkav8xbkhozqTWXndw@mail.gmail.com/

Best regards,
Ilya

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 08/12] bpf: sparc64: Use bpf_jit_get_func_addr()
  2023-02-22 22:37 ` [PATCH bpf-next v3 08/12] bpf: sparc64: " Ilya Leoshkevich
@ 2023-02-23  9:31   ` Jiri Olsa
  0 siblings, 0 replies; 21+ messages in thread
From: Jiri Olsa @ 2023-02-23  9:31 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Stanislav Fomichev, David S . Miller

On Wed, Feb 22, 2023 at 11:37:10PM +0100, Ilya Leoshkevich wrote:
> Preparation for moving kfunc address from bpf_insn.imm.
> 
> Cc: David S. Miller <davem@davemloft.net>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
>  arch/sparc/net/bpf_jit_comp_64.c | 20 ++++++++++++++------
>  arch/x86/net/bpf_jit_comp32.c    |  7 +++++++
>  2 files changed, 21 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
> index 6c482685dc6c..b23083776718 100644
> --- a/arch/sparc/net/bpf_jit_comp_64.c
> +++ b/arch/sparc/net/bpf_jit_comp_64.c
> @@ -893,7 +893,8 @@ static void emit_tail_call(struct jit_ctx *ctx)
>  	emit_nop(ctx);
>  }
>  

SNIP

> diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
> index 429a89c5468b..0abb4d6c9dec 100644
> --- a/arch/x86/net/bpf_jit_comp32.c
> +++ b/arch/x86/net/bpf_jit_comp32.c
> @@ -2091,6 +2091,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
>  			if (insn->src_reg == BPF_PSEUDO_CALL)
>  				goto notyet;
>  
> +			err = bpf_jit_get_func_addr(bpf_prog, insn, extra_pass,
> +						    &func_addr,
> +						    &func_addr_fixed);
> +			if (err)
> +				return err;
> +			func = (u8 *)(unsigned long)func_addr;
> +
>  			if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
>  				int err;

looks like this hunk should be in:
  bpf, x86_32: Use bpf_jit_get_func_addr

jirka

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
                   ` (11 preceding siblings ...)
  2023-02-22 22:37 ` [PATCH bpf-next v3 12/12] selftests/bpf: Trim DENYLIST.s390x Ilya Leoshkevich
@ 2023-02-23 17:17 ` Alexei Starovoitov
  2023-02-23 20:42   ` Ilya Leoshkevich
  12 siblings, 1 reply; 21+ messages in thread
From: Alexei Starovoitov @ 2023-02-23 17:17 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev

On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> v2: https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
>           Drop the merged check_subprogs() cleanup.
>           Adjust arm, sparc and i386 JITs.
>           Fix a few portability issues in test_verifier.
>           Fix a few sparc64 issues.
>           Trim s390x denylist.

I don't think it's a good idea to change a bunch of JITs
that you cannot test just to address the s390 issue.
Please figure out an approach that none of the JITs need changes.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-23 17:17 ` [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Alexei Starovoitov
@ 2023-02-23 20:42   ` Ilya Leoshkevich
  2023-02-25  0:02     ` Alexei Starovoitov
  0 siblings, 1 reply; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-23 20:42 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev

On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> wrote:
> > 
> > v2:
> > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> >           Drop the merged check_subprogs() cleanup.
> >           Adjust arm, sparc and i386 JITs.
> >           Fix a few portability issues in test_verifier.
> >           Fix a few sparc64 issues.
> >           Trim s390x denylist.
> 
> I don't think it's a good idea to change a bunch of JITs
> that you cannot test just to address the s390 issue.
> Please figure out an approach that none of the JITs need changes.

What level of testing for these JITs would you find acceptable?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-23 20:42   ` Ilya Leoshkevich
@ 2023-02-25  0:02     ` Alexei Starovoitov
  2023-02-27 12:36       ` Ilya Leoshkevich
  2023-03-28 12:45       ` Jiri Olsa
  0 siblings, 2 replies; 21+ messages in thread
From: Alexei Starovoitov @ 2023-02-25  0:02 UTC (permalink / raw)
  To: Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev

On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> > wrote:
> > >
> > > v2:
> > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> > >           Drop the merged check_subprogs() cleanup.
> > >           Adjust arm, sparc and i386 JITs.
> > >           Fix a few portability issues in test_verifier.
> > >           Fix a few sparc64 issues.
> > >           Trim s390x denylist.
> >
> > I don't think it's a good idea to change a bunch of JITs
> > that you cannot test just to address the s390 issue.
> > Please figure out an approach that none of the JITs need changes.
>
> What level of testing for these JITs would you find acceptable?

Just find a way to avoid changing them.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-25  0:02     ` Alexei Starovoitov
@ 2023-02-27 12:36       ` Ilya Leoshkevich
  2023-03-28 12:45       ` Jiri Olsa
  1 sibling, 0 replies; 21+ messages in thread
From: Ilya Leoshkevich @ 2023-02-27 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Jiri Olsa,
	Stanislav Fomichev

On Fri, 2023-02-24 at 16:02 -0800, Alexei Starovoitov wrote:
> On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com>
> wrote:
> > 
> > On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > > On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich
> > > <iii@linux.ibm.com>
> > > wrote:
> > > > 
> > > > v2:
> > > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > > > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> > > >           Drop the merged check_subprogs() cleanup.
> > > >           Adjust arm, sparc and i386 JITs.
> > > >           Fix a few portability issues in test_verifier.
> > > >           Fix a few sparc64 issues.
> > > >           Trim s390x denylist.
> > > 
> > > I don't think it's a good idea to change a bunch of JITs
> > > that you cannot test just to address the s390 issue.
> > > Please figure out an approach that none of the JITs need changes.
> > 
> > What level of testing for these JITs would you find acceptable?
> 
> Just find a way to avoid changing them.

Ok. But please take a look at patches 1-6. They fix existing issues,
which were found by running test_verifier on arm and sparc64.
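
For reference, that selftest builds and runs from the kernel tree with
something like:

    $ make -C tools/testing/selftests/bpf test_verifier
    $ ./tools/testing/selftests/bpf/test_verifier

plus the usual ARCH=/CROSS_COMPILE= setup when targeting arm or sparc64.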

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs
  2023-02-25  0:02     ` Alexei Starovoitov
  2023-02-27 12:36       ` Ilya Leoshkevich
@ 2023-03-28 12:45       ` Jiri Olsa
  1 sibling, 0 replies; 21+ messages in thread
From: Jiri Olsa @ 2023-03-28 12:45 UTC (permalink / raw)
  To: Alexei Starovoitov, Ilya Leoshkevich
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Stanislav Fomichev

On Fri, Feb 24, 2023 at 04:02:50PM -0800, Alexei Starovoitov wrote:
> On Thu, Feb 23, 2023 at 12:43 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > On Thu, 2023-02-23 at 09:17 -0800, Alexei Starovoitov wrote:
> > > On Wed, Feb 22, 2023 at 2:37 PM Ilya Leoshkevich <iii@linux.ibm.com>
> > > wrote:
> > > >
> > > > v2:
> > > > https://lore.kernel.org/bpf/20230215235931.380197-1-iii@linux.ibm.com/
> > > > v2 -> v3: Drop BPF_HELPER_CALL (Alexei).
> > > >           Drop the merged check_subprogs() cleanup.
> > > >           Adjust arm, sparc and i386 JITs.
> > > >           Fix a few portability issues in test_verifier.
> > > >           Fix a few sparc64 issues.
> > > >           Trim s390x denylist.
> > >
> > > I don't think it's a good idea to change a bunch of JITs
> > > that you cannot test just to address the s390 issue.
> > > Please figure out an approach that none of the JITs need changes.
> >
> > What level of testing for these JITs would you find acceptable?
> 
> Just find a way to avoid changing them.

hi,
sending another stab at this

the idea is to use 'func_id' in insn->imm for the s390 arch and keep
the other archs using the current BPF_CALL_IMM(addr) value

this way the s390 arch is able to look up the kfunc_desc and use
the stored kfunc address

I added insn->off to the kfunc_desc sorting, which is not needed
for the !__s390x__ case, but it won't hurt... we can have that separated
as well if needed

the patch below is completely untested on s390x of course, but it
does not seem to break x86 ;-)

I think we could have a config option for that instead of using __s390x__

thoughts?

thanks,
jirka


---
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 2d8f3f639e68..b60945e135ee 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2296,6 +2296,8 @@ bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog);
 const struct btf_func_model *
 bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 			 const struct bpf_insn *insn);
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr);
 struct bpf_core_ctx {
 	struct bpf_verifier_log *log;
 	const struct btf *btf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index b297e9f60ca1..06459df0a8c0 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1186,10 +1186,12 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 {
 	s16 off = insn->off;
 	s32 imm = insn->imm;
+	bool fixed;
 	u8 *addr;
+	int err;
 
-	*func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL;
-	if (!*func_addr_fixed) {
+	switch (insn->src_reg) {
+	case BPF_PSEUDO_CALL:
 		/* Place-holder address till the last pass has collected
 		 * all addresses for JITed subprograms in which case we
 		 * can pick them up from prog->aux.
@@ -1201,15 +1203,28 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
 			addr = (u8 *)prog->aux->func[off]->bpf_func;
 		else
 			return -EINVAL;
-	} else {
+		fixed = false;
+		break;
+	case 0:
 		/* Address of a BPF helper call. Since part of the core
 		 * kernel, it's always at a fixed location. __bpf_call_base
 		 * and the helper with imm relative to it are both in core
 		 * kernel.
 		 */
 		addr = (u8 *)__bpf_call_base + imm;
+		fixed = true;
+		break;
+	case BPF_PSEUDO_KFUNC_CALL:
+		err = bpf_get_kfunc_addr(prog, imm, off, &addr);
+		if (err)
+			return err;
+		fixed = true;
+		break;
+	default:
+		return -EINVAL;
 	}
 
+	*func_addr_fixed = fixed;
 	*func_addr = (unsigned long)addr;
 	return 0;
 }
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 20eb2015842f..a83750542a09 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2443,6 +2443,7 @@ struct bpf_kfunc_desc {
 	u32 func_id;
 	s32 imm;
 	u16 offset;
+	unsigned long addr;
 };
 
 struct bpf_kfunc_btf {
@@ -2492,6 +2493,23 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
 		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
 }
 
+int bpf_get_kfunc_addr(const struct bpf_prog *prog, u32 func_id, u16 offset,
+		       u8 **func_addr)
+{
+#ifdef __s390x__
+	const struct bpf_kfunc_desc *desc;
+
+	desc = find_kfunc_desc(prog, func_id, offset);
+	if (!desc)
+		return -EFAULT;
+
+	*func_addr = (u8 *)desc->addr;
+#else
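+	/* On !s390x, fixup_kfunc_call() still rewrites insn->imm to
+	 * BPF_CALL_IMM(addr), so the 'func_id' argument here actually
+	 * carries that relative offset.
+	 */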
+	*func_addr = (u8 *)__bpf_call_base + func_id;
+#endif
+	return 0;
+}
+
 static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
 					 s16 offset)
 {
@@ -2672,6 +2690,9 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		return -EINVAL;
 	}
 
+#ifdef __s390x__
+	call_imm = func_id;
+#else
 	call_imm = BPF_CALL_IMM(addr);
 	/* Check whether or not the relative offset overflows desc->imm */
 	if ((unsigned long)(s32)call_imm != call_imm) {
@@ -2679,17 +2700,25 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 			func_name);
 		return -EINVAL;
 	}
+#endif
 
 	if (bpf_dev_bound_kfunc_id(func_id)) {
 		err = bpf_dev_bound_kfunc_check(&env->log, prog_aux);
 		if (err)
 			return err;
+#ifdef __s390x__
+		{
+			/* fallback to default kfunc when not supported by netdev */
+			void *xdp_kfunc = bpf_dev_bound_resolve_kfunc(env->prog, func_id);
+
+			if (xdp_kfunc)
+				addr = (unsigned long)xdp_kfunc;
+		}
+#endif
 	}
 
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
 	desc->imm = call_imm;
 	desc->offset = offset;
+	desc->addr = addr;
 	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
@@ -2699,19 +2728,15 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 	return err;
 }
 
-static int kfunc_desc_cmp_by_imm(const void *a, const void *b)
+static int kfunc_desc_cmp_by_imm_off(const void *a, const void *b)
 {
 	const struct bpf_kfunc_desc *d0 = a;
 	const struct bpf_kfunc_desc *d1 = b;
 
-	if (d0->imm > d1->imm)
-		return 1;
-	else if (d0->imm < d1->imm)
-		return -1;
-	return 0;
+	if (d0->imm != d1->imm)
+		return d0->imm < d1->imm ? -1 : 1;
+	/* offset is u16, so this subtraction cannot overflow */
+	return d0->offset - d1->offset;
 }
 
-static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
+static void sort_kfunc_descs_by_imm_off(struct bpf_prog *prog)
 {
 	struct bpf_kfunc_desc_tab *tab;
 
@@ -2720,7 +2745,7 @@ static void sort_kfunc_descs_by_imm(struct bpf_prog *prog)
 		return;
 
 	sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-	     kfunc_desc_cmp_by_imm, NULL);
+	     kfunc_desc_cmp_by_imm_off, NULL);
 }
 
 bool bpf_prog_has_kfunc_call(const struct bpf_prog *prog)
@@ -2734,13 +2759,14 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
 {
 	const struct bpf_kfunc_desc desc = {
 		.imm = insn->imm,
+		.offset = insn->off,
 	};
 	const struct bpf_kfunc_desc *res;
 	struct bpf_kfunc_desc_tab *tab;
 
 	tab = prog->aux->kfunc_tab;
 	res = bsearch(&desc, tab->descs, tab->nr_descs,
-		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm);
+		      sizeof(tab->descs[0]), kfunc_desc_cmp_by_imm_off);
 
 	return res ? &res->func_model : NULL;
 }
@@ -17886,7 +17912,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		}
 	}
 
-	sort_kfunc_descs_by_imm(env->prog);
+	sort_kfunc_descs_by_imm_off(env->prog);
 
 	return 0;
 }
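
For the config option mentioned above, a rough sketch (symbol name made up)
would be a hidden bool in kernel/bpf/Kconfig:

	config BPF_JIT_KFUNC_DESC_ADDR
		bool

selected from arch/s390/Kconfig via 'select BPF_JIT_KFUNC_DESC_ADDR if
BPF_JIT', with the #ifdef __s390x__ blocks above testing
CONFIG_BPF_JIT_KFUNC_DESC_ADDR instead.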

^ permalink raw reply related	[flat|nested] 21+ messages in thread

Thread overview: 21+ messages
2023-02-22 22:37 [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 01/12] selftests/bpf: Finish folding after BPF_FUNC_csum_diff Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 02/12] selftests/bpf: Fix test_verifier on 32-bit systems Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 03/12] sparc: Update maximum errno Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 04/12] bpf: sparc64: Emit fixed-length instructions for BPF_PSEUDO_FUNC Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 05/12] bpf: sparc64: Fix jumping to the first insn Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 06/12] bpf: sparc64: Use 32-bit tail call index Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 07/12] bpf, arm: Use bpf_jit_get_func_addr() Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 08/12] bpf: sparc64: " Ilya Leoshkevich
2023-02-23  9:31   ` Jiri Olsa
2023-02-22 22:37 ` [PATCH bpf-next v3 09/12] bpf: x86: " Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 10/12] bpf, x86_32: " Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 11/12] bpf: Support 64-bit pointers to kfuncs Ilya Leoshkevich
2023-02-22 23:43   ` Stanislav Fomichev
2023-02-23  8:39     ` Ilya Leoshkevich
2023-02-22 22:37 ` [PATCH bpf-next v3 12/12] selftests/bpf: Trim DENYLIST.s390x Ilya Leoshkevich
2023-02-23 17:17 ` [PATCH bpf-next v3 00/12] bpf: Support 64-bit pointers to kfuncs Alexei Starovoitov
2023-02-23 20:42   ` Ilya Leoshkevich
2023-02-25  0:02     ` Alexei Starovoitov
2023-02-27 12:36       ` Ilya Leoshkevich
2023-03-28 12:45       ` Jiri Olsa
