* [PATCH v3 bpf-next 00/18] Introduce BPF trampoline
@ 2019-11-08  6:40 Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers Alexei Starovoitov
                   ` (17 more replies)
  0 siblings, 18 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Introduce a BPF trampoline that works as a bridge between kernel functions,
BPF programs, and other BPF programs.

The first use case is fentry/fexit BPF programs that are roughly equivalent to
kprobe/kretprobe. Unlike k[ret]probe, there is practically zero overhead to
call a set of BPF programs before or after a kernel function.
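
As an illustration, a minimal fentry program in the style of the selftests
added later in this series could look like the sketch below. The target
function bpf_fentry_test1() comes from the test patches in this series; its
exact signature and the flat array-of-u64 view of the context are assumptions
made for the sake of the example (the trampoline stores the function arguments
as u64s on its stack and passes a pointer to them in R1):

  static volatile __u64 test1_hits;

  /* sketch: runs right before bpf_fentry_test1(int a) */
  SEC("fentry/bpf_fentry_test1")
  int trace_test1(__u64 *ctx)
  {
  	__u64 a = ctx[0];	/* first argument of the traced function */

  	if (a == 1)
  		test1_hits++;
  	return 0;
  }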

The second use case is heavily influenced by pain points in XDP development.
The BPF trampoline allows attaching a similar fentry/fexit BPF program to any
networking BPF program. It's now possible to see packets on the input and
output of any XDP, TC, lwt, or cgroup program without disturbing it. This
greatly helps BPF-based network troubleshooting.

The third use case of the BPF trampoline will be explored in follow-up
patches. The BPF trampoline will be used to dynamically link BPF programs.
It's a more generic mechanism than the arrays and linked lists of programs
used in tracing, networking, and cgroups. In many cases it can be used as a
replacement for bpf_tail_call-based program chaining.

v2->v3:
- Addressed Song's and Andrii's comments
- Fixed a few minor bugs discovered while testing
- Added one more libbpf patch

v1->v2:
- Addressed Andrii's comments
- Added more tests for fentry/fexit to kernel functions, including a stress
  test for the maximum number of progs per trampoline.
- Fixed a race in btf_resolve_helper_id()
- Added a patch to compare BTF types of functions arguments with actual types.
- Added support for attaching BPF program to another BPF program via trampoline
- Converted to use text_poke() API. That's the only viable mechanism to
  implement BPF-to-BPF attach. BPF-to-kernel attach can be refactored to use
  register_ftrace_direct() whenever it's available.

Most of the interesting details are in patches 2,3,14,15.

Alexei Starovoitov (18):
  bpf: refactor x86 JIT into helpers
  bpf: Add bpf_arch_text_poke() helper
  bpf: Introduce BPF trampoline
  libbpf: Introduce btf__find_by_name_kind()
  libbpf: Add support to attach to fentry/fexit tracing progs
  selftest/bpf: Simple test for fentry/fexit
  bpf: Add kernel test functions for fentry testing
  selftests/bpf: Add test for BPF trampoline
  selftests/bpf: Add fexit tests for BPF trampoline
  selftests/bpf: Add combined fentry/fexit test
  selftests/bpf: Add stress test for maximum number of progs
  bpf: Reserve space for BPF trampoline in BPF programs
  bpf: Fix race in btf_resolve_helper_id()
  bpf: Compare BTF types of functions arguments with actual types
  bpf: Support attaching tracing BPF program to other BPF programs
  libbpf: Add support for attaching BPF programs to other BPF programs
  selftests/bpf: Extend test_pkt_access test
  selftests/bpf: Add a test for attaching BPF prog to another BPF prog
    and subprog

 arch/x86/net/bpf_jit_comp.c                   | 425 +++++++++++++++---
 include/linux/bpf.h                           | 121 ++++-
 include/linux/bpf_verifier.h                  |   1 +
 include/linux/btf.h                           |   1 +
 include/uapi/linux/bpf.h                      |   3 +
 kernel/bpf/Makefile                           |   1 +
 kernel/bpf/btf.c                              | 274 ++++++++++-
 kernel/bpf/core.c                             |   9 +
 kernel/bpf/syscall.c                          |  73 ++-
 kernel/bpf/trampoline.c                       | 250 +++++++++++
 kernel/bpf/verifier.c                         | 131 +++++-
 kernel/trace/bpf_trace.c                      |   2 -
 net/bpf/test_run.c                            |  41 ++
 net/core/filter.c                             |   2 +-
 tools/include/uapi/linux/bpf.h                |   3 +
 tools/lib/bpf/bpf.c                           |   9 +-
 tools/lib/bpf/bpf.h                           |   5 +-
 tools/lib/bpf/bpf_helpers.h                   |  13 +
 tools/lib/bpf/btf.c                           |  22 +
 tools/lib/bpf/btf.h                           |   2 +
 tools/lib/bpf/libbpf.c                        | 154 +++++--
 tools/lib/bpf/libbpf.h                        |   7 +-
 tools/lib/bpf/libbpf.map                      |   3 +
 .../selftests/bpf/prog_tests/fentry_fexit.c   |  90 ++++
 .../selftests/bpf/prog_tests/fentry_test.c    |  64 +++
 .../selftests/bpf/prog_tests/fexit_bpf2bpf.c  |  76 ++++
 .../selftests/bpf/prog_tests/fexit_stress.c   |  77 ++++
 .../selftests/bpf/prog_tests/fexit_test.c     |  64 +++
 .../selftests/bpf/prog_tests/kfree_skb.c      |  39 +-
 .../testing/selftests/bpf/progs/fentry_test.c |  90 ++++
 .../selftests/bpf/progs/fexit_bpf2bpf.c       |  91 ++++
 .../testing/selftests/bpf/progs/fexit_test.c  |  98 ++++
 tools/testing/selftests/bpf/progs/kfree_skb.c |  52 +++
 .../selftests/bpf/progs/test_pkt_access.c     |  38 +-
 34 files changed, 2198 insertions(+), 133 deletions(-)
 create mode 100644 kernel/bpf/trampoline.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_test.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_stress.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
 create mode 100644 tools/testing/selftests/bpf/progs/fexit_test.c

-- 
2.23.0



* [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08 19:27   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper Alexei Starovoitov
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Refactor x86 JITing of LDX, STX, CALL instructions into separate helper
functions.  No functional changes in the LDX and STX helpers.  There is a
minor change in the CALL helper: it now populates the target address correctly
on the first JIT pass instead of the second. That won't reduce the total
number of JIT passes though.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 153 +++++++++++++++++++++++-------------
 1 file changed, 99 insertions(+), 54 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8cd23d8309bf..0399b1f83c23 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -198,6 +198,8 @@ struct jit_context {
 /* Maximum number of bytes emitted while JITing one eBPF insn */
 #define BPF_MAX_INSN_SIZE	128
 #define BPF_INSN_SAFETY		64
+/* number of bytes emit_call() needs to generate call instruction */
+#define X86_CALL_SIZE		5
 
 #define PROLOGUE_SIZE		20
 
@@ -390,6 +392,100 @@ static void emit_mov_reg(u8 **pprog, bool is64, u32 dst_reg, u32 src_reg)
 	*pprog = prog;
 }
 
+/* LDX: dst_reg = *(u8*)(src_reg + off) */
+static void emit_ldx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	switch (size) {
+	case BPF_B:
+		/* Emit 'movzx rax, byte ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xB6);
+		break;
+	case BPF_H:
+		/* Emit 'movzx rax, word ptr [rax + off]' */
+		EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xB7);
+		break;
+	case BPF_W:
+		/* Emit 'mov eax, dword ptr [rax+0x14]' */
+		if (is_ereg(dst_reg) || is_ereg(src_reg))
+			EMIT2(add_2mod(0x40, src_reg, dst_reg), 0x8B);
+		else
+			EMIT1(0x8B);
+		break;
+	case BPF_DW:
+		/* Emit 'mov rax, qword ptr [rax+0x14]' */
+		EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x8B);
+		break;
+	}
+	/*
+	 * If insn->off == 0 we can save one extra byte, but
+	 * special case of x86 R13 which always needs an offset
+	 * is not worth the hassle
+	 */
+	if (is_imm8(off))
+		EMIT2(add_2reg(0x40, src_reg, dst_reg), off);
+	else
+		EMIT1_off32(add_2reg(0x80, src_reg, dst_reg), off);
+	*pprog = prog;
+}
+
+/* STX: *(u8*)(dst_reg + off) = src_reg */
+static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+
+	switch (size) {
+	case BPF_B:
+		/* Emit 'mov byte ptr [rax + off], al' */
+		if (is_ereg(dst_reg) || is_ereg(src_reg) ||
+		    /* We have to add extra byte for x86 SIL, DIL regs */
+		    src_reg == BPF_REG_1 || src_reg == BPF_REG_2)
+			EMIT2(add_2mod(0x40, dst_reg, src_reg), 0x88);
+		else
+			EMIT1(0x88);
+		break;
+	case BPF_H:
+		if (is_ereg(dst_reg) || is_ereg(src_reg))
+			EMIT3(0x66, add_2mod(0x40, dst_reg, src_reg), 0x89);
+		else
+			EMIT2(0x66, 0x89);
+		break;
+	case BPF_W:
+		if (is_ereg(dst_reg) || is_ereg(src_reg))
+			EMIT2(add_2mod(0x40, dst_reg, src_reg), 0x89);
+		else
+			EMIT1(0x89);
+		break;
+	case BPF_DW:
+		EMIT2(add_2mod(0x48, dst_reg, src_reg), 0x89);
+		break;
+	}
+	if (is_imm8(off))
+		EMIT2(add_2reg(0x40, dst_reg, src_reg), off);
+	else
+		EMIT1_off32(add_2reg(0x80, dst_reg, src_reg),
+			    off);
+	*pprog = prog;
+}
+
+static int emit_call(u8 **pprog, void *func, void *ip)
+{
+	u8 *prog = *pprog;
+	int cnt = 0;
+	s64 offset;
+
+	offset = func - (ip + X86_CALL_SIZE);
+	if (!is_simm32(offset)) {
+		pr_err("Target call %p is out of range\n", func);
+		return -EINVAL;
+	}
+	EMIT1_off32(0xE8, offset);
+	*pprog = prog;
+	return 0;
+}
 
 static bool ex_handler_bpf(const struct exception_table_entry *x,
 			   struct pt_regs *regs, int trapnr,
@@ -773,68 +869,22 @@ st:			if (is_imm8(insn->off))
 
 			/* STX: *(u8*)(dst_reg + off) = src_reg */
 		case BPF_STX | BPF_MEM | BPF_B:
-			/* Emit 'mov byte ptr [rax + off], al' */
-			if (is_ereg(dst_reg) || is_ereg(src_reg) ||
-			    /* We have to add extra byte for x86 SIL, DIL regs */
-			    src_reg == BPF_REG_1 || src_reg == BPF_REG_2)
-				EMIT2(add_2mod(0x40, dst_reg, src_reg), 0x88);
-			else
-				EMIT1(0x88);
-			goto stx;
 		case BPF_STX | BPF_MEM | BPF_H:
-			if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT3(0x66, add_2mod(0x40, dst_reg, src_reg), 0x89);
-			else
-				EMIT2(0x66, 0x89);
-			goto stx;
 		case BPF_STX | BPF_MEM | BPF_W:
-			if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT2(add_2mod(0x40, dst_reg, src_reg), 0x89);
-			else
-				EMIT1(0x89);
-			goto stx;
 		case BPF_STX | BPF_MEM | BPF_DW:
-			EMIT2(add_2mod(0x48, dst_reg, src_reg), 0x89);
-stx:			if (is_imm8(insn->off))
-				EMIT2(add_2reg(0x40, dst_reg, src_reg), insn->off);
-			else
-				EMIT1_off32(add_2reg(0x80, dst_reg, src_reg),
-					    insn->off);
+			emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
 			break;
 
 			/* LDX: dst_reg = *(u8*)(src_reg + off) */
 		case BPF_LDX | BPF_MEM | BPF_B:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_B:
-			/* Emit 'movzx rax, byte ptr [rax + off]' */
-			EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xB6);
-			goto ldx;
 		case BPF_LDX | BPF_MEM | BPF_H:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_H:
-			/* Emit 'movzx rax, word ptr [rax + off]' */
-			EMIT3(add_2mod(0x48, src_reg, dst_reg), 0x0F, 0xB7);
-			goto ldx;
 		case BPF_LDX | BPF_MEM | BPF_W:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_W:
-			/* Emit 'mov eax, dword ptr [rax+0x14]' */
-			if (is_ereg(dst_reg) || is_ereg(src_reg))
-				EMIT2(add_2mod(0x40, src_reg, dst_reg), 0x8B);
-			else
-				EMIT1(0x8B);
-			goto ldx;
 		case BPF_LDX | BPF_MEM | BPF_DW:
 		case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
-			/* Emit 'mov rax, qword ptr [rax+0x14]' */
-			EMIT2(add_2mod(0x48, src_reg, dst_reg), 0x8B);
-ldx:			/*
-			 * If insn->off == 0 we can save one extra byte, but
-			 * special case of x86 R13 which always needs an offset
-			 * is not worth the hassle
-			 */
-			if (is_imm8(insn->off))
-				EMIT2(add_2reg(0x40, src_reg, dst_reg), insn->off);
-			else
-				EMIT1_off32(add_2reg(0x80, src_reg, dst_reg),
-					    insn->off);
+			emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
 				struct exception_table_entry *ex;
 				u8 *_insn = image + proglen;
@@ -899,13 +949,8 @@ xadd:			if (is_imm8(insn->off))
 			/* call */
 		case BPF_JMP | BPF_CALL:
 			func = (u8 *) __bpf_call_base + imm32;
-			jmp_offset = func - (image + addrs[i]);
-			if (!imm32 || !is_simm32(jmp_offset)) {
-				pr_err("unsupported BPF func %d addr %p image %p\n",
-				       imm32, func, image);
+			if (!imm32 || emit_call(&prog, func, image + addrs[i - 1]))
 				return -EINVAL;
-			}
-			EMIT1_off32(0xE8, jmp_offset);
 			break;
 
 		case BPF_JMP | BPF_TAIL_CALL:
-- 
2.23.0



* [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  6:56   ` Song Liu
                     ` (2 more replies)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 03/18] bpf: Introduce BPF trampoline Alexei Starovoitov
                   ` (15 subsequent siblings)
  17 siblings, 3 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a bpf_arch_text_poke() helper that is used by the BPF trampoline logic to
patch nops/calls in kernel text into calls into the BPF trampoline, and to
patch calls/nops inside BPF programs as well.
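
For reference, the trampoline code added in the next patch drives this helper
roughly as follows (a simplified sketch of what bpf_trampoline_update() does;
error handling omitted):

	/* first attach: replace the 5-byte nop at the function entry */
	bpf_arch_text_poke(func_addr, BPF_MOD_NOP_TO_CALL, NULL, new_image);

	/* later updates: atomically switch to a newly generated trampoline */
	bpf_arch_text_poke(func_addr, BPF_MOD_CALL_TO_CALL, old_image, new_image);

	/* last detach: restore the original nop */
	bpf_arch_text_poke(func_addr, BPF_MOD_CALL_TO_NOP, old_image, NULL);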

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 51 +++++++++++++++++++++++++++++++++++++
 include/linux/bpf.h         |  8 ++++++
 kernel/bpf/core.c           |  6 +++++
 3 files changed, 65 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 0399b1f83c23..bb8467fd6715 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -9,9 +9,11 @@
 #include <linux/filter.h>
 #include <linux/if_vlan.h>
 #include <linux/bpf.h>
+#include <linux/memory.h>
 #include <asm/extable.h>
 #include <asm/set_memory.h>
 #include <asm/nospec-branch.h>
+#include <asm/text-patching.h>
 
 static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
 {
@@ -487,6 +489,55 @@ static int emit_call(u8 **pprog, void *func, void *ip)
 	return 0;
 }
 
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+		       void *old_addr, void *new_addr)
+{
+	u8 old_insn[X86_CALL_SIZE] = {};
+	u8 new_insn[X86_CALL_SIZE] = {};
+	u8 *prog;
+	int ret;
+
+	if (!is_kernel_text((long)ip))
+		/* BPF trampoline in modules is not supported */
+		return -EINVAL;
+
+	if (old_addr) {
+		prog = old_insn;
+		ret = emit_call(&prog, old_addr, (void *)ip);
+		if (ret)
+			return ret;
+	}
+	if (new_addr) {
+		prog = new_insn;
+		ret = emit_call(&prog, new_addr, (void *)ip);
+		if (ret)
+			return ret;
+	}
+	ret = -EBUSY;
+	mutex_lock(&text_mutex);
+	switch (t) {
+	case BPF_MOD_NOP_TO_CALL:
+		if (memcmp(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE))
+			goto out;
+		text_poke(ip, new_insn, X86_CALL_SIZE);
+		break;
+	case BPF_MOD_CALL_TO_CALL:
+		if (memcmp(ip, old_insn, X86_CALL_SIZE))
+			goto out;
+		text_poke(ip, new_insn, X86_CALL_SIZE);
+		break;
+	case BPF_MOD_CALL_TO_NOP:
+		if (memcmp(ip, old_insn, X86_CALL_SIZE))
+			goto out;
+		text_poke(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE);
+		break;
+	}
+	ret = 0;
+out:
+	mutex_unlock(&text_mutex);
+	return ret;
+}
+
 static bool ex_handler_bpf(const struct exception_table_entry *x,
 			   struct pt_regs *regs, int trapnr,
 			   unsigned long error_code, unsigned long fault_addr)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 7c7f518811a6..8b90db25348a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1157,4 +1157,12 @@ static inline u32 bpf_xdp_sock_convert_ctx_access(enum bpf_access_type type,
 }
 #endif /* CONFIG_INET */
 
+enum bpf_text_poke_type {
+	BPF_MOD_NOP_TO_CALL,
+	BPF_MOD_CALL_TO_CALL,
+	BPF_MOD_CALL_TO_NOP,
+};
+int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+		       void *addr1, void *addr2);
+
 #endif /* _LINUX_BPF_H */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index c1fde0303280..c4bcec1014a9 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2140,6 +2140,12 @@ int __weak skb_copy_bits(const struct sk_buff *skb, int offset, void *to,
 	return -EFAULT;
 }
 
+int __weak bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
+			      void *addr1, void *addr2)
+{
+	return -ENOTSUPP;
+}
+
 DEFINE_STATIC_KEY_FALSE(bpf_stats_enabled_key);
 EXPORT_SYMBOL(bpf_stats_enabled_key);
 
-- 
2.23.0



* [PATCH v3 bpf-next 03/18] bpf: Introduce BPF trampoline
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:04   ` Song Liu
  2019-11-08  6:40 ` [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind() Alexei Starovoitov
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Introduce the BPF trampoline concept to allow kernel code to call into BPF
programs with practically zero overhead.  The trampoline generation logic is
architecture dependent.  It converts the native calling convention into the
BPF calling convention.  The BPF ISA is 64-bit (even on 32-bit architectures).
The registers R1 to R5 are used to pass arguments into BPF functions. The main
BPF program accepts only a single argument "ctx" in R1, whereas the native CPU
calling conventions differ: x86-64 passes the first 6 arguments in registers
and the rest on the stack, x86-32 passes the first 3 arguments in registers,
sparc64 passes the first 6 in registers, and so on.

Trampolines between BPF and the kernel already exist.  The BPF_CALL_x macros
in include/linux/filter.h statically compile trampolines from BPF into kernel
helpers. They convert up to five u64 arguments into kernel C pointers and
integers. On 64-bit architectures these BPF-to-kernel trampolines are nops. On
32-bit architectures they're meaningful.
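
As a reminder, such a helper is declared with up to five (type, name) pairs
and the macro generates the u64-to-C-type casts, roughly like the existing
bpf_map_lookup_elem helper:

	BPF_CALL_2(bpf_map_lookup_elem, struct bpf_map *, map, void *, key)
	{
		WARN_ON_ONCE(!rcu_read_lock_held());
		return (unsigned long) map->ops->map_lookup_elem(map, key);
	}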

The opposite job, kernel-to-BPF trampolines, is done by the CAST_TO_U64
macros and __bpf_trace_##call() shim functions in include/trace/bpf_probe.h.
They convert kernel function arguments into an array of u64s that the BPF
program consumes via the R1=ctx pointer.

This patch set does the same job as the __bpf_trace_##call() static
trampolines, but dynamically for any kernel function. There are ~22k global
kernel functions that are attachable via a nop at function entry. The function
arguments and types are described in BTF.  The job of the
btf_distill_func_proto() function is to extract useful information from BTF
into a "function model" that architecture dependent trampoline generators will
use to generate assembly code that casts kernel function arguments into an
array of u64s.  For example, the kernel function eth_type_trans has two
pointer arguments. They will be cast to u64 and stored into the stack of the
generated trampoline. The pointer to that stack space will be passed into the
BPF program in R1. On x86-64 such a generated trampoline will consume 16 bytes
of stack for the two stores of %rdi and %rsi. The verifier will make sure that
only two u64s are accessed read-only by the BPF program. The verifier will
also recognize the precise type of the pointers being accessed and will not
allow typecasting of a pointer to a different type within the BPF program.
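
Concretely, for eth_type_trans() the distilled "function model" (using the
struct btf_func_model added by this patch) ends up roughly as:

	/* __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev) */
	struct btf_func_model m = {
		.ret_size = 2,		/* sizeof(__be16) */
		.nr_args  = 2,
		.arg_size = { 8, 8 },	/* two kernel pointers */
	};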

The tracing use case in the datacenter demonstrated that certain key kernel
functions (like tcp_retransmit_skb) have 2 or more kprobes that are always
active.  Other functions have both a kprobe and a kretprobe.  So it is
essential to keep both kernel code and BPF programs executing at maximum
speed. Hence the BPF trampoline is re-generated every time a new program is
attached or detached to maintain maximum performance.

To avoid the high cost of retpoline the attached BPF programs are called
directly. __bpf_prog_enter/exit() are used to support per-program execution
stats.  In the future this logic will be optimized further by adding support
for bpf_stats_enabled_key inside the generated assembly code. The introduction
of preemptible and sleepable BPF programs will completely remove the need to
call __bpf_prog_enter/exit().

Detaching a BPF program from the trampoline should not fail. To avoid memory
allocation in the detach path, half of the page is used as a reserve and
flipped after each attach/detach. 2k bytes is enough to call 40+ BPF programs
directly, which is enough for BPF tracing use cases. This limit can be
increased in the future.

BPF_TRACE_FENTRY programs have access to the raw kernel function arguments,
while BPF_TRACE_FEXIT programs have access to the kernel return value as well.
Often a kprobe BPF program remembers function arguments in a map while the
kretprobe fetches the arguments from the map and analyzes them together with
the return value. BPF_TRACE_FEXIT accelerates this typical use case, as the
sketch below illustrates.
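
A hedged sketch of such a fexit program on eth_type_trans(), treating the ctx
as a flat array of u64s for simplicity (the trampoline saves the two arguments
followed by the return value on its stack; see also the kfree_skb selftest
extended later in this series):

	static volatile __u64 zero_proto_cnt;

	/* sketch: runs right after eth_type_trans() returns;
	 * ctx[0]/ctx[1] hold the skb and dev arguments, ctx[2] the return value
	 */
	SEC("fexit/eth_type_trans")
	int trace_eth_type_trans(__u64 *ctx)
	{
		if (ctx[2] == 0)	/* returned protocol */
			zero_proto_cnt++;
		/* arguments and return value are visible in a single program,
		 * no kprobe+kretprobe round-trip through a map is needed
		 */
		return 0;
	}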

Recursion prevention for kprobe BPF programs is done via a per-cpu
bpf_prog_active counter. In practice that turned out to be a mistake: it
caused programs to randomly skip execution and the tracing tools missed
results they were looking for. Hence the BPF trampoline doesn't provide
builtin recursion prevention. That is a job of the BPF program itself and will
be addressed in follow-up patches.

The BPF trampoline is intended to be used beyond tracing and fentry/fexit use
cases in the future, for example to remove the retpoline cost from XDP
programs.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 211 +++++++++++++++++++++++++++++-
 include/linux/bpf.h         |  98 ++++++++++++++
 include/uapi/linux/bpf.h    |   2 +
 kernel/bpf/Makefile         |   1 +
 kernel/bpf/btf.c            |  77 ++++++++++-
 kernel/bpf/core.c           |   1 +
 kernel/bpf/syscall.c        |  53 +++++++-
 kernel/bpf/trampoline.c     | 250 ++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c       |  39 ++++++
 9 files changed, 722 insertions(+), 10 deletions(-)
 create mode 100644 kernel/bpf/trampoline.c

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index bb8467fd6715..9ccf7cd51a21 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -98,6 +98,7 @@ static int bpf_size_to_x86_bytes(int bpf_size)
 
 /* Pick a register outside of BPF range for JIT internal work */
 #define AUX_REG (MAX_BPF_JIT_REG + 1)
+#define X86_REG_R9 (MAX_BPF_JIT_REG + 2)
 
 /*
  * The following table maps BPF registers to x86-64 registers.
@@ -106,8 +107,8 @@ static int bpf_size_to_x86_bytes(int bpf_size)
  * register in load/store instructions, it always needs an
  * extra byte of encoding and is callee saved.
  *
- * Also x86-64 register R9 is unused. x86-64 register R10 is
- * used for blinding (if enabled).
+ * x86-64 register R9 is not used by BPF programs, but can be used by BPF
+ * trampoline. x86-64 register R10 is used for blinding (if enabled).
  */
 static const int reg2hex[] = {
 	[BPF_REG_0] = 0,  /* RAX */
@@ -123,6 +124,7 @@ static const int reg2hex[] = {
 	[BPF_REG_FP] = 5, /* RBP readonly */
 	[BPF_REG_AX] = 2, /* R10 temp register */
 	[AUX_REG] = 3,    /* R11 temp register */
+	[X86_REG_R9] = 1, /* R9 register, 6th function argument */
 };
 
 static const int reg2pt_regs[] = {
@@ -150,6 +152,7 @@ static bool is_ereg(u32 reg)
 			     BIT(BPF_REG_7) |
 			     BIT(BPF_REG_8) |
 			     BIT(BPF_REG_9) |
+			     BIT(X86_REG_R9) |
 			     BIT(BPF_REG_AX));
 }
 
@@ -1234,6 +1237,210 @@ xadd:			if (is_imm8(insn->off))
 	return proglen;
 }
 
+static void save_regs(struct btf_func_model *m, u8 **prog, int nr_args,
+		      int stack_size)
+{
+	int i;
+	/* Store function arguments to stack.
+	 * For a function that accepts two pointers the sequence will be:
+	 * mov QWORD PTR [rbp-0x10],rdi
+	 * mov QWORD PTR [rbp-0x8],rsi
+	 */
+	for (i = 0; i < min(nr_args, 6); i++)
+		emit_stx(prog, bytes_to_bpf_size(m->arg_size[i]),
+			 BPF_REG_FP,
+			 i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
+			 -(stack_size - i * 8));
+}
+
+static void restore_regs(struct btf_func_model *m, u8 **prog, int nr_args,
+			 int stack_size)
+{
+	int i;
+
+	/* Restore function arguments from stack.
+	 * For a function that accepts two pointers the sequence will be:
+	 * EMIT4(0x48, 0x8B, 0x7D, 0xF0); mov rdi,QWORD PTR [rbp-0x10]
+	 * EMIT4(0x48, 0x8B, 0x75, 0xF8); mov rsi,QWORD PTR [rbp-0x8]
+	 */
+	for (i = 0; i < min(nr_args, 6); i++)
+		emit_ldx(prog, bytes_to_bpf_size(m->arg_size[i]),
+			 i == 5 ? X86_REG_R9 : BPF_REG_1 + i,
+			 BPF_REG_FP,
+			 -(stack_size - i * 8));
+}
+
+static int invoke_bpf(struct btf_func_model *m, u8 **pprog,
+		      struct bpf_prog **progs, int prog_cnt, int stack_size)
+{
+	u8 *prog = *pprog;
+	int cnt = 0, i;
+
+	for (i = 0; i < prog_cnt; i++) {
+		if (emit_call(&prog, __bpf_prog_enter, prog))
+			return -EINVAL;
+		/* remember prog start time returned by __bpf_prog_enter */
+		emit_mov_reg(&prog, true, BPF_REG_6, BPF_REG_0);
+
+		/* arg1: lea rdi, [rbp - stack_size] */
+		EMIT4(0x48, 0x8D, 0x7D, -stack_size);
+		/* arg2: progs[i]->insnsi for interpreter */
+		if (!progs[i]->jited)
+			emit_mov_imm64(&prog, BPF_REG_2,
+				       (long) progs[i]->insnsi >> 32,
+				       (u32) (long) progs[i]->insnsi);
+		/* call JITed bpf program or interpreter */
+		if (emit_call(&prog, progs[i]->bpf_func, prog))
+			return -EINVAL;
+
+		/* arg1: mov rdi, progs[i] */
+		emit_mov_imm64(&prog, BPF_REG_1, (long) progs[i] >> 32,
+			       (u32) (long) progs[i]);
+		/* arg2: mov rsi, rbx <- start time in nsec */
+		emit_mov_reg(&prog, true, BPF_REG_2, BPF_REG_6);
+		if (emit_call(&prog, __bpf_prog_exit, prog))
+			return -EINVAL;
+	}
+	*pprog = prog;
+	return 0;
+}
+
+/* Example:
+ * __be16 eth_type_trans(struct sk_buff *skb, struct net_device *dev);
+ * its 'struct btf_func_model' will be nr_args=2
+ * The assembly code when eth_type_trans is executing after trampoline:
+ *
+ * push rbp
+ * mov rbp, rsp
+ * sub rsp, 16                     // space for skb and dev
+ * push rbx                        // temp regs to pass start time
+ * mov qword ptr [rbp - 16], rdi   // save skb pointer to stack
+ * mov qword ptr [rbp - 8], rsi    // save dev pointer to stack
+ * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
+ * mov rbx, rax                    // remember start time if bpf stats are enabled
+ * lea rdi, [rbp - 16]             // R1==ctx of bpf prog
+ * call addr_of_jited_FENTRY_prog
+ * movabsq rdi, 64bit_addr_of_struct_bpf_prog  // unused if bpf stats are off
+ * mov rsi, rbx                    // prog start time
+ * call __bpf_prog_exit            // rcu_read_unlock, preempt_enable and stats math
+ * mov rdi, qword ptr [rbp - 16]   // restore skb pointer from stack
+ * mov rsi, qword ptr [rbp - 8]    // restore dev pointer from stack
+ * pop rbx
+ * leave
+ * ret
+ *
+ * eth_type_trans has 5 byte nop at the beginning. These 5 bytes will be
+ * replaced with 'call generated_bpf_trampoline'. When it returns
+ * eth_type_trans will continue executing with original skb and dev pointers.
+ *
+ * The assembly code when eth_type_trans is called from trampoline:
+ *
+ * push rbp
+ * mov rbp, rsp
+ * sub rsp, 24                     // space for skb, dev, return value
+ * push rbx                        // temp regs to pass start time
+ * mov qword ptr [rbp - 24], rdi   // save skb pointer to stack
+ * mov qword ptr [rbp - 16], rsi   // save dev pointer to stack
+ * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
+ * mov rbx, rax                    // remember start time if bpf stats are enabled
+ * lea rdi, [rbp - 24]             // R1==ctx of bpf prog
+ * call addr_of_jited_FENTRY_prog  // bpf prog can access skb and dev
+ * movabsq rdi, 64bit_addr_of_struct_bpf_prog  // unused if bpf stats are off
+ * mov rsi, rbx                    // prog start time
+ * call __bpf_prog_exit            // rcu_read_unlock, preempt_enable and stats math
+ * mov rdi, qword ptr [rbp - 24]   // restore skb pointer from stack
+ * mov rsi, qword ptr [rbp - 16]   // restore dev pointer from stack
+ * call eth_type_trans+5           // execute body of eth_type_trans
+ * mov qword ptr [rbp - 8], rax    // save return value
+ * call __bpf_prog_enter           // rcu_read_lock and preempt_disable
+ * mov rbx, rax                    // remember start time if bpf stats are enabled
+ * lea rdi, [rbp - 24]             // R1==ctx of bpf prog
+ * call addr_of_jited_FEXIT_prog   // bpf prog can access skb, dev, return value
+ * movabsq rdi, 64bit_addr_of_struct_bpf_prog  // unused if bpf stats are off
+ * mov rsi, rbx                    // prog start time
+ * call __bpf_prog_exit            // rcu_read_unlock, preempt_enable and stats math
+ * mov rax, qword ptr [rbp - 8]    // restore eth_type_trans's return value
+ * pop rbx
+ * leave
+ * add rsp, 8                      // skip eth_type_trans's frame
+ * ret                             // return to its caller
+ */
+int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags,
+				struct bpf_prog **fentry_progs, int fentry_cnt,
+				struct bpf_prog **fexit_progs, int fexit_cnt,
+				void *orig_call)
+{
+	int cnt = 0, nr_args = m->nr_args;
+	int stack_size = nr_args * 8;
+	u8 *prog;
+
+	/* x86-64 supports up to 6 arguments. 7+ can be added in the future */
+	if (nr_args > 6)
+		return -ENOTSUPP;
+
+	if ((flags & BPF_TRAMP_F_RESTORE_REGS) &&
+	    (flags & BPF_TRAMP_F_SKIP_FRAME))
+		return -EINVAL;
+
+	if (flags & BPF_TRAMP_F_CALL_ORIG)
+		stack_size += 8; /* room for return value of orig_call */
+
+	if (flags & BPF_TRAMP_F_SKIP_FRAME)
+		/* skip patched call instruction and point orig_call to actual
+		 * body of the kernel function.
+		 */
+		orig_call += X86_CALL_SIZE;
+
+	prog = image;
+
+	EMIT1(0x55);		 /* push rbp */
+	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+	EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
+	EMIT1(0x53);		 /* push rbx */
+
+	save_regs(m, &prog, nr_args, stack_size);
+
+	if (fentry_cnt)
+		if (invoke_bpf(m, &prog, fentry_progs, fentry_cnt, stack_size))
+			return -EINVAL;
+
+	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		if (fentry_cnt)
+			restore_regs(m, &prog, nr_args, stack_size);
+
+		/* call original function */
+		if (emit_call(&prog, orig_call, prog))
+			return -EINVAL;
+		/* remember return value in a stack for bpf prog to access */
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -8);
+	}
+
+	if (fexit_cnt)
+		if (invoke_bpf(m, &prog, fexit_progs, fexit_cnt, stack_size))
+			return -EINVAL;
+
+	if (flags & BPF_TRAMP_F_RESTORE_REGS)
+		restore_regs(m, &prog, nr_args, stack_size);
+
+	if (flags & BPF_TRAMP_F_CALL_ORIG)
+		/* restore original return value back into RAX */
+		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
+
+	EMIT1(0x5B); /* pop rbx */
+	EMIT1(0xC9); /* leave */
+	if (flags & BPF_TRAMP_F_SKIP_FRAME)
+		/* skip our return address and return to parent */
+		EMIT4(0x48, 0x83, 0xC4, 8); /* add rsp, 8 */
+	EMIT1(0xC3); /* ret */
+	/* One half of the page has active running trampoline.
+	 * Another half is an area for next trampoline.
+	 * Make sure the trampoline generation logic doesn't overflow.
+	 */
+	if (WARN_ON_ONCE(prog - (u8 *)image > PAGE_SIZE / 2 - BPF_INSN_SAFETY))
+		return -EFAULT;
+	return 0;
+}
+
 struct x64_jit_data {
 	struct bpf_binary_header *header;
 	int *addrs;
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8b90db25348a..9206b7e86dde 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -14,6 +14,7 @@
 #include <linux/numa.h>
 #include <linux/wait.h>
 #include <linux/u64_stats_sync.h>
+#include <linux/refcount.h>
 
 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -384,6 +385,94 @@ struct bpf_prog_stats {
 	struct u64_stats_sync syncp;
 } __aligned(2 * sizeof(u64));
 
+struct btf_func_model {
+	u8 ret_size;
+	u8 nr_args;
+	u8 arg_size[MAX_BPF_FUNC_ARGS];
+};
+
+/* Restore arguments before returning from trampoline to let original function
+ * continue executing. This flag is used for fentry progs when there are no
+ * fexit progs.
+ */
+#define BPF_TRAMP_F_RESTORE_REGS	BIT(0)
+/* Call original function after fentry progs, but before fexit progs.
+ * Makes sense for fentry/fexit, normal calls and indirect calls.
+ */
+#define BPF_TRAMP_F_CALL_ORIG		BIT(1)
+/* Skip current frame and return to parent.  Makes sense for fentry/fexit
+ * programs only. Should not be used with normal calls and indirect calls.
+ */
+#define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
+
+/* Different use cases for BPF trampoline:
+ * 1. replace nop at the function entry (kprobe equivalent)
+ *    flags = BPF_TRAMP_F_RESTORE_REGS
+ *    fentry = a set of programs to run before returning from trampoline
+ *
+ * 2. replace nop at the function entry (kprobe + kretprobe equivalent)
+ *    flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME
+ *    orig_call = fentry_ip + MCOUNT_INSN_SIZE
+ *    fentry = a set of programs to run before calling original function
+ *    fexit = a set of programs to run after original function
+ *
+ * 3. replace direct call instruction anywhere in the function body
+ *    or assign a function pointer for indirect call (like tcp_congestion_ops->cong_avoid)
+ *    With flags = 0
+ *      fentry = a set of programs to run before returning from trampoline
+ *    With flags = BPF_TRAMP_F_CALL_ORIG
+ *      orig_call = original callback addr or direct function addr
+ *      fentry = a set of programs to run before calling original function
+ *      fexit = a set of programs to run after original function
+ */
+int arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags,
+				struct bpf_prog **fentry_progs, int fentry_cnt,
+				struct bpf_prog **fexit_progs, int fexit_cnt,
+				void *orig_call);
+/* these two functions are called from generated trampoline */
+u64 notrace __bpf_prog_enter(void);
+void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start);
+
+enum bpf_tramp_prog_type {
+	BPF_TRAMP_FENTRY,
+	BPF_TRAMP_FEXIT,
+	BPF_TRAMP_MAX
+};
+
+struct bpf_trampoline {
+	struct hlist_node hlist;
+	refcount_t refcnt;
+	u64 key;
+	struct {
+		struct btf_func_model model;
+		void *addr;
+	} func;
+	struct hlist_head progs_hlist[BPF_TRAMP_MAX];
+	int progs_cnt[BPF_TRAMP_MAX];
+	void *image;
+	u64 selector;
+};
+#ifdef CONFIG_BPF_JIT
+struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
+int bpf_trampoline_link_prog(struct bpf_prog *prog);
+int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
+void bpf_trampoline_put(struct bpf_trampoline *tr);
+#else
+static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+{
+	return NULL;
+}
+static inline int bpf_trampoline_link_prog(struct bpf_prog *prog)
+{
+	return -ENOTSUPP;
+}
+static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
+{
+	return -ENOTSUPP;
+}
+static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
+#endif
+
 struct bpf_prog_aux {
 	atomic_t refcnt;
 	u32 used_map_cnt;
@@ -398,6 +487,9 @@ struct bpf_prog_aux {
 	bool verifier_zext; /* Zero extensions has been inserted by verifier. */
 	bool offload_requested;
 	bool attach_btf_trace; /* true if attaching to BTF-enabled raw tp */
+	enum bpf_tramp_prog_type trampoline_prog_type;
+	struct bpf_trampoline *trampoline;
+	struct hlist_node tramp_hlist;
 	/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
 	const struct btf_type *attach_func_proto;
 	/* function name for valid attach_btf_id */
@@ -784,6 +876,12 @@ int btf_struct_access(struct bpf_verifier_log *log,
 		      u32 *next_btf_id);
 u32 btf_resolve_helper_id(struct bpf_verifier_log *log, void *, int);
 
+int btf_distill_func_proto(struct bpf_verifier_log *log,
+			   struct btf *btf,
+			   const struct btf_type *func_proto,
+			   const char *func_name,
+			   struct btf_func_model *m);
+
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index df6809a76404..69c200e6e696 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -201,6 +201,8 @@ enum bpf_attach_type {
 	BPF_CGROUP_GETSOCKOPT,
 	BPF_CGROUP_SETSOCKOPT,
 	BPF_TRACE_RAW_TP,
+	BPF_TRACE_FENTRY,
+	BPF_TRACE_FEXIT,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index e1d9adb212f9..3f671bf617e8 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -6,6 +6,7 @@ obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o
 obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
 obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o
 obj-$(CONFIG_BPF_SYSCALL) += disasm.o
+obj-$(CONFIG_BPF_JIT) += trampoline.o
 obj-$(CONFIG_BPF_SYSCALL) += btf.o
 ifeq ($(CONFIG_NET),y)
 obj-$(CONFIG_BPF_SYSCALL) += devmap.o
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4639c4ba9a9b..9e1164e5b429 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3517,13 +3517,18 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		args++;
 		nr_args--;
 	}
-	if (arg >= nr_args) {
+
+	if (prog->expected_attach_type == BPF_TRACE_FEXIT &&
+	    arg == nr_args) {
+		/* function return type */
+		t = btf_type_by_id(btf_vmlinux, t->type);
+	} else if (arg >= nr_args) {
 		bpf_log(log, "func '%s' doesn't have %d-th argument\n",
-			tname, arg);
+			tname, arg + 1);
 		return false;
+	} else {
+		t = btf_type_by_id(btf_vmlinux, args[arg].type);
 	}
-
-	t = btf_type_by_id(btf_vmlinux, args[arg].type);
 	/* skip modifiers */
 	while (btf_type_is_modifier(t))
 		t = btf_type_by_id(btf_vmlinux, t->type);
@@ -3784,6 +3789,70 @@ u32 btf_resolve_helper_id(struct bpf_verifier_log *log, void *fn, int arg)
 	return btf_id;
 }
 
+static int __get_type_size(struct btf *btf, u32 btf_id,
+			   const struct btf_type **bad_type)
+{
+	const struct btf_type *t;
+
+	if (!btf_id)
+		/* void */
+		return 0;
+	t = btf_type_by_id(btf, btf_id);
+	while (t && btf_type_is_modifier(t))
+		t = btf_type_by_id(btf, t->type);
+	if (!t)
+		return -EINVAL;
+	if (btf_type_is_ptr(t))
+		/* kernel size of pointer. Not BPF's size of pointer*/
+		return sizeof(void *);
+	if (btf_type_is_int(t) || btf_type_is_enum(t))
+		return t->size;
+	*bad_type = t;
+	return -EINVAL;
+}
+
+int btf_distill_func_proto(struct bpf_verifier_log *log,
+			   struct btf *btf,
+			   const struct btf_type *func,
+			   const char *tname,
+			   struct btf_func_model *m)
+{
+	const struct btf_param *args;
+	const struct btf_type *t;
+	u32 i, nargs;
+	int ret;
+
+	args = (const struct btf_param *)(func + 1);
+	nargs = btf_type_vlen(func);
+	if (nargs >= MAX_BPF_FUNC_ARGS) {
+		bpf_log(log,
+			"The function %s has %d arguments. Too many.\n",
+			tname, nargs);
+		return -EINVAL;
+	}
+	ret = __get_type_size(btf, func->type, &t);
+	if (ret < 0) {
+		bpf_log(log,
+			"The function %s return type %s is unsupported.\n",
+			tname, btf_kind_str[BTF_INFO_KIND(t->info)]);
+		return -EINVAL;
+	}
+	m->ret_size = ret;
+
+	for (i = 0; i < nargs; i++) {
+		ret = __get_type_size(btf, args[i].type, &t);
+		if (ret < 0) {
+			bpf_log(log,
+				"The function %s arg%d type %s is unsupported.\n",
+				tname, i, btf_kind_str[BTF_INFO_KIND(t->info)]);
+			return -EINVAL;
+		}
+		m->arg_size[i] = ret;
+	}
+	m->nr_args = nargs;
+	return 0;
+}
+
 void btf_type_seq_show(const struct btf *btf, u32 type_id, void *obj,
 		       struct seq_file *m)
 {
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index c4bcec1014a9..75ad0f907eef 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2011,6 +2011,7 @@ static void bpf_prog_free_deferred(struct work_struct *work)
 	if (aux->prog->has_callchain_buf)
 		put_callchain_buffers();
 #endif
+	bpf_trampoline_put(aux->trampoline);
 	for (i = 0; i < aux->func_cnt; i++)
 		bpf_jit_free(aux->func[i]);
 	if (aux->func_cnt) {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 6d9ce95e5a8d..e2e37bea86bc 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1799,6 +1799,49 @@ static int bpf_obj_get(const union bpf_attr *attr)
 				attr->file_flags);
 }
 
+static int bpf_tracing_prog_release(struct inode *inode, struct file *filp)
+{
+	struct bpf_prog *prog = filp->private_data;
+
+	WARN_ON_ONCE(bpf_trampoline_unlink_prog(prog));
+	bpf_prog_put(prog);
+	return 0;
+}
+
+static const struct file_operations bpf_tracing_prog_fops = {
+	.release	= bpf_tracing_prog_release,
+	.read		= bpf_dummy_read,
+	.write		= bpf_dummy_write,
+};
+
+static int bpf_tracing_prog_attach(struct bpf_prog *prog)
+{
+	int tr_fd, err;
+
+	if (prog->expected_attach_type != BPF_TRACE_FENTRY &&
+	    prog->expected_attach_type != BPF_TRACE_FEXIT) {
+		err = -EINVAL;
+		goto out_put_prog;
+	}
+
+	err = bpf_trampoline_link_prog(prog);
+	if (err)
+		goto out_put_prog;
+
+	tr_fd = anon_inode_getfd("bpf-tracing-prog", &bpf_tracing_prog_fops,
+				 prog, O_CLOEXEC);
+	if (tr_fd < 0) {
+		WARN_ON_ONCE(bpf_trampoline_unlink_prog(prog));
+		err = tr_fd;
+		goto out_put_prog;
+	}
+	return tr_fd;
+
+out_put_prog:
+	bpf_prog_put(prog);
+	return err;
+}
+
 struct bpf_raw_tracepoint {
 	struct bpf_raw_event_map *btp;
 	struct bpf_prog *prog;
@@ -1850,14 +1893,16 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
 
 	if (prog->type == BPF_PROG_TYPE_TRACING) {
 		if (attr->raw_tracepoint.name) {
-			/* raw_tp name should not be specified in raw_tp
-			 * programs that were verified via in-kernel BTF info
+			/* The attach point for this category of programs
+			 * should be specified via btf_id during program load.
 			 */
 			err = -EINVAL;
 			goto out_put_prog;
 		}
-		/* raw_tp name is taken from type name instead */
-		tp_name = prog->aux->attach_func_name;
+		if (prog->expected_attach_type == BPF_TRACE_RAW_TP)
+			tp_name = prog->aux->attach_func_name;
+		else
+			return bpf_tracing_prog_attach(prog);
 	} else {
 		if (strncpy_from_user(buf,
 				      u64_to_user_ptr(attr->raw_tracepoint.name),
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
new file mode 100644
index 000000000000..814d749dd4f1
--- /dev/null
+++ b/kernel/bpf/trampoline.c
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2019 Facebook */
+#include <linux/hash.h>
+#include <linux/bpf.h>
+#include <linux/filter.h>
+
+/* btf_vmlinux has ~22k attachable functions. 1k htab is enough. */
+#define TRAMPOLINE_HASH_BITS 10
+#define TRAMPOLINE_TABLE_SIZE (1 << TRAMPOLINE_HASH_BITS)
+
+static struct hlist_head trampoline_table[TRAMPOLINE_TABLE_SIZE];
+
+static DEFINE_MUTEX(trampoline_mutex);
+
+struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
+{
+	struct bpf_trampoline *tr;
+	struct hlist_head *head;
+	void *image;
+	int i;
+
+	mutex_lock(&trampoline_mutex);
+	head = &trampoline_table[hash_64(key, TRAMPOLINE_HASH_BITS)];
+	hlist_for_each_entry(tr, head, hlist) {
+		if (tr->key == key) {
+			refcount_inc(&tr->refcnt);
+			goto out;
+		}
+	}
+	tr = kzalloc(sizeof(*tr), GFP_KERNEL);
+	if (!tr)
+		goto out;
+
+	/* is_root was checked earlier. No need for bpf_jit_charge_modmem() */
+	image = bpf_jit_alloc_exec(PAGE_SIZE);
+	if (!image) {
+		kfree(tr);
+		tr = NULL;
+		goto out;
+	}
+
+	tr->key = key;
+	INIT_HLIST_NODE(&tr->hlist);
+	hlist_add_head(&tr->hlist, head);
+	refcount_set(&tr->refcnt, 1);
+	for (i = 0; i < BPF_TRAMP_MAX; i++)
+		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
+
+	set_vm_flush_reset_perms(image);
+	/* Keep image as writeable. The alternative is to keep flipping ro/rw
+	 * every time a new program is attached or detached.
+	 */
+	set_memory_x((long)image, 1);
+	tr->image = image;
+out:
+	mutex_unlock(&trampoline_mutex);
+	return tr;
+}
+
+/* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
+ * bytes on x86.  Pick a number to fit into PAGE_SIZE / 2
+ */
+#define BPF_MAX_TRAMP_PROGS 40
+
+static int bpf_trampoline_update(struct bpf_trampoline *tr)
+{
+	void *old_image = tr->image + ((tr->selector + 1) & 1) * PAGE_SIZE/2;
+	void *new_image = tr->image + (tr->selector & 1) * PAGE_SIZE/2;
+	struct bpf_prog *progs_to_run[BPF_MAX_TRAMP_PROGS];
+	int fentry_cnt = tr->progs_cnt[BPF_TRAMP_FENTRY];
+	int fexit_cnt = tr->progs_cnt[BPF_TRAMP_FEXIT];
+	struct bpf_prog **progs, **fentry, **fexit;
+	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
+	struct bpf_prog_aux *aux;
+	int err;
+
+	if (fentry_cnt + fexit_cnt == 0) {
+		err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_CALL_TO_NOP,
+					 old_image, NULL);
+		tr->selector = 0;
+		goto out;
+	}
+
+	/* populate fentry progs */
+	fentry = progs = progs_to_run;
+	hlist_for_each_entry(aux, &tr->progs_hlist[BPF_TRAMP_FENTRY], tramp_hlist)
+		*progs++ = aux->prog;
+
+	/* populate fexit progs */
+	fexit = progs;
+	hlist_for_each_entry(aux, &tr->progs_hlist[BPF_TRAMP_FEXIT], tramp_hlist)
+		*progs++ = aux->prog;
+
+	if (fexit_cnt)
+		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
+
+	err = arch_prepare_bpf_trampoline(new_image, &tr->func.model, flags,
+					  fentry, fentry_cnt,
+					  fexit, fexit_cnt,
+					  tr->func.addr);
+	if (err)
+		goto out;
+
+	if (tr->selector)
+		/* progs already running at this address */
+		err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_CALL_TO_CALL,
+					 old_image, new_image);
+	else
+		/* first time registering */
+		err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_NOP_TO_CALL,
+					 NULL, new_image);
+	if (err)
+		goto out;
+	tr->selector++;
+out:
+	return err;
+}
+
+static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(enum bpf_attach_type t)
+{
+	switch (t) {
+	case BPF_TRACE_FENTRY:
+		return BPF_TRAMP_FENTRY;
+	default:
+		return BPF_TRAMP_FEXIT;
+	}
+}
+
+int bpf_trampoline_link_prog(struct bpf_prog *prog)
+{
+	enum bpf_tramp_prog_type kind;
+	struct bpf_trampoline *tr;
+	int err = 0;
+
+	tr = prog->aux->trampoline;
+	kind = bpf_attach_type_to_tramp(prog->expected_attach_type);
+	mutex_lock(&trampoline_mutex);
+	if (tr->progs_cnt[BPF_TRAMP_FENTRY] + tr->progs_cnt[BPF_TRAMP_FEXIT]
+	    >= BPF_MAX_TRAMP_PROGS) {
+		err = -E2BIG;
+		goto out;
+	}
+	if (!hlist_unhashed(&prog->aux->tramp_hlist)) {
+		/* prog already linked */
+		err = -EBUSY;
+		goto out;
+	}
+	hlist_add_head(&prog->aux->tramp_hlist, &tr->progs_hlist[kind]);
+	tr->progs_cnt[kind]++;
+	err = bpf_trampoline_update(prog->aux->trampoline);
+	if (err) {
+		hlist_del(&prog->aux->tramp_hlist);
+		tr->progs_cnt[kind]--;
+	}
+out:
+	mutex_unlock(&trampoline_mutex);
+	return err;
+}
+
+/* bpf_trampoline_unlink_prog() should never fail. */
+int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
+{
+	enum bpf_tramp_prog_type kind;
+	struct bpf_trampoline *tr;
+	int err;
+
+	tr = prog->aux->trampoline;
+	kind = bpf_attach_type_to_tramp(prog->expected_attach_type);
+	mutex_lock(&trampoline_mutex);
+	hlist_del(&prog->aux->tramp_hlist);
+	tr->progs_cnt[kind]--;
+	err = bpf_trampoline_update(prog->aux->trampoline);
+	mutex_unlock(&trampoline_mutex);
+	return err;
+}
+
+void bpf_trampoline_put(struct bpf_trampoline *tr)
+{
+	if (!tr)
+		return;
+	mutex_lock(&trampoline_mutex);
+	if (!refcount_dec_and_test(&tr->refcnt))
+		goto out;
+	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FENTRY])))
+		goto out;
+	if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
+		goto out;
+	bpf_jit_free_exec(tr->image);
+	hlist_del(&tr->hlist);
+	kfree(tr);
+out:
+	mutex_unlock(&trampoline_mutex);
+}
+
+/* The logic is similar to BPF_PROG_RUN, but with explicit rcu and preempt that
+ * are needed for trampoline. The macro is split into
+ * call _bpf_prog_enter
+ * call prog->bpf_func
+ * call __bpf_prog_exit
+ */
+u64 notrace __bpf_prog_enter(void)
+{
+	u64 start = 0;
+
+	rcu_read_lock();
+	preempt_disable();
+	if (static_branch_unlikely(&bpf_stats_enabled_key))
+		start = sched_clock();
+	return start;
+}
+
+void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
+{
+	struct bpf_prog_stats *stats;
+
+	if (static_branch_unlikely(&bpf_stats_enabled_key) &&
+	    /* static_key could be enabled in __bpf_prog_enter
+	     * and disabled in __bpf_prog_exit.
+	     * And vice versa.
+	     * Hence check that 'start' is not zero.
+	     */
+	    start) {
+		stats = this_cpu_ptr(prog->aux->stats);
+		u64_stats_update_begin(&stats->syncp);
+		stats->cnt++;
+		stats->nsecs += sched_clock() - start;
+		u64_stats_update_end(&stats->syncp);
+	}
+	preempt_enable();
+	rcu_read_unlock();
+}
+
+int __weak
+arch_prepare_bpf_trampoline(void *image, struct btf_func_model *m, u32 flags,
+			    struct bpf_prog **fentry_progs, int fentry_cnt,
+			    struct bpf_prog **fexit_progs, int fexit_cnt,
+			    void *orig_call)
+{
+	return -ENOTSUPP;
+}
+
+static int __init init_trampolines(void)
+{
+	int i;
+
+	for (i = 0; i < TRAMPOLINE_TABLE_SIZE; i++)
+		INIT_HLIST_HEAD(&trampoline_table[i]);
+	return 0;
+}
+late_initcall(init_trampolines);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2f2374967b36..36542edd4936 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9382,8 +9382,11 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 	struct bpf_prog *prog = env->prog;
 	u32 btf_id = prog->aux->attach_btf_id;
 	const char prefix[] = "btf_trace_";
+	struct bpf_trampoline *tr;
 	const struct btf_type *t;
 	const char *tname;
+	long addr;
+	int ret;
 
 	if (prog->type != BPF_PROG_TYPE_TRACING)
 		return 0;
@@ -9432,6 +9435,42 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		prog->aux->attach_func_proto = t;
 		prog->aux->attach_btf_trace = true;
 		return 0;
+	case BPF_TRACE_FENTRY:
+	case BPF_TRACE_FEXIT:
+		if (!btf_type_is_func(t)) {
+			verbose(env, "attach_btf_id %u is not a function\n",
+				btf_id);
+			return -EINVAL;
+		}
+		t = btf_type_by_id(btf_vmlinux, t->type);
+		if (!btf_type_is_func_proto(t))
+			return -EINVAL;
+		tr = bpf_trampoline_lookup(btf_id);
+		if (!tr)
+			return -ENOMEM;
+		prog->aux->attach_func_name = tname;
+		prog->aux->attach_func_proto = t;
+		if (tr->func.addr) {
+			prog->aux->trampoline = tr;
+			return 0;
+		}
+		ret = btf_distill_func_proto(&env->log, btf_vmlinux, t,
+					     tname, &tr->func.model);
+		if (ret < 0) {
+			bpf_trampoline_put(tr);
+			return ret;
+		}
+		addr = kallsyms_lookup_name(tname);
+		if (!addr) {
+			verbose(env,
+				"The address of function %s cannot be found\n",
+				tname);
+			bpf_trampoline_put(tr);
+			return -ENOENT;
+		}
+		tr->func.addr = (void *)addr;
+		prog->aux->trampoline = tr;
+		return 0;
 	default:
 		return -EINVAL;
 	}
-- 
2.23.0



* [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind()
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (2 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 03/18] bpf: Introduce BPF trampoline Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:05   ` Song Liu
  2019-11-08 19:21   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs Alexei Starovoitov
                   ` (13 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Introduce the btf__find_by_name_kind() helper to search BTF by name and kind,
since the name alone can be ambiguous.
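
Typical usage, as in the next libbpf patch (a sketch with illustrative type
names; error handling omitted):

	/* resolve the BTF id of a kernel function for fentry/fexit attach */
	id = btf__find_by_name_kind(btf, "eth_type_trans", BTF_KIND_FUNC);

	/* resolve a typedef, such as the btf_trace_* types used by tp_btf */
	id = btf__find_by_name_kind(btf, "btf_trace_kfree_skb", BTF_KIND_TYPEDEF);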

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/lib/bpf/btf.c      | 22 ++++++++++++++++++++++
 tools/lib/bpf/btf.h      |  2 ++
 tools/lib/bpf/libbpf.map |  1 +
 3 files changed, 25 insertions(+)

diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 86a1847e4a9f..88efa2bb7137 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -316,6 +316,28 @@ __s32 btf__find_by_name(const struct btf *btf, const char *type_name)
 	return -ENOENT;
 }
 
+__s32 btf__find_by_name_kind(const struct btf *btf, const char *type_name,
+			     __u32 kind)
+{
+	__u32 i;
+
+	if (kind == BTF_KIND_UNKN || !strcmp(type_name, "void"))
+		return 0;
+
+	for (i = 1; i <= btf->nr_types; i++) {
+		const struct btf_type *t = btf->types[i];
+		const char *name;
+
+		if (btf_kind(t) != kind)
+			continue;
+		name = btf__name_by_offset(btf, t->name_off);
+		if (name && !strcmp(type_name, name))
+			return i;
+	}
+
+	return -ENOENT;
+}
+
 void btf__free(struct btf *btf)
 {
 	if (!btf)
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index b18994116a44..d9ac73a02cde 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -72,6 +72,8 @@ LIBBPF_API int btf__finalize_data(struct bpf_object *obj, struct btf *btf);
 LIBBPF_API int btf__load(struct btf *btf);
 LIBBPF_API __s32 btf__find_by_name(const struct btf *btf,
 				   const char *type_name);
+LIBBPF_API __s32 btf__find_by_name_kind(const struct btf *btf,
+					const char *type_name, __u32 kind);
 LIBBPF_API __u32 btf__get_nr_types(const struct btf *btf);
 LIBBPF_API const struct btf_type *btf__type_by_id(const struct btf *btf,
 						  __u32 id);
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 86173cbb159d..cddb0e9d0695 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -202,4 +202,5 @@ LIBBPF_0.0.6 {
 		bpf_program__get_type;
 		bpf_program__is_tracing;
 		bpf_program__set_tracing;
+		btf__find_by_name_kind;
 } LIBBPF_0.0.5;
-- 
2.23.0



* [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (3 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind() Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:12   ` Song Liu
  2019-11-08 19:44   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 06/18] selftest/bpf: Simple test for fentry/fexit Alexei Starovoitov
                   ` (12 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Teach libbpf to recognize tracing program types and attach them to
fentry/fexit.
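
From the application's point of view the flow is roughly the following (a
sketch assuming a program defined with SEC("fentry/bpf_fentry_test1") and
built into an illustrative fentry_test.o object):

	obj = bpf_object__open("fentry_test.o");
	err = bpf_object__load(obj);	/* libbpf resolves the target's BTF id here */
	prog = bpf_object__find_program_by_title(obj, "fentry/bpf_fentry_test1");
	link = bpf_program__attach_trace(prog);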

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/include/uapi/linux/bpf.h |  2 +
 tools/lib/bpf/libbpf.c         | 99 +++++++++++++++++++++++++---------
 tools/lib/bpf/libbpf.h         |  4 ++
 tools/lib/bpf/libbpf.map       |  2 +
 4 files changed, 82 insertions(+), 25 deletions(-)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index df6809a76404..69c200e6e696 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -201,6 +201,8 @@ enum bpf_attach_type {
 	BPF_CGROUP_GETSOCKOPT,
 	BPF_CGROUP_SETSOCKOPT,
 	BPF_TRACE_RAW_TP,
+	BPF_TRACE_FENTRY,
+	BPF_TRACE_FEXIT,
 	__MAX_BPF_ATTACH_TYPE
 };
 
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index fde6cb3e5d41..e9c9961bbb9f 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3859,7 +3859,8 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
 	return 0;
 }
 
-static int libbpf_attach_btf_id_by_name(const char *name, __u32 *btf_id);
+static int libbpf_attach_btf_id_by_name(const char *name,
+					enum bpf_attach_type attach_type);
 
 static struct bpf_object *
 __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
@@ -3913,7 +3914,6 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 	bpf_object__for_each_program(prog, obj) {
 		enum bpf_prog_type prog_type;
 		enum bpf_attach_type attach_type;
-		__u32 btf_id;
 
 		err = libbpf_prog_type_by_name(prog->section_name, &prog_type,
 					       &attach_type);
@@ -3926,10 +3926,11 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 		bpf_program__set_type(prog, prog_type);
 		bpf_program__set_expected_attach_type(prog, attach_type);
 		if (prog_type == BPF_PROG_TYPE_TRACING) {
-			err = libbpf_attach_btf_id_by_name(prog->section_name, &btf_id);
-			if (err)
+			err = libbpf_attach_btf_id_by_name(prog->section_name,
+							   attach_type);
+			if (err <= 0)
 				goto out;
-			prog->attach_btf_id = btf_id;
+			prog->attach_btf_id = err;
 		}
 	}
 
@@ -4928,6 +4929,10 @@ static const struct {
 	BPF_PROG_SEC("raw_tp/",			BPF_PROG_TYPE_RAW_TRACEPOINT),
 	BPF_PROG_BTF("tp_btf/",			BPF_PROG_TYPE_TRACING,
 						BPF_TRACE_RAW_TP),
+	BPF_PROG_BTF("fentry/",			BPF_PROG_TYPE_TRACING,
+						BPF_TRACE_FENTRY),
+	BPF_PROG_BTF("fexit/",			BPF_PROG_TYPE_TRACING,
+						BPF_TRACE_FEXIT),
 	BPF_PROG_SEC("xdp",			BPF_PROG_TYPE_XDP),
 	BPF_PROG_SEC("perf_event",		BPF_PROG_TYPE_PERF_EVENT),
 	BPF_PROG_SEC("lwt_in",			BPF_PROG_TYPE_LWT_IN),
@@ -5045,43 +5050,56 @@ int libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type,
 }
 
 #define BTF_PREFIX "btf_trace_"
-static int libbpf_attach_btf_id_by_name(const char *name, __u32 *btf_id)
+int libbpf_find_vmlinux_btf_id(const char *name,
+			       enum bpf_attach_type attach_type)
 {
 	struct btf *btf = bpf_core_find_kernel_btf();
-	char raw_tp_btf_name[128] = BTF_PREFIX;
-	char *dst = raw_tp_btf_name + sizeof(BTF_PREFIX) - 1;
-	int ret, i, err = -EINVAL;
+	char raw_tp_btf[128] = BTF_PREFIX;
+	char *dst = raw_tp_btf + sizeof(BTF_PREFIX) - 1;
+	const char *btf_name;
+	int err = -EINVAL;
+	u32 kind;
 
 	if (IS_ERR(btf)) {
 		pr_warn("vmlinux BTF is not found\n");
 		return -EINVAL;
 	}
 
+	if (attach_type == BPF_TRACE_RAW_TP) {
+		/* prepend "btf_trace_" prefix per kernel convention */
+		strncat(dst, name, sizeof(raw_tp_btf) - sizeof(BTF_PREFIX));
+		btf_name = raw_tp_btf;
+		kind = BTF_KIND_TYPEDEF;
+	} else {
+		btf_name = name;
+		kind = BTF_KIND_FUNC;
+	}
+	err = btf__find_by_name_kind(btf, btf_name, kind);
+	btf__free(btf);
+	return err;
+}
+
+static int libbpf_attach_btf_id_by_name(const char *name,
+					enum bpf_attach_type attach_type)
+{
+	int i, err;
+
 	if (!name)
-		goto out;
+		return -EINVAL;
 
 	for (i = 0; i < ARRAY_SIZE(section_names); i++) {
 		if (!section_names[i].is_attach_btf)
 			continue;
 		if (strncmp(name, section_names[i].sec, section_names[i].len))
 			continue;
-		/* prepend "btf_trace_" prefix per kernel convention */
-		strncat(dst, name + section_names[i].len,
-			sizeof(raw_tp_btf_name) - sizeof(BTF_PREFIX));
-		ret = btf__find_by_name(btf, raw_tp_btf_name);
-		if (ret <= 0) {
-			pr_warn("%s is not found in vmlinux BTF\n", dst);
-			goto out;
-		}
-		*btf_id = ret;
-		err = 0;
-		goto out;
+		err = libbpf_find_vmlinux_btf_id(name + section_names[i].len,
+						 attach_type);
+		if (err <= 0)
+			pr_warn("%s is not found in vmlinux BTF\n", name);
+		return err;
 	}
 	pr_warn("failed to identify btf_id based on ELF section name '%s'\n", name);
-	err = -ESRCH;
-out:
-	btf__free(btf);
-	return err;
+	return -ESRCH;
 }
 
 int libbpf_attach_type_by_name(const char *name,
@@ -5709,6 +5727,37 @@ struct bpf_link *bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
 	return (struct bpf_link *)link;
 }
 
+struct bpf_link *bpf_program__attach_trace(struct bpf_program *prog)
+{
+	char errmsg[STRERR_BUFSIZE];
+	struct bpf_link_fd *link;
+	int prog_fd, pfd;
+
+	prog_fd = bpf_program__fd(prog);
+	if (prog_fd < 0) {
+		pr_warn("program '%s': can't attach before loaded\n",
+			bpf_program__title(prog, false));
+		return ERR_PTR(-EINVAL);
+	}
+
+	link = malloc(sizeof(*link));
+	if (!link)
+		return ERR_PTR(-ENOMEM);
+	link->link.destroy = &bpf_link__destroy_fd;
+
+	pfd = bpf_raw_tracepoint_open(NULL, prog_fd);
+	if (pfd < 0) {
+		pfd = -errno;
+		free(link);
+		pr_warn("program '%s': failed to attach to trace: %s\n",
+			bpf_program__title(prog, false),
+			libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
+		return ERR_PTR(pfd);
+	}
+	link->fd = pfd;
+	return (struct bpf_link *)link;
+}
+
 enum bpf_perf_event_ret
 bpf_perf_event_read_simple(void *mmap_mem, size_t mmap_size, size_t page_size,
 			   void **copy_mem, size_t *copy_size,
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 6ddc0419337b..0fe47807916d 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -188,6 +188,8 @@ libbpf_prog_type_by_name(const char *name, enum bpf_prog_type *prog_type,
 			 enum bpf_attach_type *expected_attach_type);
 LIBBPF_API int libbpf_attach_type_by_name(const char *name,
 					  enum bpf_attach_type *attach_type);
+LIBBPF_API int libbpf_find_vmlinux_btf_id(const char *name,
+					  enum bpf_attach_type attach_type);
 
 /* Accessors of bpf_program */
 struct bpf_program;
@@ -248,6 +250,8 @@ LIBBPF_API struct bpf_link *
 bpf_program__attach_raw_tracepoint(struct bpf_program *prog,
 				   const char *tp_name);
 
+LIBBPF_API struct bpf_link *
+bpf_program__attach_trace(struct bpf_program *prog);
 struct bpf_insn;
 
 /*
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index cddb0e9d0695..52d2c9eeb0a7 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -198,9 +198,11 @@ LIBBPF_0.0.6 {
 		bpf_map__set_pin_path;
 		bpf_object__open_file;
 		bpf_object__open_mem;
+		bpf_program__attach_trace;
 		bpf_program__get_expected_attach_type;
 		bpf_program__get_type;
 		bpf_program__is_tracing;
 		bpf_program__set_tracing;
 		btf__find_by_name_kind;
+		libbpf_find_vmlinux_btf_id;
 } LIBBPF_0.0.5;
-- 
2.23.0



* [PATCH v3 bpf-next 06/18] selftest/bpf: Simple test for fentry/fexit
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (4 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 07/18] bpf: Add kernel test functions for fentry testing Alexei Starovoitov
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a simple test for fentry and fexit programs around eth_type_trans.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Song Liu <songliubraving@fb.com>
---
 .../selftests/bpf/prog_tests/kfree_skb.c      | 39 ++++++++++++--
 tools/testing/selftests/bpf/progs/kfree_skb.c | 52 +++++++++++++++++++
 2 files changed, 88 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
index 55d36856e621..7507c8f689bc 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c
@@ -60,15 +60,17 @@ void test_kfree_skb(void)
 		.file = "./kfree_skb.o",
 	};
 
+	struct bpf_link *link = NULL, *link_fentry = NULL, *link_fexit = NULL;
+	struct bpf_map *perf_buf_map, *global_data;
+	struct bpf_program *prog, *fentry, *fexit;
 	struct bpf_object *obj, *obj2 = NULL;
 	struct perf_buffer_opts pb_opts = {};
 	struct perf_buffer *pb = NULL;
-	struct bpf_link *link = NULL;
-	struct bpf_map *perf_buf_map;
-	struct bpf_program *prog;
 	int err, kfree_skb_fd;
 	bool passed = false;
 	__u32 duration = 0;
+	const int zero = 0;
+	bool test_ok[2];
 
 	err = bpf_prog_load("./test_pkt_access.o", BPF_PROG_TYPE_SCHED_CLS,
 			    &obj, &tattr.prog_fd);
@@ -82,9 +84,28 @@ void test_kfree_skb(void)
 	prog = bpf_object__find_program_by_title(obj2, "tp_btf/kfree_skb");
 	if (CHECK(!prog, "find_prog", "prog kfree_skb not found\n"))
 		goto close_prog;
+	fentry = bpf_object__find_program_by_title(obj2, "fentry/eth_type_trans");
+	if (CHECK(!fentry, "find_prog", "prog eth_type_trans not found\n"))
+		goto close_prog;
+	fexit = bpf_object__find_program_by_title(obj2, "fexit/eth_type_trans");
+	if (CHECK(!fexit, "find_prog", "prog eth_type_trans not found\n"))
+		goto close_prog;
+
+	global_data = bpf_object__find_map_by_name(obj2, "kfree_sk.bss");
+	if (CHECK(!global_data, "find global data", "not found\n"))
+		goto close_prog;
+
 	link = bpf_program__attach_raw_tracepoint(prog, NULL);
 	if (CHECK(IS_ERR(link), "attach_raw_tp", "err %ld\n", PTR_ERR(link)))
 		goto close_prog;
+	link_fentry = bpf_program__attach_trace(fentry);
+	if (CHECK(IS_ERR(link_fentry), "attach fentry", "err %ld\n",
+		  PTR_ERR(link_fentry)))
+		goto close_prog;
+	link_fexit = bpf_program__attach_trace(fexit);
+	if (CHECK(IS_ERR(link_fexit), "attach fexit", "err %ld\n",
+		  PTR_ERR(link_fexit)))
+		goto close_prog;
 
 	perf_buf_map = bpf_object__find_map_by_name(obj2, "perf_buf_map");
 	if (CHECK(!perf_buf_map, "find_perf_buf_map", "not found\n"))
@@ -108,14 +129,26 @@ void test_kfree_skb(void)
 	err = perf_buffer__poll(pb, 100);
 	if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err))
 		goto close_prog;
+
 	/* make sure kfree_skb program was triggered
 	 * and it sent expected skb into ring buffer
 	 */
 	CHECK_FAIL(!passed);
+
+	err = bpf_map_lookup_elem(bpf_map__fd(global_data), &zero, test_ok);
+	if (CHECK(err, "get_result",
+		  "failed to get output data: %d\n", err))
+		goto close_prog;
+
+	CHECK_FAIL(!test_ok[0] || !test_ok[1]);
 close_prog:
 	perf_buffer__free(pb);
 	if (!IS_ERR_OR_NULL(link))
 		bpf_link__destroy(link);
+	if (!IS_ERR_OR_NULL(link_fentry))
+		bpf_link__destroy(link_fentry);
+	if (!IS_ERR_OR_NULL(link_fexit))
+		bpf_link__destroy(link_fexit);
 	bpf_object__close(obj);
 	bpf_object__close(obj2);
 }
diff --git a/tools/testing/selftests/bpf/progs/kfree_skb.c b/tools/testing/selftests/bpf/progs/kfree_skb.c
index f769fdbf6725..dcc9feac8338 100644
--- a/tools/testing/selftests/bpf/progs/kfree_skb.c
+++ b/tools/testing/selftests/bpf/progs/kfree_skb.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 // Copyright (c) 2019 Facebook
 #include <linux/bpf.h>
+#include <stdbool.h>
 #include "bpf_helpers.h"
 #include "bpf_endian.h"
 
@@ -116,3 +117,54 @@ int trace_kfree_skb(struct trace_kfree_skb *ctx)
 		       &meta, sizeof(meta));
 	return 0;
 }
+
+static volatile struct {
+	bool fentry_test_ok;
+	bool fexit_test_ok;
+} result;
+
+struct eth_type_trans_args {
+	struct sk_buff *skb;
+	struct net_device *dev;
+	unsigned short protocol; /* return value available to fexit progs */
+};
+
+SEC("fentry/eth_type_trans")
+int fentry_eth_type_trans(struct eth_type_trans_args *ctx)
+{
+	struct sk_buff *skb = ctx->skb;
+	struct net_device *dev = ctx->dev;
+	int len, ifindex;
+
+	__builtin_preserve_access_index(({
+		len = skb->len;
+		ifindex = dev->ifindex;
+	}));
+
+	/* fentry sees full packet including L2 header */
+	if (len != 74 || ifindex != 1)
+		return 0;
+	result.fentry_test_ok = true;
+	return 0;
+}
+
+SEC("fexit/eth_type_trans")
+int fexit_eth_type_trans(struct eth_type_trans_args *ctx)
+{
+	struct sk_buff *skb = ctx->skb;
+	struct net_device *dev = ctx->dev;
+	int len, ifindex;
+
+	__builtin_preserve_access_index(({
+		len = skb->len;
+		ifindex = dev->ifindex;
+	}));
+
+	/* fexit sees packet without L2 header that eth_type_trans should have
+	 * consumed.
+	 */
+	if (len != 60 || ctx->protocol != bpf_htons(0x86dd) || ifindex != 1)
+		return 0;
+	result.fexit_test_ok = true;
+	return 0;
+}
-- 
2.23.0



* [PATCH v3 bpf-next 07/18] bpf: Add kernel test functions for fentry testing
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (5 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 06/18] selftest/bpf: Simple test for fentry/fexit Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 08/18] selftests/bpf: Add test for BPF trampoline Alexei Starovoitov
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a few kernel functions with various numbers, types and sizes of
arguments for BPF trampoline testing, in order to cover different
calling conventions.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 net/bpf/test_run.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 0be4497cb832..62933279fbba 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -105,6 +105,40 @@ static int bpf_test_finish(const union bpf_attr *kattr,
 	return err;
 }
 
+/* Integer types of various sizes and pointer combinations cover variety of
+ * architecture dependent calling conventions. 7+ can be supported in the
+ * future.
+ */
+int noinline bpf_fentry_test1(int a)
+{
+	return a + 1;
+}
+
+int noinline bpf_fentry_test2(int a, u64 b)
+{
+	return a + b;
+}
+
+int noinline bpf_fentry_test3(char a, int b, u64 c)
+{
+	return a + b + c;
+}
+
+int noinline bpf_fentry_test4(void *a, char b, int c, u64 d)
+{
+	return (long)a + b + c + d;
+}
+
+int noinline bpf_fentry_test5(u64 a, void *b, short c, int d, u64 e)
+{
+	return a + (long)b + c + d + e;
+}
+
+int noinline bpf_fentry_test6(u64 a, void *b, short c, int d, void *e, u64 f)
+{
+	return a + (long)b + c + d + (long)e + f;
+}
+
 static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
 			   u32 headroom, u32 tailroom)
 {
@@ -122,6 +156,13 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
 		kfree(data);
 		return ERR_PTR(-EFAULT);
 	}
+	if (bpf_fentry_test1(1) != 2 ||
+	    bpf_fentry_test2(2, 3) != 5 ||
+	    bpf_fentry_test3(4, 5, 6) != 15 ||
+	    bpf_fentry_test4((void *)7, 8, 9, 10) != 34 ||
+	    bpf_fentry_test5(11, (void *)12, 13, 14, 15) != 65 ||
+	    bpf_fentry_test6(16, (void *)17, 18, 19, (void *)20, 21) != 111)
+		return ERR_PTR(-EFAULT);
 	return data;
 }
 
-- 
2.23.0



* [PATCH v3 bpf-next 08/18] selftests/bpf: Add test for BPF trampoline
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (6 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 07/18] bpf: Add kernel test functions for fentry testing Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 09/18] selftests/bpf: Add fexit tests " Alexei Starovoitov
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a sanity test for the BPF trampoline that checks kernel functions
with up to 6 arguments of different sizes.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 tools/lib/bpf/bpf_helpers.h                   | 13 +++
 .../selftests/bpf/prog_tests/fentry_test.c    | 64 +++++++++++++
 .../testing/selftests/bpf/progs/fentry_test.c | 90 +++++++++++++++++++
 3 files changed, 167 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_test.c

diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index 0c7d28292898..c63ab1add126 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -44,4 +44,17 @@ enum libbpf_pin_type {
 	LIBBPF_PIN_BY_NAME,
 };
 
+/* The following types should be used by BPF_PROG_TYPE_TRACING program to
+ * access kernel function arguments. BPF trampoline and raw tracepoints
+ * typecast arguments to 'unsigned long long'.
+ */
+typedef int __attribute__((aligned(8))) ks32;
+typedef char __attribute__((aligned(8))) ks8;
+typedef short __attribute__((aligned(8))) ks16;
+typedef long long __attribute__((aligned(8))) ks64;
+typedef unsigned int __attribute__((aligned(8))) ku32;
+typedef unsigned char __attribute__((aligned(8))) ku8;
+typedef unsigned short __attribute__((aligned(8))) ku16;
+typedef unsigned long long __attribute__((aligned(8))) ku64;
+
 #endif
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_test.c b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
new file mode 100644
index 000000000000..9fb103193878
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_test.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <test_progs.h>
+
+void test_fentry_test(void)
+{
+	struct bpf_prog_load_attr attr = {
+		.file = "./fentry_test.o",
+	};
+
+	char prog_name[] = "fentry/bpf_fentry_testX";
+	struct bpf_object *obj = NULL, *pkt_obj;
+	int err, pkt_fd, kfree_skb_fd, i;
+	struct bpf_link *link[6] = {};
+	struct bpf_program *prog[6];
+	__u32 duration, retval;
+	struct bpf_map *data_map;
+	const int zero = 0;
+	u64 result[6];
+
+	err = bpf_prog_load("./test_pkt_access.o", BPF_PROG_TYPE_SCHED_CLS,
+			    &pkt_obj, &pkt_fd);
+	if (CHECK(err, "prog_load sched cls", "err %d errno %d\n", err, errno))
+		return;
+	err = bpf_prog_load_xattr(&attr, &obj, &kfree_skb_fd);
+	if (CHECK(err, "prog_load fail", "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	for (i = 0; i < 6; i++) {
+		prog_name[sizeof(prog_name) - 2] = '1' + i;
+		prog[i] = bpf_object__find_program_by_title(obj, prog_name);
+		if (CHECK(!prog[i], "find_prog", "prog %s not found\n", prog_name))
+			goto close_prog;
+		link[i] = bpf_program__attach_trace(prog[i]);
+		if (CHECK(IS_ERR(link[i]), "attach_trace", "failed to link\n"))
+			goto close_prog;
+	}
+	data_map = bpf_object__find_map_by_name(obj, "fentry_t.bss");
+	if (CHECK(!data_map, "find_data_map", "data map not found\n"))
+		goto close_prog;
+
+	err = bpf_prog_test_run(pkt_fd, 1, &pkt_v6, sizeof(pkt_v6),
+				NULL, NULL, &retval, &duration);
+	CHECK(err || retval, "ipv6",
+	      "err %d errno %d retval %d duration %d\n",
+	      err, errno, retval, duration);
+
+	err = bpf_map_lookup_elem(bpf_map__fd(data_map), &zero, &result);
+	if (CHECK(err, "get_result",
+		  "failed to get output data: %d\n", err))
+		goto close_prog;
+
+	for (i = 0; i < 6; i++)
+		if (CHECK(result[i] != 1, "result", "bpf_fentry_test%d failed err %ld\n",
+			  i + 1, result[i]))
+			goto close_prog;
+
+close_prog:
+	for (i = 0; i < 6; i++)
+		if (!IS_ERR_OR_NULL(link[i]))
+			bpf_link__destroy(link[i]);
+	bpf_object__close(obj);
+	bpf_object__close(pkt_obj);
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_test.c b/tools/testing/selftests/bpf/progs/fentry_test.c
new file mode 100644
index 000000000000..545788bf8d50
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_test.c
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct test1 {
+	ks32 a;
+};
+static volatile __u64 test1_result;
+SEC("fentry/bpf_fentry_test1")
+int test1(struct test1 *ctx)
+{
+	test1_result = ctx->a == 1;
+	return 0;
+}
+
+struct test2 {
+	ks32 a;
+	ku64 b;
+};
+static volatile __u64 test2_result;
+SEC("fentry/bpf_fentry_test2")
+int test2(struct test2 *ctx)
+{
+	test2_result = ctx->a == 2 && ctx->b == 3;
+	return 0;
+}
+
+struct test3 {
+	ks8 a;
+	ks32 b;
+	ku64 c;
+};
+static volatile __u64 test3_result;
+SEC("fentry/bpf_fentry_test3")
+int test3(struct test3 *ctx)
+{
+	test3_result = ctx->a == 4 && ctx->b == 5 && ctx->c == 6;
+	return 0;
+}
+
+struct test4 {
+	void *a;
+	ks8 b;
+	ks32 c;
+	ku64 d;
+};
+static volatile __u64 test4_result;
+SEC("fentry/bpf_fentry_test4")
+int test4(struct test4 *ctx)
+{
+	test4_result = ctx->a == (void *)7 && ctx->b == 8 && ctx->c == 9 &&
+		ctx->d == 10;
+	return 0;
+}
+
+struct test5 {
+	ku64 a;
+	void *b;
+	ks16 c;
+	ks32 d;
+	ku64 e;
+};
+static volatile __u64 test5_result;
+SEC("fentry/bpf_fentry_test5")
+int test5(struct test5 *ctx)
+{
+	test5_result = ctx->a == 11 && ctx->b == (void *)12 && ctx->c == 13 &&
+		ctx->d == 14 && ctx->e == 15;
+	return 0;
+}
+
+struct test6 {
+	ku64 a;
+	void *b;
+	ks16 c;
+	ks32 d;
+	void *e;
+	ks64 f;
+};
+static volatile __u64 test6_result;
+SEC("fentry/bpf_fentry_test6")
+int test6(struct test6 *ctx)
+{
+	test6_result = ctx->a == 16 && ctx->b == (void *)17 && ctx->c == 18 &&
+		ctx->d == 19 && ctx->e == (void *)20 && ctx->f == 21;
+	return 0;
+}
-- 
2.23.0



* [PATCH v3 bpf-next 09/18] selftests/bpf: Add fexit tests for BPF trampoline
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (7 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 08/18] selftests/bpf: Add test for BPF trampoline Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 10/18] selftests/bpf: Add combined fentry/fexit test Alexei Starovoitov
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add fexit tests for the BPF trampoline that check kernel functions
with up to 6 arguments of different sizes and their return values.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
---
 .../selftests/bpf/prog_tests/fexit_test.c     | 64 ++++++++++++
 .../testing/selftests/bpf/progs/fexit_test.c  | 98 +++++++++++++++++++
 2 files changed, 162 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fexit_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_test.c b/tools/testing/selftests/bpf/prog_tests/fexit_test.c
new file mode 100644
index 000000000000..f99013222c74
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_test.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <test_progs.h>
+
+void test_fexit_test(void)
+{
+	struct bpf_prog_load_attr attr = {
+		.file = "./fexit_test.o",
+	};
+
+	char prog_name[] = "fexit/bpf_fentry_testX";
+	struct bpf_object *obj = NULL, *pkt_obj;
+	int err, pkt_fd, kfree_skb_fd, i;
+	struct bpf_link *link[6] = {};
+	struct bpf_program *prog[6];
+	__u32 duration, retval;
+	struct bpf_map *data_map;
+	const int zero = 0;
+	u64 result[6];
+
+	err = bpf_prog_load("./test_pkt_access.o", BPF_PROG_TYPE_SCHED_CLS,
+			    &pkt_obj, &pkt_fd);
+	if (CHECK(err, "prog_load sched cls", "err %d errno %d\n", err, errno))
+		return;
+	err = bpf_prog_load_xattr(&attr, &obj, &kfree_skb_fd);
+	if (CHECK(err, "prog_load fail", "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	for (i = 0; i < 6; i++) {
+		prog_name[sizeof(prog_name) - 2] = '1' + i;
+		prog[i] = bpf_object__find_program_by_title(obj, prog_name);
+		if (CHECK(!prog[i], "find_prog", "prog %s not found\n", prog_name))
+			goto close_prog;
+		link[i] = bpf_program__attach_trace(prog[i]);
+		if (CHECK(IS_ERR(link[i]), "attach_trace", "failed to link\n"))
+			goto close_prog;
+	}
+	data_map = bpf_object__find_map_by_name(obj, "fexit_te.bss");
+	if (CHECK(!data_map, "find_data_map", "data map not found\n"))
+		goto close_prog;
+
+	err = bpf_prog_test_run(pkt_fd, 1, &pkt_v6, sizeof(pkt_v6),
+				NULL, NULL, &retval, &duration);
+	CHECK(err || retval, "ipv6",
+	      "err %d errno %d retval %d duration %d\n",
+	      err, errno, retval, duration);
+
+	err = bpf_map_lookup_elem(bpf_map__fd(data_map), &zero, &result);
+	if (CHECK(err, "get_result",
+		  "failed to get output data: %d\n", err))
+		goto close_prog;
+
+	for (i = 0; i < 6; i++)
+		if (CHECK(result[i] != 1, "result", "bpf_fentry_test%d failed err %ld\n",
+			  i + 1, result[i]))
+			goto close_prog;
+
+close_prog:
+	for (i = 0; i < 6; i++)
+		if (!IS_ERR_OR_NULL(link[i]))
+			bpf_link__destroy(link[i]);
+	bpf_object__close(obj);
+	bpf_object__close(pkt_obj);
+}
diff --git a/tools/testing/selftests/bpf/progs/fexit_test.c b/tools/testing/selftests/bpf/progs/fexit_test.c
new file mode 100644
index 000000000000..8b98b1a51784
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fexit_test.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct test1 {
+	ks32 a;
+	ks32 ret;
+};
+static volatile __u64 test1_result;
+SEC("fexit/bpf_fentry_test1")
+int test1(struct test1 *ctx)
+{
+	test1_result = ctx->a == 1 && ctx->ret == 2;
+	return 0;
+}
+
+struct test2 {
+	ks32 a;
+	ku64 b;
+	ks32 ret;
+};
+static volatile __u64 test2_result;
+SEC("fexit/bpf_fentry_test2")
+int test2(struct test2 *ctx)
+{
+	test2_result = ctx->a == 2 && ctx->b == 3 && ctx->ret == 5;
+	return 0;
+}
+
+struct test3 {
+	ks8 a;
+	ks32 b;
+	ku64 c;
+	ks32 ret;
+};
+static volatile __u64 test3_result;
+SEC("fexit/bpf_fentry_test3")
+int test3(struct test3 *ctx)
+{
+	test3_result = ctx->a == 4 && ctx->b == 5 && ctx->c == 6 &&
+		ctx->ret == 15;
+	return 0;
+}
+
+struct test4 {
+	void *a;
+	ks8 b;
+	ks32 c;
+	ku64 d;
+	ks32 ret;
+};
+static volatile __u64 test4_result;
+SEC("fexit/bpf_fentry_test4")
+int test4(struct test4 *ctx)
+{
+	test4_result = ctx->a == (void *)7 && ctx->b == 8 && ctx->c == 9 &&
+		ctx->d == 10 && ctx->ret == 34;
+	return 0;
+}
+
+struct test5 {
+	ku64 a;
+	void *b;
+	ks16 c;
+	ks32 d;
+	ku64 e;
+	ks32 ret;
+};
+static volatile __u64 test5_result;
+SEC("fexit/bpf_fentry_test5")
+int test5(struct test5 *ctx)
+{
+	test5_result = ctx->a == 11 && ctx->b == (void *)12 && ctx->c == 13 &&
+		ctx->d == 14 && ctx->e == 15 && ctx->ret == 65;
+	return 0;
+}
+
+struct test6 {
+	ku64 a;
+	void *b;
+	ks16 c;
+	ks32 d;
+	void *e;
+	ks64 f;
+	ks32 ret;
+};
+static volatile __u64 test6_result;
+SEC("fexit/bpf_fentry_test6")
+int test6(struct test6 *ctx)
+{
+	test6_result = ctx->a == 16 && ctx->b == (void *)17 && ctx->c == 18 &&
+		ctx->d == 19 && ctx->e == (void *)20 && ctx->f == 21 &&
+		ctx->ret == 111;
+	return 0;
+}
-- 
2.23.0



* [PATCH v3 bpf-next 10/18] selftests/bpf: Add combined fentry/fexit test
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (8 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 09/18] selftests/bpf: Add fexit tests " Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:14   ` Song Liu
  2019-11-08  6:40 ` [PATCH v3 bpf-next 11/18] selftests/bpf: Add stress test for maximum number of progs Alexei Starovoitov
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a combined fentry/fexit test.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 .../selftests/bpf/prog_tests/fentry_fexit.c   | 90 +++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit.c

diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
new file mode 100644
index 000000000000..40bcff2cc274
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit.c
@@ -0,0 +1,90 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <test_progs.h>
+
+void test_fentry_fexit(void)
+{
+	struct bpf_prog_load_attr attr_fentry = {
+		.file = "./fentry_test.o",
+	};
+	struct bpf_prog_load_attr attr_fexit = {
+		.file = "./fexit_test.o",
+	};
+
+	struct bpf_object *obj_fentry = NULL, *obj_fexit = NULL, *pkt_obj;
+	struct bpf_map *data_map_fentry, *data_map_fexit;
+	char fentry_name[] = "fentry/bpf_fentry_testX";
+	char fexit_name[] = "fexit/bpf_fentry_testX";
+	int err, pkt_fd, kfree_skb_fd, i;
+	struct bpf_link *link[12] = {};
+	struct bpf_program *prog[12];
+	__u32 duration, retval;
+	const int zero = 0;
+	u64 result[12];
+
+	err = bpf_prog_load("./test_pkt_access.o", BPF_PROG_TYPE_SCHED_CLS,
+			    &pkt_obj, &pkt_fd);
+	if (CHECK(err, "prog_load sched cls", "err %d errno %d\n", err, errno))
+		return;
+	err = bpf_prog_load_xattr(&attr_fentry, &obj_fentry, &kfree_skb_fd);
+	if (CHECK(err, "prog_load fail", "err %d errno %d\n", err, errno))
+		goto close_prog;
+	err = bpf_prog_load_xattr(&attr_fexit, &obj_fexit, &kfree_skb_fd);
+	if (CHECK(err, "prog_load fail", "err %d errno %d\n", err, errno))
+		goto close_prog;
+
+	for (i = 0; i < 6; i++) {
+		fentry_name[sizeof(fentry_name) - 2] = '1' + i;
+		prog[i] = bpf_object__find_program_by_title(obj_fentry, fentry_name);
+		if (CHECK(!prog[i], "find_prog", "prog %s not found\n", fentry_name))
+			goto close_prog;
+		link[i] = bpf_program__attach_trace(prog[i]);
+		if (CHECK(IS_ERR(link[i]), "attach_trace", "failed to link\n"))
+			goto close_prog;
+	}
+	data_map_fentry = bpf_object__find_map_by_name(obj_fentry, "fentry_t.bss");
+	if (CHECK(!data_map_fentry, "find_data_map", "data map not found\n"))
+		goto close_prog;
+
+	for (i = 6; i < 12; i++) {
+		fexit_name[sizeof(fexit_name) - 2] = '1' + i - 6;
+		prog[i] = bpf_object__find_program_by_title(obj_fexit, fexit_name);
+		if (CHECK(!prog[i], "find_prog", "prog %s not found\n", fexit_name))
+			goto close_prog;
+		link[i] = bpf_program__attach_trace(prog[i]);
+		if (CHECK(IS_ERR(link[i]), "attach_trace", "failed to link\n"))
+			goto close_prog;
+	}
+	data_map_fexit = bpf_object__find_map_by_name(obj_fexit, "fexit_te.bss");
+	if (CHECK(!data_map_fexit, "find_data_map", "data map not found\n"))
+		goto close_prog;
+
+	err = bpf_prog_test_run(pkt_fd, 1, &pkt_v6, sizeof(pkt_v6),
+				NULL, NULL, &retval, &duration);
+	CHECK(err || retval, "ipv6",
+	      "err %d errno %d retval %d duration %d\n",
+	      err, errno, retval, duration);
+
+	err = bpf_map_lookup_elem(bpf_map__fd(data_map_fentry), &zero, &result);
+	if (CHECK(err, "get_result",
+		  "failed to get output data: %d\n", err))
+		goto close_prog;
+
+	err = bpf_map_lookup_elem(bpf_map__fd(data_map_fexit), &zero, result + 6);
+	if (CHECK(err, "get_result",
+		  "failed to get output data: %d\n", err))
+		goto close_prog;
+
+	for (i = 0; i < 12; i++)
+		if (CHECK(result[i] != 1, "result", "bpf_fentry_test%d failed err %ld\n",
+			  i % 6 + 1, result[i]))
+			goto close_prog;
+
+close_prog:
+	for (i = 0; i < 12; i++)
+		if (!IS_ERR_OR_NULL(link[i]))
+			bpf_link__destroy(link[i]);
+	bpf_object__close(obj_fentry);
+	bpf_object__close(obj_fexit);
+	bpf_object__close(pkt_obj);
+}
-- 
2.23.0



* [PATCH v3 bpf-next 11/18] selftests/bpf: Add stress test for maximum number of progs
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (9 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 10/18] selftests/bpf: Add combined fentry/fexit test Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:24   ` Song Liu
  2019-11-08  6:40 ` [PATCH v3 bpf-next 12/18] bpf: Reserve space for BPF trampoline in BPF programs Alexei Starovoitov
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a stress test for the maximum number of attached BPF programs per BPF trampoline.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 .../selftests/bpf/prog_tests/fexit_stress.c   | 77 +++++++++++++++++++
 1 file changed, 77 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_stress.c

diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
new file mode 100644
index 000000000000..74915eb4b58a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <test_progs.h>
+#include <linux/nbd.h>
+
+/* x86-64 fits 55 JITed and 43 interpreted progs into half page */
+#define CNT 40
+
+void test_fexit_stress(void)
+{
+	char test_skb[128] = {};
+	int fexit_fd[CNT] = {};
+	int link_fd[CNT] = {};
+	__u32 duration = 0;
+	char error[4096];
+	__u32 prog_ret;
+	int err, i, filter_fd;
+
+	const struct bpf_insn trace_program[] = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	};
+
+	struct bpf_load_program_attr load_attr = {
+		.prog_type = BPF_PROG_TYPE_TRACING,
+		.license = "GPL",
+		.insns = trace_program,
+		.insns_cnt = sizeof(trace_program) / sizeof(struct bpf_insn),
+		.expected_attach_type = BPF_TRACE_FEXIT,
+	};
+
+	const struct bpf_insn skb_program[] = {
+		BPF_MOV64_IMM(BPF_REG_0, 0),
+		BPF_EXIT_INSN(),
+	};
+
+	struct bpf_load_program_attr skb_load_attr = {
+		.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+		.license = "GPL",
+		.insns = skb_program,
+		.insns_cnt = sizeof(skb_program) / sizeof(struct bpf_insn),
+	};
+
+	err = libbpf_find_vmlinux_btf_id("bpf_fentry_test1",
+					 load_attr.expected_attach_type);
+	if (CHECK(err <= 0, "find_vmlinux_btf_id", "failed: %d\n", err))
+		goto out;
+	load_attr.attach_btf_id = err;
+
+	for (i = 0; i < CNT; i++) {
+		fexit_fd[i] = bpf_load_program_xattr(&load_attr, error, sizeof(error));
+		if (CHECK(fexit_fd[i] < 0, "fexit loaded",
+			  "failed: %d errno %d\n", fexit_fd[i], errno))
+			goto out;
+		link_fd[i] = bpf_raw_tracepoint_open(NULL, fexit_fd[i]);
+		if (CHECK(link_fd[i] < 0, "fexit attach failed",
+			  "prog %d failed: %d err %d\n", i, link_fd[i], errno))
+			goto out;
+	}
+
+	filter_fd = bpf_load_program_xattr(&skb_load_attr, error, sizeof(error));
+	if (CHECK(filter_fd < 0, "test_program_loaded", "failed: %d errno %d\n",
+		  filter_fd, errno))
+		goto out;
+
+	err = bpf_prog_test_run(filter_fd, 1, test_skb, sizeof(test_skb), 0,
+				0, &prog_ret, 0);
+	close(filter_fd);
+	CHECK_FAIL(err);
+out:
+	for (i = 0; i < CNT; i++) {
+		if (link_fd[i])
+			close(link_fd[i]);
+		if (fexit_fd[i])
+			close(fexit_fd[i]);
+	}
+}
-- 
2.23.0



* [PATCH v3 bpf-next 12/18] bpf: Reserve space for BPF trampoline in BPF programs
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (10 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 11/18] selftests/bpf: Add stress test for maximum number of progs Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:25   ` Song Liu
  2019-11-08  6:40 ` [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id() Alexei Starovoitov
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

The BPF trampoline can be made to work with the existing 5 bytes of BPF program
prologue, but let's add 5 bytes of NOPs to the beginning of every JITed BPF
program to make the BPF trampoline's job easier. They can be removed in the future.
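
For illustration, the start of every JITed image now looks roughly like this
(a sketch; the actual bytes come from ideal_nops[NOP_ATOMIC5] and
emit_prologue() below):

	nopl 0x0(%rax,%rax,1)	/* 5-byte atomic NOP reserved for the trampoline */
	push %rbp
	mov  %rsp, %rbp
	sub  $stack_depth, %rsp
	...

When an fentry/fexit program is later attached to this program,
bpf_arch_text_poke() can turn the NOP into a call to that program's
trampoline without moving the rest of the prologue.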

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
---
 arch/x86/net/bpf_jit_comp.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 9ccf7cd51a21..3123d4ddc6ca 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -206,7 +206,7 @@ struct jit_context {
 /* number of bytes emit_call() needs to generate call instruction */
 #define X86_CALL_SIZE		5
 
-#define PROLOGUE_SIZE		20
+#define PROLOGUE_SIZE		25
 
 /*
  * Emit x86-64 prologue code for BPF program and check its size.
@@ -215,8 +215,13 @@ struct jit_context {
 static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf)
 {
 	u8 *prog = *pprog;
-	int cnt = 0;
+	int cnt = X86_CALL_SIZE;
 
+	/* BPF trampoline can be made to work without these nops,
+	 * but let's waste 5 bytes for now and optimize later
+	 */
+	memcpy(prog, ideal_nops[NOP_ATOMIC5], cnt);
+	prog += cnt;
 	EMIT1(0x55);             /* push rbp */
 	EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
 	/* sub rsp, rounded_stack_depth */
-- 
2.23.0



* [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id()
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (11 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 12/18] bpf: Reserve space for BPF trampoline in BPF programs Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08  7:32   ` Song Liu
  2019-11-08 19:58   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types Alexei Starovoitov
                   ` (4 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

The btf_resolve_helper_id() caching logic is a bit racy, since under root the
verifier can verify several programs in parallel. Fix it with READ_ONCE/WRITE_ONCE.
Fix the type as well, since an error can also be recorded there.
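
The resulting caching idiom is essentially (simplified from the hunk below):

	ret = READ_ONCE(*btf_id);
	if (ret)
		return ret;	/* already resolved, possibly by another verifier */
	ret = __btf_resolve_helper_id(log, fn->func, arg);
	WRITE_ONCE(*btf_id, ret);

Concurrent verifiers may both take the slow path, but they resolve the same
helper argument and store the same value, so the racing writes are harmless.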

Fixes: a7658e1a4164 ("bpf: Check types of arguments passed into helpers")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h   |  5 +++--
 kernel/bpf/btf.c      | 26 +++++++++++++++++++++++++-
 kernel/bpf/verifier.c |  8 +++-----
 net/core/filter.c     |  2 +-
 4 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 9206b7e86dde..e177758ae439 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -247,7 +247,7 @@ struct bpf_func_proto {
 		};
 		enum bpf_arg_type arg_type[5];
 	};
-	u32 *btf_id; /* BTF ids of arguments */
+	int *btf_id; /* BTF ids of arguments */
 };
 
 /* bpf_context is intentionally undefined structure. Pointer to bpf_context is
@@ -874,7 +874,8 @@ int btf_struct_access(struct bpf_verifier_log *log,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype,
 		      u32 *next_btf_id);
-u32 btf_resolve_helper_id(struct bpf_verifier_log *log, void *, int);
+int btf_resolve_helper_id(struct bpf_verifier_log *log,
+			  const struct bpf_func_proto *fn, int);
 
 int btf_distill_func_proto(struct bpf_verifier_log *log,
 			   struct btf *btf,
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 9e1164e5b429..033d071eb59c 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3721,7 +3721,8 @@ int btf_struct_access(struct bpf_verifier_log *log,
 	return -EINVAL;
 }
 
-u32 btf_resolve_helper_id(struct bpf_verifier_log *log, void *fn, int arg)
+static int __btf_resolve_helper_id(struct bpf_verifier_log *log, void *fn,
+				   int arg)
 {
 	char fnname[KSYM_SYMBOL_LEN + 4] = "btf_";
 	const struct btf_param *args;
@@ -3789,6 +3790,29 @@ u32 btf_resolve_helper_id(struct bpf_verifier_log *log, void *fn, int arg)
 	return btf_id;
 }
 
+int btf_resolve_helper_id(struct bpf_verifier_log *log,
+			  const struct bpf_func_proto *fn, int arg)
+{
+	int *btf_id = &fn->btf_id[arg];
+	int ret;
+
+	if (fn->arg_type[arg] != ARG_PTR_TO_BTF_ID)
+		return -EINVAL;
+
+	ret = READ_ONCE(*btf_id);
+	if (ret)
+		return ret;
+	/* ok to race the search. The result is the same */
+	ret = __btf_resolve_helper_id(log, fn->func, arg);
+	if (!ret) {
+		/* Function argument cannot be type 'void' */
+		bpf_log(log, "BTF resolution bug\n");
+		return -EFAULT;
+	}
+	WRITE_ONCE(*btf_id, ret);
+	return ret;
+}
+
 static int __get_type_size(struct btf *btf, u32 btf_id,
 			   const struct btf_type **bad_type)
 {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 36542edd4936..90375c26ad45 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4147,11 +4147,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 	meta.func_id = func_id;
 	/* check args */
 	for (i = 0; i < 5; i++) {
-		if (fn->arg_type[i] == ARG_PTR_TO_BTF_ID) {
-			if (!fn->btf_id[i])
-				fn->btf_id[i] = btf_resolve_helper_id(&env->log, fn->func, i);
-			meta.btf_id = fn->btf_id[i];
-		}
+		err = btf_resolve_helper_id(&env->log, fn, i);
+		if (err > 0)
+			meta.btf_id = err;
 		err = check_func_arg(env, BPF_REG_1 + i, fn->arg_type[i], &meta);
 		if (err)
 			return err;
diff --git a/net/core/filter.c b/net/core/filter.c
index fc303abec8fa..f72face90659 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -3816,7 +3816,7 @@ static const struct bpf_func_proto bpf_skb_event_output_proto = {
 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
-static u32 bpf_skb_output_btf_ids[5];
+static int bpf_skb_output_btf_ids[5];
 const struct bpf_func_proto bpf_skb_output_proto = {
 	.func		= bpf_skb_event_output,
 	.gpl_only	= true,
-- 
2.23.0



* [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (12 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id() Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08 17:28   ` Song Liu
  2019-11-08 23:46   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs Alexei Starovoitov
                   ` (3 subsequent siblings)
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Make the verifier check that the BTF types of function arguments match the actual
types passed into the top-level BPF program and into BPF-to-BPF calls. If the types
match, such BPF programs and sub-programs will have full BPF trampoline support. If
the types mismatch, the trampoline has to be conservative: it has to save/restore
all 5 program arguments and assume 64-bit scalars. If a FENTRY/FEXIT program is
later attached to such a program, that FENTRY/FEXIT program will be able to follow
pointers only via bpf_probe_read_kernel().
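
An illustrative (not from this patch) example of the mismatch case: LLVM may
drop an unused argument from the generated code of a subprogram while BTF
still describes the original C prototype:

	static __attribute__((noinline))
	int subprog(struct __sk_buff *skb, int mark)
	{
		return mark * 2;	/* 'skb' is unused and may be optimized away */
	}

	SEC("classifier")
	int main_prog(struct __sk_buff *skb)
	{
		return subprog(skb, 42);
	}

If that happens, BTF claims a pointer argument where the verifier sees a
scalar, btf_check_func_arg_match() marks the subprogram's func_info as
unreliable, and the trampoline treats all its arguments as 64-bit scalars.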

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h          |   8 +++
 include/linux/bpf_verifier.h |   1 +
 kernel/bpf/btf.c             | 117 +++++++++++++++++++++++++++++++++++
 kernel/bpf/syscall.c         |   1 +
 kernel/bpf/verifier.c        |  18 +++++-
 5 files changed, 142 insertions(+), 3 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index e177758ae439..d1b3d600fb48 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -473,6 +473,10 @@ static inline int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
 static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
 #endif
 
+struct bpf_func_info_aux {
+	bool unreliable;
+};
+
 struct bpf_prog_aux {
 	atomic_t refcnt;
 	u32 used_map_cnt;
@@ -487,6 +491,7 @@ struct bpf_prog_aux {
 	bool verifier_zext; /* Zero extensions has been inserted by verifier. */
 	bool offload_requested;
 	bool attach_btf_trace; /* true if attaching to BTF-enabled raw tp */
+	bool func_proto_unreliable;
 	enum bpf_tramp_prog_type trampoline_prog_type;
 	struct bpf_trampoline *trampoline;
 	struct hlist_node tramp_hlist;
@@ -511,6 +516,7 @@ struct bpf_prog_aux {
 	struct bpf_prog_offload *offload;
 	struct btf *btf;
 	struct bpf_func_info *func_info;
+	struct bpf_func_info_aux *func_info_aux;
 	/* bpf_line_info loaded from userspace.  linfo->insn_off
 	 * has the xlated insn offset.
 	 * Both the main and sub prog share the same linfo.
@@ -883,6 +889,8 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
 			   const char *func_name,
 			   struct btf_func_model *m);
 
+int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog);
+
 #else /* !CONFIG_BPF_SYSCALL */
 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
 {
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 6e7284ea1468..cdd08bf0ec06 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -343,6 +343,7 @@ static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
 #define BPF_MAX_SUBPROGS 256
 
 struct bpf_subprog_info {
+	/* 'start' has to be the first field otherwise find_subprog() won't work */
 	u32 start; /* insn idx of function entry point */
 	u32 linfo_idx; /* The idx to the main_prog->aux->linfo */
 	u16 stack_depth; /* max. stack depth used by this function */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 033d071eb59c..0eb843cd9975 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3877,6 +3877,123 @@ int btf_distill_func_proto(struct bpf_verifier_log *log,
 	return 0;
 }
 
+static bool btf_check_ctx_ptr_name(struct bpf_verifier_log *log,
+				   struct btf *btf,
+				   const struct btf_type *t,
+				   enum bpf_prog_type type)
+{
+	const char *tname;
+
+	t = btf_type_by_id(btf, t->type);
+	while (btf_type_is_modifier(t))
+		t = btf_type_by_id(btf, t->type);
+	if (!btf_type_is_struct(t))
+		return false;
+	tname = btf_name_by_offset(btf, t->name_off);
+	if (!tname)
+		return false;
+	switch (type) {
+	case BPF_PROG_TYPE_SOCKET_FILTER:
+	case BPF_PROG_TYPE_SCHED_CLS:
+	case BPF_PROG_TYPE_SCHED_ACT:
+	case BPF_PROG_TYPE_CGROUP_SKB:
+	case BPF_PROG_TYPE_LWT_IN:
+	case BPF_PROG_TYPE_LWT_OUT:
+	case BPF_PROG_TYPE_LWT_XMIT:
+	case BPF_PROG_TYPE_SK_SKB:
+		return !strcmp(tname, "__sk_buff");
+	case BPF_PROG_TYPE_XDP:
+		return !strcmp(tname, "xdp_md");
+	case BPF_PROG_TYPE_CGROUP_SOCK:
+		return !strcmp(tname, "bpf_sock");
+	default:
+		/* generalize is_valid_access() next */
+		bpf_log(log, "Type %s is not supported yet\n", tname);
+		return false;
+	}
+}
+
+int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog)
+{
+	struct bpf_verifier_state *st = env->cur_state;
+	struct bpf_func_state *func = st->frame[st->curframe];
+	struct bpf_reg_state *reg = func->regs;
+	struct bpf_verifier_log *log = &env->log;
+	struct bpf_prog *prog = env->prog;
+	struct btf *btf = prog->aux->btf;
+	const struct btf_param *args;
+	const struct btf_type *t;
+	u32 i, nargs, btf_id;
+	const char *tname;
+
+	if (!prog->aux->func_info)
+		return 0;
+
+	btf_id = prog->aux->func_info[subprog].type_id;
+	if (!btf_id)
+		return 0;
+
+	if (prog->aux->func_info_aux[subprog].unreliable)
+		return 0;
+
+	t = btf_type_by_id(btf, btf_id);
+	if (!t || !btf_type_is_func(t)) {
+		bpf_log(log, "BTF of subprog %d doesn't point to KIND_FUNC\n",
+			subprog);
+		return -EINVAL;
+	}
+	tname = btf_name_by_offset(btf, t->name_off);
+
+	t = btf_type_by_id(btf, t->type);
+	if (!t || !btf_type_is_func_proto(t)) {
+		bpf_log(log, "Invalid type of func %s\n", tname);
+		return -EINVAL;
+	}
+	args = (const struct btf_param *)(t + 1);
+	nargs = btf_type_vlen(t);
+	if (nargs > 5) {
+		bpf_log(log, "Function %s has %d > 5 args\n", tname, nargs);
+		goto out;
+	}
+	/* check that BTF function arguments match actual types that the
+	 * verifier sees.
+	 */
+	for (i = 0; i < nargs; i++) {
+		t = btf_type_by_id(btf, args[i].type);
+		while (btf_type_is_modifier(t))
+			t = btf_type_by_id(btf, t->type);
+		if (btf_type_is_int(t) || btf_type_is_enum(t)) {
+			if (reg[i + 1].type == SCALAR_VALUE)
+				continue;
+			bpf_log(log, "R%d is not a scalar\n", i + 1);
+			goto out;
+		}
+		if (btf_type_is_ptr(t)) {
+			if (reg[i + 1].type == SCALAR_VALUE) {
+				bpf_log(log, "R%d is not a pointer\n", i + 1);
+				goto out;
+			}
+			/* bpf_skb_output() needs valid skb. Check that
+			 * networking program's BTF matches PTR_TO_CTX.
+			 */
+			if (reg[i + 1].type == PTR_TO_CTX &&
+			    !btf_check_ctx_ptr_name(log, btf, t, prog->type))
+				goto out;
+			/* All other pointers are ok */
+			continue;
+		}
+		bpf_log(log, "Unrecognized argument type %s\n",
+			btf_kind_str[BTF_INFO_KIND(t->info)]);
+		goto out;
+	}
+	return 0;
+out:
+	bpf_log(log,
+		"Type info disagrees with actual arguments due to compiler optimizations\n");
+	prog->aux->func_info_aux[subprog].unreliable = true;
+	return 0;
+}
+
 void btf_type_seq_show(const struct btf *btf, u32 type_id, void *obj,
 		       struct seq_file *m)
 {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index e2e37bea86bc..9759b78fe571 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1328,6 +1328,7 @@ static void __bpf_prog_put_rcu(struct rcu_head *rcu)
 	struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux, rcu);
 
 	kvfree(aux->func_info);
+	kfree(aux->func_info_aux);
 	free_used_maps(aux);
 	bpf_prog_uncharge_memlock(aux->prog);
 	security_bpf_prog_free(aux);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 90375c26ad45..cd9a9395c4b5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3970,6 +3970,9 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	/* only increment it after check_reg_arg() finished */
 	state->curframe++;
 
+	if (btf_check_func_arg_match(env, subprog))
+		return -EINVAL;
+
 	/* and go analyze first insn of the callee */
 	*insn_idx = target_insn;
 
@@ -6564,6 +6567,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
 	u32 i, nfuncs, urec_size, min_size;
 	u32 krec_size = sizeof(struct bpf_func_info);
 	struct bpf_func_info *krecord;
+	struct bpf_func_info_aux *info_aux = NULL;
 	const struct btf_type *type;
 	struct bpf_prog *prog;
 	const struct btf *btf;
@@ -6597,6 +6601,9 @@ static int check_btf_func(struct bpf_verifier_env *env,
 	krecord = kvcalloc(nfuncs, krec_size, GFP_KERNEL | __GFP_NOWARN);
 	if (!krecord)
 		return -ENOMEM;
+	info_aux = kcalloc(nfuncs, sizeof(*info_aux), GFP_KERNEL | __GFP_NOWARN);
+	if (!info_aux)
+		goto err_free;
 
 	for (i = 0; i < nfuncs; i++) {
 		ret = bpf_check_uarg_tail_zero(urecord, krec_size, urec_size);
@@ -6648,29 +6655,31 @@ static int check_btf_func(struct bpf_verifier_env *env,
 			ret = -EINVAL;
 			goto err_free;
 		}
-
 		prev_offset = krecord[i].insn_off;
 		urecord += urec_size;
 	}
 
 	prog->aux->func_info = krecord;
 	prog->aux->func_info_cnt = nfuncs;
+	prog->aux->func_info_aux = info_aux;
 	return 0;
 
 err_free:
 	kvfree(krecord);
+	kfree(info_aux);
 	return ret;
 }
 
 static void adjust_btf_func(struct bpf_verifier_env *env)
 {
+	struct bpf_prog_aux *aux = env->prog->aux;
 	int i;
 
-	if (!env->prog->aux->func_info)
+	if (!aux->func_info)
 		return;
 
 	for (i = 0; i < env->subprog_cnt; i++)
-		env->prog->aux->func_info[i].insn_off = env->subprog_info[i].start;
+		aux->func_info[i].insn_off = env->subprog_info[i].start;
 }
 
 #define MIN_BPF_LINEINFO_SIZE	(offsetof(struct bpf_line_info, line_col) + \
@@ -7651,6 +7660,9 @@ static int do_check(struct bpf_verifier_env *env)
 			0 /* frameno */,
 			0 /* subprogno, zero == main subprog */);
 
+	if (btf_check_func_arg_match(env, 0))
+		return -EINVAL;
+
 	for (;;) {
 		struct bpf_insn *insn;
 		u8 class;
-- 
2.23.0



* [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (13 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08 18:49   ` Song Liu
                     ` (2 more replies)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs " Alexei Starovoitov
                   ` (2 subsequent siblings)
  17 siblings, 3 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type,
including their subprograms. This feature allows snooping on the input and output
packets of XDP and TC programs, including their return values. In order to do that
the verifier needs to track types not only of vmlinux, but types of other BPF
programs as well. The verifier also needs to translate uapi/linux/bpf.h types
used by networking programs into the kernel-internal BTF types used by FENTRY/FEXIT
BPF programs. In some cases LLVM optimizations can remove arguments from BPF
subprograms without adjusting the BTF info that the LLVM backend emitted. When the
BTF info disagrees with the actual types that the verifier sees, the BPF trampoline
has to fall back to being conservative and treat all arguments as u64. The
FENTRY/FEXIT program can still attach to such subprograms, but it won't be able to
recognize pointer types like 'struct sk_buff *' and won't be able to pass them to
bpf_skb_output() for dumping to user space.

The BPF_PROG_LOAD command is extended with an attach_prog_fd field. When it's set
to zero the attach_btf_id is one of the vmlinux BTF type ids. When attach_prog_fd
points to a previously loaded BPF program the attach_btf_id is the BTF type id of the
main function or one of its subprograms.
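
At the syscall level the new flow is roughly the following pseudocode
(libbpf support is added in the next patch; target_prog_fd and func_btf_id
are placeholders):

	union bpf_attr attr = {};

	attr.prog_type = BPF_PROG_TYPE_TRACING;
	attr.expected_attach_type = BPF_TRACE_FEXIT;
	attr.attach_prog_fd = target_prog_fd;	/* already loaded XDP/TC prog */
	attr.attach_btf_id = func_btf_id;	/* KIND_FUNC id in that prog's BTF */
	/* insns, insn_cnt, license, etc. as usual */
	prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));

	/* attaching reuses the raw tracepoint path with a NULL name */
	link_fd = bpf_raw_tracepoint_open(NULL, prog_fd);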

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c |  3 +-
 include/linux/bpf.h         |  2 +
 include/linux/btf.h         |  1 +
 include/uapi/linux/bpf.h    |  1 +
 kernel/bpf/btf.c            | 58 +++++++++++++++++++---
 kernel/bpf/core.c           |  2 +
 kernel/bpf/syscall.c        | 19 +++++--
 kernel/bpf/verifier.c       | 98 +++++++++++++++++++++++++++++--------
 kernel/trace/bpf_trace.c    |  2 -
 9 files changed, 151 insertions(+), 35 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 3123d4ddc6ca..79157d886a3e 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -505,7 +505,8 @@ int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
 	u8 *prog;
 	int ret;
 
-	if (!is_kernel_text((long)ip))
+	if (!is_kernel_text((long)ip) &&
+	    !is_bpf_text_address((long)ip))
 		/* BPF trampoline in modules is not supported */
 		return -EINVAL;
 
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d1b3d600fb48..6a80af092048 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -488,6 +488,7 @@ struct bpf_prog_aux {
 	u32 func_cnt; /* used by non-func prog as the number of func progs */
 	u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */
 	u32 attach_btf_id; /* in-kernel BTF type id to attach to */
+	struct bpf_prog *linked_prog;
 	bool verifier_zext; /* Zero extensions has been inserted by verifier. */
 	bool offload_requested;
 	bool attach_btf_trace; /* true if attaching to BTF-enabled raw tp */
@@ -1174,6 +1175,7 @@ extern const struct bpf_func_proto bpf_get_local_storage_proto;
 extern const struct bpf_func_proto bpf_strtol_proto;
 extern const struct bpf_func_proto bpf_strtoul_proto;
 extern const struct bpf_func_proto bpf_tcp_sock_proto;
+extern const struct bpf_func_proto bpf_skb_output_proto;
 
 /* Shared helpers among cBPF and eBPF. */
 void bpf_user_rnd_init_once(void);
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 9dee00859c5f..79d4abc2556a 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -88,6 +88,7 @@ static inline bool btf_type_is_func_proto(const struct btf_type *t)
 const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
 const char *btf_name_by_offset(const struct btf *btf, u32 offset);
 struct btf *btf_parse_vmlinux(void);
+struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
 #else
 static inline const struct btf_type *btf_type_by_id(const struct btf *btf,
 						    u32 type_id)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 69c200e6e696..4842a134b202 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -425,6 +425,7 @@ union bpf_attr {
 		__aligned_u64	line_info;	/* line info */
 		__u32		line_info_cnt;	/* number of bpf_line_info records */
 		__u32		attach_btf_id;	/* in-kernel BTF type id to attach to */
+		__u32		attach_prog_fd; /* 0 to attach to vmlinux */
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 0eb843cd9975..a2dea1bb0a84 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3493,12 +3493,45 @@ struct btf *btf_parse_vmlinux(void)
 }
 
 extern struct btf *btf_vmlinux;
+struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog)
+{
+	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
+
+	if (tgt_prog) {
+		return tgt_prog->aux->btf;
+	} else {
+		return btf_vmlinux;
+	}
+}
+
+static bool btf_translate_to_vmlinux(struct bpf_verifier_log *log,
+				     struct btf *btf,
+				     const struct btf_type *t,
+				     struct bpf_insn_access_aux *info)
+{
+	const char *tname = __btf_name_by_offset(btf, t->name_off);
+	int btf_id;
+
+	if (!tname) {
+		bpf_log(log, "Program's type doesn't have a name\n");
+		return false;
+	}
+	if (strcmp(tname, "__sk_buff") == 0) {
+		btf_id = btf_resolve_helper_id(log, &bpf_skb_output_proto, 0);
+		if (btf_id < 0)
+			return false;
+		info->btf_id = btf_id;
+		return true;
+	}
+	return false;
+}
 
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info)
 {
 	const struct btf_type *t = prog->aux->attach_func_proto;
+	struct btf *btf = bpf_prog_get_target_btf(prog);
 	const char *tname = prog->aux->attach_func_name;
 	struct bpf_verifier_log *log = info->log;
 	const struct btf_param *args;
@@ -3511,7 +3544,8 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	}
 	arg = off / 8;
 	args = (const struct btf_param *)(t + 1);
-	nr_args = btf_type_vlen(t);
+	/* if (t == NULL) Fall back to default BPF prog with 5 u64 arguments */
+	nr_args = t ? btf_type_vlen(t) : 5;
 	if (prog->aux->attach_btf_trace) {
 		/* skip first 'void *__data' argument in btf_trace_##name typedef */
 		args++;
@@ -3520,18 +3554,24 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 
 	if (prog->expected_attach_type == BPF_TRACE_FEXIT &&
 	    arg == nr_args) {
+		if (!t)
+			/* Default prog with 5 args. 6th arg is retval. */
+			return true;
 		/* function return type */
-		t = btf_type_by_id(btf_vmlinux, t->type);
+		t = btf_type_by_id(btf, t->type);
 	} else if (arg >= nr_args) {
 		bpf_log(log, "func '%s' doesn't have %d-th argument\n",
 			tname, arg + 1);
 		return false;
 	} else {
-		t = btf_type_by_id(btf_vmlinux, args[arg].type);
+		if (!t)
+			/* Default prog with 5 args */
+			return true;
+		t = btf_type_by_id(btf, args[arg].type);
 	}
 	/* skip modifiers */
 	while (btf_type_is_modifier(t))
-		t = btf_type_by_id(btf_vmlinux, t->type);
+		t = btf_type_by_id(btf, t->type);
 	if (btf_type_is_int(t))
 		/* accessing a scalar */
 		return true;
@@ -3539,7 +3579,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		bpf_log(log,
 			"func '%s' arg%d '%s' has type %s. Only pointer access is allowed\n",
 			tname, arg,
-			__btf_name_by_offset(btf_vmlinux, t->name_off),
+			__btf_name_by_offset(btf, t->name_off),
 			btf_kind_str[BTF_INFO_KIND(t->info)]);
 		return false;
 	}
@@ -3554,10 +3594,10 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	info->reg_type = PTR_TO_BTF_ID;
 	info->btf_id = t->type;
 
-	t = btf_type_by_id(btf_vmlinux, t->type);
+	t = btf_type_by_id(btf, t->type);
 	/* skip modifiers */
 	while (btf_type_is_modifier(t))
-		t = btf_type_by_id(btf_vmlinux, t->type);
+		t = btf_type_by_id(btf, t->type);
 	if (!btf_type_is_struct(t)) {
 		bpf_log(log,
 			"func '%s' arg%d type %s is not a struct\n",
@@ -3566,7 +3606,9 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	}
 	bpf_log(log, "func '%s' arg%d has btf_id %d type %s '%s'\n",
 		tname, arg, info->btf_id, btf_kind_str[BTF_INFO_KIND(t->info)],
-		__btf_name_by_offset(btf_vmlinux, t->name_off));
+		__btf_name_by_offset(btf, t->name_off));
+	if (btf != btf_vmlinux)
+		return btf_translate_to_vmlinux(log, btf, t, info);
 	return true;
 }
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 75ad0f907eef..2e319e4a8aae 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2027,6 +2027,8 @@ void bpf_prog_free(struct bpf_prog *fp)
 {
 	struct bpf_prog_aux *aux = fp->aux;
 
+	if (aux->linked_prog)
+		bpf_prog_put(aux->linked_prog);
 	INIT_WORK(&aux->work, bpf_prog_free_deferred);
 	schedule_work(&aux->work);
 }
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 9759b78fe571..684c4fea5e30 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1577,7 +1577,7 @@ static void bpf_prog_load_fixup_attach_type(union bpf_attr *attr)
 static int
 bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 			   enum bpf_attach_type expected_attach_type,
-			   u32 btf_id)
+			   u32 btf_id, u32 prog_fd)
 {
 	switch (prog_type) {
 	case BPF_PROG_TYPE_TRACING:
@@ -1585,7 +1585,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 			return -EINVAL;
 		break;
 	default:
-		if (btf_id)
+		if (btf_id || prog_fd)
 			return -EINVAL;
 		break;
 	}
@@ -1636,7 +1636,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
 }
 
 /* last field in 'union bpf_attr' used by this command */
-#define	BPF_PROG_LOAD_LAST_FIELD attach_btf_id
+#define	BPF_PROG_LOAD_LAST_FIELD attach_prog_fd
 
 static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 {
@@ -1679,7 +1679,8 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 
 	bpf_prog_load_fixup_attach_type(attr);
 	if (bpf_prog_load_check_attach(type, attr->expected_attach_type,
-				       attr->attach_btf_id))
+				       attr->attach_btf_id,
+				       attr->attach_prog_fd))
 		return -EINVAL;
 
 	/* plain bpf_prog allocation */
@@ -1689,6 +1690,16 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
 
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->aux->attach_btf_id = attr->attach_btf_id;
+	if (attr->attach_prog_fd) {
+		struct bpf_prog *tgt_prog;
+
+		tgt_prog = bpf_prog_get(attr->attach_prog_fd);
+		if (IS_ERR(tgt_prog)) {
+			err = PTR_ERR(tgt_prog);
+			goto free_prog_nouncharge;
+		}
+		prog->aux->linked_prog = tgt_prog;
+	}
 
 	prog->aux->offload_requested = !!attr->prog_ifindex;
 
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cd9a9395c4b5..f385c4043594 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9390,13 +9390,17 @@ static void print_verification_stats(struct bpf_verifier_env *env)
 static int check_attach_btf_id(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
+	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
 	u32 btf_id = prog->aux->attach_btf_id;
 	const char prefix[] = "btf_trace_";
 	struct bpf_trampoline *tr;
 	const struct btf_type *t;
+	int ret, subprog = -1, i;
+	bool conservative = true;
 	const char *tname;
+	struct btf *btf;
 	long addr;
-	int ret;
+	u64 key;
 
 	if (prog->type != BPF_PROG_TYPE_TRACING)
 		return 0;
@@ -9405,19 +9409,42 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 		verbose(env, "Tracing programs must provide btf_id\n");
 		return -EINVAL;
 	}
-	t = btf_type_by_id(btf_vmlinux, btf_id);
+	btf = bpf_prog_get_target_btf(prog);
+	t = btf_type_by_id(btf, btf_id);
 	if (!t) {
 		verbose(env, "attach_btf_id %u is invalid\n", btf_id);
 		return -EINVAL;
 	}
-	tname = btf_name_by_offset(btf_vmlinux, t->name_off);
+	tname = btf_name_by_offset(btf, t->name_off);
 	if (!tname) {
 		verbose(env, "attach_btf_id %u doesn't have a name\n", btf_id);
 		return -EINVAL;
 	}
+	if (tgt_prog) {
+		struct bpf_prog_aux *aux = tgt_prog->aux;
+
+		for (i = 0; i < aux->func_info_cnt; i++)
+			if (aux->func_info[i].type_id == btf_id) {
+				subprog = i;
+				break;
+			}
+		if (subprog == -1) {
+			verbose(env, "Subprog %s doesn't exist\n", tname);
+			return -EINVAL;
+		}
+		conservative = aux->func_info_aux[subprog].unreliable;
+		key = ((u64)aux->id) << 32 | btf_id;
+	} else {
+		key = btf_id;
+	}
 
 	switch (prog->expected_attach_type) {
 	case BPF_TRACE_RAW_TP:
+		if (tgt_prog) {
+			verbose(env,
+				"Only FENTRY/FEXIT progs are attachable to another BPF prog\n");
+			return -EINVAL;
+		}
 		if (!btf_type_is_typedef(t)) {
 			verbose(env, "attach_btf_id %u is not a typedef\n",
 				btf_id);
@@ -9429,11 +9456,11 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 			return -EINVAL;
 		}
 		tname += sizeof(prefix) - 1;
-		t = btf_type_by_id(btf_vmlinux, t->type);
+		t = btf_type_by_id(btf, t->type);
 		if (!btf_type_is_ptr(t))
 			/* should never happen in valid vmlinux build */
 			return -EINVAL;
-		t = btf_type_by_id(btf_vmlinux, t->type);
+		t = btf_type_by_id(btf, t->type);
 		if (!btf_type_is_func_proto(t))
 			/* should never happen in valid vmlinux build */
 			return -EINVAL;
@@ -9452,35 +9479,66 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
 				btf_id);
 			return -EINVAL;
 		}
-		t = btf_type_by_id(btf_vmlinux, t->type);
+		t = btf_type_by_id(btf, t->type);
 		if (!btf_type_is_func_proto(t))
 			return -EINVAL;
-		tr = bpf_trampoline_lookup(btf_id);
+		tr = bpf_trampoline_lookup(key);
 		if (!tr)
 			return -ENOMEM;
 		prog->aux->attach_func_name = tname;
+		/* t is either vmlinux type or another program's type */
 		prog->aux->attach_func_proto = t;
 		if (tr->func.addr) {
 			prog->aux->trampoline = tr;
 			return 0;
 		}
-		ret = btf_distill_func_proto(&env->log, btf_vmlinux, t,
-					     tname, &tr->func.model);
-		if (ret < 0) {
-			bpf_trampoline_put(tr);
-			return ret;
-		}
-		addr = kallsyms_lookup_name(tname);
-		if (!addr) {
-			verbose(env,
-				"The address of function %s cannot be found\n",
-				tname);
-			bpf_trampoline_put(tr);
-			return -ENOENT;
+		if (tgt_prog && conservative) {
+			struct btf_func_model *m = &tr->func.model;
+
+			/* BTF function prototype doesn't match the verifier types.
+			 * Fall back to 5 u64 args.
+			 */
+			for (i = 0; i < 5; i++)
+				m->arg_size[i] = 8;
+			m->ret_size = 8;
+			m->nr_args = 5;
+			prog->aux->attach_func_proto = NULL;
+		} else {
+			ret = btf_distill_func_proto(&env->log, btf, t,
+						     tname, &tr->func.model);
+			if (ret < 0)
+				goto out;
+		}
+		if (tgt_prog) {
+			if (!tgt_prog->jited) {
+				/* for now */
+				verbose(env, "Can trace only JITed BPF progs\n");
+				ret = -EINVAL;
+				goto out;
+			}
+			if (tgt_prog->type == BPF_PROG_TYPE_TRACING) {
+				/* prevent cycles */
+				verbose(env, "Cannot recursively attach\n");
+				ret = -EINVAL;
+				goto out;
+			}
+			addr = (long) tgt_prog->aux->func[subprog]->bpf_func;
+		} else {
+			addr = kallsyms_lookup_name(tname);
+			if (!addr) {
+				verbose(env,
+					"The address of function %s cannot be found\n",
+					tname);
+				ret = -ENOENT;
+				goto out;
+			}
 		}
 		tr->func.addr = (void *)addr;
 		prog->aux->trampoline = tr;
 		return 0;
+out:
+		bpf_trampoline_put(tr);
+		return ret;
 	default:
 		return -EINVAL;
 	}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index ffc91d4935ac..0d13f24a71b8 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1082,8 +1082,6 @@ static const struct bpf_func_proto bpf_perf_event_output_proto_raw_tp = {
 	.arg5_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
-extern const struct bpf_func_proto bpf_skb_output_proto;
-
 BPF_CALL_3(bpf_get_stackid_raw_tp, struct bpf_raw_tracepoint_args *, args,
 	   struct bpf_map *, map, u64, flags)
 {
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs to other BPF programs
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (14 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08 18:57   ` Song Liu
  2019-11-10 16:56   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test Alexei Starovoitov
  2019-11-08  6:40 ` [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog Alexei Starovoitov
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Extend the libbpf API to pass attach_prog_fd into bpf_object__open.
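
A rough sketch of the intended usage (mirroring the selftest added later
in this series; tgt_prog_fd is assumed to be the fd of an already loaded
BPF program):

	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
			    .attach_prog_fd = tgt_prog_fd,
			   );
	struct bpf_object *obj = bpf_object__open_file("fexit_bpf2bpf.o", &opts);

	/* after bpf_object__load(obj), each "fexit/<func>" program can be
	 * attached with bpf_program__attach_trace()
	 */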

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/include/uapi/linux/bpf.h |  1 +
 tools/lib/bpf/bpf.c            |  9 +++--
 tools/lib/bpf/bpf.h            |  5 ++-
 tools/lib/bpf/libbpf.c         | 65 +++++++++++++++++++++++++++++-----
 tools/lib/bpf/libbpf.h         |  3 +-
 5 files changed, 69 insertions(+), 14 deletions(-)

diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 69c200e6e696..4842a134b202 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -425,6 +425,7 @@ union bpf_attr {
 		__aligned_u64	line_info;	/* line info */
 		__u32		line_info_cnt;	/* number of bpf_line_info records */
 		__u32		attach_btf_id;	/* in-kernel BTF type id to attach to */
+		__u32		attach_prog_fd; /* 0 to attach to vmlinux */
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index b3e3e99a0f28..f805787c8efd 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -228,10 +228,14 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 	memset(&attr, 0, sizeof(attr));
 	attr.prog_type = load_attr->prog_type;
 	attr.expected_attach_type = load_attr->expected_attach_type;
-	if (attr.prog_type == BPF_PROG_TYPE_TRACING)
+	if (attr.prog_type == BPF_PROG_TYPE_TRACING) {
 		attr.attach_btf_id = load_attr->attach_btf_id;
-	else
+		if (load_attr->attach_prog_fd)
+			attr.attach_prog_fd = load_attr->attach_prog_fd;
+	} else {
 		attr.prog_ifindex = load_attr->prog_ifindex;
+		attr.kern_version = load_attr->kern_version;
+	}
 	attr.insn_cnt = (__u32)load_attr->insns_cnt;
 	attr.insns = ptr_to_u64(load_attr->insns);
 	attr.license = ptr_to_u64(load_attr->license);
@@ -245,7 +249,6 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
 		attr.log_size = 0;
 	}
 
-	attr.kern_version = load_attr->kern_version;
 	attr.prog_btf_fd = load_attr->prog_btf_fd;
 	attr.func_info_rec_size = load_attr->func_info_rec_size;
 	attr.func_info_cnt = load_attr->func_info_cnt;
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 1c53bc5b4b3c..3c791fa8e68e 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -77,7 +77,10 @@ struct bpf_load_program_attr {
 	const struct bpf_insn *insns;
 	size_t insns_cnt;
 	const char *license;
-	__u32 kern_version;
+	union {
+		__u32 kern_version;
+		__u32 attach_prog_fd;
+	};
 	union {
 		__u32 prog_ifindex;
 		__u32 attach_btf_id;
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index e9c9961bbb9f..93188c35d76a 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -189,6 +189,7 @@ struct bpf_program {
 
 	enum bpf_attach_type expected_attach_type;
 	__u32 attach_btf_id;
+	__u32 attach_prog_fd;
 	void *func_info;
 	__u32 func_info_rec_size;
 	__u32 func_info_cnt;
@@ -3681,8 +3682,13 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.insns = insns;
 	load_attr.insns_cnt = insns_cnt;
 	load_attr.license = license;
-	load_attr.kern_version = kern_version;
-	load_attr.prog_ifindex = prog->prog_ifindex;
+	if (prog->type == BPF_PROG_TYPE_TRACING) {
+		load_attr.attach_prog_fd = prog->attach_prog_fd;
+		load_attr.attach_btf_id = prog->attach_btf_id;
+	} else {
+		load_attr.kern_version = kern_version;
+		load_attr.prog_ifindex = prog->prog_ifindex;
+	}
 	/* if .BTF.ext was loaded, kernel supports associated BTF for prog */
 	if (prog->obj->btf_ext)
 		btf_fd = bpf_object__btf_fd(prog->obj);
@@ -3697,7 +3703,6 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.line_info_cnt = prog->line_info_cnt;
 	load_attr.log_level = prog->log_level;
 	load_attr.prog_flags = prog->prog_flags;
-	load_attr.attach_btf_id = prog->attach_btf_id;
 
 retry_load:
 	log_buf = malloc(log_buf_size);
@@ -3860,8 +3865,8 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
 }
 
 static int libbpf_attach_btf_id_by_name(const char *name,
-					enum bpf_attach_type attach_type);
-
+					enum bpf_attach_type attach_type,
+					__u32 attach_prog_fd);
 static struct bpf_object *
 __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 		   struct bpf_object_open_opts *opts)
@@ -3872,6 +3877,7 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 	const char *obj_name;
 	char tmp_name[64];
 	bool relaxed_maps;
+	__u32 attach_prog_fd;
 	int err;
 
 	if (elf_version(EV_CURRENT) == EV_NONE) {
@@ -3902,6 +3908,7 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 	obj->relaxed_core_relocs = OPTS_GET(opts, relaxed_core_relocs, false);
 	relaxed_maps = OPTS_GET(opts, relaxed_maps, false);
 	pin_root_path = OPTS_GET(opts, pin_root_path, NULL);
+	attach_prog_fd = OPTS_GET(opts, attach_prog_fd, 0);
 
 	CHECK_ERR(bpf_object__elf_init(obj), err, out);
 	CHECK_ERR(bpf_object__check_endianness(obj), err, out);
@@ -3927,10 +3934,12 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
 		bpf_program__set_expected_attach_type(prog, attach_type);
 		if (prog_type == BPF_PROG_TYPE_TRACING) {
 			err = libbpf_attach_btf_id_by_name(prog->section_name,
-							   attach_type);
+							   attach_type,
+							   attach_prog_fd);
 			if (err <= 0)
 				goto out;
 			prog->attach_btf_id = err;
+			prog->attach_prog_fd = attach_prog_fd;
 		}
 	}
 
@@ -5079,8 +5088,42 @@ int libbpf_find_vmlinux_btf_id(const char *name,
 	return err;
 }
 
+static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
+{
+	struct bpf_prog_info_linear *info_linear;
+	struct bpf_prog_info *info;
+	struct btf *btf = NULL;
+	int err = -EINVAL;
+
+	info_linear = bpf_program__get_prog_info_linear(attach_prog_fd, 0);
+	if (IS_ERR_OR_NULL(info_linear)) {
+		pr_warn("failed get_prog_info_linear for FD %d\n",
+			attach_prog_fd);
+		return -EINVAL;
+	}
+	info = &info_linear->info;
+	if (!info->btf_id) {
+		pr_warn("The target program doesn't have BTF\n");
+		goto out;
+	}
+	if (btf__get_from_id(info->btf_id, &btf)) {
+		pr_warn("Failed to get BTF of the program\n");
+		goto out;
+	}
+	err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
+	btf__free(btf);
+	if (err <= 0) {
+		pr_warn("%s is not found in prog's BTF\n", name);
+		goto out;
+	}
+out:
+	free(info_linear);
+	return err;
+}
+
 static int libbpf_attach_btf_id_by_name(const char *name,
-					enum bpf_attach_type attach_type)
+					enum bpf_attach_type attach_type,
+					__u32 attach_prog_fd)
 {
 	int i, err;
 
@@ -5092,8 +5135,12 @@ static int libbpf_attach_btf_id_by_name(const char *name,
 			continue;
 		if (strncmp(name, section_names[i].sec, section_names[i].len))
 			continue;
-		err = libbpf_find_vmlinux_btf_id(name + section_names[i].len,
-						 attach_type);
+		if (attach_prog_fd)
+			err = libbpf_find_prog_btf_id(name + section_names[i].len,
+						      attach_prog_fd);
+		else
+			err = libbpf_find_vmlinux_btf_id(name + section_names[i].len,
+							 attach_type);
 		if (err <= 0)
 			pr_warn("%s is not found in vmlinux BTF\n", name);
 		return err;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 0fe47807916d..d6b23d81c975 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -108,8 +108,9 @@ struct bpf_object_open_opts {
 	 * auto-pinned to that path on load; defaults to "/sys/fs/bpf".
 	 */
 	const char *pin_root_path;
+	__u32 attach_prog_fd;
 };
-#define bpf_object_open_opts__last_field pin_root_path
+#define bpf_object_open_opts__last_field attach_prog_fd
 
 LIBBPF_API struct bpf_object *bpf_object__open(const char *path);
 LIBBPF_API struct bpf_object *
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (15 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs " Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08 19:03   ` Song Liu
  2019-11-10 16:58   ` Andrii Nakryiko
  2019-11-08  6:40 ` [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog Alexei Starovoitov
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

The test_pkt_access.o object is used by multiple tests. Fix its section name
so that the program type can be automatically detected by libbpf, and make it
call other subprograms with an skb argument.
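
With the new section name a test can load this object without hard-coding
the program type, roughly as the fexit_bpf2bpf test added later in this
series does:

	err = bpf_prog_load("./test_pkt_access.o", BPF_PROG_TYPE_UNSPEC,
			    &pkt_obj, &pkt_fd);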

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 .../selftests/bpf/progs/test_pkt_access.c     | 38 ++++++++++++++++++-
 1 file changed, 36 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/test_pkt_access.c b/tools/testing/selftests/bpf/progs/test_pkt_access.c
index 7cf42d14103f..3a7b4b607ed3 100644
--- a/tools/testing/selftests/bpf/progs/test_pkt_access.c
+++ b/tools/testing/selftests/bpf/progs/test_pkt_access.c
@@ -17,8 +17,38 @@
 #define barrier() __asm__ __volatile__("": : :"memory")
 int _version SEC("version") = 1;
 
-SEC("test1")
-int process(struct __sk_buff *skb)
+/* llvm will optimize both subprograms into exactly the same BPF assembly
+ *
+ * Disassembly of section .text:
+ *
+ * 0000000000000000 test_pkt_access_subprog1:
+ * ; 	return skb->len * 2;
+ *        0:	61 10 00 00 00 00 00 00	r0 = *(u32 *)(r1 + 0)
+ *        1:	64 00 00 00 01 00 00 00	w0 <<= 1
+ *        2:	95 00 00 00 00 00 00 00	exit
+ *
+ * 0000000000000018 test_pkt_access_subprog2:
+ * ; 	return skb->len * val;
+ *        3:	61 10 00 00 00 00 00 00	r0 = *(u32 *)(r1 + 0)
+ *        4:	64 00 00 00 01 00 00 00	w0 <<= 1
+ *        5:	95 00 00 00 00 00 00 00	exit
+ *
+ * Which makes it an interesting test for BTF-enabled verifier.
+ */
+static __attribute__ ((noinline))
+int test_pkt_access_subprog1(volatile struct __sk_buff *skb)
+{
+	return skb->len * 2;
+}
+
+static __attribute__ ((noinline))
+int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
+{
+	return skb->len * val;
+}
+
+SEC("classifier/test_pkt_access")
+int test_pkt_access(struct __sk_buff *skb)
 {
 	void *data_end = (void *)(long)skb->data_end;
 	void *data = (void *)(long)skb->data;
@@ -48,6 +78,10 @@ int process(struct __sk_buff *skb)
 		tcp = (struct tcphdr *)((void *)(ip6h) + ihl_len);
 	}
 
+	if (test_pkt_access_subprog1(skb) != skb->len * 2)
+		return TC_ACT_SHOT;
+	if (test_pkt_access_subprog2(2, skb) != skb->len * 2)
+		return TC_ACT_SHOT;
 	if (tcp) {
 		if (((void *)(tcp) + 20) > data_end || proto != 6)
 			return TC_ACT_SHOT;
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog
  2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
                   ` (16 preceding siblings ...)
  2019-11-08  6:40 ` [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test Alexei Starovoitov
@ 2019-11-08  6:40 ` Alexei Starovoitov
  2019-11-08 19:13   ` Song Liu
  2019-11-10 17:04   ` Andrii Nakryiko
  17 siblings, 2 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08  6:40 UTC (permalink / raw)
  To: davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Add a test that attaches one FEXIT program to the main sched_cls networking
program and two other FEXIT programs to its subprograms. All three tracing
programs access the return values and skb->len of the networking program and
its subprograms.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 .../selftests/bpf/prog_tests/fexit_bpf2bpf.c  | 76 ++++++++++++++++
 .../selftests/bpf/progs/fexit_bpf2bpf.c       | 91 +++++++++++++++++++
 2 files changed, 167 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
 create mode 100644 tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c

diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c b/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
new file mode 100644
index 000000000000..15c7378362dd
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <test_progs.h>
+
+#define PROG_CNT 3
+
+void test_fexit_bpf2bpf(void)
+{
+	const char *prog_name[PROG_CNT] = {
+		"fexit/test_pkt_access",
+		"fexit/test_pkt_access_subprog1",
+		"fexit/test_pkt_access_subprog2",
+	};
+	struct bpf_object *obj = NULL, *pkt_obj;
+	int err, pkt_fd, i;
+	struct bpf_link *link[PROG_CNT] = {};
+	struct bpf_program *prog[PROG_CNT];
+	__u32 duration, retval;
+	struct bpf_map *data_map;
+	const int zero = 0;
+	u64 result[PROG_CNT];
+
+	err = bpf_prog_load("./test_pkt_access.o", BPF_PROG_TYPE_UNSPEC,
+			    &pkt_obj, &pkt_fd);
+	if (CHECK(err, "prog_load sched cls", "err %d errno %d\n", err, errno))
+		return;
+	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
+			    .attach_prog_fd = pkt_fd,
+			   );
+
+	obj = bpf_object__open_file("./fexit_bpf2bpf.o", &opts);
+	if (CHECK(IS_ERR_OR_NULL(obj), "obj_open",
+		  "failed to open fexit_bpf2bpf: %ld\n",
+		  PTR_ERR(obj)))
+		goto close_prog;
+
+	err = bpf_object__load(obj);
+	if (CHECK(err, "obj_load", "err %d\n", err))
+		goto close_prog;
+
+	for (i = 0; i < PROG_CNT; i++) {
+		prog[i] = bpf_object__find_program_by_title(obj, prog_name[i]);
+		if (CHECK(!prog[i], "find_prog", "prog %s not found\n", prog_name[i]))
+			goto close_prog;
+		link[i] = bpf_program__attach_trace(prog[i]);
+		if (CHECK(IS_ERR(link[i]), "attach_trace", "failed to link\n"))
+			goto close_prog;
+	}
+	data_map = bpf_object__find_map_by_name(obj, "fexit_bp.bss");
+	if (CHECK(!data_map, "find_data_map", "data map not found\n"))
+		goto close_prog;
+
+	err = bpf_prog_test_run(pkt_fd, 1, &pkt_v6, sizeof(pkt_v6),
+				NULL, NULL, &retval, &duration);
+	CHECK(err || retval, "ipv6",
+	      "err %d errno %d retval %d duration %d\n",
+	      err, errno, retval, duration);
+
+	err = bpf_map_lookup_elem(bpf_map__fd(data_map), &zero, &result);
+	if (CHECK(err, "get_result",
+		  "failed to get output data: %d\n", err))
+		goto close_prog;
+
+	for (i = 0; i < PROG_CNT; i++)
+		if (CHECK(result[i] != 1, "result", "fexit_bpf2bpf failed err %ld\n",
+			  result[i]))
+			goto close_prog;
+
+close_prog:
+	for (i = 0; i < PROG_CNT; i++)
+		if (!IS_ERR_OR_NULL(link[i]))
+			bpf_link__destroy(link[i]);
+	if (!IS_ERR_OR_NULL(obj))
+		bpf_object__close(obj);
+	bpf_object__close(pkt_obj);
+}
diff --git a/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c b/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
new file mode 100644
index 000000000000..69b592217e99
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
@@ -0,0 +1,91 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2019 Facebook */
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+struct sk_buff {
+	unsigned int len;
+};
+
+struct args {
+	struct sk_buff *skb;
+	ks32 ret;
+};
+static volatile __u64 test_result;
+SEC("fexit/test_pkt_access")
+int test_main(struct args *ctx)
+{
+	struct sk_buff *skb = ctx->skb;
+	int len;
+
+	__builtin_preserve_access_index(({
+		len = skb->len;
+	}));
+	if (len != 74 || ctx->ret != 0)
+		return 0;
+	test_result = 1;
+	return 0;
+}
+
+struct args_subprog1 {
+	struct sk_buff *skb;
+	ks32 ret;
+};
+static volatile __u64 test_result_subprog1;
+SEC("fexit/test_pkt_access_subprog1")
+int test_subprog1(struct args_subprog1 *ctx)
+{
+	struct sk_buff *skb = ctx->skb;
+	int len;
+
+	__builtin_preserve_access_index(({
+		len = skb->len;
+	}));
+	if (len != 74 || ctx->ret != 148)
+		return 0;
+	test_result_subprog1 = 1;
+	return 0;
+}
+
+/* Though test_pkt_access_subprog2() is defined in C as:
+ * static __attribute__ ((noinline))
+ * int test_pkt_access_subprog2(int val, volatile struct __sk_buff *skb)
+ * {
+ *     return skb->len * val;
+ * }
+ * llvm optimizations remove 'int val' argument and generate BPF assembly:
+ *   r0 = *(u32 *)(r1 + 0)
+ *   w0 <<= 1
+ *   exit
+ * In such case the verifier falls back to conservative and
+ * tracing program can access arguments and return value as u64
+ * instead of accurate types.
+ */
+struct args_subprog2 {
+	ku64 args[5];
+	ku64 ret;
+};
+static volatile __u64 test_result_subprog2;
+SEC("fexit/test_pkt_access_subprog2")
+int test_subprog2(struct args_subprog2 *ctx)
+{
+	struct sk_buff *skb = (void *)ctx->args[0];
+	__u64 ret;
+	int len;
+
+	bpf_probe_read(&len, sizeof(len),
+		       __builtin_preserve_access_index(&skb->len));
+
+	ret = ctx->ret;
+	/* bpf_prog_load() loads "test_pkt_access.o" with BPF_F_TEST_RND_HI32
+	 * which randomizes upper 32 bits after BPF_ALU32 insns.
+	 * Hence after 'w0 <<= 1' upper bits of $rax are random.
+	 * That is expected and correct. Trim them.
+	 */
+	ret = (__u32) ret;
+	if (len != 74 || ret != 148)
+		return 0;
+	test_result_subprog2 = 1;
+	return 0;
+}
+char _license[] SEC("license") = "GPL";
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  6:40 ` [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper Alexei Starovoitov
@ 2019-11-08  6:56   ` Song Liu
  2019-11-08  8:23   ` Björn Töpel
  2019-11-08  9:11   ` Peter Zijlstra
  2 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  6:56 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> nops/calls in kernel text into calls into BPF trampoline and to patch
> calls/nops inside BPF programs too.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


> ---
> arch/x86/net/bpf_jit_comp.c | 51 +++++++++++++++++++++++++++++++++++++
> include/linux/bpf.h         |  8 ++++++
> kernel/bpf/core.c           |  6 +++++
> 3 files changed, 65 insertions(+)
> 
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 0399b1f83c23..bb8467fd6715 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -9,9 +9,11 @@
> #include <linux/filter.h>
> #include <linux/if_vlan.h>
> #include <linux/bpf.h>
> +#include <linux/memory.h>
> #include <asm/extable.h>
> #include <asm/set_memory.h>
> #include <asm/nospec-branch.h>
> +#include <asm/text-patching.h>
> 
> static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
> {
> @@ -487,6 +489,55 @@ static int emit_call(u8 **pprog, void *func, void *ip)
> 	return 0;
> }
> 
> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> +		       void *old_addr, void *new_addr)
> +{
> +	u8 old_insn[X86_CALL_SIZE] = {};
> +	u8 new_insn[X86_CALL_SIZE] = {};
> +	u8 *prog;
> +	int ret;
> +
> +	if (!is_kernel_text((long)ip))
> +		/* BPF trampoline in modules is not supported */
> +		return -EINVAL;
> +
> +	if (old_addr) {
> +		prog = old_insn;
> +		ret = emit_call(&prog, old_addr, (void *)ip);
> +		if (ret)
> +			return ret;
> +	}
> +	if (new_addr) {
> +		prog = new_insn;
> +		ret = emit_call(&prog, new_addr, (void *)ip);
> +		if (ret)
> +			return ret;
> +	}
> +	ret = -EBUSY;
> +	mutex_lock(&text_mutex);
> +	switch (t) {
> +	case BPF_MOD_NOP_TO_CALL:
> +		if (memcmp(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE))
> +			goto out;
> +		text_poke(ip, new_insn, X86_CALL_SIZE);
> +		break;
> +	case BPF_MOD_CALL_TO_CALL:
> +		if (memcmp(ip, old_insn, X86_CALL_SIZE))
> +			goto out;
> +		text_poke(ip, new_insn, X86_CALL_SIZE);
> +		break;
> +	case BPF_MOD_CALL_TO_NOP:
> +		if (memcmp(ip, old_insn, X86_CALL_SIZE))
> +			goto out;
> +		text_poke(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE);
> +		break;
> +	}
> +	ret = 0;
> +out:
> +	mutex_unlock(&text_mutex);
> +	return ret;
> +}
> +
> static bool ex_handler_bpf(const struct exception_table_entry *x,
> 			   struct pt_regs *regs, int trapnr,
> 			   unsigned long error_code, unsigned long fault_addr)
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 7c7f518811a6..8b90db25348a 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1157,4 +1157,12 @@ static inline u32 bpf_xdp_sock_convert_ctx_access(enum bpf_access_type type,
> }
> #endif /* CONFIG_INET */
> 
> +enum bpf_text_poke_type {
> +	BPF_MOD_NOP_TO_CALL,
> +	BPF_MOD_CALL_TO_CALL,
> +	BPF_MOD_CALL_TO_NOP,
> +};
> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> +		       void *addr1, void *addr2);
> +
> #endif /* _LINUX_BPF_H */
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index c1fde0303280..c4bcec1014a9 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2140,6 +2140,12 @@ int __weak skb_copy_bits(const struct sk_buff *skb, int offset, void *to,
> 	return -EFAULT;
> }
> 
> +int __weak bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> +			      void *addr1, void *addr2)
> +{
> +	return -ENOTSUPP;
> +}
> +
> DEFINE_STATIC_KEY_FALSE(bpf_stats_enabled_key);
> EXPORT_SYMBOL(bpf_stats_enabled_key);
> 
> -- 
> 2.23.0
> 


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 03/18] bpf: Introduce BPF trampoline
  2019-11-08  6:40 ` [PATCH v3 bpf-next 03/18] bpf: Introduce BPF trampoline Alexei Starovoitov
@ 2019-11-08  7:04   ` Song Liu
  0 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:04 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:

[...]

> BPF trampoline is intended to be used beyond tracing and fentry/fexit use cases
> in the future. For example to remove retpoline cost from XDP programs.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> Acked-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>




^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind()
  2019-11-08  6:40 ` [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind() Alexei Starovoitov
@ 2019-11-08  7:05   ` Song Liu
  2019-11-08 19:21   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:05 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Introduce btf__find_by_name_kind() helper to search BTF by name and kind, since
> name alone can be ambiguous.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs Alexei Starovoitov
@ 2019-11-08  7:12   ` Song Liu
  2019-11-08 19:44   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:12 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Teach libbpf to recognize tracing programs types and attach them to
> fentry/fexit.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 10/18] selftests/bpf: Add combined fentry/fexit test
  2019-11-08  6:40 ` [PATCH v3 bpf-next 10/18] selftests/bpf: Add combined fentry/fexit test Alexei Starovoitov
@ 2019-11-08  7:14   ` Song Liu
  0 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:14 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Add a combined fentry/fexit test.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 11/18] selftests/bpf: Add stress test for maximum number of progs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 11/18] selftests/bpf: Add stress test for maximum number of progs Alexei Starovoitov
@ 2019-11-08  7:24   ` Song Liu
  0 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:24 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Add stress test for maximum number of attached BPF programs per BPF trampoline.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 12/18] bpf: Reserve space for BPF trampoline in BPF programs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 12/18] bpf: Reserve space for BPF trampoline in BPF programs Alexei Starovoitov
@ 2019-11-08  7:25   ` Song Liu
  0 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:25 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> BPF trampoline can be made to work with existing 5 bytes of BPF program
> prologue, but let's add 5 bytes of NOPs to the beginning of every JITed BPF
> program to make BPF trampoline job easier. They can be removed in the future.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> Acked-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>



^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id()
  2019-11-08  6:40 ` [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id() Alexei Starovoitov
@ 2019-11-08  7:32   ` Song Liu
  2019-11-08 19:58   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08  7:32 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> btf_resolve_helper_id() caching logic is a bit racy, since under root the
> verifier can verify several programs in parallel. Fix it with READ/WRITE_ONCE.
> Fix the type as well, since error is also recorded.
> 
> Fixes: a7658e1a4164 ("bpf: Check types of arguments passed into helpers")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  6:40 ` [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper Alexei Starovoitov
  2019-11-08  6:56   ` Song Liu
@ 2019-11-08  8:23   ` Björn Töpel
  2019-11-08 14:09     ` Alexei Starovoitov
  2019-11-08  9:11   ` Peter Zijlstra
  2 siblings, 1 reply; 67+ messages in thread
From: Björn Töpel @ 2019-11-08  8:23 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David Miller, Daniel Borkmann, x86, Netdev, bpf, Kernel Team

On Fri, 8 Nov 2019 at 07:41, Alexei Starovoitov <ast@kernel.org> wrote:
>
> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> nops/calls in kernel text into calls into BPF trampoline and to patch
> calls/nops inside BPF programs too.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  arch/x86/net/bpf_jit_comp.c | 51 +++++++++++++++++++++++++++++++++++++
>  include/linux/bpf.h         |  8 ++++++
>  kernel/bpf/core.c           |  6 +++++
>  3 files changed, 65 insertions(+)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 0399b1f83c23..bb8467fd6715 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -9,9 +9,11 @@
>  #include <linux/filter.h>
>  #include <linux/if_vlan.h>
>  #include <linux/bpf.h>
> +#include <linux/memory.h>
>  #include <asm/extable.h>
>  #include <asm/set_memory.h>
>  #include <asm/nospec-branch.h>
> +#include <asm/text-patching.h>
>
>  static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
>  {
> @@ -487,6 +489,55 @@ static int emit_call(u8 **pprog, void *func, void *ip)
>         return 0;
>  }
>
> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> +                      void *old_addr, void *new_addr)
> +{
> +       u8 old_insn[X86_CALL_SIZE] = {};
> +       u8 new_insn[X86_CALL_SIZE] = {};
> +       u8 *prog;
> +       int ret;
> +
> +       if (!is_kernel_text((long)ip))
> +               /* BPF trampoline in modules is not supported */
> +               return -EINVAL;
> +
> +       if (old_addr) {
> +               prog = old_insn;
> +               ret = emit_call(&prog, old_addr, (void *)ip);
> +               if (ret)
> +                       return ret;
> +       }
> +       if (new_addr) {
> +               prog = new_insn;
> +               ret = emit_call(&prog, new_addr, (void *)ip);
> +               if (ret)
> +                       return ret;
> +       }
> +       ret = -EBUSY;
> +       mutex_lock(&text_mutex);
> +       switch (t) {
> +       case BPF_MOD_NOP_TO_CALL:
> +               if (memcmp(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE))
> +                       goto out;
> +               text_poke(ip, new_insn, X86_CALL_SIZE);

I'm probably missing something, but why isn't text_poke_bp() needed here?

> +               break;
> +       case BPF_MOD_CALL_TO_CALL:
> +               if (memcmp(ip, old_insn, X86_CALL_SIZE))
> +                       goto out;
> +               text_poke(ip, new_insn, X86_CALL_SIZE);
> +               break;
> +       case BPF_MOD_CALL_TO_NOP:
> +               if (memcmp(ip, old_insn, X86_CALL_SIZE))
> +                       goto out;
> +               text_poke(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE);
> +               break;
> +       }
> +       ret = 0;
> +out:
> +       mutex_unlock(&text_mutex);
> +       return ret;
> +}
> +
>  static bool ex_handler_bpf(const struct exception_table_entry *x,
>                            struct pt_regs *regs, int trapnr,
>                            unsigned long error_code, unsigned long fault_addr)
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 7c7f518811a6..8b90db25348a 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1157,4 +1157,12 @@ static inline u32 bpf_xdp_sock_convert_ctx_access(enum bpf_access_type type,
>  }
>  #endif /* CONFIG_INET */
>
> +enum bpf_text_poke_type {
> +       BPF_MOD_NOP_TO_CALL,
> +       BPF_MOD_CALL_TO_CALL,
> +       BPF_MOD_CALL_TO_NOP,
> +};
> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> +                      void *addr1, void *addr2);
> +
>  #endif /* _LINUX_BPF_H */
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index c1fde0303280..c4bcec1014a9 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2140,6 +2140,12 @@ int __weak skb_copy_bits(const struct sk_buff *skb, int offset, void *to,
>         return -EFAULT;
>  }
>
> +int __weak bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
> +                             void *addr1, void *addr2)
> +{
> +       return -ENOTSUPP;
> +}
> +
>  DEFINE_STATIC_KEY_FALSE(bpf_stats_enabled_key);
>  EXPORT_SYMBOL(bpf_stats_enabled_key);
>
> --
> 2.23.0
>

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  6:40 ` [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper Alexei Starovoitov
  2019-11-08  6:56   ` Song Liu
  2019-11-08  8:23   ` Björn Töpel
@ 2019-11-08  9:11   ` Peter Zijlstra
  2019-11-08  9:36     ` Peter Zijlstra
  2 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2019-11-08  9:11 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: davem, daniel, x86, netdev, bpf, kernel-team

On Thu, Nov 07, 2019 at 10:40:23PM -0800, Alexei Starovoitov wrote:
> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> nops/calls in kernel text into calls into BPF trampoline and to patch
> calls/nops inside BPF programs too.

This thing assumes the text is unused, right? That isn't spelled out
anywhere. The implementation is very much unsafe vs concurrent execution
of the text.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  9:11   ` Peter Zijlstra
@ 2019-11-08  9:36     ` Peter Zijlstra
  2019-11-08 13:41       ` Alexei Starovoitov
  0 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2019-11-08  9:36 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: davem, daniel, x86, netdev, bpf, kernel-team

On Fri, Nov 08, 2019 at 10:11:56AM +0100, Peter Zijlstra wrote:
> On Thu, Nov 07, 2019 at 10:40:23PM -0800, Alexei Starovoitov wrote:
> > Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> > nops/calls in kernel text into calls into BPF trampoline and to patch
> > calls/nops inside BPF programs too.
> 
> This thing assumes the text is unused, right? That isn't spelled out
> anywhere. The implementation is very much unsafe vs concurrent execution
> of the text.

Also, what NOP/CALL instructions will you be hijacking? If you're
planning on using the fentry nops, then what ensures this and ftrace
don't trample on one another? Similar for kprobes.

In general, what ensures every instruction only has a single modifier?

I'm very uncomfortable letting random bpf proglets poke around in the
kernel text.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  9:36     ` Peter Zijlstra
@ 2019-11-08 13:41       ` Alexei Starovoitov
  2019-11-08 19:32         ` Alexei Starovoitov
  0 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 13:41 UTC (permalink / raw)
  To: Peter Zijlstra, Alexei Starovoitov
  Cc: davem, daniel, x86, netdev, bpf, Kernel Team

On 11/8/19 1:36 AM, Peter Zijlstra wrote:
> On Fri, Nov 08, 2019 at 10:11:56AM +0100, Peter Zijlstra wrote:
>> On Thu, Nov 07, 2019 at 10:40:23PM -0800, Alexei Starovoitov wrote:
>>> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
>>> nops/calls in kernel text into calls into BPF trampoline and to patch
>>> calls/nops inside BPF programs too.
>>
>> This thing assumes the text is unused, right? That isn't spelled out
>> anywhere. The implementation is very much unsafe vs concurrent execution
>> of the text.
> 
> Also, what NOP/CALL instructions will you be hijacking? If you're
> planning on using the fentry nops, then what ensures this and ftrace
> don't trample on one another? Similar for kprobes.
> 
> In general, what ensures every instruction only has a single modifier?

Looks like you didn't bother reading the cover letter and missed a month
of discussions between me and Steven regarding exactly this topic,
though you were directly cc-ed in all threads :(
tl;dr: for kernel fentry nops this will be converted to use
register_ftrace_direct() whenever it's available.
For all other nops, calls and jumps that are inside BPF programs, BPF infra
will continue modifying them through this helper.
Daniel's upcoming bpf_tail_call() optimization will use text_poke as well.
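
To make it concrete, the infra-side use of the helper looks roughly like
this (prog_nop_ip and trampoline_image are placeholders for addresses that
BPF infra tracks internally):

	/* e.g. when the first fentry/fexit prog attaches to a JITed BPF prog */
	err = bpf_arch_text_poke(prog_nop_ip, BPF_MOD_NOP_TO_CALL,
				 NULL /* old_addr */, trampoline_image);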

> I'm very uncomfortable letting random bpf proglets poke around in the
> kernel text.

1. There is no such thing as 'proglet'. Please don't invent meaningless 
names.
2. BPF programs have no ability to modify kernel text.
3. BPF infra takes all necessary measures to make sure that poking
kernel and BPF-generated text is safe.
If you see a specific issue please say so. We'll be happy to address
all issues. Being 'uncomfortable' is not constructive.




^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08  8:23   ` Björn Töpel
@ 2019-11-08 14:09     ` Alexei Starovoitov
  0 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 14:09 UTC (permalink / raw)
  To: Björn Töpel, Alexei Starovoitov
  Cc: David Miller, Daniel Borkmann, x86, Netdev, bpf, Kernel Team,
	Peter Zijlstra

On 11/8/19 12:23 AM, Björn Töpel wrote:
> On Fri, 8 Nov 2019 at 07:41, Alexei Starovoitov <ast@kernel.org> wrote:
>>
>> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
>> nops/calls in kernel text into calls into BPF trampoline and to patch
>> calls/nops inside BPF programs too.
>>
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>> ---
>>   arch/x86/net/bpf_jit_comp.c | 51 +++++++++++++++++++++++++++++++++++++
>>   include/linux/bpf.h         |  8 ++++++
>>   kernel/bpf/core.c           |  6 +++++
>>   3 files changed, 65 insertions(+)
>>
>> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
>> index 0399b1f83c23..bb8467fd6715 100644
>> --- a/arch/x86/net/bpf_jit_comp.c
>> +++ b/arch/x86/net/bpf_jit_comp.c
>> @@ -9,9 +9,11 @@
>>   #include <linux/filter.h>
>>   #include <linux/if_vlan.h>
>>   #include <linux/bpf.h>
>> +#include <linux/memory.h>
>>   #include <asm/extable.h>
>>   #include <asm/set_memory.h>
>>   #include <asm/nospec-branch.h>
>> +#include <asm/text-patching.h>
>>
>>   static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
>>   {
>> @@ -487,6 +489,55 @@ static int emit_call(u8 **pprog, void *func, void *ip)
>>          return 0;
>>   }
>>
>> +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t,
>> +                      void *old_addr, void *new_addr)
>> +{
>> +       u8 old_insn[X86_CALL_SIZE] = {};
>> +       u8 new_insn[X86_CALL_SIZE] = {};
>> +       u8 *prog;
>> +       int ret;
>> +
>> +       if (!is_kernel_text((long)ip))
>> +               /* BPF trampoline in modules is not supported */
>> +               return -EINVAL;
>> +
>> +       if (old_addr) {
>> +               prog = old_insn;
>> +               ret = emit_call(&prog, old_addr, (void *)ip);
>> +               if (ret)
>> +                       return ret;
>> +       }
>> +       if (new_addr) {
>> +               prog = new_insn;
>> +               ret = emit_call(&prog, new_addr, (void *)ip);
>> +               if (ret)
>> +                       return ret;
>> +       }
>> +       ret = -EBUSY;
>> +       mutex_lock(&text_mutex);
>> +       switch (t) {
>> +       case BPF_MOD_NOP_TO_CALL:
>> +               if (memcmp(ip, ideal_nops[NOP_ATOMIC5], X86_CALL_SIZE))
>> +                       goto out;
>> +               text_poke(ip, new_insn, X86_CALL_SIZE);
> 
> I'm probably missing something, but why isn't text_poke_bp() needed here?

I should have documented the intent better.
text_poke_bp() is being changed by Peter to emulate instructions
properly in his ftrace->text_poke conversion set.
So I cannot use it just yet.
To your point that text_poke() is technically incorrect here: yep,
well aware. This is temporary. As I said in the cover letter this
needs to change to register_ftrace_direct() for kernel text poking to
play nice with ftrace. Thinking about it more... I guess I can use
text_poke_bp(). Just need to set up the handler properly. I may need to do it
for bpf prog poking anyway. Wanted to avoid the extra churn that is going
to be removed during the merge window when the trees converge.

Since we're on this subject:
Peter,
why don't you do an 8-byte atomic rewrite when the start addr of the insn
is properly aligned? This trap dance would be unnecessary.
That would make everything so much simpler.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types
  2019-11-08  6:40 ` [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types Alexei Starovoitov
@ 2019-11-08 17:28   ` Song Liu
  2019-11-08 17:32     ` Song Liu
  2019-11-08 23:46   ` Andrii Nakryiko
  1 sibling, 1 reply; 67+ messages in thread
From: Song Liu @ 2019-11-08 17:28 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Make the verifier check that BTF types of function arguments match actual types
> passed into top-level BPF program and into BPF-to-BPF calls. If types match
> such BPF programs and sub-programs will have full support of BPF trampoline. If
> types mismatch the trampoline has to be conservative. It has to save/restore
> all 5 program arguments and assume 64-bit scalars. If FENTRY/FEXIT program is
> attached to this program in the future such FENTRY/FEXIT program will be able
> to follow pointers only via bpf_probe_read_kernel().
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types
  2019-11-08 17:28   ` Song Liu
@ 2019-11-08 17:32     ` Song Liu
  2019-11-08 17:57       ` Alexei Starovoitov
  0 siblings, 1 reply; 67+ messages in thread
From: Song Liu @ 2019-11-08 17:32 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 8, 2019, at 9:28 AM, Song Liu <songliubraving@fb.com> wrote:
> 
> 
> 
>> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
>> 
>> Make the verifier check that BTF types of function arguments match actual types
>> passed into top-level BPF program and into BPF-to-BPF calls. If types match
>> such BPF programs and sub-programs will have full support of BPF trampoline. If
>> types mismatch the trampoline has to be conservative. It has to save/restore
>> all 5 program arguments and assume 64-bit scalars. If FENTRY/FEXIT program is
>> attached to this program in the future such FENTRY/FEXIT program will be able
>> to follow pointers only via bpf_probe_read_kernel().
>> 
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> 
> Acked-by: Song Liu <songliubraving@fb.com>

One nit though: maybe use "reliable" instead of "unreliable"

+struct bpf_func_info_aux {
+	bool reliable;
+};
+

+	bool func_proto_reliable;

So the default value 0, is not reliable. 

Thanks,
Song

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types
  2019-11-08 17:32     ` Song Liu
@ 2019-11-08 17:57       ` Alexei Starovoitov
  2019-11-08 17:59         ` Song Liu
  0 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 17:57 UTC (permalink / raw)
  To: Song Liu, Alexei Starovoitov
  Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team

On 11/8/19 9:32 AM, Song Liu wrote:
> 
> 
>> On Nov 8, 2019, at 9:28 AM, Song Liu <songliubraving@fb.com> wrote:
>>
>>
>>
>>> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
>>>
>>> Make the verifier check that BTF types of function arguments match actual types
>>> passed into top-level BPF program and into BPF-to-BPF calls. If types match
>>> such BPF programs and sub-programs will have full support of BPF trampoline. If
>>> types mismatch the trampoline has to be conservative. It has to save/restore
>>> all 5 program arguments and assume 64-bit scalars. If FENTRY/FEXIT program is
>>> attached to this program in the future such FENTRY/FEXIT program will be able
>>> to follow pointers only via bpf_probe_read_kernel().
>>>
>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>
>> Acked-by: Song Liu <songliubraving@fb.com>
> 
> One nit though: maybe use "reliable" instead of "unreliable"
> 
> +struct bpf_func_info_aux {
> +	bool reliable;
> +};
> +
> 
> +	bool func_proto_reliable;
> 
> So the default value 0, is not reliable.

I don't see how this can work.
Once a particular func proto is found unreliable the verifier won't keep
checking. If we start with 'bool reliable = false',
how do you see the whole mechanism working?
Say the first time the verifier analyzes the subroutine, everything
matches. Can it do reliable = true? No. It has to check all other
callsites. Then it would need another variable and an extra pass?

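To illustrate the point, a hypothetical sketch (not the actual patch code)
of why a default-false 'unreliable' flag keeps this a single pass: the flag
can only ever be set, never cleared, no matter how many call sites the
verifier walks:

	/* per-subprog flag, default false, as in func_info_aux[].unreliable */
	if (!types_match(btf_proto, actual_arg_types))
		aux->func_info_aux[subprog].unreliable = true;
	/* a 'reliable' flag set at the first matching call site would have
	 * to be cleared again by a later mismatching one, which needs extra
	 * state or another pass
	 */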

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types
  2019-11-08 17:57       ` Alexei Starovoitov
@ 2019-11-08 17:59         ` Song Liu
  0 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08 17:59 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 8, 2019, at 9:57 AM, Alexei Starovoitov <ast@fb.com> wrote:
> 
> On 11/8/19 9:32 AM, Song Liu wrote:
>> 
>> 
>>> On Nov 8, 2019, at 9:28 AM, Song Liu <songliubraving@fb.com> wrote:
>>> 
>>> 
>>> 
>>>> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
>>>> 
>>>> Make the verifier check that BTF types of function arguments match actual types
>>>> passed into top-level BPF program and into BPF-to-BPF calls. If types match
>>>> such BPF programs and sub-programs will have full support of BPF trampoline. If
>>>> types mismatch the trampoline has to be conservative. It has to save/restore
>>>> all 5 program arguments and assume 64-bit scalars. If FENTRY/FEXIT program is
>>>> attached to this program in the future such FENTRY/FEXIT program will be able
>>>> to follow pointers only via bpf_probe_read_kernel().
>>>> 
>>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>> 
>>> Acked-by: Song Liu <songliubraving@fb.com>
>> 
>> One nit though: maybe use "reliable" instead of "unreliable"
>> 
>> +struct bpf_func_info_aux {
>> +	bool reliable;
>> +};
>> +
>> 
>> +	bool func_proto_reliable;
>> 
>> So the default value, 0, is not reliable.
> 
> I don't see how this can work.
> Once a particular func proto is found unreliable, the verifier won't keep
> checking. If we start with 'bool reliable = false',
> how do you see the whole mechanism working?
> Say the verifier analyzes the subroutine for the first time and everything
> matches. Can it set reliable = true? No. It has to check all other
> call sites. Then it would need another variable and an extra pass?

I see. I missed the multiple call sites part. 

Thanks for the explanation. 
Song



* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs Alexei Starovoitov
@ 2019-11-08 18:49   ` Song Liu
  2019-11-08 18:59     ` Alexei Starovoitov
  2019-11-08 20:17   ` Toke Høiland-Jørgensen
  2019-11-10  7:17   ` Andrii Nakryiko
  2 siblings, 1 reply; 67+ messages in thread
From: Song Liu @ 2019-11-08 18:49 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
> including their subprograms. This feature allows snooping on input and output
> packets in XDP, TC programs including their return values. In order to do that
> the verifier needs to track types not only of vmlinux, but types of other BPF
> programs as well. The verifier also needs to translate uapi/linux/bpf.h types
> used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
> BPF programs. In some cases LLVM optimizations can remove arguments from BPF
> subprograms without adjusting BTF info that LLVM backend knows. When BTF info
> disagrees with actual types that the verifiers sees the BPF trampoline has to
> fallback to conservative and treat all arguments as u64. The FENTRY/FEXIT
> program can still attach to such subprograms, but won't be able to recognize
> pointer types like 'struct sk_buff *' into won't be able to pass them to
					^^^^^ these few words are confusing
> bpf_skb_output() for dumping to user space.
> 
> The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
> to zero the attach_btf_id is one vmlinux BTF type ids. When attach_prog_fd
> points to previously loaded BPF program the attach_btf_id is BTF type id of
> main function or one of its subprograms.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> 

[...]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index cd9a9395c4b5..f385c4043594 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -9390,13 +9390,17 @@ static void print_verification_stats(struct bpf_verifier_env *env)
> static int check_attach_btf_id(struct bpf_verifier_env *env)
> {
> 	struct bpf_prog *prog = env->prog;
> +	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
> 	u32 btf_id = prog->aux->attach_btf_id;
> 	const char prefix[] = "btf_trace_";
> 	struct bpf_trampoline *tr;
> 	const struct btf_type *t;
> +	int ret, subprog = -1, i;
> +	bool conservative = true;
> 	const char *tname;
> +	struct btf *btf;
> 	long addr;
> -	int ret;
> +	u64 key;
> 
> 	if (prog->type != BPF_PROG_TYPE_TRACING)
> 		return 0;
> @@ -9405,19 +9409,42 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
> 		verbose(env, "Tracing programs must provide btf_id\n");
> 		return -EINVAL;
> 	}
> -	t = btf_type_by_id(btf_vmlinux, btf_id);
> +	btf = bpf_prog_get_target_btf(prog);

btf could be NULL here, so we need to check it?
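
For reference, a minimal sketch of the check being suggested (the error string
is made up):

	btf = bpf_prog_get_target_btf(prog);
	if (!btf) {
		verbose(env, "FENTRY/FEXIT target has no BTF\n");
		return -EINVAL;
	}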


[...]


* Re: [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs to other BPF programs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs " Alexei Starovoitov
@ 2019-11-08 18:57   ` Song Liu
  2019-11-08 19:13     ` Alexei Starovoitov
  2019-11-10 16:56   ` Andrii Nakryiko
  1 sibling, 1 reply; 67+ messages in thread
From: Song Liu @ 2019-11-08 18:57 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Extend libbpf api to pass attach_prog_fd into bpf_object__open.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

[...]

> +static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
> +{
> +	struct bpf_prog_info_linear *info_linear;
> +	struct bpf_prog_info *info;
> +	struct btf *btf = NULL;
> +	int err = -EINVAL;
> +
> +	info_linear = bpf_program__get_prog_info_linear(attach_prog_fd, 0);
> +	if (IS_ERR_OR_NULL(info_linear)) {
> +		pr_warn("failed get_prog_info_linear for FD %d\n",
> +			attach_prog_fd);
> +		return -EINVAL;
> +	}
> +	info = &info_linear->info;
> +	if (!info->btf_id) {
> +		pr_warn("The target program doesn't have BTF\n");
> +		goto out;
> +	}
> +	if (btf__get_from_id(info->btf_id, &btf)) {
> +		pr_warn("Failed to get BTF of the program\n");
> +		goto out;
> +	}
> +	err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
> +	btf__free(btf);
> +	if (err <= 0) {
> +		pr_warn("%s is not found in prog's BTF\n", name);
> +		goto out;
		^^^ This goto doesn't really do much. 
> +	}
> +out:
> +	free(info_linear);
> +	return err;
> +}

Otherwise

Acked-by: Song Liu <songliubraving@fb.com>
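
As a rough usage sketch of the new option (names follow this series:
attach_prog_fd in bpf_object_open_opts plus bpf_program__attach_trace();
error handling is trimmed, so treat it as an illustration rather than a
tested snippet):

#include <bpf/libbpf.h>

int attach_fexit_to_prog(const char *obj_path, int target_prog_fd)
{
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
			    .attach_prog_fd = target_prog_fd);
	struct bpf_object *obj;
	struct bpf_program *prog;
	struct bpf_link *link;

	obj = bpf_object__open_file(obj_path, &opts);
	if (libbpf_get_error(obj))
		return -1;
	if (bpf_object__load(obj))
		return -1;
	/* assume the object contains a single fexit/<target_func> program */
	prog = bpf_program__next(NULL, obj);
	link = bpf_program__attach_trace(prog);
	return libbpf_get_error(link) ? -1 : 0;
}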



* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08 18:49   ` Song Liu
@ 2019-11-08 18:59     ` Alexei Starovoitov
  0 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 18:59 UTC (permalink / raw)
  To: Song Liu, Alexei Starovoitov
  Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team

On 11/8/19 10:49 AM, Song Liu wrote:
> 
> 
>> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
>>
>> Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
>> including their subprograms. This feature allows snooping on input and output
>> packets in XDP, TC programs including their return values. In order to do that
>> the verifier needs to track types not only of vmlinux, but types of other BPF
>> programs as well. The verifier also needs to translate uapi/linux/bpf.h types
>> used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
>> BPF programs. In some cases LLVM optimizations can remove arguments from BPF
>> subprograms without adjusting BTF info that LLVM backend knows. When BTF info
>> disagrees with actual types that the verifiers sees the BPF trampoline has to
>> fallback to conservative and treat all arguments as u64. The FENTRY/FEXIT
>> program can still attach to such subprograms, but won't be able to recognize
>> pointer types like 'struct sk_buff *' into won't be able to pass them to
> 					^^^^^ these few words are confusing

yep. will fix.

>> bpf_skb_output() for dumping to user space.
>>
>> The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
>> to zero the attach_btf_id is one vmlinux BTF type ids. When attach_prog_fd
>> points to previously loaded BPF program the attach_btf_id is BTF type id of
>> main function or one of its subprograms.
>>
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>
> 
> [...]
> 
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index cd9a9395c4b5..f385c4043594 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -9390,13 +9390,17 @@ static void print_verification_stats(struct bpf_verifier_env *env)
>> static int check_attach_btf_id(struct bpf_verifier_env *env)
>> {
>> 	struct bpf_prog *prog = env->prog;
>> +	struct bpf_prog *tgt_prog = prog->aux->linked_prog;
>> 	u32 btf_id = prog->aux->attach_btf_id;
>> 	const char prefix[] = "btf_trace_";
>> 	struct bpf_trampoline *tr;
>> 	const struct btf_type *t;
>> +	int ret, subprog = -1, i;
>> +	bool conservative = true;
>> 	const char *tname;
>> +	struct btf *btf;
>> 	long addr;
>> -	int ret;
>> +	u64 key;
>>
>> 	if (prog->type != BPF_PROG_TYPE_TRACING)
>> 		return 0;
>> @@ -9405,19 +9409,42 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>> 		verbose(env, "Tracing programs must provide btf_id\n");
>> 		return -EINVAL;
>> 	}
>> -	t = btf_type_by_id(btf_vmlinux, btf_id);
>> +	btf = bpf_prog_get_target_btf(prog);
> 
> btf could be NULL here, so we need to check it?

yep. will fix.


* Re: [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test
  2019-11-08  6:40 ` [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test Alexei Starovoitov
@ 2019-11-08 19:03   ` Song Liu
  2019-11-10 16:58   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08 19:03 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> The test_pkt_access.o is used by multiple tests. Fix its section name so that
> program type can be automatically detected by libbpf and make it call other
> subprograms with skb argument.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>

> ---
> .../selftests/bpf/progs/test_pkt_access.c     | 38 ++++++++++++++++++-
> 1 file changed, 36 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/testing/selftests/bpf/progs/test_pkt_access.c b/tools/testing/selftests/bpf/progs/test_pkt_access.c
> index 7cf42d14103f..3a7b4b607ed3 100644
> --- a/tools/testing/selftests/bpf/progs/test_pkt_access.c
> +++ b/tools/testing/selftests/bpf/progs/test_pkt_access.c
> @@ -17,8 +17,38 @@
> #define barrier() __asm__ __volatile__("": : :"memory")
> int _version SEC("version") = 1;
> 
> -SEC("test1")
> -int process(struct __sk_buff *skb)
> +/* llvm will optimize both subprograms into exactly the same BPF assembly
> + *
> + * Disassembly of section .text:
> + *
> + * 0000000000000000 test_pkt_access_subprog1:
> + * ; 	return skb->len * 2;
> + *        0:	61 10 00 00 00 00 00 00	r0 = *(u32 *)(r1 + 0)
> + *        1:	64 00 00 00 01 00 00 00	w0 <<= 1
> + *        2:	95 00 00 00 00 00 00 00	exit
> + *
> + * 0000000000000018 test_pkt_access_subprog2:
> + * ; 	return skb->len * val;
> + *        3:	61 10 00 00 00 00 00 00	r0 = *(u32 *)(r1 + 0)
> + *        4:	64 00 00 00 01 00 00 00	w0 <<= 1
> + *        5:	95 00 00 00 00 00 00 00	exit
> + *
> + * Which makes it an interesting test for BTF-enabled verifier.

This is interesting!



* Re: [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs to other BPF programs
  2019-11-08 18:57   ` Song Liu
@ 2019-11-08 19:13     ` Alexei Starovoitov
  2019-11-08 19:14       ` Song Liu
  0 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 19:13 UTC (permalink / raw)
  To: Song Liu, Alexei Starovoitov
  Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team

On 11/8/19 10:57 AM, Song Liu wrote:
> 
> 
>> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
>>
>> Extend libbpf api to pass attach_prog_fd into bpf_object__open.
>>
>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>> ---
> 
> [...]
> 
>> +static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
>> +{
>> +	struct bpf_prog_info_linear *info_linear;
>> +	struct bpf_prog_info *info;
>> +	struct btf *btf = NULL;
>> +	int err = -EINVAL;
>> +
>> +	info_linear = bpf_program__get_prog_info_linear(attach_prog_fd, 0);
>> +	if (IS_ERR_OR_NULL(info_linear)) {
>> +		pr_warn("failed get_prog_info_linear for FD %d\n",
>> +			attach_prog_fd);
>> +		return -EINVAL;
>> +	}
>> +	info = &info_linear->info;
>> +	if (!info->btf_id) {
>> +		pr_warn("The target program doesn't have BTF\n");
>> +		goto out;
>> +	}
>> +	if (btf__get_from_id(info->btf_id, &btf)) {
>> +		pr_warn("Failed to get BTF of the program\n");
>> +		goto out;
>> +	}
>> +	err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
>> +	btf__free(btf);
>> +	if (err <= 0) {
>> +		pr_warn("%s is not found in prog's BTF\n", name);
>> +		goto out;
> 		^^^ This goto doesn't really do much.

yeah. it does look a bit weird.
I wanted to keep uniform error handling, but can remove it
if you insist.

>> +	}
>> +out:
>> +	free(info_linear);
>> +	return err;
>> +}
> 
> Otherwise
> 
> Acked-by: Song Liu <songliubraving@fb.com>
> 



* Re: [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog
  2019-11-08  6:40 ` [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog Alexei Starovoitov
@ 2019-11-08 19:13   ` Song Liu
  2019-11-10 17:04   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08 19:13 UTC (permalink / raw)
  To: Alexei Starovoitov; +Cc: David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
> 
> Add a test that attaches one FEXIT program to main sched_cls networking program
> and two other FEXIT programs to subprograms. All three tracing programs
> access return values and skb->len of networking program and subprograms.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Acked-by: Song Liu <songliubraving@fb.com>


* Re: [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs to other BPF programs
  2019-11-08 19:13     ` Alexei Starovoitov
@ 2019-11-08 19:14       ` Song Liu
  0 siblings, 0 replies; 67+ messages in thread
From: Song Liu @ 2019-11-08 19:14 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, David Miller, daniel, x86, netdev, bpf, Kernel Team



> On Nov 8, 2019, at 11:13 AM, Alexei Starovoitov <ast@fb.com> wrote:
> 
> On 11/8/19 10:57 AM, Song Liu wrote:
>> 
>> 
>>> On Nov 7, 2019, at 10:40 PM, Alexei Starovoitov <ast@kernel.org> wrote:
>>> 
>>> Extend libbpf api to pass attach_prog_fd into bpf_object__open.
>>> 
>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>> ---
>> 
>> [...]
>> 
>>> +static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
>>> +{
>>> +	struct bpf_prog_info_linear *info_linear;
>>> +	struct bpf_prog_info *info;
>>> +	struct btf *btf = NULL;
>>> +	int err = -EINVAL;
>>> +
>>> +	info_linear = bpf_program__get_prog_info_linear(attach_prog_fd, 0);
>>> +	if (IS_ERR_OR_NULL(info_linear)) {
>>> +		pr_warn("failed get_prog_info_linear for FD %d\n",
>>> +			attach_prog_fd);
>>> +		return -EINVAL;
>>> +	}
>>> +	info = &info_linear->info;
>>> +	if (!info->btf_id) {
>>> +		pr_warn("The target program doesn't have BTF\n");
>>> +		goto out;
>>> +	}
>>> +	if (btf__get_from_id(info->btf_id, &btf)) {
>>> +		pr_warn("Failed to get BTF of the program\n");
>>> +		goto out;
>>> +	}
>>> +	err = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
>>> +	btf__free(btf);
>>> +	if (err <= 0) {
>>> +		pr_warn("%s is not found in prog's BTF\n", name);
>>> +		goto out;
>> 		^^^ This goto doesn't really do much.
> 
> yeah. it does look a bit weird.
> I wanted to keep uniform error handling, but can remove it
> if you insist.

I think it is good as-is. 

Thanks,
Song


* Re: [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind()
  2019-11-08  6:40 ` [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind() Alexei Starovoitov
  2019-11-08  7:05   ` Song Liu
@ 2019-11-08 19:21   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-08 19:21 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Introduce btf__find_by_name_kind() helper to search BTF by name and kind, since
> name alone can be ambiguous.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

Ok, makes sense.

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  tools/lib/bpf/btf.c      | 22 ++++++++++++++++++++++
>  tools/lib/bpf/btf.h      |  2 ++
>  tools/lib/bpf/libbpf.map |  1 +
>  3 files changed, 25 insertions(+)

[...]


* Re: [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers
  2019-11-08  6:40 ` [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers Alexei Starovoitov
@ 2019-11-08 19:27   ` Andrii Nakryiko
  0 siblings, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-08 19:27 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Refactor x86 JITing of LDX, STX, CALL instructions into separate helper
> functions.  No functional changes in LDX and STX helpers.  There is a minor
> change in CALL helper. It will populate target address correctly on the first
> pass of JIT instead of second pass. That won't reduce total number of JIT
> passes though.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> Acked-by: Song Liu <songliubraving@fb.com>
> ---

Nice, logic is cleaner without extra gotos now, thanks!

Acked-by: Andrii Nakryiko <andriin@fb.com>

[...]


* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08 13:41       ` Alexei Starovoitov
@ 2019-11-08 19:32         ` Alexei Starovoitov
  2019-11-08 21:36           ` Peter Zijlstra
  0 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 19:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Alexei Starovoitov, davem, daniel, x86, netdev, bpf, Kernel Team

On Fri, Nov 8, 2019 at 5:42 AM Alexei Starovoitov <ast@fb.com> wrote:
>
> On 11/8/19 1:36 AM, Peter Zijlstra wrote:
> > On Fri, Nov 08, 2019 at 10:11:56AM +0100, Peter Zijlstra wrote:
> >> On Thu, Nov 07, 2019 at 10:40:23PM -0800, Alexei Starovoitov wrote:
> >>> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> >>> nops/calls in kernel text into calls into BPF trampoline and to patch
> >>> calls/nops inside BPF programs too.
> >>
> >> This thing assumes the text is unused, right? That isn't spelled out
> >> anywhere. The implementation is very much unsafe vs concurrent execution
> >> of the text.
> >
> > Also, what NOP/CALL instructions will you be hijacking? If you're
> > planning on using the fentry nops, then what ensures this and ftrace
> > don't trample on one another? Similar for kprobes.
> >
> > In general, what ensures every instruction only has a single modifier?
>
> Looks like you didn't bother reading cover letter and missed a month
> of discussions between my and Steven regarding exactly this topic
> though you were directly cc-ed in all threads :(
> tldr for kernel fentry nops it will be converted to use
> register_ftrace_direct() whenever it's available.
> For all other nops, calls, jumps that are inside BPF programs BPF infra
> will continue modifying them through this helper.
> Daniel's upcoming bpf_tail_call() optimization will use text_poke as well.
>
>  > I'm very uncomfortable letting random bpf proglets poke around in the
> kernel text.
>
> 1. There is no such thing as 'proglet'. Please don't invent meaningless
> names.
> 2. BPF programs have no ability to modify kernel text.
> 3. BPF infra taking all necessary measures to make sure that poking
> kernel's and BPF generated text is safe.
> If you see specific issue please say so. We'll be happy to address
> all issues. Being 'uncomfortable' is not constructive.
>

I was thinking more about this.
Peter,
do you mind if we apply your first patch:
https://lore.kernel.org/lkml/20191007081944.88332264.2@infradead.org/
to both tip and bpf-next trees?
Then I can use text_poke_bp() as-is without any additional ugliness
on my side that would need to be removed in a few weeks.
Do you have it in tip already?
I can cherry-pick from there to make sure it's exactly the same commit log,
so there will be no merge issues during the merge window.


* Re: [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs Alexei Starovoitov
  2019-11-08  7:12   ` Song Liu
@ 2019-11-08 19:44   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-08 19:44 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Teach libbpf to recognize tracing programs types and attach them to
> fentry/fexit.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  tools/include/uapi/linux/bpf.h |  2 +
>  tools/lib/bpf/libbpf.c         | 99 +++++++++++++++++++++++++---------
>  tools/lib/bpf/libbpf.h         |  4 ++
>  tools/lib/bpf/libbpf.map       |  2 +
>  4 files changed, 82 insertions(+), 25 deletions(-)
>

[...]


* Re: [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id()
  2019-11-08  6:40 ` [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id() Alexei Starovoitov
  2019-11-08  7:32   ` Song Liu
@ 2019-11-08 19:58   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-08 19:58 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:42 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> btf_resolve_helper_id() caching logic is a bit racy, since under root the
> verifier can verify several programs in parallel. Fix it with READ/WRITE_ONCE.
> Fix the type as well, since error is also recorded.
>
> Fixes: a7658e1a4164 ("bpf: Check types of arguments passed into helpers")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

looks good

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  include/linux/bpf.h   |  5 +++--
>  kernel/bpf/btf.c      | 26 +++++++++++++++++++++++++-
>  kernel/bpf/verifier.c |  8 +++-----
>  net/core/filter.c     |  2 +-
>  4 files changed, 32 insertions(+), 9 deletions(-)
>

[...]


* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs Alexei Starovoitov
  2019-11-08 18:49   ` Song Liu
@ 2019-11-08 20:17   ` Toke Høiland-Jørgensen
  2019-11-08 21:14     ` Alexei Starovoitov
  2019-11-10  7:17   ` Andrii Nakryiko
  2 siblings, 1 reply; 67+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-11-08 20:17 UTC (permalink / raw)
  To: Alexei Starovoitov, davem; +Cc: daniel, x86, netdev, bpf, kernel-team

Alexei Starovoitov <ast@kernel.org> writes:

> Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
> including their subprograms. This feature allows snooping on input and output
> packets in XDP, TC programs including their return values. In order to do that
> the verifier needs to track types not only of vmlinux, but types of other BPF
> programs as well. The verifier also needs to translate uapi/linux/bpf.h types
> used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
> BPF programs. In some cases LLVM optimizations can remove arguments from BPF
> subprograms without adjusting BTF info that LLVM backend knows. When BTF info
> disagrees with actual types that the verifiers sees the BPF trampoline has to
> fallback to conservative and treat all arguments as u64. The FENTRY/FEXIT
> program can still attach to such subprograms, but won't be able to recognize
> pointer types like 'struct sk_buff *' into won't be able to pass them to
> bpf_skb_output() for dumping to user space.
>
> The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
> to zero the attach_btf_id is one vmlinux BTF type ids. When attach_prog_fd
> points to previously loaded BPF program the attach_btf_id is BTF type id of
> main function or one of its subprograms.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

This is cool! Certainly solves the xdpdump use case; thanks!

I do have a few questions (thinking about whether it can also be used
for running multiple XDP programs):

- Can a FEXIT function loaded this way only *observe* the return code of
  the BPF program it attaches to, or can it also change it?

- Is it possible to attach multiple FENTRY/FEXIT programs to the same
  XDP program and/or to recursively attach FENTRY/FEXIT programs to each
  other?

- Could it be possible for an FENTRY/FEXIT program to call into another
  XDP program (i.e., one that has the regular XDP program type)?

-Toke



* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08 20:17   ` Toke Høiland-Jørgensen
@ 2019-11-08 21:14     ` Alexei Starovoitov
  2019-11-08 21:32       ` Toke Høiland-Jørgensen
  0 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 21:14 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen
  Cc: Alexei Starovoitov, davem, daniel, x86, netdev, bpf, kernel-team

On Fri, Nov 08, 2019 at 09:17:12PM +0100, Toke Høiland-Jørgensen wrote:
> Alexei Starovoitov <ast@kernel.org> writes:
> 
> > Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
> > including their subprograms. This feature allows snooping on input and output
> > packets in XDP, TC programs including their return values. In order to do that
> > the verifier needs to track types not only of vmlinux, but types of other BPF
> > programs as well. The verifier also needs to translate uapi/linux/bpf.h types
> > used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
> > BPF programs. In some cases LLVM optimizations can remove arguments from BPF
> > subprograms without adjusting BTF info that LLVM backend knows. When BTF info
> > disagrees with actual types that the verifiers sees the BPF trampoline has to
> > fallback to conservative and treat all arguments as u64. The FENTRY/FEXIT
> > program can still attach to such subprograms, but won't be able to recognize
> > pointer types like 'struct sk_buff *' into won't be able to pass them to
> > bpf_skb_output() for dumping to user space.
> >
> > The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
> > to zero the attach_btf_id is one vmlinux BTF type ids. When attach_prog_fd
> > points to previously loaded BPF program the attach_btf_id is BTF type id of
> > main function or one of its subprograms.
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> 
> This is cool! Certainly solves the xdpdump use case; thanks!
> 
> I do have a few questions (thinking about whether it can also be used
> for running multiple XDP programs):

excellent questions.

> - Can a FEXIT function loaded this way only *observe* the return code of
>   the BPF program it attaches to, or can it also change it?

yes. the verifier can be taught to support that for a certain class of programs.
That needs careful thinking to make sure it's safe.

> - Is it possible to attach multiple FENTRY/FEXIT programs to the same
>   XDP program 

Yes. Already possible. See fexit_stress.c that attaches 40 progs to the same
kernel function. Same thing when attaching fexit BPF to any XDP program.
Since all of them are read-only tracing progs, all progs have access to skb on
input and output along with the unmodified return value.

> and/or to recursively attach FENTRY/FEXIT programs to each
>   other?

Not right now, to avoid the complex logic of detecting cycles. See the simple bit:
   if (tgt_prog->type == BPF_PROG_TYPE_TRACING) {
           /* prevent cycles */
           verbose(env, "Cannot recursively attach\n");

> - Could it be possible for an FENTRY/FEXIT program to call into another
>   XDP program (i.e., one that has the regular XDP program type)?

It's possible to teach the verifier to do that, but we probably shouldn't take that
route. Instead I've started exploring the idea of dynamic linking. The
trampoline logic will be used to replace an existing BPF program or subprogram
instead of attaching read-only to it. If types match, the new program can
replace the existing one. The concept allows building any kind of call chain
programmatically. Pretty much what Ed proposed with static linking, but doing
it dynamically. I'll start a separate email thread explaining the details.



* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08 21:14     ` Alexei Starovoitov
@ 2019-11-08 21:32       ` Toke Høiland-Jørgensen
  0 siblings, 0 replies; 67+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-11-08 21:32 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, davem, daniel, x86, netdev, bpf, kernel-team

Alexei Starovoitov <alexei.starovoitov@gmail.com> writes:

> On Fri, Nov 08, 2019 at 09:17:12PM +0100, Toke Høiland-Jørgensen wrote:
>> Alexei Starovoitov <ast@kernel.org> writes:
>> 
>> > Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
>> > including their subprograms. This feature allows snooping on input and output
>> > packets in XDP, TC programs including their return values. In order to do that
>> > the verifier needs to track types not only of vmlinux, but types of other BPF
>> > programs as well. The verifier also needs to translate uapi/linux/bpf.h types
>> > used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
>> > BPF programs. In some cases LLVM optimizations can remove arguments from BPF
>> > subprograms without adjusting BTF info that LLVM backend knows. When BTF info
>> > disagrees with actual types that the verifiers sees the BPF trampoline has to
>> > fallback to conservative and treat all arguments as u64. The FENTRY/FEXIT
>> > program can still attach to such subprograms, but won't be able to recognize
>> > pointer types like 'struct sk_buff *' into won't be able to pass them to
>> > bpf_skb_output() for dumping to user space.
>> >
>> > The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
>> > to zero the attach_btf_id is one vmlinux BTF type ids. When attach_prog_fd
>> > points to previously loaded BPF program the attach_btf_id is BTF type id of
>> > main function or one of its subprograms.
>> >
>> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>> 
>> This is cool! Certainly solves the xdpdump use case; thanks!
>> 
>> I do have a few questions (thinking about whether it can also be used
>> for running multiple XDP programs):
>
> excellent questions.
>
>> - Can a FEXIT function loaded this way only *observe* the return code of
>>   the BPF program it attaches to, or can it also change it?
>
> yes. the verifier can be taught to support that for a certain class of programs.
> That needs careful thinking to make sure it's safe.

OK. I think this could potentially be useful to have for XDP (for
instance, to have xdpdump "steal" any packets it is observing by
changing the return code to XDP_DROP).

>> - Is it possible to attach multiple FENTRY/FEXIT programs to the same
>>   XDP program 
>
> Yes. Already possible. See fexit_stress.c that attaches 40 progs to the same
> kernel function. Same thing when attaching fexit BPF to any XDP program.
> Since all of them are read-only tracing progs, all progs have access to skb on
> input and output along with the unmodified return value.

Right, cool.

>> and/or to recursively attach FENTRY/FEXIT programs to each
>>   other?
>
> Not right now, to avoid the complex logic of detecting cycles. See the simple bit:
>    if (tgt_prog->type == BPF_PROG_TYPE_TRACING) {
>            /* prevent cycles */
>            verbose(env, "Cannot recursively attach\n");

OK, that is probably a reasonable tradeoff.

>> - Could it be possible for an FENTRY/FEXIT program to call into another
>>   XDP program (i.e., one that has the regular XDP program type)?
>
> It's possible to teach the verifier to do that, but we probably shouldn't take that
> route. Instead I've started exploring the idea of dynamic linking. The
> trampoline logic will be used to replace an existing BPF program or subprogram
> instead of attaching read-only to it. If types match, the new program can
> replace the existing one. The concept allows building any kind of call chain
> programmatically. Pretty much what Ed proposed with static linking, but doing
> it dynamically. I'll start a separate email thread explaining the details.

SGTM; will wait for the sequel, then :)

-Toke



* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08 19:32         ` Alexei Starovoitov
@ 2019-11-08 21:36           ` Peter Zijlstra
  2019-11-08 21:39             ` David Miller
  2019-11-08 23:05             ` Alexei Starovoitov
  0 siblings, 2 replies; 67+ messages in thread
From: Peter Zijlstra @ 2019-11-08 21:36 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, davem, daniel, x86, netdev, bpf, Kernel Team

On Fri, Nov 08, 2019 at 11:32:41AM -0800, Alexei Starovoitov wrote:
> On Fri, Nov 8, 2019 at 5:42 AM Alexei Starovoitov <ast@fb.com> wrote:
> >
> > On 11/8/19 1:36 AM, Peter Zijlstra wrote:
> > > On Fri, Nov 08, 2019 at 10:11:56AM +0100, Peter Zijlstra wrote:
> > >> On Thu, Nov 07, 2019 at 10:40:23PM -0800, Alexei Starovoitov wrote:
> > >>> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> > >>> nops/calls in kernel text into calls into BPF trampoline and to patch
> > >>> calls/nops inside BPF programs too.
> > >>
> > >> This thing assumes the text is unused, right? That isn't spelled out
> > >> anywhere. The implementation is very much unsafe vs concurrent execution
> > >> of the text.
> > >
> > > Also, what NOP/CALL instructions will you be hijacking? If you're
> > > planning on using the fentry nops, then what ensures this and ftrace
> > > don't trample on one another? Similar for kprobes.
> > >
> > > In general, what ensures every instruction only has a single modifier?
> >
> > Looks like you didn't bother reading cover letter and missed a month

I did indeed not. A Changelog should be self-sufficient and this one is
sorely lacking. The cover letter is not preserved and should therefore
not contain anything of value that is not also covered in the
Changelogs.

> > of discussions between my and Steven regarding exactly this topic
> > though you were directly cc-ed in all threads :(

I read some of it; it is a sad fact that I cannot read all email in my
inbox, esp. not if, like in the last week or so, I'm busy hunting a
regression.

And what I did remember of the emails I saw left me with the questions
that were not answered by the changelog.

> > tldr for kernel fentry nops it will be converted to use
> > register_ftrace_direct() whenever it's available.

So why the rush and not wait for that work to complete? It appears to me
that without due coordination between bpf and ftrace badness could
happen.

> > For all other nops, calls, jumps that are inside BPF programs BPF infra
> > will continue modifying them through this helper.
> > Daniel's upcoming bpf_tail_call() optimization will use text_poke as well.

This is probably off topic, but isn't tail-call optimization something
done at JIT time and therefore not in need of text_poke()?

> >  > I'm very uncomfortable letting random bpf proglets poke around in the
> > kernel text.
> >
> > 1. There is no such thing as 'proglet'. Please don't invent meaningless
> > names.

Again offtopic, but I'm not inventing stuff here:

  /prog'let/ [UK] A short extempore program written to meet an immediate, transient need.

which, methinks, succinctly captures the spirit of BPF.

> > 2. BPF programs have no ability to modify kernel text.

OK, that is good.

> > 3. BPF infra taking all necessary measures to make sure that poking
> > kernel's and BPF generated text is safe.

From the other subthread and your response below, it seems you are aware
that you in fact need text_poke_bp(). Again, it would've been excellent
Changelog/comment material to call out that the patch as presented is in
fact broken.

> I was thinking more about this.
> Peter,
> do you mind if we apply your first patch:
> https://lore.kernel.org/lkml/20191007081944.88332264.2@infradead.org/
> to both tip and bpf-next trees?

That would indeed be a much better solution. I'll repost much of that on
Monday, and then we'll work on getting at the very least that one patch
in a tip/branch we can share.

> Then I can use text_poke_bp() as-is without any additional ugliness
> on my side that would need to be removed in a few weeks.

This I do _NOT_ understand. Why are you willing to merge a known broken
patch? What is the rush, why can't you wait for all the prerequisites to
land?


* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08 21:36           ` Peter Zijlstra
@ 2019-11-08 21:39             ` David Miller
  2019-11-11  8:14               ` Peter Zijlstra
  2019-11-08 23:05             ` Alexei Starovoitov
  1 sibling, 1 reply; 67+ messages in thread
From: David Miller @ 2019-11-08 21:39 UTC (permalink / raw)
  To: peterz; +Cc: alexei.starovoitov, ast, daniel, x86, netdev, bpf, Kernel-team

From: Peter Zijlstra <peterz@infradead.org>
Date: Fri, 8 Nov 2019 22:36:24 +0100

> The cover letter is not preserved and should therefore
 ...

The cover letter is +ALWAYS+ preserved, we put them in the merge
commit.


* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08 21:36           ` Peter Zijlstra
  2019-11-08 21:39             ` David Miller
@ 2019-11-08 23:05             ` Alexei Starovoitov
  2019-11-10 10:54               ` Thomas Gleixner
  1 sibling, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-08 23:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Alexei Starovoitov, davem, daniel, x86, netdev, bpf, Kernel Team

On Fri, Nov 08, 2019 at 10:36:24PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 08, 2019 at 11:32:41AM -0800, Alexei Starovoitov wrote:
> > On Fri, Nov 8, 2019 at 5:42 AM Alexei Starovoitov <ast@fb.com> wrote:
> > >
> > > On 11/8/19 1:36 AM, Peter Zijlstra wrote:
> > > > On Fri, Nov 08, 2019 at 10:11:56AM +0100, Peter Zijlstra wrote:
> > > >> On Thu, Nov 07, 2019 at 10:40:23PM -0800, Alexei Starovoitov wrote:
> > > >>> Add bpf_arch_text_poke() helper that is used by BPF trampoline logic to patch
> > > >>> nops/calls in kernel text into calls into BPF trampoline and to patch
> > > >>> calls/nops inside BPF programs too.
> > > >>
> > > >> This thing assumes the text is unused, right? That isn't spelled out
> > > >> anywhere. The implementation is very much unsafe vs concurrent execution
> > > >> of the text.
> > > >
> > > > Also, what NOP/CALL instructions will you be hijacking? If you're
> > > > planning on using the fentry nops, then what ensures this and ftrace
> > > > don't trample on one another? Similar for kprobes.
> > > >
> > > > In general, what ensures every instruction only has a single modifier?
> > >
> > > Looks like you didn't bother reading cover letter and missed a month
> 
> I did indeed not. A Changelog should be self sufficient and this one is
> sorely lacking. The cover leter is not preserved and should therefore
> not contain anything of value that is not also covered in the
> Changelogs.
> 
> > > of discussions between my and Steven regarding exactly this topic
> > > though you were directly cc-ed in all threads :(
> 
> I read some of it; it is a sad fact that I cannot read all email in my
> inbox, esp. not if, like in the last week or so, I'm busy hunting a
> regression.
> 
> And what I did remember of the emails I saw left me with the questions
> that were not answered by the changelog.
> 
> > > tldr for kernel fentry nops it will be converted to use
> > > register_ftrace_direct() whenever it's available.
> 
> So why the rush and not wait for that work to complete? It appears to me
> that without due coordination between bpf and ftrace badness could
> happen.
> 
> > > For all other nops, calls, jumps that are inside BPF programs BPF infra
> > > will continue modifying them through this helper.
> > > Daniel's upcoming bpf_tail_call() optimization will use text_poke as well.
> 
> This is probably off topic, but isn't tail-call optimization something
> done at JIT time and therefore not in need of text_poke()?

Not quite. bpf_tail_call() is done via prog_array, which is an indirect jmp, and
it suffers from retpoline. The verifier can see that in a lot of cases the
prog_array is used with a constant index into the array instead of a variable. In
such cases indirect jmps can be optimized into direct jmps. That is
essentially what Daniel's patches are doing, building on top of the
bpf_arch_text_poke() and trampoline that I'm introducing in this set.
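
For illustration, the BPF-side pattern being optimized looks roughly like this
(a sketch, not taken from Daniel's patches; map name, sizes and section names
are made up):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
	__uint(max_entries, 2);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("xdp")
int dispatcher(struct xdp_md *ctx)
{
	/* constant index: the retpolined indirect jump through the prog_array
	 * can be patched into a direct jmp once slot 1 is populated
	 */
	bpf_tail_call(ctx, &jmp_table, 1);
	return XDP_PASS;	/* falls through only if slot 1 is empty */
}

char _license[] SEC("license") = "GPL";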

Another set is being prepared by Bjorn that also builds on top of
bpf_arch_text_poke() and trampoline. It serves the purpose of getting rid of
the indirect call when the driver calls into a BPF program for the first time. We've
looked at your static_call and concluded that it doesn't quite cut it for this use
case.

The third framework is worked on by Martin, who is using the BPF trampoline for
BPF-based TCP extensions. This bit is not related to indirect call/jmp
optimization, but needs the trampoline.

> > I was thinking more about this.
> > Peter,
> > do you mind if we apply your first patch:
> > https://lore.kernel.org/lkml/20191007081944.88332264.2@infradead.org/
> > to both tip and bpf-next trees?
> 
> That would indeed be a much better solution. I'll repost much of that on
> Monday, and then we'll work on getting at the very least that one patch
> in a tip/branch we can share.

Awesome! I can certainly wait till next week. I just don't want to miss the
merge window for the work that is ready. More below.

> > Then I can use text_poke_bp() as-is without any additional ugliness
> > on my side that would need to be removed in a few weeks.
> 
> This I do _NOT_ understand. Why are you willing to merge a known broken
> patch? What is the rush, why can't you wait for all the prerequisites to
> land?

People have deadlines and here I'm not talking about fb deadlines. If it was
only up to me I could have waited until yours and Steven's patches land in
Linus's tree. Then Dave would pick them up after the merge window into net-next
and bpf things would be ready for the next release. Which is in 1.5 + 2 + 8
weeks (assuming 1.5 weeks until merge window, 2 weeks merge window, and 8
weeks next release cycle).
But most of bpf things are ready. I have one more follow up to do for another
feature. The first 4-5 patches of my set will enable Bjorn, Daniel, and
Martin's work. So I'm mainly looking for a way to converge three trees during
the merge window with no conflicts.

Just saw that Steven posted his set. That is great. If you land your first part
of text_poke_bp() next week into tip it will enable us to cherry-pick the first
few patches from tip and continue with the bpf trampoline in net-next. Then during
the merge window tip, Steven's tree and net-next land into Linus's tree. Then I'll
send a small follow-up to switch to Steven's register_ftrace_direct() in the places
that can use it, and the other bits of bpf will keep using your text_poke_bp(),
because it's for the code inside generated bpf progs, various generated
trampolines and such. The conversion of some of the bpf bits to
register_ftrace_direct() can be delayed by a release if really necessary, since
the text_poke_bp() approach will work fine, just not as nicely as a full
integration via ftrace.
Imo it's the best path for 3 trees to converge without delaying things for bpf
folks by a full release. In the end the deadlines are met and a bunch of people
are unblocked and happy. I hope that explains the rush.



* Re: [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types
  2019-11-08  6:40 ` [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types Alexei Starovoitov
  2019-11-08 17:28   ` Song Liu
@ 2019-11-08 23:46   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-08 23:46 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:42 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Make the verifier check that BTF types of function arguments match actual types
> passed into top-level BPF program and into BPF-to-BPF calls. If types match
> such BPF programs and sub-programs will have full support of BPF trampoline. If
> types mismatch the trampoline has to be conservative. It has to save/restore
> all 5 program arguments and assume 64-bit scalars. If FENTRY/FEXIT program is
> attached to this program in the future such FENTRY/FEXIT program will be able
> to follow pointers only via bpf_probe_read_kernel().
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  include/linux/bpf.h          |   8 +++
>  include/linux/bpf_verifier.h |   1 +
>  kernel/bpf/btf.c             | 117 +++++++++++++++++++++++++++++++++++
>  kernel/bpf/syscall.c         |   1 +
>  kernel/bpf/verifier.c        |  18 +++++-
>  5 files changed, 142 insertions(+), 3 deletions(-)
>

[...]


* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs Alexei Starovoitov
  2019-11-08 18:49   ` Song Liu
  2019-11-08 20:17   ` Toke Høiland-Jørgensen
@ 2019-11-10  7:17   ` Andrii Nakryiko
  2019-11-11 23:04     ` Alexei Starovoitov
  2 siblings, 1 reply; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-10  7:17 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
> including their subprograms. This feature allows snooping on input and output
> packets in XDP, TC programs including their return values. In order to do that
> the verifier needs to track types not only of vmlinux, but types of other BPF
> programs as well. The verifier also needs to translate uapi/linux/bpf.h types
> used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
> BPF programs. In some cases LLVM optimizations can remove arguments from BPF
> subprograms without adjusting BTF info that LLVM backend knows. When BTF info
> disagrees with actual types that the verifiers sees the BPF trampoline has to
> fallback to conservative and treat all arguments as u64. The FENTRY/FEXIT
> program can still attach to such subprograms, but won't be able to recognize
> pointer types like 'struct sk_buff *' into won't be able to pass them to
> bpf_skb_output() for dumping to user space.
>
> The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
> to zero the attach_btf_id is one vmlinux BTF type ids. When attach_prog_fd
> points to previously loaded BPF program the attach_btf_id is BTF type id of
> main function or one of its subprograms.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  arch/x86/net/bpf_jit_comp.c |  3 +-
>  include/linux/bpf.h         |  2 +
>  include/linux/btf.h         |  1 +
>  include/uapi/linux/bpf.h    |  1 +
>  kernel/bpf/btf.c            | 58 +++++++++++++++++++---
>  kernel/bpf/core.c           |  2 +
>  kernel/bpf/syscall.c        | 19 +++++--
>  kernel/bpf/verifier.c       | 98 +++++++++++++++++++++++++++++--------
>  kernel/trace/bpf_trace.c    |  2 -
>  9 files changed, 151 insertions(+), 35 deletions(-)
>

[...]

> +
> +static bool btf_translate_to_vmlinux(struct bpf_verifier_log *log,
> +                                    struct btf *btf,
> +                                    const struct btf_type *t,
> +                                    struct bpf_insn_access_aux *info)
> +{
> +       const char *tname = __btf_name_by_offset(btf, t->name_off);
> +       int btf_id;
> +
> +       if (!tname) {
> +               bpf_log(log, "Program's type doesn't have a name\n");
> +               return false;
> +       }
> +       if (strcmp(tname, "__sk_buff") == 0) {

might be a good idea to ensure that t's type is also a struct?

> +               btf_id = btf_resolve_helper_id(log, &bpf_skb_output_proto, 0);

This is kind of ugly and high-maintenance. Have you considered having
something like this, to do this mapping:

struct bpf_ctx_mapping {
    struct sk_buff *__sk_buff;
    struct xdp_buff *xdp_md;
};

So the field name is the name you are trying to match, while the field type is
the actual type you are mapping to. You won't need to find special
function protos (like bpf_skb_output_proto), it will be easy to
extend, and you'll have real vmlinux types automatically captured for you
(you'll just have to initially find bpf_ctx_mapping's btf_id).
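
A rough sketch of how such a mapping struct could be consumed (iteration macro
and helpers as in kernel/bpf/btf.c; bpf_ctx_mapping_id is assumed to be
resolved once at init, and the member type may still need its pointer level
peeled):

static int ctx_name_to_vmlinux_id(struct btf *btf_vmlinux, const char *tname)
{
	const struct btf_type *map = btf_type_by_id(btf_vmlinux, bpf_ctx_mapping_id);
	const struct btf_member *m;
	int i;

	for_each_member(i, map, m) {
		if (!strcmp(btf_name_by_offset(btf_vmlinux, m->name_off), tname))
			return m->type;	/* BTF id of 'struct sk_buff *' etc. */
	}
	return -ENOENT;
}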


> +               if (btf_id < 0)
> +                       return false;
> +               info->btf_id = btf_id;
> +               return true;
> +       }
> +       return false;
> +}
>

[...]

> +               if (tgt_prog && conservative) {
> +                       struct btf_func_model *m = &tr->func.model;
> +
> +                       /* BTF function prototype doesn't match the verifier types.
> +                        * Fall back to 5 u64 args.
> +                        */
> +                       for (i = 0; i < 5; i++)
> +                               m->arg_size[i] = 8;
> +                       m->ret_size = 8;
> +                       m->nr_args = 5;
> +                       prog->aux->attach_func_proto = NULL;
> +               } else {
> +                       ret = btf_distill_func_proto(&env->log, btf, t,
> +                                                    tname, &tr->func.model);

there is nothing preventing another thread from modifying
tr->func.model in parallel, right? Should these modifications be
either locked or at least use WRITE_ONCE, similar to
btf_resolve_helper_id?


> +                       if (ret < 0)
> +                               goto out;
> +               }

[...]


* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08 23:05             ` Alexei Starovoitov
@ 2019-11-10 10:54               ` Thomas Gleixner
  0 siblings, 0 replies; 67+ messages in thread
From: Thomas Gleixner @ 2019-11-10 10:54 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Peter Zijlstra, Alexei Starovoitov, davem, daniel, x86, netdev,
	bpf, Kernel Team, Linus Torvalds

On Fri, 8 Nov 2019, Alexei Starovoitov wrote:
> On Fri, Nov 08, 2019 at 10:36:24PM +0100, Peter Zijlstra wrote:
> > This I do _NOT_ understand. Why are you willing to merge a known broken
> > patch? What is the rush, why can't you wait for all the prerequisites to
> > land?
> 
> People have deadlines and here I'm not talking about fb deadlines. If it was
> only up to me I could have waited until yours and Steven's patches land in
> Linus's tree. Then Dave would pick them up after the merge window into net-next
> and bpf things would be ready for the next release. Which is in 1.5 + 2 + 8
> weeks (assuming 1.5 weeks until merge window, 2 weeks merge window, and 8
> weeks next release cycle).
> But most of bpf things are ready. I have one more follow up to do for another
> feature. The first 4-5 patches of my set will enable Bjorn, Daniel, and
> Martin's work. So I'm mainly looking for a way to converge three trees during
> the merge window with no conflicts.

No. Nobody's deadlines justify anything.

You recently gave me a lecture on how to do proper kernel development just
because I did the right thing, i.e. preventing a bad interaction by making
a Kconfig dependency which does not affect you in any way.

Now, for your own interests, you try to justify something which is
fundamentally worse: merging known-to-be-broken code.

BPF is not special and has to wait for the next merge window if the
prerequisites are not ready in time, as any other patch set does.

Thanks,

	tglx


* Re: [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs to other BPF programs
  2019-11-08  6:40 ` [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs " Alexei Starovoitov
  2019-11-08 18:57   ` Song Liu
@ 2019-11-10 16:56   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-10 16:56 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Extend libbpf api to pass attach_prog_fd into bpf_object__open.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  tools/include/uapi/linux/bpf.h |  1 +
>  tools/lib/bpf/bpf.c            |  9 +++--
>  tools/lib/bpf/bpf.h            |  5 ++-
>  tools/lib/bpf/libbpf.c         | 65 +++++++++++++++++++++++++++++-----
>  tools/lib/bpf/libbpf.h         |  3 +-
>  5 files changed, 69 insertions(+), 14 deletions(-)
>
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 69c200e6e696..4842a134b202 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -425,6 +425,7 @@ union bpf_attr {
>                 __aligned_u64   line_info;      /* line info */
>                 __u32           line_info_cnt;  /* number of bpf_line_info records */
>                 __u32           attach_btf_id;  /* in-kernel BTF type id to attach to */
> +               __u32           attach_prog_fd; /* 0 to attach to vmlinux */
>         };
>
>         struct { /* anonymous struct used by BPF_OBJ_* commands */
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index b3e3e99a0f28..f805787c8efd 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -228,10 +228,14 @@ int bpf_load_program_xattr(const struct bpf_load_program_attr *load_attr,
>         memset(&attr, 0, sizeof(attr));
>         attr.prog_type = load_attr->prog_type;
>         attr.expected_attach_type = load_attr->expected_attach_type;
> -       if (attr.prog_type == BPF_PROG_TYPE_TRACING)
> +       if (attr.prog_type == BPF_PROG_TYPE_TRACING) {
>                 attr.attach_btf_id = load_attr->attach_btf_id;
> -       else
> +               if (load_attr->attach_prog_fd)
> +                       attr.attach_prog_fd = load_attr->attach_prog_fd;

Why the if? If it's zero, attr.attach_prog_fd will stay zero anyway, since
attr was memset to zero above.
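Something like this (untested sketch, same field names as in the patch) reads
a bit simpler:

        if (attr.prog_type == BPF_PROG_TYPE_TRACING) {
                attr.attach_btf_id = load_attr->attach_btf_id;
                /* copying a zero fd is a no-op after the memset */
                attr.attach_prog_fd = load_attr->attach_prog_fd;
        } else {
                attr.prog_ifindex = load_attr->prog_ifindex;
                attr.kern_version = load_attr->kern_version;
        }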

> +       } else {
>                 attr.prog_ifindex = load_attr->prog_ifindex;
> +               attr.kern_version = load_attr->kern_version;
> +       }
>         attr.insn_cnt = (__u32)load_attr->insns_cnt;
>         attr.insns = ptr_to_u64(load_attr->insns);
>         attr.license = ptr_to_u64(load_attr->license);

[...]

>         CHECK_ERR(bpf_object__elf_init(obj), err, out);
>         CHECK_ERR(bpf_object__check_endianness(obj), err, out);
> @@ -3927,10 +3934,12 @@ __bpf_object__open(const char *path, const void *obj_buf, size_t obj_buf_sz,
>                 bpf_program__set_expected_attach_type(prog, attach_type);
>                 if (prog_type == BPF_PROG_TYPE_TRACING) {
>                         err = libbpf_attach_btf_id_by_name(prog->section_name,
> -                                                          attach_type);
> +                                                          attach_type,
> +                                                          attach_prog_fd);

libbpf_find_attach_btf_id seems like a better name at this point

>                         if (err <= 0)
>                                 goto out;
>                         prog->attach_btf_id = err;
> +                       prog->attach_prog_fd = attach_prog_fd;
>                 }
>         }
>

[...]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test
  2019-11-08  6:40 ` [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test Alexei Starovoitov
  2019-11-08 19:03   ` Song Liu
@ 2019-11-10 16:58   ` Andrii Nakryiko
  1 sibling, 0 replies; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-10 16:58 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:42 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> The test_pkt_access.o is used by multiple tests. Fix its section name so that
> program type can be automatically detected by libbpf and make it call other
> subprograms with skb argument.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  .../selftests/bpf/progs/test_pkt_access.c     | 38 ++++++++++++++++++-
>  1 file changed, 36 insertions(+), 2 deletions(-)
>

[...]

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog
  2019-11-08  6:40 ` [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog Alexei Starovoitov
  2019-11-08 19:13   ` Song Liu
@ 2019-11-10 17:04   ` Andrii Nakryiko
  2019-11-11 23:07     ` Alexei Starovoitov
  1 sibling, 1 reply; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-10 17:04 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, x86, Networking, bpf, Kernel Team

On Thu, Nov 7, 2019 at 10:43 PM Alexei Starovoitov <ast@kernel.org> wrote:
>
> Add a test that attaches one FEXIT program to main sched_cls networking program
> and two other FEXIT programs to subprograms. All three tracing programs
> access return values and skb->len of networking program and subprograms.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

Acked-by: Andrii Nakryiko <andriin@fb.com>

>  .../selftests/bpf/prog_tests/fexit_bpf2bpf.c  | 76 ++++++++++++++++
>  .../selftests/bpf/progs/fexit_bpf2bpf.c       | 91 +++++++++++++++++++
>  2 files changed, 167 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
>  create mode 100644 tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
>

[...]

> +SEC("fexit/test_pkt_access_subprog2")
> +int test_subprog2(struct args_subprog2 *ctx)
> +{
> +       struct sk_buff *skb = (void *)ctx->args[0];
> +       __u64 ret;
> +       int len;
> +
> +       bpf_probe_read(&len, sizeof(len),
> +                      __builtin_preserve_access_index(&skb->len));

nit: we have bpf_core_read() for this, but I suspect you may have
wanted __builtin spelled out explicitly
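For completeness, a rough sketch of what the bpf_core_read() variant could
look like (same semantics; assumes bpf_core_read.h is included):

        int len;

        /* expands to a probe read of __builtin_preserve_access_index(&skb->len) */
        bpf_core_read(&len, sizeof(len), &skb->len);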


> +
> +       ret = ctx->ret;
> +       /* bpf_prog_load() loads "test_pkt_access.o" with BPF_F_TEST_RND_HI32
> +        * which randomizes upper 32 bits after BPF_ALU32 insns.
> +        * Hence after 'w0 <<= 1' upper bits of $rax are random.
> +        * That is expected and correct. Trim them.
> +        */
> +       ret = (__u32) ret;
> +       if (len != 74 || ret != 148)
> +               return 0;
> +       test_result_subprog2 = 1;
> +       return 0;
> +}
> +char _license[] SEC("license") = "GPL";
> --
> 2.23.0
>

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-08 21:39             ` David Miller
@ 2019-11-11  8:14               ` Peter Zijlstra
  2019-11-11 10:21                 ` Daniel Borkmann
  2019-11-11 16:10                 ` Jonathan Corbet
  0 siblings, 2 replies; 67+ messages in thread
From: Peter Zijlstra @ 2019-11-11  8:14 UTC (permalink / raw)
  To: David Miller
  Cc: alexei.starovoitov, ast, daniel, x86, netdev, bpf, Kernel-team

On Fri, Nov 08, 2019 at 01:39:24PM -0800, David Miller wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> Date: Fri, 8 Nov 2019 22:36:24 +0100
> 
> > The cover leter is not preserved and should therefore
>  ...
> 
> The cover letter is +ALWAYS+ preserved, we put them in the merge
> commit.

Good to know; is this a netdev special? I've not seen this before.

Is there a convenient git command to find the merge commit for a given
regular commit? Say I've used git-blame to find a troublesome commit,
then how do I find the merge commit for it?

Also, I still think this does not excuse weak individual Changelogs.

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-11  8:14               ` Peter Zijlstra
@ 2019-11-11 10:21                 ` Daniel Borkmann
  2019-11-11 16:10                 ` Jonathan Corbet
  1 sibling, 0 replies; 67+ messages in thread
From: Daniel Borkmann @ 2019-11-11 10:21 UTC (permalink / raw)
  To: Peter Zijlstra, David Miller
  Cc: alexei.starovoitov, ast, x86, netdev, bpf, Kernel-team

On 11/11/19 9:14 AM, Peter Zijlstra wrote:
> On Fri, Nov 08, 2019 at 01:39:24PM -0800, David Miller wrote:
>> From: Peter Zijlstra <peterz@infradead.org>
>> Date: Fri, 8 Nov 2019 22:36:24 +0100
>>
>>> The cover leter is not preserved and should therefore
>>   ...
>>
>> The cover letter is +ALWAYS+ preserved, we put them in the merge
>> commit.
> 
> Good to know; is this a netdev special? I've not seen this before.

I think it might be a netdev special. Given that developers are often used to
this practice on netdev, we've adopted the same for both bpf trees as well,
provided the cover letter contains a useful high-level summary of the whole set.

We've recently changed our workflow a bit after the last maintainers summit and
now reuse Thomas' mb2q [0] in our small collection of scripts [1]. Aside from
other useful features, every commit under bpf/bpf-next now also carries a
'Link:' tag pointing to the https://lore.kernel.org/bpf/ archive, so the cover
letter or discussions can alternatively be found that way.

One downside of the merge commit as cover letter is that it is usually lost (*)
once commits get cherry-picked into stable trees or other downstream,
backport-heavy kernels, so the 'Link:' tag is a convenient way to quickly get
more context or discussion in those cases.

> Is there a convenient git command to find the merge commit for a given
> regular commit? Say I've used git-blame to find a troublesome commit,
> then how do I find the merge commit for it?

Once you have the sha, you could for example retrieve the corresponding
merge commit from the upstream tree (it shows up as the first commit in the
output) via:

   $ git log <sha>..master --ancestry-path --merges --reverse
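For example, to print just that one merge commit, something like this should
work (plain git, nothing netdev-specific about it):

   $ git log --oneline --ancestry-path --merges --reverse <sha>..master | head -1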

> Also, I still think this does not excuse weak individual Changelogs.

Ideally commit messages should be as self-contained as possible so that they
carry all the necessary context w/o having to look up other resources, also
given issue (*) above.

Thanks,
Daniel

   [0] https://git.kernel.org/pub/scm/linux/kernel/git/tglx/quilttools.git/
   [1] https://git.kernel.org/pub/scm/linux/kernel/git/dborkman/pw.git/

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper
  2019-11-11  8:14               ` Peter Zijlstra
  2019-11-11 10:21                 ` Daniel Borkmann
@ 2019-11-11 16:10                 ` Jonathan Corbet
  1 sibling, 0 replies; 67+ messages in thread
From: Jonathan Corbet @ 2019-11-11 16:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: David Miller, alexei.starovoitov, ast, daniel, x86, netdev, bpf,
	Kernel-team

On Mon, 11 Nov 2019 09:14:03 +0100
Peter Zijlstra <peterz@infradead.org> wrote:

> On Fri, Nov 08, 2019 at 01:39:24PM -0800, David Miller wrote:
> > From: Peter Zijlstra <peterz@infradead.org>
> > Date: Fri, 8 Nov 2019 22:36:24 +0100
> >   
> > > The cover leter is not preserved and should therefore  
> >  ...
> > 
> > The cover letter is +ALWAYS+ preserved, we put them in the merge
> > commit.  
> 
> Good to know; is this a netdev special? I've not seen this before.

I read a *lot* of merge commit changelogs; it's not just netdev that does
this.  I try to do it with significant documentation sets as well.  I
agree that changelogs should contain all relevant information, but there
is value in an overview as well.  But then, I make my living in the
overview business...:)

jon

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-10  7:17   ` Andrii Nakryiko
@ 2019-11-11 23:04     ` Alexei Starovoitov
  2019-11-12  4:38       ` Andrii Nakryiko
  0 siblings, 1 reply; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-11 23:04 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, x86,
	Networking, bpf, Kernel Team

On Sat, Nov 09, 2019 at 11:17:37PM -0800, Andrii Nakryiko wrote:
> On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
> >
> > Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
> > including their subprograms. This feature allows snooping on input and output
> > packets in XDP, TC programs including their return values. In order to do that
> > the verifier needs to track types not only of vmlinux, but types of other BPF
> > programs as well. The verifier also needs to translate uapi/linux/bpf.h types
> > used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
> > BPF programs. In some cases LLVM optimizations can remove arguments from BPF
> > subprograms without adjusting BTF info that LLVM backend knows. When BTF info
> > disagrees with the actual types that the verifier sees, the BPF trampoline has to
> > fall back to being conservative and treat all arguments as u64. The FENTRY/FEXIT
> > program can still attach to such subprograms, but won't be able to recognize
> > pointer types like 'struct sk_buff *' and thus won't be able to pass them to
> > bpf_skb_output() for dumping to user space.
> >
> > The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
> > to zero, the attach_btf_id is one of the vmlinux BTF type ids. When attach_prog_fd
> > points to previously loaded BPF program the attach_btf_id is BTF type id of
> > main function or one of its subprograms.
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> > ---
> >  arch/x86/net/bpf_jit_comp.c |  3 +-
> >  include/linux/bpf.h         |  2 +
> >  include/linux/btf.h         |  1 +
> >  include/uapi/linux/bpf.h    |  1 +
> >  kernel/bpf/btf.c            | 58 +++++++++++++++++++---
> >  kernel/bpf/core.c           |  2 +
> >  kernel/bpf/syscall.c        | 19 +++++--
> >  kernel/bpf/verifier.c       | 98 +++++++++++++++++++++++++++++--------
> >  kernel/trace/bpf_trace.c    |  2 -
> >  9 files changed, 151 insertions(+), 35 deletions(-)
> >
> 
> [...]
> 
> > +
> > +static bool btf_translate_to_vmlinux(struct bpf_verifier_log *log,
> > +                                    struct btf *btf,
> > +                                    const struct btf_type *t,
> > +                                    struct bpf_insn_access_aux *info)
> > +{
> > +       const char *tname = __btf_name_by_offset(btf, t->name_off);
> > +       int btf_id;
> > +
> > +       if (!tname) {
> > +               bpf_log(log, "Program's type doesn't have a name\n");
> > +               return false;
> > +       }
> > +       if (strcmp(tname, "__sk_buff") == 0) {
> 
> might be a good idea to ensure that t's type is also a struct?
> 
> > +               btf_id = btf_resolve_helper_id(log, &bpf_skb_output_proto, 0);
> 
> This is kind of ugly and high-maintenance. Have you considered having
> something like this, to do this mapping:
> 
> struct bpf_ctx_mapping {
>     struct sk_buff *__sk_buff;
>     struct xdp_buff *xdp_md;
> };
> 
> So field name is a name you are trying to match, while field type is
> actual type you are mapping to? You won't need to find special
> function protos (like bpf_skb_output_proto), it will be easy to
> extend, you'll have real vmlinux types automatically captured for you
> (you'll just have to initially find bpf_ctx_mapping's btf_id).

I was thinking something along these lines.
The problem with a single struct like the above is that it's centralized,
while the convert_ctx_access callbacks are spread all over the place.
So I'm thinking of adding a macro like this to bpf.h:
+#define BPF_RECORD_CTX_CONVERSION(user_type, kernel_type) \
+       ({typedef kernel_type (*bpf_ctx_convert)(user_type); \
+        (void) (bpf_ctx_convert) (void *) 0;})

and then do
BPF_RECORD_CTX_CONVERSION(struct bpf_xdp_sock, struct xdp_sock);
inside the convert_ctx_access functions (like bpf_xdp_sock_convert_ctx_access).
There will be several typedefs with the 'bpf_ctx_convert' name, and
btf_translate_to_vmlinux() will iterate over them. Speed is not critical here,
but long term we probably need to merge the prog's BTF with vmlinux's BTF, so
that most of the type comparison is done during prog load. That should also
reduce the size of the prog's BTF. Renumbering of the prog's BTF will be
annoying though. Something to consider long term.

> 
> > +               if (btf_id < 0)
> > +                       return false;
> > +               info->btf_id = btf_id;
> > +               return true;
> > +       }
> > +       return false;
> > +}
> >
> 
> [...]
> 
> > +               if (tgt_prog && conservative) {
> > +                       struct btf_func_model *m = &tr->func.model;
> > +
> > +                       /* BTF function prototype doesn't match the verifier types.
> > +                        * Fall back to 5 u64 args.
> > +                        */
> > +                       for (i = 0; i < 5; i++)
> > +                               m->arg_size[i] = 8;
> > +                       m->ret_size = 8;
> > +                       m->nr_args = 5;
> > +                       prog->aux->attach_func_proto = NULL;
> > +               } else {
> > +                       ret = btf_distill_func_proto(&env->log, btf, t,
> > +                                                    tname, &tr->func.model);
> 
> there is nothing preventing some parallel thread to modify
> tr->func.model in parallel, right? Should these modifications be
> either locked or at least WRITE_ONCE, similar to
> btf_resolve_helper_id?

hmm. Right. There is a race with bpf_trampoline_lookup. One thread could have
just created the trampoline and still be doing the distill, while another
thread is trying to use it after getting it from bpf_trampoline_lookup. The
fix choices are not pretty: either add a mutex to check_attach_btf_id(), or
give bpf_trampoline_lookup_or_create() an extra callback that does
btf_distill_func_proto while bpf_trampoline_lookup_or_create is holding
trampoline_mutex, or move most of the check_attach_btf_id() logic into
bpf_trampoline_lookup_or_create().
I tried to keep the trampoline as an abstract concept, but with a callback or
a move the verifier and btf logic will bleed into the trampoline. Hmm.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog
  2019-11-10 17:04   ` Andrii Nakryiko
@ 2019-11-11 23:07     ` Alexei Starovoitov
  0 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-11 23:07 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, x86,
	Networking, bpf, Kernel Team

On Sun, Nov 10, 2019 at 09:04:58AM -0800, Andrii Nakryiko wrote:
> On Thu, Nov 7, 2019 at 10:43 PM Alexei Starovoitov <ast@kernel.org> wrote:
> >
> > Add a test that attaches one FEXIT program to main sched_cls networking program
> > and two other FEXIT programs to subprograms. All three tracing programs
> > access return values and skb->len of networking program and subprograms.
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> > ---
> 
> Acked-by: Andrii Nakryiko <andriin@fb.com>
> 
> >  .../selftests/bpf/prog_tests/fexit_bpf2bpf.c  | 76 ++++++++++++++++
> >  .../selftests/bpf/progs/fexit_bpf2bpf.c       | 91 +++++++++++++++++++
> >  2 files changed, 167 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/fexit_bpf2bpf.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/fexit_bpf2bpf.c
> >
> 
> [...]
> 
> > +SEC("fexit/test_pkt_access_subprog2")
> > +int test_subprog2(struct args_subprog2 *ctx)
> > +{
> > +       struct sk_buff *skb = (void *)ctx->args[0];
> > +       __u64 ret;
> > +       int len;
> > +
> > +       bpf_probe_read(&len, sizeof(len),
> > +                      __builtin_preserve_access_index(&skb->len));
> 
> nit: we have bpf_core_read() for this, but I suspect you may have
> wanted __builtin spelled out explicitly

exactly. it should have been bpf_probe_read_kernel though. will fix.


^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-11 23:04     ` Alexei Starovoitov
@ 2019-11-12  4:38       ` Andrii Nakryiko
  2019-11-12  4:47         ` Alexei Starovoitov
  0 siblings, 1 reply; 67+ messages in thread
From: Andrii Nakryiko @ 2019-11-12  4:38 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, x86,
	Networking, bpf, Kernel Team

On Mon, Nov 11, 2019 at 3:04 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Sat, Nov 09, 2019 at 11:17:37PM -0800, Andrii Nakryiko wrote:
> > On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
> > >
> > > Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
> > > including their subprograms. This feature allows snooping on input and output
> > > packets in XDP, TC programs including their return values. In order to do that
> > > the verifier needs to track types not only of vmlinux, but types of other BPF
> > > programs as well. The verifier also needs to translate uapi/linux/bpf.h types
> > > used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
> > > BPF programs. In some cases LLVM optimizations can remove arguments from BPF
> > > subprograms without adjusting BTF info that LLVM backend knows. When BTF info
> > > disagrees with the actual types that the verifier sees, the BPF trampoline has to
> > > fall back to being conservative and treat all arguments as u64. The FENTRY/FEXIT
> > > program can still attach to such subprograms, but won't be able to recognize
> > > pointer types like 'struct sk_buff *' and thus won't be able to pass them to
> > > bpf_skb_output() for dumping to user space.
> > >
> > > The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
> > > to zero, the attach_btf_id is one of the vmlinux BTF type ids. When attach_prog_fd
> > > points to previously loaded BPF program the attach_btf_id is BTF type id of
> > > main function or one of its subprograms.
> > >
> > > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> > > ---
> > >  arch/x86/net/bpf_jit_comp.c |  3 +-
> > >  include/linux/bpf.h         |  2 +
> > >  include/linux/btf.h         |  1 +
> > >  include/uapi/linux/bpf.h    |  1 +
> > >  kernel/bpf/btf.c            | 58 +++++++++++++++++++---
> > >  kernel/bpf/core.c           |  2 +
> > >  kernel/bpf/syscall.c        | 19 +++++--
> > >  kernel/bpf/verifier.c       | 98 +++++++++++++++++++++++++++++--------
> > >  kernel/trace/bpf_trace.c    |  2 -
> > >  9 files changed, 151 insertions(+), 35 deletions(-)
> > >
> >
> > [...]
> >
> > > +
> > > +static bool btf_translate_to_vmlinux(struct bpf_verifier_log *log,
> > > +                                    struct btf *btf,
> > > +                                    const struct btf_type *t,
> > > +                                    struct bpf_insn_access_aux *info)
> > > +{
> > > +       const char *tname = __btf_name_by_offset(btf, t->name_off);
> > > +       int btf_id;
> > > +
> > > +       if (!tname) {
> > > +               bpf_log(log, "Program's type doesn't have a name\n");
> > > +               return false;
> > > +       }
> > > +       if (strcmp(tname, "__sk_buff") == 0) {
> >
> > might be a good idea to ensure that t's type is also a struct?
> >
> > > +               btf_id = btf_resolve_helper_id(log, &bpf_skb_output_proto, 0);
> >
> > This is kind of ugly and high-maintenance. Have you considered having
> > something like this, to do this mapping:
> >
> > struct bpf_ctx_mapping {
> >     struct sk_buff *__sk_buff;
> >     struct xdp_buff *xdp_md;
> > };
> >
> > So field name is a name you are trying to match, while field type is
> > actual type you are mapping to? You won't need to find special
> > function protos (like bpf_skb_output_proto), it will be easy to
> > extend, you'll have real vmlinux types automatically captured for you
> > (you'll just have to initially find bpf_ctx_mapping's btf_id).
>
> I was thinking something along these lines.
> The problem with single struct like above is that it's centralized.
> convert_ctx_access callbacks are all over the place.
> So I'm thinking to add macro like this to bpf.h
> +#define BPF_RECORD_CTX_CONVERSION(user_type, kernel_type) \
> +       ({typedef kernel_type (*bpf_ctx_convert)(user_type); \
> +        (void) (bpf_ctx_convert) (void *) 0;})
>
> and then do
> BPF_RECORD_CTX_CONVERSION(struct bpf_xdp_sock, struct xdp_sock);
> inside convert_ctx_access functions (like bpf_xdp_sock_convert_ctx_access).
> There will be several typedefs with 'bpf_ctx_convert' name. The
> btf_translate_to_vmlinux() will iterate over them. Speed is not critical here,

I guess that works as well. Please leave a comment explaining the idea
behind this distributed mapping :)

> but long term we probably need to merge prog's BTF with vmlinux's BTF, so most
> of the type comparison is done during prog load. It probably should reduce the
> size of prog's BTF too. Renumbering of prog's BTF will be annoying though.
> Something to consider long term.
>
> >
> > > +               if (btf_id < 0)
> > > +                       return false;
> > > +               info->btf_id = btf_id;
> > > +               return true;
> > > +       }
> > > +       return false;
> > > +}
> > >
> >
> > [...]
> >
> > > +               if (tgt_prog && conservative) {
> > > +                       struct btf_func_model *m = &tr->func.model;
> > > +
> > > +                       /* BTF function prototype doesn't match the verifier types.
> > > +                        * Fall back to 5 u64 args.
> > > +                        */
> > > +                       for (i = 0; i < 5; i++)
> > > +                               m->arg_size[i] = 8;
> > > +                       m->ret_size = 8;
> > > +                       m->nr_args = 5;
> > > +                       prog->aux->attach_func_proto = NULL;
> > > +               } else {
> > > +                       ret = btf_distill_func_proto(&env->log, btf, t,
> > > +                                                    tname, &tr->func.model);
> >
> > there is nothing preventing some parallel thread to modify
> > tr->func.model in parallel, right? Should these modifications be
> > either locked or at least WRITE_ONCE, similar to
> > btf_resolve_helper_id?
>
> hmm. Right. There is a race with bpf_trampoline_lookup. One thread could have
> just created the trampoline and still doing distill, while another thread is
> trying to use it after getting it from bpf_trampoline_lookup. The fix choices
> are not pretty. Either to add a mutex to check_attach_btf_id() or do
> bpf_trampoline_lookup_or_create() with extra callback that does
> btf_distill_func_proto while bpf_trampoline_lookup_or_create is holding
> trampoline_mutex or move most of the check_attach_btf_id() logic into
> bpf_trampoline_lookup_or_create().
> I tried to keep trampoline as abstract concept, but with callback or move
> the verifier and btf logic will bleed into trampoline. Hmm.

yeah, that sounds too intrusive. I'd change btf_distill_func_proto to
accept a struct btf_func_model **m, allocate the model dynamically, and then
compare_exchange the final constructed model pointer.

Similarly for the "fallback to conservative" case.
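Roughly like this (just a sketch of the idea, assuming tr->func.model is
turned into a pointer; not meant as the actual patch):

        struct btf_func_model *m, *old;

        m = kzalloc(sizeof(*m), GFP_KERNEL);
        if (!m)
                return -ENOMEM;
        ret = btf_distill_func_proto(&env->log, btf, t, tname, m);
        if (ret < 0) {
                kfree(m);
                return ret;
        }
        /* publish only the fully constructed model */
        old = cmpxchg(&tr->func.model, NULL, m);
        if (old)
                kfree(m);       /* another thread won the race */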

>

^ permalink raw reply	[flat|nested] 67+ messages in thread

* Re: [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs
  2019-11-12  4:38       ` Andrii Nakryiko
@ 2019-11-12  4:47         ` Alexei Starovoitov
  0 siblings, 0 replies; 67+ messages in thread
From: Alexei Starovoitov @ 2019-11-12  4:47 UTC (permalink / raw)
  To: Andrii Nakryiko, Alexei Starovoitov
  Cc: Alexei Starovoitov, David S. Miller, Daniel Borkmann, x86,
	Networking, bpf, Kernel Team

On 11/11/19 8:38 PM, Andrii Nakryiko wrote:
> On Mon, Nov 11, 2019 at 3:04 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
>>
>> On Sat, Nov 09, 2019 at 11:17:37PM -0800, Andrii Nakryiko wrote:
>>> On Thu, Nov 7, 2019 at 10:41 PM Alexei Starovoitov <ast@kernel.org> wrote:
>>>>
>>>> Allow FENTRY/FEXIT BPF programs to attach to other BPF programs of any type
>>>> including their subprograms. This feature allows snooping on input and output
>>>> packets in XDP, TC programs including their return values. In order to do that
>>>> the verifier needs to track types not only of vmlinux, but types of other BPF
>>>> programs as well. The verifier also needs to translate uapi/linux/bpf.h types
>>>> used by networking programs into kernel internal BTF types used by FENTRY/FEXIT
>>>> BPF programs. In some cases LLVM optimizations can remove arguments from BPF
>>>> subprograms without adjusting BTF info that LLVM backend knows. When BTF info
>>>> disagrees with the actual types that the verifier sees, the BPF trampoline has to
>>>> fall back to being conservative and treat all arguments as u64. The FENTRY/FEXIT
>>>> program can still attach to such subprograms, but won't be able to recognize
>>>> pointer types like 'struct sk_buff *' and thus won't be able to pass them to
>>>> bpf_skb_output() for dumping to user space.
>>>>
>>>> The BPF_PROG_LOAD command is extended with attach_prog_fd field. When it's set
>>>> to zero, the attach_btf_id is one of the vmlinux BTF type ids. When attach_prog_fd
>>>> points to previously loaded BPF program the attach_btf_id is BTF type id of
>>>> main function or one of its subprograms.
>>>>
>>>> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>>>> ---
>>>>   arch/x86/net/bpf_jit_comp.c |  3 +-
>>>>   include/linux/bpf.h         |  2 +
>>>>   include/linux/btf.h         |  1 +
>>>>   include/uapi/linux/bpf.h    |  1 +
>>>>   kernel/bpf/btf.c            | 58 +++++++++++++++++++---
>>>>   kernel/bpf/core.c           |  2 +
>>>>   kernel/bpf/syscall.c        | 19 +++++--
>>>>   kernel/bpf/verifier.c       | 98 +++++++++++++++++++++++++++++--------
>>>>   kernel/trace/bpf_trace.c    |  2 -
>>>>   9 files changed, 151 insertions(+), 35 deletions(-)
>>>>
>>>
>>> [...]
>>>
>>>> +
>>>> +static bool btf_translate_to_vmlinux(struct bpf_verifier_log *log,
>>>> +                                    struct btf *btf,
>>>> +                                    const struct btf_type *t,
>>>> +                                    struct bpf_insn_access_aux *info)
>>>> +{
>>>> +       const char *tname = __btf_name_by_offset(btf, t->name_off);
>>>> +       int btf_id;
>>>> +
>>>> +       if (!tname) {
>>>> +               bpf_log(log, "Program's type doesn't have a name\n");
>>>> +               return false;
>>>> +       }
>>>> +       if (strcmp(tname, "__sk_buff") == 0) {
>>>
>>> might be a good idea to ensure that t's type is also a struct?
>>>
>>>> +               btf_id = btf_resolve_helper_id(log, &bpf_skb_output_proto, 0);
>>>
>>> This is kind of ugly and high-maintenance. Have you considered having
>>> something like this, to do this mapping:
>>>
>>> struct bpf_ctx_mapping {
>>>      struct sk_buff *__sk_buff;
>>>      struct xdp_buff *xdp_md;
>>> };
>>>
>>> So field name is a name you are trying to match, while field type is
>>> actual type you are mapping to? You won't need to find special
>>> function protos (like bpf_skb_output_proto), it will be easy to
>>> extend, you'll have real vmlinux types automatically captured for you
>>> (you'll just have to initially find bpf_ctx_mapping's btf_id).
>>
>> I was thinking something along these lines.
>> The problem with single struct like above is that it's centralized.
>> convert_ctx_access callbacks are all over the place.
>> So I'm thinking to add macro like this to bpf.h
>> +#define BPF_RECORD_CTX_CONVERSION(user_type, kernel_type) \
>> +       ({typedef kernel_type (*bpf_ctx_convert)(user_type); \
>> +        (void) (bpf_ctx_convert) (void *) 0;})
>>
>> and then do
>> BPF_RECORD_CTX_CONVERSION(struct bpf_xdp_sock, struct xdp_sock);
>> inside convert_ctx_access functions (like bpf_xdp_sock_convert_ctx_access).
>> There will be several typedefs with 'bpf_ctx_convert' name. The
>> btf_translate_to_vmlinux() will iterate over them. Speed is not critical here,
> 
> I guess that works as well. Please leave a comment explaining the idea
> behind this distributed mapping :)
> 
>> but long term we probably need to merge prog's BTF with vmlinux's BTF, so most
>> of the type comparison is done during prog load. It probably should reduce the
>> size of prog's BTF too. Renumbering of prog's BTF will be annoying though.
>> Something to consider long term.
>>
>>>
>>>> +               if (btf_id < 0)
>>>> +                       return false;
>>>> +               info->btf_id = btf_id;
>>>> +               return true;
>>>> +       }
>>>> +       return false;
>>>> +}
>>>>
>>>
>>> [...]
>>>
>>>> +               if (tgt_prog && conservative) {
>>>> +                       struct btf_func_model *m = &tr->func.model;
>>>> +
>>>> +                       /* BTF function prototype doesn't match the verifier types.
>>>> +                        * Fall back to 5 u64 args.
>>>> +                        */
>>>> +                       for (i = 0; i < 5; i++)
>>>> +                               m->arg_size[i] = 8;
>>>> +                       m->ret_size = 8;
>>>> +                       m->nr_args = 5;
>>>> +                       prog->aux->attach_func_proto = NULL;
>>>> +               } else {
>>>> +                       ret = btf_distill_func_proto(&env->log, btf, t,
>>>> +                                                    tname, &tr->func.model);
>>>
>>> there is nothing preventing some parallel thread to modify
>>> tr->func.model in parallel, right? Should these modifications be
>>> either locked or at least WRITE_ONCE, similar to
>>> btf_resolve_helper_id?
>>
>> hmm. Right. There is a race with bpf_trampoline_lookup. One thread could have
>> just created the trampoline and still doing distill, while another thread is
>> trying to use it after getting it from bpf_trampoline_lookup. The fix choices
>> are not pretty. Either to add a mutex to check_attach_btf_id() or do
>> bpf_trampoline_lookup_or_create() with extra callback that does
>> btf_distill_func_proto while bpf_trampoline_lookup_or_create is holding
>> trampoline_mutex or move most of the check_attach_btf_id() logic into
>> bpf_trampoline_lookup_or_create().
>> I tried to keep trampoline as abstract concept, but with callback or move
>> the verifier and btf logic will bleed into trampoline. Hmm.
> 
> yeah, that sounds too intrusive. I'd change btf_distill_func_proto to
> accept struct btf_func_model **m, allocate model dynamically, and then
> compare_exchange the final constructed model pointer.

cmpxchg is too ugly and also doesn't cover other fields that may need
serialized access in the future. I went with the simpler model of an
additional mutex per trampoline. It also helped to avoid a global mutex
for link/unlink.
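Fwiw, roughly (a sketch of the idea only, not the actual diff):

        /* tr->mutex added to struct bpf_trampoline; taken around model setup
         * as well as around prog link/unlink on that trampoline
         */
        mutex_lock(&tr->mutex);
        ret = btf_distill_func_proto(&env->log, btf, t, tname, &tr->func.model);
        mutex_unlock(&tr->mutex);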

^ permalink raw reply	[flat|nested] 67+ messages in thread

end of thread, other threads:[~2019-11-12  4:48 UTC | newest]

Thread overview: 67+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-08  6:40 [PATCH v3 bpf-next 00/18] Introduce BPF trampoline Alexei Starovoitov
2019-11-08  6:40 ` [PATCH v3 bpf-next 01/18] bpf: refactor x86 JIT into helpers Alexei Starovoitov
2019-11-08 19:27   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 02/18] bpf: Add bpf_arch_text_poke() helper Alexei Starovoitov
2019-11-08  6:56   ` Song Liu
2019-11-08  8:23   ` Björn Töpel
2019-11-08 14:09     ` Alexei Starovoitov
2019-11-08  9:11   ` Peter Zijlstra
2019-11-08  9:36     ` Peter Zijlstra
2019-11-08 13:41       ` Alexei Starovoitov
2019-11-08 19:32         ` Alexei Starovoitov
2019-11-08 21:36           ` Peter Zijlstra
2019-11-08 21:39             ` David Miller
2019-11-11  8:14               ` Peter Zijlstra
2019-11-11 10:21                 ` Daniel Borkmann
2019-11-11 16:10                 ` Jonathan Corbet
2019-11-08 23:05             ` Alexei Starovoitov
2019-11-10 10:54               ` Thomas Gleixner
2019-11-08  6:40 ` [PATCH v3 bpf-next 03/18] bpf: Introduce BPF trampoline Alexei Starovoitov
2019-11-08  7:04   ` Song Liu
2019-11-08  6:40 ` [PATCH v3 bpf-next 04/18] libbpf: Introduce btf__find_by_name_kind() Alexei Starovoitov
2019-11-08  7:05   ` Song Liu
2019-11-08 19:21   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 05/18] libbpf: Add support to attach to fentry/fexit tracing progs Alexei Starovoitov
2019-11-08  7:12   ` Song Liu
2019-11-08 19:44   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 06/18] selftest/bpf: Simple test for fentry/fexit Alexei Starovoitov
2019-11-08  6:40 ` [PATCH v3 bpf-next 07/18] bpf: Add kernel test functions for fentry testing Alexei Starovoitov
2019-11-08  6:40 ` [PATCH v3 bpf-next 08/18] selftests/bpf: Add test for BPF trampoline Alexei Starovoitov
2019-11-08  6:40 ` [PATCH v3 bpf-next 09/18] selftests/bpf: Add fexit tests " Alexei Starovoitov
2019-11-08  6:40 ` [PATCH v3 bpf-next 10/18] selftests/bpf: Add combined fentry/fexit test Alexei Starovoitov
2019-11-08  7:14   ` Song Liu
2019-11-08  6:40 ` [PATCH v3 bpf-next 11/18] selftests/bpf: Add stress test for maximum number of progs Alexei Starovoitov
2019-11-08  7:24   ` Song Liu
2019-11-08  6:40 ` [PATCH v3 bpf-next 12/18] bpf: Reserve space for BPF trampoline in BPF programs Alexei Starovoitov
2019-11-08  7:25   ` Song Liu
2019-11-08  6:40 ` [PATCH v3 bpf-next 13/18] bpf: Fix race in btf_resolve_helper_id() Alexei Starovoitov
2019-11-08  7:32   ` Song Liu
2019-11-08 19:58   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 14/18] bpf: Compare BTF types of functions arguments with actual types Alexei Starovoitov
2019-11-08 17:28   ` Song Liu
2019-11-08 17:32     ` Song Liu
2019-11-08 17:57       ` Alexei Starovoitov
2019-11-08 17:59         ` Song Liu
2019-11-08 23:46   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 15/18] bpf: Support attaching tracing BPF program to other BPF programs Alexei Starovoitov
2019-11-08 18:49   ` Song Liu
2019-11-08 18:59     ` Alexei Starovoitov
2019-11-08 20:17   ` Toke Høiland-Jørgensen
2019-11-08 21:14     ` Alexei Starovoitov
2019-11-08 21:32       ` Toke Høiland-Jørgensen
2019-11-10  7:17   ` Andrii Nakryiko
2019-11-11 23:04     ` Alexei Starovoitov
2019-11-12  4:38       ` Andrii Nakryiko
2019-11-12  4:47         ` Alexei Starovoitov
2019-11-08  6:40 ` [PATCH v3 bpf-next 16/18] libbpf: Add support for attaching BPF programs " Alexei Starovoitov
2019-11-08 18:57   ` Song Liu
2019-11-08 19:13     ` Alexei Starovoitov
2019-11-08 19:14       ` Song Liu
2019-11-10 16:56   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 17/18] selftests/bpf: Extend test_pkt_access test Alexei Starovoitov
2019-11-08 19:03   ` Song Liu
2019-11-10 16:58   ` Andrii Nakryiko
2019-11-08  6:40 ` [PATCH v3 bpf-next 18/18] selftests/bpf: Add a test for attaching BPF prog to another BPF prog and subprog Alexei Starovoitov
2019-11-08 19:13   ` Song Liu
2019-11-10 17:04   ` Andrii Nakryiko
2019-11-11 23:07     ` Alexei Starovoitov
