All of lore.kernel.org
* [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
@ 2021-09-17 21:57 Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty Alexei Starovoitov
                   ` (10 more replies)
  0 siblings, 11 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Hi All,

This is a very early RFC that introduces CO-RE support in the kernel.
There are several reasons to add such support:
1. It's a step toward signed BPF programs.
2. It allows Go-like languages that struggle to adopt libbpf
   to take advantage of CO-RE powers.
3. Currently the field accessed by an 'ldx [R1 + 10]' insn is recognized
   by the verifier purely based on the +10 offset. If R1 points to a union,
   the verifier picks one of the fields at this offset.
   With CO-RE the kernel can disambiguate the field access.

This set wires all relevant pieces and passes two selftests with CO-RE
in the kernel.

The main goal of this RFC is to get feedback on patch 3.
It passes a CO-RE relocation into the kernel via the bpf_core_apply_relo()
helper, which is called by the loader program.
It works, but there is no clean way to add error reporting here.
So I'm thinking that a better approach would be to pass an array
of 'struct bpf_core_relo_desc' into the PROG_LOAD command, similar to
how func_info and line_info are passed.
Such an approach would also allow for use case 3 above (which
the current approach in patch 3 doesn't support).

Major TODOs:
- rename kernel btf_*() helpers to match libbpf btf_*() helpers
  to avoid duplicating equivalent helpers in patch 1.
- bpf_core_match_member() in relo_core.c is recursive.
  Limit its recursion or refactor it.
- implement bpf_core_types_are_compat() (currently a stub).
- decide whether bpf_core_find_cands() needs a hash table (like libbpf uses)
  or not.

Alexei Starovoitov (10):
  bpf: Prepare relo_core.c for kernel duty.
  bpf: Define enum bpf_core_relo_kind as uapi.
  bpf: Add proto of bpf_core_apply_relo()
  bpf: Add bpf_core_add_cands() and wire it into
    bpf_core_apply_relo_insn().
  libbpf: Use CO-RE in the kernel in light skeleton.
  libbpf: Make gen_loader data aligned.
  libbpf: Support init of inner maps in light skeleton.
  selftests/bpf: Convert kfunc test with CO-RE to lskel.
  selftests/bpf: Improve inner_map test coverage.
  selftests/bpf: Convert map_ptr_kern test to use light skeleton.

 include/linux/bpf.h                           |   1 +
 include/linux/btf.h                           |  89 ++++++++
 include/uapi/linux/bpf.h                      |  33 +++
 kernel/bpf/Makefile                           |   4 +
 kernel/bpf/btf.c                              | 193 ++++++++++++++++++
 kernel/bpf/syscall.c                          |   2 +
 tools/include/uapi/linux/bpf.h                |  33 +++
 tools/lib/bpf/bpf_gen_internal.h              |  13 ++
 tools/lib/bpf/gen_loader.c                    |  85 +++++++-
 tools/lib/bpf/libbpf.c                        |  35 +++-
 tools/lib/bpf/relo_core.c                     | 156 ++++++++++----
 tools/lib/bpf/relo_core.h                     |  18 --
 tools/testing/selftests/bpf/Makefile          |   2 +-
 .../selftests/bpf/prog_tests/kfunc_call.c     |   4 +-
 .../selftests/bpf/prog_tests/map_ptr.c        |   8 +-
 .../selftests/bpf/progs/map_ptr_kern.c        |  16 +-
 16 files changed, 611 insertions(+), 81 deletions(-)

-- 
2.30.2


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-21 21:25   ` Andrii Nakryiko
  2021-09-28 14:45   ` Matteo Croce
  2021-09-17 21:57 ` [PATCH RFC bpf-next 02/10] bpf: Define enum bpf_core_relo_kind as uapi Alexei Starovoitov
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Make relo_core.c compile both as part of the kernel and as part of libbpf.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/btf.h       | 89 +++++++++++++++++++++++++++++++++++++++
 kernel/bpf/Makefile       |  4 ++
 kernel/bpf/btf.c          | 26 ++++++++++++
 tools/lib/bpf/relo_core.c | 72 ++++++++++++++++++++++++++++++-
 4 files changed, 189 insertions(+), 2 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index 214fde93214b..bc42ab20d549 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -143,6 +143,53 @@ static inline bool btf_type_is_enum(const struct btf_type *t)
 	return BTF_INFO_KIND(t->info) == BTF_KIND_ENUM;
 }
 
+static inline bool str_is_empty(const char *s)
+{
+	return !s || !s[0];
+}
+
+static inline u16 btf_kind(const struct btf_type *t)
+{
+	return BTF_INFO_KIND(t->info);
+}
+
+static inline bool btf_is_enum(const struct btf_type *t)
+{
+	return btf_kind(t) == BTF_KIND_ENUM;
+}
+
+static inline bool btf_is_composite(const struct btf_type *t)
+{
+	u16 kind = btf_kind(t);
+
+	return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
+}
+
+static inline bool btf_is_array(const struct btf_type *t)
+{
+	return btf_kind(t) == BTF_KIND_ARRAY;
+}
+
+static inline bool btf_is_int(const struct btf_type *t)
+{
+	return btf_kind(t) == BTF_KIND_INT;
+}
+
+static inline bool btf_is_ptr(const struct btf_type *t)
+{
+	return btf_kind(t) == BTF_KIND_PTR;
+}
+
+static inline u8 btf_int_offset(const struct btf_type *t)
+{
+	return BTF_INT_OFFSET(*(u32 *)(t + 1));
+}
+
+static inline u8 btf_int_encoding(const struct btf_type *t)
+{
+	return BTF_INT_ENCODING(*(u32 *)(t + 1));
+}
+
 static inline bool btf_type_is_scalar(const struct btf_type *t)
 {
 	return btf_type_is_int(t) || btf_type_is_enum(t);
@@ -183,6 +230,11 @@ static inline u16 btf_type_vlen(const struct btf_type *t)
 	return BTF_INFO_VLEN(t->info);
 }
 
+static inline u16 btf_vlen(const struct btf_type *t)
+{
+	return btf_type_vlen(t);
+}
+
 static inline u16 btf_func_linkage(const struct btf_type *t)
 {
 	return BTF_INFO_VLEN(t->info);
@@ -193,6 +245,27 @@ static inline bool btf_type_kflag(const struct btf_type *t)
 	return BTF_INFO_KFLAG(t->info);
 }
 
+static inline struct btf_member *btf_members(const struct btf_type *t)
+{
+	return (struct btf_member *)(t + 1);
+}
+#ifdef RELO_CORE
+static inline u32 btf_member_bit_offset(const struct btf_type *t, u32 member_idx)
+{
+	const struct btf_member *m = btf_members(t) + member_idx;
+	bool kflag = btf_type_kflag(t);
+
+	return kflag ? BTF_MEMBER_BIT_OFFSET(m->offset) : m->offset;
+}
+
+static inline u32 btf_member_bitfield_size(const struct btf_type *t, u32 member_idx)
+{
+	const struct btf_member *m = btf_members(t) + member_idx;
+	bool kflag = btf_type_kflag(t);
+
+	return kflag ? BTF_MEMBER_BITFIELD_SIZE(m->offset) : 0;
+}
+#else
 static inline u32 btf_member_bit_offset(const struct btf_type *struct_type,
 					const struct btf_member *member)
 {
@@ -206,12 +279,24 @@ static inline u32 btf_member_bitfield_size(const struct btf_type *struct_type,
 	return btf_type_kflag(struct_type) ? BTF_MEMBER_BITFIELD_SIZE(member->offset)
 					   : 0;
 }
+#endif
 
 static inline const struct btf_member *btf_type_member(const struct btf_type *t)
 {
 	return (const struct btf_member *)(t + 1);
 }
 
+
+static inline struct btf_array *btf_array(const struct btf_type *t)
+{
+	return (struct btf_array *)(t + 1);
+}
+
+static inline struct btf_enum *btf_enum(const struct btf_type *t)
+{
+	return (struct btf_enum *)(t + 1);
+}
+
 static inline const struct btf_var_secinfo *btf_type_var_secinfo(
 		const struct btf_type *t)
 {
@@ -222,6 +307,10 @@ static inline const struct btf_var_secinfo *btf_type_var_secinfo(
 struct bpf_prog;
 
 const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
+static inline const struct btf_type *btf__type_by_id(const struct btf *btf, u32 type_id)
+{
+	return btf_type_by_id(btf, type_id);
+}
 const char *btf_name_by_offset(const struct btf *btf, u32 offset);
 struct btf *btf_parse_vmlinux(void);
 struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 7f33098ca63f..3d5370c876b5 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -36,3 +36,7 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
 obj-${CONFIG_BPF_LSM} += bpf_lsm.o
 endif
 obj-$(CONFIG_BPF_PRELOAD) += preload/
+
+obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
+$(obj)/relo_core.o: $(srctree)/tools/lib/bpf/relo_core.c FORCE
+	$(call if_changed_rule,cc_o_c)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c3d605b22473..fa2c88f6ac4a 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6343,3 +6343,29 @@ const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
 };
 
 BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)
+
+int bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
+			      const struct btf *targ_btf, __u32 targ_id)
+{
+	return -EOPNOTSUPP;
+}
+
+static bool bpf_core_is_flavor_sep(const char *s)
+{
+	/* check X___Y name pattern, where X and Y are not underscores */
+	return s[0] != '_' &&				      /* X */
+	       s[1] == '_' && s[2] == '_' && s[3] == '_' &&   /* ___ */
+	       s[4] != '_';				      /* Y */
+}
+
+size_t bpf_core_essential_name_len(const char *name)
+{
+	size_t n = strlen(name);
+	int i;
+
+	for (i = n - 5; i >= 0; i--) {
+		if (bpf_core_is_flavor_sep(name + i))
+			return i + 1;
+	}
+	return n;
+}
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index 4016ed492d0c..d30c315f8fd4 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -1,6 +1,73 @@
 // SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
 /* Copyright (c) 2019 Facebook */
 
+#ifdef __KERNEL__
+#define RELO_CORE
+#include <linux/bpf.h>
+#include <linux/btf.h>
+#include <linux/string.h>
+#include "relo_core.h"
+
+static const char *btf_kind_str(const struct btf_type *t)
+{
+	return btf_type_str(t);
+}
+
+static bool is_ldimm64_insn(struct bpf_insn *insn)
+{
+	return insn->code == (BPF_LD | BPF_IMM | BPF_DW);
+}
+
+static const struct btf_type *
+skip_mods_and_typedefs(const struct btf *btf, u32 id, u32 *res_id)
+{
+	return btf_type_skip_modifiers(btf, id, res_id);
+}
+
+static const char *btf__name_by_offset(const struct btf *btf, u32 offset)
+{
+	return btf_name_by_offset(btf, offset);
+}
+
+static s64 btf__resolve_size(const struct btf *btf, u32 type_id)
+{
+	const struct btf_type *t;
+	int size;
+
+	t = btf_type_by_id(btf, type_id);
+	t = btf_resolve_size(btf, t, &size);
+	if (IS_ERR(t))
+		return PTR_ERR(t);
+	return size;
+}
+
+enum libbpf_print_level {
+        LIBBPF_WARN,
+        LIBBPF_INFO,
+        LIBBPF_DEBUG,
+};
+__attribute__((format(printf, 2, 3)))
+void libbpf_print(enum libbpf_print_level level,
+		  const char *format, ...)
+{
+	va_list args;
+
+	va_start(args, format);
+	vprintk(format, args);
+	va_end(args);
+}
+#define __pr(level, fmt, ...)	\
+do {				\
+	libbpf_print(level, "libbpf: " fmt, ##__VA_ARGS__);	\
+} while (0)
+
+#undef pr_warn
+#undef pr_info
+#undef pr_debug
+#define pr_warn(fmt, ...)	__pr(LIBBPF_WARN, fmt, ##__VA_ARGS__)
+#define pr_info(fmt, ...)	__pr(LIBBPF_INFO, fmt, ##__VA_ARGS__)
+#define pr_debug(fmt, ...)	__pr(LIBBPF_DEBUG, fmt, ##__VA_ARGS__)
+#else
 #include <stdio.h>
 #include <string.h>
 #include <errno.h>
@@ -12,8 +79,9 @@
 #include "btf.h"
 #include "str_error.h"
 #include "libbpf_internal.h"
+#endif
 
-#define BPF_CORE_SPEC_MAX_LEN 64
+#define BPF_CORE_SPEC_MAX_LEN 32
 
 /* represents BPF CO-RE field or array element accessor */
 struct bpf_core_accessor {
@@ -662,7 +730,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 			*validate = true; /* signedness is never ambiguous */
 		break;
 	case BPF_FIELD_LSHIFT_U64:
-#if __BYTE_ORDER == __LITTLE_ENDIAN
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
 		*val = 64 - (bit_off + bit_sz - byte_off  * 8);
 #else
 		*val = (8 - byte_sz) * 8 + (bit_off - byte_off * 8);
-- 
2.30.2



* [PATCH RFC bpf-next 02/10] bpf: Define enum bpf_core_relo_kind as uapi.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-21 21:27   ` Andrii Nakryiko
  2021-09-17 21:57 ` [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo() Alexei Starovoitov
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

enum bpf_core_relo_kind is generated by llvm and processed by libbpf.
It's a de facto uapi.
With CO-RE in the kernel the bpf_core_relo_kind values become uapi de jure.
Also rename them with a BPF_CORE_ prefix to distinguish them from the
conflicting names in bpf_core_read.h: the enums bpf_field_info_kind,
bpf_type_id_kind, bpf_type_info_kind and bpf_enum_value_kind there pass
different values from the bpf program into llvm.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/uapi/linux/bpf.h       | 19 ++++++++
 tools/include/uapi/linux/bpf.h | 19 ++++++++
 tools/lib/bpf/libbpf.c         |  2 +-
 tools/lib/bpf/relo_core.c      | 84 +++++++++++++++++-----------------
 tools/lib/bpf/relo_core.h      | 18 --------
 5 files changed, 81 insertions(+), 61 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 6fc59d61937a..8fb61f22b72c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -6294,4 +6294,23 @@ enum {
 	BTF_F_ZERO	=	(1ULL << 3),
 };
 
+/* bpf_core_relo_kind encodes which aspect of captured field/type/enum value
+ * has to be adjusted by relocations. It is emitted by llvm and passed to
+ * libbpf and later to the kernel.
+ */
+enum bpf_core_relo_kind {
+	BPF_CORE_FIELD_BYTE_OFFSET = 0,      /* field byte offset */
+	BPF_CORE_FIELD_BYTE_SIZE = 1,        /* field size in bytes */
+	BPF_CORE_FIELD_EXISTS = 2,           /* field existence in target kernel */
+	BPF_CORE_FIELD_SIGNED = 3,           /* field signedness (0 - unsigned, 1 - signed) */
+	BPF_CORE_FIELD_LSHIFT_U64 = 4,       /* bitfield-specific left bitshift */
+	BPF_CORE_FIELD_RSHIFT_U64 = 5,       /* bitfield-specific right bitshift */
+	BPF_CORE_TYPE_ID_LOCAL = 6,          /* type ID in local BPF object */
+	BPF_CORE_TYPE_ID_TARGET = 7,         /* type ID in target kernel */
+	BPF_CORE_TYPE_EXISTS = 8,            /* type existence in target kernel */
+	BPF_CORE_TYPE_SIZE = 9,              /* type size in bytes */
+	BPF_CORE_ENUMVAL_EXISTS = 10,        /* enum value existence in target kernel */
+	BPF_CORE_ENUMVAL_VALUE = 11,         /* enum value integer value */
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 6fc59d61937a..8fb61f22b72c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -6294,4 +6294,23 @@ enum {
 	BTF_F_ZERO	=	(1ULL << 3),
 };
 
+/* bpf_core_relo_kind encodes which aspect of captured field/type/enum value
+ * has to be adjusted by relocations. It is emitted by llvm and passed to
+ * libbpf and later to the kernel.
+ */
+enum bpf_core_relo_kind {
+	BPF_CORE_FIELD_BYTE_OFFSET = 0,      /* field byte offset */
+	BPF_CORE_FIELD_BYTE_SIZE = 1,        /* field size in bytes */
+	BPF_CORE_FIELD_EXISTS = 2,           /* field existence in target kernel */
+	BPF_CORE_FIELD_SIGNED = 3,           /* field signedness (0 - unsigned, 1 - signed) */
+	BPF_CORE_FIELD_LSHIFT_U64 = 4,       /* bitfield-specific left bitshift */
+	BPF_CORE_FIELD_RSHIFT_U64 = 5,       /* bitfield-specific right bitshift */
+	BPF_CORE_TYPE_ID_LOCAL = 6,          /* type ID in local BPF object */
+	BPF_CORE_TYPE_ID_TARGET = 7,         /* type ID in target kernel */
+	BPF_CORE_TYPE_EXISTS = 8,            /* type existence in target kernel */
+	BPF_CORE_TYPE_SIZE = 9,              /* type size in bytes */
+	BPF_CORE_ENUMVAL_EXISTS = 10,        /* enum value existence in target kernel */
+	BPF_CORE_ENUMVAL_VALUE = 11,         /* enum value integer value */
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index da65a1666a5e..bc023b6a6d87 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5181,7 +5181,7 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
 		return -ENOTSUP;
 	}
 
-	if (relo->kind != BPF_TYPE_ID_LOCAL &&
+	if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
 	    !hashmap__find(cand_cache, type_key, (void **)&cands)) {
 		cands = bpf_core_find_cands(prog->obj, local_btf, local_id);
 		if (IS_ERR(cands)) {
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index d30c315f8fd4..ad3a28079c7d 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -126,18 +126,18 @@ static bool is_flex_arr(const struct btf *btf,
 static const char *core_relo_kind_str(enum bpf_core_relo_kind kind)
 {
 	switch (kind) {
-	case BPF_FIELD_BYTE_OFFSET: return "byte_off";
-	case BPF_FIELD_BYTE_SIZE: return "byte_sz";
-	case BPF_FIELD_EXISTS: return "field_exists";
-	case BPF_FIELD_SIGNED: return "signed";
-	case BPF_FIELD_LSHIFT_U64: return "lshift_u64";
-	case BPF_FIELD_RSHIFT_U64: return "rshift_u64";
-	case BPF_TYPE_ID_LOCAL: return "local_type_id";
-	case BPF_TYPE_ID_TARGET: return "target_type_id";
-	case BPF_TYPE_EXISTS: return "type_exists";
-	case BPF_TYPE_SIZE: return "type_size";
-	case BPF_ENUMVAL_EXISTS: return "enumval_exists";
-	case BPF_ENUMVAL_VALUE: return "enumval_value";
+	case BPF_CORE_FIELD_BYTE_OFFSET: return "byte_off";
+	case BPF_CORE_FIELD_BYTE_SIZE: return "byte_sz";
+	case BPF_CORE_FIELD_EXISTS: return "field_exists";
+	case BPF_CORE_FIELD_SIGNED: return "signed";
+	case BPF_CORE_FIELD_LSHIFT_U64: return "lshift_u64";
+	case BPF_CORE_FIELD_RSHIFT_U64: return "rshift_u64";
+	case BPF_CORE_TYPE_ID_LOCAL: return "local_type_id";
+	case BPF_CORE_TYPE_ID_TARGET: return "target_type_id";
+	case BPF_CORE_TYPE_EXISTS: return "type_exists";
+	case BPF_CORE_TYPE_SIZE: return "type_size";
+	case BPF_CORE_ENUMVAL_EXISTS: return "enumval_exists";
+	case BPF_CORE_ENUMVAL_VALUE: return "enumval_value";
 	default: return "unknown";
 	}
 }
@@ -145,12 +145,12 @@ static const char *core_relo_kind_str(enum bpf_core_relo_kind kind)
 static bool core_relo_is_field_based(enum bpf_core_relo_kind kind)
 {
 	switch (kind) {
-	case BPF_FIELD_BYTE_OFFSET:
-	case BPF_FIELD_BYTE_SIZE:
-	case BPF_FIELD_EXISTS:
-	case BPF_FIELD_SIGNED:
-	case BPF_FIELD_LSHIFT_U64:
-	case BPF_FIELD_RSHIFT_U64:
+	case BPF_CORE_FIELD_BYTE_OFFSET:
+	case BPF_CORE_FIELD_BYTE_SIZE:
+	case BPF_CORE_FIELD_EXISTS:
+	case BPF_CORE_FIELD_SIGNED:
+	case BPF_CORE_FIELD_LSHIFT_U64:
+	case BPF_CORE_FIELD_RSHIFT_U64:
 		return true;
 	default:
 		return false;
@@ -160,10 +160,10 @@ static bool core_relo_is_field_based(enum bpf_core_relo_kind kind)
 static bool core_relo_is_type_based(enum bpf_core_relo_kind kind)
 {
 	switch (kind) {
-	case BPF_TYPE_ID_LOCAL:
-	case BPF_TYPE_ID_TARGET:
-	case BPF_TYPE_EXISTS:
-	case BPF_TYPE_SIZE:
+	case BPF_CORE_TYPE_ID_LOCAL:
+	case BPF_CORE_TYPE_ID_TARGET:
+	case BPF_CORE_TYPE_EXISTS:
+	case BPF_CORE_TYPE_SIZE:
 		return true;
 	default:
 		return false;
@@ -173,8 +173,8 @@ static bool core_relo_is_type_based(enum bpf_core_relo_kind kind)
 static bool core_relo_is_enumval_based(enum bpf_core_relo_kind kind)
 {
 	switch (kind) {
-	case BPF_ENUMVAL_EXISTS:
-	case BPF_ENUMVAL_VALUE:
+	case BPF_CORE_ENUMVAL_EXISTS:
+	case BPF_CORE_ENUMVAL_VALUE:
 		return true;
 	default:
 		return false;
@@ -639,7 +639,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 
 	*field_sz = 0;
 
-	if (relo->kind == BPF_FIELD_EXISTS) {
+	if (relo->kind == BPF_CORE_FIELD_EXISTS) {
 		*val = spec ? 1 : 0;
 		return 0;
 	}
@@ -652,7 +652,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 
 	/* a[n] accessor needs special handling */
 	if (!acc->name) {
-		if (relo->kind == BPF_FIELD_BYTE_OFFSET) {
+		if (relo->kind == BPF_CORE_FIELD_BYTE_OFFSET) {
 			*val = spec->bit_offset / 8;
 			/* remember field size for load/store mem size */
 			sz = btf__resolve_size(spec->btf, acc->type_id);
@@ -660,7 +660,7 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 				return -EINVAL;
 			*field_sz = sz;
 			*type_id = acc->type_id;
-		} else if (relo->kind == BPF_FIELD_BYTE_SIZE) {
+		} else if (relo->kind == BPF_CORE_FIELD_BYTE_SIZE) {
 			sz = btf__resolve_size(spec->btf, acc->type_id);
 			if (sz < 0)
 				return -EINVAL;
@@ -712,36 +712,36 @@ static int bpf_core_calc_field_relo(const char *prog_name,
 		*validate = !bitfield;
 
 	switch (relo->kind) {
-	case BPF_FIELD_BYTE_OFFSET:
+	case BPF_CORE_FIELD_BYTE_OFFSET:
 		*val = byte_off;
 		if (!bitfield) {
 			*field_sz = byte_sz;
 			*type_id = field_type_id;
 		}
 		break;
-	case BPF_FIELD_BYTE_SIZE:
+	case BPF_CORE_FIELD_BYTE_SIZE:
 		*val = byte_sz;
 		break;
-	case BPF_FIELD_SIGNED:
+	case BPF_CORE_FIELD_SIGNED:
 		/* enums will be assumed unsigned */
 		*val = btf_is_enum(mt) ||
 		       (btf_int_encoding(mt) & BTF_INT_SIGNED);
 		if (validate)
 			*validate = true; /* signedness is never ambiguous */
 		break;
-	case BPF_FIELD_LSHIFT_U64:
+	case BPF_CORE_FIELD_LSHIFT_U64:
 #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
 		*val = 64 - (bit_off + bit_sz - byte_off  * 8);
 #else
 		*val = (8 - byte_sz) * 8 + (bit_off - byte_off * 8);
 #endif
 		break;
-	case BPF_FIELD_RSHIFT_U64:
+	case BPF_CORE_FIELD_RSHIFT_U64:
 		*val = 64 - bit_sz;
 		if (validate)
 			*validate = true; /* right shift is never ambiguous */
 		break;
-	case BPF_FIELD_EXISTS:
+	case BPF_CORE_FIELD_EXISTS:
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -762,20 +762,20 @@ static int bpf_core_calc_type_relo(const struct bpf_core_relo *relo,
 	}
 
 	switch (relo->kind) {
-	case BPF_TYPE_ID_TARGET:
+	case BPF_CORE_TYPE_ID_TARGET:
 		*val = spec->root_type_id;
 		break;
-	case BPF_TYPE_EXISTS:
+	case BPF_CORE_TYPE_EXISTS:
 		*val = 1;
 		break;
-	case BPF_TYPE_SIZE:
+	case BPF_CORE_TYPE_SIZE:
 		sz = btf__resolve_size(spec->btf, spec->root_type_id);
 		if (sz < 0)
 			return -EINVAL;
 		*val = sz;
 		break;
-	case BPF_TYPE_ID_LOCAL:
-	/* BPF_TYPE_ID_LOCAL is handled specially and shouldn't get here */
+	case BPF_CORE_TYPE_ID_LOCAL:
+	/* BPF_CORE_TYPE_ID_LOCAL is handled specially and shouldn't get here */
 	default:
 		return -EOPNOTSUPP;
 	}
@@ -791,10 +791,10 @@ static int bpf_core_calc_enumval_relo(const struct bpf_core_relo *relo,
 	const struct btf_enum *e;
 
 	switch (relo->kind) {
-	case BPF_ENUMVAL_EXISTS:
+	case BPF_CORE_ENUMVAL_EXISTS:
 		*val = spec ? 1 : 0;
 		break;
-	case BPF_ENUMVAL_VALUE:
+	case BPF_CORE_ENUMVAL_VALUE:
 		if (!spec)
 			return -EUCLEAN; /* request instruction poisoning */
 		t = btf__type_by_id(spec->btf, spec->spec[0].type_id);
@@ -1253,7 +1253,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 	libbpf_print(LIBBPF_DEBUG, "\n");
 
 	/* TYPE_ID_LOCAL relo is special and doesn't need candidate search */
-	if (relo->kind == BPF_TYPE_ID_LOCAL) {
+	if (relo->kind == BPF_CORE_TYPE_ID_LOCAL) {
 		targ_res.validate = true;
 		targ_res.poison = false;
 		targ_res.orig_val = local_spec.root_type_id;
@@ -1319,7 +1319,7 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 	}
 
 	/*
-	 * For BPF_FIELD_EXISTS relo or when used BPF program has field
+	 * For BPF_CORE_FIELD_EXISTS relo or when used BPF program has field
 	 * existence checks or kernel version/config checks, it's expected
 	 * that we might not find any candidates. In this case, if field
 	 * wasn't found in any candidate, the list of candidates shouldn't
diff --git a/tools/lib/bpf/relo_core.h b/tools/lib/bpf/relo_core.h
index 3b9f8f18346c..7720af11f96f 100644
--- a/tools/lib/bpf/relo_core.h
+++ b/tools/lib/bpf/relo_core.h
@@ -4,24 +4,6 @@
 #ifndef __RELO_CORE_H
 #define __RELO_CORE_H
 
-/* bpf_core_relo_kind encodes which aspect of captured field/type/enum value
- * has to be adjusted by relocations.
- */
-enum bpf_core_relo_kind {
-	BPF_FIELD_BYTE_OFFSET = 0,	/* field byte offset */
-	BPF_FIELD_BYTE_SIZE = 1,	/* field size in bytes */
-	BPF_FIELD_EXISTS = 2,		/* field existence in target kernel */
-	BPF_FIELD_SIGNED = 3,		/* field signedness (0 - unsigned, 1 - signed) */
-	BPF_FIELD_LSHIFT_U64 = 4,	/* bitfield-specific left bitshift */
-	BPF_FIELD_RSHIFT_U64 = 5,	/* bitfield-specific right bitshift */
-	BPF_TYPE_ID_LOCAL = 6,		/* type ID in local BPF object */
-	BPF_TYPE_ID_TARGET = 7,		/* type ID in target kernel */
-	BPF_TYPE_EXISTS = 8,		/* type existence in target kernel */
-	BPF_TYPE_SIZE = 9,		/* type size in bytes */
-	BPF_ENUMVAL_EXISTS = 10,	/* enum value existence in target kernel */
-	BPF_ENUMVAL_VALUE = 11,		/* enum value integer value */
-};
-
 /* The minimum bpf_core_relo checked by the loader
  *
  * CO-RE relocation captures the following data:
-- 
2.30.2



* [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo()
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 02/10] bpf: Define enum bpf_core_relo_kind as uapi Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-23 11:21   ` Lorenz Bauer
  2021-09-17 21:57 ` [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn() Alexei Starovoitov
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Add the prototype of the bpf_core_apply_relo() helper.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h            |  1 +
 include/uapi/linux/bpf.h       | 14 ++++++++++++++
 kernel/bpf/btf.c               | 22 ++++++++++++++++++++++
 kernel/bpf/syscall.c           |  2 ++
 tools/include/uapi/linux/bpf.h | 14 ++++++++++++++
 5 files changed, 53 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b6c45a6cbbba..a3cb8ea272f7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2102,6 +2102,7 @@ extern const struct bpf_func_proto bpf_for_each_map_elem_proto;
 extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto;
 extern const struct bpf_func_proto bpf_sk_setsockopt_proto;
 extern const struct bpf_func_proto bpf_sk_getsockopt_proto;
+extern const struct bpf_func_proto bpf_core_apply_relo_proto;
 
 const struct bpf_func_proto *tracing_prog_func_proto(
   enum bpf_func_id func_id, const struct bpf_prog *prog);
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 8fb61f22b72c..c2b8857b8a1c 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4909,6 +4909,13 @@ union bpf_attr {
  *	Return
  *		The number of bytes written to the buffer, or a negative error
  *		in case of failure.
+ *
+ * long bpf_core_apply_relo(int btf_fd, struct bpf_core_relo_desc *relo, int relo_sz,
+ *			    struct bpf_insn *insn, int flags)
+ * 	Description
+ * 		Apply given relo.
+ * 	Return
+ * 		Returns 0 on success.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5089,6 +5096,7 @@ union bpf_attr {
 	FN(task_pt_regs),		\
 	FN(get_branch_snapshot),	\
 	FN(trace_vprintk),		\
+	FN(core_apply_relo),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -6313,4 +6321,10 @@ enum bpf_core_relo_kind {
 	BPF_CORE_ENUMVAL_VALUE = 11,         /* enum value integer value */
 };
 
+struct bpf_core_relo_desc {
+	__u32 type_id;
+	__u32 access_str_off;
+	enum bpf_core_relo_kind kind;
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index fa2c88f6ac4a..9bb1247346ce 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6369,3 +6369,25 @@ size_t bpf_core_essential_name_len(const char *name)
 	}
 	return n;
 }
+
+BPF_CALL_5(bpf_core_apply_relo, int, btf_fd, struct bpf_core_relo_desc *, relo,
+	   int, relo_sz, void *, insn, int, flags)
+{
+	struct btf *btf;
+	long ret;
+
+	if (flags)
+		return -EINVAL;
+	return -EOPNOTSUPP;
+}
+
+const struct bpf_func_proto bpf_core_apply_relo_proto = {
+	.func		= bpf_core_apply_relo,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_PTR_TO_MEM,
+	.arg3_type	= ARG_CONST_SIZE,
+	.arg4_type	= ARG_PTR_TO_LONG,
+	.arg5_type	= ARG_ANYTHING,
+};
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4e50c0bfdb7d..3c349b244a28 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -4763,6 +4763,8 @@ syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_btf_find_by_name_kind_proto;
 	case BPF_FUNC_sys_close:
 		return &bpf_sys_close_proto;
+	case BPF_FUNC_core_apply_relo:
+		return &bpf_core_apply_relo_proto;
 	default:
 		return tracing_prog_func_proto(func_id, prog);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 8fb61f22b72c..c2b8857b8a1c 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4909,6 +4909,13 @@ union bpf_attr {
  *	Return
  *		The number of bytes written to the buffer, or a negative error
  *		in case of failure.
+ *
+ * long bpf_core_apply_relo(int btf_fd, struct bpf_core_relo_desc *relo, int relo_sz,
+ *			    struct bpf_insn *insn, int flags)
+ * 	Description
+ * 		Apply given relo.
+ * 	Return
+ * 		Returns 0 on success.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5089,6 +5096,7 @@ union bpf_attr {
 	FN(task_pt_regs),		\
 	FN(get_branch_snapshot),	\
 	FN(trace_vprintk),		\
+	FN(core_apply_relo),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
@@ -6313,4 +6321,10 @@ enum bpf_core_relo_kind {
 	BPF_CORE_ENUMVAL_VALUE = 11,         /* enum value integer value */
 };
 
+struct bpf_core_relo_desc {
+	__u32 type_id;
+	__u32 access_str_off;
+	enum bpf_core_relo_kind kind;
+};
+
 #endif /* _UAPI__LINUX_BPF_H__ */
-- 
2.30.2



* [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (2 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo() Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-21 21:34   ` Andrii Nakryiko
  2021-09-27 18:04   ` Matteo Croce
  2021-09-17 21:57 ` [PATCH RFC bpf-next 05/10] libbpf: Use CO-RE in the kernel in light skeleton Alexei Starovoitov
                   ` (6 subsequent siblings)
  10 siblings, 2 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Given a BPF program's BTF, perform a linear search through kernel BTFs
for a possible candidate.
Then wire the result into bpf_core_apply_relo_insn().

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 kernel/bpf/btf.c | 149 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 147 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 9bb1247346ce..e04b5e669d12 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -25,6 +25,7 @@
 #include <linux/kobject.h>
 #include <linux/sysfs.h>
 #include <net/sock.h>
+#include "../tools/lib/bpf/relo_core.h"
 
 /* BTF (BPF Type Format) is the meta data format which describes
  * the data types of BPF program/map.  Hence, it basically focus
@@ -6370,15 +6371,159 @@ size_t bpf_core_essential_name_len(const char *name)
 	return n;
 }
 
+static void bpf_core_free_cands(struct bpf_core_cand_list *cands)
+{
+	kfree(cands->cands);
+	kfree(cands);
+}
+
+static int bpf_core_add_cands(struct bpf_core_cand *local_cand,
+                              size_t local_essent_len,
+                              const struct btf *targ_btf,
+                              int targ_start_id,
+                              struct bpf_core_cand_list *cands)
+{
+	struct bpf_core_cand *new_cands, *cand;
+	const struct btf_type *t;
+	const char *targ_name;
+	size_t targ_essent_len;
+	int n, i;
+
+	n = btf_nr_types(targ_btf);
+	for (i = targ_start_id; i < n; i++) {
+		t = btf__type_by_id(targ_btf, i);
+		if (btf_kind(t) != btf_kind(local_cand->t))
+			continue;
+
+		targ_name = btf_name_by_offset(targ_btf, t->name_off);
+		if (str_is_empty(targ_name))
+			continue;
+
+		targ_essent_len = bpf_core_essential_name_len(targ_name);
+		if (targ_essent_len != local_essent_len)
+			continue;
+
+		if (strncmp(local_cand->name, targ_name, local_essent_len) != 0)
+			continue;
+
+/*		printk("CO-RE relocating [%d] %s %s: found target candidate [%d] %s %s\n",
+		       local_cand->id, btf_type_str(local_cand->t),
+		       local_cand->name, i, btf_type_str(t), targ_name);*/
+		new_cands = krealloc(cands->cands,
+				     (cands->len + 1) * sizeof(*cands->cands), GFP_KERNEL);
+		if (!new_cands)
+			return -ENOMEM;
+
+		cand = &new_cands[cands->len];
+		cand->btf = targ_btf;
+		cand->t = t;
+		cand->name = targ_name;
+		cand->id = i;
+
+		cands->cands = new_cands;
+		cands->len++;
+	}
+	return 0;
+}
+
+static struct bpf_core_cand_list *
+bpf_core_find_cands(const struct btf *local_btf, u32 local_type_id)
+{
+	struct bpf_core_cand local_cand = {};
+	struct bpf_core_cand_list *cands;
+	const struct btf *main_btf;
+	size_t local_essent_len;
+	struct btf *mod_btf;
+	int err;
+	int id;
+
+	local_cand.btf = local_btf;
+	local_cand.t = btf__type_by_id(local_btf, local_type_id);
+	if (!local_cand.t)
+		return ERR_PTR(-EINVAL);
+
+	local_cand.name = btf_name_by_offset(local_btf, local_cand.t->name_off);
+	if (str_is_empty(local_cand.name))
+		return ERR_PTR(-EINVAL);
+	local_essent_len = bpf_core_essential_name_len(local_cand.name);
+
+	cands = kcalloc(1, sizeof(*cands), GFP_KERNEL);
+	if (!cands)
+		return ERR_PTR(-ENOMEM);
+
+	/* Attempt to find target candidates in vmlinux BTF first */
+	main_btf = bpf_get_btf_vmlinux();
+	err = bpf_core_add_cands(&local_cand, local_essent_len, main_btf, 1, cands);
+	if (err)
+		goto err_out;
+
+	/* if vmlinux BTF has any candidate, don't go to module BTFs */
+	if (cands->len)
+		return cands;
+
+	/* If no candidate was found in vmlinux's BTF then search in module BTFs */
+	spin_lock_bh(&btf_idr_lock);
+	idr_for_each_entry(&btf_idr, mod_btf, id) {
+		if (!btf_is_module(mod_btf))
+			continue;
+		/* linear search could be slow hence unlock/lock
+		 * the IDR to avoid holding it for too long
+		 */
+		btf_get(mod_btf);
+		spin_unlock_bh(&btf_idr_lock);
+		err = bpf_core_add_cands(&local_cand, local_essent_len,
+					 mod_btf,
+					 btf_nr_types(main_btf),
+					 cands);
+		if (err) {
+			btf_put(mod_btf);
+			goto err_out;
+		}
+		spin_lock_bh(&btf_idr_lock);
+		btf_put(mod_btf);
+	}
+	spin_unlock_bh(&btf_idr_lock);
+
+	return cands;
+err_out:
+	bpf_core_free_cands(cands);
+	return ERR_PTR(err);
+}
+
 BPF_CALL_5(bpf_core_apply_relo, int, btf_fd, struct bpf_core_relo_desc *, relo,
 	   int, relo_sz, void *, insn, int, flags)
 {
+	struct bpf_core_cand_list *cands = NULL;
+	struct bpf_core_relo core_relo = {};
 	struct btf *btf;
-	long ret;
+	int err;
 
 	if (flags)
 		return -EINVAL;
-	return -EOPNOTSUPP;
+
+	if (sizeof(*relo) != relo_sz)
+		return -EINVAL;
+	btf = btf_get_by_fd(btf_fd);
+	if (IS_ERR(btf))
+		return PTR_ERR(btf);
+	if (btf_is_kernel(btf)) {
+		btf_put(btf);
+		return -EACCES;
+	}
+	if (relo->kind != BPF_CORE_TYPE_ID_LOCAL) {
+		cands = bpf_core_find_cands(btf, relo->type_id);
+		if (IS_ERR(cands)) {
+			btf_put(btf);
+			printk("target candidate search failed for %d\n",
+			       relo->type_id);
+			return PTR_ERR(cands);
+		}
+	}
+	core_relo.type_id = relo->type_id;
+	core_relo.access_str_off = relo->access_str_off;
+	core_relo.kind = relo->kind;
+	err = bpf_core_apply_relo_insn("prog_name", insn, 0, &core_relo, 0, btf, cands);
+	if (cands)
+		bpf_core_free_cands(cands);
+	btf_put(btf);
+	return err;
 }
 
 const struct bpf_func_proto bpf_core_apply_relo_proto = {
-- 
2.30.2



* [PATCH RFC bpf-next 05/10] libbpf: Use CO-RE in the kernel in light skeleton.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (3 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn() Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 06/10] libbpf: Make gen_loader data aligned Alexei Starovoitov
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/lib/bpf/bpf_gen_internal.h | 12 ++++++++
 tools/lib/bpf/gen_loader.c       | 47 ++++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.c           | 27 +++++++++++++++---
 3 files changed, 82 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
index 615400391e57..1b27faf71080 100644
--- a/tools/lib/bpf/bpf_gen_internal.h
+++ b/tools/lib/bpf/bpf_gen_internal.h
@@ -9,6 +9,13 @@ struct ksym_relo_desc {
 	int insn_idx;
 };
 
+struct core_relo_desc {
+	__u32   insn_idx;
+	__u32   type_id;
+	__u32   access_str_off;
+	__u32	kind;
+};
+
 struct bpf_gen {
 	struct gen_loader_opts *opts;
 	void *data_start;
@@ -22,6 +29,8 @@ struct bpf_gen {
 	int error;
 	struct ksym_relo_desc *relos;
 	int relo_cnt;
+	struct core_relo_desc *core_relos;
+	int core_relo_cnt;
 	char attach_target[128];
 	int attach_kind;
 };
@@ -37,5 +46,8 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *value, __u
 void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx);
 void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *name, enum bpf_attach_type type);
 void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind, int insn_idx);
+struct bpf_core_relo;
+void bpf_gen__record_relo_core(struct bpf_gen *gen, const struct bpf_core_relo *core_relo,
+			       int insn_idx);
 
 #endif
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 8df718a6b142..fe2415ab53f6 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -524,6 +524,23 @@ void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind,
 	gen->relo_cnt++;
 }
 
+void bpf_gen__record_relo_core(struct bpf_gen *gen, const struct bpf_core_relo *core_relo,
+			       int insn_idx)
+{
+	struct core_relo_desc *relo;
+
+	relo = libbpf_reallocarray(gen->core_relos, gen->core_relo_cnt + 1, sizeof(*relo));
+	if (!relo) {
+		gen->error = -ENOMEM;
+		return;
+	}
+	gen->core_relos = relo;
+	relo += gen->core_relo_cnt;
+	memcpy(relo, core_relo, sizeof(*relo));
+	relo->insn_idx = insn_idx;
+	gen->core_relo_cnt++;
+}
+
 static void emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insns)
 {
 	int name, insn, len = strlen(relo->name) + 1;
@@ -554,12 +571,37 @@ static void emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insn
 	}
 }
 
+static void emit_core_relo(struct bpf_gen *gen, struct core_relo_desc *relo, int insns)
+{
+	int relo_off, insn, len = sizeof(*relo) - 4;
+
+	pr_debug("gen: relo_core: local_btf_id %d kind %d at %d\n",
+		 relo->type_id, relo->kind, relo->insn_idx);
+	relo_off = add_data(gen, &relo->type_id, len);
+
+	emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, stack_off(btf_fd)));
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_2, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 0, relo_off));
+	emit(gen, BPF_MOV64_IMM(BPF_REG_3, len));
+	insn = insns + sizeof(struct bpf_insn) * relo->insn_idx;
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_4, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 0, insn));
+	emit(gen, BPF_MOV64_IMM(BPF_REG_5, 0));
+	emit(gen, BPF_EMIT_CALL(BPF_FUNC_core_apply_relo));
+	emit(gen, BPF_MOV64_REG(BPF_REG_7, BPF_REG_0));
+	debug_ret(gen, "core_apply_relo(btf_id=%d,kind=%d,insn=%d)",
+		  relo->type_id, relo->kind, relo->insn_idx);
+	emit_check_err(gen);
+}
+
 static void emit_relos(struct bpf_gen *gen, int insns)
 {
 	int i;
 
 	for (i = 0; i < gen->relo_cnt; i++)
 		emit_relo(gen, gen->relos + i, insns);
+	for (i = 0; i < gen->core_relo_cnt; i++)
+		emit_core_relo(gen, gen->core_relos + i, insns);
 }
 
 static void cleanup_relos(struct bpf_gen *gen, int insns)
@@ -580,6 +622,11 @@ static void cleanup_relos(struct bpf_gen *gen, int insns)
 		gen->relo_cnt = 0;
 		gen->relos = NULL;
 	}
+	if (gen->core_relo_cnt) {
+		free(gen->core_relos);
+		gen->core_relo_cnt = 0;
+		gen->core_relos = NULL;
+	}
 }
 
 void bpf_gen__prog_load(struct bpf_gen *gen,
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index bc023b6a6d87..ed2c4f6661fe 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5175,10 +5175,17 @@ static int bpf_core_apply_relo(struct bpf_program *prog,
 		return -EINVAL;
 
 	if (prog->obj->gen_loader) {
-		pr_warn("// TODO core_relo: prog %td insn[%d] %s kind %d\n",
+		const char *spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
+
+		/* Adjust the relo's insn_idx, since the final subprog offset
+		 * in the main prog is now known.
+		 */
+		insn_idx += prog->sub_insn_off;
+		pr_debug("record_relo_core: prog %td insn[%d] %s %s %s final insn_idx %d\n",
 			prog - prog->obj->programs, relo->insn_off / 8,
-			local_name, relo->kind);
-		return -ENOTSUP;
+			btf_kind_str(local_type), local_name, spec_str, insn_idx);
+		bpf_gen__record_relo_core(prog->obj->gen_loader, relo, insn_idx);
+		return 0;
 	}
 
 	if (relo->kind != BPF_CORE_TYPE_ID_LOCAL &&
@@ -5813,7 +5820,7 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 	size_t i, j;
 	int err;
 
-	if (obj->btf_ext) {
+	if (obj->btf_ext && !obj->gen_loader) {
 		err = bpf_object__relocate_core(obj, targ_btf_path);
 		if (err) {
 			pr_warn("failed to perform CO-RE relocations: %d\n",
@@ -5863,6 +5870,18 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 			return err;
 		}
 	}
+
+	/* Let gen_loader record CO-RE relocations after subprogs have been
+	 * appended, so that insn_idx is calculated as an offset within the
+	 * main prog.
+	 */
+	if (obj->btf_ext && obj->gen_loader) {
+		err = bpf_object__relocate_core(obj, targ_btf_path);
+		if (err) {
+			pr_warn("failed to perform CO-RE relocations: %d\n",
+				err);
+			return err;
+		}
+	}
 	/* Process data relos for main programs */
 	for (i = 0; i < obj->nr_programs; i++) {
 		prog = &obj->programs[i];
-- 
2.30.2



* [PATCH RFC bpf-next 06/10] libbpf: Make gen_loader data aligned.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (4 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 05/10] libbpf: Use CO-RE in the kernel in light skeleton Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 07/10] libbpf: Support init of inner maps in light skeleton Alexei Starovoitov
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Align gen_loader data to an 8-byte boundary to make sure that
union bpf_attr, bpf_insn arrays, and other structs are aligned.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/lib/bpf/gen_loader.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index fe2415ab53f6..28ecd932713b 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -5,6 +5,7 @@
 #include <string.h>
 #include <errno.h>
 #include <linux/filter.h>
+#include <sys/param.h>
 #include "btf.h"
 #include "bpf.h"
 #include "libbpf.h"
@@ -135,13 +136,17 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level)
 
 static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
 {
+	__u32 size8 = roundup(size, 8);
+	__u64 zero = 0;
 	void *prev;
 
-	if (realloc_data_buf(gen, size))
+	if (realloc_data_buf(gen, size8))
 		return 0;
 	prev = gen->data_cur;
 	memcpy(gen->data_cur, data, size);
 	gen->data_cur += size;
+	memcpy(gen->data_cur, &zero, size8 - size);
+	gen->data_cur += size8 - size;
 	return prev - gen->data_start;
 }
 
-- 
2.30.2



* [PATCH RFC bpf-next 07/10] libbpf: Support init of inner maps in light skeleton.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (5 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 06/10] libbpf: Make gen_loader data aligned Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 08/10] selftests/bpf: Convert kfunc test with CO-RE to lskel Alexei Starovoitov
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Add the ability to initialize inner maps in the light skeleton.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/lib/bpf/bpf_gen_internal.h |  1 +
 tools/lib/bpf/gen_loader.c       | 31 +++++++++++++++++++++++++++++++
 tools/lib/bpf/libbpf.c           |  6 +++---
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
index 1b27faf71080..e7f6ad53aed6 100644
--- a/tools/lib/bpf/bpf_gen_internal.h
+++ b/tools/lib/bpf/bpf_gen_internal.h
@@ -49,5 +49,6 @@ void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind, int
 struct bpf_core_relo;
 void bpf_gen__record_relo_core(struct bpf_gen *gen, const struct bpf_core_relo *core_relo,
 			       int insn_idx);
+void bpf_gen__populate_outer_map(struct bpf_gen *gen, int outer_map_idx, int key, int inner_map_idx);
 
 #endif
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 28ecd932713b..35f51d837320 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -763,6 +763,37 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
 	emit_check_err(gen);
 }
 
+void bpf_gen__populate_outer_map(struct bpf_gen *gen, int outer_map_idx, int slot,
+				 int inner_map_idx)
+{
+	int attr_size = offsetofend(union bpf_attr, flags);
+	int map_update_attr, key;
+	union bpf_attr attr;
+
+	memset(&attr, 0, attr_size);
+	pr_debug("gen: populate_outer_map: outer %d key %d inner %d\n",
+		 outer_map_idx, slot, inner_map_idx);
+
+	key = add_data(gen, &slot, sizeof(slot));
+
+	map_update_attr = add_data(gen, &attr, attr_size);
+	move_stack2blob(gen, attr_field(map_update_attr, map_fd), 4,
+			stack_off(map_fd[outer_map_idx]));
+	emit_rel_store(gen, attr_field(map_update_attr, key), key);
+
+	emit(gen, BPF_MOV64_REG(BPF_REG_1, BPF_REG_10));
+	emit(gen, BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, stack_off(map_fd[inner_map_idx])));
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_2, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 0, attr_field(map_update_attr, value)));
+	emit(gen, BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, 0));
+
+	/* emit MAP_UPDATE_ELEM command */
+	emit_sys_bpf(gen, BPF_MAP_UPDATE_ELEM, map_update_attr, attr_size);
+	debug_ret(gen, "populate_outer_map outer %d key %d inner %d",
+		  outer_map_idx, slot, inner_map_idx);
+	emit_check_err(gen);
+}
+
 void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx)
 {
 	int attr_size = offsetofend(union bpf_attr, map_fd);
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ed2c4f6661fe..b4ebe19c4f6c 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4693,9 +4693,9 @@ static int init_map_slots(struct bpf_object *obj, struct bpf_map *map)
 		targ_map = map->init_slots[i];
 		fd = bpf_map__fd(targ_map);
 		if (obj->gen_loader) {
-			pr_warn("// TODO map_update_elem: idx %td key %d value==map_idx %td\n",
-				map - obj->maps, i, targ_map - obj->maps);
-			return -ENOTSUP;
+			bpf_gen__populate_outer_map(obj->gen_loader,
+						    map - obj->maps, i,
+						    targ_map - obj->maps);
 		} else {
 			err = bpf_map_update_elem(map->fd, &i, &fd, 0);
 		}
-- 
2.30.2



* [PATCH RFC bpf-next 08/10] selftests/bpf: Convert kfunc test with CO-RE to lskel.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (6 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 07/10] libbpf: Support init of inner maps in light skeleton Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 09/10] selftests/bpf: Improve inner_map test coverage Alexei Starovoitov
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Convert kfunc_call_test_subprog to light skeleton.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/testing/selftests/bpf/Makefile                | 2 +-
 tools/testing/selftests/bpf/prog_tests/kfunc_call.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 326ea75ce99e..81031c8bdcb8 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -316,7 +316,7 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
-	trace_vprintk.c
+	trace_vprintk.c	kfunc_call_test_subprog.c
 SKEL_BLACKLIST += $$(LSKELS)
 
 test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 9611f2bc50df..01bd11f04dc6 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -3,7 +3,7 @@
 #include <test_progs.h>
 #include <network_helpers.h>
 #include "kfunc_call_test.lskel.h"
-#include "kfunc_call_test_subprog.skel.h"
+#include "kfunc_call_test_subprog.lskel.h"
 
 static void test_main(void)
 {
@@ -38,7 +38,7 @@ static void test_subprog(void)
 	if (!ASSERT_OK_PTR(skel, "skel"))
 		return;
 
-	prog_fd = bpf_program__fd(skel->progs.kfunc_call_test1);
+	prog_fd = skel->progs.kfunc_call_test1.prog_fd;
 	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
 				NULL, NULL, (__u32 *)&retval, NULL);
 	ASSERT_OK(err, "bpf_prog_test_run(test1)");
-- 
2.30.2



* [PATCH RFC bpf-next 09/10] selftests/bpf: Improve inner_map test coverage.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (7 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 08/10] selftests/bpf: Convert kfunc test with CO-RE to lskel Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-17 21:57 ` [PATCH RFC bpf-next 10/10] selftests/bpf: Convert map_ptr_kern test to use light skeleton Alexei Starovoitov
  2021-09-23 11:33 ` [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Lorenz Bauer
  10 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

Check that hash and array inner maps are properly initialized.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/testing/selftests/bpf/progs/map_ptr_kern.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/map_ptr_kern.c b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
index d1d304c980f0..b9b7342763c3 100644
--- a/tools/testing/selftests/bpf/progs/map_ptr_kern.c
+++ b/tools/testing/selftests/bpf/progs/map_ptr_kern.c
@@ -334,9 +334,11 @@ static inline int check_lpm_trie(void)
 	return 1;
 }
 
+#define INNER_MAX_ENTRIES 1234
+
 struct inner_map {
 	__uint(type, BPF_MAP_TYPE_ARRAY);
-	__uint(max_entries, 1);
+	__uint(max_entries, INNER_MAX_ENTRIES);
 	__type(key, __u32);
 	__type(value, __u32);
 } inner_map SEC(".maps");
@@ -348,7 +350,7 @@ struct {
 	__type(value, __u32);
 	__array(values, struct {
 		__uint(type, BPF_MAP_TYPE_ARRAY);
-		__uint(max_entries, 1);
+		__uint(max_entries, INNER_MAX_ENTRIES);
 		__type(key, __u32);
 		__type(value, __u32);
 	});
@@ -360,8 +362,13 @@ static inline int check_array_of_maps(void)
 {
 	struct bpf_array *array_of_maps = (struct bpf_array *)&m_array_of_maps;
 	struct bpf_map *map = (struct bpf_map *)&m_array_of_maps;
+	struct bpf_array *inner_map;
+	int key = 0;
 
 	VERIFY(check_default(&array_of_maps->map, map));
+	inner_map = bpf_map_lookup_elem(array_of_maps, &key);
+	VERIFY(inner_map != 0);
+	VERIFY(inner_map->map.max_entries == INNER_MAX_ENTRIES);
 
 	return 1;
 }
@@ -382,8 +389,13 @@ static inline int check_hash_of_maps(void)
 {
 	struct bpf_htab *hash_of_maps = (struct bpf_htab *)&m_hash_of_maps;
 	struct bpf_map *map = (struct bpf_map *)&m_hash_of_maps;
+	struct bpf_htab *inner_map;
+	int key = 2;
 
 	VERIFY(check_default(&hash_of_maps->map, map));
+	inner_map = bpf_map_lookup_elem(hash_of_maps, &key);
+	VERIFY(inner_map != 0);
+	VERIFY(inner_map->map.max_entries == INNER_MAX_ENTRIES);
 
 	return 1;
 }
-- 
2.30.2



* [PATCH RFC bpf-next 10/10] selftests/bpf: Convert map_ptr_kern test to use light skeleton.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (8 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 09/10] selftests/bpf: Improve inner_map test coverage Alexei Starovoitov
@ 2021-09-17 21:57 ` Alexei Starovoitov
  2021-09-23 11:33 ` [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Lorenz Bauer
  10 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-17 21:57 UTC (permalink / raw)
  To: davem; +Cc: daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

To further exercise CO-RE in the kernel, convert the map_ptr_kern
test to use a light skeleton.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 tools/testing/selftests/bpf/Makefile             | 2 +-
 tools/testing/selftests/bpf/prog_tests/map_ptr.c | 8 +++-----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 81031c8bdcb8..390e9875a13e 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -316,7 +316,7 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
 	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
-	trace_vprintk.c	kfunc_call_test_subprog.c
+	trace_vprintk.c	kfunc_call_test_subprog.c map_ptr_kern.c
 SKEL_BLACKLIST += $$(LSKELS)
 
 test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
diff --git a/tools/testing/selftests/bpf/prog_tests/map_ptr.c b/tools/testing/selftests/bpf/prog_tests/map_ptr.c
index 4972f92205c7..62369f8ae124 100644
--- a/tools/testing/selftests/bpf/prog_tests/map_ptr.c
+++ b/tools/testing/selftests/bpf/prog_tests/map_ptr.c
@@ -4,7 +4,7 @@
 #include <test_progs.h>
 #include <network_helpers.h>
 
-#include "map_ptr_kern.skel.h"
+#include "map_ptr_kern.lskel.h"
 
 void test_map_ptr(void)
 {
@@ -18,9 +18,7 @@ void test_map_ptr(void)
 	if (!ASSERT_OK_PTR(skel, "skel_open"))
 		return;
 
-	err = bpf_map__set_max_entries(skel->maps.m_ringbuf, page_size);
-	if (!ASSERT_OK(err, "bpf_map__set_max_entries"))
-		goto cleanup;
+	skel->maps.m_ringbuf.max_entries = page_size;
 
 	err = map_ptr_kern__load(skel);
 	if (!ASSERT_OK(err, "skel_load"))
@@ -28,7 +26,7 @@ void test_map_ptr(void)
 
 	skel->bss->page_size = page_size;
 
-	err = bpf_prog_test_run(bpf_program__fd(skel->progs.cg_skb), 1, &pkt_v4,
+	err = bpf_prog_test_run(skel->progs.cg_skb.prog_fd, 1, &pkt_v4,
 				sizeof(pkt_v4), buf, NULL, &retval, NULL);
 
 	if (CHECK(err, "test_run", "err=%d errno=%d\n", err, errno))
-- 
2.30.2



* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-17 21:57 ` [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty Alexei Starovoitov
@ 2021-09-21 21:25   ` Andrii Nakryiko
  2021-09-28 14:45   ` Matteo Croce
  1 sibling, 0 replies; 34+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 21:25 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	john fastabend, Lorenz Bauer, mcroce, bpf, Kernel Team

On Fri, Sep 17, 2021 at 2:57 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Make relo_core.c to be compiled with kernel and with libbpf.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  include/linux/btf.h       | 89 +++++++++++++++++++++++++++++++++++++++
>  kernel/bpf/Makefile       |  4 ++
>  kernel/bpf/btf.c          | 26 ++++++++++++
>  tools/lib/bpf/relo_core.c | 72 ++++++++++++++++++++++++++++++-
>  4 files changed, 189 insertions(+), 2 deletions(-)
>

[...]

>  static inline u16 btf_func_linkage(const struct btf_type *t)
>  {
>         return BTF_INFO_VLEN(t->info);
> @@ -193,6 +245,27 @@ static inline bool btf_type_kflag(const struct btf_type *t)
>         return BTF_INFO_KFLAG(t->info);
>  }
>
> +static inline struct btf_member *btf_members(const struct btf_type *t)
> +{
> +       return (struct btf_member *)(t + 1);
> +}
> +#ifdef RELO_CORE

ugh... seems like in most (if not all) cases kernel has member_idx
available, so it will be a simple and mechanical change to use
libbpf's member_idx implementation everywhere. Would allow #define
RELO_CORE, unless you think it's useful for something else?

> +static inline u32 btf_member_bit_offset(const struct btf_type *t, u32 member_idx)
> +{
> +       const struct btf_member *m = btf_members(t) + member_idx;
> +       bool kflag = btf_type_kflag(t);
> +
> +       return kflag ? BTF_MEMBER_BIT_OFFSET(m->offset) : m->offset;
> +}
> +

[...]

>  static inline const struct btf_var_secinfo *btf_type_var_secinfo(
>                 const struct btf_type *t)
>  {
> @@ -222,6 +307,10 @@ static inline const struct btf_var_secinfo *btf_type_var_secinfo(
>  struct bpf_prog;
>
>  const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id);
> +static inline const struct btf_type *btf__type_by_id(const struct btf *btf, u32 type_id)
> +{
> +       return btf_type_by_id(btf, type_id);
> +}

There is just one place in relo_core.c where btf__type_by_id()
behavior matters, that's in bpf_core_apply_relo_insn() which validates
that relocation info is valid. But we've already done that in
bpf_core_apply_relo() anyways, so I think all relo_core.c uses can be
just switched to btf_type_by_id(). So you don't need to add this tiny
wrapper.

>  const char *btf_name_by_offset(const struct btf *btf, u32 offset);
>  struct btf *btf_parse_vmlinux(void);
>  struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog);
> diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
> index 7f33098ca63f..3d5370c876b5 100644
> --- a/kernel/bpf/Makefile
> +++ b/kernel/bpf/Makefile
> @@ -36,3 +36,7 @@ obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
>  obj-${CONFIG_BPF_LSM} += bpf_lsm.o
>  endif
>  obj-$(CONFIG_BPF_PRELOAD) += preload/
> +
> +obj-$(CONFIG_BPF_SYSCALL) += relo_core.o
> +$(obj)/relo_core.o: $(srctree)/tools/lib/bpf/relo_core.c FORCE
> +       $(call if_changed_rule,cc_o_c)
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index c3d605b22473..fa2c88f6ac4a 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -6343,3 +6343,29 @@ const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
>  };
>
>  BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)
> +
> +int bpf_core_types_are_compat(const struct btf *local_btf, __u32 local_id,
> +                             const struct btf *targ_btf, __u32 targ_id)
> +{

We chatted offline about bpf_core_types_are_compat() and how it's used
in only one place. It's because fields have dedicated
bpf_core_fields_are_compat(), which is non-recursive.

> +       return -EOPNOTSUPP;
> +}
> +
> +static bool bpf_core_is_flavor_sep(const char *s)
> +{
> +       /* check X___Y name pattern, where X and Y are not underscores */
> +       return s[0] != '_' &&                                 /* X */
> +              s[1] == '_' && s[2] == '_' && s[3] == '_' &&   /* ___ */
> +              s[4] != '_';                                   /* Y */
> +}
> +

[...]


* Re: [PATCH RFC bpf-next 02/10] bpf: Define enum bpf_core_relo_kind as uapi.
  2021-09-17 21:57 ` [PATCH RFC bpf-next 02/10] bpf: Define enum bpf_core_relo_kind as uapi Alexei Starovoitov
@ 2021-09-21 21:27   ` Andrii Nakryiko
  0 siblings, 0 replies; 34+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 21:27 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	john fastabend, Lorenz Bauer, mcroce, bpf, Kernel Team

On Fri, Sep 17, 2021 at 2:57 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> enum bpf_core_relo_kind is generated by llvm and processed by libbpf.
> It's a de-facto uapi.
> With CO-RE in the kernel the bpf_core_relo_kind values become uapi de-jure.
> Also rename them with BPF_CORE_ prefix to distinguish from conflicting names in
> bpf_core_read.h. The enums bpf_field_info_kind, bpf_type_id_kind,
> bpf_type_info_kind, bpf_enum_value_kind are passing different values from bpf
> program into llvm.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

LGTM.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  include/uapi/linux/bpf.h       | 19 ++++++++
>  tools/include/uapi/linux/bpf.h | 19 ++++++++
>  tools/lib/bpf/libbpf.c         |  2 +-
>  tools/lib/bpf/relo_core.c      | 84 +++++++++++++++++-----------------
>  tools/lib/bpf/relo_core.h      | 18 --------
>  5 files changed, 81 insertions(+), 61 deletions(-)
>

[...]


* Re: [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().
  2021-09-17 21:57 ` [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn() Alexei Starovoitov
@ 2021-09-21 21:34   ` Andrii Nakryiko
  2021-09-27 18:04   ` Matteo Croce
  1 sibling, 0 replies; 34+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 21:34 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	john fastabend, Lorenz Bauer, mcroce, bpf, Kernel Team

On Fri, Sep 17, 2021 at 2:57 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Given a BPF program's BTF, perform a linear search through kernel BTFs for
> a possible candidate.
> Then wire the result into bpf_core_apply_relo_insn().
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  kernel/bpf/btf.c | 149 ++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 147 insertions(+), 2 deletions(-)
>

[...]

> +       /* If candidate is not found in vmlinux's BTF then search in module's BTFs */
> +       spin_lock_bh(&btf_idr_lock);
> +       idr_for_each_entry(&btf_idr, mod_btf, id) {
> +               if (!btf_is_module(mod_btf))
> +                       continue;
> +               /* linear search could be slow hence unlock/lock
> +                * the IDR to avoid holding it for too long
> +                */
> +               btf_get(mod_btf);
> +               spin_unlock_bh(&btf_idr_lock);
> +               err = bpf_core_add_cands(&local_cand, local_essent_len,
> +                                        mod_btf,
> +                                        btf_nr_types(main_btf),
> +                                        cands);
> +               if (err)
> +                       btf_put(mod_btf);
> +               goto err_out;

this couldn't have worked properly for modules, missing {} ?

> +               spin_lock_bh(&btf_idr_lock);
> +               btf_put(mod_btf);
> +       }
> +       spin_unlock_bh(&btf_idr_lock);
> +
> +       return cands;
> +err_out:
> +       bpf_core_free_cands(cands);
> +       return ERR_PTR(err);
> +}
> +

[...]


* Re: [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo()
  2021-09-17 21:57 ` [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo() Alexei Starovoitov
@ 2021-09-23 11:21   ` Lorenz Bauer
  2021-09-23 18:48     ` Andrii Nakryiko
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenz Bauer @ 2021-09-23 11:21 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Fri, 17 Sept 2021 at 22:57, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Prototype of bpf_core_apply_relo() helper.
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

...

> @@ -6313,4 +6321,10 @@ enum bpf_core_relo_kind {
>         BPF_CORE_ENUMVAL_VALUE = 11,         /* enum value integer value */
>  };
>
> +struct bpf_core_relo_desc {
> +       __u32 type_id;
> +       __u32 access_str_off;
> +       enum bpf_core_relo_kind kind;

I'm not a C expert, but I thought enums don't have a fixed size?

--
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
                   ` (9 preceding siblings ...)
  2021-09-17 21:57 ` [PATCH RFC bpf-next 10/10] selftests/bpf: Convert map_ptr_kern test to use light skeleton Alexei Starovoitov
@ 2021-09-23 11:33 ` Lorenz Bauer
  2021-09-23 18:52   ` Andrii Nakryiko
  2021-09-24 23:13   ` Alexei Starovoitov
  10 siblings, 2 replies; 34+ messages in thread
From: Lorenz Bauer @ 2021-09-23 11:33 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Fri, 17 Sept 2021 at 22:57, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> Hi All,
>
> This is a very early RFC that introduces CO-RE support in the kernel.
> There are several reasons to add such support:
> 1. It's a step toward signed BPF programs.
> 2. It allows Go-like languages that struggle to adopt libbpf
>    to take advantage of CO-RE powers.
> 3. Currently the field accessed by an 'ldx [R1 + 10]' insn is recognized
>    by the verifier purely based on the +10 offset. If R1 points to a union
>    the verifier picks one of the fields at this offset.
>    With CO-RE the kernel can disambiguate the field access.
>
> This set wires all relevant pieces and passes two selftests with CO-RE
> in the kernel.
>
> The main goal of this RFC is to get feedback on patch 3.
> It passes CO-RE relocations into the kernel via the bpf_core_apply_relo()
> helper that is called by the loader program.
> It works, but there is no clean way to add error reporting here.
> So I'm thinking that the better approach would be to pass an array
> of 'struct bpf_core_relo_desc' into the PROG_LOAD command, similar to
> how func_info and line_info are passed.
> Such an approach would allow for use case 3 above (which
> the current approach in patch 3 doesn't support).

+1 to having good error reporting, it's hard to debug CO-RE failures
as they are. PROG_LOAD seems nice, provided that relocation happens
before verification.

Some questions:
* How can this handle kernels that don't have built-in BTF? Not a
problem for myself, but some people have to deal with BTF-less distro
kernels by using pahole to generate external BTF from debug symbols.
Can we accommodate that?
* Does in-kernel CO-RE need to account for packed structs w/ bitfields
in them? If relocation happens after verification this could be a
problem: [1].
* Tangent: libbpf CO-RE has this res->validate flag, which turns off
some checks for bitfields. I've never fully understood why that is,
maybe Andrii can explain it?

Lorenz

1: https://lore.kernel.org/bpf/CACAyw9_R4_ib0KvcuQC4nSOy5+Hn8-Xq-G8geDdLsNztX=0Fsw@mail.gmail.com/

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo()
  2021-09-23 11:21   ` Lorenz Bauer
@ 2021-09-23 18:48     ` Andrii Nakryiko
  0 siblings, 0 replies; 34+ messages in thread
From: Andrii Nakryiko @ 2021-09-23 18:48 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Alexei Starovoitov, David S . Miller, Daniel Borkmann,
	Andrii Nakryiko, John Fastabend, mcroce, bpf, Kernel Team

On Thu, Sep 23, 2021 at 4:22 AM Lorenz Bauer <lmb@cloudflare.com> wrote:
>
> On Fri, 17 Sept 2021 at 22:57, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > Prototype of bpf_core_apply_relo() helper.
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>
> ...
>
> > @@ -6313,4 +6321,10 @@ enum bpf_core_relo_kind {
> >         BPF_CORE_ENUMVAL_VALUE = 11,         /* enum value integer value */
> >  };
> >
> > +struct bpf_core_relo_desc {
> > +       __u32 type_id;
> > +       __u32 access_str_off;
> > +       enum bpf_core_relo_kind kind;
>
> Not a C expert, I thought enums don't have a fixed size?

They are backed by int (4 bytes) in C, unless the values don't fit into
an int, in which case it's long long (or the unsigned variants of int or
long long). The only exception is when the enum is used in a packed
struct: then the field gets allocated only as much space as necessary to
hold all the values (so it could be one or two bytes). You can also have
a bitfield-backed enum field in a struct, but the enum type itself is
still going to occupy a whole, power-of-two number of bytes.

So long story short, here it's 4 bytes and will stay 4 bytes.

C++ gives more flexibility and you can actually specify backing
integer type. Not the case for C, though.

>
> --
> Lorenz Bauer  |  Systems Engineer
> 6th Floor, County Hall/The Riverside Building, SE1 7PB, UK
>
> www.cloudflare.com


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-23 11:33 ` [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Lorenz Bauer
@ 2021-09-23 18:52   ` Andrii Nakryiko
  2021-09-24 23:13   ` Alexei Starovoitov
  1 sibling, 0 replies; 34+ messages in thread
From: Andrii Nakryiko @ 2021-09-23 18:52 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Alexei Starovoitov, David S . Miller, Daniel Borkmann,
	Andrii Nakryiko, John Fastabend, mcroce, bpf, Kernel Team

On Thu, Sep 23, 2021 at 4:34 AM Lorenz Bauer <lmb@cloudflare.com> wrote:
>
> On Fri, 17 Sept 2021 at 22:57, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > Hi All,
> >
> > This is a very early RFC that introduces CO-RE support in the kernel.
> > There are several reasons to add such support:
> > 1. It's a step toward signed BPF programs.
> > 2. It allows Go-like languages that struggle to adopt libbpf
> >    to take advantage of CO-RE powers.
> > 3. Currently the field accessed by an 'ldx [R1 + 10]' insn is recognized
> >    by the verifier purely based on the +10 offset. If R1 points to a union
> >    the verifier picks one of the fields at this offset.
> >    With CO-RE the kernel can disambiguate the field access.
> >
> > This set wires all relevant pieces and passes two selftests with CO-RE
> > in the kernel.
> >
> > The main goal of this RFC is to get feedback on patch 3.
> > It passes CO-RE relocations into the kernel via the bpf_core_apply_relo()
> > helper that is called by the loader program.
> > It works, but there is no clean way to add error reporting here.
> > So I'm thinking that the better approach would be to pass an array
> > of 'struct bpf_core_relo_desc' into the PROG_LOAD command, similar to
> > how func_info and line_info are passed.
> > Such an approach would allow for use case 3 above (which
> > the current approach in patch 3 doesn't support).
>
> +1 to having good error reporting, it's hard to debug CO-RE failures
> as they are. PROG_LOAD seems nice, provided that relocation happens
> before verification.
>
> Some questions:
> * How can this handle kernels that don't have built-in BTF? Not a
> problem for myself, but some people have to deal with BTF-less distro
> kernels by using pahole to generate external BTF from debug symbols.
> Can we accommodate that?
> * Does in-kernel CO-RE need to account for packed structs w/ bitfields
> in them? If relocation happens after verification this could be a
> problem: [1].

The way that CO-RE relocs for bitfields are implemented with libbpf is
through the use of 5 different relocation kinds. See
BPF_CORE_READ_BITFIELD() macro in bpf_core_read.h.

> * Tangent: libbpf CO-RE has this res->validate flag, which turns off
> some checks for bitfields. I've never fully understood why that is,
> maybe Andrii can explain it?

Because there is no single canonical set of those 5 relocated values
(the ones I mentioned above) that the compiler has to generate. There
are many ways to extract a bitfield, and the compiler can use different
ones depending on optimization and code-generation choices. So in
general it's ambiguous and impossible to validate that we agree with the
compiler. Generally we won't agree, but we will still do a correct
bitfield relocation. Again, I'll refer you to the
BPF_CORE_READ_BITFIELD() macro for details.


>
> Lorenz
>
> 1: https://lore.kernel.org/bpf/CACAyw9_R4_ib0KvcuQC4nSOy5+Hn8-Xq-G8geDdLsNztX=0Fsw@mail.gmail.com/
>
> --
> Lorenz Bauer  |  Systems Engineer
> 6th Floor, County Hall/The Riverside Building, SE1 7PB, UK
>
> www.cloudflare.com


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-23 11:33 ` [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Lorenz Bauer
  2021-09-23 18:52   ` Andrii Nakryiko
@ 2021-09-24 23:13   ` Alexei Starovoitov
  2021-09-27 16:12     ` Lorenz Bauer
  1 sibling, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-24 23:13 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Thu, Sep 23, 2021 at 12:33:58PM +0100, Lorenz Bauer wrote:
> 
> Some questions:
> * How can this handle kernels that don't have built-in BTF? Not a
> problem for myself, but some people have to deal with BTF-less distro
> kernels by using pahole to generate external BTF from debug symbols.
> Can we accommodate that?

I think so, but it probably should be done as a generic feature:
"populate kernel BTF".
When the kernel wasn't compiled with BTF, there could be a way to
populate it after the fact. Just like we do sys_bpf(BTF_LOAD)
for a program's BTF, we can allow populating vmlinux BTF this way.
Unlike built-in BTF it wouldn't be trusted for certain verifier assumptions,
but it's better than nothing and more convenient than specifying a BTF file
on the side for every bpf prog load in the traditional libbpf style.


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-24 23:13   ` Alexei Starovoitov
@ 2021-09-27 16:12     ` Lorenz Bauer
  2021-09-27 16:50       ` Alexei Starovoitov
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenz Bauer @ 2021-09-27 16:12 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Sat, 25 Sept 2021 at 00:13, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Sep 23, 2021 at 12:33:58PM +0100, Lorenz Bauer wrote:
> >
> > Some questions:
> > * How can this handle kernels that don't have built-in BTF? Not a
> > problem for myself, but some people have to deal with BTF-less distro
> > kernels by using pahole to generate external BTF from debug symbols.
> > Can we accommodate that?
>
> I think so, but it probably should be done as a generic feature:
> "populate kernel BTF".
> When kernel wasn't compiled with BTF there could be a way to
> populate it with such. Just like we do sys_bpf(BTF_LOAD)
> for program's BTF we can allow populating vmlinux BTF this way.
> Unlike builtin BTF it wouldn't be trusted for certain verifier assumptions,
> but better than nothing and more convenient than specifying BTF file
> on a side for every bpf prog load with traditional libbpf style.

From my POV we already have an API for external BTF (and I think
libbpf does too?) but we would need a new API for "load kernel BTF".
Global state like this also doesn't work well across individual
processes. Imagine multiple programs on the system each trying to
replace the kernel BTF: how would that work? Which one wins? Being
able to give my own fd for the kernel BTF circumvents all those
problems and seems much cleaner to me.

Lorenz

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-27 16:12     ` Lorenz Bauer
@ 2021-09-27 16:50       ` Alexei Starovoitov
  2021-09-28  8:30         ` Lorenz Bauer
  0 siblings, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-27 16:50 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Mon, Sep 27, 2021 at 05:12:15PM +0100, Lorenz Bauer wrote:
> On Sat, 25 Sept 2021 at 00:13, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Thu, Sep 23, 2021 at 12:33:58PM +0100, Lorenz Bauer wrote:
> > >
> > > Some questions:
> > > * How can this handle kernels that don't have built-in BTF? Not a
> > > problem for myself, but some people have to deal with BTF-less distro
> > > kernels by using pahole to generate external BTF from debug symbols.
> > > Can we accommodate that?
> >
> > I think so, but it probably should be done as a generic feature:
> > "populate kernel BTF".
> > When kernel wasn't compiled with BTF there could be a way to
> > populate it with such. Just like we do sys_bpf(BTF_LOAD)
> > for program's BTF we can allow populating vmlinux BTF this way.
> > Unlike builtin BTF it wouldn't be trusted for certain verifier assumptions,
> > but better than nothing and more convenient than specifying BTF file
> > on a side for every bpf prog load with traditional libbpf style.
> 
> From my POV we already have an API for external BTF (and I think
> libbpf does too?) but would need a new API for "load kernel BTF".
> Global state like this also doesn't work well for several individual
> processes. Imagine multiple programs on the system trying to each
> replace the kernel BTF, how would that work? Which one wins? 

There can be only one kernel BTF, of course.
I don't expect progs to update the kernel BTF when they start.
It's more of an admin/chef job when the kernel boots,
and only for the cases when the kernel was somehow compiled without BTF.

> Being
> able to give my own fd for kernel BTF circumvents all those problems
> and seems much cleaner to me.

You mean to pass kernel BTF's fd to the kernel?
It's doable, but I don't quite see the operational side of it.
If progs have to carry both their BTF and kernel BTF why would
they need CO-RE at all? If they were compiled with given kernel BTF
there is no need to adjust offsets for the given host.
I suspect I simply don't understand your use case :)


* Re: [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn().
  2021-09-17 21:57 ` [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn() Alexei Starovoitov
  2021-09-21 21:34   ` Andrii Nakryiko
@ 2021-09-27 18:04   ` Matteo Croce
  1 sibling, 0 replies; 34+ messages in thread
From: Matteo Croce @ 2021-09-27 18:04 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

On Fri, 17 Sep 2021 14:57:15 -0700
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

> From: Alexei Starovoitov <ast@kernel.org>
> +	err = bpf_core_apply_relo_insn("prog_name", insn, 0,
> +				       &core_relo, 0, btf, cands);
> +	btf_put(btf);
> +	return 0;

Why not return err here?

-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-27 16:50       ` Alexei Starovoitov
@ 2021-09-28  8:30         ` Lorenz Bauer
  2021-09-28 16:35           ` Alexei Starovoitov
  0 siblings, 1 reply; 34+ messages in thread
From: Lorenz Bauer @ 2021-09-28  8:30 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Mon, 27 Sept 2021 at 17:50, Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Mon, Sep 27, 2021 at 05:12:15PM +0100, Lorenz Bauer wrote:
> > On Sat, 25 Sept 2021 at 00:13, Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:
> > >
> > > On Thu, Sep 23, 2021 at 12:33:58PM +0100, Lorenz Bauer wrote:
> > > >
> > > > Some questions:
> > > > * How can this handle kernels that don't have built-in BTF? Not a
> > > > problem for myself, but some people have to deal with BTF-less distro
> > > > kernels by using pahole to generate external BTF from debug symbols.
> > > > Can we accommodate that?
> > >
> > > I think so, but it probably should be done as a generic feature:
> > > "populate kernel BTF".
> > > When kernel wasn't compiled with BTF there could be a way to
> > > populate it with such. Just like we do sys_bpf(BTF_LOAD)
> > > for program's BTF we can allow populating vmlinux BTF this way.
> > > Unlike builtin BTF it wouldn't be trusted for certain verifier assumptions,
> > > but better than nothing and more convenient than specifying BTF file
> > > on a side for every bpf prog load with traditional libbpf style.
> >
> > From my POV we already have an API for external BTF (and I think
> > libbpf does too?) but would need a new API for "load kernel BTF".
> > Global state like this also doesn't work well for several individual
> > processes. Imagine multiple programs on the system trying to each
> > replace the kernel BTF, how would that work? Which one wins?
>
> The kernel BTF can be only one, of course.
> I don't expect progs to update the kernel BTF when they start.
> It's more of the admin/chef job when kernel boots.
> Only for the cases when kernel somehow was compiled without BTF.
>
> > Being
> > able to give my own fd for kernel BTF circumvents all those problems
> > and seems much cleaner to me.
>
> You mean to pass kernel BTF's fd to the kernel?
> It's doable, but I don't quite see the operational side of it.
> If progs have to carry both their BTF and kernel BTF why would
> they need CO-RE at all? If they were compiled with given kernel BTF
> there is no need to adjust offsets for the given host.
> I suspect I simply don't understand your use case :)

This is the "distro ships without BTF" case that the aqua sec folks
have been grappling with, and for which btfhub is a solution. If the
distro disables BTF they are unlikely to perform this "admin" job in
the first place. So whose responsibility is it to load that BTF?
Currently it falls on the developers of the user space tooling to
provide alternative BTF. Only allowing a single replacement BTF makes
this difficult.

Here is why:
* Since external BTF is a thing, loaders today have to provide a way
to relocate against external BTF in a non-standard location. This
means loading the file from disk and then performing CO-RE using that
info.
* Users of the loader build a btfhub integration (or similar) and
provide a path to the external BTF during load. They do this because
they will have to support legacy kernels for years to come.
* Under my proposal, a loader can detect whether in-kernel CO-RE is
supported, load the BTF provided by the user into the kernel, and pass
that fd to PROG_LOAD.
* This is transparent to the user: they keep using their existing BTF
but get the benefit of canonical CO-RE resolution.

We don't have to introduce a new loader-side API to deal with this
situation. We also don't have to deal with a global resource that is
subject to the whims of the distro.

Lorenz

-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-17 21:57 ` [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty Alexei Starovoitov
  2021-09-21 21:25   ` Andrii Nakryiko
@ 2021-09-28 14:45   ` Matteo Croce
  2021-09-28 16:37     ` Alexei Starovoitov
  1 sibling, 1 reply; 34+ messages in thread
From: Matteo Croce @ 2021-09-28 14:45 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

On Fri, 17 Sep 2021 14:57:12 -0700
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

> From: Alexei Starovoitov <ast@kernel.org>
> 
> Make relo_core.c to be compiled with kernel and with libbpf.
> 
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---

I gave it a try with a sample CO-RE program.
I don't know how many of the debug prints will stay in the final work,
but they come out borked because printk starts a new line on each call
unless KERN_CONT is used. I managed to get decent output like:

[   36.154379] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [24] STRUCT net_device.ifindex (0:17 @ offset 208)
[   36.154399] libbpf: prog 'prog_name': relo #0: matching candidate #0 [2617] STRUCT net_device.ifindex (0:17 @ offset 208)
[   36.154524] libbpf: prog 'prog_name': relo #0: patched insn #0 (LDX/ST/STX) off 208 -> 208
[   36.155319] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [24] STRUCT net_device.ifindex (0:17 @ offset 208)

With this change:

--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -79,6 +79,9 @@ do {				\
 #include "btf.h"
 #include "str_error.h"
 #include "libbpf_internal.h"
+
+#define KERN_CONT
+
 #endif
 
 #define BPF_CORE_SPEC_MAX_LEN 32
@@ -1125,7 +1128,7 @@ static void bpf_core_dump_spec(int level, const struct bpf_core_spec *spec)
 	t = btf__type_by_id(spec->btf, type_id);
 	s = btf__name_by_offset(spec->btf, t->name_off);
 
-	libbpf_print(level, "[%u] %s %s", type_id, btf_kind_str(t), str_is_empty(s) ? "<anon>" : s);
+	libbpf_print(level, KERN_CONT "[%u] %s %s", type_id, btf_kind_str(t), str_is_empty(s) ? "<anon>" : s);
 
 	if (core_relo_is_type_based(spec->relo_kind))
 		return;
@@ -1135,29 +1138,33 @@ static void bpf_core_dump_spec(int level, const struct bpf_core_spec *spec)
 		e = btf_enum(t) + spec->raw_spec[0];
 		s = btf__name_by_offset(spec->btf, e->name_off);
 
-		libbpf_print(level, "::%s = %u", s, e->val);
+		libbpf_print(level, KERN_CONT "::%s = %u", s, e->val);
 		return;
 	}
 
 	if (core_relo_is_field_based(spec->relo_kind)) {
 		for (i = 0; i < spec->len; i++) {
 			if (spec->spec[i].name)
-				libbpf_print(level, ".%s", spec->spec[i].name);
+				libbpf_print(level, KERN_CONT ".%s", spec->spec[i].name);
 			else if (i > 0 || spec->spec[i].idx > 0)
-				libbpf_print(level, "[%u]", spec->spec[i].idx);
+				libbpf_print(level, KERN_CONT "[%u]", spec->spec[i].idx);
 		}
 
-		libbpf_print(level, " (");
+		libbpf_print(level, KERN_CONT " (");
 		for (i = 0; i < spec->raw_len; i++)
-			libbpf_print(level, "%s%d", i == 0 ? "" : ":", spec->raw_spec[i]);
+			libbpf_print(level, KERN_CONT "%s%d", i == 0 ? "" : ":", spec->raw_spec[i]);
 
 		if (spec->bit_offset % 8)
-			libbpf_print(level, " @ offset %u.%u)",
+			libbpf_print(level, KERN_CONT " @ offset %u.%u)",
 				     spec->bit_offset / 8, spec->bit_offset % 8);
 		else
-			libbpf_print(level, " @ offset %u)", spec->bit_offset / 8);
+			libbpf_print(level, KERN_CONT " @ offset %u)", spec->bit_offset / 8);
 		return;
 	}
+
+#ifndef RELO_CORE
+	libbpf_print(level, KERN_CONT "\n");
+#endif
 }
 
 /*
@@ -1250,7 +1257,6 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 	pr_debug("prog '%s': relo #%d: kind <%s> (%d), spec is ", prog_name,
 		 relo_idx, core_relo_kind_str(relo->kind), relo->kind);
 	bpf_core_dump_spec(LIBBPF_DEBUG, &local_spec);
-	libbpf_print(LIBBPF_DEBUG, "\n");
 
 	/* TYPE_ID_LOCAL relo is special and doesn't need candidate search */
 	if (relo->kind == BPF_CORE_TYPE_ID_LOCAL) {
@@ -1283,7 +1289,6 @@ int bpf_core_apply_relo_insn(const char *prog_name, struct bpf_insn *insn,
 		pr_debug("prog '%s': relo #%d: %s candidate #%d ", prog_name,
 			 relo_idx, err == 0 ? "non-matching" : "matching", i);
 		bpf_core_dump_spec(LIBBPF_DEBUG, &cand_spec);
-		libbpf_print(LIBBPF_DEBUG, "\n");
 
 		if (err == 0)
 			continue;


Regards,
-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel.
  2021-09-28  8:30         ` Lorenz Bauer
@ 2021-09-28 16:35           ` Alexei Starovoitov
  0 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-28 16:35 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: David S . Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, mcroce, bpf, Kernel Team

On Tue, Sep 28, 2021 at 09:30:23AM +0100, Lorenz Bauer wrote:
> On Mon, 27 Sept 2021 at 17:50, Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Mon, Sep 27, 2021 at 05:12:15PM +0100, Lorenz Bauer wrote:
> > > On Sat, 25 Sept 2021 at 00:13, Alexei Starovoitov
> > > <alexei.starovoitov@gmail.com> wrote:
> > > >
> > > > On Thu, Sep 23, 2021 at 12:33:58PM +0100, Lorenz Bauer wrote:
> > > > >
> > > > > Some questions:
> > > > > * How can this handle kernels that don't have built-in BTF? Not a
> > > > > problem for myself, but some people have to deal with BTF-less distro
> > > > > kernels by using pahole to generate external BTF from debug symbols.
> > > > > Can we accommodate that?
> > > >
> > > > I think so, but it probably should be done as a generic feature:
> > > > "populate kernel BTF".
> > > > When kernel wasn't compiled with BTF there could be a way to
> > > > populate it with such. Just like we do sys_bpf(BTF_LOAD)
> > > > for program's BTF we can allow populating vmlinux BTF this way.
> > > > Unlike builtin BTF it wouldn't be trusted for certain verifier assumptions,
> > > > but better than nothing and more convenient than specifying BTF file
> > > > on a side for every bpf prog load with traditional libbpf style.
> > >
> > > From my POV we already have an API for external BTF (and I think
> > > libbpf does too?) but would need a new API for "load kernel BTF".
> > > Global state like this also doesn't work well for several individual
> > > processes. Imagine multiple programs on the system trying to each
> > > replace the kernel BTF, how would that work? Which one wins?
> >
> > The kernel BTF can be only one, of course.
> > I don't expect progs to update the kernel BTF when they start.
> > It's more of the admin/chef job when kernel boots.
> > Only for the cases when kernel somehow was compiled without BTF.
> >
> > > Being
> > > able to give my own fd for kernel BTF circumvents all those problems
> > > and seems much cleaner to me.
> >
> > You mean to pass kernel BTF's fd to the kernel?
> > It's doable, but I don't quite see the operational side of it.
> > If progs have to carry both their BTF and kernel BTF why would
> > they need CO-RE at all? If they were compiled with given kernel BTF
> > there is no need to adjust offsets for the given host.
> > I suspect I simply don't understand your use case :)
> 
> This is the "distro ships without BTF" case that the aqua sec folks
> have been grappling with, and for which btfhub is a solution. If the
> distro disables BTF they are unlikely to perform this "admin" job in
> the first place. So whose responsibility is it to load that BTF?
> Currently it falls on the developers of the user space tooling to
> provide alternative BTF. Only allowing a single replacement BTF makes
> this difficult.

There is only one BTF that matches the kernel. If one were buggy
(due to a pahole/compiler issue) it would be replaced with a fixed one.
I can see a case where two vmlinux BTFs would be used for testing:
the kernel compiled with clang produces one BTF and the kernel
compiled with gcc->pahole produces another BTF, but the vmlinux would
be different too. So admins/users should be using the BTF that
matches the kernel.

> Here is why:
> * Since external BTF is a thing, loaders today have to provide a way
> to relocate against external BTF in a non-standard location. This
> means loading the file from disk and then performing CO-RE using that
> info.
> * Users of the loader build a btfhub integration (or similar) and
> provide a path to the external BTF during load. They do this because
> they will have to support legacy kernels for years to come.
> * Under my proposal, a loader can detect whether in-kernel CO-RE is
> supported, load the BTF provided by the user into the kernel, and pass
> that fd to PROG_LOAD.
> * This is transparent to the user: they keep using their existing BTF
> but get the benefit of canonical CO-RE resolution.
> 
> We don't have to introduce a new loader-side API to deal with this
> situation. We also don't have to deal with a global resource that is
> subject to the whims of the distro.

I agree with all of the above. It's easy to add 'target_vmlinux_btf_fd'
to PROG_LOAD and let CO-RE in the kernel use that, but the kernel also
has dynamically loaded modules, and in-kernel CO-RE does search through
their BTF. Module BTFs would not be supported in that case. I think
that's an acceptable limitation.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-28 14:45   ` Matteo Croce
@ 2021-09-28 16:37     ` Alexei Starovoitov
  2021-09-28 17:11       ` Matteo Croce
  0 siblings, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-28 16:37 UTC (permalink / raw)
  To: Matteo Croce
  Cc: davem, daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

On Tue, Sep 28, 2021 at 04:45:15PM +0200, Matteo Croce wrote:
> On Fri, 17 Sep 2021 14:57:12 -0700
> Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> 
> > From: Alexei Starovoitov <ast@kernel.org>
> > 
> > Make relo_core.c to be compiled with kernel and with libbpf.
> > 
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> > ---
> 
> I gave it a try with a sample CO-RE program.

Thanks for testing!

> I don't know how many of them will stay in the final version, but the
> debug prints are garbled because each printk call without KERN_CONT
> starts a new line.
...
> -	libbpf_print(level, "[%u] %s %s", type_id, btf_kind_str(t), str_is_empty(s) ? "<anon>" : s);
> +	libbpf_print(level, KERN_CONT "[%u] %s %s", type_id, btf_kind_str(t), str_is_empty(s) ? "<anon>" : s);

Right. It's a known limitation of the helper approach.
I'm currently refactoring all the prints to go through the verifier log,
so all messages will be neat and clean :)


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-28 16:37     ` Alexei Starovoitov
@ 2021-09-28 17:11       ` Matteo Croce
  2021-09-28 20:34         ` Alexei Starovoitov
  0 siblings, 1 reply; 34+ messages in thread
From: Matteo Croce @ 2021-09-28 17:11 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: davem, daniel, andrii, john.fastabend, lmb, mcroce, bpf, kernel-team

On Tue, 28 Sep 2021 09:37:30 -0700
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

> On Tue, Sep 28, 2021 at 04:45:15PM +0200, Matteo Croce wrote:
> > [...]
> > I give it a try with a sample co-re program.
> 
> Thanks for testing!
> 

I just found an error during the relocations.
It was hidden because bpf_core_apply_relo() always returns
success[1].

I have a BPF object with the following programs:

#if 0
SEC("tp_btf/xdp_devmap_xmit")
int BPF_PROG(tp_xdp_devmap_xmit_multi, const struct net_device *from_dev,
             const struct net_device *to_dev, int sent, int drops, int err)

{
        randmap(from_dev->ifindex);
        return 0;
}
#endif

SEC("fentry/eth_type_trans")
int BPF_PROG(fentry_eth_type_trans, struct sk_buff *skb,
             struct net_device *dev, unsigned short protocol)
{
        randmap(dev->ifindex + skb->len);
        return 0;
}

SEC("fexit/eth_type_trans")
int BPF_PROG(fexit_eth_type_trans, struct sk_buff *skb,
             struct net_device *dev, unsigned short protocol)
{
        randmap(dev->ifindex + skb->len);
        return 0;
}


randmap() just writes the value into a random map. If I keep the #if 0,
everything works; if I remove it, so that tp_btf/xdp_devmap_xmit is
built too, I get this:

[ 3619.229378] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [24] STRUCT net_device.ifindex (0:17 @ offset 208)
[ 3619.229384] libbpf: prog 'prog_name': relo #0: matching candidate #0 [2617] STRUCT net_device.ifindex (0:17 @ offset 208)
[ 3619.229538] libbpf: prog 'prog_name': relo #0: patched insn #0 (LDX/ST/STX) off 208 -> 208
[ 3619.230278] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [87] STRUCT sk_buff.len (0:5 @ offset 104)
[ 3619.230282] libbpf: prog 'prog_name': relo #0: matching candidate #0 [2660] STRUCT sk_buff.len (0:5 @ offset 104)
[ 3619.230393] libbpf: prog 'prog_name': relo #0: trying to relocate unrecognized insn #0, code:0x85, src:0x0, dst:0x0, off:0x0, imm:0x7
[ 3619.230562] libbpf: prog 'prog_name': relo #0: failed to patch insn #0: -22

The program in tp_btf/xdp_devmap_xmit makes the relocations in
another section fail; note that sk_buff.len is used in the fentry
program.

Ideas?

[1] https://lore.kernel.org/bpf/20210927200410.460e014f@linux.microsoft.com/

Regards,
-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-28 17:11       ` Matteo Croce
@ 2021-09-28 20:34         ` Alexei Starovoitov
  2021-09-29 12:32           ` Matteo Croce
  0 siblings, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-28 20:34 UTC (permalink / raw)
  To: Matteo Croce
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, mcroce, bpf, Kernel Team

On Tue, Sep 28, 2021 at 10:11 AM Matteo Croce
<mcroce@linux.microsoft.com> wrote:
>
> [...]
>
> The program in tp_btf/xdp_devmap_xmit makes the relocations into
> another section fail, note that sk_buff.len is used in the fentry
> program.
>
> Ideas?

I'll take a look. Could you provide the full .c file?


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-28 20:34         ` Alexei Starovoitov
@ 2021-09-29 12:32           ` Matteo Croce
  2021-09-29 17:38             ` Matteo Croce
  0 siblings, 1 reply; 34+ messages in thread
From: Matteo Croce @ 2021-09-29 12:32 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, Matteo Croce, bpf, Kernel Team

On Tue, Sep 28, 2021 at 10:35 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> > [...]
> >
> > The program in tp_btf/xdp_devmap_xmit makes the relocations into
> > another section fail, note that sk_buff.len is used in the fentry
> > program.
> >
> > Ideas?
>
> I'll take a look. Could you provide the full .c file?

Sure. I put everything in this repo:

https://gist.github.com/teknoraver/2855e0f8770d1363b57d683fa32bccc3

tp_btf/xdp_devmap_xmit is the program that makes the others fail.

-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-29 12:32           ` Matteo Croce
@ 2021-09-29 17:38             ` Matteo Croce
  2021-09-29 23:00               ` Alexei Starovoitov
  0 siblings, 1 reply; 34+ messages in thread
From: Matteo Croce @ 2021-09-29 17:38 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, Matteo Croce, bpf, Kernel Team

On Wed, 29 Sep 2021 14:32:37 +0200
Matteo Croce <mcroce@linux.microsoft.com> wrote:

> On Tue, Sep 28, 2021 at 10:35 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > [...]
> >
> > I'll take a look. Could you provide the full .c file?
> 
> Sure. I put everything in this repo:
> 
> https://gist.github.com/teknoraver/2855e0f8770d1363b57d683fa32bccc3
> 
> tp_btf/xdp_devmap_xmit is the program which lets the other fail.
> 

I enabled debugging in userspace libbpf to compare the two outputs.

Userspace libbpf:

libbpf: CO-RE relocating [0] struct net_device: found target candidate [2617] struct net_device in [vmlinux]
libbpf: prog 'tp_xdp_devmap_xmit_multi': relo #0: kind <byte_off> (0), spec is [24] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'tp_xdp_devmap_xmit_multi': relo #0: matching candidate #0 [2617] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'tp_xdp_devmap_xmit_multi': relo #0: patched insn #2 (LDX/ST/STX) off 208 -> 208
libbpf: prog 'tp_xdp_devmap_xmit_multi': relo #1: kind <byte_off> (0), spec is [24] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'tp_xdp_devmap_xmit_multi': relo #1: matching candidate #0 [2617] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'tp_xdp_devmap_xmit_multi': relo #1: patched insn #3 (LDX/ST/STX) off 208 -> 208
libbpf: sec 'fentry/eth_type_trans': found 2 CO-RE relocations
libbpf: CO-RE relocating [0] struct sk_buff: found target candidate [2660] struct sk_buff in [vmlinux]
libbpf: prog 'fentry_eth_type_trans': relo #0: kind <byte_off> (0), spec is [87] struct sk_buff.len (0:5 @ offset 104)
libbpf: prog 'fentry_eth_type_trans': relo #0: matching candidate #0 [2660] struct sk_buff.len (0:5 @ offset 104)
libbpf: prog 'fentry_eth_type_trans': relo #0: patched insn #2 (LDX/ST/STX) off 104 -> 104
libbpf: prog 'fentry_eth_type_trans': relo #1: kind <byte_off> (0), spec is [24] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'fentry_eth_type_trans': relo #1: matching candidate #0 [2617] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'fentry_eth_type_trans': relo #1: patched insn #3 (LDX/ST/STX) off 208 -> 208
libbpf: sec 'fexit/eth_type_trans': found 2 CO-RE relocations
libbpf: prog 'fexit_eth_type_trans': relo #0: kind <byte_off> (0), spec is [87] struct sk_buff.len (0:5 @ offset 104)
libbpf: prog 'fexit_eth_type_trans': relo #0: matching candidate #0 [2660] struct sk_buff.len (0:5 @ offset 104)
libbpf: prog 'fexit_eth_type_trans': relo #0: patched insn #2 (LDX/ST/STX) off 104 -> 104
libbpf: prog 'fexit_eth_type_trans': relo #1: kind <byte_off> (0), spec is [24] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'fexit_eth_type_trans': relo #1: matching candidate #0 [2617] struct net_device.ifindex (0:17 @ offset 208)
libbpf: prog 'fexit_eth_type_trans': relo #1: patched insn #3 (LDX/ST/STX) off 208 -> 208

Kernel implementation:

[13234.650397] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [24] STRUCT net_device.ifindex (0:17 @ offset 208)
[13234.650406] libbpf: prog 'prog_name': relo #0: matching candidate #0 [2617] STRUCT net_device.ifindex (0:17 @ offset 208)
[13234.650558] libbpf: prog 'prog_name': relo #0: patched insn #0 (LDX/ST/STX) off 208 -> 208
[13234.651251] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [24] STRUCT net_device.ifindex (0:17 @ offset 208)
[13234.651255] libbpf: prog 'prog_name': relo #0: matching candidate #0 [2617] STRUCT net_device.ifindex (0:17 @ offset 208)
[13234.651349] libbpf: prog 'prog_name': relo #0: patched insn #0 (LDX/ST/STX) off 208 -> 208
[13234.651935] libbpf: prog 'prog_name': relo #0: kind <byte_off> (0), spec is [87] STRUCT sk_buff.len (0:5 @ offset 104)
[13234.651939] libbpf: prog 'prog_name': relo #0: matching candidate #0 [2660] STRUCT sk_buff.len (0:5 @ offset 104)
[13234.652001] libbpf: prog 'prog_name': relo #0: unexpected insn #0 (LDX/ST/STX) value: got 208, exp 104 -> 104
[13234.652105] libbpf: prog 'prog_name': relo #0: failed to patch insn #0: -22

sk_buff.len gets the wrong offset, 208 instead of 104. Could it be a
leftover value from the previous relocation?

Regards,
-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-29 17:38             ` Matteo Croce
@ 2021-09-29 23:00               ` Alexei Starovoitov
  2021-09-29 23:49                 ` Matteo Croce
  0 siblings, 1 reply; 34+ messages in thread
From: Alexei Starovoitov @ 2021-09-29 23:00 UTC (permalink / raw)
  To: Matteo Croce
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, Matteo Croce, bpf, Kernel Team

On Wed, Sep 29, 2021 at 10:39 AM Matteo Croce
<mcroce@linux.microsoft.com> wrote:
> > >
> > > I'll take a look. Could you provide the full .c file?
> >
> > Sure. I put everything in this repo:
> >
> > https://gist.github.com/teknoraver/2855e0f8770d1363b57d683fa32bccc3

This gist is not a reproducer. It doesn't have a single CO-RE relo.

But I've hacked it with dev->ifindex like in your email above
and managed to repro.
My error is different though:
[ 1127.634633] libbpf: prog 'prog_name': relo #0: trying to relocate
unrecognized insn #0, code:0x85, src:0x0, dst:0x0, off:0x0, imm:0x7
[ 1127.636003] libbpf: prog 'prog_name': relo #0: failed to patch insn #0: -22

But there is a bug. Debugging...


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-29 23:00               ` Alexei Starovoitov
@ 2021-09-29 23:49                 ` Matteo Croce
  2021-10-22  0:48                   ` Matteo Croce
  0 siblings, 1 reply; 34+ messages in thread
From: Matteo Croce @ 2021-09-29 23:49 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, Matteo Croce, bpf, Kernel Team

On Thu, Sep 30, 2021 at 1:01 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Sep 29, 2021 at 10:39 AM Matteo Croce
> <mcroce@linux.microsoft.com> wrote:
> > > >
> > > > I'll take a look. Could you provide the full .c file?
> > >
> > > Sure. I put everything in this repo:
> > >
> > > https://gist.github.com/teknoraver/2855e0f8770d1363b57d683fa32bccc3
>
> This gist is not a reproducer. It doesn't have a single CO-RE relo.
>
> But I've hacked it with dev->ifindex like in your email above
> and managed to repro.
> My error is different though:
> [ 1127.634633] libbpf: prog 'prog_name': relo #0: trying to relocate
> unrecognized insn #0, code:0x85, src:0x0, dst:0x0, off:0x0, imm:0x7
> [ 1127.636003] libbpf: prog 'prog_name': relo #0: failed to patch insn #0: -22
>
> But there is a bug. Debugging...

Oops, I forgot to force push, sorry.
I've updated the gist, even though you managed to reproduce a similar error.

Regards,
-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-09-29 23:49                 ` Matteo Croce
@ 2021-10-22  0:48                   ` Matteo Croce
  2021-10-22  0:51                     ` Alexei Starovoitov
  0 siblings, 1 reply; 34+ messages in thread
From: Matteo Croce @ 2021-10-22  0:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, Matteo Croce, bpf, Kernel Team

On Thu, Sep 30, 2021 at 1:49 AM Matteo Croce <mcroce@linux.microsoft.com> wrote:
>
> On Thu, Sep 30, 2021 at 1:01 AM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> > [...]
> >
> > This gist is not a reproducer. It doesn't have a single CO-RE relo.
> >
> > But I've hacked it with dev->ifindex like in your email above
> > and managed to repro.
> > My error is different though:
> > [ 1127.634633] libbpf: prog 'prog_name': relo #0: trying to relocate
> > unrecognized insn #0, code:0x85, src:0x0, dst:0x0, off:0x0, imm:0x7
> > [ 1127.636003] libbpf: prog 'prog_name': relo #0: failed to patch insn #0: -22
> >
> > But there is a bug. Debugging...
>
> Oops, I forget to force push, sorry..
> I've updated the gist, even if you managed to reproduce a similar error.
>
> Regards,
> --
> per aspera ad upstream

Hi Alexei,

Did you find out anything?

I posted an RFC for eBPF signatures, which depends on this series.

Regards,
-- 
per aspera ad upstream


* Re: [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty.
  2021-10-22  0:48                   ` Matteo Croce
@ 2021-10-22  0:51                     ` Alexei Starovoitov
  0 siblings, 0 replies; 34+ messages in thread
From: Alexei Starovoitov @ 2021-10-22  0:51 UTC (permalink / raw)
  To: Matteo Croce
  Cc: David S. Miller, Daniel Borkmann, Andrii Nakryiko,
	John Fastabend, Lorenz Bauer, Matteo Croce, bpf, Kernel Team

On Thu, Oct 21, 2021 at 5:49 PM Matteo Croce <mcroce@linux.microsoft.com> wrote:
> I posted an RFC for the eBPF signature which depends on this series.

you need to wait.


end of thread, other threads:[~2021-10-22  0:51 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-17 21:57 [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 01/10] bpf: Prepare relo_core.c for kernel duty Alexei Starovoitov
2021-09-21 21:25   ` Andrii Nakryiko
2021-09-28 14:45   ` Matteo Croce
2021-09-28 16:37     ` Alexei Starovoitov
2021-09-28 17:11       ` Matteo Croce
2021-09-28 20:34         ` Alexei Starovoitov
2021-09-29 12:32           ` Matteo Croce
2021-09-29 17:38             ` Matteo Croce
2021-09-29 23:00               ` Alexei Starovoitov
2021-09-29 23:49                 ` Matteo Croce
2021-10-22  0:48                   ` Matteo Croce
2021-10-22  0:51                     ` Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 02/10] bpf: Define enum bpf_core_relo_kind as uapi Alexei Starovoitov
2021-09-21 21:27   ` Andrii Nakryiko
2021-09-17 21:57 ` [PATCH RFC bpf-next 03/10] bpf: Add proto of bpf_core_apply_relo() Alexei Starovoitov
2021-09-23 11:21   ` Lorenz Bauer
2021-09-23 18:48     ` Andrii Nakryiko
2021-09-17 21:57 ` [PATCH RFC bpf-next 04/10] bpf: Add bpf_core_add_cands() and wire it into bpf_core_apply_relo_insn() Alexei Starovoitov
2021-09-21 21:34   ` Andrii Nakryiko
2021-09-27 18:04   ` Matteo Croce
2021-09-17 21:57 ` [PATCH RFC bpf-next 05/10] libbpf: Use CO-RE in the kernel in light skeleton Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 06/10] libbpf: Make gen_loader data aligned Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 07/10] libbpf: Support init of inner maps in light skeleton Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 08/10] selftests/bpf: Convert kfunc test with CO-RE to lskel Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 09/10] selftests/bpf: Improve inner_map test coverage Alexei Starovoitov
2021-09-17 21:57 ` [PATCH RFC bpf-next 10/10] selftests/bpf: Convert map_ptr_kern test to use light skeleton Alexei Starovoitov
2021-09-23 11:33 ` [PATCH RFC bpf-next 00/10] bpf: CO-RE support in the kernel Lorenz Bauer
2021-09-23 18:52   ` Andrii Nakryiko
2021-09-24 23:13   ` Alexei Starovoitov
2021-09-27 16:12     ` Lorenz Bauer
2021-09-27 16:50       ` Alexei Starovoitov
2021-09-28  8:30         ` Lorenz Bauer
2021-09-28 16:35           ` Alexei Starovoitov
