* [PATCH bpf-next 00/10] CO-RE offset relocations
@ 2019-07-24 19:27 Andrii Nakryiko
  2019-07-24 19:27 ` [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading Andrii Nakryiko
                   ` (10 more replies)
  0 siblings, 11 replies; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

This patch set implements the central part of CO-RE (Compile Once - Run
Everywhere; see [0] and [1] for slides and video): relocating field offsets.
Most of the details are written down as comments on the corresponding parts of
the code.

Patch #1 adds loading of the .BTF.ext offset relocations section and macros to
work with its contents.
Patch #2 implements the CO-RE relocation algorithm in libbpf.
Patches #3-#10 add selftests validating various parts of relocation handling,
type compatibility, etc.

For all tests to work, you'll need the latest Clang/LLVM supporting the
__builtin_preserve_access_index intrinsic, which is used for recording offset
relocations. The kernel on which the selftests run should have BTF information
built in (CONFIG_DEBUG_INFO_BTF=y).

  [0] http://vger.kernel.org/bpfconf2019.html#session-2
  [1] http://vger.kernel.org/lpc-bpf2018.html#session-2

Andrii Nakryiko (10):
  libbpf: add .BTF.ext offset relocation section loading
  libbpf: implement BPF CO-RE offset relocation algorithm
  selftests/bpf: add CO-RE relocs testing setup
  selftests/bpf: add CO-RE relocs struct flavors tests
  selftests/bpf: add CO-RE relocs nesting tests
  selftests/bpf: add CO-RE relocs array tests
  selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
  selftests/bpf: add CO-RE relocs modifiers/typedef tests
  selftests/bpf: add CO-RE relocs ptr-as-array tests
  selftests/bpf: add CO-RE relocs ints tests

 tools/lib/bpf/btf.c                           |  64 +-
 tools/lib/bpf/btf.h                           |   4 +
 tools/lib/bpf/libbpf.c                        | 866 +++++++++++++++++-
 tools/lib/bpf/libbpf.h                        |   1 +
 tools/lib/bpf/libbpf_internal.h               |  91 ++
 .../selftests/bpf/prog_tests/core_reloc.c     | 363 ++++++++
 .../bpf/progs/btf__core_reloc_arrays.c        |   3 +
 .../btf__core_reloc_arrays___diff_arr_dim.c   |   3 +
 ...btf__core_reloc_arrays___diff_arr_val_sz.c |   3 +
 .../btf__core_reloc_arrays___err_non_array.c  |   3 +
 ...btf__core_reloc_arrays___err_too_shallow.c |   3 +
 .../btf__core_reloc_arrays___err_too_small.c  |   3 +
 ..._core_reloc_arrays___err_wrong_val_type1.c |   3 +
 ..._core_reloc_arrays___err_wrong_val_type2.c |   3 +
 .../bpf/progs/btf__core_reloc_flavors.c       |   3 +
 .../btf__core_reloc_flavors__err_wrong_name.c |   3 +
 .../bpf/progs/btf__core_reloc_ints.c          |   3 +
 .../bpf/progs/btf__core_reloc_ints___bool.c   |   3 +
 .../btf__core_reloc_ints___err_bitfield.c     |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_16.c  |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_32.c  |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_64.c  |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_8.c   |   3 +
 .../btf__core_reloc_ints___reverse_sign.c     |   3 +
 .../bpf/progs/btf__core_reloc_mods.c          |   3 +
 .../progs/btf__core_reloc_mods___mod_swap.c   |   3 +
 .../progs/btf__core_reloc_mods___typedefs.c   |   3 +
 .../bpf/progs/btf__core_reloc_nesting.c       |   3 +
 .../btf__core_reloc_nesting___anon_embed.c    |   3 +
 ...f__core_reloc_nesting___dup_compat_types.c |   5 +
 ...core_reloc_nesting___err_array_container.c |   3 +
 ...tf__core_reloc_nesting___err_array_field.c |   3 +
 ...e_reloc_nesting___err_dup_incompat_types.c |   4 +
 ...re_reloc_nesting___err_missing_container.c |   3 +
 ...__core_reloc_nesting___err_missing_field.c |   3 +
 ..._reloc_nesting___err_nonstruct_container.c |   3 +
 ...e_reloc_nesting___err_partial_match_dups.c |   4 +
 .../btf__core_reloc_nesting___err_too_deep.c  |   3 +
 .../btf__core_reloc_nesting___extra_nesting.c |   3 +
 ..._core_reloc_nesting___struct_union_mixup.c |   3 +
 .../bpf/progs/btf__core_reloc_primitives.c    |   3 +
 ...f__core_reloc_primitives___diff_enum_def.c |   3 +
 ..._core_reloc_primitives___diff_func_proto.c |   3 +
 ...f__core_reloc_primitives___diff_ptr_type.c |   3 +
 ...tf__core_reloc_primitives___err_non_enum.c |   3 +
 ...btf__core_reloc_primitives___err_non_int.c |   3 +
 ...btf__core_reloc_primitives___err_non_ptr.c |   3 +
 .../bpf/progs/btf__core_reloc_ptr_as_arr.c    |   3 +
 .../btf__core_reloc_ptr_as_arr___diff_sz.c    |   3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 642 +++++++++++++
 .../bpf/progs/test_core_reloc_arrays.c        |  58 ++
 .../bpf/progs/test_core_reloc_flavors.c       |  65 ++
 .../bpf/progs/test_core_reloc_ints.c          |  48 +
 .../bpf/progs/test_core_reloc_kernel.c        |  39 +
 .../bpf/progs/test_core_reloc_mods.c          |  68 ++
 .../bpf/progs/test_core_reloc_nesting.c       |  48 +
 .../bpf/progs/test_core_reloc_primitives.c    |  50 +
 .../bpf/progs/test_core_reloc_ptr_as_arr.c    |  34 +
 58 files changed, 2527 insertions(+), 47 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/core_reloc.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
 create mode 100644 tools/testing/selftests/bpf/progs/core_reloc_types.h
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c

-- 
2.17.1



* [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-24 21:42   ` Andrii Nakryiko
  2019-07-25  0:00   ` Song Liu
  2019-07-24 19:27 ` [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm Andrii Nakryiko
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add support for loading the BPF CO-RE offset relocations section. Add
section/record iteration macros for .BTF.ext. These macros are useful for
iterating over each .BTF.ext record, either for dumping out contents or,
later, for BPF CO-RE relocation handling.

To enable other parts of libbpf to work with .BTF.ext contents, move a bunch
of type definitions into libbpf_internal.h.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/btf.c             | 64 +++++++++--------------
 tools/lib/bpf/btf.h             |  4 ++
 tools/lib/bpf/libbpf_internal.h | 91 +++++++++++++++++++++++++++++++++
 3 files changed, 118 insertions(+), 41 deletions(-)

diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 467224feb43b..4a36bc783848 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -42,47 +42,6 @@ struct btf {
 	int fd;
 };
 
-struct btf_ext_info {
-	/*
-	 * info points to the individual info section (e.g. func_info and
-	 * line_info) from the .BTF.ext. It does not include the __u32 rec_size.
-	 */
-	void *info;
-	__u32 rec_size;
-	__u32 len;
-};
-
-struct btf_ext {
-	union {
-		struct btf_ext_header *hdr;
-		void *data;
-	};
-	struct btf_ext_info func_info;
-	struct btf_ext_info line_info;
-	__u32 data_size;
-};
-
-struct btf_ext_info_sec {
-	__u32	sec_name_off;
-	__u32	num_info;
-	/* Followed by num_info * record_size number of bytes */
-	__u8	data[0];
-};
-
-/* The minimum bpf_func_info checked by the loader */
-struct bpf_func_info_min {
-	__u32   insn_off;
-	__u32   type_id;
-};
-
-/* The minimum bpf_line_info checked by the loader */
-struct bpf_line_info_min {
-	__u32	insn_off;
-	__u32	file_name_off;
-	__u32	line_off;
-	__u32	line_col;
-};
-
 static inline __u64 ptr_to_u64(const void *ptr)
 {
 	return (__u64) (unsigned long) ptr;
@@ -831,6 +790,9 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
 	/* The start of the info sec (including the __u32 record_size). */
 	void *info;
 
+	if (ext_sec->len == 0)
+		return 0;
+
 	if (ext_sec->off & 0x03) {
 		pr_debug(".BTF.ext %s section is not aligned to 4 bytes\n",
 		     ext_sec->desc);
@@ -934,6 +896,19 @@ static int btf_ext_setup_line_info(struct btf_ext *btf_ext)
 	return btf_ext_setup_info(btf_ext, &param);
 }
 
+static int btf_ext_setup_offset_reloc(struct btf_ext *btf_ext)
+{
+	struct btf_ext_sec_setup_param param = {
+		.off = btf_ext->hdr->offset_reloc_off,
+		.len = btf_ext->hdr->offset_reloc_len,
+		.min_rec_size = sizeof(struct bpf_offset_reloc),
+		.ext_info = &btf_ext->offset_reloc_info,
+		.desc = "offset_reloc",
+	};
+
+	return btf_ext_setup_info(btf_ext, &param);
+}
+
 static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
 {
 	const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
@@ -1004,6 +979,13 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
 	if (err)
 		goto done;
 
+	/* check if there are offset_reloc_off/offset_reloc_len fields */
+	if (btf_ext->hdr->hdr_len < sizeof(struct btf_ext_header))
+		goto done;
+	err = btf_ext_setup_offset_reloc(btf_ext);
+	if (err)
+		goto done;
+
 done:
 	if (err) {
 		btf_ext__free(btf_ext);
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index 88a52ae56fc6..287361ee1f6b 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -57,6 +57,10 @@ struct btf_ext_header {
 	__u32	func_info_len;
 	__u32	line_info_off;
 	__u32	line_info_len;
+
+	/* optional part of .BTF.ext header */
+	__u32	offset_reloc_off;
+	__u32	offset_reloc_len;
 };
 
 LIBBPF_API void btf__free(struct btf *btf);
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 2ac29bd36226..087ff512282f 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -46,4 +46,95 @@ do {				\
 int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
 			 const char *str_sec, size_t str_len);
 
+struct btf_ext_info {
+	/*
+	 * info points to the individual info section (e.g. func_info and
+	 * line_info) from the .BTF.ext. It does not include the __u32 rec_size.
+	 */
+	void *info;
+	__u32 rec_size;
+	__u32 len;
+};
+
+#define for_each_btf_ext_sec(seg, sec)					\
+	for (sec = (seg)->info;						\
+	     (void *)sec < (seg)->info + (seg)->len;			\
+	     sec = (void *)sec + sizeof(struct btf_ext_info_sec) +	\
+		   (seg)->rec_size * sec->num_info)
+
+#define for_each_btf_ext_rec(seg, sec, i, rec)				\
+	for (i = 0, rec = (void *)&(sec)->data;				\
+	     i < (sec)->num_info;					\
+	     i++, rec = (void *)rec + (seg)->rec_size)
+
+struct btf_ext {
+	union {
+		struct btf_ext_header *hdr;
+		void *data;
+	};
+	struct btf_ext_info func_info;
+	struct btf_ext_info line_info;
+	struct btf_ext_info offset_reloc_info;
+	__u32 data_size;
+};
+
+struct btf_ext_info_sec {
+	__u32	sec_name_off;
+	__u32	num_info;
+	/* Followed by num_info * record_size number of bytes */
+	__u8	data[0];
+};
+
+/* The minimum bpf_func_info checked by the loader */
+struct bpf_func_info_min {
+	__u32   insn_off;
+	__u32   type_id;
+};
+
+/* The minimum bpf_line_info checked by the loader */
+struct bpf_line_info_min {
+	__u32	insn_off;
+	__u32	file_name_off;
+	__u32	line_off;
+	__u32	line_col;
+};
+
+/* The minimum bpf_offset_reloc checked by the loader
+ *
+ * Offset relocation captures the following data:
+ * - insn_off - instruction offset (in bytes) within a BPF program that needs
+ *   its insn->imm field to be relocated with actual offset;
+ * - type_id - BTF type ID of the "root" (containing) entity of a relocatable
+ *   offset;
+ * - access_str_off - offset into corresponding .BTF string section. String
+ *   itself encodes an accessed field using a sequence of field and array
+ *   indices, separated by a colon (:). It's conceptually very close to LLVM's
+ *   getelementptr ([0]) instruction's arguments for identifying the offset to
+ *   a field.
+ *
+ * An example to provide a better feel:
+ *
+ *   struct sample {
+ *       int a;
+ *       struct {
+ *           int b[10];
+ *       };
+ *   };
+ *
+ *   struct sample *s = ...;
+ *   int x = &s->a;     // encoded as "0:0" (a is field #0)
+ *   int y = &s->b[5];  // encoded as "0:1:5" (b is field #1, arr elem #5)
+ *   int z = &s[10]->b; // encoded as "10:1" (ptr is used as an array)
+ *
+ * type_id for all relocs in this example will capture the BTF type ID of
+ * `struct sample`.
+ *
+ *   [0] https://llvm.org/docs/LangRef.html#getelementptr-instruction
+ */
+struct bpf_offset_reloc {
+	__u32   insn_off;
+	__u32   type_id;
+	__u32   access_str_off;
+};
+
 #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
-- 
2.17.1



* [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
  2019-07-24 19:27 ` [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-25 19:32   ` Song Liu
                     ` (2 more replies)
  2019-07-24 19:27 ` [PATCH bpf-next 03/10] selftests/bpf: add CO-RE relocs testing setup Andrii Nakryiko
                   ` (8 subsequent siblings)
  10 siblings, 3 replies; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

This patch implements the core logic for BPF CO-RE offset relocations.
All the details are described in code comments.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/libbpf.c | 866 ++++++++++++++++++++++++++++++++++++++++-
 tools/lib/bpf/libbpf.h |   1 +
 2 files changed, 861 insertions(+), 6 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 8741c39adb1c..86d87bf10d46 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -38,6 +38,7 @@
 #include <sys/stat.h>
 #include <sys/types.h>
 #include <sys/vfs.h>
+#include <sys/utsname.h>
 #include <tools/libc_compat.h>
 #include <libelf.h>
 #include <gelf.h>
@@ -47,6 +48,7 @@
 #include "btf.h"
 #include "str_error.h"
 #include "libbpf_internal.h"
+#include "hashmap.h"
 
 #ifndef EM_BPF
 #define EM_BPF 247
@@ -1013,16 +1015,22 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
 }
 
 static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
-						     __u32 id)
+						     __u32 id,
+						     __u32 *res_id)
 {
 	const struct btf_type *t = btf__type_by_id(btf, id);
 
+	if (res_id)
+		*res_id = id;
+
 	while (true) {
 		switch (BTF_INFO_KIND(t->info)) {
 		case BTF_KIND_VOLATILE:
 		case BTF_KIND_CONST:
 		case BTF_KIND_RESTRICT:
 		case BTF_KIND_TYPEDEF:
+			if (res_id)
+				*res_id = t->type;
 			t = btf__type_by_id(btf, t->type);
 			break;
 		default:
@@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
 static bool get_map_field_int(const char *map_name, const struct btf *btf,
 			      const struct btf_type *def,
 			      const struct btf_member *m, __u32 *res) {
-	const struct btf_type *t = skip_mods_and_typedefs(btf, m->type);
+	const struct btf_type *t = skip_mods_and_typedefs(btf, m->type, NULL);
 	const char *name = btf__name_by_offset(btf, m->name_off);
 	const struct btf_array *arr_info;
 	const struct btf_type *arr_t;
@@ -1107,7 +1115,7 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
 		return -EOPNOTSUPP;
 	}
 
-	def = skip_mods_and_typedefs(obj->btf, var->type);
+	def = skip_mods_and_typedefs(obj->btf, var->type, NULL);
 	if (BTF_INFO_KIND(def->info) != BTF_KIND_STRUCT) {
 		pr_warning("map '%s': unexpected def kind %u.\n",
 			   map_name, BTF_INFO_KIND(var->info));
@@ -2289,6 +2297,845 @@ bpf_program_reloc_btf_ext(struct bpf_program *prog, struct bpf_object *obj,
 	return 0;
 }
 
+#define BPF_CORE_SPEC_MAX_LEN 64
+
+/* represents BPF CO-RE field or array element accessor */
+struct bpf_core_accessor {
+	__u32 type_id;		/* struct/union type or array element type */
+	__u32 idx;		/* field index or array index */
+	const char *name;	/* field name or NULL for array accessor */
+};
+
+struct bpf_core_spec {
+	const struct btf *btf;
+	/* high-level spec: named fields and array indices only */
+	struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
+	/* high-level spec length */
+	int len;
+	/* raw, low-level spec: 1-to-1 with accessor spec string */
+	int raw_spec[BPF_CORE_SPEC_MAX_LEN];
+	/* raw spec length */
+	int raw_len;
+	/* field byte offset represented by spec */
+	__u32 offset;
+};
+
+static bool str_is_empty(const char *s)
+{
+	return !s || !s[0];
+}
+
+static int btf_kind(const struct btf_type *t)
+{
+	return BTF_INFO_KIND(t->info);
+}
+
+static bool btf_is_composite(const struct btf_type *t)
+{
+	int kind = btf_kind(t);
+
+	return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
+}
+
+static bool btf_is_array(const struct btf_type *t)
+{
+	return btf_kind(t) == BTF_KIND_ARRAY;
+}
+
+/*
+ * Turn bpf_offset_reloc into a low- and high-level spec representation,
+ * validating correctness along the way, as well as calculating the resulting
+ * field offset (in bytes) specified by the accessor string. The low-level
+ * spec captures every single level of nestedness, including traversal of
+ * anonymous struct/union members. The high-level one captures only
+ * semantically meaningful "turning points": named fields and array indices.
+ * E.g., for this case:
+ *
+ *   struct sample {
+ *       int __unimportant;
+ *       struct {
+ *           int __1;
+ *           int __2;
+ *           int a[7];
+ *       };
+ *   };
+ *
+ *   struct sample *s = ...;
+ *
+ *   int x = &s->a[3]; // access string = '0:1:2:3'
+ *
+ * Low-level spec has 1:1 mapping with each element of access string (it's
+ * just a parsed access string representation): [0, 1, 2, 3].
+ *
+ * High-level spec will capture only 3 points:
+ *   - initial zero-index access by pointer (&s->... is the same as &s[0]...);
+ *   - field 'a' access (corresponds to '2' in low-level spec);
+ *   - array element #3 access (corresponds to '3' in low-level spec).
+ *
+ */
+static int bpf_core_spec_parse(const struct btf *btf,
+			       __u32 type_id,
+			       const char *spec_str,
+			       struct bpf_core_spec *spec)
+{
+	int access_idx, parsed_len, i;
+	const struct btf_type *t;
+	__u32 id = type_id;
+	const char *name;
+	__s64 sz;
+
+	if (str_is_empty(spec_str) || *spec_str == ':')
+		return -EINVAL;
+
+	memset(spec, 0, sizeof(*spec));
+	spec->btf = btf;
+
+	/* parse spec_str="0:1:2:3:4" into array raw_spec=[0, 1, 2, 3, 4] */
+	while (*spec_str) {
+		if (*spec_str == ':')
+			++spec_str;
+		if (sscanf(spec_str, "%d%n", &access_idx, &parsed_len) != 1)
+			return -EINVAL;
+		if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
+			return -E2BIG;
+		spec_str += parsed_len;
+		spec->raw_spec[spec->raw_len++] = access_idx;
+	}
+
+	if (spec->raw_len == 0)
+		return -EINVAL;
+
+	for (i = 0; i < spec->raw_len; i++) {
+		t = skip_mods_and_typedefs(btf, id, &id);
+		if (!t)
+			return -EINVAL;
+
+		access_idx = spec->raw_spec[i];
+
+		if (i == 0) {
+			/* first spec value is always reloc type array index */
+			spec->spec[spec->len].type_id = id;
+			spec->spec[spec->len].idx = access_idx;
+			spec->len++;
+
+			sz = btf__resolve_size(btf, id);
+			if (sz < 0)
+				return sz;
+			spec->offset += access_idx * sz;
+			continue;
+		}
+
+		if (btf_is_composite(t)) {
+			const struct btf_member *m = (void *)(t + 1);
+			__u32 offset;
+
+			if (access_idx >= BTF_INFO_VLEN(t->info))
+				return -EINVAL;
+
+			m = &m[access_idx];
+
+			if (BTF_INFO_KFLAG(t->info)) {
+				if (BTF_MEMBER_BITFIELD_SIZE(m->offset))
+					return -EINVAL;
+				offset = BTF_MEMBER_BIT_OFFSET(m->offset);
+			} else {
+				offset = m->offset;
+			}
+			if (offset % 8)
+				return -EINVAL;
+			spec->offset += offset / 8;
+
+			if (m->name_off) {
+				name = btf__name_by_offset(btf, m->name_off);
+				if (str_is_empty(name))
+					return -EINVAL;
+
+				spec->spec[spec->len].type_id = id;
+				spec->spec[spec->len].idx = access_idx;
+				spec->spec[spec->len].name = name;
+				spec->len++;
+			}
+
+			id = m->type;
+		} else if (btf_is_array(t)) {
+			const struct btf_array *a = (void *)(t + 1);
+
+			t = skip_mods_and_typedefs(btf, a->type, &id);
+			if (!t || access_idx >= a->nelems)
+				return -EINVAL;
+
+			spec->spec[spec->len].type_id = id;
+			spec->spec[spec->len].idx = access_idx;
+			spec->len++;
+
+			sz = btf__resolve_size(btf, id);
+			if (sz < 0)
+				return sz;
+			spec->offset += access_idx * sz;
+		} else {
+			pr_warning("relo for [%u] %s (at idx %d) captures type [%d] of unexpected kind %d\n",
+				   type_id, spec_str, i, id, btf_kind(t));
+			return -EINVAL;
+		}
+	}
+
+	if (spec->len == 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+/* Given 'some_struct_name___with_flavor', return the length of the name
+ * prefix before the last triple underscore. The struct name part after the
+ * last triple underscore is ignored by BPF CO-RE during relocation matching.
+ */
+static size_t bpf_core_essential_name_len(const char *name)
+{
+	size_t n = strlen(name);
+	int i = n - 3;
+
+	while (i > 0) {
+		if (name[i] == '_' && name[i + 1] == '_' && name[i + 2] == '_')
+			return i;
+		i--;
+	}
+	return n;
+}
+
+/* dynamically sized list of type IDs */
+struct ids_vec {
+	__u32 *data;
+	int len;
+};
+
+static void bpf_core_free_cands(struct ids_vec *cand_ids)
+{
+	free(cand_ids->data);
+	free(cand_ids);
+}
+
+static struct ids_vec *bpf_core_find_cands(const struct btf *local_btf,
+					   __u32 local_type_id,
+					   const struct btf *targ_btf)
+{
+	size_t local_essent_len, targ_essent_len;
+	const char *local_name, *targ_name;
+	const struct btf_type *t;
+	struct ids_vec *cand_ids;
+	__u32 *new_ids;
+	int i, err, n;
+
+	t = btf__type_by_id(local_btf, local_type_id);
+	if (!t)
+		return ERR_PTR(-EINVAL);
+
+	local_name = btf__name_by_offset(local_btf, t->name_off);
+	if (str_is_empty(local_name))
+		return ERR_PTR(-EINVAL);
+	local_essent_len = bpf_core_essential_name_len(local_name);
+
+	cand_ids = calloc(1, sizeof(*cand_ids));
+	if (!cand_ids)
+		return ERR_PTR(-ENOMEM);
+
+	n = btf__get_nr_types(targ_btf);
+	for (i = 1; i <= n; i++) {
+		t = btf__type_by_id(targ_btf, i);
+		targ_name = btf__name_by_offset(targ_btf, t->name_off);
+		if (str_is_empty(targ_name))
+			continue;
+
+		targ_essent_len = bpf_core_essential_name_len(targ_name);
+		if (targ_essent_len != local_essent_len)
+			continue;
+
+		if (strncmp(local_name, targ_name, local_essent_len) == 0) {
+			pr_debug("[%d] (%s): found candidate [%d] (%s)\n",
+				 local_type_id, local_name, i, targ_name);
+			new_ids = realloc(cand_ids->data, sizeof(*new_ids) * (cand_ids->len + 1));
+			if (!new_ids) {
+				err = -ENOMEM;
+				goto err_out;
+			}
+			cand_ids->data = new_ids;
+			cand_ids->data[cand_ids->len++] = i;
+		}
+	}
+	return cand_ids;
+err_out:
+	bpf_core_free_cands(cand_ids);
+	return ERR_PTR(err);
+}
+
+/* Check two types for compatibility, skipping const/volatile/restrict and
+ * typedefs, to ensure we are relocating offset to the compatible entities:
+ *   - any two STRUCTs/UNIONs are compatible and can be mixed;
+ *   - any two FWDs are compatible;
+ *   - any two PTRs are always compatible;
+ *   - for ENUMs, check sizes, names are ignored;
+ *   - for INT, size and bitness should match, signedness is ignored;
+ *   - for ARRAY, dimensionality is ignored, element types are checked for
+ *     compatibility recursively;
+ *   - everything else shouldn't be ever a target of relocation.
+ * These rules are not set in stone and probably will be adjusted as we get
+ * more experience with using BPF CO-RE relocations.
+ */
+static int bpf_core_fields_are_compat(const struct btf *local_btf,
+				      __u32 local_id,
+				      const struct btf *targ_btf,
+				      __u32 targ_id)
+{
+	const struct btf_type *local_type, *targ_type;
+	__u16 kind;
+
+recur:
+	local_type = skip_mods_and_typedefs(local_btf, local_id, &local_id);
+	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
+	if (!local_type || !targ_type)
+		return -EINVAL;
+
+	if (btf_is_composite(local_type) && btf_is_composite(targ_type))
+		return 1;
+	if (BTF_INFO_KIND(local_type->info) != BTF_INFO_KIND(targ_type->info))
+		return 0;
+
+	kind = BTF_INFO_KIND(local_type->info);
+	switch (kind) {
+	case BTF_KIND_FWD:
+	case BTF_KIND_PTR:
+		return 1;
+	case BTF_KIND_ENUM:
+		return local_type->size == targ_type->size;
+	case BTF_KIND_INT: {
+		__u32 loc_int = *(__u32 *)(local_type + 1);
+		__u32 targ_int = *(__u32 *)(targ_type + 1);
+
+		return BTF_INT_OFFSET(loc_int) == 0 &&
+		       BTF_INT_OFFSET(targ_int) == 0 &&
+		       local_type->size == targ_type->size &&
+		       BTF_INT_BITS(loc_int) == BTF_INT_BITS(targ_int);
+	}
+	case BTF_KIND_ARRAY: {
+		const struct btf_array *loc_a, *targ_a;
+
+		loc_a = (void *)(local_type + 1);
+		targ_a = (void *)(targ_type + 1);
+		local_id = loc_a->type;
+		targ_id = targ_a->type;
+		goto recur;
+	}
+	default:
+		pr_warning("unexpected kind %d relocated, local [%d], target [%d]\n",
+			   kind, local_id, targ_id);
+		return 0;
+	}
+}
+
+/*
+ * Given a single high-level accessor (either a named field or an array
+ * index) in the local type, find the corresponding high-level accessor in
+ * a target type. Along the way, maintain a low-level spec for the target
+ * as well, and keep updating the target offset.
+ */
+static int bpf_core_match_member(const struct btf *local_btf,
+				 const struct bpf_core_accessor *local_acc,
+				 const struct btf *targ_btf,
+				 __u32 targ_id,
+				 struct bpf_core_spec *spec,
+				 __u32 *next_targ_id)
+{
+	const struct btf_type *local_type, *targ_type;
+	const struct btf_member *local_member, *m;
+	const char *local_name, *targ_name;
+	__u32 local_id;
+	int i, n, found;
+
+	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
+	if (!targ_type)
+		return -EINVAL;
+	if (!btf_is_composite(targ_type))
+		return 0;
+
+	local_id = local_acc->type_id;
+	local_type = btf__type_by_id(local_btf, local_id);
+	local_member = (void *)(local_type + 1);
+	local_member += local_acc->idx;
+	local_name = btf__name_by_offset(local_btf, local_member->name_off);
+
+	n = BTF_INFO_VLEN(targ_type->info);
+	m = (void *)(targ_type + 1);
+	for (i = 0; i < n; i++, m++) {
+		__u32 offset;
+
+		/* bitfield relocations not supported */
+		if (BTF_INFO_KFLAG(targ_type->info)) {
+			if (BTF_MEMBER_BITFIELD_SIZE(m->offset))
+				continue;
+			offset = BTF_MEMBER_BIT_OFFSET(m->offset);
+		} else {
+			offset = m->offset;
+		}
+		if (offset % 8)
+			continue;
+
+		/* too deep struct/union/array nesting */
+		if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
+			return -E2BIG;
+
+		/* speculate this member will be the good one */
+		spec->offset += offset / 8;
+		spec->raw_spec[spec->raw_len++] = i;
+
+		targ_name = btf__name_by_offset(targ_btf, m->name_off);
+		if (str_is_empty(targ_name)) {
+			/* embedded struct/union, we need to go deeper */
+			found = bpf_core_match_member(local_btf, local_acc,
+						      targ_btf, m->type,
+						      spec, next_targ_id);
+			if (found) /* either found or error */
+				return found;
+		} else if (strcmp(local_name, targ_name) == 0) {
+			/* matching named field */
+			struct bpf_core_accessor *targ_acc;
+
+			targ_acc = &spec->spec[spec->len++];
+			targ_acc->type_id = targ_id;
+			targ_acc->idx = i;
+			targ_acc->name = targ_name;
+
+			*next_targ_id = m->type;
+			found = bpf_core_fields_are_compat(local_btf,
+							   local_member->type,
+							   targ_btf, m->type);
+			if (!found)
+				spec->len--; /* pop accessor */
+			return found;
+		}
+		/* member turned out not to be the one we looked for */
+		spec->offset -= offset / 8;
+		spec->raw_len--;
+	}
+
+	return 0;
+}
+
+/*
+ * Try to match local spec to a target type and, if successful, produce full
+ * target spec (high-level, low-level + offset).
+ */
+static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
+			       const struct btf *targ_btf, __u32 targ_id,
+			       struct bpf_core_spec *targ_spec)
+{
+	const struct btf_type *targ_type;
+	const struct bpf_core_accessor *local_acc;
+	struct bpf_core_accessor *targ_acc;
+	int i, sz, matched;
+
+	memset(targ_spec, 0, sizeof(*targ_spec));
+	targ_spec->btf = targ_btf;
+
+	local_acc = &local_spec->spec[0];
+	targ_acc = &targ_spec->spec[0];
+
+	for (i = 0; i < local_spec->len; i++, local_acc++, targ_acc++) {
+		targ_type = skip_mods_and_typedefs(targ_spec->btf, targ_id,
+						   &targ_id);
+		if (!targ_type)
+			return -EINVAL;
+
+		if (local_acc->name) {
+			if (!btf_is_composite(targ_type))
+				return 0;
+
+			matched = bpf_core_match_member(local_spec->btf,
+							local_acc,
+							targ_btf, targ_id,
+							targ_spec, &targ_id);
+			if (matched <= 0)
+				return matched;
+		} else {
+			/* for i=0, targ_id is already treated as array element
+			 * type (because it's the original struct), for others
+			 * we should find array element type first
+			 */
+			if (i > 0) {
+				const struct btf_array *a;
+
+				if (!btf_is_array(targ_type))
+					return 0;
+
+				a = (void *)(targ_type + 1);
+				if (local_acc->idx >= a->nelems)
+					return 0;
+				if (!skip_mods_and_typedefs(targ_btf, a->type,
+							    &targ_id))
+					return -EINVAL;
+			}
+
+			/* too deep struct/union/array nesting */
+			if (targ_spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
+				return -E2BIG;
+
+			targ_acc->type_id = targ_id;
+			targ_acc->idx = local_acc->idx;
+			targ_acc->name = NULL;
+			targ_spec->len++;
+			targ_spec->raw_spec[targ_spec->raw_len] = targ_acc->idx;
+			targ_spec->raw_len++;
+
+			sz = btf__resolve_size(targ_btf, targ_id);
+			if (sz < 0)
+				return sz;
+			targ_spec->offset += local_acc->idx * sz;
+		}
+	}
+
+	return 1;
+}
+
+/*
+ * Patch relocatable BPF instruction.
+ * Expected insn->imm value is provided for validation, as well as the new
+ * relocated value.
+ *
+ * Currently three kinds of BPF instructions are supported:
+ * 1. rX = <imm> (assignment with immediate operand);
+ * 2. rX += <imm> (arithmetic operations with immediate operand);
+ * 3. *(rX) = <imm> (indirect memory assignment with immediate operand).
+ *
+ * If actual insn->imm value is wrong, bail out.
+ */
+static int bpf_core_reloc_insn(struct bpf_program *prog, int insn_off,
+			       __u32 orig_off, __u32 new_off)
+{
+	struct bpf_insn *insn;
+	int insn_idx;
+	__u8 class;
+
+	if (insn_off % sizeof(struct bpf_insn))
+		return -EINVAL;
+	insn_idx = insn_off / sizeof(struct bpf_insn);
+
+	insn = &prog->insns[insn_idx];
+	class = BPF_CLASS(insn->code);
+
+	if (class == BPF_ALU || class == BPF_ALU64) {
+		if (BPF_SRC(insn->code) != BPF_K)
+			return -EINVAL;
+		if (insn->imm != orig_off)
+			return -EINVAL;
+		insn->imm = new_off;
+		pr_debug("prog '%s': patched insn #%d (ALU/ALU64) imm %d -> %d\n",
+			 bpf_program__title(prog, false),
+			 insn_idx, orig_off, new_off);
+	} else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
+		if (insn->imm != orig_off)
+			return -EINVAL;
+		insn->imm = new_off;
+		pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
+			 bpf_program__title(prog, false),
+			 insn_idx, orig_off, new_off);
+	} else {
+		pr_warning("prog '%s': trying to relocate unrecognized insn #%d, code:%x, src:%x, dst:%x, off:%x, imm:%x\n",
+			   bpf_program__title(prog, false),
+			   insn_idx, insn->code, insn->src_reg, insn->dst_reg,
+			   insn->off, insn->imm);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/*
+ * Probe a few well-known locations for the vmlinux kernel image and try to
+ * load BTF data out of it to use as the target BTF.
+ */
+static struct btf *bpf_core_find_kernel_btf(void)
+{
+	const char *locations[] = {
+		"/lib/modules/%1$s/vmlinux-%1$s",
+		"/usr/lib/modules/%1$s/kernel/vmlinux",
+	};
+	char path[PATH_MAX + 1];
+	struct utsname buf;
+	struct btf *btf;
+	int i, err;
+
+	err = uname(&buf);
+	if (err) {
+		pr_warning("failed to uname(): %d\n", err);
+		return ERR_PTR(err);
+	}
+
+	for (i = 0; i < ARRAY_SIZE(locations); i++) {
+		snprintf(path, PATH_MAX, locations[i], buf.release);
+		pr_debug("attempting to load kernel BTF from '%s'\n", path);
+
+		if (access(path, R_OK))
+			continue;
+
+		btf = btf__parse_elf(path, NULL);
+		if (IS_ERR(btf))
+			continue;
+
+		pr_debug("successfully loaded kernel BTF from '%s'\n", path);
+		return btf;
+	}
+
+	pr_warning("failed to find valid kernel BTF\n");
+	return ERR_PTR(-ESRCH);
+}
+
+static size_t bpf_core_hash_fn(const void *key, void *ctx)
+{
+	return (size_t)key;
+}
+
+static bool bpf_core_equal_fn(const void *k1, const void *k2, void *ctx)
+{
+	return k1 == k2;
+}
+
+static void *u32_to_ptr(__u32 x)
+{
+	return (void *)(uintptr_t)x;
+}
+
+/*
+ * CO-RE relocate single instruction.
+ *
+ * The outline and important points of the algorithm:
+ * 1. For a given local type, find corresponding candidate target types.
+ *    A candidate type is a type with the same "essential" name, ignoring
+ *    everything after the last triple underscore (___). E.g., `sample`,
+ *    `sample___flavor_one`, and `sample___flavor_another_one` are all
+ *    candidates for each other. Names with a triple underscore are referred
+ *    to as "flavors" and are useful, among other things, for
+ *    specifying/supporting incompatible variations of the same kernel
+ *    struct, which might differ between kernel versions and/or build
+ *    configurations.
+ * 2. For each candidate type, try to match the local specification to this
+ *    candidate target type. Matching involves finding corresponding
+ *    high-level spec accessors, meaning that all named fields should match
+ *    and all array accesses should be within the actual bounds. Also,
+ *    types should be compatible (see bpf_core_fields_are_compat for details).
+ * 3. It is supported and expected that there might be multiple flavors
+ *    matching the spec. As long as all the specs resolve to the same set of
+ *    offsets across all candidates, there is no error. If there is any
+ *    ambiguity, CO-RE relocation will fail. This is necessary to accommodate
+ *    imperfection of BTF deduplication, which can cause slight duplication of
+ *    the same BTF type, if some directly or indirectly referenced (by
+ *    pointer) type gets resolved to different actual types in different
+ *    object files. If such a situation occurs, deduplicated BTF will end up
+ *    with two (or more) structurally identical types, which differ only in
+ *    the types they refer to through a pointer. This should be OK in most
+ *    cases and is not an error.
+ * 4. Candidate type search is performed by linearly scanning through all
+ *    types in target BTF. It is anticipated that this is overall more
+ *    efficient memory-wise and not significantly worse (if not better)
+ *    CPU-wise than prebuilding a map from all local type names to a list of
+ *    candidate type names. It's also sped up by caching the resolved list of
+ *    matching candidates for each local "root" type ID that has at least one
+ *    bpf_offset_reloc associated with it. This list is shared between
+ *    multiple relocations for the same type ID and is updated as some of the
+ *    candidates are pruned due to structural incompatibility.
+ */
+static int bpf_core_reloc_offset(struct bpf_program *prog,
+				 const struct bpf_offset_reloc *relo,
+				 int relo_idx,
+				 const struct btf *local_btf,
+				 const struct btf *targ_btf,
+				 struct hashmap *cand_cache)
+{
+	const char *prog_name = bpf_program__title(prog, false);
+	struct bpf_core_spec local_spec, cand_spec, targ_spec;
+	const void *type_key = u32_to_ptr(relo->type_id);
+	const struct btf_type *local_type, *cand_type;
+	const char *local_name, *cand_name;
+	struct ids_vec *cand_ids;
+	__u32 local_id, cand_id;
+	const char *spec_str;
+	int i, j, err;
+
+	local_id = relo->type_id;
+	local_type = btf__type_by_id(local_btf, local_id);
+	if (!local_type)
+		return -EINVAL;
+
+	local_name = btf__name_by_offset(local_btf, local_type->name_off);
+	if (str_is_empty(local_name))
+		return -EINVAL;
+
+	spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
+	if (str_is_empty(spec_str))
+		return -EINVAL;
+
+	pr_debug("prog '%s': relo #%d: insn_off=%d, [%d] (%s) + %s\n",
+		 prog_name, relo_idx, relo->insn_off,
+		 local_id, local_name, spec_str);
+
+	err = bpf_core_spec_parse(local_btf, local_id, spec_str, &local_spec);
+	if (err) {
+		pr_warning("prog '%s': relo #%d: parsing [%d] (%s) + %s failed: %d\n",
+			   prog_name, relo_idx, local_id, local_name, spec_str,
+			   err);
+		return -EINVAL;
+	}
+	pr_debug("prog '%s': relo #%d: [%d] (%s) + %s is off %u, len %d, raw_len %d\n",
+		 prog_name, relo_idx, local_id, local_name, spec_str,
+		 local_spec.offset, local_spec.len, local_spec.raw_len);
+
+	if (!hashmap__find(cand_cache, type_key, (void **)&cand_ids)) {
+		cand_ids = bpf_core_find_cands(local_btf, local_id, targ_btf);
+		if (IS_ERR(cand_ids)) {
+			pr_warning("prog '%s': relo #%d: target candidate search failed for [%d] (%s) + %s: %ld\n",
+				   prog_name, relo_idx, local_id, local_name,
+				   spec_str, PTR_ERR(cand_ids));
+			return PTR_ERR(cand_ids);
+		}
+		err = hashmap__set(cand_cache, type_key, cand_ids, NULL, NULL);
+		if (err) {
+			bpf_core_free_cands(cand_ids);
+			return err;
+		}
+	}
+
+	for (i = 0, j = 0; i < cand_ids->len; i++) {
+		cand_id = cand_ids->data[j];
+		cand_type = btf__type_by_id(targ_btf, cand_id);
+		cand_name = btf__name_by_offset(targ_btf, cand_type->name_off);
+
+		err = bpf_core_spec_match(&local_spec, targ_btf,
+					  cand_id, &cand_spec);
+		if (err < 0) {
+			pr_warning("prog '%s': relo #%d: failed to match spec [%d] (%s) + %s to candidate #%d [%d] (%s): %d\n",
+				   prog_name, relo_idx, local_id, local_name,
+				   spec_str, i, cand_id, cand_name, err);
+			return err;
+		}
+		if (err == 0) {
+			pr_debug("prog '%s': relo #%d: candidate #%d [%d] (%s) doesn't match spec\n",
+				 prog_name, relo_idx, i, cand_id, cand_name);
+			continue;
+		}
+
+		pr_debug("prog '%s': relo #%d: candidate #%d ([%d] %s) is off %u, len %d, raw_len %d\n",
+			 prog_name, relo_idx, i, cand_id, cand_name,
+			 cand_spec.offset, cand_spec.len, cand_spec.raw_len);
+
+		if (j == 0) {
+			targ_spec = cand_spec;
+		} else if (cand_spec.offset != targ_spec.offset) {
+			/* if there are many candidates, they should all
+			 * resolve to the same offset
+			 */
+			pr_warning("prog '%s': relo #%d: candidate #%d ([%d] %s): conflicting offset found (%u != %u)\n",
+				   prog_name, relo_idx, i, cand_id, cand_name,
+				   cand_spec.offset, targ_spec.offset);
+			return -EINVAL;
+		}
+
+		cand_ids->data[j++] = cand_spec.spec[0].type_id;
+	}
+
+	cand_ids->len = j;
+	if (cand_ids->len == 0) {
+		pr_warning("prog '%s': relo #%d: no matching targets found for [%d] (%s) + %s\n",
+			   prog_name, relo_idx, local_id, local_name, spec_str);
+		return -ESRCH;
+	}
+
+	err = bpf_core_reloc_insn(prog, relo->insn_off,
+				  local_spec.offset, targ_spec.offset);
+	if (err) {
+		pr_warning("prog '%s': relo #%d: failed to patch insn at offset %d: %d\n",
+			   prog_name, relo_idx, relo->insn_off, err);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+bpf_core_reloc_offsets(struct bpf_object *obj, const char *targ_btf_path)
+{
+	const struct btf_ext_info_sec *sec;
+	const struct bpf_offset_reloc *rec;
+	const struct btf_ext_info *seg;
+	struct hashmap_entry *entry;
+	struct hashmap *cand_cache = NULL;
+	struct bpf_program *prog;
+	struct btf *targ_btf;
+	const char *sec_name;
+	int i, err = 0;
+
+	if (targ_btf_path)
+		targ_btf = btf__parse_elf(targ_btf_path, NULL);
+	else
+		targ_btf = bpf_core_find_kernel_btf();
+	if (IS_ERR(targ_btf)) {
+		pr_warning("failed to get target BTF: %ld\n",
+			   PTR_ERR(targ_btf));
+		return PTR_ERR(targ_btf);
+	}
+
+	cand_cache = hashmap__new(bpf_core_hash_fn, bpf_core_equal_fn, NULL);
+	if (IS_ERR(cand_cache)) {
+		err = PTR_ERR(cand_cache);
+		goto out;
+	}
+
+	seg = &obj->btf_ext->offset_reloc_info;
+	for_each_btf_ext_sec(seg, sec) {
+		sec_name = btf__name_by_offset(obj->btf, sec->sec_name_off);
+		if (str_is_empty(sec_name)) {
+			err = -EINVAL;
+			goto out;
+		}
+		prog = bpf_object__find_program_by_title(obj, sec_name);
+		if (!prog) {
+			pr_warning("failed to find program '%s' for CO-RE offset relocation\n",
+				   sec_name);
+			err = -EINVAL;
+			goto out;
+		}
+
+		pr_debug("prog '%s': performing %d CO-RE offset relocs\n",
+			 sec_name, sec->num_info);
+
+		for_each_btf_ext_rec(seg, sec, i, rec) {
+			err = bpf_core_reloc_offset(prog, rec, i, obj->btf,
+						    targ_btf, cand_cache);
+			if (err) {
+				pr_warning("prog '%s': relo #%d: failed to relocate: %d\n",
+					   sec_name, i, err);
+				goto out;
+			}
+		}
+	}
+
+out:
+	btf__free(targ_btf);
+	if (!IS_ERR_OR_NULL(cand_cache)) {
+		hashmap__for_each_entry(cand_cache, entry, i) {
+			bpf_core_free_cands(entry->value);
+		}
+		hashmap__free(cand_cache);
+	}
+	return err;
+}
+
+static int
+bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+{
+	int err = 0;
+
+	if (obj->btf_ext->offset_reloc_info.len)
+		err = bpf_core_reloc_offsets(obj, targ_btf_path);
+
+	return err;
+}
+
 static int
 bpf_program__reloc_text(struct bpf_program *prog, struct bpf_object *obj,
 			struct reloc_desc *relo)
@@ -2396,14 +3243,21 @@ bpf_program__relocate(struct bpf_program *prog, struct bpf_object *obj)
 	return 0;
 }
 
-
 static int
-bpf_object__relocate(struct bpf_object *obj)
+bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 {
 	struct bpf_program *prog;
 	size_t i;
 	int err;
 
+	if (obj->btf_ext) {
+		err = bpf_object__relocate_core(obj, targ_btf_path);
+		if (err) {
+			pr_warning("failed to perform CO-RE relocations: %d\n",
+				   err);
+			return err;
+		}
+	}
 	for (i = 0; i < obj->nr_programs; i++) {
 		prog = &obj->programs[i];
 
@@ -2804,7 +3658,7 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
 	obj->loaded = true;
 
 	CHECK_ERR(bpf_object__create_maps(obj), err, out);
-	CHECK_ERR(bpf_object__relocate(obj), err, out);
+	CHECK_ERR(bpf_object__relocate(obj, attr->target_btf_path), err, out);
 	CHECK_ERR(bpf_object__load_progs(obj, attr->log_level), err, out);
 
 	return 0;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 5cbf459ece0b..6004bfe1ebf0 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -92,6 +92,7 @@ LIBBPF_API void bpf_object__close(struct bpf_object *object);
 struct bpf_object_load_attr {
 	struct bpf_object *obj;
 	int log_level;
+	const char *target_btf_path;
 };
 
 /* Load/unload object into/from kernel */
-- 
2.17.1



* [PATCH bpf-next 03/10] selftests/bpf: add CO-RE relocs testing setup
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add CO-RE relocation test runner. Add one simple test validating that
libbpf's logic for searching for kernel image and loading BTF out of it
works.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     | 126 ++++++++++++++++++
 .../bpf/progs/test_core_reloc_kernel.c        |  39 ++++++
 2 files changed, 165 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/core_reloc.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
new file mode 100644
index 000000000000..b8d6ea578b20
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+
+struct core_reloc_test_case {
+	const char *case_name;
+	const char *bpf_obj_file;
+	const char *btf_src_file;
+	const char *input;
+	int input_len;
+	const char *output;
+	int output_len;
+	bool fails;
+};
+
+static struct core_reloc_test_case test_cases[] = {
+	/* validate we can find kernel image and use its BTF for relocs */
+	{
+		.case_name = "kernel",
+		.bpf_obj_file = "test_core_reloc_kernel.o",
+		.btf_src_file = NULL, /* load from /lib/modules/$(uname -r) */
+		.input = "",
+		.input_len = 0,
+		.output = "\1", /* true */
+		.output_len = 1,
+	},
+};
+
+struct data {
+	char in[256];
+	char out[256];
+};
+
+void test_core_reloc(void)
+{
+	const char *probe_name = "raw_tracepoint/sys_enter";
+	struct bpf_object_load_attr load_attr = {};
+	struct core_reloc_test_case *test_case;
+	int err, duration = 0, i, equal;
+	struct bpf_link *link = NULL;
+	struct bpf_map *data_map;
+	struct bpf_program *prog;
+	struct bpf_object *obj;
+	const int zero = 0;
+	struct data data;
+
+	for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
+		test_case = &test_cases[i];
+
+		obj = bpf_object__open(test_case->bpf_obj_file);
+		if (CHECK(IS_ERR_OR_NULL(obj), "obj_open",
+			  "case #%d: failed to open '%s': %ld\n",
+			  i, test_case->bpf_obj_file, PTR_ERR(obj)))
+			return;
+
+		prog = bpf_object__find_program_by_title(obj, probe_name);
+		if (CHECK(!prog, "find_probe",
+			  "case #%d: prog '%s' not found\n",
+			  i, probe_name))
+			goto cleanup;
+		bpf_program__set_type(prog, BPF_PROG_TYPE_RAW_TRACEPOINT);
+
+		load_attr.obj = obj;
+		load_attr.log_level = 0;
+		load_attr.target_btf_path = test_case->btf_src_file;
+		err = bpf_object__load_xattr(&load_attr);
+		if (test_case->fails) {
+			CHECK(!err, "obj_load_fail",
+			      "case #%d: should fail to load prog '%s'\n",
+			      i, probe_name);
+			goto cleanup;
+		} else {
+			if (CHECK(err, "obj_load",
+				  "case #%d: failed to load prog '%s': %d\n",
+				  i, probe_name, err))
+				goto cleanup;
+		}
+
+		link = bpf_program__attach_raw_tracepoint(prog, "sys_enter");
+		if (CHECK(IS_ERR(link), "attach_raw_tp", "case #%d: err %ld\n",
+			  i, PTR_ERR(link)))
+			goto cleanup;
+
+		data_map = bpf_object__find_map_by_name(obj, "test_cor.bss");
+		if (CHECK(!data_map, "find_data_map",
+			  "case #%d: failed to find data map\n", i))
+			goto cleanup;
+
+		memset(&data, 0, sizeof(data));
+		memcpy(data.in, test_case->input, test_case->input_len);
+
+		err = bpf_map_update_elem(bpf_map__fd(data_map),
+					  &zero, &data, 0);
+		if (CHECK(err, "update_data_map",
+			  "case #%d: failed to update .data map: %d\n",
+			  i, err))
+			goto cleanup;
+
+		/* trigger test run */
+		usleep(1);
+
+		err = bpf_map_lookup_elem(bpf_map__fd(data_map), &zero, &data);
+		if (CHECK(err, "get_result",
+			  "case #%d: failed to get output data: %d\n", i, err))
+			goto cleanup;
+
+		equal = memcmp(data.out, test_case->output,
+			       test_case->output_len) == 0;
+		if (CHECK(!equal, "check_result",
+			  "case #%d: input/output data don't match\n", i)) {
+			int j;
+
+			for (j = 0; j < test_case->output_len; j++) {
+				printf("case #%d: byte #%d, EXP 0x%02hhx GOT 0x%02hhx\n",
+				       i, j, test_case->output[j], data.out[j]);
+			}
+			goto cleanup;
+		}
+
+cleanup:
+		if (!IS_ERR_OR_NULL(link)) {
+			bpf_link__destroy(link);
+			link = NULL;
+		}
+		bpf_object__close(obj);
+	}
+}
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c b/tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
new file mode 100644
index 000000000000..9a71080ce4ec
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct task_struct {
+	int pid;
+	int tgid;
+};
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_kernel(void *ctx)
+{
+	struct task_struct *task = (void *)bpf_get_current_task();
+	uint64_t pid_tgid = bpf_get_current_pid_tgid();
+	int pid, tgid;
+
+	if (bpf_probe_read(&pid, sizeof(pid),
+			   __builtin_preserve_access_index(&task->pid)))
+		return 1;
+	if (bpf_probe_read(&tgid, sizeof(tgid),
+			   __builtin_preserve_access_index(&task->tgid)))
+		return 1;
+
+	/* validate pid + tgid matches; bpf_get_current_pid_tgid() packs tgid
+	 * into the upper 32 bits and pid into the lower 32 bits
+	 */
+	data.out[0] = (((uint64_t)tgid << 32) | pid) == pid_tgid;
+
+	return 0;
+}
+
-- 
2.17.1



* [PATCH bpf-next 04/10] selftests/bpf: add CO-RE relocs struct flavors tests
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add tests verifying that BPF program can use various struct/union
"flavors" to extract data from the same target struct/union.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     | 34 ++++++++++
 .../bpf/progs/btf__core_reloc_flavors.c       |  3 +
 .../btf__core_reloc_flavors__err_wrong_name.c |  3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 15 +++++
 .../bpf/progs/test_core_reloc_flavors.c       | 65 +++++++++++++++++++
 5 files changed, 120 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
 create mode 100644 tools/testing/selftests/bpf/progs/core_reloc_types.h
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index b8d6ea578b20..c553c5f07ec3 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -1,5 +1,32 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
+#include "progs/core_reloc_types.h"
+
+#define STRUCT_TO_CHAR_PTR(struct_name) (const char *)&(struct struct_name)
+
+#define FLAVORS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) {	\
+	.a = 42,							\
+	.b = 0xc001,							\
+	.c = 0xbeef,							\
+}
+
+#define FLAVORS_CASE_COMMON(name)					\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_flavors.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o"			\
+
+#define FLAVORS_CASE(name) {						\
+	FLAVORS_CASE_COMMON(name),					\
+	.input = FLAVORS_DATA(core_reloc_##name),			\
+	.input_len = sizeof(struct core_reloc_##name),			\
+	.output = FLAVORS_DATA(core_reloc_flavors),			\
+	.output_len = sizeof(struct core_reloc_flavors),		\
+}
+
+#define FLAVORS_ERR_CASE(name) {					\
+	FLAVORS_CASE_COMMON(name),					\
+	.fails = true,							\
+}
 
 struct core_reloc_test_case {
 	const char *case_name;
@@ -23,6 +50,13 @@ static struct core_reloc_test_case test_cases[] = {
 		.output = "\1", /* true */
 		.output_len = 1,
 	},
+
+	/* validate BPF program can use multiple flavors to match against
+	 * single target BTF type
+	 */
+	FLAVORS_CASE(flavors),
+
+	FLAVORS_ERR_CASE(flavors__err_wrong_name),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
new file mode 100644
index 000000000000..b74455b91227
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_flavors x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
new file mode 100644
index 000000000000..7b6035f86ee6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_flavors__err_wrong_name x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
new file mode 100644
index 000000000000..33b0c6a61912
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -0,0 +1,15 @@
+/*
+ * FLAVORS
+ */
+struct core_reloc_flavors {
+	int a;
+	int b;
+	int c;
+};
+
+/* this is not a flavor, as it doesn't have a triple underscore */
+struct core_reloc_flavors__err_wrong_name {
+	int a;
+	int b;
+	int c;
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c b/tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
new file mode 100644
index 000000000000..92660344518d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
@@ -0,0 +1,65 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct core_reloc_flavors {
+	int a;
+	int b;
+	int c;
+};
+
+/* local flavor with reversed layout */
+struct core_reloc_flavors___reversed {
+	int c;
+	int b;
+	int a;
+};
+
+/* local flavor with nested/overlapping layout */
+struct core_reloc_flavors___weird {
+	struct {
+		int b;
+	};
+	/* a and c overlap in the local flavor, but this should still work
+	 * correctly with the original target layout
+	 */
+	union {
+		int a;
+		int c;
+	};
+};
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_flavors(void *ctx)
+{
+	struct core_reloc_flavors *in_orig = (void *)&data.in;
+	struct core_reloc_flavors___reversed *in_rev = (void *)&data.in;
+	struct core_reloc_flavors___weird *in_weird = (void *)&data.in;
+	struct core_reloc_flavors *out = (void *)&data.out;
+
+	/* read a using weird layout */
+	if (bpf_probe_read(&out->a, sizeof(in_weird->a),
+			   __builtin_preserve_access_index(&in_weird->a)))
+		return 1;
+	/* read b using reversed layout */
+	if (bpf_probe_read(&out->b, sizeof(in_rev->b),
+			   __builtin_preserve_access_index(&in_rev->b)))
+		return 1;
+	/* read c using original layout */
+	if (bpf_probe_read(&out->c, sizeof(in_orig->c),
+			   __builtin_preserve_access_index(&in_orig->c)))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1



* [PATCH bpf-next 05/10] selftests/bpf: add CO-RE relocs nesting tests
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add a bunch of tests validating correct handling of nested
structs/unions.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     |  39 +++
 .../bpf/progs/btf__core_reloc_nesting.c       |   3 +
 .../btf__core_reloc_nesting___anon_embed.c    |   3 +
 ...f__core_reloc_nesting___dup_compat_types.c |   5 +
 ...core_reloc_nesting___err_array_container.c |   3 +
 ...tf__core_reloc_nesting___err_array_field.c |   3 +
 ...e_reloc_nesting___err_dup_incompat_types.c |   4 +
 ...re_reloc_nesting___err_missing_container.c |   3 +
 ...__core_reloc_nesting___err_missing_field.c |   3 +
 ..._reloc_nesting___err_nonstruct_container.c |   3 +
 ...e_reloc_nesting___err_partial_match_dups.c |   4 +
 .../btf__core_reloc_nesting___err_too_deep.c  |   3 +
 .../btf__core_reloc_nesting___extra_nesting.c |   3 +
 ..._core_reloc_nesting___struct_union_mixup.c |   3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 293 ++++++++++++++++++
 .../bpf/progs/test_core_reloc_nesting.c       |  48 +++
 16 files changed, 423 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index c553c5f07ec3..8e19a6c38446 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -28,6 +28,29 @@
 	.fails = true,							\
 }
 
+#define NESTING_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) {	\
+	.a = { .a = { .a = 42 } },					\
+	.b = { .b = { .b = 0xc001 } },					\
+}
+
+#define NESTING_CASE_COMMON(name)					\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_nesting.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o"
+
+#define NESTING_CASE(name) {						\
+	NESTING_CASE_COMMON(name),					\
+	.input = NESTING_DATA(core_reloc_##name),			\
+	.input_len = sizeof(struct core_reloc_##name),			\
+	.output = NESTING_DATA(core_reloc_nesting),			\
+	.output_len = sizeof(struct core_reloc_nesting)			\
+}
+
+#define NESTING_ERR_CASE(name) {					\
+	NESTING_CASE_COMMON(name),					\
+	.fails = true,							\
+}
+
 struct core_reloc_test_case {
 	const char *case_name;
 	const char *bpf_obj_file;
@@ -57,6 +80,22 @@ static struct core_reloc_test_case test_cases[] = {
 	FLAVORS_CASE(flavors),
 
 	FLAVORS_ERR_CASE(flavors__err_wrong_name),
+
+	/* various struct/enum nesting and resolution scenarios */
+	NESTING_CASE(nesting),
+	NESTING_CASE(nesting___anon_embed),
+	NESTING_CASE(nesting___struct_union_mixup),
+	NESTING_CASE(nesting___extra_nesting),
+	NESTING_CASE(nesting___dup_compat_types),
+
+	NESTING_ERR_CASE(nesting___err_missing_field),
+	NESTING_ERR_CASE(nesting___err_array_field),
+	NESTING_ERR_CASE(nesting___err_missing_container),
+	NESTING_ERR_CASE(nesting___err_nonstruct_container),
+	NESTING_ERR_CASE(nesting___err_array_container),
+	NESTING_ERR_CASE(nesting___err_dup_incompat_types),
+	NESTING_ERR_CASE(nesting___err_partial_match_dups),
+	NESTING_ERR_CASE(nesting___err_too_deep),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
new file mode 100644
index 000000000000..4480fcc0f183
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
new file mode 100644
index 000000000000..13e108f76ece
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___anon_embed x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
new file mode 100644
index 000000000000..76b54fda5fbb
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
@@ -0,0 +1,5 @@
+#include "core_reloc_types.h"
+
+void f1(struct core_reloc_nesting___dup_compat_types x) {}
+void f2(struct core_reloc_nesting___dup_compat_types__2 x) {}
+void f3(struct core_reloc_nesting___dup_compat_types__3 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
new file mode 100644
index 000000000000..975fb95db810
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___err_array_container x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
new file mode 100644
index 000000000000..ad66c67e7980
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___err_array_field x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
new file mode 100644
index 000000000000..35c5f8da6812
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
@@ -0,0 +1,4 @@
+#include "core_reloc_types.h"
+
+void f1(struct core_reloc_nesting___err_dup_incompat_types__1 x) {}
+void f2(struct core_reloc_nesting___err_dup_incompat_types__2 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
new file mode 100644
index 000000000000..142e332041db
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___err_missing_container x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
new file mode 100644
index 000000000000..efcae167fab9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___err_missing_field x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
new file mode 100644
index 000000000000..97aaaedd8ada
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___err_nonstruct_container x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
new file mode 100644
index 000000000000..ffde35086e90
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
@@ -0,0 +1,4 @@
+#include "core_reloc_types.h"
+
+void f1(struct core_reloc_nesting___err_partial_match_dups__a x) {}
+void f2(struct core_reloc_nesting___err_partial_match_dups__b x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
new file mode 100644
index 000000000000..39a2fadd8e95
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___err_too_deep x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
new file mode 100644
index 000000000000..a09d9dfb20df
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___extra_nesting x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
new file mode 100644
index 000000000000..3d8a1a74012f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_nesting___struct_union_mixup x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index 33b0c6a61912..340ee2bcd463 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -13,3 +13,296 @@ struct core_reloc_flavors__err_wrong_name {
 	int b;
 	int c;
 };
+
+/*
+ * NESTING
+ */
+/* original set up, used to record relocations in BPF program */
+struct core_reloc_nesting_substruct {
+	int a;
+};
+
+union core_reloc_nesting_subunion {
+	int b;
+};
+
+struct core_reloc_nesting {
+	union {
+		struct core_reloc_nesting_substruct a;
+	} a;
+	struct {
+		union core_reloc_nesting_subunion b;
+	} b;
+};
+
+/* inlined anonymous struct/union instead of named structs in original */
+struct core_reloc_nesting___anon_embed {
+	int __just_for_padding;
+	union {
+		struct {
+			int a;
+		} a;
+	} a;
+	struct {
+		union {
+			int b;
+		} b;
+	} b;
+};
+
+/* different mix of nested structs/unions than in original */
+struct core_reloc_nesting___struct_union_mixup {
+	int __a;
+	struct {
+		int __a;
+		union {
+			char __a;
+			int a;
+		} a;
+	} a;
+	int __b;
+	union {
+		int __b;
+		union {
+			char __b;
+			int b;
+		} b;
+	} b;
+};
+
+/* extra anon structs/unions, but still valid a.a.a and b.b.b accessors */
+struct core_reloc_nesting___extra_nesting {
+	int __padding;
+	struct {
+		struct {
+			struct {
+				struct {
+					union {
+						int a;
+					} a;
+				};
+			};
+		} a;
+		int __some_more;
+		struct {
+			union {
+				union {
+					union {
+						struct {
+							int b;
+						};
+					} b;
+				};
+			} b;
+		};
+	};
+};
+
+/* three flavors of same struct with different structure but same layout for
+ * a.a.a and b.b.b, thus successfully resolved and relocatable */
+struct core_reloc_nesting___dup_compat_types {
+	char __just_for_padding;
+	/* 3 more bytes of padding */
+	struct {
+		struct {
+			int a; /* offset 4 */
+		} a;
+	} a;
+	long long __more_padding;
+	struct {
+		struct {
+			int b; /* offset 16 */
+		} b;
+	} b;
+};
+
+struct core_reloc_nesting___dup_compat_types__2 {
+	int __aligned_padding;
+	struct {
+		int __trickier_noop[0];
+		struct {
+			char __some_more_noops[0];
+			int a; /* offset 4 */
+		} a;
+	} a;
+	int __more_padding;
+	struct {
+		struct {
+			struct {
+				int __critical_padding;
+				int b; /* offset 16 */
+			} b;
+			int __does_not_matter;
+		};
+	} b;
+	int __more_irrelevant_stuff;
+};
+
+struct core_reloc_nesting___dup_compat_types__3 {
+	char __correct_padding[4];
+	struct {
+		struct {
+			int a; /* offset 4 */
+		} a;
+	} a;
+	/* 8 byte padding due to next struct's alignment */
+	struct {
+		struct {
+			int b;
+		} b;
+	} b __attribute__((aligned(16)));
+};
+
+/* b.b.b field is missing */
+struct core_reloc_nesting___err_missing_field {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+	struct {
+		struct {
+			int x;
+		} b;
+	} b;
+};
+
+/* b.b.b field is an array of integers instead of plain int */
+struct core_reloc_nesting___err_array_field {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+	struct {
+		struct {
+			int b[1];
+		} b;
+	} b;
+};
+
+/* middle b container is missing */
+struct core_reloc_nesting___err_missing_container {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+	struct {
+		int x;
+	} b;
+};
+
+/* middle b container is referenced through pointer instead of being embedded */
+struct core_reloc_nesting___err_nonstruct_container {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+	struct {
+		struct {
+			int b;
+		} *b;
+	} b;
+};
+
+/* middle b container is an array of structs instead of plain struct */
+struct core_reloc_nesting___err_array_container {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+	struct {
+		struct {
+			int b;
+		} b[1];
+	} b;
+};
+
+/* two flavors of same struct with incompatible layout for b.b.b */
+struct core_reloc_nesting___err_dup_incompat_types__1 {
+	struct {
+		struct {
+			int a; /* offset 0 */
+		} a;
+	} a;
+	struct {
+		struct {
+			int b; /* offset 4 */
+		} b;
+	} b;
+};
+
+struct core_reloc_nesting___err_dup_incompat_types__2 {
+	struct {
+		struct {
+			int a; /* offset 0 */
+		} a;
+	} a;
+	int __extra_padding;
+	struct {
+		struct {
+			int b; /* offset 8 (!) */
+		} b;
+	} b;
+};
+
+/* two flavors of same struct having one of a.a.a and b.b.b, but not both */
+struct core_reloc_nesting___err_partial_match_dups__a {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+};
+
+struct core_reloc_nesting___err_partial_match_dups__b {
+	struct {
+		struct {
+			int b;
+		} b;
+	} b;
+};
+
+struct core_reloc_nesting___err_too_deep {
+	struct {
+		struct {
+			int a;
+		} a;
+	} a;
+	/* 65 levels of nestedness for b.b.b */
+	struct {
+		struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+			struct { struct { struct { struct { struct {
+				/* this one is one too much */
+				struct {
+					int b;
+				};
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+			}; }; }; }; };
+		} b;
+	} b;
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c b/tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
new file mode 100644
index 000000000000..04a539030c5d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct core_reloc_nesting_substruct {
+	int a;
+};
+
+union core_reloc_nesting_subunion {
+	int b;
+};
+
+/* int a.a.a and b.b.b accesses */
+struct core_reloc_nesting {
+	union {
+		struct core_reloc_nesting_substruct a;
+	} a;
+	struct {
+		union core_reloc_nesting_subunion b;
+	} b;
+};
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_nesting(void *ctx)
+{
+	struct core_reloc_nesting *in = (void *)&data.in;
+	struct core_reloc_nesting *out = (void *)&data.out;
+
+	if (bpf_probe_read(&out->a.a.a, sizeof(in->a.a.a),
+			   __builtin_preserve_access_index(&in->a.a.a)))
+		return 1;
+	if (bpf_probe_read(&out->b.b.b, sizeof(in->b.b.b),
+			   __builtin_preserve_access_index(&in->b.b.b)))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH bpf-next 06/10] selftests/bpf: add CO-RE relocs array tests
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
                   ` (4 preceding siblings ...)
  2019-07-24 19:27 ` [PATCH bpf-next 05/10] selftests/bpf: add CO-RE relocs nesting tests Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-25 23:26   ` Alexei Starovoitov
  2019-07-24 19:27 ` [PATCH bpf-next 07/10] selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests Andrii Nakryiko
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add tests for various array handling/relocation scenarios.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     | 41 ++++++++++
 .../bpf/progs/btf__core_reloc_arrays.c        |  3 +
 .../btf__core_reloc_arrays___diff_arr_dim.c   |  3 +
 ...btf__core_reloc_arrays___diff_arr_val_sz.c |  3 +
 .../btf__core_reloc_arrays___err_non_array.c  |  3 +
 ...btf__core_reloc_arrays___err_too_shallow.c |  3 +
 .../btf__core_reloc_arrays___err_too_small.c  |  3 +
 ..._core_reloc_arrays___err_wrong_val_type1.c |  3 +
 ..._core_reloc_arrays___err_wrong_val_type2.c |  3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 81 +++++++++++++++++++
 .../bpf/progs/test_core_reloc_arrays.c        | 58 +++++++++++++
 11 files changed, 204 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index 8e19a6c38446..d4a5fc5304b3 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -51,6 +51,36 @@
 	.fails = true,							\
 }
 
+#define ARRAYS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) {	\
+	.a = { [2] = 1 },						\
+	.b = { [1] = { [2] = { [3] = 2 } } },				\
+	.c = { [1] = { .c =  3 } },					\
+	.d = { [0] = { [0] = { .d = 4 } } },				\
+}
+
+#define ARRAYS_CASE_COMMON(name)					\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_arrays.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o"
+
+#define ARRAYS_CASE(name) {						\
+	ARRAYS_CASE_COMMON(name),					\
+	.input = ARRAYS_DATA(core_reloc_##name),			\
+	.input_len = sizeof(struct core_reloc_##name),			\
+	.output = STRUCT_TO_CHAR_PTR(core_reloc_arrays_output) {	\
+		.a2   = 1,						\
+		.b123 = 2,						\
+		.c1c  = 3,						\
+		.d00d = 4,						\
+	},								\
+	.output_len = sizeof(struct core_reloc_arrays_output)		\
+}
+
+#define ARRAYS_ERR_CASE(name) {						\
+	ARRAYS_CASE_COMMON(name),					\
+	.fails = true,							\
+}
+
 struct core_reloc_test_case {
 	const char *case_name;
 	const char *bpf_obj_file;
@@ -96,6 +126,17 @@ static struct core_reloc_test_case test_cases[] = {
 	NESTING_ERR_CASE(nesting___err_dup_incompat_types),
 	NESTING_ERR_CASE(nesting___err_partial_match_dups),
 	NESTING_ERR_CASE(nesting___err_too_deep),
+
+	/* various array access relocation scenarios */
+	ARRAYS_CASE(arrays),
+	ARRAYS_CASE(arrays___diff_arr_dim),
+	ARRAYS_CASE(arrays___diff_arr_val_sz),
+
+	ARRAYS_ERR_CASE(arrays___err_too_small),
+	ARRAYS_ERR_CASE(arrays___err_too_shallow),
+	ARRAYS_ERR_CASE(arrays___err_non_array),
+	ARRAYS_ERR_CASE(arrays___err_wrong_val_type1),
+	ARRAYS_ERR_CASE(arrays___err_wrong_val_type2),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
new file mode 100644
index 000000000000..018ed7fbba3a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
new file mode 100644
index 000000000000..13d662c57014
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___diff_arr_dim x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
new file mode 100644
index 000000000000..a351f418c85d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___diff_arr_val_sz x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
new file mode 100644
index 000000000000..a8735009becc
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___err_non_array x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
new file mode 100644
index 000000000000..2a67c28b1e75
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___err_too_shallow x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
new file mode 100644
index 000000000000..1142c08c925f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___err_too_small x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
new file mode 100644
index 000000000000..795a5b729176
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___err_wrong_val_type1 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
new file mode 100644
index 000000000000..3af74b837c4d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_arrays___err_wrong_val_type2 x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index 340ee2bcd463..45de7986ea2e 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -306,3 +306,84 @@ struct core_reloc_nesting___err_too_deep {
 		} b;
 	} b;
 };
+
+/*
+ * ARRAYS
+ */
+struct core_reloc_arrays_output {
+	int a2;
+	char b123;
+	int c1c;
+	int d00d;
+};
+
+struct core_reloc_arrays_substruct {
+	int c;
+	int d;
+};
+
+struct core_reloc_arrays {
+	int a[5];
+	char b[2][3][4];
+	struct core_reloc_arrays_substruct c[3];
+	struct core_reloc_arrays_substruct d[1][2];
+};
+
+/* bigger array dimensions */
+struct core_reloc_arrays___diff_arr_dim {
+	int a[7];
+	char b[3][4][5];
+	struct core_reloc_arrays_substruct c[4];
+	struct core_reloc_arrays_substruct d[2][3];
+};
+
+/* different size of array's value (struct) */
+struct core_reloc_arrays___diff_arr_val_sz {
+	int a[5];
+	char b[2][3][4];
+	struct {
+		int __padding1;
+		int c;
+		int __padding2;
+	} c[3];
+	struct {
+		int __padding1;
+		int d;
+		int __padding2;
+	} d[1][2];
+};
+
+struct core_reloc_arrays___err_too_small {
+	int a[2]; /* this one is too small */
+	char b[2][3][4];
+	struct core_reloc_arrays_substruct c[3];
+	struct core_reloc_arrays_substruct d[1][2];
+};
+
+struct core_reloc_arrays___err_too_shallow {
+	int a[5];
+	char b[2][3]; /* this one lacks one dimension */
+	struct core_reloc_arrays_substruct c[3];
+	struct core_reloc_arrays_substruct d[1][2];
+};
+
+struct core_reloc_arrays___err_non_array {
+	int a; /* not an array */
+	char b[2][3][4];
+	struct core_reloc_arrays_substruct c[3];
+	struct core_reloc_arrays_substruct d[1][2];
+};
+
+struct core_reloc_arrays___err_wrong_val_type1 {
+	char a[5]; /* char instead of int */
+	char b[2][3][4];
+	struct core_reloc_arrays_substruct c[3];
+	struct core_reloc_arrays_substruct d[1][2];
+};
+
+struct core_reloc_arrays___err_wrong_val_type2 {
+	int a[5];
+	char b[2][3][4];
+	int c[3]; /* value is not a struct */
+	struct core_reloc_arrays_substruct d[1][2];
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
new file mode 100644
index 000000000000..e73ef19348c6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct core_reloc_arrays_output {
+	int a2;
+	char b123;
+	int c1c;
+	int d00d;
+};
+
+struct core_reloc_arrays_substruct {
+	int c;
+	int d;
+};
+
+struct core_reloc_arrays {
+	int a[5];
+	char b[2][3][4];
+	struct core_reloc_arrays_substruct c[3];
+	struct core_reloc_arrays_substruct d[1][2];
+};
+
+#define CORE_READ(dst, src) \
+	bpf_probe_read(dst, sizeof(*src), __builtin_preserve_access_index(src))
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_arrays(void *ctx)
+{
+	struct core_reloc_arrays *in = (void *)&data.in;
+	struct core_reloc_arrays_output *out = (void *)&data.out;
+
+	/* in->a[2] */
+	if (CORE_READ(&out->a2, &in->a[2]))
+		return 1;
+	/* in->b[1][2][3] */
+	if (CORE_READ(&out->b123, &in->b[1][2][3]))
+		return 1;
+	/* in->c[1].c */
+	if (CORE_READ(&out->c1c, &in->c[1].c))
+		return 1;
+	/* in->d[0][0].d */
+	if (CORE_READ(&out->d00d, &in->d[0][0].d))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1



* [PATCH bpf-next 07/10] selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
                   ` (5 preceding siblings ...)
  2019-07-24 19:27 ` [PATCH bpf-next 06/10] selftests/bpf: add CO-RE relocs array tests Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-29 21:09   ` Song Liu
  2019-07-24 19:27 ` [PATCH bpf-next 08/10] selftests/bpf: add CO-RE relocs modifiers/typedef tests Andrii Nakryiko
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Test CO-RE relocation handling of ints, enums, pointers, func protos, etc.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     | 36 ++++++++++
 .../bpf/progs/btf__core_reloc_primitives.c    |  3 +
 ...f__core_reloc_primitives___diff_enum_def.c |  3 +
 ..._core_reloc_primitives___diff_func_proto.c |  3 +
 ...f__core_reloc_primitives___diff_ptr_type.c |  3 +
 ...tf__core_reloc_primitives___err_non_enum.c |  3 +
 ...btf__core_reloc_primitives___err_non_int.c |  3 +
 ...btf__core_reloc_primitives___err_non_ptr.c |  3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 67 +++++++++++++++++++
 .../bpf/progs/test_core_reloc_primitives.c    | 50 ++++++++++++++
 10 files changed, 174 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index d4a5fc5304b3..05746ead48d9 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -81,6 +81,32 @@
 	.fails = true,							\
 }
 
+#define PRIMITIVES_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) {	\
+	.a = 1,								\
+	.b = 2,								\
+	.c = 3,								\
+	.d = (void *)4,							\
+	.f = (void *)5,							\
+}
+
+#define PRIMITIVES_CASE_COMMON(name)					\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_primitives.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o"
+
+#define PRIMITIVES_CASE(name) {						\
+	PRIMITIVES_CASE_COMMON(name),					\
+	.input = PRIMITIVES_DATA(core_reloc_##name),			\
+	.input_len = sizeof(struct core_reloc_##name),			\
+	.output = PRIMITIVES_DATA(core_reloc_primitives),		\
+	.output_len = sizeof(struct core_reloc_primitives),		\
+}
+
+#define PRIMITIVES_ERR_CASE(name) {					\
+	PRIMITIVES_CASE_COMMON(name),					\
+	.fails = true,							\
+}
+
 struct core_reloc_test_case {
 	const char *case_name;
 	const char *bpf_obj_file;
@@ -137,6 +163,16 @@ static struct core_reloc_test_case test_cases[] = {
 	ARRAYS_ERR_CASE(arrays___err_non_array),
 	ARRAYS_ERR_CASE(arrays___err_wrong_val_type1),
 	ARRAYS_ERR_CASE(arrays___err_wrong_val_type2),
+
+	/* enum/ptr/int handling scenarios */
+	PRIMITIVES_CASE(primitives),
+	PRIMITIVES_CASE(primitives___diff_enum_def),
+	PRIMITIVES_CASE(primitives___diff_func_proto),
+	PRIMITIVES_CASE(primitives___diff_ptr_type),
+
+	PRIMITIVES_ERR_CASE(primitives___err_non_enum),
+	PRIMITIVES_ERR_CASE(primitives___err_non_int),
+	PRIMITIVES_ERR_CASE(primitives___err_non_ptr),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
new file mode 100644
index 000000000000..96b90e39242a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
new file mode 100644
index 000000000000..6e87233a3ed0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives___diff_enum_def x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
new file mode 100644
index 000000000000..d9f48e80b9d9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives___diff_func_proto x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
new file mode 100644
index 000000000000..c718f75f8f3b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives___diff_ptr_type x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
new file mode 100644
index 000000000000..b8a120830891
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives___err_non_enum x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
new file mode 100644
index 000000000000..ad8b3c9aa76f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives___err_non_int x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
new file mode 100644
index 000000000000..e20bc1d42d0a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_primitives___err_non_ptr x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index 45de7986ea2e..7526a5f5755b 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -387,3 +387,70 @@ struct core_reloc_arrays___err_wrong_val_type2 {
 	int c[3]; /* value is not a struct */
 	struct core_reloc_arrays_substruct d[1][2];
 };
+
+/*
+ * PRIMITIVES
+ */
+enum core_reloc_primitives_enum {
+	A = 0,
+	B = 1,
+};
+
+struct core_reloc_primitives {
+	char a;
+	int b;
+	enum core_reloc_primitives_enum c;
+	void *d;
+	int (*f)(const char *);
+};
+
+struct core_reloc_primitives___diff_enum_def {
+	char a;
+	int b;
+	void *d;
+	int (*f)(const char *);
+	enum {
+		X = 100,
+		Y = 200,
+	} c; /* inline enum def with differing set of values */
+};
+
+struct core_reloc_primitives___diff_func_proto {
+	void (*f)(int); /* incompatible function prototype */
+	void *d;
+	enum core_reloc_primitives_enum c;
+	int b;
+	char a;
+};
+
+struct core_reloc_primitives___diff_ptr_type {
+	const char * const d; /* different pointee type + modifiers */
+	char a;
+	int b;
+	enum core_reloc_primitives_enum c;
+	int (*f)(const char *);
+};
+
+struct core_reloc_primitives___err_non_enum {
+	char a[1];
+	int b;
+	int c; /* int instead of enum */
+	void *d;
+	int (*f)(const char *);
+};
+
+struct core_reloc_primitives___err_non_int {
+	char a[1];
+	int *b; /* ptr instead of int */
+	enum core_reloc_primitives_enum c;
+	void *d;
+	int (*f)(const char *);
+};
+
+struct core_reloc_primitives___err_non_ptr {
+	char a[1];
+	int b;
+	enum core_reloc_primitives_enum c;
+	int d; /* int instead of ptr */
+	int (*f)(const char *);
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c b/tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
new file mode 100644
index 000000000000..9b345e2a7d6c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+enum core_reloc_primitives_enum {
+	A = 0,
+	B = 1,
+};
+
+struct core_reloc_primitives {
+	char a;
+	int b;
+	enum core_reloc_primitives_enum c;
+	void *d;
+	int (*f)(const char *);
+};
+
+#define CORE_READ(dst, src) \
+	bpf_probe_read(dst, sizeof(*src), __builtin_preserve_access_index(src))
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_nesting(void *ctx)
+{
+	struct core_reloc_primitives *in = (void *)&data.in;
+	struct core_reloc_primitives *out = (void *)&data.out;
+
+	if (CORE_READ(&out->a, &in->a))
+		return 1;
+	if (CORE_READ(&out->b, &in->b))
+		return 1;
+	if (CORE_READ(&out->c, &in->c))
+		return 1;
+	if (CORE_READ(&out->d, &in->d))
+		return 1;
+	if (CORE_READ(&out->f, &in->f))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH bpf-next 08/10] selftests/bpf: add CO-RE relocs modifiers/typedef tests
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
                   ` (6 preceding siblings ...)
  2019-07-24 19:27 ` [PATCH bpf-next 07/10] selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-29 21:11   ` Song Liu
  2019-07-24 19:27 ` [PATCH bpf-next 09/10] selftest/bpf: add CO-RE relocs ptr-as-array tests Andrii Nakryiko
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add tests validating correct handling of various combinations of
typedefs and const/volatile/restrict modifiers.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     | 27 +++++++
 .../bpf/progs/btf__core_reloc_mods.c          |  3 +
 .../progs/btf__core_reloc_mods___mod_swap.c   |  3 +
 .../progs/btf__core_reloc_mods___typedefs.c   |  3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 72 +++++++++++++++++++
 .../bpf/progs/test_core_reloc_mods.c          | 68 ++++++++++++++++++
 6 files changed, 176 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_mods.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index 05746ead48d9..f2c7ed67a81c 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -107,6 +107,28 @@
 	.fails = true,							\
 }
 
+#define MODS_CASE(name) {						\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_mods.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o",			\
+	.input = STRUCT_TO_CHAR_PTR(core_reloc_##name) {		\
+		.a = 1,							\
+		.b = 2,							\
+		.c = (void *)3,						\
+		.d = (void *)4,						\
+		.e = { [2] = 5 },					\
+		.f = { [1] = 6 },					\
+		.g = { .x = 7 },					\
+		.h = { .y = 8 },					\
+	},								\
+	.input_len = sizeof(struct core_reloc_##name),			\
+	.output = STRUCT_TO_CHAR_PTR(core_reloc_mods_output) {		\
+		.a = 1, .b = 2, .c = 3, .d = 4,				\
+		.e = 5, .f = 6, .g = 7, .h = 8,				\
+	},								\
+	.output_len = sizeof(struct core_reloc_mods_output),		\
+}
+
 struct core_reloc_test_case {
 	const char *case_name;
 	const char *bpf_obj_file;
@@ -173,6 +195,11 @@ static struct core_reloc_test_case test_cases[] = {
 	PRIMITIVES_ERR_CASE(primitives___err_non_enum),
 	PRIMITIVES_ERR_CASE(primitives___err_non_int),
 	PRIMITIVES_ERR_CASE(primitives___err_non_ptr),
+
+	/* const/volatile/restrict and typedefs scenarios */
+	MODS_CASE(mods),
+	MODS_CASE(mods___mod_swap),
+	MODS_CASE(mods___typedefs),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
new file mode 100644
index 000000000000..124197a2e813
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_mods x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
new file mode 100644
index 000000000000..f8a6592ca75f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_mods___mod_swap x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
new file mode 100644
index 000000000000..5c0d73687247
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_mods___typedefs x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index 7526a5f5755b..3401e8342e57 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -454,3 +454,75 @@ struct core_reloc_primitives___err_non_ptr {
 	int d; /* int instead of ptr */
 	int (*f)(const char *);
 };
+
+/*
+ * MODS
+ */
+struct core_reloc_mods_output {
+	int a, b, c, d, e, f, g, h;
+};
+
+typedef const int int_t;
+typedef const char *char_ptr_t;
+typedef const int arr_t[7];
+
+struct core_reloc_mods_substruct {
+	int x;
+	int y;
+};
+
+typedef struct {
+	int x;
+	int y;
+} core_reloc_mods_substruct_t;
+
+struct core_reloc_mods {
+	int a;
+	int_t b;
+	char *c;
+	char_ptr_t d;
+	int e[3];
+	arr_t f;
+	struct core_reloc_mods_substruct g;
+	core_reloc_mods_substruct_t h;
+};
+
+/* a/b, c/d, e/f, and g/h pairs are swapped */
+struct core_reloc_mods___mod_swap {
+	int b;
+	int_t a;
+	char *d;
+	char_ptr_t c;
+	int f[3];
+	arr_t e;
+	struct {
+		int y;
+		int x;
+	} h;
+	core_reloc_mods_substruct_t g;
+};
+
+typedef int int1_t;
+typedef int1_t int2_t;
+typedef int2_t int3_t;
+
+typedef int arr1_t[5];
+typedef arr1_t arr2_t;
+typedef arr2_t arr3_t;
+typedef arr3_t arr4_t;
+
+typedef const char * const volatile restrict fancy_char_ptr_t;
+
+typedef core_reloc_mods_substruct_t core_reloc_mods_substruct_tt;
+
+/* we need more typedefs */
+struct core_reloc_mods___typedefs {
+	core_reloc_mods_substruct_tt g;
+	core_reloc_mods_substruct_tt h;
+	arr4_t f;
+	arr4_t e;
+	fancy_char_ptr_t d;
+	fancy_char_ptr_t c;
+	int3_t b;
+	int3_t a;
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_mods.c b/tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
new file mode 100644
index 000000000000..eaf436922cb3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct core_reloc_mods_output {
+	int a, b, c, d, e, f, g, h;
+};
+
+typedef const int int_t;
+typedef const char *char_ptr_t;
+typedef const int arr_t[7];
+
+struct core_reloc_mods_substruct {
+	int x;
+	int y;
+};
+
+typedef struct {
+	int x;
+	int y;
+} core_reloc_mods_substruct_t;
+
+struct core_reloc_mods {
+	int a;
+	int_t b;
+	char *c;
+	char_ptr_t d;
+	int e[3];
+	/* BUG: doesn't work if using `arr_t f;` */
+	int f[7];
+	struct core_reloc_mods_substruct g;
+	/* BUG: doesn't work if using `core_reloc_mods_substruct_t h;` */
+	struct core_reloc_mods_substruct h;
+};
+
+#define CORE_READ(dst, src)					\
+	bpf_probe_read((void *)dst, sizeof(*dst),		\
+		       __builtin_preserve_access_index(src))
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_mods(void *ctx)
+{
+	struct core_reloc_mods *in = (void *)&data.in;
+	struct core_reloc_mods_output *out = (void *)&data.out;
+
+	if (CORE_READ(&out->a, &in->a) ||
+	    CORE_READ(&out->b, &in->b) ||
+	    CORE_READ(&out->c, &in->c) ||
+	    CORE_READ(&out->d, &in->d) ||
+	    CORE_READ(&out->e, &in->e[2]) ||
+	    CORE_READ(&out->f, &in->f[1]) ||
+	    CORE_READ(&out->g, &in->g.x) ||
+	    CORE_READ(&out->h, &in->h.y))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1



* [PATCH bpf-next 09/10] selftest/bpf: add CO-RE relocs ptr-as-array tests
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
                   ` (7 preceding siblings ...)
  2019-07-24 19:27 ` [PATCH bpf-next 08/10] selftests/bpf: add CO-RE relocs modifiers/typedef tests Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-29 21:14   ` Song Liu
  2019-07-24 19:27 ` [PATCH bpf-next 10/10] selftests/bpf: add CO-RE relocs ints tests Andrii Nakryiko
  2019-07-29 20:20 ` [PATCH bpf-next 00/10] CO-RE offset relocations Song Liu
  10 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add a test validating correct relocation handling for cases where a
pointer to some type is used as an array. E.g.:

  int *ptr = ...;
  int x = ptr[42];

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     | 20 +++++++++++
 .../bpf/progs/btf__core_reloc_ptr_as_arr.c    |  3 ++
 .../btf__core_reloc_ptr_as_arr___diff_sz.c    |  3 ++
 .../selftests/bpf/progs/core_reloc_types.h    | 13 +++++++
 .../bpf/progs/test_core_reloc_ptr_as_arr.c    | 34 +++++++++++++++++++
 5 files changed, 73 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index f2c7ed67a81c..9cb969de487b 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -129,6 +129,22 @@
 	.output_len = sizeof(struct core_reloc_mods_output),		\
 }
 
+#define PTR_AS_ARR_CASE(name) {						\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_ptr_as_arr.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o",			\
+	.input = (const char *)&(struct core_reloc_##name []){		\
+		{ .a = 1 },						\
+		{ .a = 2 },						\
+		{ .a = 3 },						\
+	},								\
+	.input_len = 3 * sizeof(struct core_reloc_##name),		\
+	.output = STRUCT_TO_CHAR_PTR(core_reloc_ptr_as_arr) {		\
+		.a = 3,							\
+	},								\
+	.output_len = sizeof(struct core_reloc_ptr_as_arr),		\
+}
+
 struct core_reloc_test_case {
 	const char *case_name;
 	const char *bpf_obj_file;
@@ -200,6 +216,10 @@ static struct core_reloc_test_case test_cases[] = {
 	MODS_CASE(mods),
 	MODS_CASE(mods___mod_swap),
 	MODS_CASE(mods___typedefs),
+
+	/* handling "ptr is an array" semantics */
+	PTR_AS_ARR_CASE(ptr_as_arr),
+	PTR_AS_ARR_CASE(ptr_as_arr___diff_sz),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
new file mode 100644
index 000000000000..8da52432ba17
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ptr_as_arr x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
new file mode 100644
index 000000000000..003acfc9a3e7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ptr_as_arr___diff_sz x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index 3401e8342e57..c17c9279deae 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -526,3 +526,16 @@ struct core_reloc_mods___typedefs {
 	int3_t b;
 	int3_t a;
 };
+
+/*
+ * PTR_AS_ARR
+ */
+struct core_reloc_ptr_as_arr {
+	int a;
+};
+
+struct core_reloc_ptr_as_arr___diff_sz {
+	int :32; /* padding */
+	char __some_more_padding;
+	int a;
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c b/tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c
new file mode 100644
index 000000000000..6fc36c37c8b3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct core_reloc_ptr_as_arr {
+	int a;
+};
+
+#define CORE_READ(dst, src)					\
+	bpf_probe_read((void *)dst, sizeof(*dst),		\
+		       __builtin_preserve_access_index(src))
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_mods(void *ctx)
+{
+	struct core_reloc_ptr_as_arr *in = (void *)&data.in;
+	struct core_reloc_ptr_as_arr *out = (void *)&data.out;
+
+	if (CORE_READ(&out->a, &in[2].a))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1



* [PATCH bpf-next 10/10] selftests/bpf: add CO-RE relocs ints tests
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
                   ` (8 preceding siblings ...)
  2019-07-24 19:27 ` [PATCH bpf-next 09/10] selftest/bpf: add CO-RE relocs ptr-as-array tests Andrii Nakryiko
@ 2019-07-24 19:27 ` Andrii Nakryiko
  2019-07-29 21:21   ` Song Liu
  2019-07-29 20:20 ` [PATCH bpf-next 00/10] CO-RE offset relocations Song Liu
  10 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 19:27 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel, yhs
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko

Add various tests validating handling of compatible and incompatible
integer types.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_reloc.c     |  40 +++++++
 .../bpf/progs/btf__core_reloc_ints.c          |   3 +
 .../bpf/progs/btf__core_reloc_ints___bool.c   |   3 +
 .../btf__core_reloc_ints___err_bitfield.c     |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_16.c  |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_32.c  |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_64.c  |   3 +
 .../btf__core_reloc_ints___err_wrong_sz_8.c   |   3 +
 .../btf__core_reloc_ints___reverse_sign.c     |   3 +
 .../selftests/bpf/progs/core_reloc_types.h    | 101 ++++++++++++++++++
 .../bpf/progs/test_core_reloc_ints.c          |  48 +++++++++
 11 files changed, 213 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
 create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ints.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
index 9cb969de487b..da6254d54c08 100644
--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
@@ -145,6 +145,35 @@
 	.output_len = sizeof(struct core_reloc_ptr_as_arr),		\
 }
 
+#define INTS_DATA(struct_name) STRUCT_TO_CHAR_PTR(struct_name) {	\
+	.u8_field = 1,							\
+	.s8_field = 2,							\
+	.u16_field = 3,							\
+	.s16_field = 4,							\
+	.u32_field = 5,							\
+	.s32_field = 6,							\
+	.u64_field = 7,							\
+	.s64_field = 8,							\
+}
+
+#define INTS_CASE_COMMON(name)						\
+	.case_name = #name,						\
+	.bpf_obj_file = "test_core_reloc_ints.o",			\
+	.btf_src_file = "btf__core_reloc_" #name ".o"
+
+#define INTS_CASE(name) {						\
+	INTS_CASE_COMMON(name),						\
+	.input = INTS_DATA(core_reloc_##name),				\
+	.input_len = sizeof(struct core_reloc_##name),			\
+	.output = INTS_DATA(core_reloc_ints),				\
+	.output_len = sizeof(struct core_reloc_ints),			\
+}
+
+#define INTS_ERR_CASE(name) {						\
+	INTS_CASE_COMMON(name),						\
+	.fails = true,							\
+}
+
 struct core_reloc_test_case {
 	const char *case_name;
 	const char *bpf_obj_file;
@@ -220,6 +249,17 @@ static struct core_reloc_test_case test_cases[] = {
 	/* handling "ptr is an array" semantics */
 	PTR_AS_ARR_CASE(ptr_as_arr),
 	PTR_AS_ARR_CASE(ptr_as_arr___diff_sz),
+
+	/* int signedness/sizing/bitfield handling */
+	INTS_CASE(ints),
+	INTS_CASE(ints___bool),
+	INTS_CASE(ints___reverse_sign),
+
+	INTS_ERR_CASE(ints___err_bitfield),
+	INTS_ERR_CASE(ints___err_wrong_sz_8),
+	INTS_ERR_CASE(ints___err_wrong_sz_16),
+	INTS_ERR_CASE(ints___err_wrong_sz_32),
+	INTS_ERR_CASE(ints___err_wrong_sz_64),
 };
 
 struct data {
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
new file mode 100644
index 000000000000..7d0f041042c5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
new file mode 100644
index 000000000000..f9359450186e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___bool x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
new file mode 100644
index 000000000000..50369e8320a0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___err_bitfield x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
new file mode 100644
index 000000000000..823bac13d641
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___err_wrong_sz_16 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
new file mode 100644
index 000000000000..b44f3be18535
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___err_wrong_sz_32 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
new file mode 100644
index 000000000000..9a3dd2099c0f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___err_wrong_sz_64 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
new file mode 100644
index 000000000000..9f11ef5f6e88
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___err_wrong_sz_8 x) {}
diff --git a/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
new file mode 100644
index 000000000000..aafb1c5819d7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
@@ -0,0 +1,3 @@
+#include "core_reloc_types.h"
+
+void f(struct core_reloc_ints___reverse_sign x) {}
diff --git a/tools/testing/selftests/bpf/progs/core_reloc_types.h b/tools/testing/selftests/bpf/progs/core_reloc_types.h
index c17c9279deae..5f3ebd4f6dc3 100644
--- a/tools/testing/selftests/bpf/progs/core_reloc_types.h
+++ b/tools/testing/selftests/bpf/progs/core_reloc_types.h
@@ -1,3 +1,6 @@
+#include <stdint.h>
+#include <stdbool.h>
+
 /*
  * FLAVORS
  */
@@ -539,3 +542,101 @@ struct core_reloc_ptr_as_arr___diff_sz {
 	char __some_more_padding;
 	int a;
 };
+
+/*
+ * INTS
+ */
+struct core_reloc_ints {
+	uint8_t		u8_field;
+	int8_t		s8_field;
+	uint16_t	u16_field;
+	int16_t		s16_field;
+	uint32_t	u32_field;
+	int32_t		s32_field;
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+/* signed/unsigned types swap */
+struct core_reloc_ints___reverse_sign {
+	int8_t		u8_field;
+	uint8_t		s8_field;
+	int16_t		u16_field;
+	uint16_t	s16_field;
+	int32_t		u32_field;
+	uint32_t	s32_field;
+	int64_t		u64_field;
+	uint64_t	s64_field;
+};
+
+struct core_reloc_ints___bool {
+	bool		u8_field; /* bool instead of uint8 */
+	int8_t		s8_field;
+	uint16_t	u16_field;
+	int16_t		s16_field;
+	uint32_t	u32_field;
+	int32_t		s32_field;
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+struct core_reloc_ints___err_bitfield {
+	uint8_t		u8_field;
+	int8_t		s8_field;
+	uint16_t	u16_field;
+	int16_t		s16_field;
+	uint32_t	u32_field: 32; /* bitfields are not supported */
+	int32_t		s32_field;
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+struct core_reloc_ints___err_wrong_sz_8 {
+	uint16_t	u8_field; /* not 8-bit anymore */
+	int16_t		s8_field; /* not 8-bit anymore */
+
+	uint16_t	u16_field;
+	int16_t		s16_field;
+	uint32_t	u32_field;
+	int32_t		s32_field;
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+struct core_reloc_ints___err_wrong_sz_16 {
+	uint8_t		u8_field;
+	int8_t		s8_field;
+
+	uint32_t	u16_field; /* not 16-bit anymore */
+	int32_t		s16_field; /* not 16-bit anymore */
+
+	uint32_t	u32_field;
+	int32_t		s32_field;
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+struct core_reloc_ints___err_wrong_sz_32 {
+	uint8_t		u8_field;
+	int8_t		s8_field;
+	uint16_t	u16_field;
+	int16_t		s16_field;
+
+	uint64_t	u32_field; /* not 32-bit anymore */
+	int64_t		s32_field; /* not 32-bit anymore */
+
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+struct core_reloc_ints___err_wrong_sz_64 {
+	uint8_t		u8_field;
+	int8_t		s8_field;
+	uint16_t	u16_field;
+	int16_t		s16_field;
+	uint32_t	u32_field;
+	int32_t		s32_field;
+
+	uint32_t	u64_field; /* not 64-bit anymore */
+	int32_t		s64_field; /* not 64-bit anymore */
+};
diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_ints.c b/tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
new file mode 100644
index 000000000000..2eecbefdfd2f
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include "bpf_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+static volatile struct data {
+	char in[256];
+	char out[256];
+} data;
+
+struct core_reloc_ints {
+	uint8_t		u8_field;
+	int8_t		s8_field;
+	uint16_t	u16_field;
+	int16_t		s16_field;
+	uint32_t	u32_field;
+	int32_t		s32_field;
+	uint64_t	u64_field;
+	int64_t		s64_field;
+};
+
+#define CORE_READ(dst, src)					\
+	bpf_probe_read((void *)dst, sizeof(*dst),		\
+		       __builtin_preserve_access_index(src))
+
+SEC("raw_tracepoint/sys_enter")
+int test_core_mods(void *ctx)
+{
+	struct core_reloc_ints *in = (void *)&data.in;
+	struct core_reloc_ints *out = (void *)&data.out;
+
+	if (CORE_READ(&out->u8_field, &in->u8_field) ||
+	    CORE_READ(&out->s8_field, &in->s8_field) ||
+	    CORE_READ(&out->u16_field, &in->u16_field) ||
+	    CORE_READ(&out->s16_field, &in->s16_field) ||
+	    CORE_READ(&out->u32_field, &in->u32_field) ||
+	    CORE_READ(&out->s32_field, &in->s32_field) ||
+	    CORE_READ(&out->u64_field, &in->u64_field) ||
+	    CORE_READ(&out->s64_field, &in->s64_field))
+		return 1;
+
+	return 0;
+}
+
-- 
2.17.1



* Re: [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-24 19:27 ` [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading Andrii Nakryiko
@ 2019-07-24 21:42   ` Andrii Nakryiko
  2019-07-25  0:00   ` Song Liu
  1 sibling, 0 replies; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-24 21:42 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Kernel Team

On Wed, Jul 24, 2019 at 12:28 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add support for BPF CO-RE offset relocations. Add section/record
> iteration macros for .BTF.ext. These macros are useful for iterating over
> each .BTF.ext record, either for dumping out contents or later for BPF
> CO-RE relocation handling.
>
> To enable other parts of libbpf to work with .BTF.ext contents, move
> a bunch of type definitions into libbpf_internal.h.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---

[...]

> + *
> + * Example to provide a better feel.
> + *
> + *   struct sample {
> + *       int a;
> + *       struct {
> + *           int b[10];
> + *       };
> + *   };
> + *
> + *   struct sample *s = ...;
> + *   int x = &s->a;     // encoded as "0:0" (a is field #0)
> + *   int y = &s->b[5];  // encoded as "0:1:5" (b is field #1, arr elem #5)

This should be "0:1:0:5", actually. The anonymous struct is field #1 in
BTF, b is field #0 inside that anonymous struct, plus array index 5.
Will update it locally and incorporate it into the next version once the
rest of the patch set is reviewed.

> + *   int z = &s[10]->b; // encoded as "10:1" (ptr is used as an array)
> + *
> + * type_id for all relocs in this example  will capture BTF type id of
> + * `struct sample`.
> + *
> + *   [0] https://llvm.org/docs/LangRef.html#getelementptr-instruction
> + */
> +struct bpf_offset_reloc {
> +       __u32   insn_off;
> +       __u32   type_id;
> +       __u32   access_str_off;
> +};
> +
>  #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
> --
> 2.17.1
>


* Re: [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-24 19:27 ` [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading Andrii Nakryiko
  2019-07-24 21:42   ` Andrii Nakryiko
@ 2019-07-25  0:00   ` Song Liu
  2019-07-25  0:37     ` Andrii Nakryiko
  1 sibling, 1 reply; 42+ messages in thread
From: Song Liu @ 2019-07-25  0:00 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, andrii.nakryiko, Kernel Team



> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> 
> Add support for BPF CO-RE offset relocations. Add section/record
> iteration macros for .BTF.ext. These macros are useful for iterating over
> each .BTF.ext record, either for dumping out contents or later for BPF
> CO-RE relocation handling.
> 
> To enable other parts of libbpf to work with .BTF.ext contents, move
> a bunch of type definitions into libbpf_internal.h.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
> tools/lib/bpf/btf.c             | 64 +++++++++--------------
> tools/lib/bpf/btf.h             |  4 ++
> tools/lib/bpf/libbpf_internal.h | 91 +++++++++++++++++++++++++++++++++
> 3 files changed, 118 insertions(+), 41 deletions(-)
> 
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 467224feb43b..4a36bc783848 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -42,47 +42,6 @@ struct btf {
> 	int fd;
> };
> 
> -struct btf_ext_info {
> -	/*
> -	 * info points to the individual info section (e.g. func_info and
> -	 * line_info) from the .BTF.ext. It does not include the __u32 rec_size.
> -	 */
> -	void *info;
> -	__u32 rec_size;
> -	__u32 len;
> -};
> -
> -struct btf_ext {
> -	union {
> -		struct btf_ext_header *hdr;
> -		void *data;
> -	};
> -	struct btf_ext_info func_info;
> -	struct btf_ext_info line_info;
> -	__u32 data_size;
> -};
> -
> -struct btf_ext_info_sec {
> -	__u32	sec_name_off;
> -	__u32	num_info;
> -	/* Followed by num_info * record_size number of bytes */
> -	__u8	data[0];
> -};
> -
> -/* The minimum bpf_func_info checked by the loader */
> -struct bpf_func_info_min {
> -	__u32   insn_off;
> -	__u32   type_id;
> -};
> -
> -/* The minimum bpf_line_info checked by the loader */
> -struct bpf_line_info_min {
> -	__u32	insn_off;
> -	__u32	file_name_off;
> -	__u32	line_off;
> -	__u32	line_col;
> -};
> -
> static inline __u64 ptr_to_u64(const void *ptr)
> {
> 	return (__u64) (unsigned long) ptr;
> @@ -831,6 +790,9 @@ static int btf_ext_setup_info(struct btf_ext *btf_ext,
> 	/* The start of the info sec (including the __u32 record_size). */
> 	void *info;
> 
> +	if (ext_sec->len == 0)
> +		return 0;
> +
> 	if (ext_sec->off & 0x03) {
> 		pr_debug(".BTF.ext %s section is not aligned to 4 bytes\n",
> 		     ext_sec->desc);
> @@ -934,6 +896,19 @@ static int btf_ext_setup_line_info(struct btf_ext *btf_ext)
> 	return btf_ext_setup_info(btf_ext, &param);
> }
> 
> +static int btf_ext_setup_offset_reloc(struct btf_ext *btf_ext)
> +{
> +	struct btf_ext_sec_setup_param param = {
> +		.off = btf_ext->hdr->offset_reloc_off,
> +		.len = btf_ext->hdr->offset_reloc_len,
> +		.min_rec_size = sizeof(struct bpf_offset_reloc),
> +		.ext_info = &btf_ext->offset_reloc_info,
> +		.desc = "offset_reloc",
> +	};
> +
> +	return btf_ext_setup_info(btf_ext, &param);
> +}
> +
> static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
> {
> 	const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
> @@ -1004,6 +979,13 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
> 	if (err)
> 		goto done;
> 
> +	/* check if there is offset_reloc_off/offset_reloc_len fields */
> +	if (btf_ext->hdr->hdr_len < sizeof(struct btf_ext_header))

This check will break when we add more optional sections to btf_ext_header.
Maybe use offsetof() instead?

> +		goto done;
> +	err = btf_ext_setup_offset_reloc(btf_ext);
> +	if (err)
> +		goto done;
> +
> done:
> 	if (err) {
> 		btf_ext__free(btf_ext);
> diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> index 88a52ae56fc6..287361ee1f6b 100644
> --- a/tools/lib/bpf/btf.h
> +++ b/tools/lib/bpf/btf.h
> @@ -57,6 +57,10 @@ struct btf_ext_header {
> 	__u32	func_info_len;
> 	__u32	line_info_off;
> 	__u32	line_info_len;
> +
> +	/* optional part of .BTF.ext header */
> +	__u32	offset_reloc_off;
> +	__u32	offset_reloc_len;
> };
> 
> LIBBPF_API void btf__free(struct btf *btf);
> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> index 2ac29bd36226..087ff512282f 100644
> --- a/tools/lib/bpf/libbpf_internal.h
> +++ b/tools/lib/bpf/libbpf_internal.h
> @@ -46,4 +46,95 @@ do {				\
> int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
> 			 const char *str_sec, size_t str_len);
> 
> +struct btf_ext_info {
> +	/*
> +	 * info points to the individual info section (e.g. func_info and
> +	 * line_info) from the .BTF.ext. It does not include the __u32 rec_size.
> +	 */
> +	void *info;
> +	__u32 rec_size;
> +	__u32 len;
> +};
> +
> +#define for_each_btf_ext_sec(seg, sec)					\
> +	for (sec = (seg)->info;						\
> +	     (void *)sec < (seg)->info + (seg)->len;			\
> +	     sec = (void *)sec + sizeof(struct btf_ext_info_sec) +	\
> +		   (seg)->rec_size * sec->num_info)
> +
> +#define for_each_btf_ext_rec(seg, sec, i, rec)				\
> +	for (i = 0, rec = (void *)&(sec)->data;				\
> +	     i < (sec)->num_info;					\
> +	     i++, rec = (void *)rec + (seg)->rec_size)
> +
> +struct btf_ext {
> +	union {
> +		struct btf_ext_header *hdr;
> +		void *data;
> +	};
> +	struct btf_ext_info func_info;
> +	struct btf_ext_info line_info;
> +	struct btf_ext_info offset_reloc_info;
> +	__u32 data_size;
> +};
> +
> +struct btf_ext_info_sec {
> +	__u32	sec_name_off;
> +	__u32	num_info;
> +	/* Followed by num_info * record_size number of bytes */
> +	__u8	data[0];
> +};
> +
> +/* The minimum bpf_func_info checked by the loader */
> +struct bpf_func_info_min {
> +	__u32   insn_off;
> +	__u32   type_id;
> +};
> +
> +/* The minimum bpf_line_info checked by the loader */
> +struct bpf_line_info_min {
> +	__u32	insn_off;
> +	__u32	file_name_off;
> +	__u32	line_off;
> +	__u32	line_col;
> +};
> +
> +/* The minimum bpf_offset_reloc checked by the loader
> + *
> + * Offset relocation captures the following data:
> + * - insn_off - instruction offset (in bytes) within a BPF program that needs
> + *   its insn->imm field to be relocated with actual offset;
> + * - type_id - BTF type ID of the "root" (containing) entity of a relocatable
> + *   offset;
> + * - access_str_off - offset into corresponding .BTF string section. String
> + *   itself encodes an accessed field using a sequence of field and array
> + *   indices, separated by colon (:). It's conceptually very close to LLVM's
> + *   getelementptr ([0]) instruction's arguments for identifying offset to 
> + *   a field.
> + *
> + * Example to provide a better feel.
> + *
> + *   struct sample {
> + *       int a;
> + *       struct {
> + *           int b[10];
> + *       };
> + *   };
> + * 
> + *   struct sample *s = ...;
> + *   int x = &s->a;     // encoded as "0:0" (a is field #0)
> + *   int y = &s->b[5];  // encoded as "0:1:5" (b is field #1, arr elem #5)
> + *   int z = &s[10]->b; // encoded as "10:1" (ptr is used as an array)
> + *
> + * type_id for all relocs in this example will capture BTF type id of
> + * `struct sample`.
> + *
> + *   [0] https://llvm.org/docs/LangRef.html#getelementptr-instruction
> + */
> +struct bpf_offset_reloc {
> +	__u32   insn_off;
> +	__u32   type_id;
> +	__u32   access_str_off;
> +};
> +
> #endif /* __LIBBPF_LIBBPF_INTERNAL_H */
> -- 
> 2.17.1
> 



* Re: [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-25  0:00   ` Song Liu
@ 2019-07-25  0:37     ` Andrii Nakryiko
  2019-07-25  5:20       ` Song Liu
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-25  0:37 UTC (permalink / raw)
  To: Song Liu
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Wed, Jul 24, 2019 at 5:00 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> >
> > Add support for BPF CO-RE offset relocations. Add section/record
> > iteration macros for .BTF.ext. These macros are useful for iterating over
> > each .BTF.ext record, either for dumping out contents or later for BPF
> > CO-RE relocation handling.
> >
> > To enable other parts of libbpf to work with .BTF.ext contents, moved
> > a bunch of type definitions into libbpf_internal.h.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> > tools/lib/bpf/btf.c             | 64 +++++++++--------------
> > tools/lib/bpf/btf.h             |  4 ++
> > tools/lib/bpf/libbpf_internal.h | 91 +++++++++++++++++++++++++++++++++
> > 3 files changed, 118 insertions(+), 41 deletions(-)
> >

[...]

> > +
> > static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
> > {
> >       const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
> > @@ -1004,6 +979,13 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
> >       if (err)
> >               goto done;
> >
> > +     /* check if there is offset_reloc_off/offset_reloc_len fields */
> > +     if (btf_ext->hdr->hdr_len < sizeof(struct btf_ext_header))
>
> This check will break when we add more optional sections to btf_ext_header.
> Maybe use offsetof() instead?

I didn't do it because there are no fields after offset_reloc_len.
But now I thought that maybe it would be OK to add a zero-sized marker
field, kind of like marking off various versions of the btf_ext header?

Alternatively, I can add offsetofend() macro somewhere in libbpf_internal.h.

Do you have any preference?

>
> > +             goto done;
> > +     err = btf_ext_setup_offset_reloc(btf_ext);
> > +     if (err)
> > +             goto done;
> > +
> > done:

[...]


* Re: [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-25  0:37     ` Andrii Nakryiko
@ 2019-07-25  5:20       ` Song Liu
  2019-07-27  5:11         ` Andrii Nakryiko
  0 siblings, 1 reply; 42+ messages in thread
From: Song Liu @ 2019-07-25  5:20 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team



> On Jul 24, 2019, at 5:37 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> 
> On Wed, Jul 24, 2019 at 5:00 PM Song Liu <songliubraving@fb.com> wrote:
>> 
>> 
>> 
>>> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
>>> 
>>> Add support for BPF CO-RE offset relocations. Add section/record
>>> iteration macros for .BTF.ext. These macros are useful for iterating over
>>> each .BTF.ext record, either for dumping out contents or later for BPF
>>> CO-RE relocation handling.
>>> 
>>> To enable other parts of libbpf to work with .BTF.ext contents, moved
>>> a bunch of type definitions into libbpf_internal.h.
>>> 
>>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>>> ---
>>> tools/lib/bpf/btf.c             | 64 +++++++++--------------
>>> tools/lib/bpf/btf.h             |  4 ++
>>> tools/lib/bpf/libbpf_internal.h | 91 +++++++++++++++++++++++++++++++++
>>> 3 files changed, 118 insertions(+), 41 deletions(-)
>>> 
> 
> [...]
> 
>>> +
>>> static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
>>> {
>>>      const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
>>> @@ -1004,6 +979,13 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
>>>      if (err)
>>>              goto done;
>>> 
>>> +     /* check if there is offset_reloc_off/offset_reloc_len fields */
>>> +     if (btf_ext->hdr->hdr_len < sizeof(struct btf_ext_header))
>> 
>> This check will break when we add more optional sections to btf_ext_header.
>> Maybe use offsetof() instead?
> 
> I didn't do it because there are no fields after offset_reloc_len.
> But now I thought that maybe it would be OK to add a zero-sized marker
> field, kind of like marking off various versions of the btf_ext header?
> 
> Alternatively, I can add offsetofend() macro somewhere in libbpf_internal.h.
> 
> Do you have any preference?

We only need a stable number to compare against. offsetofend() works. 
Or we can simply have something like

    if (btf_ext->hdr->hdr_len <= offsetof(struct btf_ext_header, offset_reloc_off))
          goto done;
or 
    if (btf_ext->hdr->hdr_len < offsetof(struct btf_ext_header, offset_reloc_len))
          goto done;

Does this make sense?

Thanks,
Song


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-24 19:27 ` [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm Andrii Nakryiko
@ 2019-07-25 19:32   ` Song Liu
  2019-07-27  6:11     ` Andrii Nakryiko
  2019-07-25 23:18   ` Alexei Starovoitov
  2019-07-29 19:56   ` Song Liu
  2 siblings, 1 reply; 42+ messages in thread
From: Song Liu @ 2019-07-25 19:32 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team



> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> 
> This patch implements the core logic for BPF CO-RE offsets relocations.
> All the details are described in code comments.

Some description in the change log is still useful. Please at least 
copy-paste key comments here. 

And, this is looooong. I think it is totally possible to split it into
multiple smaller patches. 

I haven't finished all of it. Please see my comments below on the parts
I have covered.

Thanks,
Song

> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
> tools/lib/bpf/libbpf.c | 866 ++++++++++++++++++++++++++++++++++++++++-
> tools/lib/bpf/libbpf.h |   1 +
> 2 files changed, 861 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 8741c39adb1c..86d87bf10d46 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -38,6 +38,7 @@
> #include <sys/stat.h>
> #include <sys/types.h>
> #include <sys/vfs.h>
> +#include <sys/utsname.h>
> #include <tools/libc_compat.h>
> #include <libelf.h>
> #include <gelf.h>
> @@ -47,6 +48,7 @@
> #include "btf.h"
> #include "str_error.h"
> #include "libbpf_internal.h"
> +#include "hashmap.h"
> 
> #ifndef EM_BPF
> #define EM_BPF 247
> @@ -1013,16 +1015,22 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
> }
> 
> static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> -						     __u32 id)
> +						     __u32 id,
> +						     __u32 *res_id)
> {
> 	const struct btf_type *t = btf__type_by_id(btf, id);

Maybe have a local "__u32 rid;" 

> 
> +	if (res_id)
> +		*res_id = id;
> +

and do "rid = id;" here

> 	while (true) {
> 		switch (BTF_INFO_KIND(t->info)) {
> 		case BTF_KIND_VOLATILE:
> 		case BTF_KIND_CONST:
> 		case BTF_KIND_RESTRICT:
> 		case BTF_KIND_TYPEDEF:
> +			if (res_id)
> +				*res_id = t->type;
and here

> 			t = btf__type_by_id(btf, t->type);
> 			break;
> 		default:
and "*res_id = rid;" right before return?

> @@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> static bool get_map_field_int(const char *map_name, const struct btf *btf,
> 			      const struct btf_type *def,
> 			      const struct btf_member *m, __u32 *res) {
> -	const struct btf_type *t = skip_mods_and_typedefs(btf, m->type);
> +	const struct btf_type *t = skip_mods_and_typedefs(btf, m->type, NULL);
> 	const char *name = btf__name_by_offset(btf, m->name_off);
> 	const struct btf_array *arr_info;
> 	const struct btf_type *arr_t;
> @@ -1107,7 +1115,7 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
> 		return -EOPNOTSUPP;
> 	}
> 
> -	def = skip_mods_and_typedefs(obj->btf, var->type);
> +	def = skip_mods_and_typedefs(obj->btf, var->type, NULL);
> 	if (BTF_INFO_KIND(def->info) != BTF_KIND_STRUCT) {
> 		pr_warning("map '%s': unexpected def kind %u.\n",
> 			   map_name, BTF_INFO_KIND(var->info));
> @@ -2289,6 +2297,845 @@ bpf_program_reloc_btf_ext(struct bpf_program *prog, struct bpf_object *obj,
> 	return 0;
> }
> 
> +#define BPF_CORE_SPEC_MAX_LEN 64
> +
> +/* represents BPF CO-RE field or array element accessor */
> +struct bpf_core_accessor {
> +	__u32 type_id;		/* struct/union type or array element type */
> +	__u32 idx;		/* field index or array index */
> +	const char *name;	/* field name or NULL for array accessor */
> +};
> +
> +struct bpf_core_spec {
> +	const struct btf *btf;
> +	/* high-level spec: named fields and array indicies only */

typo: indices

> +	struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
> +	/* high-level spec length */
> +	int len;
> +	/* raw, low-level spec: 1-to-1 with accessor spec string */
> +	int raw_spec[BPF_CORE_SPEC_MAX_LEN];
> +	/* raw spec length */
> +	int raw_len;
> +	/* field byte offset represented by spec */
> +	__u32 offset;
> +};
> +
> +static bool str_is_empty(const char *s)
> +{
> +	return !s || !s[0];
> +}
> +
> +static int btf_kind(const struct btf_type *t)
> +{
> +	return BTF_INFO_KIND(t->info);
> +}
> +
> +static bool btf_is_composite(const struct btf_type *t)
> +{
> +	int kind = btf_kind(t);
> +
> +	return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
> +}
> +
> +static bool btf_is_array(const struct btf_type *t)
> +{
> +	return btf_kind(t) == BTF_KIND_ARRAY;
> +}
> +
> +/* 
> + * Turn bpf_offset_reloc into a low- and high-level spec representation,
> + * validating correctness along the way, as well as calculating resulting
> + * field offset (in bytes), specified by accessor string. Low-level spec
> + * captures every single level of nestedness, including traversing anonymous
> + * struct/union members. High-level one only captures semantically meaningful
> + * "turning points": named fields and array indices.
> + * E.g., for this case:
> + *
> + *   struct sample {
> + *       int __unimportant;
> + *       struct {
> + *           int __1;
> + *           int __2;
> + *           int a[7];
> + *       };
> + *   };
> + *
> + *   struct sample *s = ...;
> + *
> + *   int x = &s->a[3]; // access string = '0:1:2:3'
> + *
> + * Low-level spec has 1:1 mapping with each element of access string (it's
> + * just a parsed access string representation): [0, 1, 2, 3].
> + *
> + * High-level spec will capture only 3 points:
> + *   - initial zero-index access by pointer (&s->... is the same as &s[0]...);
> + *   - field 'a' access (corresponds to '2' in low-level spec);
> + *   - array element #3 access (corresponds to '3' in low-level spec).
> + *
> + */

IIUC, high-level points are a subset of low-level points. How about we introduce
"anonymous" high-level points, so that high-level points and low-level points
have a 1:1 mapping?

> +static int bpf_core_spec_parse(const struct btf *btf,
> +			       __u32 type_id,
> +			       const char *spec_str,
> +			       struct bpf_core_spec *spec)
> +{
> +	int access_idx, parsed_len, i;
> +	const struct btf_type *t;
> +	__u32 id = type_id;
> +	const char *name;
> +	__s64 sz;
> +
> +	if (str_is_empty(spec_str) || *spec_str == ':')
> +		return -EINVAL;
> +
> +	memset(spec, 0, sizeof(*spec));
> +	spec->btf = btf;
> +
> +	/* parse spec_str="0:1:2:3:4" into array raw_spec=[0, 1, 2, 3, 4] */
> +	while (*spec_str) {
> +		if (*spec_str == ':')
> +			++spec_str;
> +		if (sscanf(spec_str, "%d%n", &access_idx, &parsed_len) != 1)
> +			return -EINVAL;
> +		if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> +			return -E2BIG;
> +		spec_str += parsed_len;
> +		spec->raw_spec[spec->raw_len++] = access_idx;
> +	}
> +
> +	if (spec->raw_len == 0)
> +		return -EINVAL;
> +
> +	for (i = 0; i < spec->raw_len; i++) {
> +		t = skip_mods_and_typedefs(btf, id, &id);
> +		if (!t)
> +			return -EINVAL;
> +
> +		access_idx = spec->raw_spec[i];
> +
> +		if (i == 0) {
> +			/* first spec value is always reloc type array index */
> +			spec->spec[spec->len].type_id = id;
> +			spec->spec[spec->len].idx = access_idx;
> +			spec->len++;
> +
> +			sz = btf__resolve_size(btf, id);
> +			if (sz < 0)
> +				return sz;
> +			spec->offset += access_idx * sz;
          spec->offset = access_idx * sz;  should be enough

> +			continue;
> +		}

Maybe pull i == 0 case out of the for loop? 

> +
> +		if (btf_is_composite(t)) {
> +			const struct btf_member *m = (void *)(t + 1);
> +			__u32 offset;
> +
> +			if (access_idx >= BTF_INFO_VLEN(t->info))
> +				return -EINVAL;
> +
> +			m = &m[access_idx];
> +
> +			if (BTF_INFO_KFLAG(t->info)) {
> +				if (BTF_MEMBER_BITFIELD_SIZE(m->offset))
> +					return -EINVAL;
> +				offset = BTF_MEMBER_BIT_OFFSET(m->offset);
> +			} else {
> +				offset = m->offset;
> +			}
> +			if (m->offset % 8)
> +				return -EINVAL;
> +			spec->offset += offset / 8;
> +
> +			if (m->name_off) {
> +				name = btf__name_by_offset(btf, m->name_off);
> +				if (str_is_empty(name))
> +					return -EINVAL;
> +
> +				spec->spec[spec->len].type_id = id;
> +				spec->spec[spec->len].idx = access_idx;
> +				spec->spec[spec->len].name = name;
> +				spec->len++;
> +			}
> +
> +			id = m->type;
> +		} else if (btf_is_array(t)) {
> +			const struct btf_array *a = (void *)(t + 1);
> +
> +			t = skip_mods_and_typedefs(btf, a->type, &id);
> +			if (!t || access_idx >= a->nelems)
> +				return -EINVAL;
> +
> +			spec->spec[spec->len].type_id = id;
> +			spec->spec[spec->len].idx = access_idx;
> +			spec->len++;
> +
> +			sz = btf__resolve_size(btf, id);
> +			if (sz < 0)
> +				return sz;
> +			spec->offset += access_idx * sz;
> +		} else {
> +			pr_warning("relo for [%u] %s (at idx %d) captures type [%d] of unexpected kind %d\n",
> +				   type_id, spec_str, i, id, btf_kind(t));
> +			return -EINVAL;
> +		}
> +	}
> +
> +	if (spec->len == 0)
> +		return -EINVAL;

Can this ever happen? 

> +
> +	return 0;
> +}
> +
> +/* Given 'some_struct_name___with_flavor' return the length of a name prefix
> + * before last triple underscore. Struct name part after last triple
> + * underscore is ignored by BPF CO-RE relocation during relocation matching.
> + */
> +static size_t bpf_core_essential_name_len(const char *name)
> +{
> +	size_t n = strlen(name);
> +	int i = n - 3;
> +
> +	while (i > 0) {
> +		if (name[i] == '_' && name[i + 1] == '_' && name[i + 2] == '_')
> +			return i;
> +		i--;
> +	}
> +	return n;
> +}
> +
> +/* dynamically sized list of type IDs */
> +struct ids_vec {
> +	__u32 *data;
> +	int len;
> +};
> +
> +static void bpf_core_free_cands(struct ids_vec *cand_ids)
> +{
> +	free(cand_ids->data);
> +	free(cand_ids);
> +}
> +
> +static struct ids_vec *bpf_core_find_cands(const struct btf *local_btf,
> +					   __u32 local_type_id,
> +					   const struct btf *targ_btf)
> +{
> +	size_t local_essent_len, targ_essent_len;
> +	const char *local_name, *targ_name;
> +	const struct btf_type *t;
> +	struct ids_vec *cand_ids;
> +	__u32 *new_ids;
> +	int i, err, n;
> +
> +	t = btf__type_by_id(local_btf, local_type_id);
> +	if (!t)
> +		return ERR_PTR(-EINVAL);
> +
> +	local_name = btf__name_by_offset(local_btf, t->name_off);
> +	if (str_is_empty(local_name))
> +		return ERR_PTR(-EINVAL);
> +	local_essent_len = bpf_core_essential_name_len(local_name);
> +
> +	cand_ids = calloc(1, sizeof(*cand_ids));
> +	if (!cand_ids)
> +		return ERR_PTR(-ENOMEM);
> +
> +	n = btf__get_nr_types(targ_btf);
> +	for (i = 1; i <= n; i++) {
> +		t = btf__type_by_id(targ_btf, i);
> +		targ_name = btf__name_by_offset(targ_btf, t->name_off);
> +		if (str_is_empty(targ_name))
> +			continue;
> +
> +		targ_essent_len = bpf_core_essential_name_len(targ_name);
> +		if (targ_essent_len != local_essent_len)
> +			continue;
> +
> +		if (strncmp(local_name, targ_name, local_essent_len) == 0) {
> +			pr_debug("[%d] (%s): found candidate [%d] (%s)\n",
> +				 local_type_id, local_name, i, targ_name);
> +			new_ids = realloc(cand_ids->data,
> +					  sizeof(*new_ids) * (cand_ids->len + 1));
> +			if (!new_ids) {
> +				err = -ENOMEM;
> +				goto err_out;
> +			}
> +			cand_ids->data = new_ids;
> +			cand_ids->data[cand_ids->len++] = i;
> +		}
> +	}
> +	return cand_ids;
> +err_out:
> +	bpf_core_free_cands(cand_ids);
> +	return ERR_PTR(err);
> +}
> +
> +/* Check two types for compatibility, skipping const/volatile/restrict and
> + * typedefs, to ensure we are relocating offset to the compatible entities:
> + *   - any two STRUCTs/UNIONs are compatible and can be mixed;
> + *   - any two FWDs are compatible;
> + *   - any two PTRs are always compatible;
> + *   - for ENUMs, check sizes, names are ignored;
> + *   - for INT, size and bitness should match, signedness is ignored;
> + *   - for ARRAY, dimensionality is ignored, element types are checked for
> + *     compatibility recursively;
> + *   - everything else shouldn't be ever a target of relocation.
> + * These rules are not set in stone and probably will be adjusted as we get
> + * more experience with using BPF CO-RE relocations.
> + */
> +static int bpf_core_fields_are_compat(const struct btf *local_btf,
> +				      __u32 local_id,
> +				      const struct btf *targ_btf,
> +				      __u32 targ_id)
> +{
> +	const struct btf_type *local_type, *targ_type;
> +	__u16 kind;
> +
> +recur:
> +	local_type = skip_mods_and_typedefs(local_btf, local_id, &local_id);
> +	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
> +	if (!local_type || !targ_type)
> +		return -EINVAL;
> +
> +	if (btf_is_composite(local_type) && btf_is_composite(targ_type))
> +		return 1;
> +	if (BTF_INFO_KIND(local_type->info) != BTF_INFO_KIND(targ_type->info))
> +		return 0;
> +
> +	kind = BTF_INFO_KIND(local_type->info);
> +	switch (kind) {
> +	case BTF_KIND_FWD:
> +	case BTF_KIND_PTR:
> +		return 1;
> +	case BTF_KIND_ENUM:
> +		return local_type->size == targ_type->size;
> +	case BTF_KIND_INT: {
> +		__u32 loc_int = *(__u32 *)(local_type + 1);
> +		__u32 targ_int = *(__u32 *)(targ_type + 1);
> +
> +		return BTF_INT_OFFSET(loc_int) == 0 &&
> +		       BTF_INT_OFFSET(targ_int) == 0 &&
> +		       local_type->size == targ_type->size &&
> +		       BTF_INT_BITS(loc_int) == BTF_INT_BITS(targ_int);
> +	}
> +	case BTF_KIND_ARRAY: {
> +		const struct btf_array *loc_a, *targ_a;
> +
> +		loc_a = (void *)(local_type + 1);
> +		targ_a = (void *)(targ_type + 1);
> +		local_id = loc_a->type;
> +		targ_id = targ_a->type;
> +		goto recur;
> +	}
> +	default:
> +		pr_warning("unexpected kind %d relocated, local [%d], target [%d]\n",
> +			   kind, local_id, targ_id);
> +		return 0;
> +	}
> +}
> +
> +/* 
> + * Given single high-level accessor (either named field or array index) in
> + * local type, find corresponding high-level accessor for a target type. Along
> + * the way, maintain low-level spec for target as well. Also keep updating
> + * target offset.
> + */

Please describe the recursive algorithm here. I am kinda lost. 
Also, please document the meaning of zero, positive, negative return values.

> +static int bpf_core_match_member(const struct btf *local_btf,
> +				 const struct bpf_core_accessor *local_acc,
> +				 const struct btf *targ_btf,
> +				 __u32 targ_id,
> +				 struct bpf_core_spec *spec,
> +				 __u32 *next_targ_id)
> +{
> +	const struct btf_type *local_type, *targ_type;
> +	const struct btf_member *local_member, *m;
> +	const char *local_name, *targ_name;
> +	__u32 local_id;
> +	int i, n, found;
> +
> +	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
> +	if (!targ_type)
> +		return -EINVAL;
> +	if (!btf_is_composite(targ_type))
> +		return 0;
> +
> +	local_id = local_acc->type_id;
> +	local_type = btf__type_by_id(local_btf, local_id);
> +	local_member = (void *)(local_type + 1);
> +	local_member += local_acc->idx;
> +	local_name = btf__name_by_offset(local_btf, local_member->name_off);
> +
> +	n = BTF_INFO_VLEN(targ_type->info);
> +	m = (void *)(targ_type + 1);
> +	for (i = 0; i < n; i++, m++) {
> +		__u32 offset;
> +
> +		/* bitfield relocations not supported */
> +		if (BTF_INFO_KFLAG(targ_type->info)) {
> +			if (BTF_MEMBER_BITFIELD_SIZE(m->offset))
> +				continue;
> +			offset = BTF_MEMBER_BIT_OFFSET(m->offset);
> +		} else {
> +			offset = m->offset;
> +		}
> +		if (offset % 8)
> +			continue;
> +
> +		/* too deep struct/union/array nesting */
> +		if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> +			return -E2BIG;
> +
> +		/* speculate this member will be the good one */
> +		spec->offset += offset / 8;
> +		spec->raw_spec[spec->raw_len++] = i;
> +
> +		targ_name = btf__name_by_offset(targ_btf, m->name_off);
> +		if (str_is_empty(targ_name)) {
> +			/* embedded struct/union, we need to go deeper */
> +			found = bpf_core_match_member(local_btf, local_acc,
> +						      targ_btf, m->type,
> +						      spec, next_targ_id);
> +			if (found) /* either found or error */
> +				return found;
> +		} else if (strcmp(local_name, targ_name) == 0) {
> +			/* matching named field */
> +			struct bpf_core_accessor *targ_acc;
> +
> +			targ_acc = &spec->spec[spec->len++];
> +			targ_acc->type_id = targ_id;
> +			targ_acc->idx = i;
> +			targ_acc->name = targ_name;
> +
> +			*next_targ_id = m->type;
> +			found = bpf_core_fields_are_compat(local_btf,
> +							   local_member->type,
> +							   targ_btf, m->type);
> +			if (!found)
> +				spec->len--; /* pop accessor */
> +			return found;
> +		}
> +		/* member turned out to be not the one we looked for */
> +		spec->offset -= offset / 8;
> +		spec->raw_len--;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Try to match local spec to a target type and, if successful, produce full
> + * target spec (high-level, low-level + offset).
> + */
> +static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
> +			       const struct btf *targ_btf, __u32 targ_id,
> +			       struct bpf_core_spec *targ_spec)
> +{
> +	const struct btf_type *targ_type;
> +	const struct bpf_core_accessor *local_acc;
> +	struct bpf_core_accessor *targ_acc;
> +	int i, sz, matched;
> +
> +	memset(targ_spec, 0, sizeof(*targ_spec));
> +	targ_spec->btf = targ_btf;
> +
> +	local_acc = &local_spec->spec[0];
> +	targ_acc = &targ_spec->spec[0];
> +
> +	for (i = 0; i < local_spec->len; i++, local_acc++, targ_acc++) {
> +		targ_type = skip_mods_and_typedefs(targ_spec->btf, targ_id,
> +						   &targ_id);
> +		if (!targ_type)
> +			return -EINVAL;
> +
> +		if (local_acc->name) {
> +			if (!btf_is_composite(targ_type))
> +				return 0;
> +
> +			matched = bpf_core_match_member(local_spec->btf,
> +							local_acc,
> +							targ_btf, targ_id,
> +							targ_spec, &targ_id);
> +			if (matched <= 0)
> +				return matched;
> +		} else {
> +			/* for i=0, targ_id is already treated as array element
> +			 * type (because it's the original struct), for others
> +			 * we should find array element type first
> +			 */
> +			if (i > 0) {

i == 0 case would go into "if (local_acc->name)" branch, no? 

> +				const struct btf_array *a;
> +
> +				if (!btf_is_array(targ_type))
> +					return 0;
> +
> +				a = (void *)(targ_type + 1);
> +				if (local_acc->idx >= a->nelems)
> +					return 0;
> +				if (!skip_mods_and_typedefs(targ_btf, a->type,
> +							    &targ_id))
> +					return -EINVAL;
> +			}
> +
> +			/* too deep struct/union/array nesting */
> +			if (targ_spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> +				return -E2BIG;
> +
> +			targ_acc->type_id = targ_id;
> +			targ_acc->idx = local_acc->idx;
> +			targ_acc->name = NULL;
> +			targ_spec->len++;
> +			targ_spec->raw_spec[targ_spec->raw_len] = targ_acc->idx;
> +			targ_spec->raw_len++;
> +
> +			sz = btf__resolve_size(targ_btf, targ_id);
> +			if (sz < 0)
> +				return sz;
> +			targ_spec->offset += local_acc->idx * sz;
> +		}
> +	}
> +
> +	return 1;
> +}
> +
> +/*
> + * Patch relocatable BPF instruction.
> + * Expected insn->imm value is provided for validation, as well as the new
> + * relocated value.
> + *
> + * Currently three kinds of BPF instructions are supported:
> + * 1. rX = <imm> (assignment with immediate operand);
> + * 2. rX += <imm> (arithmetic operations with immediate operand);
> + * 3. *(rX) = <imm> (indirect memory assignment with immediate operand).
> + *
> + * If actual insn->imm value is wrong, bail out.
> + */
> +static int bpf_core_reloc_insn(struct bpf_program *prog, int insn_off,
> +			       __u32 orig_off, __u32 new_off)
> +{
> +	struct bpf_insn *insn;
> +	int insn_idx;
> +	__u8 class;
> +
> +	if (insn_off % sizeof(struct bpf_insn))
> +		return -EINVAL;
> +	insn_idx = insn_off / sizeof(struct bpf_insn);
> +
> +	insn = &prog->insns[insn_idx];
> +	class = BPF_CLASS(insn->code);
> +
> +	if (class == BPF_ALU || class == BPF_ALU64) {
> +		if (BPF_SRC(insn->code) != BPF_K)
> +			return -EINVAL;
> +		if (insn->imm != orig_off)
> +			return -EINVAL;
> +		insn->imm = new_off;
> +		pr_debug("prog '%s': patched insn #%d (ALU/ALU64) imm %d -> %d\n",
> +			 bpf_program__title(prog, false),
> +			 insn_idx, orig_off, new_off);
> +	} else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
> +		if (insn->imm != orig_off)
> +			return -EINVAL;
> +		insn->imm = new_off;
> +		pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
> +			 bpf_program__title(prog, false),
> +			 insn_idx, orig_off, new_off);
> +	} else {
> +		pr_warning("prog '%s': trying to relocate unrecognized insn #%d, code:%x, src:%x, dst:%x, off:%x, imm:%x\n",
> +			   bpf_program__title(prog, false),
> +			   insn_idx, insn->code, insn->src_reg, insn->dst_reg,
> +			   insn->off, insn->imm);
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
> +/*
> + * Probe a few well-known locations for the vmlinux kernel image and try to load BTF
> + * data out of it to use for target BTF.
> + */
> +static struct btf *bpf_core_find_kernel_btf(void)
> +{
> +	const char *locations[] = {
> +		"/lib/modules/%1$s/vmlinux-%1$s",
> +		"/usr/lib/modules/%1$s/kernel/vmlinux",
> +	};
> +	char path[PATH_MAX + 1];
> +	struct utsname buf;
> +	struct btf *btf;
> +	int i, err;
> +
> +	err = uname(&buf);
> +	if (err) {
> +		pr_warning("failed to uname(): %d\n", err);
> +		return ERR_PTR(err);
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(locations); i++) {
> +		snprintf(path, PATH_MAX, locations[i], buf.release);
> +		pr_debug("attempting to load kernel BTF from '%s'\n", path);
> +
> +		if (access(path, R_OK))
> +			continue;
> +
> +		btf = btf__parse_elf(path, NULL);
> +		if (IS_ERR(btf))
> +			continue;
> +
> +		pr_debug("successfully loaded kernel BTF from '%s'\n", path);
> +		return btf;
> +	}
> +
> +	pr_warning("failed to find valid kernel BTF\n");
> +	return ERR_PTR(-ESRCH);
> +}
> +
> +static size_t bpf_core_hash_fn(const void *key, void *ctx)
> +{
> +	return (size_t)key;
> +}
> +
> +static bool bpf_core_equal_fn(const void *k1, const void *k2, void *ctx)
> +{
> +	return k1 == k2;
> +}
> +
> +static void *u32_to_ptr(__u32 x)
> +{
> +	return (void *)(uintptr_t)x;
> +}
> +
> +/* 
> + * CO-RE relocate single instruction.
> + *
> + * The outline and important points of the algorithm:
> + * 1. For given local type, find corresponding candidate target types.
> + *    Candidate type is a type with the same "essential" name, ignoring
> + *    everything after last triple underscore (___). E.g., `sample`,
> + *    `sample___flavor_one`, `sample___flavor_another_one`, are all candidates
> + *    for each other. Names with triple underscores are referred to as
> + *    "flavors" and are useful, among other things, to allow specifying or
> + *    supporting incompatible variations of the same kernel struct, which
> + *    might differ between different kernel versions and/or build
> + *    configurations.
> + * 2. For each candidate type, try to match local specification to this
> + *    candidate target type. Matching involves finding corresponding
> + *    high-level spec accessors, meaning that all named fields should match,
> + *    as well as all array accesses should be within the actual bounds. Also,
> + *    types should be compatible (see bpf_core_fields_are_compat for details).
> + * 3. It is supported and expected that there might be multiple flavors
> + *    matching the spec. As long as all the specs resolve to the same set of
> + *    offsets across all candidates, there is no error. If there is any
> + *    ambiguity, CO-RE relocation will fail. This is necessary to accommodate
> + *    imperfections of BTF deduplication, which can cause slight duplication of
> + *    the same BTF type, if some directly or indirectly referenced (by
> + *    pointer) type gets resolved to different actual types in different
> + *    object files. If such a situation occurs, deduplicated BTF will end up
> + *    with two (or more) structurally identical types, which differ only in
> + *    the types they refer to through pointers. This should be OK in most
> + *    cases and is not an error.
> + * 4. Candidate types search is performed by linearly scanning through all
> + *    types in target BTF. It is anticipated that this is overall more
> + *    efficient memory-wise and not significantly worse (if not better)
> + *    CPU-wise compared to prebuilding a map from all local type names to
> + *    a list of candidate type names. It's also sped up by caching the resolved
> + *    list of matching candidates for each local "root" type ID that has at
> + *    least one bpf_offset_reloc associated with it. This list is shared
> + *    between multiple relocations for the same type ID and is updated as some
> + *    of the candidates are pruned due to structural incompatibility.
> + */
> +static int bpf_core_reloc_offset(struct bpf_program *prog,
> +				 const struct bpf_offset_reloc *relo,
> +				 int relo_idx,
> +				 const struct btf *local_btf,
> +				 const struct btf *targ_btf,
> +				 struct hashmap *cand_cache)
> +{
> +	const char *prog_name = bpf_program__title(prog, false);
> +	struct bpf_core_spec local_spec, cand_spec, targ_spec;
> +	const void *type_key = u32_to_ptr(relo->type_id);
> +	const struct btf_type *local_type, *cand_type;
> +	const char *local_name, *cand_name;
> +	struct ids_vec *cand_ids;
> +	__u32 local_id, cand_id;
> +	const char *spec_str;
> +	int i, j, err;
> +
> +	local_id = relo->type_id;
> +	local_type = btf__type_by_id(local_btf, local_id);
> +	if (!local_type)
> +		return -EINVAL;
> +
> +	local_name = btf__name_by_offset(local_btf, local_type->name_off);
> +	if (str_is_empty(local_name))
> +		return -EINVAL;
> +
> +	spec_str = btf__name_by_offset(local_btf, relo->access_str_off);
> +	if (str_is_empty(spec_str))
> +		return -EINVAL;
> +
> +	pr_debug("prog '%s': relo #%d: insn_off=%d, [%d] (%s) + %s\n",
> +		 prog_name, relo_idx, relo->insn_off,
> +		 local_id, local_name, spec_str);
> +
> +	err = bpf_core_spec_parse(local_btf, local_id, spec_str, &local_spec);
> +	if (err) {
> +		pr_warning("prog '%s': relo #%d: parsing [%d] (%s) + %s failed: %d\n",
> +			   prog_name, relo_idx, local_id, local_name, spec_str,
> +			   err);
> +		return -EINVAL;
> +	}
> +	pr_debug("prog '%s': relo #%d: [%d] (%s) + %s is off %u, len %d, raw_len %d\n",
> +		 prog_name, relo_idx, local_id, local_name, spec_str,
> +		 local_spec.offset, local_spec.len, local_spec.raw_len);
> +
> +	if (!hashmap__find(cand_cache, type_key, (void **)&cand_ids)) {
> +		cand_ids = bpf_core_find_cands(local_btf, local_id, targ_btf);
> +		if (IS_ERR(cand_ids)) {
> +			pr_warning("prog '%s': relo #%d: target candidate search failed for [%d] (%s) + %s: %ld\n",
> +				   prog_name, relo_idx, local_id, local_name,
> +				   spec_str, PTR_ERR(cand_ids));
> +			return PTR_ERR(cand_ids);
> +		}
> +		err = hashmap__set(cand_cache, type_key, cand_ids, NULL, NULL);
> +		if (err) {
> +			bpf_core_free_cands(cand_ids);
> +			return err;
> +		}
> +	}
> +
> +	for (i = 0, j = 0; i < cand_ids->len; i++) {
> +		cand_id = cand_ids->data[j];
> +		cand_type = btf__type_by_id(targ_btf, cand_id);
> +		cand_name = btf__name_by_offset(targ_btf, cand_type->name_off);
> +
> +		err = bpf_core_spec_match(&local_spec, targ_btf,
> +					  cand_id, &cand_spec);
> +		if (err < 0) {
> +			pr_warning("prog '%s': relo #%d: failed to match spec [%d] (%s) + %s to candidate #%d [%d] (%s): %d\n",
> +				   prog_name, relo_idx, local_id, local_name,
> +				   spec_str, i, cand_id, cand_name, err);
> +			return err;
> +		}
> +		if (err == 0) {
> +			pr_debug("prog '%s': relo #%d: candidate #%d [%d] (%s) doesn't match spec\n",
> +				 prog_name, relo_idx, i, cand_id, cand_name);
> +			continue;
> +		}
> +
> +		pr_debug("prog '%s': relo #%d: candidate #%d ([%d] %s) is off %u, len %d, raw_len %d\n",
> +			 prog_name, relo_idx, i, cand_id, cand_name,
> +			 cand_spec.offset, cand_spec.len, cand_spec.raw_len);
> +
> +		if (j == 0) {
> +			targ_spec = cand_spec;
> +		} else if (cand_spec.offset != targ_spec.offset) {
> +			/* if there are many candidates, they should all
> +			 * resolve to the same offset
> +			 */
> +			pr_warning("prog '%s': relo #%d: candidate #%d ([%d] %s): conflicting offset found (%u != %u)\n",
> +				   prog_name, relo_idx, i, cand_id, cand_name,
> +				   cand_spec.offset, targ_spec.offset);
> +			return -EINVAL;
> +		}
> +
> +		cand_ids->data[j++] = cand_spec.spec[0].type_id;
> +	}
> +
> +	cand_ids->len = j;
> +	if (cand_ids->len == 0) {
> +		pr_warning("prog '%s': relo #%d: no matching targets found for [%d] (%s) + %s\n",
> +			   prog_name, relo_idx, local_id, local_name, spec_str);
> +		return -ESRCH;
> +	}
> +
> +	err = bpf_core_reloc_insn(prog, relo->insn_off,
> +				  local_spec.offset, targ_spec.offset);
> +	if (err) {
> +		pr_warning("prog '%s': relo #%d: failed to patch insn at offset %d: %d\n",
> +			   prog_name, relo_idx, relo->insn_off, err);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +bpf_core_reloc_offsets(struct bpf_object *obj, const char *targ_btf_path)
> +{
> +	const struct btf_ext_info_sec *sec;
> +	const struct bpf_offset_reloc *rec;
> +	const struct btf_ext_info *seg;
> +	struct hashmap_entry *entry;
> +	struct hashmap *cand_cache = NULL;
> +	struct bpf_program *prog;
> +	struct btf *targ_btf;
> +	const char *sec_name;
> +	int i, err = 0;
> +
> +	if (targ_btf_path)
> +		targ_btf = btf__parse_elf(targ_btf_path, NULL);
> +	else
> +		targ_btf = bpf_core_find_kernel_btf();
> +	if (IS_ERR(targ_btf)) {
> +		pr_warning("failed to get target BTF: %ld\n",
> +			   PTR_ERR(targ_btf));
> +		return PTR_ERR(targ_btf);
> +	}
> +
> +	cand_cache = hashmap__new(bpf_core_hash_fn, bpf_core_equal_fn, NULL);
> +	if (IS_ERR(cand_cache)) {
> +		err = PTR_ERR(cand_cache);
> +		goto out;
> +	}
> +
> +	seg = &obj->btf_ext->offset_reloc_info;
> +	for_each_btf_ext_sec(seg, sec) {
> +		sec_name = btf__name_by_offset(obj->btf, sec->sec_name_off);
> +		if (str_is_empty(sec_name)) {
> +			err = -EINVAL;
> +			goto out;
> +		}
> +		prog = bpf_object__find_program_by_title(obj, sec_name);
> +		if (!prog) {
> +			pr_warning("failed to find program '%s' for CO-RE offset relocation\n",
> +				   sec_name);
> +			err = -EINVAL;
> +			goto out;
> +		}
> +
> +		pr_debug("prog '%s': performing %d CO-RE offset relocs\n",
> +			 sec_name, sec->num_info);
> +
> +		for_each_btf_ext_rec(seg, sec, i, rec) {
> +			err = bpf_core_reloc_offset(prog, rec, i, obj->btf,
> +						    targ_btf, cand_cache);
> +			if (err) {
> +				pr_warning("prog '%s': relo #%d: failed to relocate: %d\n",
> +					   sec_name, i, err);
> +				goto out;
> +			}
> +		}
> +	}
> +
> +out:
> +	btf__free(targ_btf);
> +	if (!IS_ERR_OR_NULL(cand_cache)) {
> +		hashmap__for_each_entry(cand_cache, entry, i) {
> +			bpf_core_free_cands(entry->value);
> +		}
> +		hashmap__free(cand_cache);
> +	}
> +	return err;
> +}
> +
> +static int
> +bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
> +{
> +	int err = 0;
> +
> +	if (obj->btf_ext->offset_reloc_info.len)
> +		err = bpf_core_reloc_offsets(obj, targ_btf_path);
> +
> +	return err;
> +}
> +
> static int
> bpf_program__reloc_text(struct bpf_program *prog, struct bpf_object *obj,
> 			struct reloc_desc *relo)
> @@ -2396,14 +3243,21 @@ bpf_program__relocate(struct bpf_program *prog, struct bpf_object *obj)
> 	return 0;
> }
> 
> -
> static int
> -bpf_object__relocate(struct bpf_object *obj)
> +bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
> {
> 	struct bpf_program *prog;
> 	size_t i;
> 	int err;
> 
> +	if (obj->btf_ext) {
> +		err = bpf_object__relocate_core(obj, targ_btf_path);
> +		if (err) {
> +			pr_warning("failed to perform CO-RE relocations: %d\n",
> +				   err);
> +			return err;
> +		}
> +	}
> 	for (i = 0; i < obj->nr_programs; i++) {
> 		prog = &obj->programs[i];
> 
> @@ -2804,7 +3658,7 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
> 	obj->loaded = true;
> 
> 	CHECK_ERR(bpf_object__create_maps(obj), err, out);
> -	CHECK_ERR(bpf_object__relocate(obj), err, out);
> +	CHECK_ERR(bpf_object__relocate(obj, attr->target_btf_path), err, out);
> 	CHECK_ERR(bpf_object__load_progs(obj, attr->log_level), err, out);
> 
> 	return 0;
> diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
> index 5cbf459ece0b..6004bfe1ebf0 100644
> --- a/tools/lib/bpf/libbpf.h
> +++ b/tools/lib/bpf/libbpf.h
> @@ -92,6 +92,7 @@ LIBBPF_API void bpf_object__close(struct bpf_object *object);
> struct bpf_object_load_attr {
> 	struct bpf_object *obj;
> 	int log_level;
> +	const char *target_btf_path;
> };
> 
> /* Load/unload object into/from kernel */
> -- 
> 2.17.1
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-24 19:27 ` [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm Andrii Nakryiko
  2019-07-25 19:32   ` Song Liu
@ 2019-07-25 23:18   ` Alexei Starovoitov
  2019-07-27  6:25     ` Andrii Nakryiko
  2019-07-29 19:56   ` Song Liu
  2 siblings, 1 reply; 42+ messages in thread
From: Alexei Starovoitov @ 2019-07-25 23:18 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, yhs, andrii.nakryiko, kernel-team

On Wed, Jul 24, 2019 at 12:27:34PM -0700, Andrii Nakryiko wrote:
> This patch implements the core logic for BPF CO-RE offsets relocations.
> All the details are described in code comments.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
>  tools/lib/bpf/libbpf.c | 866 ++++++++++++++++++++++++++++++++++++++++-
>  tools/lib/bpf/libbpf.h |   1 +
>  2 files changed, 861 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 8741c39adb1c..86d87bf10d46 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -38,6 +38,7 @@
>  #include <sys/stat.h>
>  #include <sys/types.h>
>  #include <sys/vfs.h>
> +#include <sys/utsname.h>
>  #include <tools/libc_compat.h>
>  #include <libelf.h>
>  #include <gelf.h>
> @@ -47,6 +48,7 @@
>  #include "btf.h"
>  #include "str_error.h"
>  #include "libbpf_internal.h"
> +#include "hashmap.h"
>  
>  #ifndef EM_BPF
>  #define EM_BPF 247
> @@ -1013,16 +1015,22 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
>  }
>  
>  static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> -						     __u32 id)
> +						     __u32 id,
> +						     __u32 *res_id)

I think it would be more readable to format it like:
static const struct btf_type *
skip_mods_and_typedefs(const struct btf *btf, __u32 id, __u32 *res_id)

> +	} else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
> +		if (insn->imm != orig_off)
> +			return -EINVAL;
> +		insn->imm = new_off;
> +		pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
> +			 bpf_program__title(prog, false),
> +			 insn_idx, orig_off, new_off);

I'm pretty sure llvm was not capable of emitting BPF_ST insn.
When did that change?

> +/* 
> + * CO-RE relocate single instruction.
> + *
> + * The outline and important points of the algorithm:
> + * 1. For given local type, find corresponding candidate target types.
> + *    Candidate type is a type with the same "essential" name, ignoring
> + *    everything after last triple underscore (___). E.g., `sample`,
> + *    `sample___flavor_one`, `sample___flavor_another_one`, are all candidates
> + *    for each other. Names with triple underscore are referred to as
> + *    "flavors" and are useful, among other things, to allow to
> + *    specify/support incompatible variations of the same kernel struct, which
> + *    might differ between different kernel versions and/or build
> + *    configurations.

"flavors" is a convention of bpftool btf2c converter, right?
Maybe mention it here with a pointer to the code?

> +	pr_debug("prog '%s': relo #%d: insn_off=%d, [%d] (%s) + %s\n",
> +		 prog_name, relo_idx, relo->insn_off,
> +		 local_id, local_name, spec_str);
> +
> +	err = bpf_core_spec_parse(local_btf, local_id, spec_str, &local_spec);
> +	if (err) {
> +		pr_warning("prog '%s': relo #%d: parsing [%d] (%s) + %s failed: %d\n",
> +			   prog_name, relo_idx, local_id, local_name, spec_str,
> +			   err);
> +		return -EINVAL;
> +	}
> +	pr_debug("prog '%s': relo #%d: [%d] (%s) + %s is off %u, len %d, raw_len %d\n",
> +		 prog_name, relo_idx, local_id, local_name, spec_str,
> +		 local_spec.offset, local_spec.len, local_spec.raw_len);

one warn and two debug that print more or less the same info seems like overkill.

> +	for (i = 0, j = 0; i < cand_ids->len; i++) {
> +		cand_id = cand_ids->data[j];
> +		cand_type = btf__type_by_id(targ_btf, cand_id);
> +		cand_name = btf__name_by_offset(targ_btf, cand_type->name_off);
> +
> +		err = bpf_core_spec_match(&local_spec, targ_btf,
> +					  cand_id, &cand_spec);
> +		if (err < 0) {
> +			pr_warning("prog '%s': relo #%d: failed to match spec [%d] (%s) + %s to candidate #%d [%d] (%s): %d\n",
> +				   prog_name, relo_idx, local_id, local_name,
> +				   spec_str, i, cand_id, cand_name, err);
> +			return err;
> +		}
> +		if (err == 0) {
> +			pr_debug("prog '%s': relo #%d: candidate #%d [%d] (%s) doesn't match spec\n",
> +				 prog_name, relo_idx, i, cand_id, cand_name);
> +			continue;
> +		}
> +
> +		pr_debug("prog '%s': relo #%d: candidate #%d ([%d] %s) is off %u, len %d, raw_len %d\n",
> +			 prog_name, relo_idx, i, cand_id, cand_name,
> +			 cand_spec.offset, cand_spec.len, cand_spec.raw_len);

have the same feeling about 3 printfs above.



* Re: [PATCH bpf-next 06/10] selftests/bpf: add CO-RE relocs array tests
  2019-07-24 19:27 ` [PATCH bpf-next 06/10] selftests/bpf: add CO-RE relocs array tests Andrii Nakryiko
@ 2019-07-25 23:26   ` Alexei Starovoitov
  2019-07-26 23:29     ` Andrii Nakryiko
  0 siblings, 1 reply; 42+ messages in thread
From: Alexei Starovoitov @ 2019-07-25 23:26 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, yhs, andrii.nakryiko, kernel-team

On Wed, Jul 24, 2019 at 12:27:38PM -0700, Andrii Nakryiko wrote:
> Add tests for various array handling/relocation scenarios.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
...
> +
> +#define CORE_READ(dst, src) \
> +	bpf_probe_read(dst, sizeof(*src), __builtin_preserve_access_index(src))

This is the key accessor that all progs will use.
Please split just this single macro into an individual commit and add
a detailed comment about its purpose and
what __builtin_preserve_access_index() does underneath.

> +SEC("raw_tracepoint/sys_enter")
> +int test_core_nesting(void *ctx)
> +{
> +	struct core_reloc_arrays *in = (void *)&data.in;
> +	struct core_reloc_arrays_output *out = (void *)&data.out;
> +
> +	/* in->a[2] */
> +	if (CORE_READ(&out->a2, &in->a[2]))
> +		return 1;
> +	/* in->b[1][2][3] */
> +	if (CORE_READ(&out->b123, &in->b[1][2][3]))
> +		return 1;


* Re: [PATCH bpf-next 06/10] selftests/bpf: add CO-RE relocs array tests
  2019-07-25 23:26   ` Alexei Starovoitov
@ 2019-07-26 23:29     ` Andrii Nakryiko
  0 siblings, 0 replies; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-26 23:29 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Thu, Jul 25, 2019 at 4:26 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Jul 24, 2019 at 12:27:38PM -0700, Andrii Nakryiko wrote:
> > Add tests for various array handling/relocation scenarios.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ...
> > +
> > +#define CORE_READ(dst, src) \
> > +     bpf_probe_read(dst, sizeof(*src), __builtin_preserve_access_index(src))
>
> This is the key accessor that all progs will use.
> Please split just this single macro into individual commit and add
> detailed comment about its purpose and
> what __builtin_preserve_access_index() does underneath.

I'm planning to add a more powerful and flexible set of macros to
support BCC-style a->b->c->d accesses using a single macro. Something
like BPF_CORE_READ(&dst, sizeof(dst), a, b, c, d). I want to move
bpf_helpers.h into libbpf itself first, after some clean-up. How about
I write all that at that time and for now just add this simpler
CORE_READ into bpf_helpers.h?

Relocations recorded by __builtin_preserve_access_index() are
described pretty well in patch #1, which adds bpf_offset_reloc. I'll
double check if I mention this built-in there, and if not - will add
that.

>
> > +SEC("raw_tracepoint/sys_enter")
> > +int test_core_nesting(void *ctx)
> > +{
> > +     struct core_reloc_arrays *in = (void *)&data.in;
> > +     struct core_reloc_arrays_output *out = (void *)&data.out;
> > +
> > +     /* in->a[2] */
> > +     if (CORE_READ(&out->a2, &in->a[2]))
> > +             return 1;
> > +     /* in->b[1][2][3] */
> > +     if (CORE_READ(&out->b123, &in->b[1][2][3]))
> > +             return 1;


* Re: [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-25  5:20       ` Song Liu
@ 2019-07-27  5:11         ` Andrii Nakryiko
  2019-07-29 20:00           ` Song Liu
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-27  5:11 UTC (permalink / raw)
  To: Song Liu
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Wed, Jul 24, 2019 at 10:20 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Jul 24, 2019, at 5:37 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > On Wed, Jul 24, 2019 at 5:00 PM Song Liu <songliubraving@fb.com> wrote:
> >>
> >>
> >>
> >>> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> >>>
> >>> Add support for BPF CO-RE offset relocations. Add section/record
> >>> iteration macros for .BTF.ext. These macro are useful for iterating over
> >>> each .BTF.ext record, either for dumping out contents or later for BPF
> >>> CO-RE relocation handling.
> >>>
> >>> To enable other parts of libbpf to work with .BTF.ext contents, moved
> >>> a bunch of type definitions into libbpf_internal.h.
> >>>
> >>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> >>> ---
> >>> tools/lib/bpf/btf.c             | 64 +++++++++--------------
> >>> tools/lib/bpf/btf.h             |  4 ++
> >>> tools/lib/bpf/libbpf_internal.h | 91 +++++++++++++++++++++++++++++++++
> >>> 3 files changed, 118 insertions(+), 41 deletions(-)
> >>>
> >
> > [...]
> >
> >>> +
> >>> static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
> >>> {
> >>>      const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
> >>> @@ -1004,6 +979,13 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
> >>>      if (err)
> >>>              goto done;
> >>>
> >>> +     /* check if there is offset_reloc_off/offset_reloc_len fields */
> >>> +     if (btf_ext->hdr->hdr_len < sizeof(struct btf_ext_header))
> >>
> >> This check will break when we add more optional sections to btf_ext_header.
> >> Maybe use offsetof() instead?
> >
> > I didn't do it, because there are no fields after offset_reloc_len.
> > But now I though that maybe it would be ok to add zero-sized marker
> > field, kind of like marking off various versions of btf_ext header?
> >
> > Alternatively, I can add offsetofend() macro somewhere in libbpf_internal.h.
> >
> > Do you have any preference?
>
> We only need a stable number to compare against. offsetofend() works.
> Or we can simply have something like
>
>     if (btf_ext->hdr->hdr_len <= offsetof(struct btf_ext_header, offset_reloc_off))
>           goto done;
> or
>     if (btf_ext->hdr->hdr_len < offsetof(struct btf_ext_header, offset_reloc_len))
>           goto done;
>
> Does this make sense?

I think offsetofend() is the cleanest solution, I'll do just that.

>
> Thanks,
> Song


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-25 19:32   ` Song Liu
@ 2019-07-27  6:11     ` Andrii Nakryiko
  2019-07-27 18:59       ` Song Liu
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-27  6:11 UTC (permalink / raw)
  To: Song Liu
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Thu, Jul 25, 2019 at 12:32 PM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> >
> > This patch implements the core logic for BPF CO-RE offsets relocations.
> > All the details are described in code comments.
>
> Some description in the change log is still useful. Please at least
> copy-paste key comments here.

OK, will add some more.

>
> And, this is looooong. I think it is totally possible to split it into
> multiple smaller patches.

I don't really know how to split it further without hurting reviewing
by artificially splitting related code into separate patches. Remove
any single function and the algorithm will be incomplete.

Let me give you a high-level overview of how the pieces are put
together. There are 9 non-trivial functions; let's go over their
purpose in the order in which they are defined in the file:

1. bpf_core_spec_parse()

This one takes bpf_offset_reloc's type_id and accessor string
("0:1:2:3") and parses them into a more convenient bpf_core_spec
data structure, which has the calculated offset and high-level spec
"steps": either named field or array access.

2. bpf_core_find_cands()

Given a local type name, finds all possible target BTF types with the
same name (modulo "flavor" differences; the ___flavor suffix is just ignored).

3. bpf_core_fields_are_compat()

Given a local and target field match, checks that their types are
compatible (so that we don't accidentally match, e.g., an int against
a struct).

4. bpf_core_match_member()

Given a named local field, finds the corresponding field in the target
struct. To understand why it's not trivial, here's an example:

Local type:

struct s___local {
  int a;
};

Target type:

struct s___target {
  struct {
    union {
      int a;
    };
  };
};

For both cases you can access a as s.a, but in the local case, field a
is immediately inside s___local, while for s___target, you have to
traverse two levels deeper into anonymous fields to get to the `a`
inside the anonymous union.

So this function finds that `a` by doing an exhaustive search across all
named fields and anonymous structs/unions. But otherwise it's a pretty
straightforward recursive function.

5. bpf_core_spec_match()

Just goes over high-level spec steps in the local spec and tries to
figure out both high-level and low-level steps for the target type.
Consider the above example. For both structs accessing s.a is one
high-level step, but for s___local it's a single low-level step (just
another :0 in the spec string), while for s___target it's three
low-level steps: ":0:0:0", one step for each BTF type we need to traverse.

Array access is simpler, it's always one high-level and one low-level step.

6. bpf_core_reloc_insn()

Once we match local and target specs and have local and target
offsets, do the relocation: check that the instruction has the expected
local offset and replace it with the target offset.

7. bpf_core_find_kernel_btf()

This is the only function that could be moved into a separate patch, but
it's also very simple. It just iterates over a few known possible
locations for the vmlinux image and, once one is found, tries to parse
.BTF out of it, to be used as the target BTF.

8. bpf_core_reloc_offset()

It combines all the above functions to perform a single relocation.
Parse the spec, get candidates, and for each candidate try to find a
matching target spec. All candidates that matched are cached for the
given local root type.

9. bpf_core_reloc_offsets()

High-level coordination. Iterate over all per-program .BTF.ext offset
reloc sections and each relocation within them. Find the corresponding
program and try to apply relocations one by one.


I think the only non-obvious part here is to understand that a
relocation records the local raw spec with every single anonymous type
traversal, which is not that useful when we try to match it against a
target type, which can have a very different composition but still the
same field access pattern from the C language standpoint (which hides
all those anonymous type traversals from the programmer).

But it should be pretty clear now. Also check the tests; they have
lots of cases showing what's compatible and what's not.


>
> I haven't finished all of it. Please see my comments below of parts I
> have covered.
>
> Thanks,
> Song
>
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> > tools/lib/bpf/libbpf.c | 866 ++++++++++++++++++++++++++++++++++++++++-
> > tools/lib/bpf/libbpf.h |   1 +
> > 2 files changed, 861 insertions(+), 6 deletions(-)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 8741c39adb1c..86d87bf10d46 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -38,6 +38,7 @@
> > #include <sys/stat.h>
> > #include <sys/types.h>
> > #include <sys/vfs.h>
> > +#include <sys/utsname.h>
> > #include <tools/libc_compat.h>
> > #include <libelf.h>
> > #include <gelf.h>
> > @@ -47,6 +48,7 @@
> > #include "btf.h"
> > #include "str_error.h"
> > #include "libbpf_internal.h"
> > +#include "hashmap.h"
> >
> > #ifndef EM_BPF
> > #define EM_BPF 247
> > @@ -1013,16 +1015,22 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
> > }
> >
> > static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> > -                                                  __u32 id)
> > +                                                  __u32 id,
> > +                                                  __u32 *res_id)
> > {
> >       const struct btf_type *t = btf__type_by_id(btf, id);
>
> Maybe have a local "__u32 rid;"
>
> >
> > +     if (res_id)
> > +             *res_id = id;
> > +
>
> and do "rid = id;" here
>
> >       while (true) {
> >               switch (BTF_INFO_KIND(t->info)) {
> >               case BTF_KIND_VOLATILE:
> >               case BTF_KIND_CONST:
> >               case BTF_KIND_RESTRICT:
> >               case BTF_KIND_TYPEDEF:
> > +                     if (res_id)
> > +                             *res_id = t->type;
> and here
>
> >                       t = btf__type_by_id(btf, t->type);
> >                       break;
> >               default:
> and "*res_id = rid;" right before return?

Sure, but why?

>
> > @@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> > static bool get_map_field_int(const char *map_name, const struct btf *btf,
> >                             const struct btf_type *def,
> >                             const struct btf_member *m, __u32 *res) {

[...]

> > +struct bpf_core_spec {
> > +     const struct btf *btf;
> > +     /* high-level spec: named fields and array indicies only */
>
> typo: indices

thanks!

>
> > +     struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
> > +     /* high-level spec length */
> > +     int len;
> > +     /* raw, low-level spec: 1-to-1 with accessor spec string */
> > +     int raw_spec[BPF_CORE_SPEC_MAX_LEN];
> > +     /* raw spec length */
> > +     int raw_len;
> > +     /* field byte offset represented by spec */
> > +     __u32 offset;
> > +};

[...]

> > + *
> > + *   int x = &s->a[3]; // access string = '0:1:2:3'
> > + *
> > + * Low-level spec has 1:1 mapping with each element of access string (it's
> > + * just a parsed access string representation): [0, 1, 2, 3].
> > + *
> > + * High-level spec will capture only 3 points:
> > + *   - initial zero-index access by pointer (&s->... is the same as &s[0]...);
> > + *   - field 'a' access (corresponds to '2' in low-level spec);
> > + *   - array element #3 access (corresponds to '3' in low-level spec).
> > + *
> > + */
>
> IIUC, high-level points are subset of low-level points. How about we introduce
> "anonymous" high-level points, so that high-level points and low-level points
> are 1:1 mapping?

No, that will just hurt and complicate things. See the explanation
above about why we need high-level points (they capture what you as a C
programmer try to achieve, while the low-level spec is what the C
language does in reality, with all the anonymous struct/union
traversals).

What's wrong with this separation? Think about it as recording
"intent" (high-level spec) vs "mechanics" (low-level spec, how exactly
to achieve that intent, in excruciating detail).

>
> > +static int bpf_core_spec_parse(const struct btf *btf,
> > +                            __u32 type_id,
> > +                            const char *spec_str,
> > +                            struct bpf_core_spec *spec)
> > +{
> > +     int access_idx, parsed_len, i;
> > +     const struct btf_type *t;
> > +     __u32 id = type_id;
> > +     const char *name;
> > +     __s64 sz;
> > +
> > +     if (str_is_empty(spec_str) || *spec_str == ':')
> > +             return -EINVAL;
> > +
> > +     memset(spec, 0, sizeof(*spec));
> > +     spec->btf = btf;
> > +
> > +     /* parse spec_str="0:1:2:3:4" into array raw_spec=[0, 1, 2, 3, 4] */
> > +     while (*spec_str) {
> > +             if (*spec_str == ':')
> > +                     ++spec_str;
> > +             if (sscanf(spec_str, "%d%n", &access_idx, &parsed_len) != 1)
> > +                     return -EINVAL;
> > +             if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> > +                     return -E2BIG;
> > +             spec_str += parsed_len;
> > +             spec->raw_spec[spec->raw_len++] = access_idx;
> > +     }
> > +
> > +     if (spec->raw_len == 0)
> > +             return -EINVAL;
> > +
> > +     for (i = 0; i < spec->raw_len; i++) {
> > +             t = skip_mods_and_typedefs(btf, id, &id);
> > +             if (!t)
> > +                     return -EINVAL;
> > +
> > +             access_idx = spec->raw_spec[i];
> > +
> > +             if (i == 0) {
> > +                     /* first spec value is always reloc type array index */
> > +                     spec->spec[spec->len].type_id = id;
> > +                     spec->spec[spec->len].idx = access_idx;
> > +                     spec->len++;
> > +
> > +                     sz = btf__resolve_size(btf, id);
> > +                     if (sz < 0)
> > +                             return sz;
> > +                     spec->offset += access_idx * sz;
>           spec->offset = access_idx * sz;  should be enough

No. spec->offset is carefully maintained across multiple low-level
steps, as we traverse down embedded structs/unions.

Think about, e.g.:

struct s {
    int a;
    struct {
        int b;
    };
};

Imagine you are trying to match s.b access. With what you propose
you'll end up with offset 0, but it should be 4.

>
> > +                     continue;
> > +             }
>
> Maybe pull i == 0 case out of the for loop?
>
> > +
> > +             if (btf_is_composite(t)) {

[...]

> > +
> > +     if (spec->len == 0)
> > +             return -EINVAL;
>
> Can this ever happen?

Not really, because I already check raw_len == 0 and exit with an
error. I'll remove it.

>
> > +
> > +     return 0;
> > +}
> > +

[...]

> > +
> > +/*
> > + * Given single high-level accessor (either named field or array index) in
> > + * local type, find corresponding high-level accessor for a target type. Along
> > + * the way, maintain low-level spec for target as well. Also keep updating
> > + * target offset.
> > + */
>
> Please describe the recursive algorithm here. I am kinda lost.

Explained above. I'll extend the description a bit. But it's just a
recursive exhaustive search:
1. if a struct field is anonymous and is a struct/union, go one level
deeper and try to find a field with the given name inside it.
2. if the field has a name and it matches what we are searching for,
check type compatibility. It has to be compatible, so if it's not,
then it's not a match.

> Also, please document the meaning of zero, positive, negative return values.

Ok. It's the standard convention: <0 - error, 0 - false, 1 - true.

>
> > +static int bpf_core_match_member(const struct btf *local_btf,
> > +                              const struct bpf_core_accessor *local_acc,
> > +                              const struct btf *targ_btf,
> > +                              __u32 targ_id,
> > +                              struct bpf_core_spec *spec,
> > +                              __u32 *next_targ_id)
> > +{

[...]

> > +
> > +/*
> > + * Try to match local spec to a target type and, if successful, produce full
> > + * target spec (high-level, low-level + offset).
> > + */
> > +static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
> > +                            const struct btf *targ_btf, __u32 targ_id,
> > +                            struct bpf_core_spec *targ_spec)
> > +{
> > +     const struct btf_type *targ_type;
> > +     const struct bpf_core_accessor *local_acc;
> > +     struct bpf_core_accessor *targ_acc;
> > +     int i, sz, matched;
> > +
> > +     memset(targ_spec, 0, sizeof(*targ_spec));
> > +     targ_spec->btf = targ_btf;
> > +
> > +     local_acc = &local_spec->spec[0];
> > +     targ_acc = &targ_spec->spec[0];
> > +
> > +     for (i = 0; i < local_spec->len; i++, local_acc++, targ_acc++) {
> > +             targ_type = skip_mods_and_typedefs(targ_spec->btf, targ_id,
> > +                                                &targ_id);
> > +             if (!targ_type)
> > +                     return -EINVAL;
> > +
> > +             if (local_acc->name) {
> > +                     if (!btf_is_composite(targ_type))
> > +                             return 0;
> > +
> > +                     matched = bpf_core_match_member(local_spec->btf,
> > +                                                     local_acc,
> > +                                                     targ_btf, targ_id,
> > +                                                     targ_spec, &targ_id);
> > +                     if (matched <= 0)
> > +                             return matched;
> > +             } else {
> > +                     /* for i=0, targ_id is already treated as array element
> > +                      * type (because it's the original struct), for others
> > +                      * we should find array element type first
> > +                      */
> > +                     if (i > 0) {
>
> i == 0 case would go into "if (local_acc->name)" branch, no?

No, i == 0 is always an array access. s->a.b.c is the same as
s[0].a.b.c, so the relocation's first spec element is always either
zero for pointer access or a non-negative index for array access. But
it is always an array access.

>
> > +                             const struct btf_array *a;
> > +
> > +                             if (!btf_is_array(targ_type))
> > +                                     return 0;
> > +
> > +                             a = (void *)(targ_type + 1);

[...]

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-25 23:18   ` Alexei Starovoitov
@ 2019-07-27  6:25     ` Andrii Nakryiko
  2019-07-27 17:00       ` Alexei Starovoitov
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-27  6:25 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Thu, Jul 25, 2019 at 4:18 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Jul 24, 2019 at 12:27:34PM -0700, Andrii Nakryiko wrote:
> > This patch implements the core logic for BPF CO-RE offsets relocations.
> > All the details are described in code comments.
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> >  tools/lib/bpf/libbpf.c | 866 ++++++++++++++++++++++++++++++++++++++++-
> >  tools/lib/bpf/libbpf.h |   1 +
> >  2 files changed, 861 insertions(+), 6 deletions(-)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 8741c39adb1c..86d87bf10d46 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -38,6 +38,7 @@
> >  #include <sys/stat.h>
> >  #include <sys/types.h>
> >  #include <sys/vfs.h>
> > +#include <sys/utsname.h>
> >  #include <tools/libc_compat.h>
> >  #include <libelf.h>
> >  #include <gelf.h>
> > @@ -47,6 +48,7 @@
> >  #include "btf.h"
> >  #include "str_error.h"
> >  #include "libbpf_internal.h"
> > +#include "hashmap.h"
> >
> >  #ifndef EM_BPF
> >  #define EM_BPF 247
> > @@ -1013,16 +1015,22 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
> >  }
> >
> >  static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> > -                                                  __u32 id)
> > +                                                  __u32 id,
> > +                                                  __u32 *res_id)
>
> I think it would be more readable to format it like:
> static const struct btf_type *
> skip_mods_and_typedefs(const struct btf *btf, __u32 id, __u32 *res_id)

Ok.

>
> > +     } else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
> > +             if (insn->imm != orig_off)
> > +                     return -EINVAL;
> > +             insn->imm = new_off;
> > +             pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
> > +                      bpf_program__title(prog, false),
> > +                      insn_idx, orig_off, new_off);
>
> I'm pretty sure llvm was not capable of emitting BPF_ST insn.
> When did that change?

I just looked at possible instructions that could have a 32-bit
immediate value. This is `*(rX) = offsetof(struct s, field)`, which I
thought is conceivable. Do you think I should drop it?

>
> > +/*
> > + * CO-RE relocate single instruction.
> > + *
> > + * The outline and important points of the algorithm:
> > + * 1. For given local type, find corresponding candidate target types.
> > + *    Candidate type is a type with the same "essential" name, ignoring
> > + *    everything after last triple underscore (___). E.g., `sample`,
> > + *    `sample___flavor_one`, `sample___flavor_another_one`, are all candidates
> > + *    for each other. Names with triple underscore are referred to as
> > + *    "flavors" and are useful, among other things, to allow to
> > + *    specify/support incompatible variations of the same kernel struct, which
> > + *    might differ between different kernel versions and/or build
> > + *    configurations.
>
> "flavors" is a convention of bpftool btf2c converter, right?
> May be mention it here with pointer to the code?

Yes, the btf2c converter generates "flavors" on type name conflicts
(adding ___2, ___3), but that's not the only use case. It's a general
way to have independent, incompatible definitions for the same target
type. E.g., locally in your BPF program you can define two
thread_structs to accommodate a field rename between kernel versions:

struct thread_struct___before_47 {
    long fs;
};

struct thread_struct___after_47 {
    long fsbase;
};

Then with conditional relocations you'll use one of them to "extract"
it from the real kernel's thread_struct:

void *fsbase;

if (LINUX_VERSION < 407)
    BPF_CORE_READ(&fsbase, sizeof(fsbase),
                  &((struct thread_struct___before_47 *)&thread)->fs);
else
    BPF_CORE_READ(&fsbase, sizeof(fsbase),
                  &((struct thread_struct___after_47 *)&thread)->fsbase);

So it works both ways (for local and target types) by design. I can
mention that the btf2c converter uses this convention for types with
conflicting names, but btf2c is not the definition of what a flavor is.

>
> > +     pr_debug("prog '%s': relo #%d: insn_off=%d, [%d] (%s) + %s\n",
> > +              prog_name, relo_idx, relo->insn_off,
> > +              local_id, local_name, spec_str);
> > +
> > +     err = bpf_core_spec_parse(local_btf, local_id, spec_str, &local_spec);
> > +     if (err) {
> > +             pr_warning("prog '%s': relo #%d: parsing [%d] (%s) + %s failed: %d\n",
> > +                        prog_name, relo_idx, local_id, local_name, spec_str,
> > +                        err);
> > +             return -EINVAL;
> > +     }
> > +     pr_debug("prog '%s': relo #%d: [%d] (%s) + %s is off %u, len %d, raw_len %d\n",
> > +              prog_name, relo_idx, local_id, local_name, spec_str,
> > +              local_spec.offset, local_spec.len, local_spec.raw_len);
>
> one warn and two debug that print more or less the same info seems like overkill.

Only one of them will ever be emitted, though. And this information is
and will be invaluable for debugging issues/explaining behavior in the
future once adoption starts. So I'm inclined to keep them, at least
for now. But I think I'll extract spec formatting into a separate
reusable function, which will make this significantly less verbose.

>
> > +     for (i = 0, j = 0; i < cand_ids->len; i++) {
> > +             cand_id = cand_ids->data[j];
> > +             cand_type = btf__type_by_id(targ_btf, cand_id);
> > +             cand_name = btf__name_by_offset(targ_btf, cand_type->name_off);
> > +
> > +             err = bpf_core_spec_match(&local_spec, targ_btf,
> > +                                       cand_id, &cand_spec);
> > +             if (err < 0) {
> > +                     pr_warning("prog '%s': relo #%d: failed to match spec [%d] (%s) + %s to candidate #%d [%d] (%s): %d\n",
> > +                                prog_name, relo_idx, local_id, local_name,
> > +                                spec_str, i, cand_id, cand_name, err);
> > +                     return err;
> > +             }
> > +             if (err == 0) {
> > +                     pr_debug("prog '%s': relo #%d: candidate #%d [%d] (%s) doesn't match spec\n",
> > +                              prog_name, relo_idx, i, cand_id, cand_name);
> > +                     continue;
> > +             }
> > +
> > +             pr_debug("prog '%s': relo #%d: candidate #%d ([%d] %s) is off %u, len %d, raw_len %d\n",
> > +                      prog_name, relo_idx, i, cand_id, cand_name,
> > +                      cand_spec.offset, cand_spec.len, cand_spec.raw_len);
>
> have the same feeling about 3 printfs above.
>

Same as above.


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27  6:25     ` Andrii Nakryiko
@ 2019-07-27 17:00       ` Alexei Starovoitov
  2019-07-27 18:24         ` Andrii Nakryiko
  0 siblings, 1 reply; 42+ messages in thread
From: Alexei Starovoitov @ 2019-07-27 17:00 UTC (permalink / raw)
  To: Andrii Nakryiko, Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Daniel Borkmann, Yonghong Song,
	Kernel Team

On 7/26/19 11:25 PM, Andrii Nakryiko wrote:
>>> +     } else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
>>> +             if (insn->imm != orig_off)
>>> +                     return -EINVAL;
>>> +             insn->imm = new_off;
>>> +             pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
>>> +                      bpf_program__title(prog, false),
>>> +                      insn_idx, orig_off, new_off);
>> I'm pretty sure llvm was not capable of emitting BPF_ST insn.
>> When did that change?
> I just looked at possible instructions that could have 32-bit
> immediate value. This is `*(rX) = offsetof(struct s, field)`, which I
> though is conceivable. Do you think I should drop it?

Just trying to point out that since it's not emitted by llvm,
this code is likely untested?
Or have you created a bpf asm test for this?




* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27 17:00       ` Alexei Starovoitov
@ 2019-07-27 18:24         ` Andrii Nakryiko
  2019-07-27 21:29           ` Yonghong Song
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-27 18:24 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Andrii Nakryiko, bpf, Networking,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Sat, Jul 27, 2019 at 10:00 AM Alexei Starovoitov <ast@fb.com> wrote:
>
> On 7/26/19 11:25 PM, Andrii Nakryiko wrote:
> >>> +     } else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
> >>> +             if (insn->imm != orig_off)
> >>> +                     return -EINVAL;
> >>> +             insn->imm = new_off;
> >>> +             pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
> >>> +                      bpf_program__title(prog, false),
> >>> +                      insn_idx, orig_off, new_off);
> >> I'm pretty sure llvm was not capable of emitting BPF_ST insn.
> >> When did that change?
> > I just looked at possible instructions that could have 32-bit
> > immediate value. This is `*(rX) = offsetof(struct s, field)`, which I
> > though is conceivable. Do you think I should drop it?
>
> Just trying to point out that since it's not emitted by llvm
> this code is likely untested ?
> Or you've created a bpf asm test for this?


Yeah, it's untested right now. Let me try to come up with LLVM
assembly + relocation (not yet sure how/whether the builtin works with
inline assembly). If that works out, I'll leave this; if not, I'll
drop the BPF_ST|BPF_MEM part.

>
>


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27  6:11     ` Andrii Nakryiko
@ 2019-07-27 18:59       ` Song Liu
  2019-07-27 19:09         ` Andrii Nakryiko
  0 siblings, 1 reply; 42+ messages in thread
From: Song Liu @ 2019-07-27 18:59 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team



> On Jul 26, 2019, at 11:11 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> 
> On Thu, Jul 25, 2019 at 12:32 PM Song Liu <songliubraving@fb.com> wrote:
>> 
>> 
>> 
>>> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
>>> 
>>> This patch implements the core logic for BPF CO-RE offsets relocations.
>>> All the details are described in code comments.
>> 
>> Some description in the change log is still useful. Please at least
>> copy-paste key comments here.
> 
> OK, will add some more.
> 
>> 
>> And, this is looooong. I think it is totally possible to split it into
>> multiple smaller patches.
> 
> I don't really know how to split it further without hurting review
> by artificially splitting related code into separate patches. Remove
> any single function and the algorithm will be incomplete.
> 
> Let me give you some high-level overview of how pieces are put
> together. There are 9 non-trivial functions, let's go over their
> purpose in the order in which they are defined in the file:
> 
> 1. bpf_core_spec_parse()
> 
> This one takes bpf_offset_reloc's type_id and accessor string
> ("0:1:2:3") and parses it into a more convenient bpf_core_spec
> data structure, which has the calculated offset and high-level spec
> "steps": either named field or array access.
> 
> 2. bpf_core_find_cands()
> 
> Given local type name, finds all possible target BTF types with same
> name (modulo "flavor" differences, ___flavor suffix is just ignored).
> 
> 3. bpf_core_fields_are_compat()
> 
> Given local and target field match, checks that their types are
> compatible (so that we don't accidentally match, e.g., int against
> struct).
> 
> 4. bpf_core_match_member()
> 
> Given named local field, find corresponding field in target struct. To
> understand why it's not trivial, here's an example:
> 
> Local type:
> 
> struct s___local {
>  int a;
> };
> 
> Target type:
> 
> struct s___target {
>  struct {
>    union {
>      int a;
>    };
>  };
> };
> 
> For both cases you can access a as s.a, but in local case, field a is
> immediately inside s___local, while for s___target, you have to
> traverse two levels deeper into anonymous fields to get to an `a`
> inside anonymous union.
> 
> So this function finds that `a` by doing an exhaustive search across
> all named fields and anonymous struct/unions. But otherwise it's a
> pretty straightforward recursive function.
> 
> bpf_core_spec_match()
> 
> Just goes over high-level spec steps in local spec and tries to figure
> out both high-level and low-level steps for the target type. Consider the
> above example. For both structs accessing s.a is one high-level step,
> but for s___local it's single low-level step (just another :0 in spec
> string), while for s___target it's three low-level steps: ":0:0:0",
> one step for each BTF type we need to traverse.
> 
> Array access is simpler, it's always one high-level and one low-level step.
> 
> bpf_core_reloc_insn()
> 
> Once we match local and target specs and have local and target
> offsets, do the relocations - check that instruction has expected
> local offset and replace it with target offset.
> 
> bpf_core_find_kernel_btf()
> 
> This is the only function that can be moved into separate patch, but
> it's also very simple. It just iterates over few known possible
> locations for vmlinux image and once found, tries to parse .BTF out of
> it, to be used as target BTF.
> 
> bpf_core_reloc_offset()
> 
> It combines all the above functions to perform a single relocation.
> Parse spec, get candidates, for each candidate try to find matching
> target spec. All candidates that matched are cached for given local
> root type.

Thanks for these explanations. They are really helpful.

I think an example explaining each step of bpf_core_reloc_offset()
would be very helpful. Something like:

Example:

struct s {
	int a;
	struct {
		int b;
		bool c;
	};
};

To get offset for c, we do:

bpf_core_reloc_offset() {
	
	/* input data: xxx */

	/* first step: bpf_core_spec_parse() */

	/* data after first step */

	/* second step: bpf_core_find_cands() */

	/* candidate A and B after second step */

	...
}

Well, it requires quite some work to document this way. Please let me
know if you feel this is overkill.

> 
> bpf_core_reloc_offsets()
> 
> High-level coordination. Iterate over all per-program .BTF.ext offset
> reloc sections, each relocation within them. Find corresponding
> program and try to apply relocations one by one.
> 
> 
> I think the only non-obvious part here is to understand that
> relocation records local raw spec with every single anonymous type
> traversal, which is not that useful when we try to match it against
> target type, which can have very different composition, but still the
> same field access pattern, from C language standpoint (which hides all
> those anonymous type traversals from programmer).
> 
> But it should be pretty clear now, plus also check tests, they have
> lots of cases showing what's compatible and what's not.

I see. I will review the tests. 

>>> 
>>> static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
>>> -                                                  __u32 id)
>>> +                                                  __u32 id,
>>> +                                                  __u32 *res_id)
>>> {
>>>      const struct btf_type *t = btf__type_by_id(btf, id);
>> 
>> Maybe have a local "__u32 rid;"
>> 
>>> 
>>> +     if (res_id)
>>> +             *res_id = id;
>>> +
>> 
>> and do "rid = id;" here
>> 
>>>      while (true) {
>>>              switch (BTF_INFO_KIND(t->info)) {
>>>              case BTF_KIND_VOLATILE:
>>>              case BTF_KIND_CONST:
>>>              case BTF_KIND_RESTRICT:
>>>              case BTF_KIND_TYPEDEF:
>>> +                     if (res_id)
>>> +                             *res_id = t->type;
>> and here
>> 
>>>                      t = btf__type_by_id(btf, t->type);
>>>                      break;
>>>              default:
>> and "*res_id = rid;" right before return?
> 
> Sure, but why?

I think it is cleaner that way. But feel free to ignore if you
think otherwise. 

> 
>> 
>>> @@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
>>> static bool get_map_field_int(const char *map_name, const struct btf *btf,
>>>                            const struct btf_type *def,
>>>                            const struct btf_member *m, __u32 *res) {
> 
> [...]
> 
>>> +struct bpf_core_spec {
>>> +     const struct btf *btf;
>>> +     /* high-level spec: named fields and array indicies only */
>> 
>> typo: indices
> 
> thanks!
> 
>> 
>>> +     struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
>>> +     /* high-level spec length */
>>> +     int len;
>>> +     /* raw, low-level spec: 1-to-1 with accessor spec string */
>>> +     int raw_spec[BPF_CORE_SPEC_MAX_LEN];
>>> +     /* raw spec length */
>>> +     int raw_len;
>>> +     /* field byte offset represented by spec */
>>> +     __u32 offset;
>>> +};
> 
> [...]
> 
>>> + *
>>> + *   int x = &s->a[3]; // access string = '0:1:2:3'
>>> + *
>>> + * Low-level spec has 1:1 mapping with each element of access string (it's
>>> + * just a parsed access string representation): [0, 1, 2, 3].
>>> + *
>>> + * High-level spec will capture only 3 points:
>>> + *   - intial zero-index access by pointer (&s->... is the same as &s[0]...);
>>> + *   - field 'a' access (corresponds to '2' in low-level spec);
>>> + *   - array element #3 access (corresponds to '3' in low-level spec).
>>> + *
>>> + */
>> 
>> IIUC, high-level points are subset of low-level points. How about we introduce
>> "anonymous" high-level points, so that high-level points and low-level points
>> are 1:1 mapping?
> 
> No, that will just hurt and complicate things. See above explanation
> about why we need high-level points (it's what you as C programmer try
> to achieve vs low-level spec is what C-language does in reality, with
> all the anonymous struct/union traversal).
> 
> What's wrong with this separation? Think about it as recording
> "intent" (high-level spec) vs "mechanics" (low-level spec, how exactly
> to achieve that intent, in excruciating details).

There is nothing wrong with the separation. I just personally think it is
cleaner the other way. That's why I raised the question. 

I will go with your assessment, as you looked into this much more than 
I did. :-)

[...]

>>> +
>>> +     memset(spec, 0, sizeof(*spec));
>>> +     spec->btf = btf;
>>> +
>>> +     /* parse spec_str="0:1:2:3:4" into array raw_spec=[0, 1, 2, 3, 4] */
>>> +     while (*spec_str) {
>>> +             if (*spec_str == ':')
>>> +                     ++spec_str;
>>> +             if (sscanf(spec_str, "%d%n", &access_idx, &parsed_len) != 1)
>>> +                     return -EINVAL;
>>> +             if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
>>> +                     return -E2BIG;
>>> +             spec_str += parsed_len;
>>> +             spec->raw_spec[spec->raw_len++] = access_idx;
>>> +     }
>>> +
>>> +     if (spec->raw_len == 0)
>>> +             return -EINVAL;
>>> +
>>> +     for (i = 0; i < spec->raw_len; i++) {
>>> +             t = skip_mods_and_typedefs(btf, id, &id);
>>> +             if (!t)
>>> +                     return -EINVAL;
>>> +
>>> +             access_idx = spec->raw_spec[i];
>>> +
>>> +             if (i == 0) {
>>> +                     /* first spec value is always reloc type array index */
>>> +                     spec->spec[spec->len].type_id = id;
>>> +                     spec->spec[spec->len].idx = access_idx;
>>> +                     spec->len++;
>>> +
>>> +                     sz = btf__resolve_size(btf, id);
>>> +                     if (sz < 0)
>>> +                             return sz;
>>> +                     spec->offset += access_idx * sz;
>>          spec->offset = access_idx * sz;  should be enough
> 
> No. spec->offset is carefully maintained across multiple low-level
> steps, as we traverse down embedded structs/unions.
> 
> Think about, e.g.:
> 
> struct s {
>    int a;
>    struct {
>        int b;
>    };
> };
> 
> Imagine you are trying to match s.b access. With what you propose
> you'll end up with offset 0, but it should be 4.

Hmm... this is just for i == 0, right? Which line updates spec->offset
after "memset(spec, 0, sizeof(*spec));"?

> 
>> 
>>> +                     continue;
>>> +             }
>> 
>> Maybe pull i == 0 case out of the for loop?
>> 
>>> +
>>> +             if (btf_is_composite(t)) {
> 
> [...]
> 
>>> +
>>> +     if (spec->len == 0)
>>> +             return -EINVAL;
>> 
>> Can this ever happen?
> 
> Not really, because I already check raw_len == 0 and exit with error.
> I'll remove.
> 
>> 
>>> +
>>> +     return 0;
>>> +}
>>> +
> 
> [...]
> 
>>> +
>>> +/*
>>> + * Given single high-level accessor (either named field or array index) in
>>> + * local type, find corresponding high-level accessor for a target type. Along
>>> + * the way, maintain low-level spec for target as well. Also keep updating
>>> + * target offset.
>>> + */
>> 
>> Please describe the recursive algorithm here. I am kinda lost.
> 
> Explained above. I'll extend description a bit. But it's just
> recursive exhaustive search:
> 1. if struct field is anonymous and is struct/union, go one level
> deeper and try to find field with given name inside those.
> 2. if field has name and it matched what we are searching - check type
> compatibility. It has to be compatible, so if it's not, then it's not
> a match.
> 
>> Also, please document the meaning of zero, positive, negative return values.
> 
> Ok. It's standard <0 - error, 0 - false, 1 - true.
> 
>> 
>>> +static int bpf_core_match_member(const struct btf *local_btf,
>>> +                              const struct bpf_core_accessor *local_acc,
>>> +                              const struct btf *targ_btf,
>>> +                              __u32 targ_id,
>>> +                              struct bpf_core_spec *spec,
>>> +                              __u32 *next_targ_id)
>>> +{
> 
> [...]
> 
>>> +             if (local_acc->name) {
>>> +                     if (!btf_is_composite(targ_type))
>>> +                             return 0;
>>> +
>>> +                     matched = bpf_core_match_member(local_spec->btf,
>>> +                                                     local_acc,
>>> +                                                     targ_btf, targ_id,
>>> +                                                     targ_spec, &targ_id);
>>> +                     if (matched <= 0)
>>> +                             return matched;
>>> +             } else {
>>> +                     /* for i=0, targ_id is already treated as array element
>>> +                      * type (because it's the original struct), for others
>>> +                      * we should find array element type first
>>> +                      */
>>> +                     if (i > 0) {
>> 
>> i == 0 case would go into "if (local_acc->name)" branch, no?
> 
> No, i == 0 is always an array access. s->a.b.c is the same as
> s[0].a.b.c, so relocation's first spec element is always either zero
> for pointer access or any non-negative index for array access. But it
> is always array access.

I see. Thanks for the explanation.

Song


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27 18:59       ` Song Liu
@ 2019-07-27 19:09         ` Andrii Nakryiko
  2019-07-28  0:24           ` Song Liu
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-27 19:09 UTC (permalink / raw)
  To: Song Liu
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Sat, Jul 27, 2019 at 11:59 AM Song Liu <songliubraving@fb.com> wrote:
>
>
>
> > On Jul 26, 2019, at 11:11 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> >
> > On Thu, Jul 25, 2019 at 12:32 PM Song Liu <songliubraving@fb.com> wrote:
> >>
> >>
> >>
> >>> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> >>>
> >>> This patch implements the core logic for BPF CO-RE offsets relocations.
> >>> All the details are described in code comments.
> >>
> >> Some description in the change log is still useful. Please at least
> >> copy-paste key comments here.
> >
> > OK, will add some more.
> >
> >>
> >> And, this is looooong. I think it is totally possible to split it into
> >> multiple smaller patches.
> >
> > I don't really know how to split it further without hurting review
> > by artificially splitting related code into separate patches. Remove
> > any single function and the algorithm will be incomplete.
> >
> > Let me give you a high-level overview of how the pieces are put
> > together. There are 9 non-trivial functions; let's go over their
> > purpose in the order in which they are defined in the file:
> >
> > 1. bpf_core_spec_parse()
> >
> > This one takes bpf_offset_reloc's type_id and accessor string
> > ("0:1:2:3") and parses them into a more convenient bpf_core_spec
> > data structure, which has a calculated offset and high-level spec
> > "steps": either a named field or an array access.
> >
> > 2. bpf_core_find_cands()
> >
> > Given a local type name, finds all possible target BTF types with the
> > same name (modulo "flavor" differences; the ___flavor suffix is just ignored).
> >
> > 3. bpf_core_fields_are_compat()
> >
> > Given local and target field match, checks that their types are
> > compatible (so that we don't accidentally match, e.g., int against
> > struct).
> >
> > 4. bpf_core_match_member()
> >
> > Given a named local field, finds the corresponding field in the target
> > struct. To understand why it's not trivial, here's an example:
> >
> > Local type:
> >
> > struct s___local {
> >  int a;
> > };
> >
> > Target type:
> >
> > struct s___target {
> >  struct {
> >    union {
> >      int a;
> >    };
> >  };
> > };
> >
> > In both cases you can access a as s.a, but in the local case field a is
> > immediately inside s___local, while for s___target you have to
> > traverse two levels of anonymous fields to get to the `a` inside the
> > anonymous union.
> >
> > So this function finds that `a` by doing an exhaustive search across all
> > named fields and anonymous structs/unions. Otherwise it's a pretty
> > straightforward recursive function.
> >
> > bpf_core_spec_match()
> >
> > Just goes over the high-level spec steps in the local spec and tries to
> > figure out both high-level and low-level steps for the target type.
> > Consider the above example. For both structs, accessing s.a is one
> > high-level step, but for s___local it's a single low-level step (just
> > another :0 in the spec string), while for s___target it's three
> > low-level steps: ":0:0:0", one step for each BTF type we need to traverse.
> >
> > Array access is simpler, it's always one high-level and one low-level step.
> >
> > bpf_core_reloc_insn()
> >
> > Once we match local and target specs and have local and target
> > offsets, do the relocation: check that the instruction has the expected
> > local offset and replace it with the target offset.
> >
> > bpf_core_find_kernel_btf()
> >
> > This is the only function that could be moved into a separate patch,
> > but it's also very simple. It just iterates over a few known possible
> > locations for the vmlinux image and, once one is found, tries to parse
> > .BTF out of it, to be used as the target BTF.
> >
> > bpf_core_reloc_offset()
> >
> > It combines all the above functions to perform a single relocation:
> > parse the spec, get candidates, and for each candidate try to find a
> > matching target spec. All candidates that matched are cached for the
> > given local root type.
>
> Thanks for these explanation. They are really helpful.
>
> I think some example explaining each step of bpf_core_reloc_offset()
> will be very helpful. Something like:
>
> Example:
>
> struct s {
>         int a;
>         struct {
>                 int b;
>                 bool c;
>         };
> };
>
> To get offset for c, we do:
>
> bpf_core_reloc_offset() {
>
>         /* input data: xxx */
>
>         /* first step: bpf_core_spec_parse() */
>
>         /* data after first step */
>
>         /* second step: bpf_core_find_cands() */
>
>         /* candidate A and B after second step */
>
>         ...
> }
>
> Well, it requires quite some work to document this way. Please let me
> know if you feel this is an overkill.

Yeah :) And it's not just the work: I think it's bad if comments
become too specific and document very low-level steps, because the code
might evolve and comments can quickly get out of sync and just add to
confusion. That's why I tried to document the high-level ideas, leaving
it up to the source code to be the ultimate reference for the minute
details.

>
> >
> > bpf_core_reloc_offsets()
> >
> > High-level coordination. Iterate over all per-program .BTF.ext offset
> > reloc sections and each relocation within them. Find the corresponding
> > program and try to apply the relocations one by one.
> >
> >
> > I think the only non-obvious part here is understanding that the
> > relocation records the local raw spec with every single anonymous type
> > traversal. That is not directly useful when we try to match it against
> > a target type, which can have a very different composition but still
> > the same field access pattern from the C language standpoint (which
> > hides all those anonymous type traversals from the programmer).
> >
> > But it should be pretty clear now, plus also check tests, they have
> > lots of cases showing what's compatible and what's not.
>
> I see. I will review the tests.
>
> >>>
> >>> static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> >>> -                                                  __u32 id)
> >>> +                                                  __u32 id,
> >>> +                                                  __u32 *res_id)
> >>> {
> >>>      const struct btf_type *t = btf__type_by_id(btf, id);
> >>
> >> Maybe have a local "__u32 rid;"
> >>
> >>>
> >>> +     if (res_id)
> >>> +             *res_id = id;
> >>> +
> >>
> >> and do "rid = id;" here
> >>
> >>>      while (true) {
> >>>              switch (BTF_INFO_KIND(t->info)) {
> >>>              case BTF_KIND_VOLATILE:
> >>>              case BTF_KIND_CONST:
> >>>              case BTF_KIND_RESTRICT:
> >>>              case BTF_KIND_TYPEDEF:
> >>> +                     if (res_id)
> >>> +                             *res_id = t->type;
> >> and here
> >>
> >>>                      t = btf__type_by_id(btf, t->type);
> >>>                      break;
> >>>              default:
> >> and "*res_id = rid;" right before return?
> >
> > Sure, but why?
>
> I think it is cleaner that way. But feel free to ignore if you
> think otherwise.
>
> >
> >>
> >>> @@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> >>> static bool get_map_field_int(const char *map_name, const struct btf *btf,
> >>>                            const struct btf_type *def,
> >>>                            const struct btf_member *m, __u32 *res) {
> >
> > [...]
> >
> >>> +struct bpf_core_spec {
> >>> +     const struct btf *btf;
> >>> +     /* high-level spec: named fields and array indicies only */
> >>
> >> typo: indices
> >
> > thanks!
> >
> >>
> >>> +     struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
> >>> +     /* high-level spec length */
> >>> +     int len;
> >>> +     /* raw, low-level spec: 1-to-1 with accessor spec string */
> >>> +     int raw_spec[BPF_CORE_SPEC_MAX_LEN];
> >>> +     /* raw spec length */
> >>> +     int raw_len;
> >>> +     /* field byte offset represented by spec */
> >>> +     __u32 offset;
> >>> +};
> >
> > [...]
> >
> >>> + *
> >>> + *   int x = &s->a[3]; // access string = '0:1:2:3'
> >>> + *
> >>> + * Low-level spec has 1:1 mapping with each element of access string (it's
> >>> + * just a parsed access string representation): [0, 1, 2, 3].
> >>> + *
> >>> + * High-level spec will capture only 3 points:
> >>> + *   - intial zero-index access by pointer (&s->... is the same as &s[0]...);
> >>> + *   - field 'a' access (corresponds to '2' in low-level spec);
> >>> + *   - array element #3 access (corresponds to '3' in low-level spec).
> >>> + *
> >>> + */
> >>
> >> IIUC, high-level points are subset of low-level points. How about we introduce
> >> "anonymous" high-level points, so that high-level points and low-level points
> >> are 1:1 mapping?
> >
> > No, that will just hurt and complicate things. See above explanation
> > about why we need high-level points (it's what you as C programmer try
> > to achieve vs low-level spec is what C-language does in reality, with
> > all the anonymous struct/union traversal).
> >
> > What's wrong with this separation? Think about it as recording
> > "intent" (high-level spec) vs "mechanics" (low-level spec, how exactly
> > to achieve that intent, in excruciating details).
>
> There is nothing wrong with separation. I just personally think it is
> cleaner the other way. That's why I raised the question.
>
> I will go with your assessment, as you looked into this much more than
> I did. :-)

For me it's a machine view of the problem (raw spec) vs human view of
the problem (high-level spec, which resembles how you think about this
in C code). I'll keep it separate unless it proves to be problematic
going forward.

>
> [...]
>
> >>> +
> >>> +     memset(spec, 0, sizeof(*spec));
> >>> +     spec->btf = btf;
> >>> +
> >>> +     /* parse spec_str="0:1:2:3:4" into array raw_spec=[0, 1, 2, 3, 4] */
> >>> +     while (*spec_str) {
> >>> +             if (*spec_str == ':')
> >>> +                     ++spec_str;
> >>> +             if (sscanf(spec_str, "%d%n", &access_idx, &parsed_len) != 1)
> >>> +                     return -EINVAL;
> >>> +             if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> >>> +                     return -E2BIG;
> >>> +             spec_str += parsed_len;
> >>> +             spec->raw_spec[spec->raw_len++] = access_idx;
> >>> +     }
> >>> +
> >>> +     if (spec->raw_len == 0)
> >>> +             return -EINVAL;
> >>> +
> >>> +     for (i = 0; i < spec->raw_len; i++) {
> >>> +             t = skip_mods_and_typedefs(btf, id, &id);
> >>> +             if (!t)
> >>> +                     return -EINVAL;
> >>> +
> >>> +             access_idx = spec->raw_spec[i];
> >>> +
> >>> +             if (i == 0) {
> >>> +                     /* first spec value is always reloc type array index */
> >>> +                     spec->spec[spec->len].type_id = id;
> >>> +                     spec->spec[spec->len].idx = access_idx;
> >>> +                     spec->len++;
> >>> +
> >>> +                     sz = btf__resolve_size(btf, id);
> >>> +                     if (sz < 0)
> >>> +                             return sz;
> >>> +                     spec->offset += access_idx * sz;
> >>          spec->offset = access_idx * sz;  should be enough
> >
> > No. spec->offset is carefully maintained across multiple low-level
> > steps, as we traverse down embedded structs/unions.
> >
> > Think about, e.g.:
> >
> > struct s {
> >    int a;
> >    struct {
> >        int b;
> >    };
> > };
> >
> > Imagine you are trying to match s.b access. With what you propose
> > you'll end up with offset 0, but it should be 4.
>
> Hmm... this is just for i == 0, right? Which line updated spec->offset
> after "memset(spec, 0, sizeof(*spec));"?

Ah, I missed that you are referring to the special i == 0 case. I can
do assignment, yes, you are right. I'll probably also extract it out
of the loop to make it less confusing.

>
> >
> >>
> >>> +                     continue;
> >>> +             }
> >>
> >> Maybe pull i == 0 case out of the for loop?
> >>
> >>> +
> >>> +             if (btf_is_composite(t)) {
> >
> > [...]
> >
> >>> +
> >>> +     if (spec->len == 0)
> >>> +             return -EINVAL;
> >>
> >> Can this ever happen?
> >
> > Not really, because I already check raw_len == 0 and exit with error.
> > I'll remove.
> >
> >>
> >>> +
> >>> +     return 0;
> >>> +}
> >>> +
> >
> > [...]
> >
> >>> +
> >>> +/*
> >>> + * Given single high-level accessor (either named field or array index) in
> >>> + * local type, find corresponding high-level accessor for a target type. Along
> >>> + * the way, maintain low-level spec for target as well. Also keep updating
> >>> + * target offset.
> >>> + */
> >>
> >> Please describe the recursive algorithm here. I am kinda lost.
> >
> > Explained above. I'll extend description a bit. But it's just
> > recursive exhaustive search:
> > 1. if struct field is anonymous and is struct/union, go one level
> > deeper and try to find field with given name inside those.
> > 2. if field has name and it matched what we are searching - check type
> > compatibility. It has to be compatible, so if it's not, then it's not
> > a match.
> >
> >> Also, please document the meaning of zero, positive, negative return values.
> >
> > Ok. It's standard <0 - error, 0 - false, 1 - true.
> >
> >>
> >>> +static int bpf_core_match_member(const struct btf *local_btf,
> >>> +                              const struct bpf_core_accessor *local_acc,
> >>> +                              const struct btf *targ_btf,
> >>> +                              __u32 targ_id,
> >>> +                              struct bpf_core_spec *spec,
> >>> +                              __u32 *next_targ_id)
> >>> +{
> >
> > [...]
> >
> >>> +             if (local_acc->name) {
> >>> +                     if (!btf_is_composite(targ_type))
> >>> +                             return 0;
> >>> +
> >>> +                     matched = bpf_core_match_member(local_spec->btf,
> >>> +                                                     local_acc,
> >>> +                                                     targ_btf, targ_id,
> >>> +                                                     targ_spec, &targ_id);
> >>> +                     if (matched <= 0)
> >>> +                             return matched;
> >>> +             } else {
> >>> +                     /* for i=0, targ_id is already treated as array element
> >>> +                      * type (because it's the original struct), for others
> >>> +                      * we should find array element type first
> >>> +                      */
> >>> +                     if (i > 0) {
> >>
> >> i == 0 case would go into "if (local_acc->name)" branch, no?
> >
> > No, i == 0 is always an array access. s->a.b.c is the same as
> > s[0].a.b.c, so relocation's first spec element is always either zero
> > for pointer access or any non-negative index for array access. But it
> > is always array access.
>
> I see. Thanks for the explanation.
>
> Song


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27 18:24         ` Andrii Nakryiko
@ 2019-07-27 21:29           ` Yonghong Song
  2019-07-27 21:36             ` Andrii Nakryiko
  0 siblings, 1 reply; 42+ messages in thread
From: Yonghong Song @ 2019-07-27 21:29 UTC (permalink / raw)
  To: Andrii Nakryiko, Alexei Starovoitov
  Cc: Alexei Starovoitov, Andrii Nakryiko, bpf, Networking,
	Daniel Borkmann, Kernel Team



On 7/27/19 11:24 AM, Andrii Nakryiko wrote:
> On Sat, Jul 27, 2019 at 10:00 AM Alexei Starovoitov <ast@fb.com> wrote:
>>
>> On 7/26/19 11:25 PM, Andrii Nakryiko wrote:
>>>>> +     } else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
>>>>> +             if (insn->imm != orig_off)
>>>>> +                     return -EINVAL;
>>>>> +             insn->imm = new_off;
>>>>> +             pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
>>>>> +                      bpf_program__title(prog, false),
>>>>> +                      insn_idx, orig_off, new_off);
>>>> I'm pretty sure llvm was not capable of emitting BPF_ST insn.
>>>> When did that change?
>>> I just looked at possible instructions that could have 32-bit
>>> immediate value. This is `*(rX) = offsetof(struct s, field)`, which I
>>> though is conceivable. Do you think I should drop it?
>>
>> Just trying to point out that since it's not emitted by llvm
>> this code is likely untested ?
>> Or you've created a bpf asm test for this?
> 
> 
> Yeah, it's untested right now. Let me try to come up with LLVM
> assembly + relocation (not yet sure how/whether builtin works with
> inline assembly), if that works out, I'll leave this, if not, I'll
> drop BPF_ST|BPF_MEM part.

FYI. LLVM does not have an assembly format for BPF_ST instructions,
as it does not generate code for them, so inline asm through LLVM won't
work. The LLVM disassembler won't be able to decode BPF_ST either.

>>
>>


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27 21:29           ` Yonghong Song
@ 2019-07-27 21:36             ` Andrii Nakryiko
  0 siblings, 0 replies; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-27 21:36 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Alexei Starovoitov, Andrii Nakryiko, bpf,
	Networking, Daniel Borkmann, Kernel Team

On Sat, Jul 27, 2019 at 2:29 PM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 7/27/19 11:24 AM, Andrii Nakryiko wrote:
> > On Sat, Jul 27, 2019 at 10:00 AM Alexei Starovoitov <ast@fb.com> wrote:
> >>
> >> On 7/26/19 11:25 PM, Andrii Nakryiko wrote:
> >>>>> +     } else if (class == BPF_ST && BPF_MODE(insn->code) == BPF_MEM) {
> >>>>> +             if (insn->imm != orig_off)
> >>>>> +                     return -EINVAL;
> >>>>> +             insn->imm = new_off;
> >>>>> +             pr_debug("prog '%s': patched insn #%d (ST | MEM) imm %d -> %d\n",
> >>>>> +                      bpf_program__title(prog, false),
> >>>>> +                      insn_idx, orig_off, new_off);
> >>>> I'm pretty sure llvm was not capable of emitting BPF_ST insn.
> >>>> When did that change?
> >>> I just looked at possible instructions that could have 32-bit
> >>> immediate value. This is `*(rX) = offsetof(struct s, field)`, which I
> >>> though is conceivable. Do you think I should drop it?
> >>
> >> Just trying to point out that since it's not emitted by llvm
> >> this code is likely untested ?
> >> Or you've created a bpf asm test for this?
> >
> >
> > Yeah, it's untested right now. Let me try to come up with LLVM
> > assembly + relocation (not yet sure how/whether builtin works with
> > inline assembly), if that works out, I'll leave this, if not, I'll
> > drop BPF_ST|BPF_MEM part.
>
> FYI. LLVM does not have an assembly format for BPF_ST instructions,
> as it does not generate code for them, so inline asm through LLVM won't
> work. The LLVM disassembler won't be able to decode BPF_ST either.

Well then, I'll just drop it for now. Thanks!

>
> >>
> >>


* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-27 19:09         ` Andrii Nakryiko
@ 2019-07-28  0:24           ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-28  0:24 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team



> On Jul 27, 2019, at 12:09 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> 
> On Sat, Jul 27, 2019 at 11:59 AM Song Liu <songliubraving@fb.com> wrote:
>> 
>> 
>> 
>>> On Jul 26, 2019, at 11:11 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>>> 
>>> On Thu, Jul 25, 2019 at 12:32 PM Song Liu <songliubraving@fb.com> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
>>>>> 
>>>>> This patch implements the core logic for BPF CO-RE offsets relocations.
>>>>> All the details are described in code comments.
>>>> 
>>>> Some description in the change log is still useful. Please at least
>>>> copy-paste key comments here.
>>> 
>>> OK, will add some more.
>>> 
>>>> 
>>>> And, this is looooong. I think it is totally possible to split it into
>>>> multiple smaller patches.
>>> 
>>> I don't really know how to split it further without hurting review
>>> by artificially splitting related code into separate patches. Remove
>>> any single function and the algorithm will be incomplete.
>>> 
>>> Let me give you a high-level overview of how the pieces are put
>>> together. There are 9 non-trivial functions; let's go over their
>>> purpose in the order in which they are defined in the file:
>>> 
>>> 1. bpf_core_spec_parse()
>>> 
>>> This one takes bpf_offset_reloc's type_id and accessor string
>>> ("0:1:2:3") and parses them into a more convenient bpf_core_spec
>>> data structure, which has a calculated offset and high-level spec
>>> "steps": either a named field or an array access.
>>> 
>>> 2. bpf_core_find_cands()
>>> 
>>> Given a local type name, finds all possible target BTF types with the
>>> same name (modulo "flavor" differences; the ___flavor suffix is just ignored).
>>> 
>>> 3. bpf_core_fields_are_compat()
>>> 
>>> Given local and target field match, checks that their types are
>>> compatible (so that we don't accidentally match, e.g., int against
>>> struct).
>>> 
>>> 4. bpf_core_match_member()
>>> 
>>> Given a named local field, finds the corresponding field in the target
>>> struct. To understand why it's not trivial, here's an example:
>>> 
>>> Local type:
>>> 
>>> struct s___local {
>>> int a;
>>> };
>>> 
>>> Target type:
>>> 
>>> struct s___target {
>>> struct {
>>>   union {
>>>     int a;
>>>   };
>>> };
>>> };
>>> 
>>> In both cases you can access a as s.a, but in the local case field a is
>>> immediately inside s___local, while for s___target you have to
>>> traverse two levels of anonymous fields to get to the `a` inside the
>>> anonymous union.
>>>
>>> So this function finds that `a` by doing an exhaustive search across all
>>> named fields and anonymous structs/unions. Otherwise it's a pretty
>>> straightforward recursive function.
>>> 
>>> bpf_core_spec_match()
>>> 
>>> Just goes over the high-level spec steps in the local spec and tries to
>>> figure out both high-level and low-level steps for the target type.
>>> Consider the above example. For both structs, accessing s.a is one
>>> high-level step, but for s___local it's a single low-level step (just
>>> another :0 in the spec string), while for s___target it's three
>>> low-level steps: ":0:0:0", one step for each BTF type we need to traverse.
>>> 
>>> Array access is simpler, it's always one high-level and one low-level step.
>>> 
>>> bpf_core_reloc_insn()
>>> 
>>> Once we match local and target specs and have local and target
>>> offsets, do the relocation: check that the instruction has the expected
>>> local offset and replace it with the target offset.
>>> 
>>> bpf_core_find_kernel_btf()
>>> 
>>> This is the only function that could be moved into a separate patch,
>>> but it's also very simple. It just iterates over a few known possible
>>> locations for the vmlinux image and, once one is found, tries to parse
>>> .BTF out of it, to be used as the target BTF.
>>> 
>>> bpf_core_reloc_offset()
>>> 
>>> It combines all the above functions to perform a single relocation:
>>> parse the spec, get candidates, and for each candidate try to find a
>>> matching target spec. All candidates that matched are cached for the
>>> given local root type.
>> 
>> Thanks for these explanation. They are really helpful.
>> 
>> I think some example explaining each step of bpf_core_reloc_offset()
>> will be very helpful. Something like:
>> 
>> Example:
>> 
>> struct s {
>>        int a;
>>        struct {
>>                int b;
>>                bool c;
>>        };
>> };
>> 
>> To get offset for c, we do:
>> 
>> bpf_core_reloc_offset() {
>> 
>>        /* input data: xxx */
>> 
>>        /* first step: bpf_core_spec_parse() */
>> 
>>        /* data after first step */
>> 
>>        /* second step: bpf_core_find_cands() */
>> 
>>        /* candidate A and B after second step */
>> 
>>        ...
>> }
>> 
>> Well, it requires quite some work to document this way. Please let me
>> know if you feel this is an overkill.
> 
> Yeah :) And it's not just the work: I think it's bad if comments
> become too specific and document very low-level steps, because the code
> might evolve and comments can quickly get out of sync and just add to
> confusion. That's why I tried to document the high-level ideas, leaving
> it up to the source code to be the ultimate reference for the minute
> details.

Fair enough. 

> 
>> 
>>> 
>>> bpf_core_reloc_offsets()
>>> 
>>> High-level coordination. Iterate over all per-program .BTF.ext offset
>>> reloc sections and each relocation within them. Find the corresponding
>>> program and try to apply the relocations one by one.
>>> 
>>> 
>>> I think the only non-obvious part here is understanding that the
>>> relocation records the local raw spec with every single anonymous type
>>> traversal. That is not directly useful when we try to match it against
>>> a target type, which can have a very different composition but still
>>> the same field access pattern from the C language standpoint (which
>>> hides all those anonymous type traversals from the programmer).
>>> 
>>> But it should be pretty clear now, plus also check tests, they have
>>> lots of cases showing what's compatible and what's not.
>> 
>> I see. I will review the tests.
>> 
>>>>> 
>>>>> static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
>>>>> -                                                  __u32 id)
>>>>> +                                                  __u32 id,
>>>>> +                                                  __u32 *res_id)
>>>>> {
>>>>>     const struct btf_type *t = btf__type_by_id(btf, id);
>>>> 
>>>> Maybe have a local "__u32 rid;"
>>>> 
>>>>> 
>>>>> +     if (res_id)
>>>>> +             *res_id = id;
>>>>> +
>>>> 
>>>> and do "rid = id;" here
>>>> 
>>>>>     while (true) {
>>>>>             switch (BTF_INFO_KIND(t->info)) {
>>>>>             case BTF_KIND_VOLATILE:
>>>>>             case BTF_KIND_CONST:
>>>>>             case BTF_KIND_RESTRICT:
>>>>>             case BTF_KIND_TYPEDEF:
>>>>> +                     if (res_id)
>>>>> +                             *res_id = t->type;
>>>> and here
>>>> 
>>>>>                     t = btf__type_by_id(btf, t->type);
>>>>>                     break;
>>>>>             default:
>>>> and "*res_id = rid;" right before return?
>>> 
>>> Sure, but why?
>> 
>> I think it is cleaner that way. But feel free to ignore if you
>> think otherwise.
>> 
>>> 
>>>> 
>>>>> @@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
>>>>> static bool get_map_field_int(const char *map_name, const struct btf *btf,
>>>>>                           const struct btf_type *def,
>>>>>                           const struct btf_member *m, __u32 *res) {
>>> 
>>> [...]
>>> 
>>>>> +struct bpf_core_spec {
>>>>> +     const struct btf *btf;
>>>>> +     /* high-level spec: named fields and array indicies only */
>>>> 
>>>> typo: indices
>>> 
>>> thanks!
>>> 
>>>> 
>>>>> +     struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
>>>>> +     /* high-level spec length */
>>>>> +     int len;
>>>>> +     /* raw, low-level spec: 1-to-1 with accessor spec string */
>>>>> +     int raw_spec[BPF_CORE_SPEC_MAX_LEN];
>>>>> +     /* raw spec length */
>>>>> +     int raw_len;
>>>>> +     /* field byte offset represented by spec */
>>>>> +     __u32 offset;
>>>>> +};
>>> 
>>> [...]
>>> 
>>>>> + *
>>>>> + *   int x = &s->a[3]; // access string = '0:1:2:3'
>>>>> + *
>>>>> + * Low-level spec has 1:1 mapping with each element of access string (it's
>>>>> + * just a parsed access string representation): [0, 1, 2, 3].
>>>>> + *
>>>>> + * High-level spec will capture only 3 points:
>>>>> + *   - intial zero-index access by pointer (&s->... is the same as &s[0]...);
>>>>> + *   - field 'a' access (corresponds to '2' in low-level spec);
>>>>> + *   - array element #3 access (corresponds to '3' in low-level spec).
>>>>> + *
>>>>> + */
>>>> 
>>>> IIUC, high-level points are subset of low-level points. How about we introduce
>>>> "anonymous" high-level points, so that high-level points and low-level points
>>>> are 1:1 mapping?
>>> 
>>> No, that will just hurt and complicate things. See above explanation
>>> about why we need high-level points (it's what you as C programmer try
>>> to achieve vs low-level spec is what C-language does in reality, with
>>> all the anonymous struct/union traversal).
>>> 
>>> What's wrong with this separation? Think about it as recording
>>> "intent" (high-level spec) vs "mechanics" (low-level spec, how exactly
>>> to achieve that intent, in excruciating details).
>> 
>> There is nothing wrong with separation. I just personally think it is
>> cleaner the other way. That's why I raised the question.
>> 
>> I will go with your assessment, as you looked into this much more than
>> I did. :-)
> 
> For me it's a machine view of the problem (raw spec) vs human view of
> the problem (high-level spec, which resembles how you think about this
> in C code). I'll keep it separate unless it proves to be problematic
> going forward.

[...]

>>> 
>>> No. spec->offset is carefully maintained across multiple low-level
>>> steps, as we traverse down embedded structs/unions.
>>> 
>>> Think about, e.g.:
>>> 
>>> struct s {
>>>   int a;
>>>   struct {
>>>       int b;
>>>   };
>>> };
>>> 
>>> Imagine you are trying to match s.b access. With what you propose
>>> you'll end up with offset 0, but it should be 4.
>> 
>> Hmm... this is just for i == 0, right? Which line updated spec->offset
>> after "memset(spec, 0, sizeof(*spec));"?
> 
> Ah, I missed that you are referring to the special i == 0 case. I can
> do assignment, yes, you are right. I'll probably also extract it out
> of the loop to make it less confusing.

Yes, please. 

> 
>>>>> +                     continue;
>>>>> +             }
>>>> 
>>>> Maybe pull i == 0 case out of the for loop?

As I mentioned earlier. ;-)

>>>> 
>>>>> +
>>>>> +             if (btf_is_composite(t)) {
>>> 
>>> [...]
>>> 
>>>>> +
>>>>> +     if (spec->len == 0)
>>>>> +             return -EINVAL;
>>>> 
>>>> Can this ever happen?
>>> 
>>> Not really, because I already check raw_len == 0 and exit with error.
>>> I'll remove.
>>> 
>>>> 
>>>>> +
>>>>> +     return 0;
>>>>> +}
>>>>> +
>>> 

[...]



* Re: [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm
  2019-07-24 19:27 ` [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm Andrii Nakryiko
  2019-07-25 19:32   ` Song Liu
  2019-07-25 23:18   ` Alexei Starovoitov
@ 2019-07-29 19:56   ` Song Liu
  2 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 19:56 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team



> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
> 
> This patch implements the core logic for BPF CO-RE offsets relocations.
> All the details are described in code comments.
> 
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
> tools/lib/bpf/libbpf.c | 866 ++++++++++++++++++++++++++++++++++++++++-
> tools/lib/bpf/libbpf.h |   1 +
> 2 files changed, 861 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 8741c39adb1c..86d87bf10d46 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -38,6 +38,7 @@
> #include <sys/stat.h>
> #include <sys/types.h>
> #include <sys/vfs.h>
> +#include <sys/utsname.h>
> #include <tools/libc_compat.h>
> #include <libelf.h>
> #include <gelf.h>
> @@ -47,6 +48,7 @@
> #include "btf.h"
> #include "str_error.h"
> #include "libbpf_internal.h"
> +#include "hashmap.h"
> 
> #ifndef EM_BPF
> #define EM_BPF 247
> @@ -1013,16 +1015,22 @@ static int bpf_object__init_user_maps(struct bpf_object *obj, bool strict)
> }
> 
> static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> -						     __u32 id)
> +						     __u32 id,
> +						     __u32 *res_id)
> {
> 	const struct btf_type *t = btf__type_by_id(btf, id);
> 
> +	if (res_id)
> +		*res_id = id;
> +
> 	while (true) {
> 		switch (BTF_INFO_KIND(t->info)) {
> 		case BTF_KIND_VOLATILE:
> 		case BTF_KIND_CONST:
> 		case BTF_KIND_RESTRICT:
> 		case BTF_KIND_TYPEDEF:
> +			if (res_id)
> +				*res_id = t->type;
> 			t = btf__type_by_id(btf, t->type);
> 			break;
> 		default:
> @@ -1041,7 +1049,7 @@ static const struct btf_type *skip_mods_and_typedefs(const struct btf *btf,
> static bool get_map_field_int(const char *map_name, const struct btf *btf,
> 			      const struct btf_type *def,
> 			      const struct btf_member *m, __u32 *res) {
> -	const struct btf_type *t = skip_mods_and_typedefs(btf, m->type);
> +	const struct btf_type *t = skip_mods_and_typedefs(btf, m->type, NULL);
> 	const char *name = btf__name_by_offset(btf, m->name_off);
> 	const struct btf_array *arr_info;
> 	const struct btf_type *arr_t;
> @@ -1107,7 +1115,7 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
> 		return -EOPNOTSUPP;
> 	}
> 
> -	def = skip_mods_and_typedefs(obj->btf, var->type);
> +	def = skip_mods_and_typedefs(obj->btf, var->type, NULL);
> 	if (BTF_INFO_KIND(def->info) != BTF_KIND_STRUCT) {
> 		pr_warning("map '%s': unexpected def kind %u.\n",
> 			   map_name, BTF_INFO_KIND(var->info));
> @@ -2289,6 +2297,845 @@ bpf_program_reloc_btf_ext(struct bpf_program *prog, struct bpf_object *obj,
> 	return 0;
> }
> 
> +#define BPF_CORE_SPEC_MAX_LEN 64
> +
> +/* represents BPF CO-RE field or array element accessor */
> +struct bpf_core_accessor {
> +	__u32 type_id;		/* struct/union type or array element type */
> +	__u32 idx;		/* field index or array index */
> +	const char *name;	/* field name or NULL for array accessor */
> +};
> +
> +struct bpf_core_spec {
> +	const struct btf *btf;
> +	/* high-level spec: named fields and array indices only */
> +	struct bpf_core_accessor spec[BPF_CORE_SPEC_MAX_LEN];
> +	/* high-level spec length */
> +	int len;
> +	/* raw, low-level spec: 1-to-1 with accessor spec string */
> +	int raw_spec[BPF_CORE_SPEC_MAX_LEN];
> +	/* raw spec length */
> +	int raw_len;
> +	/* field byte offset represented by spec */
> +	__u32 offset;
> +};
> +
> +static bool str_is_empty(const char *s)
> +{
> +	return !s || !s[0];
> +}
> +
> +static int btf_kind(const struct btf_type *t)
> +{
> +	return BTF_INFO_KIND(t->info);
> +}
> +
> +static bool btf_is_composite(const struct btf_type *t)
> +{
> +	int kind = btf_kind(t);
> +
> +	return kind == BTF_KIND_STRUCT || kind == BTF_KIND_UNION;
> +}
> +
> +static bool btf_is_array(const struct btf_type *t)
> +{
> +	return btf_kind(t) == BTF_KIND_ARRAY;
> +}
> +
> +/* 
> + * Turn bpf_offset_reloc into a low- and high-level spec representation,
> + * validating correctness along the way, as well as calculating resulting
> + * field offset (in bytes), specified by accessor string. Low-level spec
> + * captures every single level of nestedness, including traversing anonymous
> + * struct/union members. High-level one only captures semantically meaningful
> + * "turning points": named fields and array indices.
> + * E.g., for this case:
> + *
> + *   struct sample {
> + *       int __unimportant;
> + *       struct {
> + *           int __1;
> + *           int __2;
> + *           int a[7];
> + *       };
> + *   };
> + *
> + *   struct sample *s = ...;
> + *
> + *   int x = &s->a[3]; // access string = '0:1:2:3'
> + *
> + * Low-level spec has 1:1 mapping with each element of access string (it's
> + * just a parsed access string representation): [0, 1, 2, 3].
> + *
> + * High-level spec will capture only 3 points:
> + *   - initial zero-index access by pointer (&s->... is the same as &s[0]...);
> + *   - field 'a' access (corresponds to '2' in low-level spec);
> + *   - array element #3 access (corresponds to '3' in low-level spec).
> + *
> + */
> +static int bpf_core_spec_parse(const struct btf *btf,
> +			       __u32 type_id,
> +			       const char *spec_str,
> +			       struct bpf_core_spec *spec)
> +{
> +	int access_idx, parsed_len, i;
> +	const struct btf_type *t;
> +	__u32 id = type_id;
> +	const char *name;
> +	__s64 sz;
> +
> +	if (str_is_empty(spec_str) || *spec_str == ':')
> +		return -EINVAL;
> +
> +	memset(spec, 0, sizeof(*spec));
> +	spec->btf = btf;
> +
> +	/* parse spec_str="0:1:2:3:4" into array raw_spec=[0, 1, 2, 3, 4] */
> +	while (*spec_str) {
> +		if (*spec_str == ':')
> +			++spec_str;
> +		if (sscanf(spec_str, "%d%n", &access_idx, &parsed_len) != 1)
> +			return -EINVAL;
> +		if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> +			return -E2BIG;
> +		spec_str += parsed_len;
> +		spec->raw_spec[spec->raw_len++] = access_idx;
> +	}
> +
> +	if (spec->raw_len == 0)
> +		return -EINVAL;
> +
> +	for (i = 0; i < spec->raw_len; i++) {
> +		t = skip_mods_and_typedefs(btf, id, &id);
> +		if (!t)
> +			return -EINVAL;
> +
> +		access_idx = spec->raw_spec[i];
> +
> +		if (i == 0) {
> +			/* first spec value is always reloc type array index */
> +			spec->spec[spec->len].type_id = id;
> +			spec->spec[spec->len].idx = access_idx;
> +			spec->len++;
> +
> +			sz = btf__resolve_size(btf, id);
> +			if (sz < 0)
> +				return sz;
> +			spec->offset += access_idx * sz;
> +			continue;
> +		}
> +
> +		if (btf_is_composite(t)) {
> +			const struct btf_member *m = (void *)(t + 1);
> +			__u32 offset;
> +
> +			if (access_idx >= BTF_INFO_VLEN(t->info))
> +				return -EINVAL;
> +
> +			m = &m[access_idx];
> +
> +			if (BTF_INFO_KFLAG(t->info)) {
> +				if (BTF_MEMBER_BITFIELD_SIZE(m->offset))
> +					return -EINVAL;
> +				offset = BTF_MEMBER_BIT_OFFSET(m->offset);
> +			} else {
> +				offset = m->offset;
> +			}
> +			if (offset % 8)
> +				return -EINVAL;
> +			spec->offset += offset / 8;
> +
> +			if (m->name_off) {
> +				name = btf__name_by_offset(btf, m->name_off);
> +				if (str_is_empty(name))
> +					return -EINVAL;
> +
> +				spec->spec[spec->len].type_id = id;
> +				spec->spec[spec->len].idx = access_idx;
> +				spec->spec[spec->len].name = name;
> +				spec->len++;
> +			}
> +
> +			id = m->type;
> +		} else if (btf_is_array(t)) {
> +			const struct btf_array *a = (void *)(t + 1);
> +
> +			t = skip_mods_and_typedefs(btf, a->type, &id);
> +			if (!t || access_idx >= a->nelems)
> +				return -EINVAL;
> +
> +			spec->spec[spec->len].type_id = id;
> +			spec->spec[spec->len].idx = access_idx;
> +			spec->len++;
> +
> +			sz = btf__resolve_size(btf, id);
> +			if (sz < 0)
> +				return sz;
> +			spec->offset += access_idx * sz;
> +		} else {
> +			pr_warning("relo for [%u] %s (at idx %d) captures type [%d] of unexpected kind %d\n",
> +				   type_id, spec_str, i, id, btf_kind(t));
> +			return -EINVAL;
> +		}
> +	}
> +
> +	if (spec->len == 0)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/* Given 'some_struct_name___with_flavor', return the length of the name
> + * prefix before the last triple underscore. The struct name part after the
> + * last triple underscore is ignored during BPF CO-RE relocation matching.
> + */
> +static size_t bpf_core_essential_name_len(const char *name)
> +{
> +	size_t n = strlen(name);
> +	int i = n - 3;
> +
> +	while (i > 0) {
> +		if (name[i] == '_' && name[i + 1] == '_' && name[i + 2] == '_')
> +			return i;
> +		i--;
> +	}
> +	return n;
> +}
> +
> +/* dynamically sized list of type IDs */
> +struct ids_vec {
> +	__u32 *data;
> +	int len;
> +};
> +
> +static void bpf_core_free_cands(struct ids_vec *cand_ids)
> +{
> +	free(cand_ids->data);
> +	free(cand_ids);
> +}
> +
> +static struct ids_vec *bpf_core_find_cands(const struct btf *local_btf,
> +					   __u32 local_type_id,
> +					   const struct btf *targ_btf)
> +{
> +	size_t local_essent_len, targ_essent_len;
> +	const char *local_name, *targ_name;
> +	const struct btf_type *t;
> +	struct ids_vec *cand_ids;
> +	__u32 *new_ids;
> +	int i, err, n;
> +
> +	t = btf__type_by_id(local_btf, local_type_id);
> +	if (!t)
> +		return ERR_PTR(-EINVAL);
> +
> +	local_name = btf__name_by_offset(local_btf, t->name_off);
> +	if (str_is_empty(local_name))
> +		return ERR_PTR(-EINVAL);
> +	local_essent_len = bpf_core_essential_name_len(local_name);
> +
> +	cand_ids = calloc(1, sizeof(*cand_ids));
> +	if (!cand_ids)
> +		return ERR_PTR(-ENOMEM);
> +
> +	n = btf__get_nr_types(targ_btf);
> +	for (i = 1; i <= n; i++) {
> +		t = btf__type_by_id(targ_btf, i);
> +		targ_name = btf__name_by_offset(targ_btf, t->name_off);
> +		if (str_is_empty(targ_name))
> +			continue;
> +
> +		targ_essent_len = bpf_core_essential_name_len(targ_name);
> +		if (targ_essent_len != local_essent_len)
> +			continue;
> +
> +		if (strncmp(local_name, targ_name, local_essent_len) == 0) {
> +			pr_debug("[%d] (%s): found candidate [%d] (%s)\n",
> +				 local_type_id, local_name, i, targ_name);
> +			new_ids = realloc(cand_ids->data,
> +					  sizeof(*new_ids) *
> +					  (cand_ids->len + 1));
> +			if (!new_ids) {
> +				err = -ENOMEM;
> +				goto err_out;
> +			}
> +			cand_ids->data = new_ids;
> +			cand_ids->data[cand_ids->len++] = i;
> +		}
> +	}
> +	return cand_ids;
> +err_out:
> +	bpf_core_free_cands(cand_ids);
> +	return ERR_PTR(err);
> +}
> +
> +/* Check two types for compatibility, skipping const/volatile/restrict and
> + * typedefs, to ensure we are relocating the offset between compatible entities:
> + *   - any two STRUCTs/UNIONs are compatible and can be mixed;
> + *   - any two FWDs are compatible;
> + *   - any two PTRs are always compatible;
> + *   - for ENUMs, check sizes, names are ignored;
> + *   - for INT, size and bitness should match, signedness is ignored;
> + *   - for ARRAY, dimensionality is ignored, element types are checked for
> + *     compatibility recursively;
> + *   - everything else shouldn't be ever a target of relocation.
> + * These rules are not set in stone and probably will be adjusted as we get
> + * more experience with using BPF CO-RE relocations.
> + */
> +static int bpf_core_fields_are_compat(const struct btf *local_btf,
> +				      __u32 local_id,
> +				      const struct btf *targ_btf,
> +				      __u32 targ_id)
> +{
> +	const struct btf_type *local_type, *targ_type;
> +	__u16 kind;
> +
> +recur:
> +	local_type = skip_mods_and_typedefs(local_btf, local_id, &local_id);
> +	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
> +	if (!local_type || !targ_type)
> +		return -EINVAL;
> +
> +	if (btf_is_composite(local_type) && btf_is_composite(targ_type))
> +		return 1;
> +	if (BTF_INFO_KIND(local_type->info) != BTF_INFO_KIND(targ_type->info))
> +		return 0;
> +
> +	kind = BTF_INFO_KIND(local_type->info);
> +	switch (kind) {
> +	case BTF_KIND_FWD:
> +	case BTF_KIND_PTR:
> +		return 1;
> +	case BTF_KIND_ENUM:
> +		return local_type->size == targ_type->size;
> +	case BTF_KIND_INT: {
> +		__u32 loc_int = *(__u32 *)(local_type + 1);
> +		__u32 targ_int = *(__u32 *)(targ_type + 1);
> +
> +		return BTF_INT_OFFSET(loc_int) == 0 &&
> +		       BTF_INT_OFFSET(targ_int) == 0 &&
> +		       local_type->size == targ_type->size &&
> +		       BTF_INT_BITS(loc_int) == BTF_INT_BITS(targ_int);
> +	}
> +	case BTF_KIND_ARRAY: {
> +		const struct btf_array *loc_a, *targ_a;
> +
> +		loc_a = (void *)(local_type + 1);
> +		targ_a = (void *)(targ_type + 1);
> +		local_id = loc_a->type;
> +		targ_id = targ_a->type;
> +		goto recur;
> +	}
> +	default:
> +		pr_warning("unexpected kind %d relocated, local [%d], target [%d]\n",
> +			   kind, local_id, targ_id);
> +		return 0;
> +	}
> +}
> +
> +/* 
> + * Given single high-level accessor (either named field or array index) in
> + * local type, find corresponding high-level accessor for a target type. Along
> + * the way, maintain low-level spec for target as well. Also keep updating
> + * target offset.
> + */
> +static int bpf_core_match_member(const struct btf *local_btf,
> +				 const struct bpf_core_accessor *local_acc,
> +				 const struct btf *targ_btf,
> +				 __u32 targ_id,
> +				 struct bpf_core_spec *spec,
> +				 __u32 *next_targ_id)
> +{
> +	const struct btf_type *local_type, *targ_type;
> +	const struct btf_member *local_member, *m;
> +	const char *local_name, *targ_name;
> +	__u32 local_id;
> +	int i, n, found;
> +
> +	targ_type = skip_mods_and_typedefs(targ_btf, targ_id, &targ_id);
> +	if (!targ_type)
> +		return -EINVAL;
> +	if (!btf_is_composite(targ_type))
> +		return 0;
> +
> +	local_id = local_acc->type_id;
> +	local_type = btf__type_by_id(local_btf, local_id);
> +	local_member = (void *)(local_type + 1);
> +	local_member += local_acc->idx;
> +	local_name = btf__name_by_offset(local_btf, local_member->name_off);
> +
> +	n = BTF_INFO_VLEN(targ_type->info);
> +	m = (void *)(targ_type + 1);
> +	for (i = 0; i < n; i++, m++) {
> +		__u32 offset;
> +
> +		/* bitfield relocations not supported */
> +		if (BTF_INFO_KFLAG(targ_type->info)) {
> +			if (BTF_MEMBER_BITFIELD_SIZE(m->offset))
> +				continue;
> +			offset = BTF_MEMBER_BIT_OFFSET(m->offset);
> +		} else {
> +			offset = m->offset;
> +		}
> +		if (offset % 8)
> +			continue;
> +
> +		/* too deep struct/union/array nesting */
> +		if (spec->raw_len == BPF_CORE_SPEC_MAX_LEN)
> +			return -E2BIG;
> +
> +		/* speculate this member will be the good one */
> +		spec->offset += offset / 8;
> +		spec->raw_spec[spec->raw_len++] = i;
> +
> +		targ_name = btf__name_by_offset(targ_btf, m->name_off);
> +		if (str_is_empty(targ_name)) {
> +			/* embedded struct/union, we need to go deeper */
> +			found = bpf_core_match_member(local_btf, local_acc,
> +						      targ_btf, m->type,
> +						      spec, next_targ_id);
> +			if (found) /* either found or error */
> +				return found;
> +		} else if (strcmp(local_name, targ_name) == 0) {
> +			/* matching named field */
> +			struct bpf_core_accessor *targ_acc;
> +
> +			targ_acc = &spec->spec[spec->len++];
> +			targ_acc->type_id = targ_id;
> +			targ_acc->idx = i;
> +			targ_acc->name = targ_name;
> +
> +			*next_targ_id = m->type;
> +			found = bpf_core_fields_are_compat(local_btf,
> +							   local_member->type,
> +							   targ_btf, m->type);
> +			if (!found)
> +				spec->len--; /* pop accessor */
> +			return found;
> +		}
> +		/* member turned out to be not we looked for */

	/* member turned out not to be what we looked for */ 

Or something similar. 

The rest of this patch looks good to me. 

Thanks,
Song


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading
  2019-07-27  5:11         ` Andrii Nakryiko
@ 2019-07-29 20:00           ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 20:00 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team



> On Jul 26, 2019, at 10:11 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> 
> On Wed, Jul 24, 2019 at 10:20 PM Song Liu <songliubraving@fb.com> wrote:
>> 
>> 
>> 
>>> On Jul 24, 2019, at 5:37 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
>>> 
>>> On Wed, Jul 24, 2019 at 5:00 PM Song Liu <songliubraving@fb.com> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On Jul 24, 2019, at 12:27 PM, Andrii Nakryiko <andriin@fb.com> wrote:
>>>>> 
>>>>> Add support for BPF CO-RE offset relocations. Add section/record
>>>>> iteration macros for .BTF.ext. These macros are useful for iterating over
>>>>> each .BTF.ext record, either for dumping out contents or later for BPF
>>>>> CO-RE relocation handling.
>>>>> 
>>>>> To enable other parts of libbpf to work with .BTF.ext contents, move
>>>>> a bunch of type definitions into libbpf_internal.h.
>>>>> 
>>>>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>>>>> ---
>>>>> tools/lib/bpf/btf.c             | 64 +++++++++--------------
>>>>> tools/lib/bpf/btf.h             |  4 ++
>>>>> tools/lib/bpf/libbpf_internal.h | 91 +++++++++++++++++++++++++++++++++
>>>>> 3 files changed, 118 insertions(+), 41 deletions(-)
>>>>> 
>>> 
>>> [...]
>>> 
>>>>> +
>>>>> static int btf_ext_parse_hdr(__u8 *data, __u32 data_size)
>>>>> {
>>>>>     const struct btf_ext_header *hdr = (struct btf_ext_header *)data;
>>>>> @@ -1004,6 +979,13 @@ struct btf_ext *btf_ext__new(__u8 *data, __u32 size)
>>>>>     if (err)
>>>>>             goto done;
>>>>> 
>>>>> +     /* check if there is offset_reloc_off/offset_reloc_len fields */
>>>>> +     if (btf_ext->hdr->hdr_len < sizeof(struct btf_ext_header))
>>>> 
>>>> This check will break when we add more optional sections to btf_ext_header.
>>>> Maybe use offsetof() instead?
>>> 
>>> I didn't do it, because there are no fields after offset_reloc_len.
>>> But now I thought that maybe it would be OK to add a zero-sized marker
>>> field, kind of like marking off various versions of btf_ext header?
>>> 
>>> Alternatively, I can add offsetofend() macro somewhere in libbpf_internal.h.
>>> 
>>> Do you have any preference?
>> 
>> We only need a stable number to compare against. offsetofend() works.
>> Or we can simply have something like
>> 
>>    if (btf_ext->hdr->hdr_len <= offsetof(struct btf_ext_header, offset_reloc_off))
>>          goto done;
>> or
>>    if (btf_ext->hdr->hdr_len < offsetof(struct btf_ext_header, offset_reloc_len))
>>          goto done;
>> 
>> Does this make sense?
> 
> I think offsetofend() is the cleanest solution, I'll do just that.

Agreed that offsetofend() is the best. 

Song



* Re: [PATCH bpf-next 00/10] CO-RE offset relocations
  2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
                   ` (9 preceding siblings ...)
  2019-07-24 19:27 ` [PATCH bpf-next 10/10] selftests/bpf: add CO-RE relocs ints tests Andrii Nakryiko
@ 2019-07-29 20:20 ` Song Liu
  2019-07-29 20:36   ` Song Liu
  10 siblings, 1 reply; 42+ messages in thread
From: Song Liu @ 2019-07-29 20:20 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> This patch set implements the central part of CO-RE (Compile Once - Run
> Everywhere, see [0] and [1] for slides and video): relocating field offsets.
> Most of the details are written down as comments to the corresponding parts
> of the code.
>
> Patch #1 adds loading of the .BTF.ext offset relocations section and macros
> to work with its contents.
> Patch #2 implements the CO-RE relocation algorithm in libbpf.
> Patches #3-#10 add selftests validating various parts of relocation
> handling, type compatibility, etc.
>
> For all tests to work, you'll need the latest Clang/LLVM supporting the
> __builtin_preserve_access_index intrinsic, used for recording offset
> relocations. The kernel on which the selftests run should have BTF
> information built in (CONFIG_DEBUG_INFO_BTF=y).
>
>   [0] http://vger.kernel.org/bpfconf2019.html#session-2
>   [1] http://vger.kernel.org/lpc-bpf2018.html#session-2
>
> Andrii Nakryiko (10):
>   libbpf: add .BTF.ext offset relocation section loading
>   libbpf: implement BPF CO-RE offset relocation algorithm
>   selftests/bpf: add CO-RE relocs testing setup
>   selftests/bpf: add CO-RE relocs struct flavors tests
>   selftests/bpf: add CO-RE relocs nesting tests
>   selftests/bpf: add CO-RE relocs array tests
>   selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
>   selftests/bpf: add CO-RE relocs modifiers/typedef tests
>   selftest/bpf: add CO-RE relocs ptr-as-array tests
>   selftests/bpf: add CO-RE relocs ints tests
>
>  tools/lib/bpf/btf.c                           |  64 +-
>  tools/lib/bpf/btf.h                           |   4 +
>  tools/lib/bpf/libbpf.c                        | 866 +++++++++++++++++-
>  tools/lib/bpf/libbpf.h                        |   1 +
>  tools/lib/bpf/libbpf_internal.h               |  91 ++
>  .../selftests/bpf/prog_tests/core_reloc.c     | 363 ++++++++
>  .../bpf/progs/btf__core_reloc_arrays.c        |   3 +
>  .../btf__core_reloc_arrays___diff_arr_dim.c   |   3 +
>  ...btf__core_reloc_arrays___diff_arr_val_sz.c |   3 +
>  .../btf__core_reloc_arrays___err_non_array.c  |   3 +
>  ...btf__core_reloc_arrays___err_too_shallow.c |   3 +
>  .../btf__core_reloc_arrays___err_too_small.c  |   3 +
>  ..._core_reloc_arrays___err_wrong_val_type1.c |   3 +
>  ..._core_reloc_arrays___err_wrong_val_type2.c |   3 +
>  .../bpf/progs/btf__core_reloc_flavors.c       |   3 +
>  .../btf__core_reloc_flavors__err_wrong_name.c |   3 +
>  .../bpf/progs/btf__core_reloc_ints.c          |   3 +
>  .../bpf/progs/btf__core_reloc_ints___bool.c   |   3 +
>  .../btf__core_reloc_ints___err_bitfield.c     |   3 +
>  .../btf__core_reloc_ints___err_wrong_sz_16.c  |   3 +
>  .../btf__core_reloc_ints___err_wrong_sz_32.c  |   3 +
>  .../btf__core_reloc_ints___err_wrong_sz_64.c  |   3 +
>  .../btf__core_reloc_ints___err_wrong_sz_8.c   |   3 +
>  .../btf__core_reloc_ints___reverse_sign.c     |   3 +
>  .../bpf/progs/btf__core_reloc_mods.c          |   3 +
>  .../progs/btf__core_reloc_mods___mod_swap.c   |   3 +
>  .../progs/btf__core_reloc_mods___typedefs.c   |   3 +
>  .../bpf/progs/btf__core_reloc_nesting.c       |   3 +
>  .../btf__core_reloc_nesting___anon_embed.c    |   3 +
>  ...f__core_reloc_nesting___dup_compat_types.c |   5 +
>  ...core_reloc_nesting___err_array_container.c |   3 +
>  ...tf__core_reloc_nesting___err_array_field.c |   3 +
>  ...e_reloc_nesting___err_dup_incompat_types.c |   4 +
>  ...re_reloc_nesting___err_missing_container.c |   3 +
>  ...__core_reloc_nesting___err_missing_field.c |   3 +
>  ..._reloc_nesting___err_nonstruct_container.c |   3 +
>  ...e_reloc_nesting___err_partial_match_dups.c |   4 +
>  .../btf__core_reloc_nesting___err_too_deep.c  |   3 +
>  .../btf__core_reloc_nesting___extra_nesting.c |   3 +
>  ..._core_reloc_nesting___struct_union_mixup.c |   3 +
>  .../bpf/progs/btf__core_reloc_primitives.c    |   3 +
>  ...f__core_reloc_primitives___diff_enum_def.c |   3 +
>  ..._core_reloc_primitives___diff_func_proto.c |   3 +
>  ...f__core_reloc_primitives___diff_ptr_type.c |   3 +
>  ...tf__core_reloc_primitives___err_non_enum.c |   3 +
>  ...btf__core_reloc_primitives___err_non_int.c |   3 +
>  ...btf__core_reloc_primitives___err_non_ptr.c |   3 +
>  .../bpf/progs/btf__core_reloc_ptr_as_arr.c    |   3 +
>  .../btf__core_reloc_ptr_as_arr___diff_sz.c    |   3 +
>  .../selftests/bpf/progs/core_reloc_types.h    | 642 +++++++++++++
>  .../bpf/progs/test_core_reloc_arrays.c        |  58 ++
>  .../bpf/progs/test_core_reloc_flavors.c       |  65 ++
>  .../bpf/progs/test_core_reloc_ints.c          |  48 +
>  .../bpf/progs/test_core_reloc_kernel.c        |  39 +
>  .../bpf/progs/test_core_reloc_mods.c          |  68 ++
>  .../bpf/progs/test_core_reloc_nesting.c       |  48 +
>  .../bpf/progs/test_core_reloc_primitives.c    |  50 +
>  .../bpf/progs/test_core_reloc_ptr_as_arr.c    |  34 +
>  58 files changed, 2527 insertions(+), 47 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/core_reloc.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
>  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
>  create mode 100644 tools/testing/selftests/bpf/progs/core_reloc_types.h
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c

We have created a lot of small files. Would it be cleaner if we could
somehow put this data in one file (maybe in different sections)?

Alternatively, maybe create a folder for these files:
  tools/testing/selftests/bpf/progs/core/

Thanks,
Song


* Re: [PATCH bpf-next 03/10] selftests/bpf: add CO-RE relocs testing setup
  2019-07-24 19:27 ` [PATCH bpf-next 03/10] selftests/bpf: add CO-RE relocs testing setup Andrii Nakryiko
@ 2019-07-29 20:22   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 20:22 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add CO-RE relocation test runner. Add one simple test validating that
> libbpf's logic for searching for kernel image and loading BTF out of it
> works.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>


* Re: [PATCH bpf-next 00/10] CO-RE offset relocations
  2019-07-29 20:20 ` [PATCH bpf-next 00/10] CO-RE offset relocations Song Liu
@ 2019-07-29 20:36   ` Song Liu
  2019-07-29 23:09     ` Andrii Nakryiko
  0 siblings, 1 reply; 42+ messages in thread
From: Song Liu @ 2019-07-29 20:36 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Mon, Jul 29, 2019 at 1:20 PM Song Liu <liu.song.a23@gmail.com> wrote:
>
> On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
> >
> > This patch set implements the central part of CO-RE (Compile Once - Run
> > Everywhere, see [0] and [1] for slides and video): relocating field offsets.
> > Most of the details are written down as comments to the corresponding parts
> > of the code.
> >
> > Patch #1 adds loading of the .BTF.ext offset relocations section and macros
> > to work with its contents.
> > Patch #2 implements the CO-RE relocation algorithm in libbpf.
> > Patches #3-#10 add selftests validating various parts of relocation
> > handling, type compatibility, etc.
> >
> > For all tests to work, you'll need the latest Clang/LLVM supporting the
> > __builtin_preserve_access_index intrinsic, used for recording offset
> > relocations. The kernel on which the selftests run should have BTF
> > information built in (CONFIG_DEBUG_INFO_BTF=y).
> >
> >   [0] http://vger.kernel.org/bpfconf2019.html#session-2
> >   [1] http://vger.kernel.org/lpc-bpf2018.html#session-2
> >
> > Andrii Nakryiko (10):
> >   libbpf: add .BTF.ext offset relocation section loading
> >   libbpf: implement BPF CO-RE offset relocation algorithm
> >   selftests/bpf: add CO-RE relocs testing setup
> >   selftests/bpf: add CO-RE relocs struct flavors tests
> >   selftests/bpf: add CO-RE relocs nesting tests
> >   selftests/bpf: add CO-RE relocs array tests
> >   selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
> >   selftests/bpf: add CO-RE relocs modifiers/typedef tests
> >   selftest/bpf: add CO-RE relocs ptr-as-array tests
> >   selftests/bpf: add CO-RE relocs ints tests
> >
> >  tools/lib/bpf/btf.c                           |  64 +-
> >  tools/lib/bpf/btf.h                           |   4 +
> >  tools/lib/bpf/libbpf.c                        | 866 +++++++++++++++++-
> >  tools/lib/bpf/libbpf.h                        |   1 +
> >  tools/lib/bpf/libbpf_internal.h               |  91 ++
> >  .../selftests/bpf/prog_tests/core_reloc.c     | 363 ++++++++
> >  .../bpf/progs/btf__core_reloc_arrays.c        |   3 +
> >  .../btf__core_reloc_arrays___diff_arr_dim.c   |   3 +
> >  ...btf__core_reloc_arrays___diff_arr_val_sz.c |   3 +
> >  .../btf__core_reloc_arrays___err_non_array.c  |   3 +
> >  ...btf__core_reloc_arrays___err_too_shallow.c |   3 +
> >  .../btf__core_reloc_arrays___err_too_small.c  |   3 +
> >  ..._core_reloc_arrays___err_wrong_val_type1.c |   3 +
> >  ..._core_reloc_arrays___err_wrong_val_type2.c |   3 +
> >  .../bpf/progs/btf__core_reloc_flavors.c       |   3 +
> >  .../btf__core_reloc_flavors__err_wrong_name.c |   3 +
> >  .../bpf/progs/btf__core_reloc_ints.c          |   3 +
> >  .../bpf/progs/btf__core_reloc_ints___bool.c   |   3 +
> >  .../btf__core_reloc_ints___err_bitfield.c     |   3 +
> >  .../btf__core_reloc_ints___err_wrong_sz_16.c  |   3 +
> >  .../btf__core_reloc_ints___err_wrong_sz_32.c  |   3 +
> >  .../btf__core_reloc_ints___err_wrong_sz_64.c  |   3 +
> >  .../btf__core_reloc_ints___err_wrong_sz_8.c   |   3 +
> >  .../btf__core_reloc_ints___reverse_sign.c     |   3 +
> >  .../bpf/progs/btf__core_reloc_mods.c          |   3 +
> >  .../progs/btf__core_reloc_mods___mod_swap.c   |   3 +
> >  .../progs/btf__core_reloc_mods___typedefs.c   |   3 +
> >  .../bpf/progs/btf__core_reloc_nesting.c       |   3 +
> >  .../btf__core_reloc_nesting___anon_embed.c    |   3 +
> >  ...f__core_reloc_nesting___dup_compat_types.c |   5 +
> >  ...core_reloc_nesting___err_array_container.c |   3 +
> >  ...tf__core_reloc_nesting___err_array_field.c |   3 +
> >  ...e_reloc_nesting___err_dup_incompat_types.c |   4 +
> >  ...re_reloc_nesting___err_missing_container.c |   3 +
> >  ...__core_reloc_nesting___err_missing_field.c |   3 +
> >  ..._reloc_nesting___err_nonstruct_container.c |   3 +
> >  ...e_reloc_nesting___err_partial_match_dups.c |   4 +
> >  .../btf__core_reloc_nesting___err_too_deep.c  |   3 +
> >  .../btf__core_reloc_nesting___extra_nesting.c |   3 +
> >  ..._core_reloc_nesting___struct_union_mixup.c |   3 +
> >  .../bpf/progs/btf__core_reloc_primitives.c    |   3 +
> >  ...f__core_reloc_primitives___diff_enum_def.c |   3 +
> >  ..._core_reloc_primitives___diff_func_proto.c |   3 +
> >  ...f__core_reloc_primitives___diff_ptr_type.c |   3 +
> >  ...tf__core_reloc_primitives___err_non_enum.c |   3 +
> >  ...btf__core_reloc_primitives___err_non_int.c |   3 +
> >  ...btf__core_reloc_primitives___err_non_ptr.c |   3 +
> >  .../bpf/progs/btf__core_reloc_ptr_as_arr.c    |   3 +
> >  .../btf__core_reloc_ptr_as_arr___diff_sz.c    |   3 +
> >  .../selftests/bpf/progs/core_reloc_types.h    | 642 +++++++++++++
> >  .../bpf/progs/test_core_reloc_arrays.c        |  58 ++
> >  .../bpf/progs/test_core_reloc_flavors.c       |  65 ++
> >  .../bpf/progs/test_core_reloc_ints.c          |  48 +
> >  .../bpf/progs/test_core_reloc_kernel.c        |  39 +
> >  .../bpf/progs/test_core_reloc_mods.c          |  68 ++
> >  .../bpf/progs/test_core_reloc_nesting.c       |  48 +
> >  .../bpf/progs/test_core_reloc_primitives.c    |  50 +
> >  .../bpf/progs/test_core_reloc_ptr_as_arr.c    |  34 +
> >  58 files changed, 2527 insertions(+), 47 deletions(-)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/core_reloc.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/core_reloc_types.h
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c
>
> We have created a lot of small files. Would it be cleaner if we could
> somehow put this data in one file (maybe in different sections)?

After reading more, I guess you have tried this and ended up with the
current design: keeping most struct defines in core_reloc_types.h.

>
> Alternatively, maybe create a folder for these files:
>   tools/testing/selftests/bpf/progs/core/

I guess this would still make it cleaner.

Thanks,
Song

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 04/10] selftests/bpf: add CO-RE relocs struct flavors tests
  2019-07-24 19:27 ` [PATCH bpf-next 04/10] selftests/bpf: add CO-RE relocs struct flavors tests Andrii Nakryiko
@ 2019-07-29 20:37   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 20:37 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add tests verifying that a BPF program can use various struct/union
> "flavors" to extract data from the same target struct/union.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 05/10] selftests/bpf: add CO-RE relocs nesting tests
  2019-07-24 19:27 ` [PATCH bpf-next 05/10] selftests/bpf: add CO-RE relocs nesting tests Andrii Nakryiko
@ 2019-07-29 21:06   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 21:06 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 1:33 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add a bunch of tests validating correct handling of nested
> structs/unions.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 07/10] selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
  2019-07-24 19:27 ` [PATCH bpf-next 07/10] selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests Andrii Nakryiko
@ 2019-07-29 21:09   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 21:09 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 12:31 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Test CO-RE relocation handling of ints, enums, pointers, func protos, etc.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 08/10] selftests/bpf: add CO-RE relocs modifiers/typedef tests
  2019-07-24 19:27 ` [PATCH bpf-next 08/10] selftests/bpf: add CO-RE relocs modifiers/typedef tests Andrii Nakryiko
@ 2019-07-29 21:11   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 21:11 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 12:30 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add tests validating correct handling of various combinations of
> typedefs and const/volatile/restrict modifiers.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>


Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 09/10] selftest/bpf: add CO-RE relocs ptr-as-array tests
  2019-07-24 19:27 ` [PATCH bpf-next 09/10] selftest/bpf: add CO-RE relocs ptr-as-array tests Andrii Nakryiko
@ 2019-07-29 21:14   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 21:14 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 12:31 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add a test validating correct relocation handling for cases where a pointer
> to something is used as an array. E.g.:
>
>   int *ptr = ...;
>   int x = ptr[42];
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 10/10] selftests/bpf: add CO-RE relocs ints tests
  2019-07-24 19:27 ` [PATCH bpf-next 10/10] selftests/bpf: add CO-RE relocs ints tests Andrii Nakryiko
@ 2019-07-29 21:21   ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-29 21:21 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Networking, Alexei Starovoitov, Daniel Borkmann,
	Yonghong Song, Andrii Nakryiko, Kernel Team

On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
>
> Add various tests validating the handling of compatible/incompatible integer
> types.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>

Acked-by: Song Liu <songliubraving@fb.com>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 00/10] CO-RE offset relocations
  2019-07-29 20:36   ` Song Liu
@ 2019-07-29 23:09     ` Andrii Nakryiko
  2019-07-30  5:27       ` Song Liu
  0 siblings, 1 reply; 42+ messages in thread
From: Andrii Nakryiko @ 2019-07-29 23:09 UTC (permalink / raw)
  To: Song Liu
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team

On Mon, Jul 29, 2019 at 1:37 PM Song Liu <liu.song.a23@gmail.com> wrote:
>
> On Mon, Jul 29, 2019 at 1:20 PM Song Liu <liu.song.a23@gmail.com> wrote:
> >
> > On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
> > >
> > > This patch set implements central part of CO-RE (Compile Once - Run
> > > Everywhere, see [0] and [1] for slides and video): relocating field offsets.
> > > Most of the details are written down as comments to corresponding parts of the
> > > code.
> > >
> > > Patch #1 adds loading of .BTF.ext offset relocations section and macros to
> > > work with its contents.
> > > Patch #2 implements CO-RE relocations algorithm in libbpf.
> > > Patches #3-#10 add selftests validating various parts of relocation handling,
> > > type compatibility, etc.
> > >
> > > For all tests to work, you'll need latest Clang/LLVM supporting
> > > __builtin_preserve_access_index intrinsic, used for recording offset
> > > relocations. Kernel on which selftests run should have BTF information built
> > > in (CONFIG_DEBUG_INFO_BTF=y).
> > >
> > >   [0] http://vger.kernel.org/bpfconf2019.html#session-2
> > >   [1] http://vger.kernel.org/lpc-bpf2018.html#session-2
> > >
> > > Andrii Nakryiko (10):
> > >   libbpf: add .BTF.ext offset relocation section loading
> > >   libbpf: implement BPF CO-RE offset relocation algorithm
> > >   selftests/bpf: add CO-RE relocs testing setup
> > >   selftests/bpf: add CO-RE relocs struct flavors tests
> > >   selftests/bpf: add CO-RE relocs nesting tests
> > >   selftests/bpf: add CO-RE relocs array tests
> > >   selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
> > >   selftests/bpf: add CO-RE relocs modifiers/typedef tests
> > >   selftest/bpf: add CO-RE relocs ptr-as-array tests
> > >   selftests/bpf: add CO-RE relocs ints tests
> > >
> > >  tools/lib/bpf/btf.c                           |  64 +-
> > >  tools/lib/bpf/btf.h                           |   4 +
> > >  tools/lib/bpf/libbpf.c                        | 866 +++++++++++++++++-
> > >  tools/lib/bpf/libbpf.h                        |   1 +
> > >  tools/lib/bpf/libbpf_internal.h               |  91 ++
> > >  .../selftests/bpf/prog_tests/core_reloc.c     | 363 ++++++++
> > >  .../bpf/progs/btf__core_reloc_arrays.c        |   3 +
> > >  .../btf__core_reloc_arrays___diff_arr_dim.c   |   3 +
> > >  ...btf__core_reloc_arrays___diff_arr_val_sz.c |   3 +
> > >  .../btf__core_reloc_arrays___err_non_array.c  |   3 +
> > >  ...btf__core_reloc_arrays___err_too_shallow.c |   3 +
> > >  .../btf__core_reloc_arrays___err_too_small.c  |   3 +
> > >  ..._core_reloc_arrays___err_wrong_val_type1.c |   3 +
> > >  ..._core_reloc_arrays___err_wrong_val_type2.c |   3 +
> > >  .../bpf/progs/btf__core_reloc_flavors.c       |   3 +
> > >  .../btf__core_reloc_flavors__err_wrong_name.c |   3 +
> > >  .../bpf/progs/btf__core_reloc_ints.c          |   3 +
> > >  .../bpf/progs/btf__core_reloc_ints___bool.c   |   3 +
> > >  .../btf__core_reloc_ints___err_bitfield.c     |   3 +
> > >  .../btf__core_reloc_ints___err_wrong_sz_16.c  |   3 +
> > >  .../btf__core_reloc_ints___err_wrong_sz_32.c  |   3 +
> > >  .../btf__core_reloc_ints___err_wrong_sz_64.c  |   3 +
> > >  .../btf__core_reloc_ints___err_wrong_sz_8.c   |   3 +
> > >  .../btf__core_reloc_ints___reverse_sign.c     |   3 +
> > >  .../bpf/progs/btf__core_reloc_mods.c          |   3 +
> > >  .../progs/btf__core_reloc_mods___mod_swap.c   |   3 +
> > >  .../progs/btf__core_reloc_mods___typedefs.c   |   3 +
> > >  .../bpf/progs/btf__core_reloc_nesting.c       |   3 +
> > >  .../btf__core_reloc_nesting___anon_embed.c    |   3 +
> > >  ...f__core_reloc_nesting___dup_compat_types.c |   5 +
> > >  ...core_reloc_nesting___err_array_container.c |   3 +
> > >  ...tf__core_reloc_nesting___err_array_field.c |   3 +
> > >  ...e_reloc_nesting___err_dup_incompat_types.c |   4 +
> > >  ...re_reloc_nesting___err_missing_container.c |   3 +
> > >  ...__core_reloc_nesting___err_missing_field.c |   3 +
> > >  ..._reloc_nesting___err_nonstruct_container.c |   3 +
> > >  ...e_reloc_nesting___err_partial_match_dups.c |   4 +
> > >  .../btf__core_reloc_nesting___err_too_deep.c  |   3 +
> > >  .../btf__core_reloc_nesting___extra_nesting.c |   3 +
> > >  ..._core_reloc_nesting___struct_union_mixup.c |   3 +
> > >  .../bpf/progs/btf__core_reloc_primitives.c    |   3 +
> > >  ...f__core_reloc_primitives___diff_enum_def.c |   3 +
> > >  ..._core_reloc_primitives___diff_func_proto.c |   3 +
> > >  ...f__core_reloc_primitives___diff_ptr_type.c |   3 +
> > >  ...tf__core_reloc_primitives___err_non_enum.c |   3 +
> > >  ...btf__core_reloc_primitives___err_non_int.c |   3 +
> > >  ...btf__core_reloc_primitives___err_non_ptr.c |   3 +
> > >  .../bpf/progs/btf__core_reloc_ptr_as_arr.c    |   3 +
> > >  .../btf__core_reloc_ptr_as_arr___diff_sz.c    |   3 +
> > >  .../selftests/bpf/progs/core_reloc_types.h    | 642 +++++++++++++
> > >  .../bpf/progs/test_core_reloc_arrays.c        |  58 ++
> > >  .../bpf/progs/test_core_reloc_flavors.c       |  65 ++
> > >  .../bpf/progs/test_core_reloc_ints.c          |  48 +
> > >  .../bpf/progs/test_core_reloc_kernel.c        |  39 +
> > >  .../bpf/progs/test_core_reloc_mods.c          |  68 ++
> > >  .../bpf/progs/test_core_reloc_nesting.c       |  48 +
> > >  .../bpf/progs/test_core_reloc_primitives.c    |  50 +
> > >  .../bpf/progs/test_core_reloc_ptr_as_arr.c    |  34 +
> > >  58 files changed, 2527 insertions(+), 47 deletions(-)
> > >  create mode 100644 tools/testing/selftests/bpf/prog_tests/core_reloc.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/core_reloc_types.h
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c
> >
> > We have created a lot of small files. Would it be cleaner if we could
> > somehow put this data in one file (maybe in different sections)?
>
> After reading more, I guess you have tried this and ended up with the
> current design: keeping most struct defines in core_reloc_types.h.

Yeah, I have all the definitions in one header file, but then I need
individual combinations as separate BTFs, so I essentially "pick" the
desired types using function declarations. Those BTFs would be a
nightmare to create and maintain by hand.

>
> >
> > Alternatively, maybe create a folder for these files:
> >   tools/testing/selftests/bpf/progs/core/
>
> I guess this would still make it cleaner.

There is nothing special enough about the CO-RE tests to warrant
splitting them out. It would also require Makefile changes and would make
the test_progs setup deviate from the analogous test_maps, test_verifier,
test_btf, etc., so I'm not sure about that. I thought about putting those
btf__* files under a separate directory, but I'm on the fence there as
well, as I'd rather have related files stay together...


>
> Thanks,
> Song

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH bpf-next 00/10] CO-RE offset relocations
  2019-07-29 23:09     ` Andrii Nakryiko
@ 2019-07-30  5:27       ` Song Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Song Liu @ 2019-07-30  5:27 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Song Liu, Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Yonghong Song, Kernel Team



> On Jul 29, 2019, at 4:09 PM, Andrii Nakryiko <andrii.nakryiko@gmail.com> wrote:
> 
> On Mon, Jul 29, 2019 at 1:37 PM Song Liu <liu.song.a23@gmail.com> wrote:
>> 
>> On Mon, Jul 29, 2019 at 1:20 PM Song Liu <liu.song.a23@gmail.com> wrote:
>>> 
>>> On Wed, Jul 24, 2019 at 1:34 PM Andrii Nakryiko <andriin@fb.com> wrote:
>>>> 
>>>> This patch set implements central part of CO-RE (Compile Once - Run
>>>> Everywhere, see [0] and [1] for slides and video): relocating field offsets.
>>>> Most of the details are written down as comments to corresponding parts of the
>>>> code.
>>>> 
>>>> Patch #1 adds loading of .BTF.ext offset relocations section and macros to
>>>> work with its contents.
>>>> Patch #2 implements CO-RE relocations algorithm in libbpf.
>>>> Patches #3-#10 add selftests validating various parts of relocation handling,
>>>> type compatibility, etc.
>>>> 
>>>> For all tests to work, you'll need latest Clang/LLVM supporting
>>>> __builtin_preserve_access_index intrinsic, used for recording offset
>>>> relocations. Kernel on which selftests run should have BTF information built
>>>> in (CONFIG_DEBUG_INFO_BTF=y).
>>>> 
>>>>  [0] http://vger.kernel.org/bpfconf2019.html#session-2
>>>>  [1] http://vger.kernel.org/lpc-bpf2018.html#session-2
>>>> 
>>>> Andrii Nakryiko (10):
>>>>  libbpf: add .BTF.ext offset relocation section loading
>>>>  libbpf: implement BPF CO-RE offset relocation algorithm
>>>>  selftests/bpf: add CO-RE relocs testing setup
>>>>  selftests/bpf: add CO-RE relocs struct flavors tests
>>>>  selftests/bpf: add CO-RE relocs nesting tests
>>>>  selftests/bpf: add CO-RE relocs array tests
>>>>  selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests
>>>>  selftests/bpf: add CO-RE relocs modifiers/typedef tests
>>>>  selftest/bpf: add CO-RE relocs ptr-as-array tests
>>>>  selftests/bpf: add CO-RE relocs ints tests
>>>> 
>>>> tools/lib/bpf/btf.c                           |  64 +-
>>>> tools/lib/bpf/btf.h                           |   4 +
>>>> tools/lib/bpf/libbpf.c                        | 866 +++++++++++++++++-
>>>> tools/lib/bpf/libbpf.h                        |   1 +
>>>> tools/lib/bpf/libbpf_internal.h               |  91 ++
>>>> .../selftests/bpf/prog_tests/core_reloc.c     | 363 ++++++++
>>>> .../bpf/progs/btf__core_reloc_arrays.c        |   3 +
>>>> .../btf__core_reloc_arrays___diff_arr_dim.c   |   3 +
>>>> ...btf__core_reloc_arrays___diff_arr_val_sz.c |   3 +
>>>> .../btf__core_reloc_arrays___err_non_array.c  |   3 +
>>>> ...btf__core_reloc_arrays___err_too_shallow.c |   3 +
>>>> .../btf__core_reloc_arrays___err_too_small.c  |   3 +
>>>> ..._core_reloc_arrays___err_wrong_val_type1.c |   3 +
>>>> ..._core_reloc_arrays___err_wrong_val_type2.c |   3 +
>>>> .../bpf/progs/btf__core_reloc_flavors.c       |   3 +
>>>> .../btf__core_reloc_flavors__err_wrong_name.c |   3 +
>>>> .../bpf/progs/btf__core_reloc_ints.c          |   3 +
>>>> .../bpf/progs/btf__core_reloc_ints___bool.c   |   3 +
>>>> .../btf__core_reloc_ints___err_bitfield.c     |   3 +
>>>> .../btf__core_reloc_ints___err_wrong_sz_16.c  |   3 +
>>>> .../btf__core_reloc_ints___err_wrong_sz_32.c  |   3 +
>>>> .../btf__core_reloc_ints___err_wrong_sz_64.c  |   3 +
>>>> .../btf__core_reloc_ints___err_wrong_sz_8.c   |   3 +
>>>> .../btf__core_reloc_ints___reverse_sign.c     |   3 +
>>>> .../bpf/progs/btf__core_reloc_mods.c          |   3 +
>>>> .../progs/btf__core_reloc_mods___mod_swap.c   |   3 +
>>>> .../progs/btf__core_reloc_mods___typedefs.c   |   3 +
>>>> .../bpf/progs/btf__core_reloc_nesting.c       |   3 +
>>>> .../btf__core_reloc_nesting___anon_embed.c    |   3 +
>>>> ...f__core_reloc_nesting___dup_compat_types.c |   5 +
>>>> ...core_reloc_nesting___err_array_container.c |   3 +
>>>> ...tf__core_reloc_nesting___err_array_field.c |   3 +
>>>> ...e_reloc_nesting___err_dup_incompat_types.c |   4 +
>>>> ...re_reloc_nesting___err_missing_container.c |   3 +
>>>> ...__core_reloc_nesting___err_missing_field.c |   3 +
>>>> ..._reloc_nesting___err_nonstruct_container.c |   3 +
>>>> ...e_reloc_nesting___err_partial_match_dups.c |   4 +
>>>> .../btf__core_reloc_nesting___err_too_deep.c  |   3 +
>>>> .../btf__core_reloc_nesting___extra_nesting.c |   3 +
>>>> ..._core_reloc_nesting___struct_union_mixup.c |   3 +
>>>> .../bpf/progs/btf__core_reloc_primitives.c    |   3 +
>>>> ...f__core_reloc_primitives___diff_enum_def.c |   3 +
>>>> ..._core_reloc_primitives___diff_func_proto.c |   3 +
>>>> ...f__core_reloc_primitives___diff_ptr_type.c |   3 +
>>>> ...tf__core_reloc_primitives___err_non_enum.c |   3 +
>>>> ...btf__core_reloc_primitives___err_non_int.c |   3 +
>>>> ...btf__core_reloc_primitives___err_non_ptr.c |   3 +
>>>> .../bpf/progs/btf__core_reloc_ptr_as_arr.c    |   3 +
>>>> .../btf__core_reloc_ptr_as_arr___diff_sz.c    |   3 +
>>>> .../selftests/bpf/progs/core_reloc_types.h    | 642 +++++++++++++
>>>> .../bpf/progs/test_core_reloc_arrays.c        |  58 ++
>>>> .../bpf/progs/test_core_reloc_flavors.c       |  65 ++
>>>> .../bpf/progs/test_core_reloc_ints.c          |  48 +
>>>> .../bpf/progs/test_core_reloc_kernel.c        |  39 +
>>>> .../bpf/progs/test_core_reloc_mods.c          |  68 ++
>>>> .../bpf/progs/test_core_reloc_nesting.c       |  48 +
>>>> .../bpf/progs/test_core_reloc_primitives.c    |  50 +
>>>> .../bpf/progs/test_core_reloc_ptr_as_arr.c    |  34 +
>>>> 58 files changed, 2527 insertions(+), 47 deletions(-)
>>>> create mode 100644 tools/testing/selftests/bpf/prog_tests/core_reloc.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_dim.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___diff_arr_val_sz.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_non_array.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_shallow.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_too_small.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type1.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_arrays___err_wrong_val_type2.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_flavors__err_wrong_name.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___bool.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_bitfield.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_16.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_32.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_64.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___err_wrong_sz_8.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ints___reverse_sign.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___mod_swap.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_mods___typedefs.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___anon_embed.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___dup_compat_types.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_container.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_array_field.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_dup_incompat_types.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_container.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_missing_field.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_nonstruct_container.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_partial_match_dups.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___err_too_deep.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___extra_nesting.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_nesting___struct_union_mixup.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_enum_def.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_func_proto.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___diff_ptr_type.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_enum.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_int.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_primitives___err_non_ptr.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/btf__core_reloc_ptr_as_arr___diff_sz.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/core_reloc_types.h
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_arrays.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_flavors.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ints.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_kernel.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_mods.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_nesting.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_primitives.c
>>>> create mode 100644 tools/testing/selftests/bpf/progs/test_core_reloc_ptr_as_arr.c
>>> 
>>> We have created a lot of small files. Would it be cleaner if we could
>>> somehow put this data in one file (maybe in different sections?)
>> 
>> After reading more, I guess you have tried this and ended up with the
>> current design: keeping most struct definitions in core_reloc_types.h.
> 
> Yeah, I have all the definitions in one header file, but then I need
> individual combinations as separate BTFs, so I essentially "pick" the
> desired types using function declarations. Creating and maintaining
> those BTFs by hand would be a nightmare.
> 
>> 
>>> 
>>> Alternatively, maybe create a folder for these files:
>>>  tools/testing/selftests/bpf/progs/core/
>> 
>> I guess this would still make it cleaner.
> 
> There is nothing special enough about the core tests to warrant
> splitting them out. It would also require Makefile changes and would
> make the test_progs layout deviate from the analogous test_maps,
> test_verifier, test_btf, etc., so I'm not sure about that. I thought
> about putting those btf__* files under a separate directory, but I'm
> on the fence there as well, as I'd rather have related files stay
> together...
> 

I guess we can defer this decision until later, so I am OK with the
current approach for now.
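For reference, each of those tiny btf__core_reloc_*.c files is just a
"type picker" along the lines Andrii describes: naming a type in a
function declaration makes the compiler emit BTF for it. The sketch
below inlines a stand-in struct body (hypothetical fields; in the real
selftests the definitions come from core_reloc_types.h):

```c
#include <assert.h>

/*
 * Stand-in for a definition that would normally live in
 * core_reloc_types.h; the field layout here is illustrative only.
 */
struct core_reloc_arrays {
	int a[5];
	char b[3];
};

/*
 * Mentioning struct core_reloc_arrays in a function declaration forces
 * the compiler to emit BTF for it (and every type it references) into
 * this object's .BTF section -- no hand-written BTF needed. The
 * function body is irrelevant.
 */
void f(struct core_reloc_arrays x) {}
```

Each type combination/variant gets its own such file, so the tests can
load each variant's BTF independently without maintaining raw BTF blobs
by hand.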

Thanks,
Song



end of thread, other threads:[~2019-07-30  5:28 UTC | newest]

Thread overview: 42+ messages
2019-07-24 19:27 [PATCH bpf-next 00/10] CO-RE offset relocations Andrii Nakryiko
2019-07-24 19:27 ` [PATCH bpf-next 01/10] libbpf: add .BTF.ext offset relocation section loading Andrii Nakryiko
2019-07-24 21:42   ` Andrii Nakryiko
2019-07-25  0:00   ` Song Liu
2019-07-25  0:37     ` Andrii Nakryiko
2019-07-25  5:20       ` Song Liu
2019-07-27  5:11         ` Andrii Nakryiko
2019-07-29 20:00           ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 02/10] libbpf: implement BPF CO-RE offset relocation algorithm Andrii Nakryiko
2019-07-25 19:32   ` Song Liu
2019-07-27  6:11     ` Andrii Nakryiko
2019-07-27 18:59       ` Song Liu
2019-07-27 19:09         ` Andrii Nakryiko
2019-07-28  0:24           ` Song Liu
2019-07-25 23:18   ` Alexei Starovoitov
2019-07-27  6:25     ` Andrii Nakryiko
2019-07-27 17:00       ` Alexei Starovoitov
2019-07-27 18:24         ` Andrii Nakryiko
2019-07-27 21:29           ` Yonghong Song
2019-07-27 21:36             ` Andrii Nakryiko
2019-07-29 19:56   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 03/10] selftests/bpf: add CO-RE relocs testing setup Andrii Nakryiko
2019-07-29 20:22   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 04/10] selftests/bpf: add CO-RE relocs struct flavors tests Andrii Nakryiko
2019-07-29 20:37   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 05/10] selftests/bpf: add CO-RE relocs nesting tests Andrii Nakryiko
2019-07-29 21:06   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 06/10] selftests/bpf: add CO-RE relocs array tests Andrii Nakryiko
2019-07-25 23:26   ` Alexei Starovoitov
2019-07-26 23:29     ` Andrii Nakryiko
2019-07-24 19:27 ` [PATCH bpf-next 07/10] selftests/bpf: add CO-RE relocs enum/ptr/func_proto tests Andrii Nakryiko
2019-07-29 21:09   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 08/10] selftests/bpf: add CO-RE relocs modifiers/typedef tests Andrii Nakryiko
2019-07-29 21:11   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 09/10] selftest/bpf: add CO-RE relocs ptr-as-array tests Andrii Nakryiko
2019-07-29 21:14   ` Song Liu
2019-07-24 19:27 ` [PATCH bpf-next 10/10] selftests/bpf: add CO-RE relocs ints tests Andrii Nakryiko
2019-07-29 21:21   ` Song Liu
2019-07-29 20:20 ` [PATCH bpf-next 00/10] CO-RE offset relocations Song Liu
2019-07-29 20:36   ` Song Liu
2019-07-29 23:09     ` Andrii Nakryiko
2019-07-30  5:27       ` Song Liu
