netdev.vger.kernel.org archive mirror
* [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions
@ 2020-10-02  1:06 Andrii Nakryiko
  2020-10-02  1:06 ` [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE Andrii Nakryiko
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-02  1:06 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Luka Perkov,
	Tony Ambardar

This patch set implements logic in libbpf to auto-adjust the memory size (1-,
2-, 4-, or 8 bytes) of load/store (LD/ST/STX) instructions that have a BPF
CO-RE field offset relocation associated with them. In practice, this means
transparent handling of 32-bit kernels for both pointers and unsigned
integers. Signed integers are not relocatable with zero-extending
loads/stores, so libbpf poisons them and generates a warning. If/when BPF
gets support for sign-extending loads/stores, it will be possible to relocate
them automatically as well.

All the details are contained in patch #1 comments and commit message.
Patch #2 is a simple change in libbpf to make advanced testing with custom BTF
easier. Patch #3 validates correct uses of auto-resizable loads, as well as
checks that libbpf rejects invalid uses.

I'd really appreciate it if folks who use BPF on 32-bit architectures could
test this out with their BPF programs and report any problems with the
approach.

Cc: Luka Perkov <luka.perkov@sartura.hr>
Cc: Tony Ambardar <tony.ambardar@gmail.com>

Andrii Nakryiko (3):
  libbpf: support safe subset of load/store instruction resizing with
    CO-RE
  libbpf: allow specifying both ELF and raw BTF for CO-RE BTF override
  selftests/bpf: validate libbpf's auto-sizing of LD/ST/STX instructions

 tools/lib/bpf/libbpf.c                        | 146 ++++++++++++-
 .../selftests/bpf/prog_tests/core_autosize.c  | 199 ++++++++++++++++++
 .../selftests/bpf/progs/test_core_autosize.c  | 148 +++++++++++++
 3 files changed, 483 insertions(+), 10 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/core_autosize.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_autosize.c

-- 
2.24.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE
  2020-10-02  1:06 [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Andrii Nakryiko
@ 2020-10-02  1:06 ` Andrii Nakryiko
  2020-10-06 18:08   ` Alexei Starovoitov
  2020-10-02  1:06 ` [PATCH bpf-next 2/3] libbpf: allow specifying both ELF and raw BTF for CO-RE BTF override Andrii Nakryiko
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-02  1:06 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Luka Perkov,
	Tony Ambardar

Add support for patching instructions of the following form:
  - rX = *(T *)(rY + <off>);
  - *(T *)(rX + <off>) = rY;
  - *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.

For such instructions, if the actual kernel field recorded in the CO-RE
relocation has a different size than the one recorded locally (e.g., from
vmlinux.h), then libbpf will adjust T to an appropriate 1-, 2-, 4-, or 8-byte
load/store.

In general, such a transformation is not always correct and could lead to an
invalid final value being loaded or stored. But two classes of cases are
always safe:
  - if both the local and target (kernel) types are unsigned integers, but of
  different sizes, then it's OK to adjust the load/store instruction to the
  necessary memory size. The zero-extending nature of such instructions,
  combined with unsignedness, ensures that the final value is always correct;
  - a pointer size mismatch between the BPF target architecture (which is
  always 64-bit) and a 32-bit host kernel architecture can be similarly
  resolved automatically, because a pointer is essentially an unsigned
  integer. Loading a 32-bit pointer into a 64-bit BPF register with zero
  extension will leave the correct pointer in the register.

Both cases are necessary to support CO-RE on 32-bit kernels: `unsigned long`
in a vmlinux.h generated from a 32-bit kernel is 32-bit, but when a BPF
program is compiled for the BPF target, the compiler treats it as a 64-bit
integer. Similarly, pointers in vmlinux.h are 32-bit for the kernel, but are
treated as 64-bit values by the compiler for the BPF target. Both problems
are now resolved by libbpf for direct memory reads.

But similar transformations are useful in general when kernel fields are
"resized" from, e.g., unsigned int to unsigned long (or vice versa).

Similar transformations for signed integers, however, are not safe to
perform, as they would result in incorrect sign extension of the value. If
such a situation is detected, libbpf will emit a helpful message and poison
the instruction. Not failing immediately means that it's possible to guard
the instruction based on kernel version (or other conditions) and make sure
it's not reachable.

If there is a need to read signed integers that change sizes between
different kernels, it's possible to use the BPF_CORE_READ_BITFIELD() macro,
which works with both bitfields and non-bitfield integers of any signedness
and handles sign extension properly. Also, bpf_core_read() with a proper size
and/or the bpf_core_field_size() relocation makes it possible to deal with
such complicated situations explicitly, if not as conveniently as direct
memory reads.

Selftests added in a separate patch in progs/test_core_autosize.c demonstrate
both direct memory and probed use cases.

BPF_CORE_READ() is not changed and won't handle such situations as
automatically as direct memory reads do, due to signed integer limitations,
which are much harder to detect and control with compiler macro magic. So
utilizing direct memory reads as much as possible is encouraged.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/libbpf.c | 144 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 135 insertions(+), 9 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index a4f55f8a460d..20a47b729f7c 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5017,16 +5017,19 @@ static int bpf_core_spec_match(struct bpf_core_spec *local_spec,
 static int bpf_core_calc_field_relo(const struct bpf_program *prog,
 				    const struct bpf_core_relo *relo,
 				    const struct bpf_core_spec *spec,
-				    __u32 *val, bool *validate)
+				    __u32 *val, __u32 *field_sz, __u32 *type_id,
+				    bool *validate)
 {
 	const struct bpf_core_accessor *acc;
 	const struct btf_type *t;
-	__u32 byte_off, byte_sz, bit_off, bit_sz;
+	__u32 byte_off, byte_sz, bit_off, bit_sz, field_type_id;
 	const struct btf_member *m;
 	const struct btf_type *mt;
 	bool bitfield;
 	__s64 sz;
 
+	*field_sz = 0;
+
 	if (relo->kind == BPF_FIELD_EXISTS) {
 		*val = spec ? 1 : 0;
 		return 0;
@@ -5042,6 +5045,12 @@ static int bpf_core_calc_field_relo(const struct bpf_program *prog,
 	if (!acc->name) {
 		if (relo->kind == BPF_FIELD_BYTE_OFFSET) {
 			*val = spec->bit_offset / 8;
+			/* remember field size for load/store mem size */
+			sz = btf__resolve_size(spec->btf, acc->type_id);
+			if (sz < 0)
+				return -EINVAL;
+			*field_sz = sz;
+			*type_id = acc->type_id;
 		} else if (relo->kind == BPF_FIELD_BYTE_SIZE) {
 			sz = btf__resolve_size(spec->btf, acc->type_id);
 			if (sz < 0)
@@ -5058,7 +5067,7 @@ static int bpf_core_calc_field_relo(const struct bpf_program *prog,
 	}
 
 	m = btf_members(t) + acc->idx;
-	mt = skip_mods_and_typedefs(spec->btf, m->type, NULL);
+	mt = skip_mods_and_typedefs(spec->btf, m->type, &field_type_id);
 	bit_off = spec->bit_offset;
 	bit_sz = btf_member_bitfield_size(t, acc->idx);
 
@@ -5078,7 +5087,7 @@ static int bpf_core_calc_field_relo(const struct bpf_program *prog,
 			byte_off = bit_off / 8 / byte_sz * byte_sz;
 		}
 	} else {
-		sz = btf__resolve_size(spec->btf, m->type);
+		sz = btf__resolve_size(spec->btf, field_type_id);
 		if (sz < 0)
 			return -EINVAL;
 		byte_sz = sz;
@@ -5096,6 +5105,10 @@ static int bpf_core_calc_field_relo(const struct bpf_program *prog,
 	switch (relo->kind) {
 	case BPF_FIELD_BYTE_OFFSET:
 		*val = byte_off;
+		if (!bitfield) {
+			*field_sz = byte_sz;
+			*type_id = field_type_id;
+		}
 		break;
 	case BPF_FIELD_BYTE_SIZE:
 		*val = byte_sz;
@@ -5196,6 +5209,19 @@ struct bpf_core_relo_res
 	bool poison;
 	/* some relocations can't be validated against orig_val */
 	bool validate;
+	/* for field byte offset relocations of the forms:
+	 *     *(T *)(rX + <off>) = rY
+	 *     rX = *(T *)(rY + <off>),
+	 * we remember original and resolved field size to adjust direct
+	 * memory loads of pointers and integers; this is necessary for 32-bit
+	 * host kernel architectures, but also allows to automatically
+	 * relocate fields that were resized from, e.g., u32 to u64, etc.
+	 */
+	bool fail_memsz_adjust;
+	__u32 orig_sz;
+	__u32 orig_type_id;
+	__u32 new_sz;
+	__u32 new_type_id;
 };
 
 /* Calculate original and target relocation values, given local and target
@@ -5217,10 +5243,56 @@ static int bpf_core_calc_relo(const struct bpf_program *prog,
 	res->new_val = 0;
 	res->poison = false;
 	res->validate = true;
+	res->fail_memsz_adjust = false;
+	res->orig_sz = res->new_sz = 0;
+	res->orig_type_id = res->new_type_id = 0;
 
 	if (core_relo_is_field_based(relo->kind)) {
-		err = bpf_core_calc_field_relo(prog, relo, local_spec, &res->orig_val, &res->validate);
-		err = err ?: bpf_core_calc_field_relo(prog, relo, targ_spec, &res->new_val, NULL);
+		err = bpf_core_calc_field_relo(prog, relo, local_spec,
+					       &res->orig_val, &res->orig_sz,
+					       &res->orig_type_id, &res->validate);
+		err = err ?: bpf_core_calc_field_relo(prog, relo, targ_spec,
+						      &res->new_val, &res->new_sz,
+						      &res->new_type_id, NULL);
+		if (err)
+			goto done;
+		/* Validate if it's safe to adjust load/store memory size.
+		 * Adjustments are performed only if original and new memory
+		 * sizes differ.
+		 */
+		res->fail_memsz_adjust = false;
+		if (res->orig_sz != res->new_sz) {
+			const struct btf_type *orig_t, *new_t;
+
+			orig_t = btf__type_by_id(local_spec->btf, res->orig_type_id);
+			new_t = btf__type_by_id(targ_spec->btf, res->new_type_id);
+
+			/* There are two use cases in which it's safe to
+			 * adjust load/store's mem size:
+			 *   - reading a 32-bit kernel pointer, while on BPF
+			 *   side pointers are always 64-bit; in this case
+			 *   it's safe to "downsize" instruction size due to
+			 *   pointer being treated as unsigned integer with
+			 *   zero-extended upper 32-bits;
+			 *   - reading unsigned integers, again because
+			 *   zero-extension preserves the value correctly.
+			 *
+			 * In all other cases it's incorrect to attempt to
+			 * load/store field because read value will be
+			 * incorrect, so we poison relocated instruction.
+			 */
+			if (btf_is_ptr(orig_t) && btf_is_ptr(new_t))
+				goto done;
+			if (btf_is_int(orig_t) && btf_is_int(new_t) &&
+			    btf_int_encoding(orig_t) != BTF_INT_SIGNED &&
+			    btf_int_encoding(new_t) != BTF_INT_SIGNED)
+				goto done;
+
+			/* mark as invalid mem size adjustment, but this will
+			 * only be checked for LDX/STX/ST insns
+			 */
+			res->fail_memsz_adjust = true;
+		}
 	} else if (core_relo_is_type_based(relo->kind)) {
 		err = bpf_core_calc_type_relo(relo, local_spec, &res->orig_val);
 		err = err ?: bpf_core_calc_type_relo(relo, targ_spec, &res->new_val);
@@ -5229,6 +5301,7 @@ static int bpf_core_calc_relo(const struct bpf_program *prog,
 		err = err ?: bpf_core_calc_enumval_relo(relo, targ_spec, &res->new_val);
 	}
 
+done:
 	if (err == -EUCLEAN) {
 		/* EUCLEAN is used to signal instruction poisoning request */
 		res->poison = true;
@@ -5268,6 +5341,28 @@ static bool is_ldimm64(struct bpf_insn *insn)
 	return insn->code == (BPF_LD | BPF_IMM | BPF_DW);
 }
 
+static int insn_mem_sz_to_bytes(struct bpf_insn *insn)
+{
+	switch (BPF_SIZE(insn->code)) {
+	case BPF_DW: return 8;
+	case BPF_W: return 4;
+	case BPF_H: return 2;
+	case BPF_B: return 1;
+	default: return -1;
+	}
+}
+
+static int insn_bytes_to_mem_sz(__u32 sz)
+{
+	switch (sz) {
+	case 8: return BPF_DW;
+	case 4: return BPF_W;
+	case 2: return BPF_H;
+	case 1: return BPF_B;
+	default: return -1;
+	}
+}
+
 /*
  * Patch relocatable BPF instruction.
  *
@@ -5277,10 +5372,13 @@ static bool is_ldimm64(struct bpf_insn *insn)
  * spec, and is checked before patching instruction. If actual insn->imm value
  * is wrong, bail out with error.
  *
- * Currently three kinds of BPF instructions are supported:
+ * Currently supported classes of BPF instruction are:
  * 1. rX = <imm> (assignment with immediate operand);
  * 2. rX += <imm> (arithmetic operations with immediate operand);
- * 3. rX = <imm64> (load with 64-bit immediate value).
+ * 3. rX = <imm64> (load with 64-bit immediate value);
+ * 4. rX = *(T *)(rY + <off>), where T is one of {u8, u16, u32, u64};
+ * 5. *(T *)(rX + <off>) = rY, where T is one of {u8, u16, u32, u64};
+ * 6. *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.
  */
 static int bpf_core_patch_insn(struct bpf_program *prog,
 			       const struct bpf_core_relo *relo,
@@ -5289,7 +5387,7 @@ static int bpf_core_patch_insn(struct bpf_program *prog,
 {
 	__u32 orig_val, new_val;
 	struct bpf_insn *insn;
-	int insn_idx;
+	int insn_idx, mem_sz;
 	__u8 class;
 
 	if (relo->insn_off % BPF_INSN_SZ)
@@ -5304,6 +5402,7 @@ static int bpf_core_patch_insn(struct bpf_program *prog,
 	class = BPF_CLASS(insn->code);
 
 	if (res->poison) {
+poison:
 		/* poison second part of ldimm64 to avoid confusing error from
 		 * verifier about "unknown opcode 00"
 		 */
@@ -5346,10 +5445,37 @@ static int bpf_core_patch_insn(struct bpf_program *prog,
 				prog->name, relo_idx, insn_idx, new_val);
 			return -ERANGE;
 		}
+		if (res->fail_memsz_adjust) {
+			pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) accesses field incorrectly. "
+				"Make sure you are accessing pointers, unsigned integers, or fields of matching type and size.\n",
+				prog->name, relo_idx, insn_idx);
+			goto poison;
+		}
+
 		orig_val = insn->off;
 		insn->off = new_val;
 		pr_debug("prog '%s': relo #%d: patched insn #%d (LDX/ST/STX) off %u -> %u\n",
 			 prog->name, relo_idx, insn_idx, orig_val, new_val);
+
+		if (res->new_sz != res->orig_sz) {
+			mem_sz = insn_mem_sz_to_bytes(insn);
+			if (mem_sz != res->orig_sz) {
+				pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) unexpected mem size: got %u, exp %u\n",
+					prog->name, relo_idx, insn_idx, mem_sz, res->orig_sz);
+				return -EINVAL;
+			}
+
+			mem_sz = insn_bytes_to_mem_sz(res->new_sz);
+			if (mem_sz < 0) {
+				pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) invalid new mem size: %u\n",
+					prog->name, relo_idx, insn_idx, res->new_sz);
+				return -EINVAL;
+			}
+
+			insn->code = BPF_MODE(insn->code) | mem_sz | BPF_CLASS(insn->code);
+			pr_debug("prog '%s': relo #%d: patched insn #%d (LDX/ST/STX) mem_sz %u -> %u\n",
+				 prog->name, relo_idx, insn_idx, res->orig_sz, res->new_sz);
+		}
 		break;
 	case BPF_LD: {
 		__u64 imm;
-- 
2.24.1



* [PATCH bpf-next 2/3] libbpf: allow specifying both ELF and raw BTF for CO-RE BTF override
  2020-10-02  1:06 [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Andrii Nakryiko
  2020-10-02  1:06 ` [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE Andrii Nakryiko
@ 2020-10-02  1:06 ` Andrii Nakryiko
  2020-10-02  1:06 ` [PATCH bpf-next 3/3] selftests/bpf: validate libbpf's auto-sizing of LD/ST/STX instructions Andrii Nakryiko
  2020-10-07 17:56 ` [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Luka Perkov
  3 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-02  1:06 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Luka Perkov,
	Tony Ambardar

Use the generalized BTF parsing logic, making it possible to parse BTF both
from an ELF file and from a raw BTF dump. This makes it easier to write
custom tests with manually generated BTFs.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 tools/lib/bpf/libbpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 20a47b729f7c..cb198283fe3d 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5817,7 +5817,7 @@ bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
 		return 0;
 
 	if (targ_btf_path)
-		targ_btf = btf__parse_elf(targ_btf_path, NULL);
+		targ_btf = btf__parse(targ_btf_path, NULL);
 	else
 		targ_btf = obj->btf_vmlinux;
 	if (IS_ERR_OR_NULL(targ_btf)) {
-- 
2.24.1



* [PATCH bpf-next 3/3] selftests/bpf: validate libbpf's auto-sizing of LD/ST/STX instructions
  2020-10-02  1:06 [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Andrii Nakryiko
  2020-10-02  1:06 ` [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE Andrii Nakryiko
  2020-10-02  1:06 ` [PATCH bpf-next 2/3] libbpf: allow specifying both ELF and raw BTF for CO-RE BTF override Andrii Nakryiko
@ 2020-10-02  1:06 ` Andrii Nakryiko
  2020-10-07 17:56 ` [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Luka Perkov
  3 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-02  1:06 UTC (permalink / raw)
  To: bpf, netdev, ast, daniel
  Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Luka Perkov,
	Tony Ambardar

Add selftests validating libbpf's auto-resizing of load/store instructions
when used with CO-RE relocations. An explicit and manual approach using
bpf_core_read() is also demonstrated and tested. A separate BPF program is
expected to fail to load due to its use of signed integers with sizes that
differ from the kernel's.

To reliably simulate 32-bit BTF (i.e., one with sizeof(long) ==
sizeof(void *) == 4), the selftest generates its own custom BTF and passes it
as a replacement for the real kernel BTF. This allows testing the 32/64-bit
mix on all architectures.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
 .../selftests/bpf/prog_tests/core_autosize.c  | 199 ++++++++++++++++++
 .../selftests/bpf/progs/test_core_autosize.c  | 148 +++++++++++++
 2 files changed, 347 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/core_autosize.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_core_autosize.c

diff --git a/tools/testing/selftests/bpf/prog_tests/core_autosize.c b/tools/testing/selftests/bpf/prog_tests/core_autosize.c
new file mode 100644
index 000000000000..2155c12bd83e
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/core_autosize.c
@@ -0,0 +1,199 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2020 Facebook */
+
+#include <test_progs.h>
+#include "test_core_autosize.skel.h"
+#include <bpf/btf.h>
+
+static int duration = 0;
+
+static struct {
+	unsigned long long ptr_samesized;
+	unsigned long long val1_samesized;
+	unsigned long long val2_samesized;
+	unsigned long long val3_samesized;
+	unsigned long long val4_samesized;
+
+	unsigned long long ptr_downsized;
+	unsigned long long val1_downsized;
+	unsigned long long val2_downsized;
+	unsigned long long val3_downsized;
+	unsigned long long val4_downsized;
+
+	unsigned long long ptr_probed;
+	unsigned long long val1_probed;
+	unsigned long long val2_probed;
+	unsigned long long val3_probed;
+	unsigned long long val4_probed;
+
+	unsigned long long ptr_signed;
+	unsigned long long val1_signed;
+	unsigned long long val2_signed;
+	unsigned long long val3_signed;
+	unsigned long long val4_signed;
+} out;
+
+void test_core_autosize(void)
+{
+	char btf_file[] = "/tmp/core_autosize.btf.XXXXXX";
+	int err, fd = -1, zero = 0;
+	int char_id, short_id, int_id, long_long_id, void_ptr_id, id;
+	struct test_core_autosize* skel = NULL;
+	struct bpf_object_load_attr load_attr = {};
+	struct bpf_program *prog;
+	struct bpf_map *bss_map;
+	struct btf *btf = NULL;
+	size_t written;
+	const void *raw_data;
+	__u32 raw_sz;
+	FILE *f = NULL;
+
+	btf = btf__new_empty();
+	if (!ASSERT_OK_PTR(btf, "empty_btf"))
+		return;
+	/* Emit the following struct with 32-bit pointer size:
+	 *
+	 * struct test_struct {
+	 *     void *ptr;
+	 *     unsigned long val2;
+	 *     unsigned long long val1;
+	 *     unsigned short val3;
+	 *     unsigned char val4;
+	 *     char: 8;
+	 * };
+	 *
+	 * This struct is going to be used as the "kernel BTF" for this test.
+	 */
+
+	/* force 32-bit pointer size */
+	btf__set_pointer_size(btf, 4);
+
+	char_id = btf__add_int(btf, "unsigned char", 1, 0);
+	ASSERT_EQ(char_id, 1, "char_id");
+	short_id = btf__add_int(btf, "unsigned short", 2, 0);
+	ASSERT_EQ(short_id, 2, "short_id");
+	/* "long unsigned int" of 4 byte size tells BTF that sizeof(void *) == 4 */
+	int_id = btf__add_int(btf, "long unsigned int", 4, 0);
+	ASSERT_EQ(int_id, 3, "int_id");
+	long_long_id = btf__add_int(btf, "unsigned long long", 8, 0);
+	ASSERT_EQ(long_long_id, 4, "long_long_id");
+	void_ptr_id = btf__add_ptr(btf, 0);
+	ASSERT_EQ(void_ptr_id, 5, "void_ptr_id");
+
+	id = btf__add_struct(btf, "test_struct", 20 /* bytes */);
+	ASSERT_EQ(id, 6, "struct_id");
+	err = btf__add_field(btf, "ptr", void_ptr_id, 0, 0);
+	err = err ?: btf__add_field(btf, "val2", int_id, 32, 0);
+	err = err ?: btf__add_field(btf, "val1", long_long_id, 64, 0);
+	err = err ?: btf__add_field(btf, "val3", short_id, 128, 0);
+	err = err ?: btf__add_field(btf, "val4", char_id, 144, 0);
+	ASSERT_OK(err, "struct_fields");
+
+	fd = mkstemp(btf_file);
+	if (CHECK(fd < 0, "btf_tmp", "failed to create file: %d\n", fd))
+		goto cleanup;
+	f = fdopen(fd, "w");
+	if (!ASSERT_OK_PTR(f, "btf_fdopen"))
+		goto cleanup;
+
+	raw_data = btf__get_raw_data(btf, &raw_sz);
+	if (!ASSERT_OK_PTR(raw_data, "raw_data"))
+		goto cleanup;
+	written = fwrite(raw_data, 1, raw_sz, f);
+	if (CHECK(written != raw_sz, "btf_write", "written: %zu, errno: %d\n", written, errno))
+		goto cleanup;
+	fflush(f);
+	fclose(f);
+	f = NULL;
+	close(fd);
+	fd = -1;
+
+	/* open and load BPF program with custom BTF as the kernel BTF */
+	skel = test_core_autosize__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	/* disable handle_signed() for now */
+	prog = bpf_object__find_program_by_name(skel->obj, "handle_signed");
+	if (!ASSERT_OK_PTR(prog, "prog_find"))
+		goto cleanup;
+	bpf_program__set_autoload(prog, false);
+
+	load_attr.obj = skel->obj;
+	load_attr.target_btf_path = btf_file;
+	err = bpf_object__load_xattr(&load_attr);
+	if (!ASSERT_OK(err, "prog_load"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, "handle_samesize");
+	if (!ASSERT_OK_PTR(prog, "prog_find"))
+		goto cleanup;
+	skel->links.handle_samesize = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(skel->links.handle_samesize, "prog_attach"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, "handle_downsize");
+	if (!ASSERT_OK_PTR(prog, "prog_find"))
+		goto cleanup;
+	skel->links.handle_downsize = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(skel->links.handle_downsize, "prog_attach"))
+		goto cleanup;
+
+	prog = bpf_object__find_program_by_name(skel->obj, "handle_probed");
+	if (!ASSERT_OK_PTR(prog, "prog_find"))
+		goto cleanup;
+	skel->links.handle_probed = bpf_program__attach(prog);
+	if (!ASSERT_OK_PTR(skel->links.handle_probed, "prog_attach"))
+		goto cleanup;
+
+	usleep(1);
+
+	bss_map = bpf_object__find_map_by_name(skel->obj, "test_cor.bss");
+	if (!ASSERT_OK_PTR(bss_map, "bss_map_find"))
+		goto cleanup;
+
+	err = bpf_map_lookup_elem(bpf_map__fd(bss_map), &zero, (void *)&out);
+	if (!ASSERT_OK(err, "bss_lookup"))
+		goto cleanup;
+
+	ASSERT_EQ(out.ptr_samesized, 0x01020304, "ptr_samesized");
+	ASSERT_EQ(out.val1_samesized, 0x1020304050607080, "val1_samesized");
+	ASSERT_EQ(out.val2_samesized, 0x0a0b0c0d, "val2_samesized");
+	ASSERT_EQ(out.val3_samesized, 0xfeed, "val3_samesized");
+	ASSERT_EQ(out.val4_samesized, 0xb9, "val4_samesized");
+
+	ASSERT_EQ(out.ptr_downsized, 0x01020304, "ptr_downsized");
+	ASSERT_EQ(out.val1_downsized, 0x1020304050607080, "val1_downsized");
+	ASSERT_EQ(out.val2_downsized, 0x0a0b0c0d, "val2_downsized");
+	ASSERT_EQ(out.val3_downsized, 0xfeed, "val3_downsized");
+	ASSERT_EQ(out.val4_downsized, 0xb9, "val4_downsized");
+
+	ASSERT_EQ(out.ptr_probed, 0x01020304, "ptr_probed");
+	ASSERT_EQ(out.val1_probed, 0x1020304050607080, "val1_probed");
+	ASSERT_EQ(out.val2_probed, 0x0a0b0c0d, "val2_probed");
+	ASSERT_EQ(out.val3_probed, 0xfeed, "val3_probed");
+	ASSERT_EQ(out.val4_probed, 0xb9, "val4_probed");
+
+	test_core_autosize__destroy(skel);
+	skel = NULL;
+
+	/* now re-load with handle_signed() enabled, it should fail loading */
+	skel = test_core_autosize__open();
+	if (!ASSERT_OK_PTR(skel, "skel_open"))
+		return;
+
+	load_attr.obj = skel->obj;
+	load_attr.target_btf_path = btf_file;
+	err = bpf_object__load_xattr(&load_attr);
+	if (!ASSERT_ERR(err, "bad_prog_load"))
+		goto cleanup;
+
+cleanup:
+	if (f)
+		fclose(f);
+	if (fd >= 0)
+		close(fd);
+	remove(btf_file);
+	btf__free(btf);
+	test_core_autosize__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_core_autosize.c b/tools/testing/selftests/bpf/progs/test_core_autosize.c
new file mode 100644
index 000000000000..1afeae83da1d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_core_autosize.c
@@ -0,0 +1,148 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2019 Facebook
+
+#include <linux/bpf.h>
+#include <stdint.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+/* fields of exactly the same size */
+struct test_struct___samesize {
+	void *ptr;
+	unsigned long long val1;
+	unsigned int val2;
+	unsigned short val3;
+	unsigned char val4;
+} __attribute((preserve_access_index));
+
+/* unsigned fields that have to be downsized by libbpf */
+struct test_struct___downsize {
+	void *ptr;
+	unsigned long val1;
+	unsigned long val2;
+	unsigned long val3;
+	unsigned long val4;
+	/* total sz: 40 */
+} __attribute__((preserve_access_index));
+
+/* fields with signed integers of wrong size, should be rejected */
+struct test_struct___signed {
+	void *ptr;
+	long val1;
+	long val2;
+	long val3;
+	long val4;
+} __attribute((preserve_access_index));
+
+/* real layout and sizes according to test's (32-bit) BTF */
+struct {
+	unsigned int ptr; /* can't use `void *`, it is always 8 byte in BPF target */
+	unsigned int val2;
+	unsigned long long val1;
+	unsigned short val3;
+	unsigned char val4;
+	unsigned char _pad;
+	/* total sz: 20 */
+} input = {
+	.ptr = 0x01020304,
+	.val1 = 0x1020304050607080,
+	.val2 = 0x0a0b0c0d,
+	.val3 = 0xfeed,
+	.val4 = 0xb9,
+	._pad = 0xff, /* make sure no accidental zeros are present */
+};
+
+unsigned long long ptr_samesized = 0;
+unsigned long long val1_samesized = 0;
+unsigned long long val2_samesized = 0;
+unsigned long long val3_samesized = 0;
+unsigned long long val4_samesized = 0;
+
+unsigned long long ptr_downsized = 0;
+unsigned long long val1_downsized = 0;
+unsigned long long val2_downsized = 0;
+unsigned long long val3_downsized = 0;
+unsigned long long val4_downsized = 0;
+
+unsigned long long ptr_probed = 0;
+unsigned long long val1_probed = 0;
+unsigned long long val2_probed = 0;
+unsigned long long val3_probed = 0;
+unsigned long long val4_probed = 0;
+
+unsigned long long ptr_signed = 0;
+unsigned long long val1_signed = 0;
+unsigned long long val2_signed = 0;
+unsigned long long val3_signed = 0;
+unsigned long long val4_signed = 0;
+
+SEC("raw_tp/sys_exit")
+int handle_samesize(void *ctx)
+{
+	struct test_struct___samesize *in = (void *)&input;
+
+	ptr_samesized = (unsigned long long)in->ptr;
+	val1_samesized = in->val1;
+	val2_samesized = in->val2;
+	val3_samesized = in->val3;
+	val4_samesized = in->val4;
+
+	return 0;
+}
+
+SEC("raw_tp/sys_exit")
+int handle_downsize(void *ctx)
+{
+	struct test_struct___downsize *in = (void *)&input;
+
+	ptr_downsized = (unsigned long long)in->ptr;
+	val1_downsized = in->val1;
+	val2_downsized = in->val2;
+	val3_downsized = in->val3;
+	val4_downsized = in->val4;
+
+	return 0;
+}
+
+SEC("raw_tp/sys_enter")
+int handle_probed(void *ctx)
+{
+	struct test_struct___downsize *in = (void *)&input;
+	__u64 tmp;
+
+	tmp = 0;
+	bpf_core_read(&tmp, bpf_core_field_size(in->ptr), &in->ptr);
+	ptr_probed = tmp;
+
+	tmp = 0;
+	bpf_core_read(&tmp, bpf_core_field_size(in->val1), &in->val1);
+	val1_probed = tmp;
+
+	tmp = 0;
+	bpf_core_read(&tmp, bpf_core_field_size(in->val2), &in->val2);
+	val2_probed = tmp;
+
+	tmp = 0;
+	bpf_core_read(&tmp, bpf_core_field_size(in->val3), &in->val3);
+	val3_probed = tmp;
+
+	tmp = 0;
+	bpf_core_read(&tmp, bpf_core_field_size(in->val4), &in->val4);
+	val4_probed = tmp;
+
+	return 0;
+}
+
+SEC("raw_tp/sys_enter")
+int handle_signed(void *ctx)
+{
+	struct test_struct___signed *in = (void *)&input;
+
+	val2_signed = in->val2;
+	val3_signed = in->val3;
+	val4_signed = in->val4;
+
+	return 0;
+}
-- 
2.24.1



* Re: [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE
  2020-10-02  1:06 ` [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE Andrii Nakryiko
@ 2020-10-06 18:08   ` Alexei Starovoitov
  2020-10-06 18:17     ` Andrii Nakryiko
  0 siblings, 1 reply; 10+ messages in thread
From: Alexei Starovoitov @ 2020-10-06 18:08 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team,
	Luka Perkov, Tony Ambardar

On Thu, Oct 01, 2020 at 06:06:31PM -0700, Andrii Nakryiko wrote:
> Add support for patching instructions of the following form:
>   - rX = *(T *)(rY + <off>);
>   - *(T *)(rX + <off>) = rY;
>   - *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.

llvm doesn't generate ST instruction. It never did.
STX is generated, but can it actually be used with relocations?
Looking at the test in patch 3... it's testing LDX only.
ST/STX are supposed to work by analogy, but it would be good to have a test.
At least of STX.

> +static int insn_mem_sz_to_bytes(struct bpf_insn *insn)
> +{
> +	switch (BPF_SIZE(insn->code)) {
> +	case BPF_DW: return 8;
> +	case BPF_W: return 4;
> +	case BPF_H: return 2;
> +	case BPF_B: return 1;
> +	default: return -1;
> +	}
> +}
> +
> +static int insn_bytes_to_mem_sz(__u32 sz)
> +{
> +	switch (sz) {
> +	case 8: return BPF_DW;
> +	case 4: return BPF_W;
> +	case 2: return BPF_H;
> +	case 1: return BPF_B;
> +	default: return -1;
> +	}
> +}

filter.h has these two helpers. They're named bytes_to_bpf_size() and bpf_size_to_bytes().
I guess we cannot really share kernel and libbpf implementation, but
could you please name them the same way so it's easier to follow
for folks who read both kernel and libbpf code?

> +		if (res->new_sz != res->orig_sz) {
> +			mem_sz = insn_mem_sz_to_bytes(insn);
> +			if (mem_sz != res->orig_sz) {
> +				pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) unexpected mem size: got %u, exp %u\n",
> +					prog->name, relo_idx, insn_idx, mem_sz, res->orig_sz);
> +				return -EINVAL;
> +			}
> +
> +			mem_sz = insn_bytes_to_mem_sz(res->new_sz);
> +			if (mem_sz < 0) {

Please use new variable here appropriately named.
Few lines above mem_sz is in bytes while here it's encoding opcode.


* Re: [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE
  2020-10-06 18:08   ` Alexei Starovoitov
@ 2020-10-06 18:17     ` Andrii Nakryiko
  0 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-06 18:17 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Luka Perkov, Tony Ambardar

On Tue, Oct 6, 2020 at 11:08 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Thu, Oct 01, 2020 at 06:06:31PM -0700, Andrii Nakryiko wrote:
> > Add support for patching instructions of the following form:
> >   - rX = *(T *)(rY + <off>);
> >   - *(T *)(rX + <off>) = rY;
> >   - *(T *)(rX + <off>) = <imm>, where T is one of {u8, u16, u32, u64}.
>
> llvm doesn't generate the ST instruction. It never did.
> STX is generated, but can it actually be used with relocations?

interesting, so when you do `some_struct->some_field = 123;` Clang will still do:

r1 = 123;
*(u32 *)(r2 + 0) = r1;

?

I'll add a test with constant and see what gets generated.

To answer the second part: unless someone is willing to manually
generate .BTF.ext for an assembly instruction, then no, you can't really
use it with a CO-RE relocation.

> Looking at the test in patch 3... it's testing LDX only.
> ST/STX are supposed to work by analogy, but it would be good to have a test.
> At least for STX.

yep, will add.

>
> > +static int insn_mem_sz_to_bytes(struct bpf_insn *insn)
> > +{
> > +     switch (BPF_SIZE(insn->code)) {
> > +     case BPF_DW: return 8;
> > +     case BPF_W: return 4;
> > +     case BPF_H: return 2;
> > +     case BPF_B: return 1;
> > +     default: return -1;
> > +     }
> > +}
> > +
> > +static int insn_bytes_to_mem_sz(__u32 sz)
> > +{
> > +     switch (sz) {
> > +     case 8: return BPF_DW;
> > +     case 4: return BPF_W;
> > +     case 2: return BPF_H;
> > +     case 1: return BPF_B;
> > +     default: return -1;
> > +     }
> > +}
>
> filter.h has these two helpers. They're named bytes_to_bpf_size() and bpf_size_to_bytes().
> I guess we cannot really share kernel and libbpf implementation, but
> could you please name them the same way so it's easier to follow
> for folks who read both kernel and libbpf code?

yes, of course. I actually spent some time searching for them, but
couldn't find them in the kernel code. I remembered seeing them
previously, but it never occurred to me to check filter.h.

>
> > +             if (res->new_sz != res->orig_sz) {
> > +                     mem_sz = insn_mem_sz_to_bytes(insn);
> > +                     if (mem_sz != res->orig_sz) {
> > +                             pr_warn("prog '%s': relo #%d: insn #%d (LDX/ST/STX) unexpected mem size: got %u, exp %u\n",
> > +                                     prog->name, relo_idx, insn_idx, mem_sz, res->orig_sz);
> > +                             return -EINVAL;
> > +                     }
> > +
> > +                     mem_sz = insn_bytes_to_mem_sz(res->new_sz);
> > +                     if (mem_sz < 0) {
>
> Please use a new, appropriately named variable here.
> A few lines above, mem_sz is in bytes, while here it encodes the opcode.

ok

* Re: [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions
  2020-10-02  1:06 [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2020-10-02  1:06 ` [PATCH bpf-next 3/3] selftests/bpf: validate libbpf's auto-sizing of LD/ST/STX instructions Andrii Nakryiko
@ 2020-10-07 17:56 ` Luka Perkov
  2020-10-07 18:01   ` Andrii Nakryiko
  3 siblings, 1 reply; 10+ messages in thread
From: Luka Perkov @ 2020-10-07 17:56 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, kernel-team, Tony Ambardar, Juraj Vijtiuk,
	Luka Oreskovic, Sven Fijan, David Marcinkovic, Jakov Petrina

Hello Andrii,

On Fri, Oct 2, 2020 at 3:09 AM Andrii Nakryiko <andriin@fb.com> wrote:
> Patch set implements logic in libbpf to auto-adjust the memory size (1, 2, 4,
> or 8 bytes) of load/store (LD/ST/STX) instructions which have a BPF CO-RE field
> offset relocation associated with them. In practice this means transparent
> handling of 32-bit kernels, for both pointers and unsigned integers. Signed
> integers are not relocatable with zero-extending loads/stores, so libbpf
> poisons them and generates a warning. If/when BPF gets support for sign-extending
> loads/stores, it would be possible to automatically relocate them as well.
>
> All the details are contained in patch #1's comments and commit message.
> Patch #2 is a simple change in libbpf to make advanced testing with custom BTF
> easier. Patch #3 validates correct uses of auto-resizable loads, as well as
> checks that libbpf fails invalid uses.
>
> I'd really appreciate it if folks who use BPF on 32-bit architectures test this
> out with their BPF programs and report if there are any problems with the
> approach.
>
> Cc: Luka Perkov <luka.perkov@sartura.hr>

First, thank you for the support and for sending this series. It took us a
bit longer to run the tests, as our target hardware still does not have
complete mainline support and we had to rebase our patches. These delays
are not related to BPF.

Regarding this patch set: we have tested various BPF programs with it,
and can confirm that it fixed the previous pointer-offset issues we had
reported at:

https://lore.kernel.org/r/CA+XBgLU=8PFkP8S32e4gpst0=R4MFv8rZA5KaO+cEPYSnTRYYw@mail.gmail.com/.

Most of our programs now work, and we are currently debugging the
programs that still aren't working. We are not yet sure whether the
remaining issues are related to this series, but will let you know
sometime this week after a more detailed investigation.

Thanks,
Luka

* Re: [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions
  2020-10-07 17:56 ` [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Luka Perkov
@ 2020-10-07 18:01   ` Andrii Nakryiko
  2020-10-08 10:34     ` Luka Perkov
  0 siblings, 1 reply; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-07 18:01 UTC (permalink / raw)
  To: Luka Perkov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Tony Ambardar, Juraj Vijtiuk,
	Luka Oreskovic, Sven Fijan, David Marcinkovic, Jakov Petrina

On Wed, Oct 7, 2020 at 10:56 AM Luka Perkov <luka.perkov@sartura.hr> wrote:
>
> Hello Andrii,
>
> On Fri, Oct 2, 2020 at 3:09 AM Andrii Nakryiko <andriin@fb.com> wrote:
> > [...]
>
> First, thank you for the support and for sending this series. It took us a
> bit longer to run the tests, as our target hardware still does not have
> complete mainline support and we had to rebase our patches. These delays
> are not related to BPF.
>
> Regarding this patch set: we have tested various BPF programs with it,
> and can confirm that it fixed the previous pointer-offset issues we had
> reported at:
>
> https://lore.kernel.org/r/CA+XBgLU=8PFkP8S32e4gpst0=R4MFv8rZA5KaO+cEPYSnTRYYw@mail.gmail.com/.
>
> Most of our programs now work, and we are currently debugging the
> programs that still aren't working. We are not yet sure whether the
> remaining issues are related to this series, but will let you know
> sometime this week after a more detailed investigation.
>

Ok, great, thanks for the update.

> Thanks,
> Luka

* Re: [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions
  2020-10-07 18:01   ` Andrii Nakryiko
@ 2020-10-08 10:34     ` Luka Perkov
  2020-10-08 17:59       ` Andrii Nakryiko
  0 siblings, 1 reply; 10+ messages in thread
From: Luka Perkov @ 2020-10-08 10:34 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Tony Ambardar, Juraj Vijtiuk,
	Luka Oreskovic, Sven Fijan, David Marcinkovic, Jakov Petrina

Hello Andrii,

On Wed, Oct 7, 2020 at 8:01 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Wed, Oct 7, 2020 at 10:56 AM Luka Perkov <luka.perkov@sartura.hr> wrote:
> >
> > Hello Andrii,
> >
> > On Fri, Oct 2, 2020 at 3:09 AM Andrii Nakryiko <andriin@fb.com> wrote:
> > > Patch set implements logic in libbpf to auto-adjust memory size (1-, 2-, 4-,
> > > 8-bytes) of load/store (LD/ST/STX) instructions which have BPF CO-RE field
> > > offset relocation associated with it. In practice this means transparent
> > > handling of 32-bit kernels, both pointer and unsigned integers. Signed
> > > integers are not relocatable with zero-extending loads/stores, so libbpf
> > > poisons them and generates a warning. If/when BPF gets support for sign-extending
> > > loads/stores, it would be possible to automatically relocate them as well.
> > >
> > > All the details are contained in patch #1 comments and commit message.
> > > Patch #2 is a simple change in libbpf to make advanced testing with custom BTF
> > > easier. Patch #3 validates correct uses of auto-resizable loads, as well as
> > > check that libbpf fails invalid uses.
> > >
> > > I'd really appreciate folks that use BPF on 32-bit architectures to test this
> > > out with their BPF programs and report if there are any problems with the
> > > approach.
> > >
> > > Cc: Luka Perkov <luka.perkov@sartura.hr>
> >
> > First, thank you for the support and sending this series. It took us a
> > bit longer to run the tests as our target hardware still did not fully
> > get complete mainline support and we had to rebase our patches. These
> > are not related to BPF.
> >
> > Related to this patch, we have tested various BPF programs with this
> > patch, and can confirm that it fixed previous issues with pointer
> > offsets that we had and reported at:
> >
> > https://lore.kernel.org/r/CA+XBgLU=8PFkP8S32e4gpst0=R4MFv8rZA5KaO+cEPYSnTRYYw@mail.gmail.com/.
> >
> > Most of our programs now work and we are currently debugging other
> > programs that still aren't working. We are still not sure if the
> > remaining issues are related to this or not, but will let you know
> > sometime this week after further and more detailed investigation.
> >
>
> Ok, great, thanks for the update.

Just to update you: we have identified that the problem was a
known issue with the JIT, as we had enabled BPF_JIT_ALWAYS_ON.

That said, it would be great to see this series included in 5.10 :)

Thanks,
Luka

* Re: [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions
  2020-10-08 10:34     ` Luka Perkov
@ 2020-10-08 17:59       ` Andrii Nakryiko
  0 siblings, 0 replies; 10+ messages in thread
From: Andrii Nakryiko @ 2020-10-08 17:59 UTC (permalink / raw)
  To: Luka Perkov
  Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
	Daniel Borkmann, Kernel Team, Tony Ambardar, Juraj Vijtiuk,
	Luka Oreskovic, Sven Fijan, David Marcinkovic, Jakov Petrina

On Thu, Oct 8, 2020 at 3:34 AM Luka Perkov <luka.perkov@sartura.hr> wrote:
>
> Hello Andrii,
>
> On Wed, Oct 7, 2020 at 8:01 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> > [...]
>
> Just to update you: we have identified that the problem was a
> known issue with the JIT, as we had enabled BPF_JIT_ALWAYS_ON.
>
> That said, it would be great to see this series included in 5.10 :)

This is purely a libbpf feature, completely agnostic to kernel
versions. So you'll get this with upcoming libbpf 0.2.0 release.

>
> Thanks,
> Luka


Thread overview: 10+ messages
2020-10-02  1:06 [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Andrii Nakryiko
2020-10-02  1:06 ` [PATCH bpf-next 1/3] libbpf: support safe subset of load/store instruction resizing with CO-RE Andrii Nakryiko
2020-10-06 18:08   ` Alexei Starovoitov
2020-10-06 18:17     ` Andrii Nakryiko
2020-10-02  1:06 ` [PATCH bpf-next 2/3] libbpf: allow specifying both ELF and raw BTF for CO-RE BTF override Andrii Nakryiko
2020-10-02  1:06 ` [PATCH bpf-next 3/3] selftests/bpf: validate libbpf's auto-sizing of LD/ST/STX instructions Andrii Nakryiko
2020-10-07 17:56 ` [PATCH bpf-next 0/3] libbpf: auto-resize relocatable LOAD/STORE instructions Luka Perkov
2020-10-07 18:01   ` Andrii Nakryiko
2020-10-08 10:34     ` Luka Perkov
2020-10-08 17:59       ` Andrii Nakryiko
