* [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs
@ 2022-12-17  2:17 Eduard Zingerman
  2022-12-17  2:17 ` [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader Eduard Zingerman
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-17  2:17 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, kernel-team, yhs, Eduard Zingerman

This is a follow-up for threads [1] and [2]:
 - The size of the bpf_verifier_state->idmap_scratch array is reduced but
   remains sufficient for any valid BPF program.
 - The selftests/bpf test_loader is updated to allow specifying that a
   program requires the BPF_F_TEST_STATE_FREQ flag.
 - A test case for the verifier.c:check_ids() update uses test_loader and
   is meant as an example of using it for test_verifier-style tests.

The combination of test_loader and naked functions with inline assembly
(see [3]) makes it easier to write verifier tests than the current
approach based on the BPF_RAW_INSN series of macros.

One can follow the steps below to add new tests of such kind:
 - add a topic file under bpf/progs/ directory;
 - define test programs using naked functions and inline assembly:

	#include <linux/bpf.h>
	#include "bpf_misc.h"
	
	SEC(...)
	__naked int foo_test(void)
	{
		asm volatile(
			"r0 = 0;"
			"exit;"
			::: __clobber_all);
	}
	
 - add skeleton and runner functions in prog_tests/verifier.c:
 
	#include "topic.skel.h"
	TEST_SET(topic)

After these steps the test_progs binary will include the topic tests.
The topic tests can be selectively executed using the following command:

$ ./test_progs -vvv -a topic

These changes were suggested by Andrii Nakryiko.

[1] https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/
[2] https://lore.kernel.org/bpf/CAEf4BzYPsDWdRgx+ND1wiKAB62P=WwoLhr2uWkbVpQfbHqi1oA@mail.gmail.com/
[3] https://gcc.gnu.org/onlinedocs/gcc/Basic-Asm.html#Basic-Asm

Eduard Zingerman (4):
  selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
  selftests/bpf: convenience macro for use with 'asm volatile' blocks
  bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs
  selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids

 include/linux/bpf_verifier.h                  |  4 +-
 kernel/bpf/verifier.c                         |  6 +-
 .../selftests/bpf/prog_tests/verifier.c       | 12 +++
 tools/testing/selftests/bpf/progs/bpf_misc.h  |  7 ++
 .../selftests/bpf/progs/check_ids_limits.c    | 77 +++++++++++++++++++
 tools/testing/selftests/bpf/test_loader.c     | 10 +++
 6 files changed, 112 insertions(+), 4 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier.c
 create mode 100644 tools/testing/selftests/bpf/progs/check_ids_limits.c

-- 
2.38.2


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
  2022-12-17  2:17 [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
@ 2022-12-17  2:17 ` Eduard Zingerman
  2022-12-17 18:44   ` Yonghong Song
  2022-12-20 21:03   ` Andrii Nakryiko
  2022-12-17  2:17 ` [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks Eduard Zingerman
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-17  2:17 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, kernel-team, yhs, Eduard Zingerman

Add a macro __test_state_freq. The macro expands to a btf_decl_tag of a
special form that instructs test_loader to pass the BPF_F_TEST_STATE_FREQ
flag to the BPF verifier when the program is loaded.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 tools/testing/selftests/bpf/progs/bpf_misc.h |  1 +
 tools/testing/selftests/bpf/test_loader.c    | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index 4a01ea9113bf..a42363a3fef1 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -6,6 +6,7 @@
 #define __failure		__attribute__((btf_decl_tag("comment:test_expect_failure")))
 #define __success		__attribute__((btf_decl_tag("comment:test_expect_success")))
 #define __log_level(lvl)	__attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
+#define __test_state_freq	__attribute__((btf_decl_tag("comment:test_state_freq")))
 
 #if defined(__TARGET_ARCH_x86)
 #define SYSCALL_WRAPPER 1
diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index 679efb3aa785..ac8517a77161 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -11,6 +11,7 @@
 
 #define TEST_TAG_EXPECT_FAILURE "comment:test_expect_failure"
 #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
+#define TEST_TAG_TEST_STATE_FREQ "comment:test_state_freq"
 #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
 #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
 
@@ -19,6 +20,7 @@ struct test_spec {
 	bool expect_failure;
 	const char *expect_msg;
 	int log_level;
+	bool test_state_freq;
 };
 
 static int tester_init(struct test_loader *tester)
@@ -81,6 +83,8 @@ static int parse_test_spec(struct test_loader *tester,
 			spec->expect_failure = true;
 		} else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) {
 			spec->expect_failure = false;
+		} else if (strcmp(s, TEST_TAG_TEST_STATE_FREQ) == 0) {
+			spec->test_state_freq = true;
 		} else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
 			spec->expect_msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
 		} else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) {
@@ -102,6 +106,7 @@ static void prepare_case(struct test_loader *tester,
 			 struct bpf_program *prog)
 {
 	int min_log_level = 0;
+	__u32 flags = 0;
 
 	if (env.verbosity > VERBOSE_NONE)
 		min_log_level = 1;
@@ -120,6 +125,11 @@ static void prepare_case(struct test_loader *tester,
 		bpf_program__set_log_level(prog, spec->log_level);
 
 	tester->log_buf[0] = '\0';
+
+	if (spec->test_state_freq)
+		flags |= BPF_F_TEST_STATE_FREQ;
+
+	bpf_program__set_flags(prog, flags);
 }
 
 static void emit_verifier_log(const char *log_buf, bool force)
-- 
2.38.2



* [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks
  2022-12-17  2:17 [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
  2022-12-17  2:17 ` [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader Eduard Zingerman
@ 2022-12-17  2:17 ` Eduard Zingerman
  2022-12-17 18:58   ` Yonghong Song
  2022-12-20 21:05   ` Andrii Nakryiko
  2022-12-17  2:17 ` [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-17  2:17 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, kernel-team, yhs, Eduard Zingerman

Add a set of macros useful for writing naked BPF functions with inline
assembly. For example:

struct map_struct {
	...
} map SEC(".maps");

SEC(...)
__naked int foo_test(void)
{
	asm volatile(
		"r0 = 0;"
		"*(u64*)(r10 - 8) = r0;"
		"r1 = %[map] ll;"
		"r2 = r10;"
		"r2 += -8;"
		"call %[bpf_map_lookup_elem];"
		"r0 = 0;"
		"exit;"
		:
		: __imm(bpf_map_lookup_elem),
		  __imm_addr(map)
		: __clobber_all);
}

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 tools/testing/selftests/bpf/progs/bpf_misc.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index a42363a3fef1..bbf56ad95636 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -8,6 +8,12 @@
 #define __log_level(lvl)	__attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
 #define __test_state_freq	__attribute__((btf_decl_tag("comment:test_state_freq")))
 
+/* Convenience macro for use with 'asm volatile' blocks */
+#define __naked __attribute__((naked))
+#define __clobber_all "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "memory"
+#define __imm(name) [name]"i"(name)
+#define __imm_addr(name) [name]"i"(&name)
+
 #if defined(__TARGET_ARCH_x86)
 #define SYSCALL_WRAPPER 1
 #define SYS_PREFIX "__x64_"
-- 
2.38.2



* [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs
  2022-12-17  2:17 [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
  2022-12-17  2:17 ` [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader Eduard Zingerman
  2022-12-17  2:17 ` [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks Eduard Zingerman
@ 2022-12-17  2:17 ` Eduard Zingerman
  2022-12-17 18:59   ` Yonghong Song
  2022-12-20 21:06   ` Andrii Nakryiko
  2022-12-17  2:17 ` [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids Eduard Zingerman
  2022-12-20 21:21 ` [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Andrii Nakryiko
  4 siblings, 2 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-17  2:17 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, kernel-team, yhs, Eduard Zingerman

BPF limits stack usage to MAX_BPF_STACK bytes across all call frames;
however, this is enforced by check_max_stack_depth(), which runs after
do_check_{subprogs,main}().

This means that when check_ids() is executed the maximal stack depth has
not yet been verified, so in theory the number of stack spills might be as
high as MAX_CALL_FRAMES * MAX_BPF_STACK / BPF_REG_SIZE.

However, any program with total stack usage exceeding MAX_BPF_STACK bytes
is rejected by the verifier, so at most MAX_BPF_STACK / BPF_REG_SIZE stack
spill slots can exist in a valid program.

Hence, save some memory by reducing BPF_ID_MAP_SIZE accordingly.

This is a follow-up to
https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Suggested-by: Andrii Nakryiko <andrii@kernel.org>
---
 include/linux/bpf_verifier.h | 4 ++--
 kernel/bpf/verifier.c        | 6 ++++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 53d175cbaa02..da72e16f1dee 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -274,8 +274,8 @@ struct bpf_id_pair {
 };
 
 #define MAX_CALL_FRAMES 8
-/* Maximum number of register states that can exist at once */
-#define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
+/* Maximum number of register states that can exist at once in a valid program */
+#define BPF_ID_MAP_SIZE (MAX_BPF_REG * MAX_CALL_FRAMES + MAX_BPF_STACK / BPF_REG_SIZE)
 struct bpf_verifier_state {
 	/* call stack tracking */
 	struct bpf_func_state *frame[MAX_CALL_FRAMES];
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a5255a0dcbb6..fb040516a946 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12951,8 +12951,10 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
 		if (idmap[i].old == old_id)
 			return idmap[i].cur == cur_id;
 	}
-	/* We ran out of idmap slots, which should be impossible */
-	WARN_ON_ONCE(1);
+	/* Run out of slots in idmap, conservatively return false, cached
+	 * state will not be reused. The BPF_ID_MAP_SIZE is sufficiently
+	 * large to fit all valid programs.
+	 */
 	return false;
 }
 
-- 
2.38.2



* [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids
  2022-12-17  2:17 [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
                   ` (2 preceding siblings ...)
  2022-12-17  2:17 ` [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
@ 2022-12-17  2:17 ` Eduard Zingerman
  2022-12-17 19:17   ` Yonghong Song
  2022-12-20 21:18   ` Andrii Nakryiko
  2022-12-20 21:21 ` [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Andrii Nakryiko
  4 siblings, 2 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-17  2:17 UTC (permalink / raw)
  To: bpf, ast; +Cc: andrii, daniel, kernel-team, yhs, Eduard Zingerman

A simple program that allocates a bunch of unique register ids, then
branches. The goal is to confirm that the idmap used in
verifier.c:check_ids() has sufficient capacity to verify that the branches
converge to the same state.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
---
 .../selftests/bpf/prog_tests/verifier.c       | 12 +++
 .../selftests/bpf/progs/check_ids_limits.c    | 77 +++++++++++++++++++
 2 files changed, 89 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier.c
 create mode 100644 tools/testing/selftests/bpf/progs/check_ids_limits.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
new file mode 100644
index 000000000000..3933141928a7
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <test_progs.h>
+
+#include "check_ids_limits.skel.h"
+
+#define TEST_SET(skel)			\
+	void test_##skel(void)		\
+	{				\
+		RUN_TESTS(skel);	\
+	}
+
+TEST_SET(check_ids_limits)
diff --git a/tools/testing/selftests/bpf/progs/check_ids_limits.c b/tools/testing/selftests/bpf/progs/check_ids_limits.c
new file mode 100644
index 000000000000..36c4a8bbe8ca
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/check_ids_limits.c
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct map_struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, int);
+} map SEC(".maps");
+
+/* Make sure that verifier.c:check_ids() can handle (almost) maximal
+ * number of ids.
+ */
+SEC("?raw_tp")
+__naked __test_state_freq __log_level(2) __msg("43 to 45: safe")
+int allocate_many_ids(void)
+{
+	/* Use bpf_map_lookup_elem() as a way to get a bunch of values
+	 * with unique ids.
+	 */
+#define __lookup(dst)				\
+		"r1 = %[map] ll;"		\
+		"r2 = r10;"			\
+		"r2 += -8;"			\
+		"call %[bpf_map_lookup_elem];"	\
+		dst " = r0;"
+	asm volatile(
+		"r0 = 0;"
+		"*(u64*)(r10 - 8) = r0;"
+		"r7 = r10;"
+		"r8 = 0;"
+		/* Spill 64 bpf_map_lookup_elem() results to stack,
+		 * each lookup gets its own unique id.
+		 */
+	"write_loop:"
+		"r7 += -8;"
+		"r8 += -8;"
+		__lookup("*(u64*)(r7 + 0)")
+		"if r8 != -512 goto write_loop;"
+		/* No way to source unique ids for r1-r5 as these
+		 * would be clobbered by bpf_map_lookup_elem call,
+		 * so make do with 64+5 unique ids.
+		 */
+		__lookup("r6")
+		__lookup("r7")
+		__lookup("r8")
+		__lookup("r9")
+		__lookup("r0")
+		/* Create a branching point for states comparison. */
+/* 43: */	"if r0 != 0 goto skip_one;"
+		/* Read all registers and stack spills to make these
+		 * persist in the checkpoint state.
+		 */
+		"r0 = r0;"
+	"skip_one:"
+/* 45: */	"r0 = r6;"
+		"r0 = r7;"
+		"r0 = r8;"
+		"r0 = r9;"
+		"r0 = r10;"
+		"r1 = 0;"
+	"read_loop:"
+		"r0 += -8;"
+		"r1 += -8;"
+		"r2 = *(u64*)(r0 + 0);"
+		"if r1 != -512 goto read_loop;"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_map_lookup_elem),
+		  __imm_addr(map)
+		: __clobber_all);
+#undef __lookup
+}
-- 
2.38.2



* Re: [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
  2022-12-17  2:17 ` [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader Eduard Zingerman
@ 2022-12-17 18:44   ` Yonghong Song
  2022-12-20 21:03   ` Andrii Nakryiko
  1 sibling, 0 replies; 18+ messages in thread
From: Yonghong Song @ 2022-12-17 18:44 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast; +Cc: andrii, daniel, kernel-team, yhs



On 12/16/22 6:17 PM, Eduard Zingerman wrote:
> Adds a macro __test_state_freq, the macro expands as a btf_decl_tag of a
> special form that instructs test_loader that the flag BPF_F_TEST_STATE_FREQ
> has to be passed to BPF verifier when program is loaded.
> 
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks
  2022-12-17  2:17 ` [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks Eduard Zingerman
@ 2022-12-17 18:58   ` Yonghong Song
  2022-12-20 21:05   ` Andrii Nakryiko
  1 sibling, 0 replies; 18+ messages in thread
From: Yonghong Song @ 2022-12-17 18:58 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast; +Cc: andrii, daniel, kernel-team, yhs



On 12/16/22 6:17 PM, Eduard Zingerman wrote:
> A set of macros useful for writing naked BPF functions using inline
> assembly. E.g. as follows:
> 
> struct map_struct {
> 	...
> } map SEC(".maps");
> 
> SEC(...)
> __naked int foo_test(void)
> {
> 	asm volatile(
> 		"r0 = 0;"
> 		"*(u64*)(r10 - 8) = r0;"
> 		"r1 = %[map] ll;"
> 		"r2 = r10;"
> 		"r2 += -8;"
> 		"call %[bpf_map_lookup_elem];"
> 		"r0 = 0;"
> 		"exit;"
> 		:
> 		: __imm(bpf_map_lookup_elem),
> 		  __imm_addr(map)
> 		: __clobber_all);
> }
> 
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs
  2022-12-17  2:17 ` [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
@ 2022-12-17 18:59   ` Yonghong Song
  2022-12-20 21:06   ` Andrii Nakryiko
  1 sibling, 0 replies; 18+ messages in thread
From: Yonghong Song @ 2022-12-17 18:59 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast; +Cc: andrii, daniel, kernel-team, yhs



On 12/16/22 6:17 PM, Eduard Zingerman wrote:
> BPF limits stack usage by MAX_BPF_STACK bytes across all call frames,
> however this is enforced by function check_max_stack_depth() which is
> executed after do_check_{subprogs,main}().
> 
> This means that when check_ids() is executed the maximal stack depth is not
> yet verified, thus in theory the number of stack spills might be
> MAX_CALL_FRAMES * MAX_BPF_STACK / BPF_REG_SIZE.
> 
> However, any program with stack usage deeper than
> MAX_BPF_STACK / BPF_REG_SIZE would be rejected by verifier.
> 
> Hence save some memory by reducing the BPF_ID_MAP_SIZE.
> 
> This is a follow up for
> https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/
> 
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids
  2022-12-17  2:17 ` [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids Eduard Zingerman
@ 2022-12-17 19:17   ` Yonghong Song
  2022-12-20 21:18   ` Andrii Nakryiko
  1 sibling, 0 replies; 18+ messages in thread
From: Yonghong Song @ 2022-12-17 19:17 UTC (permalink / raw)
  To: Eduard Zingerman, bpf, ast; +Cc: andrii, daniel, kernel-team, yhs



On 12/16/22 6:17 PM, Eduard Zingerman wrote:
> A simple program that allocates a bunch of unique register ids than
> branches. The goal is to confirm that idmap used in verifier.c:check_ids()
> has sufficient capacity to verify that branches converge to a same state.
> 
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
  2022-12-17  2:17 ` [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader Eduard Zingerman
  2022-12-17 18:44   ` Yonghong Song
@ 2022-12-20 21:03   ` Andrii Nakryiko
  2022-12-22  0:11     ` Eduard Zingerman
  1 sibling, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2022-12-20 21:03 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Adds a macro __test_state_freq, the macro expands as a btf_decl_tag of a
> special form that instructs test_loader that the flag BPF_F_TEST_STATE_FREQ
> has to be passed to BPF verifier when program is loaded.
>

I needed similar capabilities locally, but I went in a slightly different
direction. Instead of defining custom macros and logic, I define just a
__flags(X) macro and then parse flags either by their symbolic name
(or just integer value, which might be useful sometimes for
development purposes). I've also added support for matching multiple
messages sequentially, which locally is in the same commit. Feel free
to ignore that part, but I think it's useful as well. So WDYT about
the below?


commit 936bb5d21d717d54c85e74047e082ca3216a7a40
Author: Andrii Nakryiko <andrii@kernel.org>
Date:   Mon Dec 19 15:57:26 2022 -0800

    selftests/bpf: support custom per-test flags and multiple expected messages

    Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index 679efb3aa785..b0dab5dee38c 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -13,12 +13,15 @@
 #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
 #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
 #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
+#define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags="

 struct test_spec {
     const char *name;
     bool expect_failure;
-    const char *expect_msg;
+    const char **expect_msgs;
+    size_t expect_msg_cnt;
     int log_level;
+    int prog_flags;
 };

 static int tester_init(struct test_loader *tester)
@@ -67,7 +70,8 @@ static int parse_test_spec(struct test_loader *tester,

     for (i = 1; i < btf__type_cnt(btf); i++) {
         const struct btf_type *t;
-        const char *s;
+        const char *s, *val;
+        char *e;

         t = btf__type_by_id(btf, i);
         if (!btf_is_decl_tag(t))
@@ -82,14 +86,47 @@ static int parse_test_spec(struct test_loader *tester,
         } else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) {
             spec->expect_failure = false;
         } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
-            spec->expect_msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
+            void *tmp;
+            const char **msg;
+
+            tmp = realloc(spec->expect_msgs, (1 + spec->expect_msg_cnt) * sizeof(void *));
+            if (!tmp) {
+                ASSERT_FAIL("failed to realloc memory for messages\n");
+                return -ENOMEM;
+            }
+            spec->expect_msgs = tmp;
+            msg = &spec->expect_msgs[spec->expect_msg_cnt++];
+            *msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
         } else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) {
+            val = s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1;
             errno = 0;
-            spec->log_level = strtol(s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1, NULL, 0);
-            if (errno) {
+            spec->log_level = strtol(val, &e, 0);
+            if (errno || e[0] != '\0') {
                 ASSERT_FAIL("failed to parse test log level from '%s'", s);
                 return -EINVAL;
             }
+        } else if (str_has_pfx(s, TEST_TAG_PROG_FLAGS_PFX)) {
+            val = s + sizeof(TEST_TAG_PROG_FLAGS_PFX) - 1;
+            if (strcmp(val, "BPF_F_STRICT_ALIGNMENT") == 0) {
+                spec->prog_flags |= BPF_F_STRICT_ALIGNMENT;
+            } else if (strcmp(val, "BPF_F_ANY_ALIGNMENT") == 0) {
+                spec->prog_flags |= BPF_F_ANY_ALIGNMENT;
+            } else if (strcmp(val, "BPF_F_TEST_RND_HI32") == 0) {
+                spec->prog_flags |= BPF_F_TEST_RND_HI32;
+            } else if (strcmp(val, "BPF_F_TEST_STATE_FREQ") == 0) {
+                spec->prog_flags |= BPF_F_TEST_STATE_FREQ;
+            } else if (strcmp(val, "BPF_F_SLEEPABLE") == 0) {
+                spec->prog_flags |= BPF_F_SLEEPABLE;
+            } else if (strcmp(val, "BPF_F_XDP_HAS_FRAGS") == 0) {
+                spec->prog_flags |= BPF_F_XDP_HAS_FRAGS;
+            } else /* assume numeric value */ {
+                errno = 0;
+                spec->prog_flags |= strtol(val, &e, 0);
+                if (errno || e[0] != '\0') {
+                    ASSERT_FAIL("failed to parse test prog flags from '%s'", s);
+                    return -EINVAL;
+                }
+            }
         }
     }

@@ -101,7 +138,7 @@ static void prepare_case(struct test_loader *tester,
              struct bpf_object *obj,
              struct bpf_program *prog)
 {
-    int min_log_level = 0;
+    int min_log_level = 0, prog_flags;

     if (env.verbosity > VERBOSE_NONE)
         min_log_level = 1;
@@ -119,7 +156,11 @@ static void prepare_case(struct test_loader *tester,
     else
         bpf_program__set_log_level(prog, spec->log_level);

+    prog_flags = bpf_program__flags(prog);
+    bpf_program__set_flags(prog, prog_flags | spec->prog_flags);
+
     tester->log_buf[0] = '\0';
+    tester->next_match_pos = 0;
 }

 static void emit_verifier_log(const char *log_buf, bool force)
@@ -135,17 +176,26 @@ static void validate_case(struct test_loader *tester,
               struct bpf_program *prog,
               int load_err)
 {
-    if (spec->expect_msg) {
+    int i, j;
+
+    for (i = 0; i < spec->expect_msg_cnt; i++) {
         char *match;
+        const char *expect_msg;
+
+        expect_msg = spec->expect_msgs[i];

-        match = strstr(tester->log_buf, spec->expect_msg);
+        match = strstr(tester->log_buf + tester->next_match_pos, expect_msg);
         if (!ASSERT_OK_PTR(match, "expect_msg")) {
             /* if we are in verbose mode, we've already emitted log */
             if (env.verbosity == VERBOSE_NONE)
                 emit_verifier_log(tester->log_buf, true /*force*/);
-            fprintf(stderr, "EXPECTED MSG: '%s'\n", spec->expect_msg);
+            for (j = 0; j < i; j++)
+                fprintf(stderr, "MATCHED  MSG: '%s'\n", spec->expect_msgs[j]);
+            fprintf(stderr, "EXPECTED MSG: '%s'\n", expect_msg);
             return;
         }
+
+        tester->next_match_pos = match - tester->log_buf + strlen(expect_msg);
     }
 }

diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index 3f058dfadbaf..9af80704f20a 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -410,6 +410,7 @@ int write_sysctl(const char *sysctl, const char *value);
 struct test_loader {
     char *log_buf;
     size_t log_buf_sz;
+    size_t next_match_pos;

     struct bpf_object *obj;
 };




> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  tools/testing/selftests/bpf/progs/bpf_misc.h |  1 +
>  tools/testing/selftests/bpf/test_loader.c    | 10 ++++++++++
>  2 files changed, 11 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
> index 4a01ea9113bf..a42363a3fef1 100644
> --- a/tools/testing/selftests/bpf/progs/bpf_misc.h
> +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
> @@ -6,6 +6,7 @@
>  #define __failure              __attribute__((btf_decl_tag("comment:test_expect_failure")))
>  #define __success              __attribute__((btf_decl_tag("comment:test_expect_success")))
>  #define __log_level(lvl)       __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
> +#define __test_state_freq      __attribute__((btf_decl_tag("comment:test_state_freq")))
>
>  #if defined(__TARGET_ARCH_x86)
>  #define SYSCALL_WRAPPER 1
> diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
> index 679efb3aa785..ac8517a77161 100644
> --- a/tools/testing/selftests/bpf/test_loader.c
> +++ b/tools/testing/selftests/bpf/test_loader.c
> @@ -11,6 +11,7 @@
>
>  #define TEST_TAG_EXPECT_FAILURE "comment:test_expect_failure"
>  #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
> +#define TEST_TAG_TEST_STATE_FREQ "comment:test_state_freq"
>  #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
>  #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
>
> @@ -19,6 +20,7 @@ struct test_spec {
>         bool expect_failure;
>         const char *expect_msg;
>         int log_level;
> +       bool test_state_freq;
>  };
>
>  static int tester_init(struct test_loader *tester)
> @@ -81,6 +83,8 @@ static int parse_test_spec(struct test_loader *tester,
>                         spec->expect_failure = true;
>                 } else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) {
>                         spec->expect_failure = false;
> +               } else if (strcmp(s, TEST_TAG_TEST_STATE_FREQ) == 0) {
> +                       spec->test_state_freq = true;
>                 } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
>                         spec->expect_msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
>                 } else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) {
> @@ -102,6 +106,7 @@ static void prepare_case(struct test_loader *tester,
>                          struct bpf_program *prog)
>  {
>         int min_log_level = 0;
> +       __u32 flags = 0;
>
>         if (env.verbosity > VERBOSE_NONE)
>                 min_log_level = 1;
> @@ -120,6 +125,11 @@ static void prepare_case(struct test_loader *tester,
>                 bpf_program__set_log_level(prog, spec->log_level);
>
>         tester->log_buf[0] = '\0';
> +
> +       if (spec->test_state_freq)
> +               flags |= BPF_F_TEST_STATE_FREQ;
> +
> +       bpf_program__set_flags(prog, flags);

see my example above, it's safer to fetch current prog flags to not
override stuff like BPF_F_SLEEPABLE

>  }
>
>  static void emit_verifier_log(const char *log_buf, bool force)
> --
> 2.38.2
>


* Re: [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks
  2022-12-17  2:17 ` [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks Eduard Zingerman
  2022-12-17 18:58   ` Yonghong Song
@ 2022-12-20 21:05   ` Andrii Nakryiko
  2022-12-22  0:12     ` Eduard Zingerman
  1 sibling, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2022-12-20 21:05 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> A set of macros useful for writing naked BPF functions using inline
> assembly. E.g. as follows:
>
> struct map_struct {
>         ...
> } map SEC(".maps");
>
> SEC(...)
> __naked int foo_test(void)
> {
>         asm volatile(
>                 "r0 = 0;"
>                 "*(u64*)(r10 - 8) = r0;"
>                 "r1 = %[map] ll;"
>                 "r2 = r10;"
>                 "r2 += -8;"
>                 "call %[bpf_map_lookup_elem];"
>                 "r0 = 0;"
>                 "exit;"
>                 :
>                 : __imm(bpf_map_lookup_elem),
>                   __imm_addr(map)
>                 : __clobber_all);
> }
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  tools/testing/selftests/bpf/progs/bpf_misc.h | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
> index a42363a3fef1..bbf56ad95636 100644
> --- a/tools/testing/selftests/bpf/progs/bpf_misc.h
> +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
> @@ -8,6 +8,12 @@
>  #define __log_level(lvl)       __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
>  #define __test_state_freq      __attribute__((btf_decl_tag("comment:test_state_freq")))
>
> +/* Convenience macro for use with 'asm volatile' blocks */
> +#define __naked __attribute__((naked))
> +#define __clobber_all "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "memory"

I found that this one doesn't work well when passing some inputs as
registers (e.g., for address of a variable on stack). Compiler
complains that it couldn't find any free registers to use. So I ended
up using

#define __asm_common_clobbers "r0", "r1", "r2", "r3", "r4", "r5", "memory"

and adding "r6", "r7", etc manually, depending on the test.

So maybe let's add it upfront as `__clobber_common` as well?

But changes look good to me:

Acked-by: Andrii Nakryiko <andrii@kernel.org>

> +#define __imm(name) [name]"i"(name)
> +#define __imm_addr(name) [name]"i"(&name)
> +
>  #if defined(__TARGET_ARCH_x86)
>  #define SYSCALL_WRAPPER 1
>  #define SYS_PREFIX "__x64_"
> --
> 2.38.2
>


* Re: [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs
  2022-12-17  2:17 ` [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
  2022-12-17 18:59   ` Yonghong Song
@ 2022-12-20 21:06   ` Andrii Nakryiko
  1 sibling, 0 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2022-12-20 21:06 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> BPF limits stack usage to MAX_BPF_STACK bytes across all call frames;
> however, this is enforced by check_max_stack_depth(), which is executed
> after do_check_{subprogs,main}().
>
> This means that when check_ids() is executed the maximal stack depth is
> not yet verified; thus, in theory, the number of stack spills might be as
> high as MAX_CALL_FRAMES * MAX_BPF_STACK / BPF_REG_SIZE.
>
> However, any program with aggregate stack usage deeper than MAX_BPF_STACK
> bytes would be rejected by the verifier, so at most
> MAX_BPF_STACK / BPF_REG_SIZE spills can exist across all frames.
>
> Hence save some memory by reducing the BPF_ID_MAP_SIZE.
>
> This is a follow up for
> https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> ---

LGTM, thanks.

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  include/linux/bpf_verifier.h | 4 ++--
>  kernel/bpf/verifier.c        | 6 ++++--
>  2 files changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 53d175cbaa02..da72e16f1dee 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -274,8 +274,8 @@ struct bpf_id_pair {
>  };
>
>  #define MAX_CALL_FRAMES 8
> -/* Maximum number of register states that can exist at once */
> -#define BPF_ID_MAP_SIZE ((MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE) * MAX_CALL_FRAMES)
> +/* Maximum number of register states that can exist at once in a valid program */
> +#define BPF_ID_MAP_SIZE (MAX_BPF_REG * MAX_CALL_FRAMES + MAX_BPF_STACK / BPF_REG_SIZE)
>  struct bpf_verifier_state {
>         /* call stack tracking */
>         struct bpf_func_state *frame[MAX_CALL_FRAMES];
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a5255a0dcbb6..fb040516a946 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12951,8 +12951,10 @@ static bool check_ids(u32 old_id, u32 cur_id, struct bpf_id_pair *idmap)
>                 if (idmap[i].old == old_id)
>                         return idmap[i].cur == cur_id;
>         }
> -       /* We ran out of idmap slots, which should be impossible */
> -       WARN_ON_ONCE(1);
> +       /* Run out of slots in idmap, conservatively return false, cached
> +        * state will not be reused. The BPF_ID_MAP_SIZE is sufficiently
> +        * large to fit all valid programs.
> +        */
>         return false;
>  }
>
> --
> 2.38.2
>


* Re: [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids
  2022-12-17  2:17 ` [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids Eduard Zingerman
  2022-12-17 19:17   ` Yonghong Song
@ 2022-12-20 21:18   ` Andrii Nakryiko
  2022-12-22  0:33     ` Eduard Zingerman
  1 sibling, 1 reply; 18+ messages in thread
From: Andrii Nakryiko @ 2022-12-20 21:18 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> A simple program that allocates a bunch of unique register ids and then
> branches. The goal is to confirm that the idmap used in
> verifier.c:check_ids() has sufficient capacity to verify that branches
> converge to the same state.
>
> Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> ---
>  .../selftests/bpf/prog_tests/verifier.c       | 12 +++
>  .../selftests/bpf/progs/check_ids_limits.c    | 77 +++++++++++++++++++
>  2 files changed, 89 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier.c
>  create mode 100644 tools/testing/selftests/bpf/progs/check_ids_limits.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
> new file mode 100644
> index 000000000000..3933141928a7
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
> @@ -0,0 +1,12 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +#include <test_progs.h>
> +
> +#include "check_ids_limits.skel.h"
> +
> +#define TEST_SET(skel)                 \
> +       void test_##skel(void)          \
> +       {                               \
> +               RUN_TESTS(skel);        \
> +       }

Let's not use such trivial macros, please. They make grepping for tests
much harder and save only one line of code. Let's define the functions
explicitly?

I'm also surprised it works at all (it does, right?), because Makefile
is grepping explicitly for `void (serial_)test_xxx` pattern when
generating a list of tests. So this shouldn't have worked, unless I'm
missing something.
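
The grepping point can be checked mechanically; the pattern below is an
approximation of the selftests Makefile's test-discovery rule (the real rule
may differ slightly), and it finds the explicit definition but not the
macro-generated one:

```shell
# An explicitly defined runner, as suggested:
cat > /tmp/verifier_explicit.c <<'EOF'
void test_check_ids_limits(void)
{
	RUN_TESTS(check_ids_limits);
}
EOF

# The macro-based version from the patch:
cat > /tmp/verifier_macro.c <<'EOF'
#define TEST_SET(skel) void test_##skel(void) { RUN_TESTS(skel); }
TEST_SET(check_ids_limits)
EOF

# Approximation of the Makefile's test-discovery grep:
grep -cE 'void (serial_)?test_[a-zA-Z0-9_]+' /tmp/verifier_explicit.c        # prints 1
grep -cE 'void (serial_)?test_[a-zA-Z0-9_]+' /tmp/verifier_macro.c || true   # prints 0
```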

> +
> +TEST_SET(check_ids_limits)
> diff --git a/tools/testing/selftests/bpf/progs/check_ids_limits.c b/tools/testing/selftests/bpf/progs/check_ids_limits.c
> new file mode 100644
> index 000000000000..36c4a8bbe8ca
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/check_ids_limits.c
> @@ -0,0 +1,77 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +#include "bpf_misc.h"
> +
> +struct map_struct {
> +       __uint(type, BPF_MAP_TYPE_ARRAY);
> +       __uint(max_entries, 1);
> +       __type(key, int);
> +       __type(value, int);
> +} map SEC(".maps");
> +
> +/* Make sure that verifier.c:check_ids() can handle (almost) maximal
> + * number of ids.
> + */
> +SEC("?raw_tp")
> +__naked __test_state_freq __log_level(2) __msg("43 to 45: safe")

it's not clear what's special about the 43 -> 45 jump?

can we also validate that id=69 was somewhere in verifier output?
which would require multiple __msg support, of course.

> +int allocate_many_ids(void)
> +{
> +       /* Use bpf_map_lookup_elem() as a way to get a bunch of values
> +        * with unique ids.
> +        */
> +#define __lookup(dst)                          \
> +               "r1 = %[map] ll;"               \
> +               "r2 = r10;"                     \
> +               "r2 += -8;"                     \
> +               "call %[bpf_map_lookup_elem];"  \
> +               dst " = r0;"
> +       asm volatile(
> +               "r0 = 0;"
> +               "*(u64*)(r10 - 8) = r0;"
> +               "r7 = r10;"
> +               "r8 = 0;"
> +               /* Spill 64 bpf_map_lookup_elem() results to stack,
> +                * each lookup gets its own unique id.
> +                */
> +       "write_loop:"
> +               "r7 += -8;"
> +               "r8 += -8;"
> +               __lookup("*(u64*)(r7 + 0)")
> +               "if r8 != -512 goto write_loop;"
> +               /* No way to source unique ids for r1-r5 as these
> +                * would be clobbered by bpf_map_lookup_elem call,
> +                * so make do with 64+5 unique ids.
> +                */
> +               __lookup("r6")
> +               __lookup("r7")
> +               __lookup("r8")
> +               __lookup("r9")
> +               __lookup("r0")
> +               /* Create a branching point for states comparison. */
> +/* 43: */      "if r0 != 0 goto skip_one;"
> +               /* Read all registers and stack spills to make these
> +                * persist in the checkpoint state.
> +                */
> +               "r0 = r0;"
> +       "skip_one:"

were you trying to just create a checkpoint here? given
__test_state_freq, the simplest way would be just

goto +0;

no?

> +/* 45: */      "r0 = r6;"
> +               "r0 = r7;"
> +               "r0 = r8;"
> +               "r0 = r9;"
> +               "r0 = r10;"
> +               "r1 = 0;"
> +       "read_loop:"
> +               "r0 += -8;"
> +               "r1 += -8;"
> +               "r2 = *(u64*)(r0 + 0);"
> +               "if r1 != -512 goto read_loop;"
> +               "r0 = 0;"
> +               "exit;"
> +               :
> +               : __imm(bpf_map_lookup_elem),
> +                 __imm_addr(map)
> +               : __clobber_all);
> +#undef __lookup
> +}
> --
> 2.38.2
>


* Re: [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs
  2022-12-17  2:17 [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
                   ` (3 preceding siblings ...)
  2022-12-17  2:17 ` [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids Eduard Zingerman
@ 2022-12-20 21:21 ` Andrii Nakryiko
  4 siblings, 0 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2022-12-20 21:21 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> This is a follow-up for threads [1] and [2]:
>  - The size of the bpf_verifier_state->idmap_scratch array is reduced but
>    remains sufficient for any valid BPF program.
>  - selftests/bpf test_loader is updated to allow specifying that program
>    requires BPF_F_TEST_STATE_FREQ flag.
>  - Test case for the verifier.c:check_ids() update uses test_loader and is
>    meant as an example of using it for test_verifier-kind tests.
>
> The combination of test_loader and naked functions (see [3]) with inline
> assembly makes it easier to write verifier tests than the current
> approach based on the BPF_RAW_INSN series of macros.
>
> One can follow the steps below to add new tests of such kind:
>  - add a topic file under bpf/progs/ directory;
>  - define test programs using naked functions and inline assembly:

Even better: you don't *have* to use __naked functions, you can mix
and match. E.g., I have local tests that rely on the C compiler to do
stack allocation for local variables (%[dynptr]) and global variables
(%[zero]), and pass an address (fp-X) register to the asm parts to work
with it.

+int zero;
+
+SEC("?raw_tp")
+__failure
+__flags(BPF_F_TEST_STATE_FREQ)
+__msg("invalid size of register fill")
+int stacksafe_should_not_conflate_stack_spill_and_dynptr(void *ctx)
+{
+       struct bpf_dynptr dynptr;
+
+       asm volatile (
+               /* Create a fork in logic, with general setup as follows:
+                *   - fallthrough (first) path is valid;
+                *   - branch (second) path is invalid.
+                * Then depending on what we do in fallthrough vs branch path,
+                * we try to detect bugs in func_states_equal(), regsafe(),
+                * refsafe(), stack_safe(), and similar by tricking verifier
+                * into believing that branch state is a valid subset of
+                * a fallthrough state. Verifier should reject overall
+                * validation, unless there is a bug somewhere in verifier
+                * logic.
+                */
+               "call %[bpf_get_prandom_u32];"
+               "r6 = r0;"
+               "call %[bpf_get_prandom_u32];"
+               "r7 = r0;"
+               "if r6 > r7 goto bad;" /* fork */
+
+               /* spill r6 into stack slot of bpf_dynptr var */
+               "*(u64 *)(%[dynptr] + 0) = r6;"
+               "*(u64 *)(%[dynptr] + 8) = r6;"
+
+               "goto skip_bad;"
+       "bad:"
+               /* create dynptr in the same stack slot */
+               "r1 = %[zero];"
+               "r2 = 4;"
+               "r3 = 0;"
+               "r4 = %[dynptr];"
+               "call %[bpf_dynptr_from_mem];"
+
+               /* here, if stacksafe() doesn't distinguish STACK_DYNPTR from
+                * STACK_MISC, verifier will assume safety at checkpoint below
+                */
+       "skip_bad:"
+               "goto +0;" /* force checkpoint */
+
+               /* leak dynptr state */
+               "r0 = *(u64 *)(%[dynptr] + 8);"
+               "if r0 > 0 goto +1;"
+               "exit;"
+               :
+               : __asm_ptr(dynptr), __asm_ptr(zero),
+                 __asm_imm(bpf_get_prandom_u32),
+                 __asm_imm(bpf_dynptr_from_mem)
+               : __asm_common_clobbers, "r6", "r7"
+       );
+
+       return 0;
+}



>
>         #include <linux/bpf.h>
>         #include "bpf_misc.h"
>
>         SEC(...)
>         __naked int foo_test(void)
>         {
>                 asm volatile(
>                         "r0 = 0;"
>                         "exit;"
>                         ::: __clobber_all);
>         }
>
>  - add skeleton and runner functions in prog_tests/verifier.c:
>
>         #include "topic.skel.h"
>         TEST_SET(topic)
>
> After these steps the test_progs binary would include the topic tests.
> Topic tests could be selectively executed using the following command:
>
> $ ./test_progs -vvv -a topic
>
> These changes are suggested by Andrii Nakryiko.
>
> [1] https://lore.kernel.org/bpf/CAEf4BzYN1JmY9t03pnCHc4actob80wkBz2vk90ihJCBzi8CT9w@mail.gmail.com/
> [2] https://lore.kernel.org/bpf/CAEf4BzYPsDWdRgx+ND1wiKAB62P=WwoLhr2uWkbVpQfbHqi1oA@mail.gmail.com/
> [3] https://gcc.gnu.org/onlinedocs/gcc/Basic-Asm.html#Basic-Asm
>
> Eduard Zingerman (4):
>   selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
>   selftests/bpf: convenience macro for use with 'asm volatile' blocks
>   bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs
>   selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids
>
>  include/linux/bpf_verifier.h                  |  4 +-
>  kernel/bpf/verifier.c                         |  6 +-
>  .../selftests/bpf/prog_tests/verifier.c       | 12 +++
>  tools/testing/selftests/bpf/progs/bpf_misc.h  |  7 ++
>  .../selftests/bpf/progs/check_ids_limits.c    | 77 +++++++++++++++++++
>  tools/testing/selftests/bpf/test_loader.c     | 10 +++
>  6 files changed, 112 insertions(+), 4 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier.c
>  create mode 100644 tools/testing/selftests/bpf/progs/check_ids_limits.c
>
> --
> 2.38.2
>


* Re: [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
  2022-12-20 21:03   ` Andrii Nakryiko
@ 2022-12-22  0:11     ` Eduard Zingerman
  2022-12-22 19:07       ` Andrii Nakryiko
  0 siblings, 1 reply; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-22  0:11 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Tue, 2022-12-20 at 13:03 -0800, Andrii Nakryiko wrote:
> On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > Add a macro __test_state_freq. The macro expands to a btf_decl_tag of a
> > special form that instructs test_loader to pass the BPF_F_TEST_STATE_FREQ
> > flag to the BPF verifier when the program is loaded.
> > 
> 
> I needed similar capabilities locally, but I went a slightly different
> direction. Instead of defining custom macros and logic, I define just
> __flags(X) macro and then parse flags either by their symbolic name
> (or just integer value, which might be useful sometimes for
> development purposes). I've also added support for matching multiple
> messages sequentially which locally is in the same commit. Feel free
> to ignore that part, but I think it's useful as well. So WDYT about
> the below?

Makes total sense. I can replace my patch with your patch in the
patchset, or just wait until your changes land.

> 
> 
> commit 936bb5d21d717d54c85e74047e082ca3216a7a40
> Author: Andrii Nakryiko <andrii@kernel.org>
> Date:   Mon Dec 19 15:57:26 2022 -0800
> 
>     selftests/bpf: support custom per-test flags and multiple expected messages
> 
>     Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> 
> diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
> index 679efb3aa785..b0dab5dee38c 100644
> --- a/tools/testing/selftests/bpf/test_loader.c
> +++ b/tools/testing/selftests/bpf/test_loader.c
> @@ -13,12 +13,15 @@
>  #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
>  #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
>  #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
> +#define TEST_TAG_PROG_FLAGS_PFX "comment:test_prog_flags="
> 
>  struct test_spec {
>      const char *name;
>      bool expect_failure;
> -    const char *expect_msg;
> +    const char **expect_msgs;
> +    size_t expect_msg_cnt;
>      int log_level;
> +    int prog_flags;
>  };
> 
>  static int tester_init(struct test_loader *tester)
> @@ -67,7 +70,8 @@ static int parse_test_spec(struct test_loader *tester,
> 
>      for (i = 1; i < btf__type_cnt(btf); i++) {
>          const struct btf_type *t;
> -        const char *s;
> +        const char *s, *val;
> +        char *e;
> 
>          t = btf__type_by_id(btf, i);
>          if (!btf_is_decl_tag(t))
> @@ -82,14 +86,47 @@ static int parse_test_spec(struct test_loader *tester,
>          } else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) {
>              spec->expect_failure = false;
>          } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
> -            spec->expect_msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
> +            void *tmp;
> +            const char **msg;
> +
> +            tmp = realloc(spec->expect_msgs, (1 + spec->expect_msg_cnt) * sizeof(void *));
> +            if (!tmp) {
> +                ASSERT_FAIL("failed to realloc memory for messages\n");
> +                return -ENOMEM;
> +            }
> +            spec->expect_msgs = tmp;
> +            msg = &spec->expect_msgs[spec->expect_msg_cnt++];
> +            *msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
>          } else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) {
> +            val = s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1;
>              errno = 0;
> -            spec->log_level = strtol(s + sizeof(TEST_TAG_LOG_LEVEL_PFX) - 1, NULL, 0);
> -            if (errno) {
> +            spec->log_level = strtol(val, &e, 0);
> +            if (errno || e[0] != '\0') {
>                  ASSERT_FAIL("failed to parse test log level from '%s'", s);
>                  return -EINVAL;
>              }
> +        } else if (str_has_pfx(s, TEST_TAG_PROG_FLAGS_PFX)) {
> +            val = s + sizeof(TEST_TAG_PROG_FLAGS_PFX) - 1;
> +            if (strcmp(val, "BPF_F_STRICT_ALIGNMENT") == 0) {
> +                spec->prog_flags |= BPF_F_STRICT_ALIGNMENT;
> +            } else if (strcmp(val, "BPF_F_ANY_ALIGNMENT") == 0) {
> +                spec->prog_flags |= BPF_F_ANY_ALIGNMENT;
> +            } else if (strcmp(val, "BPF_F_TEST_RND_HI32") == 0) {
> +                spec->prog_flags |= BPF_F_TEST_RND_HI32;
> +            } else if (strcmp(val, "BPF_F_TEST_STATE_FREQ") == 0) {
> +                spec->prog_flags |= BPF_F_TEST_STATE_FREQ;
> +            } else if (strcmp(val, "BPF_F_SLEEPABLE") == 0) {
> +                spec->prog_flags |= BPF_F_SLEEPABLE;
> +            } else if (strcmp(val, "BPF_F_XDP_HAS_FRAGS") == 0) {
> +                spec->prog_flags |= BPF_F_XDP_HAS_FRAGS;
> +            } else /* assume numeric value */ {
> +                errno = 0;
> +                spec->prog_flags |= strtol(val, &e, 0);
> +                if (errno || e[0] != '\0') {
> +                    ASSERT_FAIL("failed to parse test prog flags from '%s'", s);
> +                    return -EINVAL;
> +                }
> +            }
>          }
>      }
> 
> @@ -101,7 +138,7 @@ static void prepare_case(struct test_loader *tester,
>               struct bpf_object *obj,
>               struct bpf_program *prog)
>  {
> -    int min_log_level = 0;
> +    int min_log_level = 0, prog_flags;
> 
>      if (env.verbosity > VERBOSE_NONE)
>          min_log_level = 1;
> @@ -119,7 +156,11 @@ static void prepare_case(struct test_loader *tester,
>      else
>          bpf_program__set_log_level(prog, spec->log_level);
> 
> +    prog_flags = bpf_program__flags(prog);
> +    bpf_program__set_flags(prog, prog_flags | spec->prog_flags);
> +
>      tester->log_buf[0] = '\0';
> +    tester->next_match_pos = 0;
>  }
> 
>  static void emit_verifier_log(const char *log_buf, bool force)
> @@ -135,17 +176,26 @@ static void validate_case(struct test_loader *tester,
>                struct bpf_program *prog,
>                int load_err)
>  {
> -    if (spec->expect_msg) {
> +    int i, j;
> +
> +    for (i = 0; i < spec->expect_msg_cnt; i++) {
>          char *match;
> +        const char *expect_msg;
> +
> +        expect_msg = spec->expect_msgs[i];
> 
> -        match = strstr(tester->log_buf, spec->expect_msg);
> +        match = strstr(tester->log_buf + tester->next_match_pos, expect_msg);
>          if (!ASSERT_OK_PTR(match, "expect_msg")) {
>              /* if we are in verbose mode, we've already emitted log */
>              if (env.verbosity == VERBOSE_NONE)
>                  emit_verifier_log(tester->log_buf, true /*force*/);
> -            fprintf(stderr, "EXPECTED MSG: '%s'\n", spec->expect_msg);
> +            for (j = 0; j < i; j++)
> +                fprintf(stderr, "MATCHED  MSG: '%s'\n", spec->expect_msgs[j]);
> +            fprintf(stderr, "EXPECTED MSG: '%s'\n", expect_msg);
>              return;
>          }
> +
> +        tester->next_match_pos = match - tester->log_buf + strlen(expect_msg);
>      }
>  }
> 
> diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
> index 3f058dfadbaf..9af80704f20a 100644
> --- a/tools/testing/selftests/bpf/test_progs.h
> +++ b/tools/testing/selftests/bpf/test_progs.h
> @@ -410,6 +410,7 @@ int write_sysctl(const char *sysctl, const char *value);
>  struct test_loader {
>      char *log_buf;
>      size_t log_buf_sz;
> +    size_t next_match_pos;
> 
>      struct bpf_object *obj;
>  };
> 
> 
> 
> 
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> >  tools/testing/selftests/bpf/progs/bpf_misc.h |  1 +
> >  tools/testing/selftests/bpf/test_loader.c    | 10 ++++++++++
> >  2 files changed, 11 insertions(+)
> > 
> > diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
> > index 4a01ea9113bf..a42363a3fef1 100644
> > --- a/tools/testing/selftests/bpf/progs/bpf_misc.h
> > +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
> > @@ -6,6 +6,7 @@
> >  #define __failure              __attribute__((btf_decl_tag("comment:test_expect_failure")))
> >  #define __success              __attribute__((btf_decl_tag("comment:test_expect_success")))
> >  #define __log_level(lvl)       __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
> > +#define __test_state_freq      __attribute__((btf_decl_tag("comment:test_state_freq")))
> > 
> >  #if defined(__TARGET_ARCH_x86)
> >  #define SYSCALL_WRAPPER 1
> > diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
> > index 679efb3aa785..ac8517a77161 100644
> > --- a/tools/testing/selftests/bpf/test_loader.c
> > +++ b/tools/testing/selftests/bpf/test_loader.c
> > @@ -11,6 +11,7 @@
> > 
> >  #define TEST_TAG_EXPECT_FAILURE "comment:test_expect_failure"
> >  #define TEST_TAG_EXPECT_SUCCESS "comment:test_expect_success"
> > +#define TEST_TAG_TEST_STATE_FREQ "comment:test_state_freq"
> >  #define TEST_TAG_EXPECT_MSG_PFX "comment:test_expect_msg="
> >  #define TEST_TAG_LOG_LEVEL_PFX "comment:test_log_level="
> > 
> > @@ -19,6 +20,7 @@ struct test_spec {
> >         bool expect_failure;
> >         const char *expect_msg;
> >         int log_level;
> > +       bool test_state_freq;
> >  };
> > 
> >  static int tester_init(struct test_loader *tester)
> > @@ -81,6 +83,8 @@ static int parse_test_spec(struct test_loader *tester,
> >                         spec->expect_failure = true;
> >                 } else if (strcmp(s, TEST_TAG_EXPECT_SUCCESS) == 0) {
> >                         spec->expect_failure = false;
> > +               } else if (strcmp(s, TEST_TAG_TEST_STATE_FREQ) == 0) {
> > +                       spec->test_state_freq = true;
> >                 } else if (str_has_pfx(s, TEST_TAG_EXPECT_MSG_PFX)) {
> >                         spec->expect_msg = s + sizeof(TEST_TAG_EXPECT_MSG_PFX) - 1;
> >                 } else if (str_has_pfx(s, TEST_TAG_LOG_LEVEL_PFX)) {
> > @@ -102,6 +106,7 @@ static void prepare_case(struct test_loader *tester,
> >                          struct bpf_program *prog)
> >  {
> >         int min_log_level = 0;
> > +       __u32 flags = 0;
> > 
> >         if (env.verbosity > VERBOSE_NONE)
> >                 min_log_level = 1;
> > @@ -120,6 +125,11 @@ static void prepare_case(struct test_loader *tester,
> >                 bpf_program__set_log_level(prog, spec->log_level);
> > 
> >         tester->log_buf[0] = '\0';
> > +
> > +       if (spec->test_state_freq)
> > +               flags |= BPF_F_TEST_STATE_FREQ;
> > +
> > +       bpf_program__set_flags(prog, flags);
> 
> see my example above, it's safer to fetch current prog flags to not
> override stuff like BPF_F_SLEEPABLE
> 
> >  }
> > 
> >  static void emit_verifier_log(const char *log_buf, bool force)
> > --
> > 2.38.2
> > 



* Re: [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks
  2022-12-20 21:05   ` Andrii Nakryiko
@ 2022-12-22  0:12     ` Eduard Zingerman
  0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-22  0:12 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Tue, 2022-12-20 at 13:05 -0800, Andrii Nakryiko wrote:
> On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > Add a set of macros useful for writing naked BPF functions using inline
> > assembly, e.g.:
> > 
> > struct map_struct {
> >         ...
> > } map SEC(".maps");
> > 
> > SEC(...)
> > __naked int foo_test(void)
> > {
> >         asm volatile(
> >                 "r0 = 0;"
> >                 "*(u64*)(r10 - 8) = r0;"
> >                 "r1 = %[map] ll;"
> >                 "r2 = r10;"
> >                 "r2 += -8;"
> >                 "call %[bpf_map_lookup_elem];"
> >                 "r0 = 0;"
> >                 "exit;"
> >                 :
> >                 : __imm(bpf_map_lookup_elem),
> >                   __imm_addr(map)
> >                 : __clobber_all);
> > }
> > 
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> >  tools/testing/selftests/bpf/progs/bpf_misc.h | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
> > index a42363a3fef1..bbf56ad95636 100644
> > --- a/tools/testing/selftests/bpf/progs/bpf_misc.h
> > +++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
> > @@ -8,6 +8,12 @@
> >  #define __log_level(lvl)       __attribute__((btf_decl_tag("comment:test_log_level="#lvl)))
> >  #define __test_state_freq      __attribute__((btf_decl_tag("comment:test_state_freq")))
> > 
> > +/* Convenience macro for use with 'asm volatile' blocks */
> > +#define __naked __attribute__((naked))
> > +#define __clobber_all "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "memory"
> 
> I found that this one doesn't work well when passing some inputs as
> registers (e.g., for address of a variable on stack). Compiler
> complains that it couldn't find any free registers to use. So I ended
> up using
> 
> #define __asm_common_clobbers "r0", "r1", "r2", "r3", "r4", "r5", "memory"
> 
> and adding "r6", "r7", etc manually, depending on the test.
> 
> So maybe let's add it upfront as `__clobber_common` as well?

Will do.

> 
> But changes look good to me:
> 
> Acked-by: Andrii Nakryiko <andrii@kernel.org>
> 
> > +#define __imm(name) [name]"i"(name)
> > +#define __imm_addr(name) [name]"i"(&name)
> > +
> >  #if defined(__TARGET_ARCH_x86)
> >  #define SYSCALL_WRAPPER 1
> >  #define SYS_PREFIX "__x64_"
> > --
> > 2.38.2
> > 



* Re: [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids
  2022-12-20 21:18   ` Andrii Nakryiko
@ 2022-12-22  0:33     ` Eduard Zingerman
  0 siblings, 0 replies; 18+ messages in thread
From: Eduard Zingerman @ 2022-12-22  0:33 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Tue, 2022-12-20 at 13:18 -0800, Andrii Nakryiko wrote:
> On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > A simple program that allocates a bunch of unique register ids and then
> > branches. The goal is to confirm that the idmap used in
> > verifier.c:check_ids() has sufficient capacity to verify that branches
> > converge to the same state.
> > 
> > Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
> > ---
> >  .../selftests/bpf/prog_tests/verifier.c       | 12 +++
> >  .../selftests/bpf/progs/check_ids_limits.c    | 77 +++++++++++++++++++
> >  2 files changed, 89 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/verifier.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/check_ids_limits.c
> > 
> > diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
> > new file mode 100644
> > index 000000000000..3933141928a7
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
> > @@ -0,0 +1,12 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +#include <test_progs.h>
> > +
> > +#include "check_ids_limits.skel.h"
> > +
> > +#define TEST_SET(skel)                 \
> > +       void test_##skel(void)          \
> > +       {                               \
> > +               RUN_TESTS(skel);        \
> > +       }
> 
> Let's not use such trivial macros, please. They make grepping for tests
> much harder and save only one line of code. Let's define the functions
> explicitly?
> 
> I'm also surprised it works at all (it does, right?), because Makefile

Nope, it doesn't work, and it is embarrassing. I tested w/o this macro
and only added it before the final test run, and didn't check the log.
Thank you for catching it. Will remove this macro.

> is grepping explicitly for `void (serial_)test_xxx` pattern when
> generating a list of tests. So this shouldn't have worked, unless I'm
> missing something.
> 
> > +
> > +TEST_SET(check_ids_limits)
> > diff --git a/tools/testing/selftests/bpf/progs/check_ids_limits.c b/tools/testing/selftests/bpf/progs/check_ids_limits.c
> > new file mode 100644
> > index 000000000000..36c4a8bbe8ca
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/check_ids_limits.c
> > @@ -0,0 +1,77 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#include <linux/bpf.h>
> > +#include <bpf/bpf_helpers.h>
> > +#include "bpf_misc.h"
> > +
> > +struct map_struct {
> > +       __uint(type, BPF_MAP_TYPE_ARRAY);
> > +       __uint(max_entries, 1);
> > +       __type(key, int);
> > +       __type(value, int);
> > +} map SEC(".maps");
> > +
> > +/* Make sure that verifier.c:check_ids() can handle (almost) maximal
> > + * number of ids.
> > + */
> > +SEC("?raw_tp")
> > +__naked __test_state_freq __log_level(2) __msg("43 to 45: safe")
> 
> it's not clear what's special about 43 -> 45 jump?
> 
> can we also validate that id=69 was somewhere in verifier output?
> which would require multiple __msg support, of course.
> 
> > +int allocate_many_ids(void)
> > +{
> > +       /* Use bpf_map_lookup_elem() as a way to get a bunch of values
> > +        * with unique ids.
> > +        */
> > +#define __lookup(dst)                          \
> > +               "r1 = %[map] ll;"               \
> > +               "r2 = r10;"                     \
> > +               "r2 += -8;"                     \
> > +               "call %[bpf_map_lookup_elem];"  \
> > +               dst " = r0;"
> > +       asm volatile(
> > +               "r0 = 0;"
> > +               "*(u64*)(r10 - 8) = r0;"
> > +               "r7 = r10;"
> > +               "r8 = 0;"
> > +               /* Spill 64 bpf_map_lookup_elem() results to stack,
> > +                * each lookup gets its own unique id.
> > +                */
> > +       "write_loop:"
> > +               "r7 += -8;"
> > +               "r8 += -8;"
> > +               __lookup("*(u64*)(r7 + 0)")
> > +               "if r8 != -512 goto write_loop;"
> > +               /* No way to source unique ids for r1-r5 as these
> > +                * would be clobbered by bpf_map_lookup_elem call,
> > +                * so make do with 64+5 unique ids.
> > +                */
> > +               __lookup("r6")
> > +               __lookup("r7")
> > +               __lookup("r8")
> > +               __lookup("r9")
> > +               __lookup("r0")
> > +               /* Create a branching point for states comparison. */
> > +/* 43: */      "if r0 != 0 goto skip_one;"
> > +               /* Read all registers and stack spills to make these
> > +                * persist in the checkpoint state.
> > +                */
> > +               "r0 = r0;"
> > +       "skip_one:"
> 
> were you trying to just create a checkpoint here? Given
> __test_state_freq, the simplest way would be just
> 
> goto +0;
> 
> no?
> 
> > +/* 45: */      "r0 = r6;"
> > +               "r0 = r7;"
> > +               "r0 = r8;"
> > +               "r0 = r9;"
> > +               "r0 = r10;"
> > +               "r1 = 0;"
> > +       "read_loop:"
> > +               "r0 += -8;"
> > +               "r1 += -8;"
> > +               "r2 = *(u64*)(r0 + 0);"
> > +               "if r1 != -512 goto read_loop;"
> > +               "r0 = 0;"
> > +               "exit;"
> > +               :
> > +               : __imm(bpf_map_lookup_elem),
> > +                 __imm_addr(map)
> > +               : __clobber_all);
> > +#undef __lookup
> > +}
> > --
> > 2.38.2
> > 



* Re: [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader
  2022-12-22  0:11     ` Eduard Zingerman
@ 2022-12-22 19:07       ` Andrii Nakryiko
  0 siblings, 0 replies; 18+ messages in thread
From: Andrii Nakryiko @ 2022-12-22 19:07 UTC (permalink / raw)
  To: Eduard Zingerman; +Cc: bpf, ast, andrii, daniel, kernel-team, yhs

On Wed, Dec 21, 2022 at 4:11 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2022-12-20 at 13:03 -0800, Andrii Nakryiko wrote:
> > On Fri, Dec 16, 2022 at 6:17 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > >
> > > Add a macro __test_state_freq; it expands to a btf_decl_tag of a
> > > special form that instructs test_loader to pass the BPF_F_TEST_STATE_FREQ
> > > flag to the BPF verifier when the program is loaded.
> > >
> >
> > I needed similar capabilities locally, but I went a slightly different
> > direction. Instead of defining custom macros and logic, I define just
> > __flags(X) macro and then parse flags either by their symbolic name
> > (or just integer value, which might be useful sometimes for
> > development purposes). I've also added support for matching multiple
> > messages sequentially which locally is in the same commit. Feel free
> > to ignore that part, but I think it's useful as well. So WDYT about
> > the below?
>
> Makes total sense. I can replace my patch with your patch in the
> patchset, or just wait until your changes land.

Mine will take a bit more time and will be stuck in discussions
anyway, so the more prerequisites that land before it, the better. Please
go ahead and add it to your patch set.


Thread overview: 18+ messages
-- links below jump to the message on this page --
2022-12-17  2:17 [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
2022-12-17  2:17 ` [PATCH bpf-next 1/4] selftests/bpf: support for BPF_F_TEST_STATE_FREQ in test_loader Eduard Zingerman
2022-12-17 18:44   ` Yonghong Song
2022-12-20 21:03   ` Andrii Nakryiko
2022-12-22  0:11     ` Eduard Zingerman
2022-12-22 19:07       ` Andrii Nakryiko
2022-12-17  2:17 ` [PATCH bpf-next 2/4] selftests/bpf: convenience macro for use with 'asm volatile' blocks Eduard Zingerman
2022-12-17 18:58   ` Yonghong Song
2022-12-20 21:05   ` Andrii Nakryiko
2022-12-22  0:12     ` Eduard Zingerman
2022-12-17  2:17 ` [PATCH bpf-next 3/4] bpf: reduce BPF_ID_MAP_SIZE to fit only valid programs Eduard Zingerman
2022-12-17 18:59   ` Yonghong Song
2022-12-20 21:06   ` Andrii Nakryiko
2022-12-17  2:17 ` [PATCH bpf-next 4/4] selftests/bpf: check if verifier.c:check_ids() handles 64+5 ids Eduard Zingerman
2022-12-17 19:17   ` Yonghong Song
2022-12-20 21:18   ` Andrii Nakryiko
2022-12-22  0:33     ` Eduard Zingerman
2022-12-20 21:21 ` [PATCH bpf-next 0/4] reduce BPF_ID_MAP_SIZE to fit only valid programs Andrii Nakryiko
